--- abstract: 'The main purpose of this paper is to prove that the extensions of a nilpotent block algebra and its Glauberman correspondent block algebra are Morita equivalent under an additional group-theoretic condition (see Theorem 1.6); in particular, Harris and Linckelmann’s theorem and Koshitani and Michler’s theorem are covered (see Theorems £7.5 and £7.6). The ingredients to carry out our purpose are the two main results in Külshammer and Puig’s work [*Extensions of nilpotent blocks*]{}; we actually revisit them, giving completely new proofs of both and slightly improving the second one (see Theorems £3.5 and £3.14).' --- [**Glauberman correspondents and extensions of nilpotent block algebras**]{} **Lluis Puig and Yuanyang Zhou** [**1. Introduction**]{} [**1.1.**]{}Let $\O$ be a complete discrete valuation ring with an algebraically closed residue field $k$ of characteristic $p$ and a quotient field $\K$ of characteristic 0. Moreover, $\K$ is assumed to be big enough for all the finite groups that we consider below. Let $H$ be a finite group. We denote by ${\rm Irr}_\K(H)$ the set of all irreducible characters of $H$ over $\K$. Let $A$ be another finite group and assume that there is a group homomorphism $A\rightarrow {\rm Aut}(H)$. Such a group $H$ with an $A$-action is called an $A$-group. We denote by $H^A$ the subgroup of all $A$-fixed elements in $H$. Clearly $A$ acts on ${\rm Irr}_\K(H)$. We denote by ${\rm Irr}_{\cal K}(H)^A$ the set of all $A$-fixed elements in ${\rm Irr}_{\cal K}(H)$. Assume that $A$ is solvable and that the order of $A$ is coprime to the order of $H$.
By [@I Theorem 13.1], there is a bijection $$\pi(H,\,A): {\rm Irr}_{\cal K}(H)^A \rightarrow {\rm Irr}_{\cal K}(H^A)$$ such that [**1.1.1.**]{} For any normal subgroup $B$ of $A$, the bijection $\pi(H,\, B)$ maps ${\rm Irr}_{\K}(H)^A$ to ${\rm Irr}_{\K}(H^B)^A$, and on ${\rm Irr}_{\K}(H)^A$ we have $$\pi(H,\,A) = \pi(H^B,\,A/B) \circ \pi(H,\,B)\,.$$ [**1.1.2.**]{} If $A$ is a $q$-group for some prime $q$, then for any $\chi\in {\rm Irr}_{\K}(H)^A$, the corresponding irreducible character $\pi(H,\,A)(\chi)$ of $H^A$ is the unique irreducible constituent of ${\rm Res}^H_{H^A}(\chi)$ occurring with a multiplicity coprime to $q$. The character $\pi(H,\,A)(\chi)$ of $H^A$ is called the [*Glauberman correspondent*]{} of the character $\chi$ of $H$. [**1.2.**]{}For any central idempotent $c$ of $\O H$, we denote by ${\rm Irr}_{\K}(H, c)$ the set of all irreducible characters of $H$ provided by some $\K Hc$-module. Let $b$ be a block of $H$ — namely $b$ is a primitive central idempotent of $\O H\,;$ then $\O Hb$ is called the [*block algebra*]{} corresponding to $b$. Assume that $A$ stabilizes the block $b$ and centralizes a defect group of $b$. Then, by [@W Proposition 1 and Theorem 1], $A$ stabilizes all characters in ${\rm Irr}_\K(H, b)$ and there is a unique block ${\it w}(b)$ of $\O (H^A)$ such that $${\rm Irr}_{\K}(H^A, {\it w}(b))=\pi(H,\,A)({\rm Irr}_\K(H, b))\,;$$ moreover, there is a perfect isometry (see [@B1]) $$R_H^b: {\cal R}_\K (H, b)\rightarrow {\cal R}_\K (H^A, {\it w}(b))$$ such that $R_H^b(\chi)=\pm\pi(H,\,A)(\chi)$ for any $\chi\in {\rm Irr}_{\K}(H, b)$, where we denote by ${\cal R}_\K (H, b)$ and ${\cal R}_\K (H^A, {\it w}(b))$ the additive groups generated by ${\rm Irr}_{\K}(H, b)$ and ${\rm Irr}_{\K}(H^A, {\it w}(b))$. Such a block ${\it w}(b)$ is called the [*Glauberman correspondent*]{} of $b$ (see [@W]).
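A minimal example — ours, not taken from [@I] or [@W] — may help to fix ideas about the bijection $\pi(H,\,A)$ and criterion 1.1.2.

```latex
% Take A = <a> of order 2 acting on H = <h> of order 3 by inversion, so that A
% is solvable, |A| is coprime to |H| and H^A = 1. Writing Irr_K(H) = {1_H, chi,
% bar-chi} with chi(h) a primitive cube root of unity, the involution a sends
% chi to bar-chi and fixes 1_H; hence
$${\rm Irr}_\K(H)^A=\{1_H\}\,,\qquad
  {\rm Irr}_\K(H^A)={\rm Irr}_\K(1)=\{1\}\,.$$
% Criterion 1.1.2 with q = 2: Res^H_{H^A}(1_H) = 1 occurs with multiplicity 1,
% which is coprime to 2, so necessarily
$$\pi(H,\,A)(1_H)=1\,.$$
```

In particular both sides of the bijection have cardinality one, as the existence of $\pi(H,\,A)$ forces.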
Since a perfect isometry between blocks is often nothing but the character-theoretic ‘shadow’ of a derived equivalence, it seems reasonable to ask whether there is a derived equivalence between a block and its Glauberman correspondent. In the last few years, some Morita equivalences between $b$ and ${\it w}(b)$ were found in the cases where $H$ is $p$-solvable or the defect groups of $b$ are normal in $H$, which supply Glauberman correspondences from ${\rm Irr}_{\K}(H, b)$ to ${\rm Irr}_{\K}(H^A, {\it w}(b))$ (see [@H], [@KG] and [@H1]); moreover, all these Morita equivalences between $b$ and ${\it w}(b)$ are [*basic*]{} in the sense of [@P1]. [**1.3.**]{}By induction, the groups $H$ and $H^A$ and the blocks $b$ and ${\it w}(b)$ in the main results of [@H], [@KG] and [@H1] can be reduced to the situation where, for some $A$-stable normal subgroup $K$ of $H\,,$ we have $H=H^A\. K\,,$ the block $b$ is an $H$-stable block of $K$ with trivial or central defect group, and the block ${\it w}(b)$ is an $H^A$-stable block of $K^A$ with trivial or central defect group. Recall that the block $b$ of $H$ is called [*nilpotent*]{} (see [@P4]) if the quotient group $N_H(R_\varepsilon)/C_H(R)$ is a $p$-group for any local pointed group $R_\varepsilon$ on $\O H b$. Blocks with trivial or central defect group are nilpotent, and therefore in these situations $\O Hb$ and $\O (H^A){\it w}(b)$ are extensions of the nilpotent block algebras $\O K b$ and $\O K^A{\it w}(b)$ respectively. Külshammer and Puig already precisely described the algebraic structure of extensions of nilpotent block algebras (see [@KP] or Section 3 below) and these results can be applied to blocks of $p$-solvable groups (see [@P2]) and to blocks with normal defect groups (see [@R; @K]). Thus, it is reasonable to seek a common generalization of the main results of [@H; @KG; @H1] in the setting of extensions of nilpotent block algebras.
[**1.4.**]{}Let $G$ be another finite $A$-group having $H$ as an $A$-stable normal subgroup and consider the $A$-action on $H$ induced by the $A$-group $G$. We assume that $A$ stabilizes $b$ and denote by $N$ the stabilizer of $b$ in $G$. Clearly $N$ is $A$-stable. Set $c={\rm Tr}_N^G (b)$ and $\alpha=\{c\}$; then the idempotent $c$ is $A$-stable and $\alpha$ is an $A$-stable point of $G$ on the group algebra $\O H$ (the action of $G$ on $\O H$ is induced by conjugation). In particular, $G_\alpha$ is a pointed group on $\O H$. Let $P$ be a defect group of $G_\alpha\,;$ then, by [@KP Proposition 5.3], $Q=P\cap H$ is a defect group of the block $b$ of $H$. [**Theorem 1.5.**]{} The following theorem shows that there is a “[*basic*]{}” Morita equivalence between $\O G c$ and $\O G^A {\it w}(c)$; that is to say, this Morita equivalence induces basic Morita equivalences [@P1] between corresponding block algebras. [**Theorem 1.6.**]{} [**Remark 1.7.**]{}Since $G=H\. G^A$, we have $N=H\. N^A$ and then the inclusion $N^A\subset N$ induces a group isomorphism $N/H\cong N^A/H^A$. We use pointed groups introduced by Lluis Puig. For more details on pointed groups, the reader can see [@P5] or Paragraph 2.5 below. In Section 2, we introduce some notation and terminology. Section 3 revisits Külshammer and Puig’s main results on extensions of nilpotent blocks; the proof of the existence and uniqueness of the finite group $L$ (see [@KP Theorem 1.8] and Theorem 3.5 below) is dramatically simplified; actually, Corollary 3.14 below slightly improves [@KP Theorem 1.12]; explicitly, $S_\gamma$ in Corollary 3.14 is unique up to determinant one. With the Glauberman correspondents of blocks due to Watanabe, in Section 4 we define Glauberman correspondents of extensions of blocks and compare the local structures of extensions of blocks and their Glauberman correspondents.
By Puig’s structure theorem of nilpotent blocks, there is a bijection between the sets of irreducible characters of the nilpotent block $b$ of $H$ and of its defect group $Q$; in Section 5, for a suitable local point $\delta$ of $Q\,,$ we prove that this bijection preserves the $N_G(Q_\delta)$-actions on these sets. As a consequence, we obtain an $N_G(Q_\delta)$-stable irreducible character $\chi$ of $H$ such that $\chi$ lifts the unique irreducible Brauer character of the nilpotent block $b$ of $H$ and that the Glauberman correspondent character $\pi(H, A)(\chi)$ lifts the unique irreducible Brauer character of the Glauberman correspondent block ${\it w}(b)$ of $H^A$ (see Lemma 5.6). Obviously, $N$ stabilizes the unique simple module in the nilpotent block $b$ of $H$; with this $N$-stable simple $\O H b$-module, we construct an $A$-stable $k^*$-group $\skew3\hat {\bar N}^{^k}$ (see £2.3 and £3.13 below); since $N^A$ stabilizes the unique simple module of the nilpotent block ${\it w}(b)$ of $H^A\,,$ a $k^*$-group $\,\widehat{\overline{\! N^A}}^k$ is similarly constructed. In Section 6, we prove that $\,\widehat{\overline{\! N^A}}^k$ and $(\skew3\hat{\bar N}^{^k})^A$ are isomorphic as $k^*$-groups (see Theorem 6.4). In Section 7, we use the improved version of Külshammer and Puig’s main result to prove our main Theorem 1.6. [**2. Notation and terminology**]{} [**2.1.**]{}Throughout this paper, all $\O$-modules are finitely generated and $\O$-free — except in 2.4 below; all $\O$-algebras have identity elements, but their subalgebras need not have the same identity element. Let $\cal A$ be an $\O$-algebra; we denote by $\A^\circ\,,$ ${\cal A}^*$, $Z({\cal A})$, $J({\cal A})$ and $1_{\cal A}$ the opposite $\O\-$algebra of $\A\,,$ the multiplicative group of all invertible elements of ${\cal A}$, the center of ${\cal A}$, the radical of ${\cal A}$ and the identity element of ${\cal A}$ respectively.
Sometimes we write $1$ instead of $1_{\cal A}\,.$ For any abelian group $V$, ${\rm id}_V$ denotes the identity automorphism on $V$. Let ${\cal B}$ be an $\O$-algebra; a homomorphism ${\cal F}: {\cal A}\rightarrow {\cal B}$ of $\O$-algebras is said to be an [*embedding*]{} if ${\cal F}$ is injective and we have $${\cal F}({\cal A})={\cal F}(1_{\cal A}){\cal B}{\cal F}(1_{\cal A})\quad .$$ Let $S$ be a set and $G$ be a group acting on $S$. For any $g\in G$ and $s\in S$, we write the action of $g$ on $s$ as $s\.g$. [**2.2.**]{}Let $X$ be a finite group. An $X$-interior $\O$-algebra ${\cal A}$ is an $\O$-algebra ${\cal A}$ endowed with a group homomorphism $\rho:X\rightarrow {\cal A}^*$; for any $x, y\in X $ and $a\in {\cal A}$, we write $\rho(x)a\rho(y)$ as $x\. a\. y$ or $xay$ if there is no confusion. Let $\varrho: Y\rightarrow X$ be a group homomorphism; the $\O$-algebra ${\cal A}$ with the group homomorphism $\rho\circ\varrho: Y\rightarrow {\cal A}^*$ is a $Y$-interior $\O$-algebra and we denote it by ${\rm Res}_{\varrho}({\cal A})$. Let ${\cal A}'$ be another $X$-interior $\O$-algebra; an $\O$-algebra homomorphism ${\cal F}:{\cal A}\rightarrow {\cal A}'$ is said to be a homomorphism of $X$-interior $\O$-algebras if for any $x, y\in X $ and any $a\in {\cal A}$, we have ${\cal F}(xay)=x{\cal F}(a)y$. The tensor product ${\cal A}\bigotimes_\O {\cal A}'$ of ${\cal A}$ and ${\cal A}'$ is an $X$-interior $\O$-algebra with the group homomorphism $$X\rightarrow ({\cal A}\otimes_\O {\cal A}')^*,\quad x\mapsto x1_{\cal A}\otimes x1_{{\cal A}'}\quad .$$ Let $Z$ be a subgroup of $X$ and let ${\cal B}$ be a $Z$-interior $\O$-algebra. Obviously, the left and right multiplications by $\O Z$ on ${\cal B}$ define an $(\O Z, \O Z)$-bimodule structure on ${\cal B}$.
Set $${\rm Ind}_Z^X ({\cal B})=\O X\otimes_{\O Z} {\cal B}\otimes_{\O Z} \O X$$ and then the $(\O X, \O X)$-bimodule ${\rm Ind}_Z^X ({\cal B})$ becomes an $X$-interior $\O$-algebra with the product $$(x\otimes b\otimes y)(x'\otimes b'\otimes y') =\cases{x\otimes b\. y x'\.b'\otimes y'&if $yx'\in Z$\cr {}&{}\cr 0 &otherwise\cr}$$ for any $x, y, x', y'\in X$ and any $b,b'\in {\cal B}\,,$ and with the homomorphism $\O X\rightarrow {\rm Ind}_Z^X ({\cal B})$ mapping $x\in X$ onto $\sum_y xy\otimes 1\otimes y^{-1}$, where $y$ runs over a set of representatives for the left cosets of $Z$ in $X$. [**2.3.**]{}A $k^*$-group with $k^*$-quotient $X$ is a group $\hat X$ endowed with an injective group homomorphism $\theta: k^*\rightarrow Z(\hat X)$ together with an isomorphism $\hat X/\theta(k^*)\cong X$; usually we omit to mention $\theta$ and the quotient $X=\hat X/\theta(k^*)$ is called the $k^*$-quotient of $\hat X$, writing $\lambda\. \hat x$ instead of $\theta(\lambda)\hat x$ for any $\lambda\in k^*$ and any $\hat x\in \hat X$. We denote by $\hat Y$ the inverse image of $Y$ in $\hat X$ for any subset $Y$ of $X$ and, if no precision is needed, we often denote by $\hat x$ some lifting of an element $x\in X$. We denote by $\hat X^\circ$ the $k^*$-group with the same underlying group $\hat X$ endowed with the group homomorphism $\theta^{-1}: k^*\rightarrow Z(\hat X),\, \lambda\mapsto \theta(\lambda)^{-1}$. Let $\vartheta: Z\rightarrow X$ be a group homomorphism; we denote by ${\rm Res}_{\vartheta}(\hat X)$ the $k^*$-group formed by the group of pairs $(\hat x, y)\in \hat X\times Z$ such that $\vartheta(y)$ is the image of $\hat x$ in $X$, endowed with the group homomorphism mapping $\lambda\in k^*$ on $(\theta(\lambda), 1)$; up to suitable identifications, $Z$ is the $k^*$-quotient of ${\rm Res}_{\vartheta}(\hat X)$. Let $\hat U$ be another $k^*$-group with $k^*$-quotient $U$.
A group homomorphism $\phi: \hat X\rightarrow \hat U$ is a homomorphism of $k^*$-groups if $\phi(\lambda\. \hat x)=\lambda\. \phi(\hat x)$ for any $\lambda\in k^*$ and $\hat x\in \hat X$. For more details on $k^*$-groups, please see [@P6 §5]. [**2.4.**]{}Let $\hat X$ be a $k^*$-group with $k^*$-quotient $X$. By [@S Chapter II, Proposition 8], there exists a canonical decomposition $\O^*\cong (1+J(\O))\times k^*$, thus $k^*$ can be canonically regarded as a subgroup of $\O^*$. Set $$\O_*\hat X=\O\otimes_{\O k^*}\O \hat X \quad,$$ where the left $\O k^*$-module $\O\hat X$ and the right $\O k^*$-module $\O$ are defined by the left and right multiplication by $k^*$ on $\hat X$ and $\O ^*$ respectively. It is straightforward to verify that the $\O$-module $\O_*\hat X$ is an $\O$-algebra with the distributive multiplication $$(a_1\otimes\hat x_1)(a_2\otimes\hat {x}_2) =a_1a_2\otimes\hat x_1\hat{x}_2$$ for any $a_1,a_2\in \O$ and any $\hat x_1,\hat{x}_2 \in \hat X$. [**2.5.**]{}Let ${\cal A}$ be an $X$-algebra over $\O$; that is to say, $\A$ is endowed with a group homomorphism $\psi: X\rightarrow {\rm Aut}({\cal A})$, where ${\rm Aut}({\cal A})$ is the group of all $\O$-automorphisms of ${\cal A}\,;$ usually, we omit to mention $\psi\,.$ For any subgroup $Y$ of $X$, we denote by ${\cal A}^Y$ the $\O$-subalgebra of all $Y$-fixed elements in ${\cal A}$. A [*pointed group*]{} $Y_\beta$ on ${\cal A}$ consists of a subgroup $Y$ of $X$ and of an $({\cal A}^Y)^*$-conjugacy class $\beta$ of primitive idempotents of ${\cal A}^Y$. We often say that $\beta$ is a [*point*]{} of $Y$ on ${\cal A}$. Obviously, $X$ acts on the set of all pointed groups on ${\cal A}$ by the equality $(Y_\beta)^x=Y^x_{\psi(x^{-1})(\beta)}$ and we denote by $N_X(Y_\beta)$ the stabilizer of $Y_\beta$ in $X$ for any pointed group $Y_\beta$ on ${\cal A}$. Another pointed group $Z_\gamma$ is said to be [*contained in*]{} $Y_\beta$ if $Z\leq Y$ and there exist some $i\in \beta$ and $j\in \gamma$ such that $ij=ji=j$.
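A standard example of pointed groups, recorded here only for orientation (it is a sketch and is not used in the sequel): for an $\O X$-module $M$, consider ${\cal A}={\rm End}_\O (M)$ with $X$ acting by conjugation through the module structure; then ${\cal A}^Y={\rm End}_{\O Y}\big({\rm Res}^X_Y(M)\big)$ for any subgroup $Y$ of $X$, and lifting primitive idempotents along the decomposition of ${\rm Res}^X_Y(M)$ yields a bijection

```latex
$$\big\{\hbox{points $\beta$ of $Y$ on ${\rm End}_\O (M)$}\big\}
  \;\longleftrightarrow\;
  \big\{\hbox{isomorphism classes of indecomposable direct summands
  of ${\rm Res}^X_Y(M)$}\big\}\,,$$
```

a point $\beta$ corresponding to the common isomorphism class of the summands $i(M)$ for $i\in\beta$.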
For a subgroup $U$ of $X$, set $${\cal A}(U)=k\otimes _\O ({\cal A}^U/\sum_V {\cal A}^U_V)\quad$$ where $V$ runs over the set of proper subgroups of $U$ and ${\cal A}^U_V$ is the image of the relative trace map ${\rm Tr}_V^U: {\cal A}^V\rightarrow {\cal A}^U$; the canonical surjective homomorphism ${\rm Br}^{\cal A}_U: {\cal A}^U\rightarrow {\cal A}(U)$ is called the [*Brauer homomorphism*]{} of the $X$-algebra ${\cal A}$ at $U$. When ${\cal A}$ is equal to the group algebra $\O X$, the homomorphism $kC_X(U)\rightarrow {\cal A}(U)$ sending $x\in C_X(U)$ onto the image of $x$ in ${\cal A}(U)$ is an isomorphism, through which we identify ${\cal A}(U)$ with $kC_X(U)$. A pointed group $U_\gamma$ on $\cal A$ is said to be [*local*]{} if the image of $\gamma$ in ${\cal A}(U)$ is not equal to $\{0\}\,,$ which forces $U$ to be a $p$-group; then, a local pointed group $U_\gamma$ is said to be a [*defect pointed group*]{} of a pointed group $Y_\beta$ on $\cal A$ if $U_\gamma\leq Y_\beta$ and we have $\beta\subset {\rm Tr}_U^Y({\cal A}^U\. \gamma\. {\cal A}^U)$, where ${\cal A}^U\. \gamma\. {\cal A}^U$ is the ideal of ${\cal A}^U$ generated by $\gamma$. Let $c$ be a block of $X\,;$ then $\{c\}$ is a point of $X$ on $\O X$ and if $P_\gamma$ is a defect pointed group of $X_{\{c\}}$ then $P$ is a defect group of $c$. [**3. Extensions of nilpotent blocks revisited**]{} In this section, we assume that $\O$ is a complete discrete valuation ring with an algebraically closed residue field of characteristic $p$. [**£3.1.**]{}Let $G$ be a finite group, $H$ be a normal subgroup of $G$ and $b$ be a block of $H$ over $\O$. Denote by $N$ the stabilizer of $b$ in $G$ and set $\bar N=N/H$.
Obviously, $\beta=\{b\}$ is a point of $H$ and $N$ on $\O H$ and there is a unique pointed group $G_\alpha$ on $\O H$ such that $$H_\beta\leq N_\beta\leq G_\alpha\quad .$$ Let $Q_\delta$ be a defect pointed group of $H_\beta$ and $P_\gamma$ be a defect pointed group of $N_\beta$ such that $Q_\delta\leq P_\gamma\,;$ by [@KP Proposition 5.3], we have $Q=P\cap H$ and, since we have [@KP 1.7] $$\O G\,{\rm Tr}_N^G( b)\cong {\rm Ind}_N^G (\O N b) \quad ,$$ it is easily checked that $P_\gamma$ is also a [*defect pointed group*]{} of $G_\alpha$ [@P3 1.12]. Assume that the block $b$ is [*nilpotent*]{}; it follows from [@KP Proposition 6.5] that $b$ remains a [nilpotent block]{} of $H\. R$ for any subgroup $R$ of $P\,,$ and from [@KP Theorem 6.6] that there is a unique [local point]{} $\varepsilon$ of $R$ on $\O H$ such that $R_\varepsilon\i P_\gamma\,.$ [**£3.2.**]{}Set ${\cal A} = \O Nb$ and ${\cal B} = \O Hb\,$. Choosing $j\in \delta$ and $i\in \gamma$ such that $ij=ji=j\,,$ we set ${\cal A}_\gamma=(\O N)_\gamma=i{\cal A} i$, ${\cal B}_\gamma=(\O H)_\gamma=i{\cal B}i$ and ${\cal B}_\delta=(\O H)_\delta=j{\cal B}j$. Then ${\cal A}_\gamma$ is a $P$-interior algebra with the group homomorphism $P\rightarrow {\cal A}_\gamma^*$ mapping $u$ onto $ui$ for any $u\in P$, ${\cal B}_\gamma$ is a $P$-stable subalgebra of ${\cal A}_{\gamma}$ and ${\cal B}_\delta$ is a $Q$-interior algebra with the group homomorphism $Q\rightarrow {\cal B}_\delta^*$ mapping $v$ onto $vj$ for any $v\in Q$. Clearly $\cal A$ is an $N/H$-graded algebra with the $\bar x$-component $\O H x b$, where $\bar x\in N/H$ and $x$ is a representative of $\bar x$ in $N$. Since $i$ belongs to the $1$-component $\cal B$, ${\cal A}_\gamma$ is an $N/H$-graded algebra with the $\bar x$-component $i(\O H x)i$. [**£3.3.**]{}In [@KP] Külshammer and Puig describe the structure of any block of $G$ lying over $b$ in terms of a new finite group $L$ which need not be [involved]{} in $G$ [@KP Theorem 1.8].
More explicitly, $L$ is a [group extension]{} of $\bar N$ by $Q$ holding [strong uniqueness]{} properties. In order to prove these properties, in [@KP] the group $L$ is exhibited inside a suitable $\O\-$algebra [@KP Theorem 8.13], demanding a huge effort. But, as a matter of fact, these properties can be obtained [directly]{} from the so-called [local structure]{} of $G$ over $\O H b\,,$ a fact that we have only understood recently. Then, with these uniqueness properties in hand, the second main result [@KP Theorem 1.12] follows quite easily. With the notation and framework of [@KP], we completely develop both new proofs. [**£3.4.**]{}Denote by $\E_{(b,H,G)}$ the category — called the [*extension category*]{} associated to $G\,,$ $H$ and $b$ — where the objects are all the subgroups of $P$ and, for any pair of subgroups $R$ and $T$ of $P\,,$ the morphisms from $T$ to $R$ are the pairs $(\psi_x,\bar x)$ formed by an injective group homomorphism $\psi_x\,\colon T\to R$ and an element $\bar x$ of $\bar N$ both determined by an element $x\in N$ fulfilling $T_\nu\i (R_\varepsilon)^x$ where $\varepsilon$ and $\nu$ are the respective local points of $R$ and $T$ on $\O H$ determined by $P_\gamma$ — in general, we should consider the [*$(b,N)\-$Brauer pairs*]{} over the [$p\-$permutation $N\-$algebra $\O H b$]{} \[5, Definition 1.6 and Theorem 1.8\] but, in our situation, they coincide with the [local pointed groups]{} over this $N\-$algebra.
The composition in $\E_{(b,H,G)}$ is determined by the composition of group homomorphisms and by the product in $\bar N \,.$ [**Theorem £3.5.**]{} [**Proof:**]{} Set $Z = Z(Q)\,,$ $M = N_G (Q_\delta)$ and $\E = \E_{(b,H,G)}\,,$ denote by $\E (R,T)$ the set of $\E\-$morphisms from $T$ to $R\,,$ and write $\E (R)$ instead of $\E (R,R)\,;$ by the very definition of the category $\E\,,$ we have the exact sequence $$1\too C_H (Q)\too M\too \E (Q) \too 1 ;$$ it is clear that $M$ contains $P$ and that we have $C_H (Q)\cap P = Z\,;$ moreover, denoting by $\E_P (Q)$ the image of $P$ in $\E (Q)\,,$ it is easily checked from [@KP  Proposition 5.3] that $\E_P (Q)$ is a Sylow $p\-$subgroup of $\E (Q)\,.$ We claim that the element $\bar h$ induced by $P$ in the second cohomology group ${\Bbb H}^2 \big(\E_P (Q),Z\big)$ belongs to the image of ${\Bbb H}^2 (\E (Q),Z)\,.$ Indeed, according to [@CE  Ch. XII, Theorem 10.1], it suffices to prove that, for any subgroup $R$ of $P$ containing $Z$ and any $(\varphi_x,\bar x)\in \E (Q)$ such that $$(\varphi_x,\bar x)\circ\E_{\! R}(Q)\circ (\varphi_x,\bar x)^{-1}\i \E_{\!P} (Q), \leqno £3.5.3$$ the restriction ${\rm res}_{(\varphi_x,\bar x)} (\bar h)$ of $\bar h$ [via]{} the conjugation by $(\varphi_x,\bar x)$ and the element of ${\Bbb H}^2\big(\E_R(Q), Z \big)$ determined by $R$ coincide; actually, we may assume that $R$ contains $Q\,.$ Thus, $x$ normalizes $Q_\delta$ and inclusion £3.5.3 forces $$C_H(Q)\. R \i \big(C_H (Q)\. P\big)^x;$$ in particular, respectively denoting by $\lambda$ and $\mu$ the points of $C_H (Q)\. P$ and $C_H (Q)\. R$ on $\O H$ such that $\big(C_H (Q)\. P\big)_\lambda$ and $\big(C_H (Q)\. R\big)_\mu$ contain $Q_\delta$ [@P4 Lemma 3.9], by uniqueness we have $$\big(C_H (Q)\. R\big)_\mu\i \big(C_H (Q)\. P\big)_\lambda$$ and, with the notation above, it follows from [@KP  Proposition 3.5] that $P_\gamma$ and $R_\varepsilon$ are [defect pointed groups]{} of the respective pointed groups $\big(C_H (Q)\.
P\big)_\lambda$ and $\big(C_H (Q)\. R\big)_\mu\,;$ consequently, there is $z\in C_H (Q)$ fulfilling $R_\varepsilon \i (P_\gamma)^{zx}$ [@P5  Theorem 1.2]. That is to say, the conjugation by $zx$ induces a group homomorphism $R\to P$ mapping $Z$ onto $Z$ and inducing the element $(\psi_{zx},\overline{zx})$ of $\E (P,R)$ which extends $(\varphi_x,\bar x)\,,$ so that the map $${\rm res}_{(\varphi_x,\bar x)} : \Bbb H^2\big(\E_P (Q),Z\big)\too \Bbb H^2\big(\E_R (Q),Z\big)$$ sends $\bar h$ to the element of ${\Bbb H}^2\big(\E_R(Q), Z \big)$ determined by $R$ [@CE  Chap. XIV, Theorem 4.2]. In particular, the corresponding element of ${\Bbb H}^2 \big(\E (Q),Z\big)$ determines a group extension $$1\too Z\too L \buildrel \pi \over \longrightarrow \E (Q)\too 1$$ and, since $\bar h\in \Bbb H^2\big(\E_P (Q),Z\big)$ is the image of this element, there is a [group extension]{} homomorphism $\tau\,\colon P\to L$ [@CE  Chap. XIV, Theorem 4.2]; it is clear that $\tau$ is injective and, since $\E_P (Q)$ is a Sylow $p$-subgroup of $\E (Q)\,,$ ${\rm Im}(\tau)$ is a Sylow $p\-$subgroup of $L\,;$ moreover, since $N = H\. M$ [@P5 Theorem 1.2], we have $$\bar N \cong M/C_H (Q)\. Q\cong \E (Q)/\E_Q (Q) ;$$ in particular, $\pi$ determines a group homomorphism $\bar\pi\,\colon L\to \bar N$ and, since $\tau$ is a [group extension]{} homomorphism, we get $\bar\pi\big(\tau (u)\big) = \bar u$ for any $u\in P$ and may choose $\pi$ in such a way that we have $$y\tau (v)y^{-1} = \tau\big(\varphi_x (v)\big) \leqno £3.5.4\phantom{.}$$ for any $y\in L$ and any $v\in Q$ where $\pi (y) = (\varphi_x,\bar x)$ for some $x\in N\,.$ Then, we claim that, up to a suitable modification of our choice of $\tau\,,$ the group $L$ endowed with $\tau$ and $\bar\pi$ fulfills the conditions above; set $\hat\E = \E_{(1,\tau (Q),L)}$ for short.
For any pair of subgroups $R$ and $T$ of $P$ containing $Q\,,$ since we have $H\cap R = Q = H\cap T\,,$ denoting by $\varepsilon$ and $\nu$ the respective local points of $R$ and $T$ such that $P_\gamma$ contains $R_\varepsilon$ and $T_\nu\,,$ these local pointed groups contain $Q_\delta$ and, in particular, any $\E\-$morphism $$(\psi_x,\bar x) : T\too R$$ determines an element $(\varphi_x,\bar x)$ of $\E (Q)$ fulfilling $$(\varphi_x,\bar x)\circ \E_T (Q)\circ (\varphi_x,\bar x)^{-1} \subset \E_R (Q) \quad .$$ Thus, for any $y\in L$ such that $\pi (y) = (\varphi_x,\bar x)\,,$ we have $$y\,\tau (T)\,y^{-1}\i \tau (R) \quad ;$$ more precisely, for any $w\in T$ and any $v\in Q\,,$ from equality £3.5.4 we get $$y\,\tau (v^w)\,y^{-1} = \tau \big(\varphi_x (v^w)\big) = \tau \big(\varphi_x(v)\big)^{\tau (\psi_x (w))} \quad ;$$ moreover, since $x T x^{-1}\i R\,,$ we have $$\bar\pi \big(y\,\tau (w)\,y^{-1}\big) = \bar x\,\bar w\, \bar x^{-1} = \bar\pi \Big(\tau (\psi_x (w))\Big) \quad .$$ Hence, for any $w\in T$ and a suitable $\theta_x (w)\in Z\,,$ we get $$y\,\tau \big(w\,\theta_x (w)\big)\,y^{-1} = \tau (\psi_x (w)) \quad .$$ Conversely, since $R$ and $T$ have a unique (local) point on $\O Q\,,$ any $\hat\E\-$morphism from $T$ to $R$ induced by an element $y$ of $L$ determines an element $\pi (y) = (\varphi_x,\bar x)$ of $\E (Q)\,,$ for some $x\in N\,,$ which still fulfills $$(\varphi_x,\bar x)\circ \E_T (Q)\circ (\varphi_x,\bar x)^{-1} \subset \E_R (Q) \quad ;$$ thus, as above, $x$ normalizes $Q_\delta$ and this inclusion forces $$C_H(Q)\. T \i \big(C_H (Q)\. R\big)^x \quad .$$ Once again, respectively denoting by $\lambda$ and $\mu$ the points of $C_H (Q)\. R$ and $C_H (Q)\. T$ on $\O H$ such that $\big(C_H (Q)\. R\big)_\lambda$ and $\big(C_H (Q)\. 
T\big)_\mu$ contain $Q_\delta$ [@P4 Lemma 3.9], and by $\varepsilon$ and $\nu$ the local points of $R$ and $T$ on $\O H$ such that $P_\gamma$ contains $R_\varepsilon$ and $T_\nu\,,$ it follows from [@KP  Proposition 3.5] that $R_\varepsilon$ and $T_\nu$ are defect pointed groups of the respective pointed groups $\big(C_H (Q)\. R\big)_\lambda$ and $\big(C_H (Q)\. T\big)_\mu\,;$ since by uniqueness we have $$\big(C_H (Q)\. T\big)_\mu\i \big(C_H (Q)\. R\big)_\lambda ,$$ there is $z\in C_H (Q)$ fulfilling $T_\nu \i (R_\varepsilon)^{zx}$ [@P5 Theorem 1.2]. That is to say, the conjugation by $zx$ induces a group homomorphism $\psi_{zx}\,\colon T\to R$ mapping $Z$ onto $Z$ and inducing the element $(\psi_{zx},\overline{zx})$ of $\E (R,T)$ which extends $(\varphi_x,\bar x)\,;$ hence, as above, for any $w\in T$ and a suitable $\theta_y (w)\in Z$ we get $$y\,\tau \big(w\,\theta_y (w)\big)\,y^{-1} = \tau (\psi_{zx} (w)). \leqno £3.5.5$$ We claim that, for a suitable choice of $\tau\,,$ the elements $\theta_x (w)$ and $\theta_y (w)$ are always trivial; then, the equivalence of categories £3.5.2 will be an easy consequence of the above correspondences. 
Above, for any $y\in L$ such that $\tau (T)\subset \tau (R)^y$ we have found an element $\big(\psi_y,\bar\pi (y)\big)\in \E (R,T)$ lifting $\pi (y)$ in such a way that, for any $w\in T\,,$ we have $$\tau \big(w\,\theta_y (w)\big) = \tau\big(\psi_y (w)\big)^y \leqno £3.5.6\phantom{.}$$ for a suitable $\theta_y (w)\in Z\,;$ note that, according to equality £3.5.4, for any $v\in Q$ we have $\theta_y (v) = 1\,,$ and whenever $y$ belongs to $\tau (R)$ we may choose $\psi_y$ in such a way that $\theta_y (w) = 1\,.$ In this situation, for any $w,w'\in T\,,$ we get $$\begin{aligned} \tau \big(ww'\theta_y (ww')\big) &=& \tau \big(\psi_y (ww')\big)^y \\ &=& \tau\big(\psi_y (w) \big)^y\tau\big(\psi_y (w') \big)^y \\ &=& \tau \big(w\,\theta_y (w)\big)\tau \big(w'\theta_y (w')\big) \\ &=& \tau\big(w\,\theta_y (w)\,w'\theta_y (w')\big) \\ &=& \tau \big(ww'\theta_y (w)^{w'}\theta_y (w')\big)\end{aligned}$$ and therefore, since $\tau$ is injective, we still get $$\theta_y (ww') = \theta_y (w)^{w'}\theta_y (w') \quad ;$$ in particular, for any $z\in Z$ we have $$\theta_y (wz) = \theta_y (w)^z\,\theta_y (z) = \theta_y (w) \quad .$$ In other words, the map $\theta_y$ determines a $Z\-$valued $1\-$[cocycle]{} from the image $\tilde T$ of $T$ in $\widetilde{\rm Aut}(Q) = {\rm Out} (Q)\,.$ Actually, the [cohomology class]{} $\bar\theta_y$ of this $1\-$cocycle does not depend on the choice of $\psi_y\,;$ indeed, if another choice $\psi'_y$ determines $\theta'_y\,\colon T\to Z$ then we clearly have $\psi'_y (T) = \psi_y (T)$ and, according to our argument above, there is $z\in C_H (Q)$ such that $$(T_\nu)^z = T_\nu\qq \psi'_y = \psi_y\circ \kappa_z \quad ,$$ where $\kappa_z\,\colon T\to T$ denotes the conjugation by $z\,;$ actually, we still have $$[z,T]\i H\cap T = Q \quad .$$ But, since $T_\nu$ is a defect pointed group of $\big(C_H (Q)\. T\big)_\mu$ and, according to \[4, Theorem 1.2\] and [@KP Proposition 6.5], $\mu$ determines a [nilpotent block]{} of the group $C_H (Q)\. 
T\,,$ we have $N_{C_H (Q)\. T} (T_\nu) = C_H (T)\. T\,.$ Thus, $z$ belongs to $Z\. C_H (T)$ and we actually may assume that $z$ belongs to $Z\,.$ In this case, it follows from equality £3.5.6 applied twice that $$\begin{aligned} \tau \big(w\,\theta'_y (w)\big) &=& \tau\big(\psi'_y (w)\big)^y \\ &=& \tau\big(\psi_y (z w z^{-1})\big)^y \\ &=& \tau \big((z w z^{-1})\, \theta_y (z w z^{-1})\big)\end{aligned}$$ for any $w\in T$ and, since $\theta_y (z w z^{-1}) =\theta_y (w)$ and $\tau$ is injective, we get $$\theta'_y (w)\theta_y (w)^{-1} = w^{-1}z w z^{-1} = (z^{-1})^w z \quad .$$ Consequently, denoting by $\T_{\!L}$ the category where the objects are the subgroups of $\tau (P)$ and the set of morphisms $\T_{\! L} \big(\tau (R),\tau (T)\big)$ from $\tau (T)$ to $\tau (R)$ is just the corresponding [*transporter*]{} in $L\,,$ the correspondence sending an element $y\in \T_{\! L} \big(\tau (R),\tau (T)\big)$ to the cohomology class $\bar\theta_y$ of $\theta_y$ determines a map $$\bar\theta_{_{R,T}} : \T_{\! L} \big(\tau (R),\tau (T)\big)\too {\Bbb H}^1 (\tilde T,Z) \quad .$$ Moreover, if $U$ is a subgroup of $P$ containing $Q$ and $t$ an element of $L$ fulfilling $\tau (U)\subset \tau (T)^t\,,$ as above we can choose $\big(\psi_t,\bar\pi (t)\big)\in \E (T,U)$ lifting $\pi (t)$ in such a way that, for any $u\in U\,,$ we have $$\tau \big(u\,\theta_t (u)\big) = \tau\big(\psi_t (u)\big)^t$$ for a suitable $\theta_t (u)\in Z\,;$ then, the composition $\big(\psi_y,\bar\pi (y)\big)\circ\big(\psi_t,\bar\pi (t)\big)$ lifts $\pi (yt)$ and, for any $u\in U\,,$ we may assume that (cf. 
£3.5.4) $$\begin{aligned} \tau \big(u\,\theta_{yt} (u)\big) &=& \tau\big((\psi_y\circ\psi_t) (u)\big)^{yt} \\ &=& \tau \Big(\psi_t (u)\,\theta_y\big(\psi_t (u)\big)\Big)^t \\ &=& \tau \big(u\,\theta_t (u)\big) \tau \Big(\theta_y \big(\psi_t (u)\big)\Big)^t \\ &=& \tau \bigg(u\,\theta_t (u)\,\pi (t)^{-1} \Big(\theta_y \big(\psi_t (u)\big)\Big)\bigg)\quad ;\end{aligned}$$ finally, since $\tau$ is injective, using [additive notation]{} in $Z$ we get $$\theta_{yt} (u) = \theta_t (u) + \pi (t)^{-1}\Big(\theta_y \big(\psi_t (u)\big)\Big) \quad .$$ Hence, denoting by $\tilde t$ the image of $t$ in $\widetilde{\rm Aut}(Q)$ and by $\psi_{\tilde t}\,\colon \tilde U\to \tilde T$ and $\Z(\tilde t)\,\colon Z\cong Z$ the corresponding group homomorphisms, we get the [$1\-$cocycle condition]{} $$\bar\theta_{yt} = \bar\theta_t + {\Bbb H}^1 \big(\psi_{\tilde t}, \Z (\tilde t)\big) (\bar\theta_y) \quad ; \leqno £3.5.7$$ in particular, since $\theta_y (w) = 0$ whenever $y\in\tau (R)\,,$ it is easily checked from this condition that $\bar\theta_y$ only depends on the class of $y$ in the [*exterior quotient*]{} $$\tilde\T_{\! L} \big(\tau (R),\tau (T)\big) = \tau (R)\backslash \T_{\! 
L} \big(\tau (R),\tau (T)\big) .$$ Thus, respectively denoting by $\tilde L\,,$ $\tilde R\,,$ $\tilde T$ and $\tilde P$ the images of $L\,,$ $\tau(R)\,,$ $\tau(T)$ and $\tau (P)$ in $\widetilde{\rm Aut}(Q)\,,$ the map $\bar\theta_{_{R,T}}$ above admits a factorization $$\skew4\tilde{\bar\theta}_{_{\tilde R,\tilde T}} : \tilde \T_{\!\tilde L} (\tilde R,\tilde T)\too {\Bbb H}^1 \big(\tilde T,Z\big) .$$ That is to say, let us consider the [exterior quotient]{} $\tilde\T_{\!\tilde L}$ of the category $\T_{\!\tilde L}$ and the [contravariant]{} functor $${\frak h^1_Z }: \tilde\T_{\!\tilde L}\too \Ab$$ to the category of Abelian groups $\Ab$ mapping $\tilde T$ on ${\Bbb H}^1 \big(\tilde T,Z\big)\,;$ then, identifying the $\tilde\T_{\!\tilde L}\-$morphism $\tilde y\in \tilde \T_{\!\tilde L} (\tilde R,\tilde T)$ with the obvious [$\tilde\T_{\!\tilde L}\-$chain]{} $\Delta_1\too \tilde\T_{\tilde L}$ — the [functor]{} from the category $\Delta_1\,,$ formed by the objects $0$ and $1$ and a non-identity morphism $0\bullet 1$ from $0$ to $1\,,$ mapping $0$ on $T\,,$ $1$ on $R$ and $0\bullet 1$ on $\tilde y$ — the family $\bar \theta = \{\bar\theta_{\tilde y}\}_{\tilde y}\,,$ where $\tilde y$ runs over the set of all the $\tilde\T_{\!\tilde L}\-$morphisms, defines a [$1\-$cocycle]{} from $\tilde \T_{\!\tilde L}$ to ${\frak h^1_Z}$ since equalities £3.5.7 guarantee that the [differential map]{} sends $\bar\theta$ to zero. 
We claim that this [$1\-$cocycle]{} is a [$1\-$coboundary]{}; indeed, for any subgroup $\tilde R$ of $\tilde P\,,$ choose a set of representatives $\tilde X_{\tilde R}\subset \tilde L$ for the set of [double classes]{} $\tilde P\backslash \tilde L/\tilde R$ and, for any $\tilde n\in \tilde X_{\tilde R}\,,$ set $\tilde R_{\tilde n} = \tilde R\cap \tilde P^{\tilde n}\,,$ consider the $\tilde\T_{\tilde L}\-$morphisms $\tilde n\,\colon \tilde R_{\tilde n}\to \tilde P$ and $\tilde\imath_{\tilde R_{\tilde n}}^{\tilde R} \,\colon \tilde R_{\tilde n} \to \tilde R$ respectively determined by $\tilde n$ and by the trivial element of $\tilde L\,,$ and denote by $$({\frak h}^1_Z)^{^{\!\circ}} (\tilde\imath_{\tilde R_{\tilde n}}^{\tilde R}) : {\Bbb H}^1 \big(\tilde R_{\tilde n},Z\big)\too {\Bbb H}^1 \big(\tilde R,Z\big)$$ the corresponding [transfer homomorphism]{} [@CE  Ch. XII, §8]; then, setting $$\bar\sigma_{\tilde R} = {\vert P\vert \over \vert L\vert}\. \sum_{\tilde n\in \tilde X_{\tilde R}} \big(({\frak h}^1_Z)^{^{\!\circ}} (\tilde\imath_{\tilde R_{\tilde n}}^{\tilde R})\big) (\bar\theta_{\tilde n}) \quad ,$$ we claim that, for any ${\tilde y}\in\tilde\T_{\!\tilde L}(\tilde R,\tilde T)\,,$ we have $$\bar\theta_{\tilde y} = \bar\sigma_{\tilde T} - \big({\frak h^1_Z} (\tilde y)\big)(\bar\sigma_{\tilde R}) \quad . \leqno £3.5.8$$ Indeed, note that ${\frak h^1_Z} (\tilde y)$ is the composition of the restriction [via]{} the $\tilde\T_{\tilde L}\-$morphism $$\tilde\imath_{\tilde y\tilde T \tilde y^{-1}}^{\tilde R} : \tilde y\,\tilde T \,\tilde y^{-1}\too \tilde R$$ determined by the trivial element of $\tilde L\,,$ with the conjugation determined by $\tilde y\,,$ which we denote by ${\frak h^1_Z} (\tilde y_*)\,;$ thus, by the corresponding [Mackey equalities]{} [@CE Ch.
XII, Proposition 9.1], we get $$\begin{aligned} &{\frak h^1_Z} (\tilde y)\Big(\sum_{\tilde n\in \tilde X_{\tilde R}} \big(({\frak h}^1_Z)^{^{\!\circ}} (\tilde\imath_{\tilde R_{\tilde n}}^{\tilde R})\big) (\bar\theta_{\tilde n})\Big) \\ =& {\frak h^1_Z} (\tilde y_*)\Big(\sum_{\tilde n\in \tilde X_{\tilde R}}\, \sum_{\tilde r\in \tilde Y_{\tilde n}} \big(({\frak h}^1_Z)^{^{\!\circ}} (\tilde\imath_{\tilde P^{\tilde n \tilde r} \,\cap\, \tilde y\,\tilde T\, \tilde y^{-1}}^{\tilde y\, \tilde T\,\tilde y^{-1}}) \circ {\frak h^1_Z} (\tilde r)\big) (\bar\theta_{\tilde n})\Big) \\ =& \sum_{\tilde n\in \tilde X_{\tilde R}}\, \sum_{\tilde r\in \tilde Y_{\tilde n}} \big(({\frak h}^1_Z)^{^{\!\circ}} (\tilde\imath_{\tilde P^{\tilde n\tilde r \tilde y} \,\cap\,\tilde T}^{\tilde T}) \circ {\frak h^1_Z} (\tilde r\tilde y)\big) (\bar\theta_{\tilde n})\quad ,\end{aligned}$$ where, for any $\tilde n\in \tilde X_{\tilde R}\,,$ the subset $\tilde Y_{\tilde n}\subset \tilde R$ is a set of representatives for the set of [double classes]{} $\tilde R_{\tilde n} \backslash \tilde R/\,\tilde y\,\tilde T\, \tilde y^{-1}$ and, for any $\tilde r\in \tilde Y_{\tilde n}\,,$ we consider the $\tilde\T_{\tilde L}\-$morphisms $$\tilde r: \tilde P^{\tilde n \tilde r} \cap \tilde y\,\tilde T\,\tilde y^{-1} \too \tilde R_{\tilde n}\qq \tilde r\tilde y : \tilde P^{\tilde n\tilde r \tilde y}\cap\tilde T \too \tilde R_{\tilde n} \quad .$$ Moreover, setting $\tilde m = \tilde n\tilde r\tilde y$ for $\tilde n\in \tilde X_{\tilde R}$ and $\tilde r\in \tilde Y_{\tilde n}\,,$ since we assume that $\bar\theta_{\tilde r} = 0\,,$ it follows from equality £3.5.7 that $$\big({\frak h^1_Z} (\tilde r\tilde y)\big) (\bar\theta_{\tilde n}) = \bar\theta_{\tilde m} - \bar\theta_{\tilde r\tilde y} =\bar\theta_{\tilde m} - \big({\frak h^1_Z}(\tilde\imath_{\tilde T_{\tilde m}}^{\tilde T})\big)(\bar\theta_{\tilde y}) \quad ;$$ thus, choosing $\tilde X_{\tilde T} = \bigsqcup_{\,\tilde n\in \tilde X_{\tilde R}} \tilde n\,\tilde Y_{\tilde n}\,\tilde
y\,,$ we get [@CE  Ch. XII, §8.(6)] $$\begin{aligned} \bar\sigma_{\tilde T} - \big({\frak h^1_Z} (\tilde y)\big) (\bar\sigma_{\tilde R}) &=& {\vert P\vert \over \vert L\vert}\. \sum_{\tilde m\in \tilde X_{\tilde T}} \big(({\frak h}^1_Z)^{^{\!\circ}} (\tilde\imath_{ \tilde T_{\tilde m}}^{\tilde T})\big)\Big(\bar\theta_{\tilde m} - \big({\frak h^1_Z} (\tilde r\tilde y)\big) (\bar\theta_{\tilde n})\Big) \\ &=& {\vert P\vert \over \vert L\vert}\. \sum_{\tilde m\in \tilde X_{\tilde T}} \big(({\frak h}^1_Z)^{^{\!\circ}} (\tilde\imath_{\tilde T_{\tilde m}}^{\tilde T})\big)\Big(\big({\frak h^1_Z}(\tilde\imath_{\tilde T_{\tilde m}}^{\tilde T})\big) (\bar\theta_{\tilde y})\Big) \\ &=& \sum_{\tilde m\in \tilde X_{\tilde T}}{\vert\tilde T/ \tilde T_{\tilde m}\vert \over\vert\tilde L/\tilde P\vert}\. \bar\theta_{\tilde y}=\bar\theta_{\tilde y} \quad .\end{aligned}$$ In particular, for any subgroup $\tilde R$ of $\tilde P\,,$ we get $$\bar \sigma_{\tilde R} = \big({\frak h^1_Z}(\tilde\imath_{\tilde R}^{\tilde P})\big)(\bar \sigma_{\tilde P})$$ and the element $\bar\sigma_{\tilde P}\in \Bbb H^1(\tilde P,Z)$ can be lifted to a $1\-$cocycle $\sigma_{\tilde P}\,\colon \tilde P\to Z$ which determines a group automorphism $\sigma\,\colon P\cong P$ mapping $u\in P$ on $u\,\sigma_{\tilde P}(\tilde u)$ where $\tilde u$ denotes the image of $u$ in $\tilde P\,;$ moreover, according to equality £3.5.8, in £3.5.5 we may choose $$\theta_y (w) = \sigma_{\tilde P}(\tilde w)\big(\pi (y)\big)^{-1}\Big(\sigma_{\tilde P} \big(\widetilde{\psi_y (w)}\big)\Big)^{-1} .$$ Hence, replacing $\tau$ by $\hat\tau = \tau\circ\sigma\,,$ the maps $\pi$ and $\hat\tau$ still fulfill the conditions above and, for any $w\in T\,,$ in equality £3.5.6 we get $$\begin{aligned} \tau\big(\psi_y (w)\big)^y &=& \tau \big(w\,\theta_y (w)\big) \\ &=& \tau\bigg(w\big(w^{-1}\sigma (w)\big)\big(\pi (y)\big)^{-1} \Big(\psi_y (w)^{-1} \sigma \big(\psi_y (w)\big)\Big)^{-1}\bigg) \\ &=& \tau \bigg(\sigma (w)\big(\pi
(y)\big)^{-1}\Big(\sigma \big(\psi_y (w) \big)^{-1}\psi_y (w)\Big)\bigg) \\ &=& \hat\tau (w)\tau \Big(\sigma\big(\psi_y (w)\big)^{-1} \psi_y (w)\Big)^y \\ &=& \hat\tau (w) \hat\tau\big(\psi_y (w)^{-1}\big)^y \tau\big(\psi_y (w) \big)^y\end{aligned}$$ so that, as announced, we obtain $$\hat\tau\big(\psi_y (w)\big)^y = \hat\tau (w) \quad .$$ In conclusion, we get a functor from $\hat\E$ to $\E$ mapping any $\hat \E\-$morphism $$(\kappa_y,\bar y) : \hat\tau (T)\too \hat\tau (R)$$ induced by an element $y$ of $L\,,$ where $\kappa_y$ denotes the corresponding conjugation by $y$ which actually fulfills $\hat\tau (Q\. T)\i \big(\hat\tau (Q\. R)\big)^y\,,$ on the $\E\-$morphism $$\big(\psi_y,\bar\pi (y)\big) : T\too R$$ where $\psi_y\,\colon T\to R$ is the group homomorphism determined by the equality $$\hat\tau_R \circ\psi_y = \kappa_y\circ \hat\tau_T \quad ,$$ $\hat\tau_R$ and $\hat\tau_T$ denoting the respective restrictions of $\hat\tau$ to $R$ and $T\,;$ indeed, it is clear that this correspondence maps the composition of $\hat\E\-$morphisms on the corresponding composition of $\E\-$morphisms. 
Moreover, it is clear that this functor is [faithful]{}, and it follows from our argument above that any $\E\-$morphism $$(\psi_x,\bar x) : T\too R$$ comes from an $\hat\E\-$morphism from $\hat\tau (T)$ to $\hat\tau (R)\,.$ Moreover, for another triple $L'\,,$ $\tau'$ and $\bar\pi'$ fulfilling the above conditions, the corresponding equivalences of categories £3.5.2 induce an equivalence of categories $$\hat\E\cong \E_{(1,\tau' (Q),L')} = \E' \quad ;\leqno £3.5.9$$ in particular, we have a group homomorphism $$\bar\sigma : L\too \hat\E \big(\hat\tau (Q)\big)\cong \E' \big(\tau' (Q)\big)\cong L'/\tau' (Z)$$ and we claim that Lemma £3.6 below applies to the finite groups $L$ and $L'\,,$ with the Sylow $p\-$subgroup $\hat\tau (P)$ of $L\,,$ the Abelian normal $p\-$group $\tau' (Z)$ of $L'$ and the group homomorphism $\bar\sigma\,\colon L\to L'/\tau' (Z)$ above; indeed, the group homomorphism $\hat\tau (P)\to L'$ mapping $\hat\tau (u)$ on $\tau' (u)\,,$ for any $u\in P\,,$ clearly lifts the restriction of $\bar\sigma$ and it is easily checked from the equivalence £3.5.9 that it fulfills condition £3.6.1 below. Consequently, the last statement immediately follows from this lemma. We are done. [**Lemma £3.6.**]{} [**Proof:**]{} It is clear that $\bar\sigma$ determines an action of $L$ on $Z$ and it makes sense to consider the [cohomology groups]{} $\Bbb H^n (L,Z)$ and $\Bbb H^n (P,Z)$ for any $n$ in $\Bbb N\,.$ But, $M$ determines an element $\bar\mu$ of ${\Bbb H}^2 (\bar M,Z)$ [@CE  Chap. XIV, Theorem 4.2] and if there is a group homomorphism $\tau\,\colon P\to M$ lifting the restriction of $\bar\sigma$ then the corresponding image of $\bar\mu$ in ${\Bbb H}^2 (P,Z)$ has to be zero [@CE Chap. XIV, Theorem 4.2]; thus, since the restriction map $${\Bbb H}^2 (L,Z)\too {\Bbb H}^2 (P,Z)$$ is injective [@CE  Ch.
XII, Theorem 10.1], we also get $$\big({\Bbb H}^2 (\bar\sigma,{\rm id}_Z)\big)(\bar\mu) = 0$$ and therefore there is a group homomorphism $\sigma\,\colon L\to M$ lifting $\bar\sigma\,.$ At this point, the [difference]{} between $\tau$ and the restriction of $\sigma$ to $P$ defines a [$1\-$cocycle]{} $\theta\,\colon P\to Z$ and, for any subgroup $R$ of $P$ and any $x\in L$ such that $R^x\subset P\,,$ it follows from condition £3.6.1 that, for a suitable $y\in M$ fulfilling $\bar y = \bar\sigma (x)\,,$ for any $u\in R$ we have $$\begin{aligned} \theta (u^x) &=& \tau (u^x)^{-1}\sigma (u^x)\\ & =& \tau (u^{-1})^y\sigma (u)^{\sigma (x)} \\ &=& \tau (u^{-1})^y \tau (u)^{\sigma (x)}\theta (u)^{\sigma (x)} \\ &=& \Big(\big(y\sigma (x)^{-1}\big)^{-1}\big(y\sigma (x)^{-1}\big)^{\tau (u)}\theta (u)\Big)^{\sigma (x)}\quad ; \end{aligned}$$ consequently, since the map sending $u\in R$ to $$\big(y\sigma (x)^{-1}\big)^{-1}\big(y\sigma (x)^{-1}\big)^{\tau (u)}\in Z$$ is a [$1\-$coboundary]{}, the cohomology class $\bar\theta$ of $\theta$ is $L\-$[stable]{}, and it follows again from [@CE  Ch. XII, Theorem 10.1] that it is the restriction of a suitable element $\bar \eta\in {\Bbb H}^1 (L,Z)\,;$ then, it suffices to modify $\sigma$ by a representative of $\bar\eta$ to get a new group homomorphism $\sigma'\,\colon L\to M$ lifting $\bar\sigma$ and extending $\tau\,.$ Now, if $\sigma''\,\colon L\to M$ is another group homomorphism which lifts $\bar\sigma$ and extends $\tau\,,$ the element $\sigma'' (x)\sigma' (x)^{-1}$ belongs to $Z$ for any $x\in L$ and thus, we get a [$1\-$cocycle]{} $\lambda\,\colon L\to Z$ mapping $x\in L$ on $\sigma'' (x)\sigma' (x)^{-1}\,,$ which vanishes over $P\,;$ hence, it is a [$1\-$coboundary]{} [@CE  Ch. XII, Theorem 10.1] and therefore there exists $z\in Z$ such that $$\lambda (x) = z^{-1}\sigma' (x) z\sigma' (x)^{-1}$$ so that we have $\sigma'' (x) =\sigma' (x)^z$ for any $x\in L\,.$ We are done.
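Both steps of the preceding proof rest on the same classical mechanism from [@CE Ch. XII], which we recall for convenience in the generality used here, with $P$ a Sylow $p$-subgroup of $L$ and $Z$ a finite Abelian $p$-group endowed with an action of $L$; this is a standard recollection, not specific to the present setting.

```latex
% Classical restriction-transfer mechanism (cf. Cartan-Eilenberg, Ch. XII):
% for any n >= 1 the composition of restriction with the transfer
% (corestriction) is multiplication by the index,
\[
\mathrm{cor}^{L}_{P}\circ \mathrm{res}^{L}_{P}
  \;=\;\vert L:P\vert\cdot\mathrm{id}
  \qquad\hbox{on ${\Bbb H}^{n}(L,Z)$}\,;
\]
% since Z is a finite Abelian p-group and |L:P| is prime to p,
% multiplication by |L:P| is invertible on H^n(L,Z), so the restriction map
\[
\mathrm{res}^{L}_{P} : {\Bbb H}^{n}(L,Z)\;\longrightarrow\;
{\Bbb H}^{n}(P,Z)
\]
% is injective; moreover, by the stable-element theorem, its image consists
% exactly of the L-stable classes.
```

It is this injectivity, together with the characterization of the image by $L$-stable classes, that is applied above to the classes of $\mu\,,$ $\theta$ and $\lambda\,.$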
[**£3.7.**]{}Since $Q$ normalizes a unitary [full matrix]{} $\O\-$subalgebra $T$ of ${\cal B}_\delta$ such that [@P4  Theorem 1.6] $${\cal B}_\delta\cong T\,Q\qq {\rm rank}_\O (T)\equiv 1 \bmod p \quad ,\leqno £3.7.1$$ the action of $Q$ on $T$ admits a unique lifting to a group homomorphism [@P4 1.8] $$Q\too {\rm Ker}({\rm det}_T) \quad ;$$ hence, we have $$\B_\delta\cong T\otimes_\O \O Q$$ and therefore ${\B}_\delta$ admits a unique two-sided ideal ${\frak n}_\delta$ such that, considering ${\cal B}_\delta/{\frak n}_\delta$ as a $Q$-interior $\O$-algebra, there is an isomorphism $${\cal B}_\delta/{\frak n}_\delta\cong T$$ of $Q$-interior $\O$-algebras. Then, a canonical [embedding]{} $f_\delta\,\colon {\cal B}_\delta\to {\rm Res}_Q^H ({\cal B})$ [@P4  2.8] and the ideal ${\frak n}_\delta$ determine a two-sided ideal ${\frak n}$ of $\cal B$ such that $S = {\cal B}/{\frak n}$ is also a [full matrix]{} $\O\-$algebra. [**Proposition £3.8.**]{} [*With the notation above, the action of $N$ on $\cal B$ stabilizes ${\frak n}\,.$*]{} [**Proof:**]{} Since we have $N = H\. 
N_G (Q_\delta)\,,$ for the first statement we may consider $x\in N_G (Q_\delta)\,;$ then, denoting by $\sigma_x$ the automorphism of $Q$ induced by the conjugation by $x\,,$ it is clear that the isomorphism $$f_x : {\rm Res}_{\sigma_x}\big({\rm Res}_Q^H ({\cal B})\big)\cong {\rm Res}_Q^H ({\cal B})$$ of $Q$-interior algebras mapping $a\in \cal B$ on $a^x$ induces a commutative diagram of [*exterior*]{} homomorphisms of $Q$-interior algebras [@P4 2.8] $$\matrix{{\rm Res}_{\sigma_x}\big({\rm Res}_Q^H ({\cal B})\big)&\buildrel \tilde f_x\over\cong &{\rm Res}_Q^H ({\cal B})\cr \hskip-10pt{\scriptstyle \tilde f_\delta}\hskip4pt\uparrow&\phantom{\Big\uparrow}&\uparrow\hskip4pt {\scriptstyle \tilde f_\delta}\hskip-10pt\cr {\rm Res}_{\sigma_x} ({\cal B}_\delta)&\buildrel (\tilde f_x)_\delta\over\cong& {\cal B}_\delta\cr} \quad ;$$ moreover, the uniqueness of ${\frak n}_\delta$ clearly implies that this ideal is stabilized by $(\tilde f_x)_\delta\,;$ consequently, ${\frak n} $ is still stabilized by $\tilde f_x\,.$ [**£3.9.**]{} In particular, $N$ acts on the [full matrix]{} $\O\-$algebra $S$ and therefore the action on $S$ of any element $x\in N$ can be lifted to a suitable $s_x\in S^*\,;$ thus, setting ${\rm r }= {\rm rank}_\O(S)\,,$ denoting by $\bar H$ the image of $H$ in $S^*$ and considering a finite extension $\O'$ of $\O$ containing the group $U$ of $\vert H\vert\-$th roots of unity and the ${\rm r}\-$th roots of ${\rm det}_S (s_x)$ for any $x\in N\,,$ since ${\rm r}$ divides $\vert H\vert\,,$ the [*pull-back*]{} $$\matrix{N &\too & {\rm Aut}(\O'\otimes_\O S)\cr \uparrow&\phantom{\big\uparrow}&\uparrow\cr \hat N &\too &(U\otimes\bar H)\.{\rm Ker}({\rm det}_{\O'\otimes_\O S})\cr}$$ determines a central extension $\hat N$ of $N$ by $U\,,$ which clearly does not depend on the choice of $\O'\,;$ moreover, the inclusion $H\i N$ and the structural group homomorphism $H\to (\O'\otimes_\O S)^*$ induces an injective group homomorphism $H\to \hat N$ with an image which is a 
normal subgroup of $\hat N$ and has a [trivial]{} intersection with the image of $U$ — we identify this image with $H$ and set $$\skew3\hat{\bar N} = \hat N/H\quad .$$ We will consider the $H\-$interior $N\-$algebras (see [@P7 2.1]) $$\hat{\cal A} = S^\circ\otimes_\O {\cal A}\qq \hat{\cal B} = S^\circ\otimes_\O {\cal B}$$ and note that $\O'\otimes \hat \A$ actually has an $\hat N\-$interior algebra structure. [**£3.10.**]{}On the other hand, since $b$ is also a [nilpotent]{} block of the group $H\. P\,,$ it is easily checked that [@P4 1.9] $$\O(H\. P)b\big/J\big(\O(H\. P)b\big)\cong k\otimes_\O S \quad ;$$ moreover, since the inclusion map $\O H\to \O (H\. P)$ is a [semicovering of $P\-$algebras]{} [@KP Example 3.9, 3.10 and Theorem 3.16], we can identify $\gamma$ with a local point of $P$ on $\O(H\. P)b$. Set $\O(H\. P)_\gamma=i(\O(H\. P))i$ and $S_\gamma=\bar\imath S\bar\imath$, where $\bar\imath$ is the image of $i$ in $S\,;$ then, as in £3.7 above, we have an isomorphism of $P$-interior algebras [@P4 Theorem 1.6] $$\O(H\. P)_\gamma\cong S_\gamma\, P \quad ,\leqno £3.10.1$$ $S_\gamma$ is actually a [*Dade $P\-$algebra*]{} — namely, a [full matrix]{} $P\-$algebra over $\O$ where $P$ stabilizes an $\O\-$basis containing the unity element — such that ${\rm rank}_\O(S_\gamma)\equiv 1\,\, {\rm mod}\,\, p$, and the action of $P$ on $S_\gamma$ can be uniquely lifted to a group homomorphism $P\to {\rm Ker}({\rm det}_{S_\gamma})$ [@P4 1.8], so that isomorphism £3.10.1 becomes $$\O(H\. P)_\gamma\cong S_\gamma\otimes_\O \O P \quad .\leqno £3.10.2$$ [**Proposition £3.11.**]{} [**Proof:**]{} It follows from isomorphism £3.10.2 that the canonical homomorphism of $P\-$algebras $$\O(H\.
P)_\gamma\too S_\gamma \leqno £3.11.1\phantom{.}$$ admits a $P\-$algebra section mapping $s\in S_\gamma$ on the image of $s\otimes 1$ by the inverse of that isomorphism, which proves that the $P$-interior algebra homomorphism £3.11.1 is a [covering]{} [@P4 4.14 and Example 4.25]; thus, since the inclusion map $\O H\to \O (H\. P)$ is a semicovering of $P\-$algebras, the canonical homomorphism of $P\-$algebras $${\cal B}_\gamma = (\O H)_\gamma\too S_\gamma$$ remains a [semicovering]{} [@KP  Proposition 3.13]; moreover, since ${\frak n}\i J(\cal B)\,,$ it is a [strict semicovering]{} [@KP  3.10]. [**£3.12.**]{}Consequently, it easily follows from [@KP  Theorem 3.16] and [@P4  Proposition 5.6] that we still have a [strict semicovering]{} homomorphism of $P\-$algebras $$(S_\gamma)^\circ\otimes_\O {\cal B}_\gamma\too (S_\gamma)^\circ\otimes_\O S_\gamma\cong {\rm End}_\O (S_\gamma) \quad ;\leqno £3.12.1$$ hence, denoting by $\hat \gamma$ the local point of $P$ over $(S_\gamma)^\circ\otimes_\O {\cal B}_\gamma$ determined by $\gamma\,,$ the image of $\hat\gamma$ in $(S_\gamma)^\circ\otimes_\O S_\gamma$ is contained in the corresponding local point of $P$ and therefore we get a [strict semicovering]{} homomorphism [@P4  5.7] $$\hat{\cal B}_{\hat\gamma}\too \O \cong ((S_\gamma)^\circ\otimes_\O S_\gamma)_{\hat\gamma}$$ of $P\-$algebras; that is to say, any $\hat\imath\in \hat \gamma$ is actually a primitive idempotent in $\hat{\cal B}$ and therefore, for any local pointed group $R_{\hat\varepsilon}$ over $\hat{\cal B}$ contained in $P_{\hat\gamma}\,,$ it also belongs to $\hat\varepsilon\,;$ in particular, denoting by $\hat \delta$ the local point of $Q$ over $(S_\gamma)^\circ\otimes_\O {\cal B}_\gamma$ determined by $\delta\,,$ we clearly have $\hat{\cal B}_{\hat\delta} = \hat\imath\hat{\cal B}\hat\imath\cong \O Q$ (cf. £3.7.1). 
[**£3.13.**]{}As in [@KP  2.11], we consider the $P$-interior algebra $\hat{\cal A}_{\hat\gamma} = \hat\imath\hat{\cal A}\hat\imath\,;$ since $\cal A$ is an $N/H$-graded algebra, $\hat{\cal A}_{\hat\gamma}$ is also an $N/H$-graded algebra. On the other hand, since $\O'/J(\O')\cong k\,,$ we get a group homomorphism $\varpi\,\colon U\to k^*$ and, setting $\Delta_\varpi (U) = \{(\varpi (\xi),\xi^{-1})\}_{\xi\in U}\,,$ we obtain the obvious $k^*\-$group $$\skew3\hat{\bar N}^{^k} =( k^*\times \skew3\hat{\bar N})/\Delta_\varpi (U) \quad ;$$ then, with the notation of Theorem £3.5, we set [@P6  5.7] $$\hat L = {\rm Res}_{\bar\pi} (\skew3\hat{\bar N}^{^k}) \quad ;\leqno £3.13.1$$ thus, $\O_*\hat L^{^\circ}$ becomes a $P$-interior algebra [via]{} the lifting $\hat\tau\,\colon P\to \hat L^{^\circ}$ of the group homomorphism $\tau\,\colon P\to L\,,$ and it has an obvious $L/\tau(Q)$-graded algebra structure. The group homomorphism $\bar \pi$ induces a group isomorphism $L/\tau(Q)\cong N/H$, through which we identify $L/\tau(Q)$ and $N/H\,,$ so that $\O_*\hat L^{^\circ}$ becomes an $N/H$-graded algebra. [**Theorem £3.14.**]{} [**Proof:**]{} Choosing $\hat\imath\in \hat\gamma\,,$ we consider the groups $$M = N_{(\hat\imath\hat{\cal A} \hat\imath)^*} (Q\. \hat\imath)/k^*\. \hat\imath\qq Z = \big((\hat\imath\hat{\cal B}\hat\imath)^Q\big)^*\! \big/k^*\. 
\hat\imath\cong 1 + J\big(Z (\O Q)\big) \quad ;$$ it is clear that $Z$ is a normal Abelian $p'\-$divisible subgroup of $M\,,$ and we set $\bar M = M/Z\,.$ In order to apply Lemma £3.6, let $R$ be a subgroup of $P$ and $y$ an element of $L$ such that $\tau (R)\i \tau (P)^y\,;$ since $\tau (Q)$ is normal in $L\,,$ we actually may assume that $R$ contains $Q\,.$ According to the equivalence of categories £3.5.2, denoting by $\varepsilon$ the unique local point of $R$ on $\cal B$ fulfilling $R_\varepsilon\i P_\gamma$ [@KP  Theorem 6.6], there is $x_y\in N$ such that $$\bar x_y = \bar\pi (y)\quad,\quad R_\varepsilon\i (P_\gamma)^{x_y} \qq \tau ({}^{x_y}v) ={}^y\tau (v) \hbox{\ \ for any $v\in R$} \quad ;\leqno £3.14.1$$ in particular, $x_y$ normalizes $Q_\delta\,.$ By Proposition £3.11, a local pointed group $R_\varepsilon$ on $\cal B$ such that $$Q_\delta\leq R_\varepsilon\leq P_\gamma$$ determines a local pointed group $R_{\tilde \varepsilon}$ on $S$ through the composition $${\cal B}_\gamma\too S_\gamma\hookrightarrow S$$ (see [@KP Proposition 3.15]). Since $S_\gamma$ has a $P$-stable $\O$-basis, $S_\varepsilon$ still has an $R$-stable $\O$-basis and, by [@P4 Theorem 5.3], there are unique local pointed groups $R_{\tilde\varepsilon}$ on $S_\varepsilon$ and $R_{\hat\varepsilon}$ on $\hat{\cal B}$ such that $\hat l(\tilde l\otimes l)=\hat l=(\tilde l\otimes l)\hat l$ for suitable $l\in \varepsilon$, $\tilde l\in \tilde\varepsilon$ and $\hat l\in \hat\varepsilon\,;$
then, we claim that $R_{\hat\varepsilon}\i (P_{\hat\gamma})^{x_y}$ and that $x_y$ stabilizes $Q_{\hat\delta}\,.$ Indeed, since $(R_\varepsilon)^{x_y^{-1}}\i P_\gamma$, we have $(R_{\tilde\varepsilon})^{x_y^{-1}}\i P_{\tilde\gamma}$ and then it follows from [@P4 Proposition 5.6] that we have $(R_{\hat\varepsilon})^{x_y^{-1}}\i P_{\hat\gamma}$ or, equivalently, $R_{\hat\varepsilon}\i (P_{\hat\gamma})^{x_y}\,;$ moreover, since $\delta$ is the unique local point of $Q$ such that $Q_\delta$ is contained in $P_\gamma\,,$ again by [@P4 Proposition 5.6] we can easily conclude that $x_y$ stabilizes $Q_{\hat\delta}\,.$ In particular, since the image of $\hat\imath^{\,x_y}$ in $\hat{\cal B} (R_{\hat\varepsilon})$ is not zero \[14, 2.7\] and since $\hat\imath$ is primitive in $\hat{\cal B}\,,$ $\hat\imath^{\,x_y}$ belongs to $\hat\varepsilon$ and therefore, since $\hat\imath$ also belongs to $\hat\varepsilon\,,$ there is $\hat a_y\in (\hat{\cal B}^R)^*$ such that $\hat\imath^{\,x_y} = \hat\imath^{\,\hat a_y}\,;$ choose $s_y\in S^*$ lifting the action of $x_y$ on $S$ and set $\hat x_y = s_y\otimes x_y\,,$ so that we have $$\hat\imath^{\,x_y} = (\hat x_y)^{-1} \hat\imath\, \hat x_y \quad ;$$ then, since $\hat x_y$ and $\hat a_y$ normalize $Q\,,$ the element $\hat x_y \hat a_y^{-1}$ of $\hat A$ normalizes $Q\. \hat\imath$ and therefore $\hat x_y \hat a_y^{-1}\hat\imath$ determines an element $m_y$ of $M\,.$ We claim that the image $\bar m_y$ of $m_y$ in $\bar M$ only depends on $y\in L$ and that, in the case where $R_\varepsilon =Q_\delta\,,$ this correspondence determines a group homomorphism $$\bar\sigma : L\too \bar M \quad .$$ Indeed, if $x'\in N$ still fulfills conditions £3.14.1 then we necessarily have $x' = x_y\,z$ for some $z\in C_H (R)$ and therefore it suffices to choose the element $\hat a_y\. z$ of $(\hat B^R)^*$ in the definition above. 
On the other hand, if $\hat a'\in (\hat B^R)^*$ still fulfills $\hat\imath^{\,\hat x_y} = \hat\imath^{\,\hat a'}$ then we clearly have $\hat a' = \hat c\,\hat a_y$ for some $\hat c\in (\hat B^R)^*$ centralizing $\hat\imath\,,$ so that $\hat c\,\hat\imath$ belongs to $(\hat\imath\hat B\hat\imath)^Q\,;$ hence, the image of $\hat x_y\hat a_y^{-1}\hat c^{-1}\hat\imath$ in $\bar M$ coincides with $\bar m_y\,.$ Moreover, in the case where $R_\varepsilon =Q_\delta\,,$ for any element $y'$ in $L$ we clearly can choose $\hat x_{yy'} = \hat x_y\, \hat x_{y'}\,;$ then, we have $$\hat\imath^{\,\hat x_{yy'}} = (\hat\imath^{\,\hat a_y})^{\hat x_{y'}} = \hat\imath^{\hat x_{y'}\. (\hat a_y)^{\hat x_{y'}}} = \hat\imath^{\hat a_{y'}(\hat a_y)^{\hat x_{y'}}}$$ and therefore, since $\hat a_{y'}(\hat a_y)^{\hat x_{y'}}$ still belongs to $(\hat B^Q)^*\,,$ we clearly can choose $\hat a_{yy'} = \hat a_{y'}(\hat a_y)^{\hat x_{y'}}\,,$ so that we get $$\hat x_{yy'}\. \hat a_{yy'}^{-1}\hat\imath = \hat x_y\,\hat x_{y'}\. \big(\hat a_{y'} (\hat a_y)^{\hat x_{y'}}\big)^{-1}\hat\imath = (\hat x_y\. \hat a_y^{-1}\hat\imath)(\hat x_{y'}\. \hat a_{y'}^{-1}\hat\imath)$$ which implies that $\bar m_{yy'} = \bar m_y\,\bar m_{y'}\,.$ This proves our claim. 
In particular, for any $u\in P\,,$ we can choose $x_{\tau (u)} = u$ and $\hat a_{\tau (u)} = 1\,;$ moreover, since the action of $P$ on $S_\gamma$ can be lifted to a unique group homomorphism $\varrho \,\colon P\to {\rm Ker}({\rm det}_{S_\gamma})$ [@P4  1.8], we may choose $\hat x_{\tau (u)} = \varrho (u)\otimes u\,;$ then, it is clear that the correspondence $\tau^*$ mapping $\tau (u)$ on the image of $(\varrho (u)\otimes u) \hat\imath$ in $M$ defines a group homomorphism from $\tau (P)\i L$ to $M$ lifting the corresponding restriction of $\bar\sigma\,.$ Finally, we claim that $\tau^*$ fulfills condition £3.6.1; indeed, coming back to the general inclusion $\tau (R)\i \tau (P)^y$ above, we clearly have $\bar\sigma (y) = \bar m_y$ and, according to the right-hand equalities in £3.14.1, for any $v\in R$ we get $$\tau^*\big(\tau (v)^y\big) = v^{x_y}\. \hat\imath = (v\. \hat\imath)^{m_y} =\tau^*\big(\tau (v)\big)^{m_y} \quad .$$ Consequently, it follows from Lemma £3.6 that $\bar\sigma$ can be lifted to a group homomorphism $\sigma\,\colon L\to M$ extending $\tau^*\,;$ moreover, the inverse image of $\sigma (L)$ in $N_{(\hat\imath\hat{\cal A} \hat\imath)^*} (Q\.\hat\imath)$ is a $k^*\-$group which is clearly contained in $$\hat N\.(\O'^*\otimes 1)\subset \O'\otimes_\O \hat\A \quad ;$$ hence, according to definition £3.13.1, $\sigma$ still can be lifted to a $k^*\-$group homomorphism $$\hat\sigma : \hat L^{^\circ}\too N_{(\hat\imath\hat{\cal A} \hat\imath)^*} (Q\. \hat\imath)$$ mapping $\tau (u)$ on $u\. \hat\imath$ for any $u\in P\,;$ hence, we get a $P$-interior and $N/H$-graded algebra homomorphism $$\O_*\hat L^{^\circ}\too \hat{\cal A}_{\hat\gamma} \quad .\leqno £3.14.2$$ We claim that homomorphism £3.14.2 is an isomorphism. Indeed, denoting by $X\i N_G (Q_\delta)$ a set of representatives for $\bar N = N/H\,,$ it is clear that we have $${\cal A} = \bigoplus_{x\in X} x\. 
{\cal B}$$ and therefore we still have $$\hat{\cal A} = S\otimes_\O {\cal A} = \bigoplus_{x\in X} (s_x\otimes x) (S\otimes_\O {\cal B}) = \bigoplus_{x\in X} (s_x\otimes x) \hat{\cal B} \quad ;$$ moreover, choosing as above an element $\hat a_x\in (\hat{\cal B}^Q)^*$ such that $\hat\imath^{\, x} = \hat\imath^{\,\hat a_x}\,,$ it is clear that $(s_x\otimes x) \hat a_x^{-1}\hat{\cal B} =(s_x\otimes x) \hat{\cal B} $ for any $x\in X$ and therefore we get $$\hat{\cal A}_{\hat\gamma} = \hat\imath \hat{\cal A}\hat\imath = \bigoplus_{x\in X} ((s_x\otimes x) \hat a_x^{-1}\hat\imath) (\hat\imath\hat{\cal B}\hat\imath) \quad ;$$ thus, since we know that $\hat\imath\hat{\cal B}\hat\imath\cong \O Q$ and that $L/\tau (Q)\cong \bar N\,,$ denoting by $Y\i L$ a set of representatives for $L/\tau (Q)$ and by $\hat y$ a lifting of $y\in Y$ to $\hat L\,,$ we still get $$\hat{\cal A}_{\hat\gamma} \cong \bigoplus_{y\in Y} \hat\sigma (\hat y)\, \O Q$$ which proves that homomorphism £3.14.2 is an isomorphism. [**Corollary £3.15.**]{} [**Proof:**]{} Since $\hat{\cal A} = S^\circ\otimes_\O {\cal A}$ and we have a $P$-interior algebra embedding $\O\to S_\gamma\otimes_\O S_\gamma^\circ$ [@P4  5.7], we still have the following commutative diagram of [*exterior*]{} $P$-interior algebra embeddings and homomorphisms [@KP 2.10] $$\matrix{&{\cal A}_\gamma&\too&\hskip-20pt S_\gamma\otimes_\O S_\gamma^\circ \otimes_\O {\cal A}_\gamma&\cr &\hskip-20pt\nearrow&\nearrow\hskip-60pt&\uparrow\cr {\cal B}_\gamma&\too& S_\gamma\otimes_\O S_\gamma^\circ \otimes_\O {\cal B}_\gamma& S_\gamma\otimes_\O \hat{\cal A}_{\hat\gamma}&\hskip-20pt\cong&\hskip-20pt S_\gamma\otimes_\O \O_*\hat L^{^\circ}\cr &&&\hskip-40pt\nearrow&\nearrow\hskip-20pt&\cr &&S_\gamma\otimes_\O \hat{\cal B}_{\hat\gamma}\hskip-40pt&\hskip-20pt\cong&\hskip-20pt S_\gamma\otimes_\O \O Q\cr} \quad ;\leqno £3.15.1$$ moreover, since the unity element is primitive in $(S_\gamma)^P$ and the kernel of the canonical homomorphism $$(S_\gamma\otimes_\O \O 
Q)^P\too (S_\gamma)^P$$ is contained in the radical, the unity element is primitive in $(S_\gamma\otimes_\O \O Q)^P$ too; since $P$ has a unique local point over $ S_\gamma\otimes_\O S_\gamma^\circ \otimes_\O {\cal A}_\gamma$ [@P4 Proposition 5.6], from diagram £3.15.1 we get the announced isomorphism. [**£3.16.**]{}Let us take advantage of this revision to correct the erroneous proof of [@KP 1.15.1]. Indeed, as proved in Proposition £3.11 above, we have a [strict covering]{} of $Q\-$interior $k\-$algebras $$k\otimes_\O {\cal B}_\delta\too k\otimes_\O S_\delta \leqno £3.16.1$$ but [*not*]{} a [strict covering]{} $k\otimes_\O {\cal B}\too k\otimes_\O S$ of $H\-$interior $k\-$algebras as stated in [@KP 1.15]; however, it follows from [@P6  2.14.4 and Lemma 9.12] that the isomorphism ${\cal B}_\delta (Q_\delta)\cong S_\delta (Q)$ induced by homomorphism £3.16.1 [@P4 4.14] forces the [embedding]{} ${\cal B}(Q_\delta)\to S (Q_{\bar\delta})$ where $\bar\delta$ denotes the local point of $Q$ over $S$ determined by $\delta\,;$ hence, we still have the isomorphism [@KP 1.15.5] which allows us to complete the argument. [**4. Extensions of Glauberman correspondents of blocks**]{} In this section, we continue to use the notation in Paragraph 3.1, namely $\O$ is a complete discrete valuation ring with an algebraically closed residue field $k$ of characteristic $p\,;$ moreover we assume that its quotient field $\K$ has characteristic 0 and is big enough for all finite groups that we will consider; this assumption is kept throughout the rest of this paper. [**4.1.**]{}Let $A$ be a cyclic group of order $q$, where $q$ is a power of a prime. Assume that $G$ is an $A$-group, that $H$ is an $A$-stable normal subgroup of $G$ and that $b$ is $A$-stable. Note that, in this section, $b$ is not necessarily nilpotent. 
Assume that $A$ and $G$ have coprime orders; by [@P5 Theorem 1.2], $G$ acts transitively on the set of all defect groups of $G_\alpha$ and, obviously, $A$ also acts on this set; hence, since $A$ and $G$ have coprime orders, by [@I Lemma 13.8 and Corollary 13.9] $A$ stabilizes some defect group of $G_\alpha$ and $G^A$ acts transitively on the set of $A$-stable ones. Similarly, $A$ stabilizes some defect group of $N_\beta$ and $N^A$ acts transitively on the set of $A$-stable ones. Thus, we may assume that $A$ stabilizes $P\i N$ and actually we assume that $A$ centralizes $P\,;$ recall that $Q=P\cap H$.
[**4.3.**]{}Let ${\frak B}$ and ${\it w}({\frak B})$ be the respective sets of $A$-stable blocks of $G$ covering $b$ and of $G^A$ covering ${\it w}(b)\,.$ Take $e\in {\frak B}\,;$ since $P$ is a defect group of $G_\alpha$ and $c$ fulfills $ec=e$, $e$ has a defect group contained in $P$ and therefore, since $A$ centralizes $P\,,$ $e$ has a defect group centralized by $A\,;$ hence, by [@W Proposition 1 and Theorem 1], ${\it w}(e)$ makes sense and $A$ stabilizes all the characters in ${\rm Irr}_\K(G, e)\,;$ that is to say, $A$ stabilizes all the characters of blocks in ${\frak B}$. Moreover, by [@I Theorem 13.29], ${\it w}(e)$ belongs to ${\it w}({\frak B})$. [**Proposition 4.4.**]{} [*Proof.*]{}Assume that $g\in {\frak B}\,,$ that $g\neq e$ and that ${\it w}(e)={\it w}(g)\,;$ then there exist $\chi\in {\rm Irr}_\K(G, e)$ and $\phi\in {\rm Irr}_\K(G, g)$ such that $\pi(G, A)(\chi)=\pi(G, A)(\phi)$; since distinct blocks have disjoint sets of irreducible characters, we have $\chi\neq\phi$ and this contradicts the injectivity of the Glauberman correspondence. Therefore the map ${\it w}$ is injective. Take $h\in {\it w}({\frak B})\,;$ then $h$ covers ${\it w}(b)$ and so there exist $\zeta\in {\rm Irr}_\K(G^A, h)$ and $\eta\in {\rm Irr}_\K(H^A, {\it w}(b))$ such that $\eta$ is a constituent of ${\rm Res}^{G^A}_{H^A}(\zeta)$. Set $$\theta=(\pi(G, A))^{-1}(\zeta)\qq \vartheta=(\pi(H, A))^{-1}(\eta) \quad ;$$ by [@I Theorem 13.29], $\vartheta$ is a constituent of ${\rm Res}^G_H(\theta)\,;$ let $l$ be the block of $G$ acting as the identity map on a $\K G$-module affording $\theta\,;$ then $l$ covers $b$ and we have ${\it w}(l)=h$. Finally, we have $$\begin{aligned} \pi(G,\,A)({\rm Irr}_\K(G, c)^A) &=& \pi(G,\,A)(\cup_{e\in {\frak B}}{\rm Irr}_\K(G, e)) \\ &=& \cup_{{\it w}(e)\in {\it w}({\frak B})}{\rm Irr}_\K(G^A, {\it w}(e)) \\ &=& {\rm Irr}_\K(G^A, {\it w}(c))\quad .\end{aligned}$$ [**Proposition 4.5.**]{} [*Proof.*]{}It suffices to show that $P$ is a defect group of $(N^A)_{{\it w}(\beta)}$ (cf. £3.1); thus, without loss of generality, we can assume that $G=N$.
Obviously, $A$ stabilizes $P\. H$ and $b$ is the unique block of $P\. H$ covering the block $b$ of $H\,;$ since $P$ is a defect group of $G_\alpha$ and $N_\beta$, $P$ is maximal in $N$ such that ${\rm Br}_P^{\O H}(b)\neq 0\,;$ thus $P$ is maximal in $P\. H$ such that ${\rm Br}_P^{\O (P\. H)}(b)\neq 0\,;$ therefore $P$ is a defect group of $b$ as a block of $P\. H$. Since $A$ centralizes $P$, the Glauberman correspondent $b'$ of $b$ as a block of $P\. H$ makes sense; moreover, by Proposition 4.4, $b'$ covers ${\it w}(b)$. Since ${\it w}(b)$ is the unique block of $P\. H^A$ covering the block ${\it w}(b)$ of $H^A$, $b'={\it w}(b)$, and then, by [@W Theorem 1], $P$ is a defect group of ${\it w}(b)$ as a block of $P\. H^A$; in particular, ${\rm Br}_P^{\O (H^A)}({\it w}(b))\neq 0$. Since $P$ is a defect group of $G_\alpha$, by [@KP Theorem 5.3] the image of $P$ in the quotient group $N/H$ is a Sylow $p$-subgroup of $N/H\,;$ but, the inclusion map $N^A\hookrightarrow N$ induces a group isomorphism $N^A/H^A\cong (N/H)^A\,;$ hence, the image of $P$ in $N^A/H^A$ is a Sylow $p$-subgroup of $N^A/H^A\,;$ then, by [@KP Theorem 5.3] again, $P$ is a defect group of $(N^A)_{{\it w}(\beta)}$. [**4.6.**]{}We may assume that $A$ stabilizes $P_\gamma\,;$ then $A$ stabilizes $Q_\delta$ too (see [@KP Proposition 5.5]). Let $R$ be a subgroup such that $Q\leq R\leq P$ and $R_\varepsilon$ a local pointed group on $\O H$ contained in $P_\gamma$. Since $A$ stabilizes $P_\gamma$ and centralizes $P$, $A$ centralizes $R$ and then, by [@KP Proposition 5.5], it stabilizes $R_\varepsilon$. Since ${\rm Br}_R^{\O H}(\varepsilon)$ is a point of $kC_H(R)$, there is a unique block $b_\varepsilon$ of $\O C_H(R)$ such that ${\rm Br}_R^{\O H}(b_\varepsilon\varepsilon)={\rm Br}_R^{\O H}(\varepsilon)$ and, by [@Z Lemma 2.3], $C_Q(R)$ is a defect group of $b_\varepsilon$; in particular, $b_\varepsilon$ is nilpotent.
Obviously, $A$ centralizes $C_Q(R)$ and, since $A$ stabilizes $R_\varepsilon$, it stabilizes $b_\varepsilon\,;$ hence ${\it w}(b_\varepsilon)$ makes sense; moreover, ${\it w}(b_\varepsilon)$ is nilpotent and, since we have $$C_{H^A}(R)=C_{C_H(R)}(A)\quad ,$$ there is a unique local point ${\it w}(\varepsilon)$ of $R$ on $\O (H^A)$ such that ${\rm Br}_R^{\O (H^A)}({\it w}(\varepsilon){\it w}(b_\varepsilon))={\rm Br}_R^{\O (H^A)}({\it w}(\varepsilon))\quad .$ [**Proposition 4.7.**]{} [*Proof.*]{}By [@P7 Proposition 2.8], the inclusion map $\O H\hookrightarrow \O (P\.H)$ is actually a strict semicovering $P\. H$-algebra homomorphism; hence, $\gamma$ determines a unique local point $\gamma'$ of $P$ on $\O (P\. H)$ such that $\gamma \subset \gamma'$. Obviously, $b$ is a block of $P\. H$. Since $\beta$ is also a point of $P\. H$ on $\O H$ and $P_\gamma$ is also a defect pointed group of the pointed group $(P\. H)_\beta$ on $\O H$, by [@KP Corollary 6.3] $P_{\gamma'}$ is a defect pointed group of the pointed group $(P\. H)_\beta$ on $\O (P\. H)$. Let $b_{\gamma'}$ be the block of $C_{P\. H}(P)$ such that $${\rm Br}_P^{\O (P\. H)}(b_{\gamma'}\gamma')={\rm Br}_P^{\O (P\. H)}(\gamma')\quad ;$$ then $Z(P)$ is a defect group of $b_{\gamma'}$ and therefore ${\it w}(b_{\gamma'})$ makes sense. Obviously, $b_{\gamma'}$ covers $b_\gamma$ and thus ${\it w}(b_{\gamma'})$ covers ${\it w}(b_{\gamma})$ (see Proposition 4.4); but, since ${\it w}(b)$ is also the Glauberman correspondent of the block $b$ of $P\. H$ (see the first paragraph of the proof of Proposition 4.5), by [@W Proposition 4] we have ${\rm Br}_P^{\O (P\. H^A)}({\it w}(b){\it w}(b_{\gamma'}))={\rm Br}_P^{\O (P\. H^A)}({\it w}(b_{\gamma'}))$; this forces ${\rm Br}_P^{\O (H^A)}({\it w}(b){\it w}(b_{\gamma}))={\rm Br}_P^{\O (H^A)}({\it w}(b_{\gamma}))$, which implies that $$P_{{\it w}(\gamma)}\leq (P\.
H^A)_{{\it w}(\beta)}\leq (G^A)_{{\it w}(\alpha)}\quad ;$$ hence, by Proposition 4.5, $P_{{\it w}(\gamma)}$ is a defect pointed group of $(G^A)_{{\it w}(\alpha)}$. The statement that $Q_{{\it w}(\delta)}$ is a defect pointed group of $(H^A)_{\{{\it w}(b)\}}$ is clear. [**Lemma 4.8.**]{} [*Proof.*]{}Obviously, $\B$ is a $p$-permutation $P\. H$-algebra by $P\. H$-conjugation (see [@BP1 Def. 1.1]) and $(T, {\rm Br}_T^{\B}(b_{\eta}))$ and $(R, {\rm Br}_R^{\B}(b_{\varepsilon}))$ are $(b, P\. H)\-$Brauer pairs (see [@BP1 Def. 1.6]). Moreover $T$ stabilizes $b_\varepsilon$, and $\eta$ and $\varepsilon$ are the unique local points of $T$ and $R$ on $\B$ (see [@KP Proposition 5.5]) such that ${\rm Br}_T^{\B}(\eta){\rm Br}_T^{\B}(b_{\eta})={\rm Br}_T^{\B}(\eta)$ and ${\rm Br}_R^{\B}(\varepsilon){\rm Br}_R^{\B}(b_{\varepsilon})={\rm Br}_R^{\B}(\varepsilon)\quad .$ Assume that $R_\varepsilon\leq T_\eta\,;$ then, there are $h\in \eta$ and $l\in \varepsilon$ such that $hl=l=lh\,;$ thus, we have $${\rm Br}_R^{\B}(hl)={\rm Br}_R^{\B}(l)\qq {\rm Br}_R^{\B}(h){\rm Br}_R^{\B}(b_{\varepsilon})\neq 0\quad .$$ Then, it follows from [@BP1 Def. 1.7] that $$(R, {\rm Br}_R^{\B}(b_{\varepsilon}))\subset (T,{\rm Br}_T^{\B}(b_{\eta}))$$ and from [@BP1 Theorem 1.8] that we have ${\rm Br}_T^{\O C_{H}(R)}(b_{\eta} b_{\varepsilon})={\rm Br}_T^{\O C_{H}(R)}(b_{\eta})$. Conversely, if we have $${\rm Br}_T^{\O C_{H}(R)}(b_{\eta} b_{\varepsilon})={\rm Br}_T^{\O C_{H}(R)}(b_{\eta})$$ then, by [@BP1 Theorem 1.8] we still have ${\rm Br}_R^{\B}(b_{\varepsilon} h)={\rm Br}_R^{\B}(h)$ for any $h\in \eta$; hence, by the lifting theorem for idempotents, we get $R_{\varepsilon}\leq T_\eta$. Let $\cal R$ be a Dedekind domain of characteristic 0, let $\pi$ be a finite set of prime numbers such that $l\cal R\neq \cal R$ for all $l\in \pi\,,$ and let $X$ and $Y$ be finite groups with $X$ acting on $Y$.
We consider the group algebra ${\cal R} Y$ and set $$Z_{\rm id}({\cal R} Y)=\oplus {\cal R} c$$ where $c$ runs over all central primitive idempotents of ${\cal R}Y$. Obviously, $X$ acts on $Z_{\rm id}({\cal R} Y)$ and, in the case that $X$ is a solvable $\pi$-group, Lluis Puig exhibits an ${\cal R}$-algebra homomorphism ${\mathcal G l}^Y_X: Z_{\rm id}({\cal R} Y)\rightarrow Z_{\rm id}({\cal R} Y^X)$ (see [@P7 Theorem 4.6]), which unifies the usual Brauer homomorphism and the Glauberman correspondence of characters, and is called the [*Brauer-Glauberman correspondence*]{}. [**Proposition 4.9.**]{} [*Proof.*]{}By induction we can assume that $R$ is normal and maximal in $T$; in particular, the quotient $T/R$ is cyclic. In this case, it follows from Lemma 4.8 that the inclusion $R_{{\it w}(\varepsilon)}\leq T_{{\it w}(\eta)}$ is equivalent to $${\rm Br}_T^{\O C_{H^A}(R)}({\it w}(b_\varepsilon){\it w}(b_\eta)) ={\rm Br}_T^{\O C_{H^A}(R)}({\it w}(b_\eta)) \quad .\leqno £4.9.1$$ Let ${\mathbb Z}$ be the ring of all rational integers and let $S$ be the complement of $p{\mathbb Z}\cup q\mathbb Z$ in $\mathbb Z\,;$ then $S$ is a multiplicatively closed set in $\mathbb Z$. We take the localization $S^{-1}\mathbb Z$ of $\mathbb Z$ with respect to $S$ and regard it as a subring of $\K\,;$ since we assume that $\cal K$ is big enough for all finite groups we consider, we can assume that $\cal K$ contains a primitive $|H|$-th root of unity $\omega$ and we set ${\cal R}=(S^{-1}{\mathbb Z})[\omega]\quad .$ Then $\cal R$ is a Dedekind domain (see [@AM Example 2 on page 96 and Exercise 1 on page 99]) and given a prime $l$, we have $l\cal R\neq \cal R$ if and only if $l=p$ or $l=q$. We consider the group algebra ${\cal R}C_H(R)$ and the obvious action of $(T\times A)/R\cong (T/R)\times A$ on it.
Since $\cal R$ contains a primitive $|H|$-th root of unity $\omega$, the blocks $b_\varepsilon$, $b_\eta$, ${\it w}(b_\varepsilon)$ and ${\it w}(b_\eta)$ respectively belong to $$Z_{\rm id}({\cal R}C_H(R))\;\;,\;\; Z_{\rm id}({\cal R}C_H(T))\;\;,\;\; Z_{\rm id}({\cal R}C_{H^A}(R))\qq Z_{\rm id}({\cal R}C_{H^A}(T))$$ (see [@F Chapter IV, Lemma 7.2]); then, by [@P7 Corollary 5.9], we have $${\mathcal G l}^{C_{H}(R)}_{A}(b_\varepsilon)={\it w}(b_\varepsilon)\qq {\mathcal G l}^{C_{H}(T)}_{A}(b_\eta)={\it w}(b_\eta) \quad .$$ If $R_\varepsilon\leq T_\eta$, by Lemma 4.8 we have the equality $${\rm Br}_{T}^{\O C_H(R)}(b_\varepsilon b_\eta)={\rm Br}_{T}^{\O C_H(R)}(b_\eta)$$ which is equivalent to ${\mathcal G l}^{C_H(R)}_{T/R}(b_\varepsilon)b_\eta=b_\eta$ (see [@P7 4.6.1 and the proof of Corollary 3.6]). Then by [@P7 4.6.2], we have $$\begin{aligned} {\it w}(b_\eta) &=& {\mathcal G l}^{C_H(T)}_{A}(b_\eta)= {\mathcal G l}^{C_H(T)}_{A}({\mathcal G l}^{C_H(R)}_{T/R} (b_\varepsilon)b_\eta) \\&=& {\mathcal G l}^{C_H(R)}_{(T/R)\times A}(b_\varepsilon) {\mathcal G l}^{C_H(T)}_{A}(b_\eta) \\ &=&{\mathcal G l}^{C_{H^A}(R)}_{T/R} ({\mathcal G l}^{C_{H}(R)}_{A}(b_\varepsilon)) {\mathcal G l}^{C_H(T)}_{A}(b_\eta)\\ &=&{\mathcal G l}^{C_{H^A}(R)}_{T/R} ({\it w}(b_\varepsilon)) {\it w}(b_\eta)\quad , \end{aligned}$$ which is equivalent again to equality £4.9.1 above (see [@P7 4.6.1 and the proof of Corollary 3.6]) and therefore implies $R_{{\it w}(\varepsilon)}\leq T_{{\it w}(\eta)}$. The proof that $R_{{\it w}(\varepsilon)}\leq T_{{\it w}(\eta)}$ implies $R_\varepsilon\leq T_\eta$ is similar. [**4.10.**]{}The assumptions and consequences above are very scattered; we collect them in this paragraph, so that readers can easily find them and we can conveniently quote them later.
Let $A$ be a cyclic group of order $q$, where $q$ is a prime number; we assume that $G$ is an $A$-group, that $H$ is an $A$-stable normal subgroup of $G\,,$ that $b$ is $A$-stable, that $A$ centralizes $P$ and stabilizes $P_\gamma\,,$ and that $A$ and $G$ have coprime orders. Without loss of generality, we may assume that $P\leq N$. Then, $A$ centralizes $Q$ and stabilizes $Q_\delta\,,$ so that the Glauberman correspondent ${\it w}(b)$ of the block $b$ makes sense; moreover, the block ${\it w}(b)$ determines two pointed groups $(N^A)_{{\it w}(\beta)}$ and $(G^A)_{{\it w}(\alpha)}$ such that $(N^A)_{{\it w}(\beta)}\leq (G^A)_{{\it w}(\alpha)}$ (see Paragraph 4.2), and the local pointed groups $P_\gamma$ and $Q_\delta$ determine respective defect pointed groups $P_{{\it w}(\gamma)}$ and $Q_{{\it w}(\delta)}$ of $(G^A)_{{\it w}(\alpha)}$ and $(H^A)_{{\it w}(\beta)}$ (see Paragraph 4.6 and Proposition 4.7); actually, by Proposition 4.9, we have $Q_{{\it w}(\delta)}\leq P_{{\it w}(\gamma)}$. Take ${\it w}(i)\in {\it w}(\gamma)$ and ${\it w}(j)\in {\it w}(\delta)\,,$ and set $(\O G^A)_{{\it w}(\gamma)}={\it w}(i)(\O G^A){\it w}(i)$, $(\O H^A)_{{\it w}(\gamma)}={\it w}(i)(\O H^A){\it w}(i)$\ and $(\O H^A)_{{\it w}(\delta)}={\it w}(j)(\O H^A){\it w}(j)\quad ;$ then, $(\O G^A)_{{\it w}(\gamma)}$ is a $P$-interior and $(N^A/H^A)$-graded algebra; moreover, the $Q$-interior algebra $(\O H^A)_{{\it w}(\delta)}$ with the group homomorphism $$Q\too (\O H^A)_{{\it w}(\delta)}^*\quad , \quad u\mapsto u{\it w}(j)$$ is a source algebra of the block algebra $\O H^A{\it w}(b)$ (see [@P5]). [**5. A Lemma**]{} From now on, we use the notation and assumptions of Paragraphs 3.1, 3.2 and 4.10; in particular, we assume that the block $b$ of $H$ is nilpotent. Obviously, $N_G(Q_\delta)$ acts on ${\rm Irr}_\K(H, b)$ and ${\rm Irr}_\K(Q)$ via the corresponding conjugation action.
Since $b$ is nilpotent, there is an explicit bijection between ${\rm Irr}_\K(H, b)$ and ${\rm Irr}_\K(Q)$ (see [@T Theorem 52.8]); in this section, we will show that this bijection is compatible with the $N_G(Q_\delta)$-actions; our main purpose is to obtain Lemma 5.6 below as a consequence of this compatibility. [**5.1.**]{}For any $x\in N_G(Q_\delta)$, $xjx^{-1}$ belongs to $\delta$ and thus there is some invertible element $a_x\in \B^Q$ such that $xjx^{-1}=a_x ja_x^{-1}\,;$ let us denote by $X$ the set of all products $(a_x^{-1}x)j$, where $x$ runs over $N_G(Q_\delta)$ and $a_x$ over the invertible elements of $\B^Q$ fulfilling $xjx^{-1}=a_x ja_x^{-1}$. Set $E_G(Q_\delta)= N_G(Q_\delta)/QC_H(Q)\quad ;$ then, the following equality $$\Big((a_x^{-1}x)j\Big)\. \Big((a_y^{-1}y)j\Big)= \Big((a_x^{-1}xa_y^{-1}x^{-1})xy\Big)j$$ shows that $X$ is a group with respect to the multiplication and it is easily checked that $Q\. (\B_\delta^Q)^*$ is normal in $X$ and that the map $$E_G(Q_\delta)\too X/Q(\B_\delta^Q)^*\leqno 5.1.1$$ sending the coset of $x\in N_G(Q_\delta)$ in $N_G(Q_\delta)/QC_H(Q)$ to the coset of $(a_x^{-1}x)j$ in $X/Q(\B_\delta^Q)^*$ is a group isomorphism. [**5.2.**]{}We denote by $Y$ the set of all elements $a_x^{-1}x$, where $x$ runs over $N_G(Q_\delta)$ and $a_x$ over the invertible elements of $\B^Q$ such that $a_x^{-1}x$ commutes with $j$. As in 5.1, it is easily checked that $Y$ is a group with respect to the multiplication $$(a_x^{-1}x)\. (a_y^{-1}y)= (a_x^{-1}xa_y^{-1}x^{-1})xy \quad,$$ that $Y$ normalizes $Q\. (\B^Q)^*$ and that the map $$E_G(Q_\delta)\too \Big(Y\. Q\. (\B^Q)^*\Big)\Big/\Big(Q\. (\B^Q)^*\Big) \leqno 5.2.1$$ sending the coset of $x\in N_G(Q_\delta)$ to the coset of $a_x^{-1}x$ in the right-hand quotient is a group isomorphism. [**5.3.**]{}Let $I$ and $J$ be the sets of isomorphism classes of all simple $\K\otimes_\O \B\-$ and $\K\otimes_\O \B_\delta\-$modules respectively.
Clearly, $Y$ acts on $I\,;$ but, since $Y\cap (\B^Q)^*$ acts trivially on $I$, the action of $Y$ on $I$ induces an action of $E_G(Q_\delta)$ on $I$ through isomorphism 5.2.1; actually, this action coincides with the action of $E_G(Q_\delta)$ on ${\rm Irr}_\K(H, b)$ induced by the $N_G(Q_\delta)$-conjugation. Similarly, $X$ acts on $J$ and this action of $X$ on $J$ induces an action of $E_G(Q_\delta)$ on $J$ through isomorphism 5.1.1. But, by [@P5 Corollary 3.5], the functor $M\mapsto j\. M$ is an equivalence between the categories of finitely generated $\B$- and $\B_\delta$-modules, which induces a bijection between the sets $I$ and $J$. Then, since the elements of $Y$ commute with $j$ and the map $$Y\too X\quad ,\quad y\mapsto yj$$ is a group homomorphism, it is easily checked that this bijection is compatible with the actions of $E_G(Q_\delta)$ on $I$ and $J$. [**5.4.**]{}Recall that (cf. £3.7) $$\B_\delta\cong T\otimes_\O \O Q \leqno 5.4.1$$ where $T = {\rm End}_\O (W)$ for an endo-permutation $\O Q$-module $W$ such that the determinant of the image of any element of $Q$ in $T$ is one; in this case, the $\O Q$-module $W$ with these properties is unique up to isomorphism. Then, for any simple $\K\otimes_\O \B_\delta$-module $V$ there is a $\K Q$-module $V_W$, unique up to isomorphism, such that $$V\cong W\otimes_\O V_W$$ as $\K\otimes_\O \B_\delta$-modules; moreover the correspondence $$V\mapsto V_W\leqno 5.4.2$$ determines a bijection between $J$ and the set of isomorphism classes of all simple $\K Q$-modules.
Now, the composition of this bijection with the bijection between isomorphism classes in 5.3 is a bijection from $I$ to the set of isomorphism classes of all simple $\K Q$-modules; translating this bijection to characters, we obtain a bijection $${\rm Irr}_\K(H, b)\too {\rm Irr}_\K (Q)\quad ,\quad \chi_\lambda\mapsto \lambda \quad ;\leqno 5.4.3$$ let us denote by $\chi\in {\rm Irr}_\K(H, b)$ the inverse image of the trivial character of $Q\,.$ [**5.5.**]{}Moreover, the $N_G(Q_\delta)$-conjugation induces an action of $E_G(Q_\delta)$ on the set of isomorphism classes of all simple $\K Q$-modules and we claim that, for any simple $\K\otimes_\O \B_\delta$-module $V$ and any $\bar x\in E_G(Q_\delta)\,,$ we have a $\K Q$-module isomorphism $$^{\bar x}(V_W)\cong (^{\bar x}V)_W \quad ;\leqno 5.5.1$$ in particular, bijection 5.4.2 is compatible with the actions of $E_G(Q_\delta)$ on $J$ and on the set of isomorphism classes of simple $\K Q$-modules. Indeed, let $x$ be a lifting of $\bar x$ in $N_G(Q_\delta)$ and denote by $\varphi_x$ the isomorphism $$Q\cong Q\quad ,\quad u\mapsto xux^{-1} \quad ;$$ take a lifting $y=a_x^{-1}xj$ of $\bar x$ in $X$ through isomorphism 5.1.1; since the conjugation by $y$ stabilizes $\B_\delta$, the map $$f_y: \B_\delta\cong {\rm Res}_{\varphi_x}(\B_\delta)\quad ,\quad a\mapsto yay^{-1}$$ is a $Q$-interior algebra isomorphism; then, by [@P4 Corollary 6.9], we can modify $y$ by a suitable element of $(\B_\delta^Q)^*$ in such a way that $f_y$ stabilizes $T\,;$ in this case, the restriction of $f_y$ to $T$ has to be inner and thus we have $W\cong {\rm Res}_{f_y}(W)$ as $T\-$modules. Moreover, since the action of $Q$ on $T$ can be uniquely lifted to a $Q$-interior algebra structure such that the determinant of the image of any $u\in Q$ in $T$ is one, $f_y$ also stabilizes the image of $Q$ in $T\,;$ more precisely, $f_y$ maps the image of $u\in Q$ onto the image of $\varphi_x (u)\,.$ The claim follows.
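For completeness, we note the elementary computation underlying the multiplication rule displayed in 5.1; it rests on the fact, implicit there, that each element $a_x^{-1}x$ commutes with $j$ (this is only a restatement, in the notation of 5.1, of computations already used above):

```latex
% From x j x^{-1} = a_x j a_x^{-1} we get
a_x^{-1}x\,j \;=\; a_x^{-1}(xjx^{-1})\,x \;=\; a_x^{-1}(a_x j a_x^{-1})\,x
             \;=\; j\,a_x^{-1}x \quad ;
% hence, since j^2 = j and a_y^{-1}y commutes with j,
\Big((a_x^{-1}x)j\Big)\cdot\Big((a_y^{-1}y)j\Big)
   \;=\; a_x^{-1}x\,a_y^{-1}y\,j^2
   \;=\; \Big((a_x^{-1}x a_y^{-1}x^{-1})\,xy\Big)j \quad .
```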
[**Lemma 5.6.**]{} [*Proof.*]{}It follows from £5.3 and £5.5 that the bijection £5.4.3 is compatible with the actions of $E_G(Q_\delta)$ on ${\rm Irr}_\K(H, b)$ and ${\rm Irr}_\K (Q)\,;$ hence, $\chi$ is $E_G(Q_\delta)$-stable and thus $N_G(Q_\delta)$-stable. Since $\phi$ is the unique irreducible constituent of ${\rm Res}^H_{H^A}(\chi)$ occurring with a multiplicity coprime to $q$ and $N_{G^A}(Q_{{\it w}(\delta)})$ is contained in $N_G(Q_\delta)$, $\phi$ has to be $N_{G^A}(Q_{{\it w}(\delta)})$-stable. By the very definition of the bijection 5.4.3, the restriction of $\chi$ to $H_{p'}$ is the unique irreducible Brauer character of $b$. Since the perfect isometry $R_H^b$ between ${\cal R}_\K (H, b)$ and ${\cal R}_\K (H^A, {\it w}(b))$ maps any $\psi\in {\rm Irr}_\K(H, b)$ onto $\pm\pi(H, A)(\psi)$ and the blocks $b$ and ${\it w}(b)$ are nilpotent, by [@B Theorem 4.11] the decomposition matrices of $b$ and ${\it w}(b)$ are the same if the characters indexing their columns correspond to each other by the Glauberman correspondence; hence, the restriction of $\phi$ to $H^A_{p'}$ is the unique irreducible Brauer character of ${\it w}(b)$. [**6. A $k^*$-group isomorphism $(\skew3\hat {\bar N}^{^k})^A\cong \,\widehat{\overline{\!N^A}}^{k}$**]{} [**6.1.**]{}Let $xH$ be an $A$-stable coset in $\bar N$. We consider the action of $H\rtimes A$ on $xH$ defined by the obvious action of $A$ on $xH$ and the right multiplication of $H$ on $xH\,;$ since $A$ and $G$ have coprime orders, it follows from [@I Lemma 13.8 and Corollary 13.9] that $xH\cap N^A$ is non-empty and that $H^A$ acts transitively on it; consequently, we have $\bar N^A= (H\. N^A)/H$ and the inclusion $N^A\subset N$ induces a group isomorphism $$\overline{ \!N^A}\cong \bar N^A= (H\. N^A)/H \quad .\leqno 6.1.1$$ Note that if $G=H \. G^A$ then we have $\bar N^A= \bar N\,.$ [**6.2.**]{}It follows from Lemma £5.6 that $N = H\.
N_G (Q_\delta)$ stabilizes $\chi$ and actually the central extension $\skew3\hat{\bar N}$ of $\bar N$ by $U$ in £3.9 above is nothing but the so-called [*Clifford extension*]{} of $\bar N$ over $\chi\,;$ moreover, since $A$ and $U$ also have coprime orders, we can prove as above that $\skew3\hat{\bar N}^A$ is a central extension of $\bar N^A$ by $U\,,$ which is the [*Clifford extension*]{} of $\bar N^A$ over $\chi\,.$ Since the Glauberman correspondent ${\it w}(b)$ is nilpotent, we can repeat all the above constructions for $G^A$, $H^A\,,$ ${\it w}(b)$ and $N^A\,;$ then, denoting by $U_A$ the group of $\vert H^A\vert\-$th roots of unity, we obtain a central extension $\,\widehat{\overline{\!N^A}}$ of $\,\overline{ \!N^A} =\bar N^A$ by $U_A\,,$ which is the [*Clifford extension*]{} of $\bar N^A$ over $\phi\,;$ moreover, note that $U_A$ is contained in $U\,.$ [**6.3.**]{}At this point, it follows from [@P8 Corollary 4.16] that there is an extension group isomorphism $$\hat N^A\cong (U\times \,\widehat{\!N^A})/\Delta_{-1} (U_A) \leqno £6.3.1$$ where we are setting $\Delta_{-1}(U_A) = \{(\xi^{-1},\xi)\}_{\xi\in U_A}\,;$ moreover, according to [@P7 Remark 4.17], this isomorphism is defined by a sequence of Brauer homomorphisms (in different characteristics) and, in particular, it is quite clear that it maps any $y\in H\subset \hat N^A$ onto the class of $(1,y)$ in the right-hand member, so that isomorphism £6.3.1 induces a new extension group isomorphism $$\skew3\hat{\bar N}^A\cong (U\times \,\widehat{\overline{\!N^A}})/\Delta_{-1} (U_A) \quad .$$ Consequently, denoting by $\varrho_A\,\colon U_A\to k^*$ the restriction of $\varrho\,,$ we get a $k^*\-$group isomorphism $$\begin{aligned} (\skew3\hat {\bar N}^{^k})^A &=& \Big((k^*\times \skew3\hat {\bar N})/\Delta_\varrho (U)\Big)^A \cong (k^*\times \skew3\hat {\bar N}^A)/\Delta_\varrho (U) \\ &\cong &(k^*\times \,\widehat{\overline{\!N^A}})/\Delta_{\varrho_A} (U_A) \,= \,\widehat{\overline{\!N^A}}^k\end{aligned}$$ as
announced. [**Remark 6.4.**]{}Note that if $G=H\. G^A$ then we have $\skew3\hat {\bar N}^A= \skew3\hat {\bar N}$. [**7. Proofs of Theorems 1.5 and 1.6**]{} [**7.1.**]{}The first statement in Theorem £1.5 follows from Propositions £4.4 and £4.5. From now on, we assume that the block $b$ of $H$ is nilpotent; thus, the Glauberman correspondent ${\it w}(b)$ is also nilpotent and $(\O G^A){\it w}(c)$ is an extension of the nilpotent block algebra $(\O H^A){\it w}(b)$. This section will be devoted to comparing the extensions $\O G c$ and $\O G^A{\it w}(c)$ of the nilpotent block algebras $\O H b$ and $\O H^A{\it w}(b)$. Applying Theorem 3.5 to the finite groups $G^A$ and $H^A$ and the nilpotent block ${\it w}(b)$ of $H^A$, we get a finite group $L^A$ and respective injective and surjective group homomorphisms $$\tau^A: P\too L^A\qq \bar\pi^A: L^A\too \,\overline{\!N^A}$$ such that $\bar\pi^A(\tau^A(u))=\bar u$ for any $u\in P$, that ${\rm Ker}(\bar\pi^A)=\tau^A(Q)$ and that they induce an equivalence of categories $${\cal E}_{({\it w}(b),\, H^A,\, G^A)}\cong {\cal E}_{(1,\, \tau^A(Q),\, L^A)} \quad .$$ Similarly, we set $\widehat{ L^A}= {\rm res}_{\bar \pi^A}(\,\widehat{\overline{\!N^A}}^k)$ and denote by $\widehat{ \tau^A}\,\colon P\to \widehat{ L^A}$ the lifting of $\tau^A\,;$ then, by Corollary 3.15, there is a $P$-interior full matrix algebra ${\it w}(S_\gamma)$ such that we have an isomorphism $$(\O (G^A))_{{\it w}(\gamma)}\cong {\it w}(S_\gamma)\otimes_{\O } \O_*\widehat{L^A}^\circ\quad \leqno{7.1.1}$$ of both $P$-interior and $N^A/H^A$-graded algebras.
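In other words, the properties of $\tau^A$ and $\bar\pi^A$ just listed amount to the exactness of the sequence below; this is merely a restatement of the data provided by Theorem 3.5 in this setting:

```latex
1 \longrightarrow Q \overset{\tau^A\vert_Q}{\longrightarrow} L^A
  \overset{\bar\pi^A}{\longrightarrow} \overline{N^A} \longrightarrow 1
% tau^A is injective with Ker(bar-pi^A) = tau^A(Q) and bar-pi^A is
% surjective; thus L^A is an extension of \overline{N^A} by a copy of Q.
```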
[**Lemma 7.2.**]{} [*Proof.*]{}For any subgroups $R$ and $T$ of $P$ containing $Q$, let us denote by $${\cal E}_{(b,\, H,\, G)}(R, T)\qq {\cal E}_{({\it w}(b),\, H^A,\, G^A)}(R, T)$$ the respective sets of ${\cal E}_{(b,\, H,\, G)}\-$ and ${\cal E}_{({\it w}(b),\, H^A,\, G^A)}\-$morphisms from $T$ to $R\,;$ since $A$ acts trivially on ${\cal E}_{(b,\, H,\, G)}(R, T)$, by [@I Lemma 13.8 and Corollary 13.9] each morphism in ${\cal E}_{(b,\, H,\, G)}(R, T)$ is induced by some element in $N^A\,;$ moreover, if $T_\nu$ and $R_\varepsilon$ are local pointed groups contained in $P_\gamma\,,$ it follows from Proposition 4.9 that we have $T_\nu\leq (R_\varepsilon)^x$ for some $x\in N^A$ if and only if we have $T_{{\it w}(\nu)}\leq (R_{{\it w}(\varepsilon)})^x$. Therefore, we get $${\cal E}_{(b,\, H,\, G)}(R, T)={\cal E}_{({\it w}(b),\, H^A,\, G^A)}(R, T) \quad .$$ At this point, it is easy to check that $L$, $\tau$ and $\bar\pi$ fulfill the conditions in Theorem 3.5 with respect to $G^A$, $H^A$ and the nilpotent block ${\it w}(b)$. Then this lemma follows from the uniqueness part in Theorem 3.5. [**Lemma 7.3.**]{} [*Proof.*]{}The first statement is an easy consequence of £6.3 and Lemma 7.2; then, the last equality follows from Corollary £3.15. [**7.4.**]{} [*Proof of Theorem 1.6.*]{}Firstly we consider the case where the block $b$ of $H$ is not stabilized by $G\,;$ then we have an isomorphism $${\rm Ind}^G_{N}(\O N b)\cong \O G c$$ of $\O G$-interior algebras mapping $1\otimes a\otimes 1$ onto $a$ for any $a\in \O N b$ and an isomorphism $${\rm Ind}^{G^A}_{N^A}(\O (N^A ){\it w}(b))\cong \O (G^A){\it w}(c)$$ of $\O (G^A)$-interior algebras mapping $1\otimes a\otimes 1$ onto $a$ for any $a\in \O (N^A) {\it w}(b)$. Suppose that an $\O(N^A\times N)$-module $M$ induces a Morita equivalence from $\O (N^A) {\it w}(b)$ to $\O N b$.
Then it is easy to see that the $\O(G^A\times G)$-module ${\rm Ind}^{G^A\times G}_{N^A\times N} (M)$ induces a Morita equivalence from $\O Gc$ to $\O (G^A) {\it w}(c)\,.$ So, we can assume that $G= N$ and then we have $G^A=N^A\,.$ By Corollary 3.15, there exists an isomorphism of both $(N/H)$-graded and $P$-interior algebras $$(\O G)_\gamma\cong S_\gamma\otimes_{\O } \O_*\hat L^{^\circ}\quad ; \leqno{7.4.1}$$ denote by $V_\gamma$ an $\O P$-module such that ${\rm End}_\O(V_\gamma)\cong S_\gamma\,;$ choosing $i\in \gamma$ and assuming that $(\O G)_\gamma = i(\O G)i\,,$ we know that the $\O Gb\otimes_\O (\O G)_\gamma^\circ\-$module $(\O G)i$ determines a Morita equivalence from $\O Gb$ to $(\O G)_\gamma\,,$ whereas the $(\O G)_\gamma\otimes_\O \O_*\hat L\-$module $V_\gamma\otimes_\O \O_*\hat L^{^\circ}$ determines a Morita equivalence from $(\O G)_\gamma$ to $\O_*\hat L^{^\circ}\,,$ so that the $\O Gb\otimes_\O \O_*\hat L\-$ module $$(\O G)i\otimes_{(\O G)_\gamma} (V_\gamma\otimes_\O \O_*\hat L^{^\circ}) \cong (\O G)i\otimes_{S_\gamma} V_\gamma$$ determines a Morita equivalence from $\O Gb$ to $\O_*\hat L^{^\circ}\,.$ Similarly, choosing $j\in \delta$ such that $ji = j = ij\,,$ assuming that $j(\O H)j = (\O H)_\delta$ and setting $j\. 
V_\gamma = V_\delta\,,$ so that $S_\delta = {\rm End}_\O (V_\delta)\,,$ the $\O Hb\otimes_\O \O Q\-$ module $$(\O H)j\otimes_{(\O H)_\delta} (V_\delta\otimes_\O \O Q) \cong (\O H)j\otimes_{S_\delta} V_\delta$$ determines a Morita equivalence from $\O Hb$ to $\O Q\,.$ Analogously, with evident notation, the $\O (G^A) w(b)\otimes_\O \O_*\,\widehat{\!L^A}\-$ module $$\O (G^A) w(i) \otimes_{w(S_\gamma)} w(V_\gamma)$$ determines a Morita equivalence from $\O (G^A) w(b)$ to $\O_*\,\widehat{L^A}^\circ\,,$ whereas the $\O (H^A) w(b)\otimes_\O \O Q\-$module $$\O (H^A)w(j)\otimes_{w(S_\delta)} w(V_\delta)$$ determines a Morita equivalence from $\O (H^A) w(b)$ to $\O Q\,.$ Consequently, identifying $\,\widehat{L^A}$ with $\hat L$ through the isomorphism $\hat\sigma$ (cf. Lemma £7.3), the $\O (G\times G^A)\-$module $$D= ((\O G)i\otimes_{S_\gamma} V_\gamma)\otimes_{\O_*\hat L} (w(V_\gamma)^\circ \otimes_{w(S_\gamma)} w(i) \O (G^A))$$ determines a Morita equivalence from $\O Gb$ to $\O (G^A) w(b)\,,$ whereas the $\O (H\times H^A)\-$module $$M = ((\O H)j\otimes_{S_\delta} V_\delta)\otimes_{\O Q } (w(V_\delta)^\circ \otimes_{w(S_\delta)} w(j)\O (H^A))$$ determines a Morita equivalence from $\O Hb$ to $\O (H^A) w(b)\,.$ Moreover, since we have the obvious inclusions $$(\O H)j\subset (\O G)i\quad ,\quad S_\delta \subset S_\gamma\qq V_\delta \subset V_\gamma \quad ,$$ it is easily checked that we have $$(\O H)j\otimes_{S_\delta} V_\delta\cong (\O H)i\otimes_{S_\gamma} V_\gamma\subset (\O G)i\otimes_{S_\gamma} V_\gamma \quad ;\leqno £7.4.2$$ in particular, we have an evident section $$(\O G)i\otimes_{S_\gamma} V_\gamma\too (\O H)j\otimes_{S_\delta} V_\delta$$ which is actually an $\O Hb\otimes_\O \O Q\-$module homomorphism.
Similarly, we have a split $\O (H^A) w(b)\otimes_\O \O Q\-$module monomorphism $$\O (H^A)w(j)\otimes_{w(S_\delta)} w(V_\delta)\too \O (G^A) w(i) \otimes_{w(S_\gamma)} w(V_\gamma) \quad .\leqno £7.4.3$$ In conclusion, the $\O Hb\otimes_\O \O Q\-$ and $\O (H^A) w(b)\otimes_\O \O Q\-$module homomorphisms £7.4.2 and £7.4.3, together with the inclusion $\O Q\subset \O_*\hat L\,,$ determine an $\O (H\times H^A)\-$module homomorphism $$M\too {\rm Res}_{H\times H^A}^{G\times G^A} (D) \leqno £7.4.4$$ which actually admits a section too. Now, denoting by $K$ the inverse image in $G\times G^A$ of the “diagonal” subgroup of $(G/H)\times (G^A/H^A)\,,$ we claim that the product by $K$ stabilizes the image of $M$ in $D\,,$ so that $M$ can be extended to an $\O K\-$module. Actually, we have $$K = (H\times H^A)\.\Delta (N_{G^A}(Q_\delta)) \quad ,$$ so that it suffices to prove that the image of $M$ is stable by multiplication by $\Delta (N_{G^A}(Q_\delta))\,.$ Given $x\in N_{G^A}(Q_\delta)$, there are some invertible elements $a_x\in (\O H)^Q$ and $b_x\in (\O (H^A))^Q$ such that $$xjx^{-1} = a_xj a_x^{-1}\qq x w(j)x^{-1} = b_x w(j)b_x^{-1}$$ and therefore $a_x^{-1}x$ and $b_x^{-1}x$ respectively centralize $j$ and $w(j)\,,$ so that $a_x^{-1}xj$ and $b_x^{-1}x w(j)$ respectively belong to $(\O G)_\delta$ and to $(\O G^A)_{w(\delta)}\,;$ but, according to isomorphisms £7.4.1 and £7.1.1, we have $G/H\-$ and $G^A/H^A\-$graded isomorphisms $$(\O G)_\delta\cong S_\delta\otimes_\O \O_*\hat L^\circ\qq (\O G^A)_{w(\delta)}\cong w(S_\delta)\otimes_\O \O_*\hat L^\circ$$ where we are setting $w(S_\delta) = w(j)w(S_\gamma)w(j)\,.$ Hence, identifying with each other both members of these isomorphisms and modifying if necessary our choice of $a_x\,,$ for some $s_x\in S_\delta\,,$ $t_x\in w(S_\delta)$ and $\hat y_x\in \hat L^\circ\,,$ we get $$a_x^{-1}x j = s_x\otimes \hat y_x\qq b_x^{-1}x w(j) = t_x\otimes \hat y_x \quad .$$ Thus, setting $w(V_\delta) = w(j)w(V_\gamma)\,,$ for any $a\in (\O
H)j\,,$ any $b\in (\O H^A)w(j)\,,$ any $v\in V_\delta$ and any $w\in w(V_\delta)\,,$ in $D$ we have $$\begin{aligned} (x,x)\!\!\!\!&\.&\!\!\!\!(a\otimes v)\otimes (w\otimes b) = (x a\otimes v)\otimes (w\otimes bx^{-1})\\ &=& (x ax^{-1}a_x (a_x^{-1}xj)\otimes v)\otimes (w\otimes (w(j)x^{-1}b_x)b_x^{-1}xbx^{-1})\\ &=&(x ax^{-1}a_x \otimes s_x\.v)\. \hat y_x\otimes \hat y_x^{-1}\.(w\.t_x^{-1}\otimes b_x^{-1}xbx^{-1})\\ &=&(x ax^{-1}a_x \otimes s_x\.v) \otimes (w\.t_x^{-1}\otimes b_x^{-1}xbx^{-1})\quad ;\end{aligned}$$ since $x ax^{-1}a_x $ and $b_x^{-1}xbx^{-1}$ respectively belong to $(\O H)j$ and $w(j)(\O H^A)\,,$ this proves our claim. Finally, since homomorphism £7.4.4 actually becomes an $\O K\-$module homomorphism, it induces an $\O (G\times G^A)\-$module homomorphism $${\rm Ind}_K^{G\times G^A}(M)\too D$$ which is actually an isomorphism as it is easily checked. We are done. The following theorem is due to Harris and Linckelmann (see [@H]). [**Theorem 7.5.**]{} [*Proof.*]{}By [@H Theorem 5.1], we can assume that $b$ is a $G\rtimes A$-stable block of ${\rm O}_{p'}(G)$, where ${\rm O}_{p'}(G)$ is the maximal normal $p'$-subgroup of $G$. Clearly $b$ as a block of ${\rm O}_{p'}(G)$ is nilpotent and thus $\O Gb$ is an extension of the nilpotent block algebra $\O {\rm O}_{p'}(G) b$. By [@H Theorem 5.1] again, ${\it w}(b)$ is a $G^A$-stable block of ${\rm O}_{p'}(G^A)$ and thus is nilpotent; thus $\O (G^A) {\it w}(b)$ is an extension of the nilpotent block algebra $\O {\rm O}_{p'}(G^A) {\it w}(b)$. By [@H Theorem 4.1], ${\it w}(b)$ is also the Glauberman correspondent of $b$ as a block of ${\rm O}_{p'}(G)$. Then, by Theorem 1.6, the block algebras $\O Gb$ and $\O (G^A){\it w}(b)$ are basically Morita equivalent. The following theorem is due to Koshitani and Michler (see [@KG]). 
[**Theorem 7.6.**]{} [*Proof.*]{}Since $P$ is normal in $G$, by [@AB 2.9] there is a block $b_P$ of $C_G(P)$ such that $b={\rm Tr}^G_{G_{b_P}}(b_P)$, where $G_{b_P}$ is the stabilizer of $b_P$ in $G$. Since $A$ and $G$ have coprime orders, by [@I Lemma 13.8 and Corollary 13.9], $b_P$ can be chosen such that $A$ stabilizes $b_P$. Since $P$ is the unique defect group of $b$, $P$ has to be contained in $G_{b_P}\,;$ then by [@KP Proposition 5.3], the intersection $Z(P)=P\cap C_G(P)$ is a defect group of $b_P$ and, in particular, $b_P$ is nilpotent. Thus the block algebra $\O Gb$ is an extension of the nilpotent block algebra $\O(P\. C_G(P))b_P$ and, in particular, we have $\bar N \cong E_G (P_\gamma)\,.$ The Glauberman correspondent of $b_P$ makes sense and by [@W Proposition 4], we have $${\it w}(b)={\rm Tr}^{G^A}_{(G^A)_{{\it w}(b_P)}}({\it w}(b_P)) \quad .$$ Since ${\it w}(b_P)$ has defect group $Z(P)$, it is also nilpotent and thus $\O (G^A){\it w}(b)$ is an extension of the nilpotent block algebra $\O(P\. C_{G^A}(P)){\it w}(b_P)\,;$ once again, we have $\bar N ^A\cong E_{G^A} (P_{w(\gamma)})\,.$ On the other hand, since $P$ is normal in $G\,,$ it follows from [@P6 Proposition 14.6] that $$(\O G)_\gamma\cong \O_*(P \rtimes \hat E_G (P_\gamma))\qq (\O G^A)_{w(\gamma)}\cong \O_*(P \rtimes \hat E_{G^A} (P_{w(\gamma)})) \quad ;$$ but, it follows from £6.3 that we have a $k^*\-$group isomorphism $$\hat E_G (P_\gamma)\cong \hat E_{G^A} (P_{w(\gamma)}) \quad .$$ We are done. [19]{} J. L. Alperin and M. Broué, Local methods in block theory, Ann. of Math. [**110**]{} (1979), 143-157. M. F. Atiyah and I. G. Macdonald, Introduction to commutative algebra. Addison-Wesley Publishing Co., Reading, Mass.-London-Don Mills, Ont. 1969. M. Broué, Equivalences of blocks of group algebras, Finite-Dimensional Algebras and Related Topics (Ottawa, ON, 1992), NATO Adv. Sci. Inst. Ser. C Math. Phys. Sci. vol. [**424**]{}, Kluwer Acad. Publ., Dordrecht (1994), pp. 1-26. M. Broué and L.
Puig, A Frobenius theorem for blocks. Invent. Math. [**56**]{} (1980), 117-128. M. Broué and L. Puig, Characters and local structure in $G$-algebras. J. Algebra [**63**]{} (1980), no. 2, 306–317. M. Broué, Isométries parfaites, types de blocs, catégories dérivées. Astérisque No. [**181-182**]{} (1990), 61–92. H. Cartan and S. Eilenberg, Homological Algebra, Princeton Math. 19, 1956, Princeton University Press. W. Feit, The Representation Theory of Finite Groups, North-Holland Math. Library vol. [**25**]{}, North-Holland, Amsterdam (1982). M. E. Harris and M. Linckelmann, On the Glauberman and Watanabe correspondences for blocks of finite $p$-solvable groups, Trans. Amer. Math. Soc. [**354**]{} (2002), no. 9, 3435–3453. M. E. Harris, Glauberman-Watanabe corresponding $p$-blocks of finite groups with normal defect groups are Morita equivalent. Trans. Amer. Math. Soc. [**357**]{} (2005), no. 1, 309–335 (electronic). I. M. Isaacs, Character theory of finite groups, Academic Press, New York, 1976. S. Koshitani and G. Michler, Glauberman correspondence of $p$-blocks of finite groups. J. Algebra [**243**]{} (2001), no. 2, 504–517. B. Külshammer, Crossed products and blocks with normal defect groups. Comm. Algebra [**13**]{} (1985), 147-168. B. Külshammer and L. Puig, Extensions of nilpotent blocks. Invent. Math. [**102**]{} (1990), no. 1, 17–71. L. Puig, Pointed groups and construction of characters, Math. Z. [**176**]{} (1981), 265-292. L. Puig, Local fusions in block source algebras, J. Algebra [**104**]{} (1986), 358-369. L. Puig, Pointed groups and construction of modules, J. Algebra [**116**]{} (1988), 7-129. L. Puig, Nilpotent blocks and their source algebras, Invent. Math. [**93**]{} (1988), 77-116. L. Puig, Notes on $\O^*$-groups, preprint, 1998. L. Puig, On the local structure of Morita and Rickard equivalences between Brauer blocks, Progress in Mathematics, [**178**]{}. Birkhäuser Verlag, Basel, 1999. L. Puig, The hyperfocal subalgebra of a block.
Invent. Math. [**141**]{} (2000), no. 2, 365–397. L. Puig, Source Algebras of $p$-Central Group Extensions, J. Algebra [**235**]{} (2001), 359-398. L. Puig, On the Brauer-Glauberman correspondence. J. Algebra [**319**]{} (2008), no. 2, 629–656. W. Reynolds, Blocks and normal subgroups of finite groups. Nagoya Math. J. [**22**]{} (1963), 15-32. J.-P. Serre, Local Fields, GTM 67, 1979, Springer-Verlag, New York. A. Watanabe, The Glauberman Character Correspondence and Perfect Isometries for Blocks of Finite Groups, J. Algebra [**216**]{} (1999), 548-565. J. Thévenaz, $G$-algebras and modular representation theory. Oxford Mathematical Monographs. Oxford Science Publications. The Clarendon Press, Oxford University Press, New York, 1995. Y. Zhou, Extensions of nilpotent blocks over arbitrary fields. Math. Z. [**261**]{} (2009), no. 2, 351–371. Lluis Puig CNRS, Institut de Mathématiques de Jussieu 6 Av Bizet, 94340 Joinville-le-Pont, France puig@math.jussieu.fr Yuanyang Zhou Department of Mathematics and Statistics Central China Normal University Wuhan, 430079 P.R. China zhouyy74@163.com
--- abstract: 'We study the robustness of a fault-tolerant quantum computer subject to Gaussian non-Markovian quantum noise, and we show that scalable quantum computation is possible if the noise power spectrum satisfies an appropriate “threshold condition.” Our condition is less sensitive to very-high-frequency noise than previously derived threshold conditions for non-Markovian noise.' author: - Hui Khoon Ng and John Preskill title: 'Fault-tolerant quantum computation versus Gaussian noise' --- Introduction {#sec:introduction} ============ The theory of fault-tolerant quantum computation shows that properly encoded quantum information can be protected against decoherence and processed reliably with imperfect hardware [@Shor96]. Demonstrating that this theory really works in practice is one of the great challenges facing contemporary science. A large-scale fault-tolerant quantum computer would be a scientific milestone, and it should also be [*useful*]{}, capable of solving hard problems that are beyond the reach of ordinary digital computers. Though the theory of quantum fault tolerance strengthens our confidence that truly scalable quantum computers can be realized in the next few decades, failure is certainly possible. Perhaps the engineering challenges will prove to be so daunting, and the resources needed to overcome them so demanding, that society will be unable or unwilling to bear the cost for the foreseeable future. Perhaps new fundamental principles of physics, as yet undiscovered, will prevent large-scale quantum computers from behaving as currently accepted theory dictates. Finding that quantum computers fail for a fundamental reason would be a significant scientific advance, but would disappoint prospective users. There is a third reason to worry about the future prospects for fault-tolerant quantum computing. Mathematical results establishing that fault tolerance works effectively are premised on assumptions about the properties of the noise. 
The most obvious requirement is that the noise must be sufficiently weak — if the noise strength is below a [*threshold of accuracy*]{} then quantum computing is scalable in principle. But in addition, the noise must be suitably local, both spatially and temporally. Perhaps the quest for a quantum computer will be frustrated because the noise afflicting actual hardware is just not amenable to fault-tolerant protocols. We can anticipate therefore that progress toward scalable quantum computing will require an ongoing dialog between experimenters who will better understand the limitations of their devices and theorists who will propose better ways to overcome the limitations and to evaluate the efficacy of these proposals. In the meantime, an important task for theorists is to broaden the range of noise models for which useful accuracy threshold theorems can be proven, and we pursue that task in this paper. Our main result is a new proof of the threshold theorem for non-Markovian Gaussian noise models, in which system qubits are locally coupled to bath variables that have Gaussian fluctuations. Specifically, if the bath is a system of uncoupled harmonic oscillators, at either zero or nonzero temperature, our theorem expresses the threshold condition in terms of the power spectrum of the bath fluctuations. Early proofs of the threshold theorem [@ben-or; @kitaev-threshold; @knill-laflamme] assumed that the noise is [*Markovian*]{}. This means that each quantum gate in the noisy circuit can be modeled as a unitary transformation that acts jointly on a set of the qubits in the computer (the system qubits) and on the environment (the bath variables), but where it is assumed that the bath has no memory — the state of the bath is refreshed after every gate. The theorem was extended to a class of non-Markovian noise models in [@terhal], and further generalized in [@AGP] and [@AKP]. 
The results of [@AGP; @AKP] have the substantial virtue that the state of the bath and its internal dynamics can be arbitrary; for fault-tolerant quantum computing to work, it is only required that the bath couple weakly and locally to the system. However these results also have two serious drawbacks. First, the threshold condition is not easily related to experimentally accessible quantities; rather it requires terms in the Hamiltonian that couple the system to the bath to have a sufficiently small operator norm. Second, this condition severely constrains the very-high-frequency fluctuations of the bath. Intuitively, it seems that this constraint, which may limit the applicability of the threshold theorem to noise in some realistic settings, ought not to be necessary, since fluctuations with a time scale much shorter than the time it takes to execute a quantum gate tend to average out. One possible way to reach more pleasing conclusions is to make physically reasonable assumptions about the noise that go beyond the assumptions of [@AGP; @AKP]; that is the approach we follow here. Our new threshold theorem applies to any noise model in which the bath variables are free fields (aside from their coupling to the system qubits), and expresses the threshold condition in terms of the bath’s two-point correlation function, which is in principle measurable. It should be possible to extend our analysis to the case where the bath variables have sufficiently weak self-interactions, though we will not pursue that extension here. Furthermore, though our new threshold condition still requires the very-high-frequency bath fluctuations to be sufficiently weak, this requirement is considerably relaxed compared to previous threshold theorems that apply to non-Markovian noise. Showing that these requirements can be relaxed even further, perhaps by making additional physically motivated assumptions, is an important open problem. 
Experimenters use a variety of techniques to suppress the noise in quantum hardware, such as cleverly designed pulse sequences to improve the fidelity of quantum gates (spin echoes, dynamical decoupling, etc.) and intrinsically robust encodings of quantum information (noiseless subsystems, topologically protected qubits, etc.). These techniques can be highly effective and are likely to be incorporated into the design of future quantum computers, but do not by themselves suffice to ensure the scalability of quantum computing. After such tricks are exhausted, some residual noise inevitably remains that must be controlled using quantum error-correcting codes and fault-tolerant methods. Since our objective in this paper is to study the effectiveness of these fault-tolerant methods, our noise models may be viewed as effective descriptions of this residual noise in “fundamental” quantum gates that might already be realized using complex and sophisticated protocols. After reviewing previously known formulations of the quantum accuracy threshold theorem in Sec. \[sec:models\] (with some details relegated to Appendix A), we state our new result in Sec. \[sec:gaussian\], explore some of its implications in Sec. \[sec:implications\], derive it in Sec. \[sec:derivation\], and discuss some generalizations in Sec. \[sec:generalizations\]. We derive a sharper result for the case of pure dephasing noise in Sec. \[sec:diagonal\]. Sec. \[sec:conclusions\] contains our conclusions. Noise models and quantum accuracy threshold theorems {#sec:models} ==================================================== Here we will briefly review some previously known formulations of the quantum accuracy threshold theorem, and explain why these results still leave something to be desired. Then in Sec. \[sec:gaussian\] we will state our new result, which addresses some of the shortcomings of the previous results.
The goal of fault-tolerant quantum computing is to simulate an ideal quantum circuit using the noisy gates that can be executed by actual devices. Theoretical results show that this goal is attainable if the noise is not too strong and not too strongly correlated. The essential trick that makes fault tolerance work is that the logical quantum state processed by the computer can be encoded very nonlocally, so that it is well protected from damage caused by local noise. It is convenient to analyze the effectiveness of a fault-tolerant noisy circuit by invoking a [*fault-path expansion*]{}; schematically, $$\label{fault-path-expansion} {\rm Noisy ~ Circuit} = \sum {\rm ``Fault ~ Path"}~.$$ Let us use the term [*location*]{} to speak of an operation in a quantum circuit that is performed in a single time step; a location may be a single-qubit or multi-qubit gate, a qubit preparation step, a qubit measurement, or the identity operation in the case of a qubit that is idle during the time step. In each fault path, the quantum gates are faulty at a specified set of locations in the circuit, while at all other locations the quantum gates are assumed to be ideal. We say that the faulty locations are “bad” and that the ideal locations are “good.” The general concept of a fault-path expansion applies quite broadly, and different noise models can be distinguished according to how we flesh out the meaning of eq. (\[fault-path-expansion\]). Local stochastic noise ---------------------- In a “stochastic” noise model we assign a [*probability*]{} to each fault path [@AGP]. 
We speak of [*local stochastic noise*]{} with strength $\varepsilon$ if, for any specified set $\mathcal{I}_r$ of $r$ locations in the circuit, the sum ${P}^{\rm bad}(\mathcal{I}_r)$ of the probabilities of all fault paths that are bad at [*all*]{} of these $r$ locations satisfies $${P}^{\rm bad}(\mathcal{I}_r) \le \varepsilon^r~.$$ In this noise model, no further restrictions are imposed on the noise, and in particular the trace-preserving quantum operation applied at the faulty locations may be chosen for each fault path by an adversary who wants the computation to fail. Thus the faults can be correlated, both spatially and temporally, but the adversary’s power is limited because an attack on $r$ specified circuit locations occurs with probability at most $\varepsilon^r$. The noise is “local” in the sense that attacking each additional location suppresses the probability of the fault path by another power of $\varepsilon$. Most proofs of the threshold theorem use [*recursive simulations*]{}. This means that quantum information is protected by a hierarchy of codes within codes, and that the fault-tolerant circuit has a self-similar structure. We refer to an unencoded quantum circuit as a “level-0” simulation. In a level-1 simulation, each elementary gate in the level-0 circuit is replaced by a level-1 [*gadget*]{} constructed from elementary gates; this 1-gadget performs the appropriate encoded operation on logical qubits that are protected by a quantum error-correcting code $\mathcal{C}$. In a level-2 simulation, each elementary gate in the ideal circuit is replaced by a level-2 gadget; the 2-gadget is constructed by replacing each elementary gate in the 1-gadget by a 1-gadget. A 2-gadget operates on quantum information protected by $\mathcal{C}\triangleright\mathcal{C}$, where $\triangleright$ denotes code concatenation. 
(That is, $\mathcal{C}_1\triangleright\mathcal{C}_2$ is encoded by first encoding the “outer” code $\mathcal{C}_2$, and then encoding each qubit in the $\mathcal{C}_2$ block using the “inner” code $\mathcal{C}_1$.) In a level-$k$ simulation, each elementary gate in the ideal circuit is replaced by a level-$k$ gadget, constructed by replacing each elementary gate in the $(k{-}1)$-gadget by a 1-gadget; it operates on quantum information protected by $\mathcal{C}^{\triangleright k}$. For local stochastic noise, and also for other noise models with suitable properties, a recursive simulation can be analyzed by a procedure called [*level reduction*]{}, in which a level-$k$ simulation is mapped to a “coarse-grained” level-$(k{-}1)$ simulation that acts on the top-level logical information in exactly the same way. Suppose for example that $\mathcal{C}$ is a distance-3 code that can correct one error. Then if the 1-gadgets are properly designed, each “good” 1-gadget that contains no more than one faulty location simulates the corresponding ideal gate correctly, while “bad” 1-gadgets with more than one fault may simulate the ideal gate incorrectly. In the level reduction step, for each fault path the good 1-gadgets are mapped to ideal level-0 gates, while the bad 1-gadgets are mapped to faulty level-0 gates. After this step, the resulting noisy circuit is still subject to local stochastic noise, but with a renormalized value of the noise strength $$\label{noise-renormalization} \varepsilon^{(1)} = \varepsilon^2/\varepsilon_0 = \varepsilon_0\left(\varepsilon/\varepsilon_0\right)^2~.$$ The renormalized value of the noise strength is $O(\varepsilon^2)$, because at least two faults are required for a 1-gadget to fail; the quantity $\varepsilon_0^{-1}$ is a combinatoric factor counting the number of “malignant” sets of locations within the 1-gadget where faults can cause failure. 
Since level reduction maps local stochastic noise to local stochastic noise (but with a revised value of the noise strength), the level reduction step can be carried out repeatedly, and analyzed by the same method each time. That the structure of the noise is preserved, even though its strength is renormalized, is a useful feature of the local stochastic noise model not shared by some noise models. For example, if faults in level-0 gates were independently and identically distributed, the effective noise model after one level reduction step would become correlated rather than independent. See [@AGP; @Aliferis07] for a more detailed discussion of the level reduction procedure. By repeating the level reduction step altogether $k$ times, we reduce the level-$k$ simulation to an effective level-0 ([*i.e.*]{}, unencoded) simulation with noise strength $$\label{level-k-strength} \varepsilon^{(k)} = \varepsilon_0\left(\varepsilon/\varepsilon_0\right)^{2^k}~.$$ It follows that for $\varepsilon < \varepsilon_0$ (the [*accuracy threshold*]{}), the effective noise strength becomes negligibly small for $k$ sufficiently large, and the simulation becomes highly reliable. More precisely, for any fixed $\varepsilon < \varepsilon_0$ and fixed $\delta > 0$, an ideal circuit with $L$ gates can be simulated with error probability $\delta$ by a noisy circuit with $L^*$ gates, where for some constant $c$ $$L^*/L =O\left(\left( \frac{\log (L/\delta)}{\log(\varepsilon_0/\varepsilon)} \right)^c \right)~.$$ (The constant $c$ is determined by the size of the 1-gadgets.) Thus, with reasonable overhead cost, the noisy simulation gets the right answer with high probability. This is the quantum accuracy threshold theorem for local stochastic noise.
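As a quick numerical illustration of the double-exponential suppression in eq. (\[level-k-strength\]), here is a small sketch of ours; the values of $\varepsilon_0$ and $\varepsilon$ are illustrative assumptions, not estimates from this paper.

```python
def renormalized_strength(eps, eps0, k):
    """Effective noise strength after k rounds of level reduction:
    eps^(k) = eps0 * (eps / eps0) ** (2 ** k)."""
    return eps0 * (eps / eps0) ** (2 ** k)

eps0 = 1.0e-3  # illustrative threshold value (assumption)
eps = 3.0e-4   # physical noise strength, below the assumed threshold

for k in range(5):
    print(k, renormalized_strength(eps, eps0, k))
```

Below threshold the effective strength falls doubly exponentially in $k$; for $\varepsilon > \varepsilon_0$ the same formula blows up instead, which is the content of the threshold condition.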
For the threshold theorem to apply, two features of the simulation are essential: First, we must assume that quantum gates can be executed in parallel — otherwise we would be unable to control storage errors that occur simultaneously in different parts of the computer. Second, we assume that qubits can be “discarded” and replaced by fresh qubits (for example, by measuring the qubits and resetting them) — otherwise we would be unable to flush from the computer the entropy introduced by noise. Estimates of the accuracy threshold $\varepsilon_0$ often rely on further assumptions. For example, if we assume that qubit measurements are as fast as quantum gates, that classical computations are arbitrarily accurate, that the accuracy of a two-qubit quantum gate does not depend on the spatial separation of the qubits, and that no data qubits “leak” from the computational Hilbert space, then it has been shown that $\varepsilon_0 > .67 \times 10^{-3}$ [@fibonacci]. For noise models with weaker correlations than in the local stochastic noise model, the proven accuracy threshold is above $10^{-3}$ [@AGP2; @Reichardt06], and numerical evidence suggests that the actual value of the threshold can be of order $1\%$ [@knill; @raussendorf]. Furthermore, it has been shown that the threshold is not drastically reduced if some of these assumptions are relaxed, for example by allowing measurements to be slow [@slow], allowing leakage [@leakage], or requiring quantum gates to be local on a two-dimensional array [@svore]. Local non-Markovian noise {#subsec:local-noise} ------------------------- The local stochastic noise model is handy for analysis and has some quasi-realistic features, but it is still rather artificial. From a physics perspective, it is more natural to formulate the noise model in terms of a Hamiltonian $H$ that governs the joint evolution of the system and the bath. 
We may express $H$ as $$H=H_S+H_B +H_{SB}~,$$ where $H_S$ is the time-dependent system Hamiltonian that realizes the ideal quantum circuit, $H_B$ is the (arbitrary) Hamiltonian of the bath, and $H_{SB}$ is a perturbation, responsible for the noise, that may couple the system to the bath. We say that such a noise model is [*non-Markovian*]{}, meaning that quantum information can escape from the system to the bath and then return to the system at a later time, so that the state of the system at time $t + dt$ is not uniquely determined by its state at time $t$. Furthermore, $H_{SB}$ may also contain terms that act nontrivially only on the system, representing unitary noise arising from imperfect control of the system Hamiltonian. Actually, the local stochastic noise model already incorporates some non-Markovian effects; even when fault paths are weighted by probabilities, the adversary who attacks the circuit might employ a quantum memory. But different methods are needed to analyze the consequences of Hamiltonian noise models, because fault paths are summed coherently rather than stochastically. The locations in a quantum circuit include not only quantum gates and storage steps, but also qubit preparation and measurement steps. Preparation and measurement noise can be incorporated into a Hamiltonian description by various means. In this paper we will take an especially simple approach, modeling an imperfect preparation by an ideal preparation followed by evolution governed by $H$, and modeling an imperfect measurement by an ideal measurement preceded by evolution governed by $H$. For the time being, to simplify the discussion, we will imagine that system qubits are prepared only at the very beginning of the computation and measured only at the very end. Preparations and measurements that occur at intermediate times can easily be incorporated; we will elaborate on this point in Sec. \[sec:generalizations\]. 
For the continuous-time Hamiltonian dynamics we are now considering, a “location” consists of a specified qubit or set of qubits to which a gate is applied, and a specified [*time interval*]{} during which that gate is realized by the ideal system Hamiltonian $H_S$. We may say that the Hamiltonian noise model is “local” if the perturbation $H_{SB}$ can be expressed as a sum of terms $$\label{expansion-system-bath-Hamiltonian} H_{SB}= \sum_a H_{SB}^{(a)}~,$$ where each $H_{SB}^{(a)}$ acts on only a small number of system qubits (while perhaps also acting collectively on many bath variables). The joint unitary time evolution operator $U_{SB}$ for system and bath, resulting from integrating the Schrödinger equation for Hamiltonian $H$, can be formally expanded to all orders in time-dependent perturbation theory in $H_{SB}$. In any fixed term in this expansion, perturbations chosen from the set $\{H_{SB}^{(a)}\}$ are inserted at specified times. For such a fixed term in the perturbation expansion, let us say that a location in the (level-0) noisy simulation is “bad” if an inserted perturbation acts nontrivially somewhere inside that location; otherwise that location is “good.” Of course, under this definition a single insertion of $H_{SB}^{(a)}$ might cause two (or perhaps more) locations to be bad in a particular time step, if $H_{SB}^{(a)}$ acts collectively on two qubits that are undergoing different gates executed in parallel in the ideal circuit. As already noted, we may assume that the system qubits have been initialized ideally at the start of the Hamiltonian evolution; we denote this initial system state by $|\Psi_S^0\rangle$. We also assume that the initial state of the bath is a pure state $|\Psi_B^0\rangle$. 
There is really no loss of generality in supposing that the bath starts out in a pure state; if we wish to consider a mixed initial state of the bath instead (for example, a thermal state), we may include in the bath a “reference” system that “purifies” the mixed state. We have also noted that we may assume that final measurements performed on system qubits are ideal. Just before these final measurements are conducted, the (pure) state of the system and bath is $$|\Psi_{SB}\rangle=U_{SB}|\Psi_{SB}^0\rangle~,$$ where $|\Psi_{SB}^0\rangle = |\Psi_S^0\rangle\otimes |\Psi_B^0\rangle$ is the initial state of system and bath. For any specified set $\mathcal{I}_r$ of $r$ locations in the circuit, let us denote by $|\Psi_{SB}^{\rm bad}(\mathcal{I}_r)\rangle$ the sum of all the terms in the formal perturbation expansion of $|\Psi_{SB}\rangle$ such that all of these $r$ locations are bad. Then we speak of [*local non-Markovian noise*]{} (or more briefly [*local noise*]{}) with strength $\varepsilon$ if $$\label{local-noise-strength} \| |\Psi_{SB}^{\rm bad}(\mathcal{I}_r)\rangle\| \le \varepsilon^r~.$$ The noise strength $\varepsilon$ can be related to properties of the perturbation $H_{SB}$. We will sometimes refer to this model as local [*coherent*]{} noise, to emphasize that (in contrast to the local [*stochastic*]{} noise model) fault paths are assigned amplitudes rather than probabilities. Although there are some new subtleties (see [@AGP] and Appendix \[sec:appendix\]), the level-reduction concept can be applied to Hamiltonian noise models in much the same way as for stochastic models. We may say that a 1-gadget is bad if it contains bad level-0 gates at a malignant set of locations, that a 2-gadget is bad if it contains bad 1-gadgets at a malignant set of locations, that a 3-gadget is bad if it contains bad 2-gadgets at a malignant set of locations, and so on. 
For any specified set $\mathcal{I}_r^{(k)}$ of $r$ $k$-gadgets in the circuit, let us denote by $|\Psi_{SB}^{\rm bad}(\mathcal{I}_r^{(k)})\rangle$ the sum of all the terms in the formal perturbation expansion of $|\Psi_{SB}\rangle$ such that all of these $r$ $k$-gadgets are bad. Then it follows from eq. (\[local-noise-strength\]) that $$\label{local-noise-strength-levelk} \| |\Psi_{SB}^{\rm bad}(\mathcal{I}_r^{(k)})\rangle\| \le \left(\varepsilon^{(k)}\right)^r~,$$ with $\varepsilon^{(k)}$ as in eq. (\[level-k-strength\]); the derivation of eq. (\[local-noise-strength-levelk\]) is sketched in Appendix A. Furthermore, a level-$k$ simulation in which all $k$-gadgets are good simulates the ideal circuit perfectly. In this sense, repeated level reduction reduces a level-$k$ simulation to an equivalent level-0 simulation while mapping local noise to local noise with a renormalized noise strength $\varepsilon^{(k)}$, and for $\varepsilon < \varepsilon_0$, the renormalized noise strength becomes negligible for large $k$. The threshold value $\varepsilon_0$ of the noise strength for local noise is of the same order (though not exactly the same) as the threshold for local stochastic noise. We emphasize that, once eq. (\[local-noise-strength\]) is established, we can derive eq. (\[local-noise-strength-levelk\]) without any further assumptions about the Hamiltonian $H=H_S+H_B+H_{SB}$. The strength $\varepsilon$ of local noise can be estimated based on the detailed properties of the expansion in eq. (\[expansion-system-bath-Hamiltonian\]) of the perturbation $H_{SB}$ in terms of local system operators. For example, in [@AGP] the noise was assumed to be “short range” in the sense that the perturbation $H_{SB}$ acts collectively on a pair of data qubits only while the ideal system Hamiltonian $H_S$ also couples those two data qubits — that is, only while the ideal quantum circuit calls for that pair of qubits to undergo a two-qubit gate. 
For this short-range local noise model, it was shown that eq. (\[local-noise-strength\]) is satisfied if we choose $$\label{short-range} \varepsilon = \left(\max_{a,t} \| H_{SB}^{(a)}(t)\| \right)\cdot t_0 ~,$$ where $t_0$ is the time needed to execute a quantum gate, $\| \cdot \|$ denotes the sup operator norm, and the maximum is over all circuit locations and all times. On the other hand, in [@AKP] the noise was assumed to be “long range” with $H_{SB}$ coupling each pair of data qubits irrespective of the structure of the ideal circuit. In that case we may write $$H_{SB}=\sum_{<ij>} H_{<ij>}~,$$ where the sum is over all unordered pairs of system qubits; $H_{<ij>}$ acts collectively on the pair of qubits $<ij>$ and also on the bath. For this long-range local noise model, it was shown that eq. (\[local-noise-strength\]) is satisfied if we choose $$\label{long-range} \varepsilon^2 = C\cdot \left( \max_{i,t}\sum_j \| H_{<ij>}(t)\|\right) \cdot t_0 ~,$$ where $C$ is the numerical constant $C = 2e \approx (2.33)^2$. (This is actually a slight improvement over the value of $C$ reported in [@AKP]; the improved value can be derived using the reasoning described in Sec. \[subsec:many-marked\] below, if we assume $\varepsilon^2 \le e$.) The origin of eq. (\[short-range\]) is easy to understand intuitively [@terhal]. If each one of the $r$ specified locations in $\mathcal{I}_r$ is bad, then the perturbation must be inserted at least once in each of these locations, and each insertion contributes a factor of at most $\|H_{SB}^{(a)}\|$ to the norm of the state. Inside each location, there is an [*earliest*]{} insertion of the perturbation that can occur at any time during the duration of the location, a time window of width $t_0$. Integrating over the time of the earliest insertion of the perturbation inside each location, we obtain eq. (\[short-range\]).
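Schematically, for a single bad location this integration argument reads as follows (a sketch of ours in the notation above; we isolate the earliest insertion time $t_1$ inside the location and bound the rest of the perturbed evolution, which resums to norm at most one): $$\| |\Psi_{SB}^{\rm bad}(\mathcal{I}_1)\rangle\| \;\le\; \int_0^{t_0} dt_1\, \max_{a} \|H_{SB}^{(a)}(t_1)\| \;\le\; \left(\max_{a,t} \|H_{SB}^{(a)}(t)\|\right)\cdot t_0 \;=\; \varepsilon~;$$ repeating this for each of the $r$ locations in $\mathcal{I}_r$ yields one such factor per location, giving the bound $\varepsilon^r$ of eq. (\[local-noise-strength\]).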
For the long-range noise model, a single insertion of the perturbation $H_{<ij>}$ can cause two circuit locations to be bad if qubits $i$ and $j$ are participating in separate gates, and therefore the noise strength is correspondingly higher (observe that $\varepsilon^2$ rather than $\varepsilon$ appears on the left-hand-side of eq. (\[long-range\])). Assessment {#subsec:assessment} ---------- The results eq. (\[short-range\]) and eq. (\[long-range\]) are significant, because they demonstrate that quantum computing is scalable in principle for non-Markovian noise described by a system-bath Hamiltonian. Furthermore, this formulation of the threshold theorem has the noteworthy advantage that the argument works for any bath Hamiltonian $H_B$. The dynamics of the bath does not matter, as long as the perturbation $H_{SB}$ is “local” and sufficiently weak. However, expressing the threshold condition as in eq. (\[short-range\]) or eq. (\[long-range\]) has serious drawbacks. First we should note that while in the local stochastic noise model we may interpret the noise strength $\varepsilon$ as an error [*probability*]{} per gate, in the non-Markovian noise model $\varepsilon$ is really an error [*amplitude*]{}. Since a probability is a square of an amplitude, requiring $\varepsilon < \varepsilon_0$ in the local noise model is a far more stringent criterion than requiring $\varepsilon < \varepsilon_0$ in the local stochastic noise model. Our analysis yields a much weaker lower bound on the accuracy threshold for the local noise model than for the local stochastic noise model because we pessimistically allow the bad fault paths to add together with a common phase and thus to interfere constructively. Most likely this analysis is far too pessimistic; it is reasonable to expect that distinct fault paths have only weakly correlated phases, and if so, then the modulus of a sum of $N$ fault paths should grow like $\sqrt{N}$ rather than linearly in $N$. 
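This heuristic is easy to check numerically; the following quick simulation is ours and purely illustrative. For $N$ unit-modulus amplitudes with independent, uniformly random phases, the expected squared modulus of the sum is exactly $N$, so the typical modulus grows like $\sqrt{N}$.

```python
import cmath
import math
import random

def fault_path_sum(n, rng):
    """Sum of n unit-modulus fault-path amplitudes with independent random phases."""
    return sum(cmath.exp(1j * rng.uniform(0.0, 2.0 * math.pi)) for _ in range(n))

rng = random.Random(12345)
n, trials = 400, 2000
mean_sq = sum(abs(fault_path_sum(n, rng)) ** 2 for _ in range(trials)) / trials
print(mean_sq / n)  # close to 1, i.e. E|sum|^2 = n
```

By contrast, the rigorous bounds above allow the modulus of a sum of $N$ fault paths to be as large as $N$, which is the source of the pessimism discussed in the text.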
That is, if the phases of fault paths can be regarded as random, then we expect the [*probabilities*]{} of the fault paths, rather than their amplitudes, to accumulate linearly. An important open problem for the theory of quantum fault tolerance is to put this phase-randomization hypothesis on a rigorous footing, and thereby to establish a much higher estimate of the accuracy threshold for local noise. But we will not be addressing this problem in this paper. There are other drawbacks of the threshold condition eq. (\[short-range\]) that we [*will*]{} try to address, however. One issue is that the norm of the system-bath Hamiltonian is not directly measurable in experiments, and it would be far preferable to state the threshold condition in terms of experimentally accessible quantities, such as the noise power spectrum. In fact, for otherwise reasonable noise models, the norm $\| H_{SB}^{(a)}\|$ could be formally infinite (if for example the system qubits couple to unbounded bath operators such as the quadrature amplitudes of bath oscillators), and in such cases the threshold theorem has little force. In more physical terms, an undesirable feature of demanding small $\varepsilon$ where $\varepsilon$ is given by eq. (\[short-range\]) is that this condition requires that the very-high-frequency component of the noise be particularly weak, a requirement that seems not to be physically well motivated. To be concrete, suppose that $$H_{SB}^{(a)}= \mathcal{S}^{(a)}\otimes \mathcal{B}^{(a)}~,$$ where $\mathcal{S}^{(a)}$ is a local Hermitian system operator with $\|\mathcal{S}^{(a)}\|=1$ and $\mathcal{B}^{(a)}$ is a Hermitian bath operator. Then combining the condition $\varepsilon < \varepsilon_0$ with eq. 
(\[short-range\]) implies in particular that $$\label{bath-t-equals-t-prime} \langle \Psi_{SB}^0|\mathcal{B}^{(a)}(t)\mathcal{B}^{(a)}(t)|\Psi_{SB}^0\rangle= \int_{-\infty}^\infty \frac{d\omega}{2\pi}~ \tilde \Delta(\omega)~ < ~ \varepsilon_0^2/t_0^2~,$$ where $\tilde \Delta(\omega)$ is the Fourier transform of the bath’s two-point correlation function, defined by $$\langle\Psi_{SB}^0|\mathcal{B}^{(a)}(t)\mathcal{B}^{(a)}(t')|\Psi_{SB}^0\rangle=\int_{-\infty}^\infty \frac{d\omega}{2\pi} e^{-i\omega(t-t')}\tilde \Delta(\omega)~$$ ($\mathcal{B}^{(a)}(t)$ denotes the interaction-picture bath operator). Suppose that the fluctuations of the bath variables are Ohmic (and at zero temperature); that is, linear in frequency at low (positive) frequency and exponentially decaying at frequencies large compared to the cutoff frequency $\tau_c^{-1}$: $$\label{ohmic-spectrum} \tilde \Delta(\omega)= \begin{cases} 2\pi A \omega e^{-\omega\tau_c} & \mbox{if }\omega \ge 0 \\ 0 & \mbox{if } \omega < 0 \end{cases} ~,$$ where $A$ is a positive dimensionless parameter quantifying the strength of the Ohmic noise. Then the threshold condition implies that $$\label{linear-divergence} \sqrt{A}\cdot \left( \frac{t_0}{\tau_c}\right) < \varepsilon_0~.$$ For the case of Ohmic noise, then, the quantity that is required to be small is linearly “ultraviolet divergent;” that is, it has a linear sensitivity to the high-frequency cutoff $\tau_c^{-1}$, which may be orders of magnitude higher than the characteristic frequency $t_0^{-1}$ of the ideal computation. The extreme sensitivity of the threshold condition to the very-high-frequency noise seems surprising, since one’s naive expectation is that noise with zero mean and frequency much larger than $t_0^{-1}$ should nearly average out. This unsatisfying limitation of eq. 
(\[short-range\]), already pointed out in the original paper by Terhal and Burkard [@terhal] (and later highlighted by Alicki [@alicki] and by Hines and Stamp [@stamp]), may just be a shortcoming of the analysis, but conceivably it hints at a deeper problem for quantum fault tolerance. For example, it has been suggested [@alicki-horodecki] that during the course of a long quantum computation, an initially benign state of the bath may be pushed toward a far more malicious state that compromises the fault-tolerant protocol. Perhaps high-frequency noise with zero mean, which locally seems incapable of inflicting serious harm, has cumulative global effects that are surprisingly troublesome. Whether or not one suspects that the environment could be so cunning an adversary, stronger rigorous arguments establishing that quantum computing is robust against non-Markovian noise would surely be welcome! Our central result in this paper is a new estimate of the noise strength $\varepsilon$ that applies to a Hamiltonian description of Gaussian non-Markovian noise. We will formulate the noise model and state our result in Sec. \[sec:gaussian\], discuss some implications in Sec. \[sec:implications\], and postpone the derivation until Sec. \[sec:derivation\]. For this particular important class of noise models, we will be able to state a threshold condition that is less sensitive to very-high-frequency noise, though some sensitivity will still remain. The combinatoric analysis that leads to our result borrows substantially from the derivation in [@AKP] of eq. (\[long-range\]), though the context is rather different. 
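As a sanity check on the linear divergence in eq. (\[linear-divergence\]), the frequency integral of the Ohmic spectrum eq. (\[ohmic-spectrum\]) can be evaluated directly; the following minimal sketch (with arbitrary values of $A$ and $\tau_c$) confirms that it equals $A/\tau_c^2$.

```python
import numpy as np
from scipy.integrate import quad

A, tau_c = 0.01, 1e-3  # arbitrary Ohmic strength and cutoff time
# integral of the Ohmic spectrum: int_0^inf (dw/2pi) * 2pi*A*w*exp(-w*tau_c)
val, _ = quad(lambda w: A * w * np.exp(-w * tau_c), 0.0, np.inf)
print(val, A / tau_c**2)  # the integral equals A / tau_c^2
```

Taking the square root and multiplying by $t_0$ reproduces the combination $\sqrt{A}\,(t_0/\tau_c)$ that the threshold condition requires to be small, with its linear sensitivity to the cutoff $\tau_c^{-1}$.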
Gaussian noise and the threshold condition {#sec:gaussian} ========================================== By “Gaussian noise” we mean a Hamiltonian noise model where the bath is a set of uncoupled harmonic oscillators, and each system qubit couples to a linear combination of oscillator quadrature amplitudes; hence (in units with $\hbar=1$) $$H_B= \sum_k \omega_k a_k^\dagger a_k~,$$ and $$\label{gaussian-system-bath} H_{SB}= \sum_x \sum_\alpha \sigma_\alpha(x) \otimes \tilde\phi_\alpha(x,t)~,$$ where $\tilde\phi_\alpha(x,t)$ is the Hermitian operator $$\label{gaussian-field} \tilde\phi_\alpha(x,t) = \sum_k \left( g_{k,\alpha}(x,t) a_k + g_{k,\alpha}^*(x,t)a_k^\dagger\right)~.$$ Here $x$ is a label indicating a system qubit’s position, and $\{\sigma_\alpha(x), \alpha=1,2,3\}$ are the three Pauli operators acting on qubit $x$. The $a_k$’s are annihilation operators for the bath oscillators, satisfying the commutation relation $[a_k,a_{k'}^\dagger]= \delta_{kk'}$, and $g_{k,\alpha}(x,t)$ is a complex coupling parameter that determines how strongly oscillator $k$ couples to qubit $x$ at time $t$. As explained in Sec. \[subsec:local-noise\] (see also Sec. \[subsec:initial-state\]), we may assume without loss of generality that the bath is prepared in a pure state $|\Psi_B^0\rangle$ at the beginning of the computation. The Hamiltonian $H_B +H_{SB}$, along with the choice of the bath’s initial state $|\Psi_B^0\rangle$, defines our noise model. The bath fluctuations will be Gaussian if the state $|\Psi_B^0\rangle$ is a Gaussian state (that is, a generalized “squeezed” state) of the oscillator bath — a purified thermal state is a special case of such a squeezed state. 
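For a single bath mode in its vacuum state, the model just defined can be checked directly in a truncated Fock space. Since $H_B$ merely rotates each mode by $e^{-i\omega_k t}$, the vacuum two-point function of the freely evolving field is $|g|^2 e^{-i\omega(t-t')}$; the sketch below verifies this with an arbitrary coupling $g$ and mode frequency $\omega$.

```python
import numpy as np

# truncated Fock space for a single bath oscillator (a hypothetical one-mode bath)
dim = 6
a = np.diag(np.sqrt(np.arange(1, dim)), k=1)   # annihilation operator
adag = a.conj().T
vac = np.zeros(dim); vac[0] = 1.0              # oscillator vacuum state

g, omega = 0.3 + 0.1j, 2.0  # assumed coupling and mode frequency
def phi(t):
    # field with the mode rotated by the free bath evolution exp(-i*omega*t)
    return g * a * np.exp(-1j * omega * t) + np.conj(g) * adag * np.exp(1j * omega * t)

t1, t2 = 0.7, 0.2
corr = vac @ phi(t1) @ phi(t2) @ vac
expected = abs(g)**2 * np.exp(-1j * omega * (t1 - t2))
print(np.allclose(corr, expected))  # True
```

The full bath correlation function is a sum of such single-mode contributions, weighted by the couplings $g_{k,\alpha}(x,t)$.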
It is useful to define the “interaction picture” bath operator $\phi_\alpha(x,t)$ as $$\phi_\alpha(x,t)= e^{i H_{B}t} \tilde\phi_\alpha(x,t)e^{-iH_{B} t} = \sum_k \left( g_{k,\alpha}(x,t) a_ke^{-i\omega_k t} + g_{k,\alpha}^*(x,t)a_k^\dagger e^{i\omega_k t}\right)~,$$ and to define the bath’s two-point correlation function as $$\Delta(\alpha_1,x_1,t_1;\alpha_2,x_2,t_2)= \langle \Psi_B^0|\phi_{\alpha_1}(x_1,t_1)\phi_{\alpha_2}(x_2,t_2)|\Psi_B^0\rangle~.$$ We will sometimes use the abbreviated notation $\phi(1)$ for $\phi_{\alpha_1}(x_1,t_1)$ and $\Delta(1,2)$ for $\Delta(\alpha_1,x_1,t_1;\alpha_2,x_2,t_2)$; we also define $$\label{delta-bar} |\bar\Delta(1,2)|= \sum_{\alpha_1,\alpha_2} |\Delta(\alpha_1,x_1,t_1;\alpha_2,x_2,t_2)|~.$$ When we say that the noise is Gaussian, we mean that the bath variable $\phi_\alpha(x,t)$ obeys Gaussian statistics: all $n$-point bath correlation functions vanish for $n$ odd, and the $2n$-point function can be expressed in terms of two-point functions. Using $\langle ~\cdot ~\rangle$ to denote the expectation value in the state $|\Psi_B^0\rangle$, Gaussian statistics implies that $$\langle \phi(1)\phi(2)\phi(3) \cdots \phi(2n)\rangle = \sum_{\rm contractions} \Delta(i_1,i_2)\Delta(i_3,i_4)\cdots \Delta(i_{2n-1},i_{2n})~,$$ where summing over “contractions” means summing over the $(2n)!/2^nn!$ ways to divide the labels $1,2,3, \dots 2n$ into $n$ unordered pairs. For example, if $\phi$ is a Gaussian variable, then the four-point function is $$\langle \phi(1)\phi(2)\phi(3)\phi(4)\rangle=\Delta(1,2)\Delta(3,4)+\Delta(1,3)\Delta(2,4)+\Delta(1,4)\Delta(2,3)~,$$ as illustrated in Fig. \[fig:four-point\]. This expansion of the $2n$-point function in terms of two-point functions is sometimes called “Wick’s theorem.” ![image](four-point.pdf){width="16cm"} Now we can state our main result: Gaussian noise obeys the local noise condition eq. 
(\[local-noise-strength\]), with noise strength $$\label{result} \varepsilon^2= C \cdot \max_{\rm Loc} \left(\int_{1, {\rm Loc}}\int_{2, {\rm All}}|\bar\Delta(1,2)|\right)~,$$ where $C=2e\approx(2.34)^2$ is the numerical constant defined earlier (and where we have assumed $\varepsilon^2 \le e$). Here $\int_{1,\rm {Loc}}$ indicates that one leg $(x_1,t_1)$ of the two-point function is integrated over a single location in the circuit: $x_1$ is summed over the qubits participating in a particular gate, and $t_1$ is integrated over the time interval in which that gate is executed. And $\int_{2,\rm{All}}$ indicates that the other leg $(x_2,t_2)$ of the two-point function is summed over all system qubits and integrated over the entire duration of the computation. The maximum is with respect to all possible circuit locations for $(x_1,t_1)$. The threshold condition $\varepsilon < \varepsilon_0$, with $\varepsilon$ given by eq. (\[result\]), now becomes a condition on the two-point correlation function of the bath. We note that the ordering of the operators $\phi(1)$ and $\phi(2)$ does not matter in eq. (\[result\]) because $|\Delta(1,2)|=|\Delta(2,1)|$; changing the ordering modifies only the phase of $\Delta(1,2)$, not its modulus. Another noteworthy feature is that our estimate of $\varepsilon$ applies for an arbitrary system Hamiltonian. This property may seem unexpected at first, as we know that in some settings the damage caused by the noise can depend on the relation between the energy spectrum of $H_S$ and the power spectrum of the noise. For example, the spontaneous decay rate for a qubit with energy splitting $\hbar\omega$ depends on the noise power at circular frequency $\omega$. How, then, can our threshold condition depend only on the noise spectrum and not on the energy spectrum of $H_S$? The answer is that by taking the modulus $|\bar\Delta(1,2)|$ of the bath two-point function in eq. 
(\[result\]) we are already being maximally pessimistic about how the spectrum of $H_S$ matches the noise power spectrum. Thus there are both advantages and disadvantages in formulating a threshold condition that is general enough to apply for any ideal system Hamiltonian. On the one hand we find a criterion for scalable quantum computing that can be stated easily and rigorously proved by a reasonably simple argument. On the other hand, the price of such rigor is that our stated criterion may be far more demanding than it really needs to be. The crucial assumption in the derivation of eq. (\[result\]) is eq. (\[gaussian-system-bath\]), where $\tilde\phi_\alpha(x,t)$ is a “free field,” [*i.e.*]{}, obeys Gaussian statistics; thus eq. (\[gaussian-field\]) could be regarded as merely a general phenomenological representation of a Gaussian field, and not necessarily as a fundamentally accurate microscopic description of the bath. Caldeira and Leggett [@leggett] have argued that noise is expected to be Gaussian, at least to an excellent approximation, in a wide variety of realistic physical settings where the system is weakly coupled to many environmental degrees of freedom. 
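The Wick expansion of the four-point function stated above can be spot-checked by Monte Carlo, with four jointly Gaussian real random variables standing in for the field values $\phi(1),\dots,\phi(4)$; the covariance matrix below (playing the role of $\Delta(i,j)$) is an arbitrary positive-definite choice.

```python
import numpy as np

rng = np.random.default_rng(1)
# arbitrary positive-definite covariance, standing in for Delta(i, j)
cov = np.array([[1.0, 0.3, 0.2, 0.1],
                [0.3, 1.0, 0.4, 0.2],
                [0.2, 0.4, 1.0, 0.3],
                [0.1, 0.2, 0.3, 1.0]])
x = rng.multivariate_normal(np.zeros(4), cov, size=1_000_000)
four_point = (x[:, 0] * x[:, 1] * x[:, 2] * x[:, 3]).mean()
# Wick: <1234> = D(1,2)D(3,4) + D(1,3)D(2,4) + D(1,4)D(2,3)
wick = cov[0, 1] * cov[2, 3] + cov[0, 2] * cov[1, 3] + cov[0, 3] * cov[1, 2]
print(four_point, wick)  # both ~ 0.17
```

For $n=2$ the sum runs over $(2n)!/2^n n! = 3$ pairings, as in the three terms of `wick` above.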
If the initial state of the bath is a thermal state with inverse temperature $\beta=1/kT$, then the mean occupation number of each oscillator is determined by the Bose-Einstein distribution function; we have $$\begin{aligned} &&\langle \Psi_B^0|a_k^\dagger a_{k'}|\Psi_B^0\rangle= \frac{\delta_{kk'}}{e^{\beta\omega_k}-1} = \langle\Psi_B^0|a_k a_{k'}^\dagger|\Psi_B^0\rangle - 1~,\nonumber\\ &&\langle \Psi_B^0|a_k a_{k'}|\Psi_B^0\rangle= 0 = \langle \Psi_B^0|a_k^\dagger a_{k'}^\dagger|\Psi_B^0\rangle~,\end{aligned}$$ and therefore $$\begin{aligned} \Delta(\alpha_1,x_1,t_1;\alpha_2,x_2,t_2)= &&\frac{1}{2}\sum_k g_{k,\alpha_1}(x_1,t_1)g_{k,\alpha_2}^*(x_2,t_2)e^{-i\omega_k(t_1-t_2)}\left(\coth(\beta\omega_k/2)+1\right)\nonumber\\ + &&\frac{1}{2}\sum_k g_{k,\alpha_1}^*(x_1,t_1)g_{k,\alpha_2}(x_2,t_2)e^{i\omega_k(t_1-t_2)}\left(\coth(\beta\omega_k/2)-1\right)~.\end{aligned}$$ Just to be concrete, consider the case where the noise is stationary and spatially uncorrelated — each qubit has a time-independent coupling to its own independent oscillator bath (though admittedly these are dubious assumptions when multi-qubit gates are executed). 
Then $$\Delta(\alpha_1,x_1,t_1;\alpha_2,x_2,t_2)= \delta_{x_1 x_2}\Delta(\alpha_1,x_1,t_1;\alpha_2,x_1,t_2)~,$$ where $$\Delta(\alpha_1,x_1,t_1;\alpha_2,x_1,t_2)= \int_{-\infty}^\infty\frac{d\omega}{2\pi}e^{-i\omega (t_1-t_2)}\tilde \Delta_{\alpha_1\alpha_2}(x_1,\omega)$$ and $$\tilde \Delta_{\alpha_1\alpha_2}(x_1,\omega)= \begin{cases} \pi J_{\alpha_1,\alpha_2}(x_1,\omega)\left(\coth(\beta\omega/2) +1\right)& \mbox{if }\omega > 0 \\ \pi J_{\alpha_1,\alpha_2}^*(x_1,\omega)\left(\coth(\beta\omega/2) -1\right) & \mbox{if } \omega < 0 \end{cases} ~.$$ Here $J_{\alpha_1,\alpha_2}(x_1,\omega)$ is the Hermitian matrix $$J_{\alpha_1,\alpha_2}(x_1,\omega)= \sum_k \delta(\omega-\omega_k)g_{k,\alpha_1}(x_1) g_{k,\alpha_2}^*(x_1)~.$$ The function $J_{\alpha_1,\alpha_2}(x_1,\omega)$ is the spin-polarization-dependent power spectrum of the noise acting on qubit $x_1$. If the energy splitting $\hbar\omega$ of the qubit is tunable, this function can be measured by observing the qubit’s relaxation rate as a function of the energy splitting and the polarization. In principle, multi-qubit correlations in the noise can also be measured using quantum process tomography. Some implications {#sec:implications} ================= Before presenting our derivation of eq. (\[result\]) in Sec. \[sec:derivation\], we will discuss a few of its implications. Dimensional criterion {#sec:dimension} --------------------- Our expression for $\varepsilon$ in eq. (\[result\]) involves a formal integration over all space and time. If the bath correlations decay slowly in space or time, this integral might diverge in the limit of a computation that is very wide, very deep, or both. In that case, our “threshold condition” cannot be satisfied asymptotically, and we cannot conclude that quantum computation is scalable. 
On the other hand, if the integral converges “in the infrared,” then the threshold condition has value, as it establishes scalability if the coupling of the system to the bath is sufficiently weak. As long as $\varepsilon$ is finite, we can make it as small as we please by weakening the coupling of the qubits to the bath, [*i.e.*]{}, by rescaling the perturbation $H_{SB}$, or equivalently by rescaling the field $\phi_\alpha(x,t)$. What is the criterion for infrared convergence? Let us suppose that the qubits are uniformly distributed in $D$-dimensional space, and that the bath fluctuations are “critical;” [*i.e.*]{}, algebraically decaying in space and in time. We say that the scale dimension of the field $\phi$ is $\delta$ and the dynamical critical exponent is $z$ if, for large scale factor $\lambda$, the bath two-point function scales according to $$\Delta(\lambda x_1, \lambda^z t_1; \lambda x_2, \lambda^z t_2)\sim \lambda^{-2\delta} \Delta(x_1,t_1;x_2,t_2)~;$$ thus the time $t$ scales like $z$ powers of the spatial distance $x$. This means that the integral of the two-point function scales as $$\int dt~ d^Dx~ |\Delta(x,t;0,0)| \sim R^{D+z-2\delta}~,$$ where $R$ is an infrared cutoff. Convergence in the infrared (finiteness of the limit $R\to\infty$) is ensured provided that $$D+z < 2\delta~;$$ if this criterion is satisfied, then scalable quantum computing is achievable at weak coupling. If it is not satisfied, then scalable quantum computing might still be possible, but our version of the threshold theorem does not guarantee it. The same criterion was previously stated by Novais [*et al.*]{} [@novais1; @novais2], though without rigorous justification. Almost-Markovian noise ---------------------- The noise is Markovian if the bath immediately “forgets” any quantum information it receives, so that the information never returns to the system. 
Though this is never strictly the case, it can be true to an excellent approximation if the characteristic correlation time of the bath is very short compared to the time resolution with which we monitor the system’s behavior. In the Gaussian noise model, the noise is Markovian if the bath’s two-point correlation function is proportional to a delta function of the time difference, $$\Delta(t_1,x_1;t_2,x_2)\propto \delta(t_1 - t_2)~.$$ We could say that the noise is “almost Markovian” if the correlation function $\Delta$ is a sharply peaked function of the time difference, [*e.g.*]{}, with width $\tau_c$ much less than the duration $t_0$ of a single quantum gate. In that case, our expression for the noise strength becomes $$\label{almost-markovian} \varepsilon^2= C\cdot \max_{\rm Loc} \left(\int_{1, {\rm Loc}}\int_{2, {\rm All}}|\bar\Delta(1,2)|\right)\approx \Gamma t_0~;$$ for each fixed value of $t_1$, the sharply peaked $t_2$ integral generates the factor $\Gamma$, and then integrating $t_1$ over the duration of the location generates the factor $t_0$. We may interpret $\Gamma$ as an error rate per unit time, and $\Gamma t_0$ as an error probability per gate. But note that the noise strength $\varepsilon$ is not this error rate, but rather its square root $\left(\Gamma t_0\right)^{1/2}$, in effect the [*amplitude*]{} of the error. In the Markovian case, fault paths really [*do*]{} decohere, and errors [*can*]{} be assigned probabilities rather than amplitudes. But our derivation of eq. (\[result\]) is too general and insufficiently clever to exploit this property; hence our threshold condition requires the error amplitude rather than its square to be less than $\varepsilon_0$. Despite this deficiency, at least our threshold criterion for local noise, when applied to the almost-Markovian case, improves on the operator norm criterion eq. (\[short-range\]). 
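A minimal numerical sketch of eq. (\[almost-markovian\]): model $|\bar\Delta(1,2)|$ as a single sharply peaked function of the time difference — here a Lorentzian of width $\tau_c$ and area $\Gamma$, with arbitrary parameter values satisfying $\tau_c \ll t_0$ — and evaluate the double integral directly.

```python
import numpy as np
from scipy.integrate import quad

Gamma, tau_c, t0 = 50.0, 1e-3, 0.1   # arbitrary rate; bath memory time << gate duration
# |Delta(t)|: a peak of width tau_c and total area Gamma (Lorentzian sketch)
absDelta = lambda u: (Gamma / np.pi) * tau_c / (u**2 + tau_c**2)

# inner integral: s over "All" times (a window much wider than t0 and tau_c)
inner = lambda t: quad(lambda s: absDelta(t - s), -5.0, 5.0, points=[t])[0]
# outer integral: t over a single gate location of duration t0
val = quad(inner, 0.0, t0, limit=200)[0]
print(val / (Gamma * t0))  # ~ 1, i.e. eps^2 ~= C * Gamma * t0
```

For each fixed $t$ the inner integral contributes the area $\Gamma$ of the peak, and the outer integral over the location then supplies the factor $t_0$, as described in the text.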
If the two-point function has a narrow peak of width $\tau_c$ whose integral is $\Gamma$, then the [*height*]{} of the peak is of order $\Gamma/\tau_c$, and this peak height can be interpreted as the norm squared of the system-bath Hamiltonian, as in eq. (\[bath-t-equals-t-prime\]). Thus eq. (\[short-range\]) estimates the noise strength as $$\label{almost-short-range} \varepsilon \sim \sqrt{\Gamma/\tau_c}\cdot t_0~,$$ which is even more pessimistic than eq. (\[almost-markovian\]). The estimate eq. (\[almost-short-range\]) diverges as the ultraviolet cutoff $\tau_c^{-1}$ is removed. But the estimate eq. (\[almost-markovian\]) depends on the area under the peak rather than its height, and so has a smooth limit as $\tau_c\to 0$. Ohmic noise ----------- To further explore the sensitivity to high-frequency noise of our estimated noise strength, let us consider the Ohmic case, as in Sec. \[subsec:assessment\]. If the Fourier transform $\tilde\Delta(\omega)$ of the two-point correlation function is given by eq. 
(\[ohmic-spectrum\]), then the real-time correlation function is $$\Delta(t)=\int_{-\infty}^\infty \frac{d\omega}{2\pi} e^{-i\omega t}\tilde \Delta(\omega) = \frac{-A}{(t-i\tau_c)^2}~.$$ The function $\Delta(t)$ has a short-time singularity at $t=0$ that is regulated by the cutoff $\tau_c$, but the real and imaginary parts of $\Delta(t)$ both oscillate, so that its time integral vanishes: $$\int_{-\infty}^\infty dt~\Delta(t) = \tilde \Delta(\omega=0)=0~.$$ However the estimated noise strength $\varepsilon$, which is required to be small by the threshold condition, depends on the integral of the [*modulus*]{} of $\Delta(t)$, $$|\Delta(t)|= \frac{A}{t^2 + \tau_c^2}~,$$ which is of course nonnegative and has a nonvanishing time integral; the estimated noise strength is $$\label{ohmic-integral} \varepsilon = \left(C\cdot \int_{\rm Loc} dt \int_{\rm All} ds ~|\Delta(t-s)|\right)^{1/2}=\sqrt{\pi C A}\cdot \left(\frac{t_0}{\tau_c}\right)^{1/2}~.$$ This estimate is ultraviolet divergent, but comparing to eq. (\[linear-divergence\]) we see that the divergence has been improved from linear to square-root dependence on the ultraviolet cutoff $\tau_c^{-1}$. Despite the improvement, the surviving ultraviolet sensitivity in this estimate of $\varepsilon$ (for the case of Ohmic noise) is troubling, as it significantly reduces the class of noise models for which we can conclude that quantum computing is scalable. Therefore it is important to understand the origin of the ultraviolet divergence. One might suspect at first that the ultraviolet sensitivity arises because the range of the $dt$ integral in eq. (\[ohmic-integral\]) is a window of width $t_0$ with a sharp boundary. But in fact, for Ohmic noise the sharp boundary generates only a mild logarithmic ultraviolet divergence, not a power divergence. The actual reason for the power divergence is that we have pled complete ignorance regarding the frequency spectrum of the ideal system Hamiltonian $H_S$. 
Therefore, we are required to be maximally pessimistic about how the oscillating phase of the wave function arising from the ideal system dynamics matches with the phase of the bath fluctuations. For that reason our estimate of $\varepsilon$ involves an integral of the modulus of $\Delta(t)$ rather than $\Delta(t)$ itself. With further assumptions about the ideal system dynamics we ought to be able to exclude this highly pessimistic scenario, leading to an estimate of $\varepsilon$ with milder ultraviolet sensitivity. A natural idea is to attempt a “renormalization group improvement” of the noise model; that is, to “coarse grain” in time, stretching the short-time cutoff $\tau_c$, while adjusting the bath fluctuations to keep invariant the effect of the noise on the system. Formally Ohmic noise is “marginal,” meaning that the naive renormalization-group scaling generates only logarithmic cutoff dependence, not the square-root dependence found in eq. (\[ohmic-integral\]). However, rigorously justifying this naive scaling turns out to be technically difficult, in part because $H_{SB}$ couples the system operators to [*unbounded*]{} bath operators in the Gaussian noise model. It might be interesting to see if further technical assumptions (which one would hope to justify [*a posteriori*]{}) about the system-bath state $|\Psi_{SB}(t)\rangle$ during the course of the computation would lead to a less ultraviolet-sensitive threshold condition, but we have not yet succeeded in finding useful results with this character. Derivation {#sec:derivation} ========== In this section we will derive eq. (\[result\]). Our task is to estimate a value of $\varepsilon$ such that $$\||\Psi_{SB}^{\rm bad}(\mathcal{I}_r)\rangle\|^2 = \langle \Psi_{SB}^{\rm bad}(\mathcal{I}_r)|\Psi_{SB}^{\rm bad}(\mathcal{I}_r)\rangle = \langle \Psi_{SB}^0 |U_{SB}^{\rm bad}(\mathcal{I}_r)^\dagger U_{SB}^{\rm bad}(\mathcal{I}_r)|\Psi_{SB}^0\rangle \le \varepsilon^{2r}~$$ (see eq. (\[local-noise-strength\])). 
Here $U_{SB}$ is the joint system-bath time evolution operator from the beginning of the computation until just before the measurements that will read out the final result, and $U_{SB}^{\rm bad}(\mathcal{I}_r)$ denotes the sum of all the terms in the perturbation expansion of $U_{SB}$ such that the perturbation $H_{SB}$ is inserted at least once in each of the $r$ specified locations in the set $\mathcal{I}_r$. The initial state of the system and bath is assumed to be the pure product state $|\Psi_{SB}^0\rangle = |\Psi_{S}^0\rangle\otimes |\Psi_B^0\rangle$, where the bath’s state $|\Psi_B^0\rangle$ is Gaussian; that is, the expectation values of the bath operators $\{\phi_\alpha(x,t)\}$ obey Gaussian statistics in this state. For now we assume that system qubits are prepared only at the start of the computation and measured only at the end, with evolution governed by the Hamiltonian $H=H_S+H_B+H_{SB}$ in between; this assumption can be relaxed, as we will discuss in Sec. \[sec:generalizations\]. ![image](keldysh.pdf){height="8cm"} Keldysh diagrams ---------------- The terms in the perturbation expansion can be associated with diagrams, where in each diagram the perturbation $H_{SB}$ is inserted at a specified set of points in spacetime. We may think of the sum of these diagrams as representing the expectation in the state $|\Psi_{SB}^0\rangle$ of the product of the forward evolution operator of the system and bath ([*i.e.*]{}, $U_{SB}$), from the initial to the final time, followed by the backward evolution operator ([*i.e.*]{}, $U_{SB}^\dagger$), from the final to the initial time. It is convenient to fold the diagram into a hairpin shape as in Fig. \[fig:keldysh\], so that the diagram has two branches that are aligned in time. The upper branch represents the evolution forward in time; here the inserted perturbations are “time-ordered,” meaning that operators inserted at later times act after operators inserted at earlier times. 
The lower branch represents the evolution backward in time; here the inserted perturbations are “anti-time-ordered,” meaning that operators inserted at earlier times act after operators inserted at later times. Furthermore, all operators inserted on the lower branch act after operators inserted on the upper branch. Diagrams with this structure are sometimes called “Keldysh diagrams.” In each diagram, the evolution of system and bath is governed by the uncoupled Hamiltonian $H_0=H_S+H_B$ in between successive insertions of $H_{SB}$. Since $|\Psi_{SB}^0\rangle$ is a product state, the diagram’s contribution to the expectation value factorizes into the product of a system expectation value and a bath expectation value. Consider a diagram where the operator $\sigma_{\alpha_j}\otimes \tilde\phi_{\alpha_j}$ is inserted on the upper branch acting on qubit $x_j$ at time $t_j$, for $j=1,2,3,\dots, n$, and the operator $\sigma_{\beta_k}\otimes \tilde\phi_{\beta_k} $ is inserted on the lower branch acting on qubit $y_k$ at time $s_k$, for $k=1,2,3,\dots, m$. Taking into account the uncoupled evolution in between insertions, and the Keldysh operator ordering rules (where $t_n > t_{n-1} > \cdots > t_1$ and $s_m < s_{m-1} < \cdots < s_1$), this diagram’s contribution to the expectation value $\langle \Psi_{SB}^0|U_{SB}^\dagger U_{SB}|\Psi_{SB}^0\rangle$ is $$\begin{aligned} i^m(-i)^n\times &&\langle \Psi_S^0| \sigma_{\beta_m}(y_m,s_m) \cdots \sigma_{\beta_1}(y_1,s_1)\sigma_{\alpha_n}(x_n,t_n)\cdots \sigma_{\alpha_1}(x_1,t_1)|\Psi_S^0\rangle\nonumber\\ \times && \langle\Psi_B^0| \phi_{\beta_m}(y_m,s_m) \cdots \phi_{\beta_1}(y_1,s_1)\phi_{\alpha_n}(x_n,t_n)\cdots \phi_{\alpha_1}(x_1,t_1) |\Psi_B^0\rangle~.\end{aligned}$$ Here $\sigma_\alpha(x,t)= U_S(t)^\dagger\sigma_\alpha(x)U_S(t)$ and $\phi_\alpha(x,t)=U_B(t)^\dagger\tilde\phi_\alpha(x,t)U_B(t)$ are the “interaction picture” operators that evolve according to the uncoupled system-bath dynamics. 
Using the Gaussian statistics ([*i.e.*]{}, “Wick’s theorem”), the bath expectation value can be expressed as a sum of products of Keldysh-ordered two-point correlation functions. Summing $\{\alpha_1,\alpha_2,\dots \alpha_n\}$ and $\{\beta_1,\beta_2,\dots,\beta_m\}$ from $1$ to $3$, summing $\{x_1,x_2,\dots x_n\}$ and $\{y_1, y_2, \dots y_m\}$ over all qubits, integrating $\{t_1,t_2,\dots, t_n\}$ and $\{s_1,s_2,\dots, s_m\}$ over the interval from the initial to the final time, and finally summing $n$ and $m$ from $0$ to $\infty$, we would recover the full expectation value $\langle \Psi_{SB}^0|U_{SB}^\dagger U_{SB}|\Psi_{SB}^0\rangle=1$. More precisely, to generate the full system-bath evolution operator $U_{SB}$, for each fixed $n$ we sum $\{(x_1,t_1),(x_2,t_2),(x_3,t_3), \dots, (x_n,t_n)\}$ over all time-ordered sets of $n$ spacetime positions inside the circuit. This is equivalent to integrating each $(x_j,t_j)$ over all spacetime, and then dividing by $n!$ to compensate for the overcounting of the sets (each set has been included $n!$ times). Similarly, to generate $U_{SB}^\dagger$, for each fixed $m$ we sum $\{(y_1,s_1),(y_2,s_2),(y_3,s_3), \dots, (y_m,s_m)\}$ over all anti-time-ordered sets of $m$ spacetime positions inside the circuit. One marked location ------------------- But we do not want to sum all the diagrams; instead we want to sum all and only those such that all of the $r$ locations in the set $\mathcal{I}_r$ are bad on both the upper and lower branches. Let us first consider the case $r=1$, where one particular circuit location has been “marked” as bad. To get a useful bound, it is helpful to organize this sum in a particular way. Because the marked location is bad, there must be at least one insertion of the perturbation inside this location on both the upper and lower branch, as in Fig. \[fig:r-equals-one\]. Therefore, there must be an [*earliest*]{} insertion inside the marked location on each branch. 
Also, if the marked location is a two-qubit gate, then the earliest insertion could act on either one of the two qubits. For now, let us fix on each branch the time of the earliest insertion inside the marked location, the qubit on which the earliest insertion acts, and the corresponding Pauli operator. Later on we will integrate the time of the earliest insertion over the marked location, and also sum over Pauli operators and the qubits at the location, but not yet. ![image](r-equals-one.pdf){width="9.5cm"} With the earliest insertion fixed on each branch, and after expanding the bath expectation value in terms of bath two-point functions, we can identify two classes of diagrams. In “class 1” diagrams, the earliest insertions on the two branches are contracted with one another, and in “class 2” diagrams they are not, as shown in Fig. \[fig:class-one-and-two\]. We will find upper bounds on the sum of all the diagrams in each class. ![image](class-one-and-two.pdf){height="5cm"} ### Class 1 diagrams {#subsubsec:class1} First consider the class 1 diagrams. Each diagram in the class has as a factor the two-point function $\Delta(\beta,y,s;\alpha,x,t)$, where $\sigma_\alpha$ acts on qubit $x$ at time $t$ on the upper branch, and $\sigma_\beta$ acts on qubit $y$ at time $s$ on the lower branch. The simplest diagram in the class, which we will call the “skeleton” diagram, has only two insertions and one contraction; its value is $$\begin{aligned} \label{skeleton} \langle \Psi_S^0| \sigma_{\beta}(y,s) \sigma_{\alpha}(x,t)|\Psi_S^0\rangle \times \langle\Psi_B^0| \phi_{\beta}(y,s) \phi_{\alpha}(x,t) |\Psi_B^0\rangle~.\end{aligned}$$ The other class 1 diagrams are obtained by “dressing” this skeleton in all possible ways, by adding further insertions and contractions. However, remember that we have fixed $t$ and $s$ to be the times of the earliest insertions of the perturbation on the upper and lower branches respectively. 
Therefore, an additional insertion is “legal” only if it avoids times earlier than $t$ inside the marked location on the upper branch, and times earlier than $s$ inside the marked location on the lower branch. With this proviso, all class 1 diagrams arise when we dress the skeleton class 1 diagram with all possible additional legal insertions and contractions. The class 1 diagrams can be summed up and expressed in a compact form. For this purpose we introduce what we call the “hybrid picture,” which is in a sense intermediate between the interaction and Heisenberg pictures. Let us define the “hybrid Hamiltonian” $H^{\rm hyb}$ by $$H^{\rm hyb}= \begin{cases} H_S + H_B & \mbox{in each marked location prior to the earliest insertion, }\\ H_S+H_B +H_{SB} & \mbox{elsewhere. } \end{cases}$$ That is, in the hybrid Hamiltonian, the perturbation $H_{SB}$ “turns off” inside the marked location before time $t$ on the upper branch and before time $s$ on the lower branch. When we sum up all the legal insertions and contractions, the value eq. (\[skeleton\]) of the skeleton diagram is transformed into $$\begin{aligned} \label{class1} \langle \Psi_{SB}^0| \sigma_{\beta}^{\rm hyb}(y,s) \sigma_{\alpha}^{\rm hyb}(x,t)|\Psi_{SB}^0\rangle \times \langle\Psi_B^0| \phi_{\beta}(y,s) \phi_{\alpha}(x,t) |\Psi_B^0\rangle~.\end{aligned}$$ Here the interaction picture operator $\sigma_\alpha(x,t)$ has been replaced by the hybrid-picture operator $\sigma_{\alpha}^{\rm hyb}(x,t)=U_{SB}^{\rm hyb}(t)^\dagger \sigma_\alpha(x)U_{SB}^{\rm hyb}(t)$, and furthermore the expectation value of the system operator is now evaluated in the system-bath state $|\Psi_{SB}^0\rangle$ rather than the system state $|\Psi_{S}^0\rangle$. If the expression eq. (\[class1\]) is expanded in powers of $H^{\rm hyb}_{SB}$, all the legal insertions and only the legal insertions are generated. 
And for each choice of insertions, evaluating the bath expectation value using Wick’s theorem yields a sum over all the contractions included in class 1. Thus, the assumption that the bath fluctuations are Gaussian is crucial for the derivation of eq. (\[class1\]). Therefore, we obtain an exact expression for the sum of all the diagrams in class 1 from eq. (\[class1\]) by now integrating the earliest insertions on both branches over the marked location, finding $$\begin{aligned} \label{class1-sum} &&\sum \mbox{Class 1 Diagrams}\nonumber\\ &&=\sum_{x,y \in {\rm Loc}} \int_{\rm Loc}dt \int_{\rm Loc} ds \sum_{\alpha,\beta} \langle \Psi_{SB}^0| \sigma_{\beta}^{\rm hyb}(y,s) \sigma_{\alpha}^{\rm hyb}(x,t)|\Psi_{SB}^0\rangle \langle\Psi_B^0| \phi_{\beta}(y,s) \phi_{\alpha}(x,t) |\Psi_B^0\rangle~.\end{aligned}$$ Now, the operator $\sigma_\alpha^{\rm hyb}(x,t)$ differs from the Pauli operator $\sigma_\alpha(x)$ by a mere unitary change of basis, and therefore has sup norm $\|\sigma_\alpha^{\rm hyb}(x,t)\|=1$. From eq. (\[class1-sum\]) we then conclude that $$\begin{aligned} \label{class1-sum-bound} && \left|\sum \mbox{Class 1 Diagrams}\right|\nonumber\\ && \le \sum_{x,y \in {\rm Loc}} \int_{\rm Loc}dt \int_{\rm Loc} ds \sum_{\alpha,\beta} \left|\langle\Psi_B^0| \phi_{\beta}(y,s) \phi_{\alpha}(x,t) |\Psi_B^0\rangle\right|~ = \int_{1,{\rm Loc}}\int_{2,{\rm Loc}}|\bar \Delta(1,2)|~,\end{aligned}$$ in the notation of eq. (\[delta-bar\]). This is our bound on the sum of all class 1 diagrams. Note that in eq. (\[class1-sum\]) the integrand is the product of a bath two-point correlation function and a “hybridized” system two-point correlation function. If the bath correlation function has a high-frequency component and the system correlation function does not, then the contribution to the time integral arising from the high-frequency bath fluctuations may be strongly suppressed. But the estimate in eq. 
(\[class1-sum-bound\]) is very crude — it applies irrespective of the frequency spectrum of the system correlation function — and we could get a better estimate if we assumed that the system correlation function has little power at high frequency. Furthermore, such an assumption seems physically reasonable; the natural frequencies of the system dynamics are set by the energy splitting of the logical states and by the characteristic time scale ([*e.g.*]{}, the gate duration $t_0$) on which the time-dependent system Hamiltonian varies. Unfortunately, though, finding a rigorous bound on the high-frequency hybrid system correlation function is not trivial, because the hybrid Hamiltonian includes the system-bath coupling $H_{SB}$, an unbounded operator. If the bath has a low temperature, then we expect that high-frequency bath oscillators are likely to be in their ground states, but to prove a threshold theorem, we need to rule out relatively unlikely events that might foil the computation. That is not so easy to do, especially if the Hamiltonian is unbounded. So in this paper we will mostly pursue the consequences of the crude estimate eq. (\[class1-sum-bound\]) and other similar estimates, leaving for future work the challenge of improving the results via tighter bounds on the integral in eq. (\[class1-sum\]). However, we [*can*]{} obtain a stronger bound for the case of pure dephasing noise, discussed in Sec. \[sec:diagonal\]. To prevent confusion, we remark that our “hybrid picture” is a rather strange concept, in that the Hamiltonian that governs evolution on the upper branch of the Keldysh diagram is different than the Hamiltonian for the lower branch. If we were using Keldysh diagrams the way they are usually used, to track the evolution of the system’s density operator, this feature would be unacceptable because time evolution would not preserve the density operator’s trace. 
For us, though, the hybrid Hamiltonian is merely a technical trick for bounding the sum of a class of diagrams, and should not be interpreted literally as the Hamiltonian of a physical system.

### Class 2 diagrams

Now consider the class 2 diagrams. The earliest insertions inside the marked location on the upper and lower branches of the Keldysh diagram are not contracted with one another; rather, each is contracted with an insertion at another location. Let us say that the earliest insertion at $(x,t)$ in the marked location on the upper branch is contracted with an insertion at spacetime position $(z,u)$, which could be on either the upper or lower branch, and that the earliest insertion at $(y,s)$ in the marked location on the lower branch is contracted with an insertion at $(w,v)$, which also could be on either the upper or lower branch. In principle $z$, $w$ could be the spatial labels of any two qubits in the computer, and $u$, $v$ could be any time between the initial and final time, [*except*]{} that the insertions at $(z,u)$ and $(w,v)$ must be [*legal*]{}; that is, neither can be inside the marked location on the upper branch earlier than $t$, or inside the marked location on the lower branch earlier than $s$. For the class 2 diagrams, let us for now imagine fixing the insertions at $(z,u)$ and at $(w,v)$ that are contracted with the earliest insertions; we will integrate over these spacetime positions later on.
The simplest diagram in the class, the “skeleton” diagram with only two contractions, has the value (except for a phase factor that depends on the choice of branch for the insertions at $(z,u)$ and $(w,v)$) $$\begin{aligned} \label{skeleton-class2} && \langle \Psi_S^0| T^*\big(\sigma_{\beta}(y,s) \sigma_{\delta}(w,v) \sigma_{\gamma}(z,u)\sigma_{\alpha}(x,t)\big)|\Psi_S^0\rangle\nonumber\\ && \times \langle\Psi_B^0| T^*\big(\phi_{\beta}(y,s) \phi_{\delta}(w,v)\big)|\Psi_B^0\rangle \langle\Psi_B^0|T^*\big(\phi_{\gamma}(z,u) \phi_{\alpha}(x,t) \big)|\Psi_B^0\rangle~.\end{aligned}$$ where $T^*$ denotes the proper Keldysh ordering. Other diagrams in class 2 are obtained by dressing this skeleton with additional insertions and contractions in all possible legal ways. As in our discussion of the class 1 diagrams, summing all the ways to dress the skeleton transforms the interaction-picture system operators into hybrid-picture operators, yielding (up to a phase) $$\begin{aligned} \label{class2} && \langle \Psi_{SB}^0| T^*\big(\sigma^{\rm hyb}_{\beta}(y,s) \sigma^{\rm hyb}_{\delta}(w,v) \sigma^{\rm hyb}_{\gamma}(z,u)\sigma^{\rm hyb}_{\alpha}(x,t)\big)|\Psi_{SB}^0\rangle\nonumber\\ && \times \langle\Psi_B^0| T^*\big(\phi_{\beta}(y,s) \phi_{\delta}(w,v)\big)|\Psi_B^0\rangle \langle\Psi_B^0|T^*\big(\phi_{\gamma}(z,u) \phi_{\alpha}(x,t) \big)|\Psi_B^0\rangle~.\end{aligned}$$ To obtain the sum of all class 2 diagrams, we now sum over Pauli operator labels and spacetime positions, obtaining $$\begin{aligned} \label{class2-sum} &&\sum \mbox{Class 2 Diagrams}\nonumber\\ &&=\sum_{x,y \in {\rm Loc}} \, \sum_{z,w \in {\rm All}'} \int_{\rm Loc}dt \int_{\rm Loc} ds \int_{{\rm All}'}du \int_{{\rm All}'} dv \sum_{\alpha,\beta, \gamma,\delta}({\rm phase})\nonumber\\ && \times\langle \Psi_{SB}^0| T^*\big(\sigma^{\rm hyb}_{\beta}(y,s) \sigma^{\rm hyb}_{\delta}(w,v) \sigma^{\rm hyb}_{\gamma}(z,u)\sigma^{\rm hyb}_{\alpha}(x,t)\big)|\Psi_{SB}^0\rangle\nonumber\\ && \times \langle\Psi_B^0| 
T^*\big(\phi_{\beta}(y,s) \phi_{\delta}(w,v)\big)|\Psi_B^0\rangle \langle\Psi_B^0|T^*\big(\phi_{\gamma}(z,u) \phi_{\alpha}(x,t) \big)|\Psi_B^0\rangle~.\end{aligned}$$ Here the notation All$'$ indicates that the qubit positions $z$ and $w$ are summed over [*both*]{} branches of the Keldysh diagram, and that the times $u$ and $v$ are also integrated over both branches. Furthermore, it is understood that the integral over $u$ and $v$ is restricted to legal insertions (times in the upper-branch marked location earlier than $t$, and in the lower-branch marked location earlier than $s$, are excluded). As for the class 1 diagrams, we obtain a bound on the sum of class 2 diagrams by noting that the expectation value of the product of system operators has modulus no larger than one, finding $$\begin{aligned} \label{class2-sum-bound} \left|\sum\mbox{Class 2 Diagrams}\right| \le \left( 2\int_{1,{\rm Loc}}\int_{2,{\rm All}}|\bar\Delta(1,2)|\right)^2~.\end{aligned}$$ To obtain eq. (\[class2-sum-bound\]) we have noted that the Keldysh ordering is irrelevant when we take the modulus of the bath two-point function, and that in the sum of the moduli of all diagrams we can extend the integral over legal insertions to an integral over all insertions to obtain an upper bound. Here the notation All indicates that the second leg of the correlation function is summed over all qubits and integrated over all times; the factor of 2 accompanies the integral $\int_{2,{\rm All}}$ because the insertions at $(z,u)$ and at $(w,v)$ can be on either one of the two branches of the Keldysh diagram. For our upper bound on the sum of class 1 diagrams, both legs of the bath’s two-point function are integrated over the marked location, while in the upper bound on the sum of class 2 diagrams, one leg is integrated over the marked location, while the other is integrated over all qubits and all times. 
This distinction is not so important if the spatial and temporal correlations decay rapidly, but it can be quite important if the decay is slow, as we have already discussed in Sec. \[sec:dimension\]. The upper bound on the sum of class 1 diagrams is still valid, though weaker, if we extend the integral for one of the legs from the marked location to all of spacetime. Then by adding together the contributions from diagrams of both classes, we find $$\label{r-equals-one-bound} \||\Psi_{SB}^{\rm bad}(\mathcal{I}_{r=1})\rangle\|^2 \le E + 4E^2~,$$ where $$\label{E-defined} E= \max_{\rm Loc}\left(\int_{1,{\rm Loc}}\int_{2,{\rm All}}|\bar\Delta(1,2)|\right)~.$$ If $E$ is small (the typical case of interest), then the class 1 diagrams dominate, and the contribution from class 2 diagrams is higher order in $E$. We emphasize again that the integral $\int_{2,{\rm All}}$ in the definition of $E$ is confined to a single branch of the Keldysh diagram, and that the factor of 2 in eq. (\[class2-sum-bound\]) arises because the insertion inside the marked location can be contracted with an insertion on either branch.

Many marked locations {#subsec:many-marked}
---------------------

Now we want to consider the case where there are $r$ marked locations. The perturbation $H_{SB}$ must be inserted at least once in each of the $r$ marked locations, on both the upper and lower branches of the Keldysh diagram. In order to get an upper bound on the sum of all such diagrams, we will organize the sum following the same ideas as in our discussion of the $r=1$ case. In each marked location on each branch, there must be an earliest insertion of the perturbation, and this earliest insertion is contracted with another insertion elsewhere, which could be on either branch. A skeleton graph contains a “minimal” set of contractions — each contraction in the skeleton has at least one leg attached to the earliest insertion in a marked location.
We distinguish two types of contractions in the skeleton: an “internal” contraction links two earliest insertions, and an “external” contraction links an earliest insertion with another legal insertion which is not an earliest insertion. The skeleton diagrams can be classified according to the number $k$ of internal contractions. If there are $r$ marked locations, and therefore altogether $2r$ marked locations between the two branches, then $k$ can vary from 0 to $r$; if there are $k$ internal contractions then there are $2(r-k)$ external contractions. For $r=1$, a skeleton diagram in what we called class 1 has $k=1$ internal contractions, and a skeleton diagram in class 2 has $k=0$ internal contractions. For $r=2$, the ten distinct skeleton diagrams are shown in Fig. \[fig:r-equals-two\]. There are three diagrams with two internal contractions, six diagrams with one internal contraction, and one diagram with no internal contractions.

![image](r-equals-two.pdf){height="5cm"}

The value of a skeleton diagram, with all insertions and contractions fixed, can be expressed as a product of the expectation value of a string of Keldysh-ordered interaction-picture system operators $$\begin{aligned} \label{skeleton-k-internal-system} \langle \Psi_S^0| T^*\big( \sigma(i_1)\cdots \sigma(i_k) \sigma(j_1)\cdots \sigma(j_k) \sigma(m_1)\cdots \sigma(m_{2(r-k)}) \sigma(n_1)\cdots \sigma(n_{2(r-k)}) \big)|\Psi_S^0\rangle\end{aligned}$$ and a product of Keldysh-ordered bath two-point functions $$\begin{aligned} \label{skeleton-k-internal-bath} && {\rm (phase)}\times \langle\Psi_B^0| T^*\big(\phi(i_1) \phi(j_1)\big)|\Psi_B^0\rangle \cdots \langle\Psi_B^0| T^*\big(\phi(i_k) \phi(j_k)\big)|\Psi_B^0\rangle \nonumber\\ && \times\langle\Psi_B^0|T^*\big(\phi(m_1) \phi(n_1) \big)|\Psi_B^0\rangle\cdots \langle\Psi_B^0| T^*\big(\phi(m_{2(r-k)}) \phi(n_{2(r-k)})\big)|\Psi_B^0\rangle ~.\end{aligned}$$ Here we have attached labels $1,2,3, \dots, 2r$ to the $2r$ marked locations on the two
branches, and [e.g.]{}, $\sigma(i_1)$ is shorthand for $\sigma_{\alpha_{i_1}}(x_{i_1},t_{i_1})$, where $(x_{i_1},t_{i_1})$ is the spacetime position of the first insertion inside marked location number $i_1$. In eqs. (\[skeleton-k-internal-system\]) and (\[skeleton-k-internal-bath\]), locations $i_1$ through $i_k$ are internally contracted with locations $j_1$ through $j_k$, while the remaining earliest insertions in locations $m_1$ to $m_{2(r-k)}$ are contracted with insertions labeled $n_1$ through $n_{2(r-k)}$ which are not earliest insertions. When we sum up all ways to dress the skeleton with additional insertions and contractions, we obtain an expression of the same form, but with the interaction-picture system operators replaced by hybrid-picture system operators, and $|\Psi_{S}^0\rangle$ replaced by $|\Psi_{SB}^0\rangle$. Bounding the system operator expectation value by one, and summing over the Pauli operator labels, we obtain a bound on the sum of all dressed skeleton diagrams, $$\begin{aligned} \left|\sum \mbox{Dressed Skeletons}\right| \le \prod_{a=1}^k |\bar\Delta(i_a,j_a)|\prod_{b=1}^{2(r-k)}|\bar\Delta(m_b,n_b)|~.\end{aligned}$$ Now, keeping fixed the choice of which locations are internally contracted with one another, we can integrate each $i_a$, $j_a$, and $m_b$ over the specified marked location, while integrating $n_b$ over all locations on both branches. The integral is bounded above by $$\label{bound-contractions-fixed} \int \left|\sum \mbox{Dressed Skeletons}\right| \le \prod_{a=1}^k G(i_a,j_a)\prod_{b=1}^{2(r-k)} 2E(m_b)~,$$ where $$G(i_a,j_a) = \int_{1,{\rm Loc}(i_a)}\int_{2,{\rm Loc}(j_a)}|\bar\Delta(1,2)|~,\quad E(m_b)= \int_{1,{\rm Loc}(m_b)}\int_{2,{\rm All}}|\bar\Delta(1,2)|~,\quad$$ and where the factor of 2 multiplying $E(m_b)$ results from summing $n_b$ over both branches. Now, with the number $k$ of internal contractions still fixed, we can sum eq. 
(\[bound-contractions-fixed\]) over all the ways that $k$ contracted pairs of locations can be chosen from among $2r$ locations. We note that $$\sum_{{\rm contractions}(k)}\left(\prod_{a=1}^k G(i_a,j_a) \right)\le \frac{1}{ k!}\left(\sum_{i,j=1\atop i<j}^{2r} G(i,j)\right)^k~;$$ this inequality holds because $\left(\sum_{i,j=1\atop i<j}^{2r} G(i,j)\right)^k$ contains the term corresponding to each contraction $k!$ times (once for each ordering of the $k$ pairs), and also contains other nonnegative terms. Furthermore, $$\sum_{i,j=1\atop i<j}^{2r} G(i,j)\le \sum_{i=1}^{2r} E(i)~,$$ because the expression on the right-hand side contains all the terms on the left-hand side, plus other nonnegative terms. We conclude that $$\sum_{{\rm contractions}(k)}\int \left|\sum \mbox{Dressed Skeletons}\right| \le \frac{1}{k!} \left( 2rE\right)^k (2E)^{2(r-k)}~,$$ with $E$ defined as in eq. (\[E-defined\]). It remains to sum over $k$, the number of internal contractions: $$\left| \sum \mbox{Diagrams}\right| \le \sum_{k=0}^r \left(\sum_{{\rm contractions}(k)}\int \left|\sum \mbox{Dressed Skeletons}\right| \right)\le \sum_{k=0}^r \frac{r^k}{k!} (2E)^{2r-k}~.$$ Therefore, if we assume that $2E \le 1$, $$\label{diagram-bound-hooray} \||\Psi_{SB}^{\rm bad}(\mathcal{I}_{r})\rangle\|^2 \le (2E)^r \sum_{k=0}^r \frac{r^k}{k!} (2E)^{r-k} \le (2E)^r \sum_{k=0}^\infty \frac{r^k}{k!} = (2eE)^r = \varepsilon^{2r}~,$$ where $$\varepsilon ~=~ \sqrt{2eE}~ \approx ~2.33 \sqrt{E}~.$$ Thus we have derived eq. (\[result\]). We note that eq. (\[diagram-bound-hooray\]) also applies for $r=1$, and in that case is weaker than the upper bound $\||\Psi_{SB}^{\rm bad}(\mathcal{I}_{r=1})\rangle\|^2\le E+4E^2 = E(1+4E)\le 3E$ found in eq. (\[r-equals-one-bound\]), assuming $E \le 1/2$.
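The two combinatorial inequalities used above can be spot-checked by brute force. The sketch below is ours and purely illustrative: the weights $G(i,j)$ are random numbers standing in for the location-pair integrals, and the final loop confirms $\sum_{k=0}^r \frac{r^k}{k!}(2E)^{2r-k}\le(2eE)^r$ whenever $2E\le 1$.

```python
import math
import random
from itertools import combinations

random.seed(0)

def sum_over_pairings(G, locs, k):
    """Sum of the product of G-weights over all ways to choose k disjoint pairs from locs."""
    if k == 0:
        return 1.0
    first, rest = locs[0], list(locs[1:])
    total = 0.0
    # either pair `first` with some partner...
    for i, partner in enumerate(rest):
        others = rest[:i] + rest[i + 1:]
        total += G[frozenset((first, partner))] * sum_over_pairings(G, others, k - 1)
    # ...or leave `first` uncontracted, if enough locations remain
    if len(rest) >= 2 * k:
        total += sum_over_pairings(G, rest, k)
    return total

r = 3
locs = list(range(2 * r))
G = {frozenset(p): random.random() for p in combinations(locs, 2)}
G_total = sum(G.values())

# sum over sets of k disjoint pairs  <=  (sum_{i<j} G(i,j))^k / k!
for k in range(r + 1):
    assert sum_over_pairings(G, locs, k) <= G_total**k / math.factorial(k) + 1e-12

# sum_{k=0}^r  r^k/k! (2E)^{2r-k}  <=  (2eE)^r   whenever 2E <= 1
for E in (0.05, 0.2, 0.5):
    for rr in (1, 2, 5, 10):
        lhs = sum(rr**k / math.factorial(k) * (2 * E)**(2 * rr - k) for k in range(rr + 1))
        assert lhs <= (2 * math.e * E)**rr + 1e-12
```

The first inequality is tight for $k=1$ (every pair is then a valid contraction) and strict for $k\ge 2$, since the $k$-th power also contains overlapping pairs.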
Generalizations {#sec:generalizations}
===============

Initial state of the bath {#subsec:initial-state}
-------------------------

In our analysis, we have found it convenient to assume that the initial state of the bath is a pure state, but the analysis also applies if the bath starts out in a mixed state. Actually, we can “purify” a mixed state $\rho_B^0$ of the bath by introducing a fictitious reference system $R$, and choosing the pure state $|\Psi_{BR}^0\rangle$ of $BR$ so that $$\rho_B^0 ={\rm tr}_R \left(|\Psi_{BR}^0\rangle\langle \Psi_{BR}^0|\right)~.$$ Our previous analysis then applies, if we consider $BR$ to be an “extended” bath, such that the system interacts with only the subsystem $B$ of the extended bath. However, the state $|\Psi_{BR}^0\rangle$ is not arbitrary; for our argument to apply it must be chosen so that the interaction-picture free field $\phi_\alpha(x,t)$ has Gaussian statistics in this state. For this it suffices for $|\Psi_{BR}^0\rangle$ to be an [*undisplaced Gaussian squeezed state*]{}. If we consider the reference system $R$, like the bath $B$, to be a system of uncoupled oscillators, then an undisplaced Gaussian squeezed state is obtained by applying a unitary transformation $V$ to the oscillator ground state $|0_B,0_R\rangle$, where the action of $V$ on the annihilation operators is homogeneous and linear: $$\label{bogolubov} V^{-1} a_k V = \sum_j M_{kj} a_j + \sum_{j}N_{kj} a_j^\dagger~;$$ here the set $\{a_k\}$ includes annihilation operators for both the $B$ oscillators and the $R$ oscillators, and the matrices $M$ and $N$ obey constraints that ensure preservation of the commutation relations. $V$ satisfies eq. (\[bogolubov\]) if its logarithm is strictly quadratic in creation and annihilation operators, with no linear term.
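The constraints on $M$ and $N$ can be spelled out (this explicit form is ours, not stated in the text): preserving $[a_k,a_l^\dagger]=\delta_{kl}$ and $[a_k,a_l]=0$ under eq. (\[bogolubov\]) requires $MM^\dagger - NN^\dagger = I$ and $MN^T = NM^T$. A minimal check, for an illustrative two-mode squeeze mixing one $B$ mode with one $R$ mode:

```python
import numpy as np

# Two-mode squeeze: V^{-1} a_B V = cosh(r) a_B + sinh(r) a_R^dagger,
# and symmetrically for a_R; i.e. M is diagonal, N is off-diagonal.
r = 0.7
c, s = np.cosh(r), np.sinh(r)
M = np.array([[c, 0.0], [0.0, c]])
N = np.array([[0.0, s], [s, 0.0]])

# Preservation of the commutation relations requires
#   M M^dagger - N N^dagger = I   and   M N^T = N M^T.
assert np.allclose(M @ M.conj().T - N @ N.conj().T, np.eye(2))
assert np.allclose(M @ N.T, N @ M.T)
```

Here the first condition reduces to $\cosh^2 r - \sinh^2 r = 1$, and the second holds because $M$ is a multiple of the identity.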
A special case is the thermal state of the bath, whose purification can be written as $$\begin{aligned} \exp\left( \sum_k r_k \left(a_{B,k}^\dagger a_{R,k}^\dagger - a_{B,k} a_{R,k}\right)\right)|0_B,0_R\rangle =\bigotimes_k\left( \sqrt{1-\gamma_k^2}\sum_{n_k=0}^\infty\gamma_k^{n_k} |\left(n_k\right)_B,\left(n_k\right)_R\rangle\right)~,\end{aligned}$$ where $\gamma_k^2=\tanh^2 r_k=e^{-\beta\omega_k}$ and $\beta$ is the inverse temperature. But our arguments apply to any Gaussian state $V|0_B,0_R\rangle$, since the action of $V$ in eq. (\[bogolubov\]) maps free fields to new free fields that still satisfy Wick’s theorem and have mean zero.

Measurement and entropy removal
-------------------------------

For fault-tolerant computing to work, there must be a mechanism for flushing the entropy introduced by noise. In the scheme we have analyzed, entropy is removed from the computer because error-correction gadgets use a supply of fresh ancilla qubits that are discarded after use. It has been understood that the initial state $|\Psi_S^0\rangle$ of the system includes all of the ancilla qubits that will be needed during the full course of the computation. But to model the actual situation, in which ancilla qubits are prepared as needed just before being used, we imagine that ancilla qubits are perfectly isolated from the bath until “opened” at the onset of the gadget in which they participate. Similarly, we imagine that the measurements of all ancilla qubits are delayed until the very end of the computation, but that these qubits are “closed” (their coupling to the bath is turned off) at the conclusion of the gadget in which they participate. With these stipulations, our noise model is equivalent to one in which ancilla qubits are repeatedly measured, reset, and reused. We model the noisy preparation of an ancilla qubit as an ideal preparation followed by interaction with the bath for a specified duration.
Since the state of the bath may evolve during the computation, the noise in the preparation may also depend on when the qubit is prepared. Still, we are taking it for granted that “pretty good” fresh ancillas can be prepared at any time, or equivalently that qubits can be effectively erased at any time. Implicitly, we have adopted a “two-reservoir” hypothesis. One reservoir, which we have called the “bath,” interacts with the system qubits, causing noise. The other reservoir is the entropy “sink,” which carries away heat each time a qubit is erased. In our model, the bath and the sink are uncoupled, and the sink has infinite heat capacity — it never heats up no matter how many qubit erasures occur. Because the bath interacts with the system, in principle it might be driven far from its initial state in a manner that depends on the ideal computation being simulated. Our arguments have shown that, at least if the bath is a system of uncoupled oscillators and its initial state is Gaussian, the bath will not be pushed to a highly adversarial state that overpowers our efforts to make the computation robust. One wonders how that conclusion could be altered if we relax the two-reservoir hypothesis by coupling the sink and the bath, or by eliminating the sink entirely. For example, we could attempt to model measurement and erasure more realistically by including entropy flow from the system to the bath. In that case, a bath of unbounded heat capacity would be needed to remove entropy from a noisy computation of unbounded size, and our modeling would need to incorporate a mechanism for equilibration of the bath. The goal would be to specify conditions under which the entropy flow from system to bath can be maintained well enough to support scalable quantum computation. For now, we put aside this ambitious project as an open problem for future consideration.

Postselection
-------------

Some fault-tolerant gadgets include [*postselection*]{}.
For example, a gadget might consume a disposable piece of “quantum software,” an encoded ancilla state that is prepared offline and verified before coming into contact with the encoded data processed by the computation. The verification procedure includes measurements that check the accuracy of the preparation of the software, and the software is accepted only if the measurements have suitable outcomes; otherwise the software is rejected and the preparation is repeated. Therefore, estimates of the reliability of gadgets are conditioned on acceptance of the software, which is said to have a “postselected” state. Some fault-tolerant protocols, such as those analyzed in [@knill; @AGP2; @Reichardt06] make “extreme” use of postselection, meaning that the software is usually rejected and the preparation is typically repeated many times before it finally succeeds. For such protocols, noise with adversarial correlations can be a formidable foe, since the adversary is empowered to enhance the probability of acceptance for atypical fault paths that are especially damaging. Thus, the threshold estimates based on extreme postselection proved in [@AGP2; @Reichardt06] apply for independent noise but not for local stochastic noise. But other protocols, such as those analyzed in [@AGP; @Aliferis07; @Aliferis06c], make only “modest” use of postselection, meaning that software is accepted with reasonably high probability. For such protocols, a gadget’s failure rate, conditioned on acceptance of the software, can be easily estimated using the Bayes rule, even for the case of local stochastic noise. The threshold estimate for local noise whose proof is sketched in Appendix A applies to a protocol with no postselection at all. For local noise, as for local stochastic noise, we do not know how to extend this proof to a protocol with extreme postselection. But it [*can*]{} be extended to a protocol with modest postselection. 
This observation is useful, because threshold estimates based on protocols with modest postselection are typically higher than estimates based on protocols without postselection. Before considering the case of local coherent noise, we recall how protocols with modest postselection can be analyzed for the case of local stochastic noise [@AGP]; to be concrete, we will discuss the case where the level-1 gadgets are based on a quantum error-correcting code that corrects one error in a block. A properly designed gadget processes encoded data correctly if the software is accepted and the gadget contains no more than one fault. Therefore, the [*joint*]{} probability $P_{\rm joint}$ of acceptance of the software and failure of the gadget is bounded above by $B\varepsilon^2+D\varepsilon^3$ for local stochastic noise with strength $\varepsilon$, where $B$ is the number of malignant pairs of locations in the gadget where faults can cause failure (assuming the software is accepted), and $D$ is the total number of sets of three locations in the gadget. On the other hand, the software will surely be accepted if there are no faults in the software preparation circuit, so the probability of acceptance $P_{\rm accept}$ is bounded below by $1-C\varepsilon$, where $C$ is the total number of locations in the circuit for software preparation and verification. Using the Bayes rule, we obtain an upper bound on the probability $P_{\rm conditional}$ of failure conditioned on acceptance: $$\begin{aligned} \label{P-conditional} P_{\rm conditional}=\frac{P_{\rm joint}}{P_{\rm accept} }\le \frac{B\varepsilon^2+D\varepsilon^3}{1-C\varepsilon}\le \varepsilon^2/\varepsilon_0= \varepsilon^{(1)} ~,\end{aligned}$$ where $$\begin{aligned} \varepsilon_0^{-1}=\frac{1}{2}(B+C)\left(1+\sqrt{1+4D/(B+C)^2}\right)~\end{aligned}$$ is determined by solving the equation $(B\varepsilon_0^2 + D\varepsilon_0^3)/(1-C\varepsilon_0)=\varepsilon_0$. 
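As a numerical sanity check (the counts $B$, $C$, $D$ below are made-up placeholders, not values for any actual gadget), one can confirm that the closed form for $\varepsilon_0$ solves the defining equation, and that the bound $P_{\rm conditional}\le\varepsilon^2/\varepsilon_0$ holds for all $\varepsilon\le\varepsilon_0$:

```python
import math

# Illustrative placeholder counts for a hypothetical gadget.
B, C, D = 300.0, 50.0, 5000.0

# Closed form for eps0 from the text.
eps0 = 1.0 / (0.5 * (B + C) * (1.0 + math.sqrt(1.0 + 4.0 * D / (B + C) ** 2)))

# eps0 solves (B eps^2 + D eps^3) / (1 - C eps) = eps.
lhs = (B * eps0**2 + D * eps0**3) / (1.0 - C * eps0)
assert math.isclose(lhs, eps0, rel_tol=1e-9)

# For eps <= eps0 the bound eps^2/eps0 dominates, because
# (B + D eps)/(1 - C eps) is increasing in eps and equals 1/eps0 at eps0.
for frac in (0.1, 0.5, 0.9, 1.0):
    eps = frac * eps0
    p_cond = (B * eps**2 + D * eps**3) / (1.0 - C * eps)
    assert p_cond <= eps**2 / eps0 + 1e-15
```

The closed form follows from the quadratic $D\varepsilon_0^2 + (B+C)\varepsilon_0 - 1 = 0$ obtained by dividing the defining equation by $\varepsilon_0$.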
This argument gives a useful result if our lower bound on $P_{\rm accept}$ is not too small. In practice, it is often the case that $C\ll B$ and therefore $C\varepsilon_0\ll 1$, so that the “postselection correction” arising from division by $P_{\rm accept}$ is a small effect. There is another way to describe this estimate that is more readily generalized to the case of local coherent noise, and also clarifies why the estimate applies to adversarial local stochastic noise. Imagine that $n$ software preparation and verification attempts are executed in parallel, where we label the attempts by an index $i=1,2,3,\dots, n$, and suppose for the moment that the noise is uncorrelated. Now we distinguish $n+1$ possible ways for the gadget to be bad, depending on which preparation attempt (if any) is the first to be accepted. If ancilla 1 is accepted, then the gadget fails with probability $P_{\rm joint}$. But ancilla 1 is rejected with probability $P_{\rm reject}=1-P_{\rm accept}$, so the probability that ancilla 1 is rejected, ancilla 2 is accepted, and the gadget fails is $P_{\rm reject}P_{\rm joint}$. Similarly, the probability that ancilla $m$ is the first to be accepted and the gadget fails is $P_{\rm reject}^{m-1}P_{\rm joint}$, and the probability that all $n$ ancillas are rejected is $P_{\rm reject}^n$. Summing the probability of all failure scenarios, we find $$\begin{aligned} P_{\rm fail}= P_{\rm joint}\left(\sum_{m=1}^n P_{\rm reject}^{m-1}\right) + P_{\rm reject}^n = \frac{P_{\rm joint}}{P_{\rm accept} }\left(1-P_{\rm reject}^n\right) +P_{\rm reject}^n =\frac{P_{\rm joint}}{P_{\rm accept} } + P_{\rm reject}^n\left(1-\frac{P_{\rm joint}}{P_{\rm accept}}\right)~.\end{aligned}$$ In the limit $n\to\infty$, we recover the estimate eq. (\[P-conditional\]), and even for $n=2$ we have $P_{\rm fail}=O(\varepsilon^2)$. 
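The rearrangement above is a finite geometric sum, easy to verify numerically (the probabilities below are arbitrary illustrative values):

```python
# Verify the failure-scenario sum against its closed form.
p_joint, p_accept, n = 3e-4, 0.92, 4
p_reject = 1.0 - p_accept

# Direct sum: ancilla m is the first accepted (m = 1..n), or all n are rejected.
p_fail_direct = sum(p_reject**(m - 1) * p_joint for m in range(1, n + 1)) + p_reject**n

# Closed form from the text.
p_fail_closed = p_joint / p_accept + p_reject**n * (1.0 - p_joint / p_accept)

assert abs(p_fail_direct - p_fail_closed) < 1e-12

# As n grows, p_fail approaches the conditional estimate p_joint / p_accept.
assert abs(p_fail_closed - p_joint / p_accept) < p_reject**n
```
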
Furthermore, the upper bound on $P_{\rm fail}$ obtained from upper bounds on $P_{\rm joint}$ and $P_{\rm reject}$ applies not only to independent noise but also to correlated local stochastic noise — it can be regarded as an estimate of the effective noise strength $\varepsilon^{(1)}$ after one level reduction step. For local stochastic noise, we sum over all failure scenarios at each of $r$ marked locations, and conclude that the probability that all $r$ locations are bad is bounded above by $\left(\varepsilon^{(1)}\right)^r$. We can also apply this strategy of summing over all failure scenarios in the case of local coherent noise. First we note that, to preserve the framework assumed in Sec. \[sec:derivation\], we may imagine that all measurements in verification steps are postponed until the end of the computation. In the actual circuit, the “verification qubits” are measured inside gadgets, and then subsequent operations are conditioned on the classical measurement outcomes. To model this circuit in the framework where all measurements are postponed, we suppose that a verification qubit decouples from the bath at the time when it is measured in the actual circuit, and we replace operations conditioned on measurement outcomes by noiseless quantum gates conditioned on the state of the verification qubit, after decoupling from the bath but prior to being measured. Then we can estimate $\| |\Psi_{SB}^{\rm bad}(\mathcal{I}_r^{(1)})\rangle\|^2$ by summing over $n + 1$ failure scenarios at each of the $r$ marked locations. In scenario 1, ancilla 1 is accepted and the gadget using ancilla 1 (including the preparation and verification of ancilla 1) has two or more faults. In scenario $m$, for $m=2,3, \dots, n$, the first $m-1$ ancillas are rejected, ancilla $m$ is accepted, and the gadget using ancilla $m$ has two or more faults. In scenario $n+1$, all $n$ ancillas are rejected. Since the scenarios are perfectly distinguishable, they should be summed incoherently. 
Now, in order for an ancilla to be rejected, there must be at least one fault in the circuit that prepares and verifies that ancilla. Therefore, in scenario $m$, we sum coherently over all fault paths such that there is at least one fault in each of the first $m-1$ ancilla preparation/verification circuits, and at least two faults in the gadget using ancilla $m$. This sum includes all of the fault paths that contribute to the badness of the gadget under scenario $m$, but it also includes other fault paths that do not contribute to scenario $m$. However, since the scenarios are distinguishable, there is no harm in including these additional unwanted fault paths, if our goal is to obtain an upper bound on $\| |\Psi_{SB}^{\rm bad}(\mathcal{I}_r^{(1)})\rangle\|^2$. This coherent sum for each scenario can be estimated by the method described in Appendix A. One finds that, for gadgets such that the “postselection correction” to $\varepsilon^{(1)}$ is small in the case of local stochastic noise, the correction is small for local coherent noise as well. In [@Aliferis06c], the lower bound on the accuracy threshold $\varepsilon_0 \ge 1.94 \times 10^{-4}$ was established for local stochastic noise, based on a protocol with modest use of postselection. Though we have not done the calculation in detail, we expect that a similar estimate $\varepsilon_0\sim 10^{-4}$, based on the same protocol, also applies to the threshold noise strength for local coherent noise. (The argument in [@fibonacci] achieves a higher threshold estimate for local stochastic noise, but uses a different method that is less easily adapted to the case of local coherent noise.) Of course, for the case of Gaussian noise, if gadgets include multiple parallel attempts to prepare and verify software, then all of these attempts should be included in the integral $\int_{2,{\rm All}}$ in our estimate of the noise strength in eq. (\[result\]).
Other considerations
--------------------

It would be desirable to extend the derivation of our threshold result in several other directions. One possible approach is to allow the bath fluctuations to be weakly non-Gaussian by including small anharmonic corrections in the bath Hamiltonian $H_B$. But, though the effects of bath self-interactions can be analyzed perturbatively by standard methods, obtaining useful rigorous results summed to all orders of perturbation theory is not simple. Another worthy goal, already emphasized at the end of Sec. \[sec:implications\], is to formulate a threshold condition less sensitive to the high-frequency fluctuations of the bath; [*i.e.*]{}, to noise with a frequency large compared to the natural frequencies of the ideal system Hamiltonian $H_S$. In principle this might be done by “integrating out” high-frequency noise, obtaining an effective noise model with a lower frequency cutoff that faithfully reproduces the impact of the noise on the simulated computation. Making such an analysis rigorous is another challenging open problem. In the next Section, though, we will discuss one special case in which an improved threshold estimate less sensitive to high-frequency noise can be achieved.

Diagonal Gaussian noise {#sec:diagonal}
=======================

As we discussed in Sec. \[subsubsec:class1\], our general arguments do not place any constraints on the frequency spectrum of the “hybrid-picture” system operators. Therefore, we were forced to take the modulus of the bath two-point function in our estimate of the noise strength $\varepsilon$. And as a result, our estimate has a sensitivity to high-frequency bath fluctuations that seems rather artificial. There is at least one case where we have much better analytic control over the time-dependence of the system operators, allowing us to obtain a better estimate of the noise strength that has milder sensitivity to high-frequency noise.
That is the case of pure dephasing noise, which we will discuss now. In this noise model, the bath couples only to the $z$-components of the qubits, so that the system-bath Hamiltonian is $$\label{diagonal-system-bath} H_{SB}=\sum_x \sigma_z(x)\otimes \phi(x,t)~,$$ where $\phi(x,t)$ is a Gaussian bath variable with mean zero. To further simplify the discussion (whose purpose is merely illustrative anyway), we will also assume there are no multi-qubit correlations in the noise (even though this might not be an accurate description of the noise in multi-qubit gates). That is, we assume $\langle\phi(x,t)\phi(y,s)\rangle=0$ for $x\ne y$, so that in effect each qubit is coupled to its own independent bath. A scheme for fault-tolerant quantum computation customized for highly biased noise dominated by dephasing was formulated in [@AP-bias] and further discussed in [@brito]. In this scheme, all gates are teleported. Furthermore, the only fundamental operations used are single-qubit preparations, single-qubit measurements in the $\sigma_x$-eigenstate basis, and two-qubit controlled-phase ([cphase]{}) gates. A [cphase]{} gate is diagonal in the computational ([*i.e.*]{}, $\sigma_z$-eigenstate) basis, with eigenvalues $(1,1,1,-1)$. Thus it can be realized by a time-dependent two-qubit system Hamiltonian that is also diagonal: $$H_S= f(t)\left( \sigma_z\otimes \sigma_z - \sigma_z\otimes I - I\otimes\sigma_z\right)~,$$ where $\int dt f(t) = \pi/4$. This diagonal system Hamiltonian commutes with the system-bath Hamiltonian eq. (\[diagonal-system-bath\]), whose action on the system qubits is diagonal. As in previous Sections, we model a noisy qubit preparation as an ideal preparation followed by interaction with the oscillator bath, and we model a noisy measurement as interaction with the bath followed by an ideal measurement.
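As a quick sanity check on this construction (a minimal numpy sketch, not part of the original analysis), one can verify that evolving under this diagonal Hamiltonian with integrated pulse area $\int dt\, f(t)=\pi/4$ yields the [cphase]{} eigenvalues $(1,1,1,-1)$ up to an overall phase:

```python
import numpy as np

# Diagonal two-qubit Hamiltonian that generates the cphase gate.
sz = np.diag([1.0, -1.0])
I2 = np.eye(2)
H = np.kron(sz, sz) - np.kron(sz, I2) - np.kron(I2, sz)

theta = np.pi / 4                               # integrated pulse area int f(t) dt
U = np.diag(np.exp(-1j * theta * np.diag(H)))   # H is diagonal, so U is too
U = U / U[0, 0]                                 # strip the global phase

assert np.allclose(np.diag(U), [1, 1, 1, -1])   # cphase eigenvalues
```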
We can analyze the effect of the noise on the computation using interaction-picture perturbation theory, and in fact we can estimate a [*probability*]{} (rather than an amplitude) for the outcome of a qubit measurement to differ from the measurement outcome in the ideal computation. For each qubit, we distinguish between [*good*]{} diagrams, in which the perturbation $H_{SB}$ is inserted an [*even*]{} number of times in between the (ideal) qubit preparation and the (ideal) qubit measurement, and the [*bad*]{} diagrams, in which the perturbation is inserted an [*odd*]{} number of times in between the preparation and the measurement. Because $\sigma_z$ commutes with the ideal system Hamiltonian $H_S$, and because $\sigma_z^2=I$, in all good diagrams the outcome of the final $\sigma_x$ measurement agrees with the result in the ideal quantum circuit, while in bad diagrams the measurement outcome is flipped. Furthermore, the good and the bad part of the system-bath state $|\Psi_{SB}\rangle$ are mutually orthogonal. To see this, imagine evaluating the inner product $\langle\Psi^{\rm good}_{SB}|\Psi^{\rm bad}_{SB}\rangle$ between the good and bad parts of the state for a single qubit and its associated bath. Since $|\Psi^{\rm bad}_{SB}\rangle$ has an odd number of perturbation insertions and $|\Psi^{\rm good}_{SB}\rangle$ has an even number, each Keldysh diagram contributing to $\langle\Psi^{\rm good}_{SB}|\Psi^{\rm bad}_{SB}\rangle$ is proportional to the expectation value of a product of an odd number of interaction-picture bath fields. All such diagrams vanish, since $\phi(x,t)$ is Gaussian with mean zero. Because the good and bad parts of $|\Psi_{SB}\rangle$ are perfectly distinguishable, we can regard $\langle\Psi^{\rm bad}_{SB}|\Psi^{\rm bad}_{SB}\rangle$ as the [*probability*]{} of error in the final qubit measurement. ![image](dephasing.pdf){height="3cm"} Let us compute this probability. 
The sum of all Keldysh diagrams (both good and bad) contributing to $\langle\Psi_{SB}|\Psi_{SB}\rangle$ for a single qubit is the exponential of the sum of “connected” diagrams. There are three connected diagrams, shown in Fig. \[fig:dephasing\]. Thus $$\label{diagonal-sb-norm} 1= \langle\Psi_{SB}|\Psi_{SB}\rangle = \exp\left( C_U + C_L + D\right)~;$$ here, $$C_U=-\int_{t>s} dt ~ds ~\langle \phi(t)\phi(s)\rangle~$$ is the connected diagram in which two insertions on the upper Keldysh branch are contracted, $$C_L=-\int_{t<s} dt ~ds ~\langle \phi(t)\phi(s)\rangle~$$ is the connected diagram in which two insertions on the lower branch are contracted, and $$D=\int dt ~ds ~\langle \phi(t)\phi(s)\rangle= -(C_U+C_L)~$$ is the connected diagram in which an insertion on the upper branch is contracted with an insertion on the lower branch. (In all three diagrams, the factor due to the expectation value of the product of system operators is simply 1.) When eq. (\[diagonal-sb-norm\]) is expanded in powers of $D$, terms with an odd number of powers contribute to $\langle\Psi^{\rm bad}_{SB}|\Psi^{\rm bad}_{SB}\rangle$, because $H_{SB}$ is inserted an odd number of times on each branch, and terms with an even number of powers contribute to $\langle\Psi^{\rm good}_{SB}|\Psi^{\rm good}_{SB}\rangle$, because $H_{SB}$ is inserted an even number of times on each branch. Thus, $$\begin{aligned} &&{P}^{\rm bad}\equiv\langle \Psi_{SB}^{\rm bad}|\Psi_{SB}^{\rm bad}\rangle=e^{-D} \sinh D~, \nonumber\\ &&{P}^{\rm good}\equiv\langle \Psi_{SB}^{\rm good}|\Psi_{SB}^{\rm good}\rangle=e^{-D} \cosh D~.\end{aligned}$$ If $T$ is the elapsed time between the preparation and measurement of the qubit, then $$D=\int_{-\infty}^{\infty}\frac{d\omega}{2\pi}\tilde\Delta(\omega)\frac{4\sin^2(\omega T/2)}{\omega^2}~,$$ where $$\Delta(t-s) \equiv \langle \phi(t)\phi(s)\rangle =\int_{-\infty}^\infty \frac{d\omega}{2\pi}e^{-i\omega(t-s)}\tilde\Delta(\omega)~$$ (we have assumed the noise is stationary).
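These expressions are easy to check numerically. The sketch below (with an arbitrary illustrative value of $D$) confirms that $P^{\rm bad}+P^{\rm good}=1$, and verifies by quadrature that the kernel appearing in the frequency-domain formula for $D$ is $|\int_0^T dt\, e^{-i\omega t}|^2 = 4\sin^2(\omega T/2)/\omega^2$:

```python
import numpy as np

# e^{-D}(sinh D + cosh D) = 1, so the good and bad probabilities sum to 1.
D = 0.02                                   # hypothetical illustrative value
P_bad = np.exp(-D) * np.sinh(D)
P_good = np.exp(-D) * np.cosh(D)
assert abs(P_bad + P_good - 1.0) < 1e-12

# The frequency-domain kernel in D is |int_0^T e^{-i w t} dt|^2.
for w, T in [(1.3, 2.0), (0.7, 5.0)]:
    N = 200000
    t = (np.arange(N) + 0.5) * (T / N)     # midpoint quadrature grid
    val = np.sum(np.exp(-1j * w * t)) * (T / N)
    assert abs(abs(val)**2 - 4 * np.sin(w * T / 2)**2 / w**2) < 1e-6
```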
In the case of zero-temperature Ohmic noise, with $\tilde\Delta(\omega)$ given by eq. (\[ohmic-spectrum\]), we find that $$D= -\int_0^T dt \int_0^T~ds ~\frac{A}{(t-s -i\tau_c)^2}= A~\ln \left(\frac{T^2+\tau_c^2}{\tau_c^2}\right)\approx 2A~\ln(T/\tau_c)~.$$ Thus the quantity $D$ (an upper bound on the probability $P^{\rm bad}$ of a measurement error) has only a mild logarithmic sensitivity to the ultraviolet cutoff $\tau_c^{-1}$, in contrast to the power dependence on the cutoff found in eq. (\[ohmic-integral\]). This improvement occurs because $D$ is found by integrating the bath two-point function $\Delta(t)$, rather than its modulus $|\Delta(t)|$, which can be justified because the perturbation $H_{SB}$ commutes with the ideal system Hamiltonian $H_S$. Even this logarithmic dependence on $\tau_c$ may be spurious; it arises because we have assumed that the ideal qubit preparation (at time $t=0$) and qubit measurement (at time $t=T$) are instantaneous. The divergence would be softened further if we used a smoother model of preparation and measurement. Perhaps the logarithmic dependence of the error probability on the elapsed time $T$ should not be taken too seriously; it applies only if the noise spectrum is Ohmic down to a frequency of order $T^{-1}$. Let us nevertheless pursue the implications of this behavior. The crux of the scheme formulated in [@AP-bias] is a teleported logical [cnot]{} gate protected against dephasing by an $n$-qubit repetition code (where $n$ is odd). This [cnot]{} gadget contains four logical measurements, each of which is decoded by a majority vote. Furthermore, for each qubit, there are at most $3n$ time steps (each of duration $t_0$) in between the preparation and measurement of the qubit, where a [cphase]{} gate acts on the qubit in each step. 
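As a numerical check (an illustrative sketch; the parameter values are assumptions chosen to match the example discussed in the next paragraph), one can compare the closed form for $D$ with a direct double integration of the Ohmic two-point function, and evaluate the repetition-code [cnot]{} error bound $4\binom{n}{(n+1)/2}(P^{\rm bad})^{(n+1)/2}$ for $A=10^{-3}$, $t_0/\tau_c=10^3$, $n=9$:

```python
import numpy as np
from math import comb, log

# (1) Closed form for D under zero-temperature Ohmic noise vs. a direct
#     midpoint-rule double integral (hypothetical illustrative values).
A_amp, tau_c, T = 1e-3, 1.0, 50.0
N = 1500
t = (np.arange(N) + 0.5) * (T / N)           # midpoint grid on [0, T]
u = t[:, None] - t[None, :]                  # all t - s differences
D_num = np.real(np.sum(-A_amp / (u - 1j * tau_c)**2)) * (T / N)**2
D_exact = A_amp * np.log((T**2 + tau_c**2) / tau_c**2)
assert abs(D_num - D_exact) < 0.01 * D_exact

# (2) Repetition-code cnot error bound with the values quoted in the text.
n, t0_over_tauc = 9, 1e3
P_bad_bound = 2 * 1e-3 * log((3 * n + 2) * t0_over_tauc)   # bound on P_bad
eps_cnot = 4 * comb(n, (n + 1) // 2) * P_bad_bound**((n + 1) // 2)
assert eps_cnot < 1.85e-6
```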
Therefore, the probability $\varepsilon_{\rm CNOT}$ of an encoded error in this [cnot]{} gadget can be bounded as $$\label{cnot-error} \varepsilon_{\rm CNOT} \le 4{n\choose \frac{n+1}{2}}\left(P^{\rm bad}\right)^{\frac{n+1}{2}}~,$$ where $$P^{\rm bad} ~\le ~D ~\le~ 2A \ln\big((3n+2)t_0/\tau_c\big)~$$ (here we have allowed the noise to act for a time $t_0$ during each [cphase]{} gate and also during the initial preparation and final measurement). Hence the logical [cnot]{} gate is well protected if $A$ is small and $t_0/\tau_c$ is not too large. While the underlying noise model is Gaussian dephasing noise, under our assumptions the effective noise model for the [cnot]{} gadgets is independent stochastic noise. Just to be specific, suppose that $A =10^{-3}$ and $t_0/\tau_c= 10^3$. Then, for code length $n=9$, eq. (\[cnot-error\]) yields $\varepsilon_{\rm CNOT} < 1.85 \times 10^{-6}$. This [cnot]{} error rate is well below the accuracy threshold for the local stochastic noise model, indicating that these logical [cnot]{} gates are adequate for scalable quantum computing. Conclusions {#sec:conclusions} =========== The quantum accuracy threshold theorem indicates that scalable quantum computing is feasible in principle. But will fault-tolerant quantum computation [*really*]{} work? One concern is that the noise models assumed by theorists are highly idealized, at best crude approximations to the noise in actual devices. In formulating these models, one desires on the one hand to capture essential features of realistic noise, but on the other hand to allow a succinct and elegant analysis of the computation’s reliability. Seeking an appropriate balance between these two desiderata, we have proved in this paper a new version of the threshold theorem that applies to Gaussian quantum noise, which is physically well motivated and analytically tractable. Our result shows that quantum computing is scalable if the noise power spectrum obeys a certain condition. 
Compared to previous results regarding the effectiveness of fault-tolerant methods against non-Markovian noise [@terhal; @AGP; @AKP], our threshold condition has two advantages: it is expressed in terms of experimentally observable features of the noise, and it is less sensitive to high-frequency noise. As mentioned in Sec. \[sec:generalizations\], it might be useful to extend our results by relaxing the noise model in several ways, for example by including weak non-Gaussian corrections to the bath fluctuations, or by modeling more realistically the dissipative flow of heat from system to bath. It should also be possible to make further improvements in the sensitivity of the threshold condition to high-frequency noise; however, an improved condition would be likely to depend on the details of the frequency spectrum of the ideal system dynamics, and deriving it would require a more complicated analysis. Experimenters tend to worry less about high-frequency noise than about low-frequency noise, particularly $1/f$ dephasing noise. We anticipate that low frequency noise in quantum gates can be suppressed substantially through clever design of pulse sequences, leaving weak residual noise to be tamed via the fault-tolerant methods we have studied here. Joining pulse shaping methods with fault-tolerant circuit construction will be a fruitful topic for future research. We thank Panos Aliferis, Matt Hastings, Alexei Kitaev, Eduardo Novais, and Gil Refael for useful discussions. This research is supported in part by DoE under Grant No. DE-FG03-92-ER40701, NSF under Grant No. PHY-0456720, NSA under ARO Contract No. W911NF-05-1-0294, and by the Gordon and Betty Moore Foundation. [99]{} P. Shor, “Fault-tolerant quantum computation,” in [*Proc. 37$^{th}$ Annual Symposium on Foundations of Computer Science*]{}, p. 56, Los Alamitos, CA, IEEE Computer Society Press (1996), arXiv:quant-ph/9605011. D. Aharonov and M. 
Ben-Or, “Fault-tolerant quantum computation with constant error,” [*Proc. 29th Ann. ACM Symp. on Theory of Computing*]{}, p. 176 (New York, ACM, 1998), arXiv:quant-ph/9611025; D. Aharonov and M. Ben-Or, “Fault-tolerant quantum computation with constant error rate,” arXiv:quant-ph/9906129. A. Yu. Kitaev, “Quantum computations: algorithms and error correction,” Russian Math. Surveys [52]{}, 1191-1249 (1997). E. Knill, R. Laflamme, W. H. Zurek, “Resilient quantum computation: error models and thresholds,” Proc. Roy. Soc. London, Ser. A 454, 365 (1998), arXiv:quant-ph/9702058. B. M. Terhal and G. Burkard, “Fault-tolerant quantum computation for local non-Markovian noise,” Phys. Rev. A 71, 012336 (2005), arXiv:quant-ph/0402104. P. Aliferis, D. Gottesman, and J. Preskill, “Quantum accuracy threshold for concatenated distance-3 codes,” Quantum Inf. Comp. 6, 97–165 (2006), arXiv:quant-ph/0504218. D. Aharonov, A. Kitaev, and J. Preskill, “Fault-tolerant quantum computation with long-range correlated noise,” Phys. Rev. Lett. 96, 050504 (2006), arXiv:quant-ph/0510231. P. Aliferis, “Level Reduction and the Quantum Threshold Theorem,” Ph.D. thesis, California Institute of Technology, Pasadena, CA (2007), arXiv:quant-ph/0703230. P. Aliferis and J. Preskill, “The Fibonacci scheme for fault-tolerant quantum computation,” arXiv:0809.5063. P. Aliferis, D. Gottesman, and J. Preskill, “Accuracy threshold for postselected quantum computation,” Quantum Inf. Comp. 8, 181 (2008), arXiv:quant-ph/0703264. B. W. Reichardt, “Error-detection-based quantum fault tolerance against discrete Pauli noise,” Ph.D. thesis, University of California, Berkeley, CA (2006), arXiv:quant-ph/0612004. E. Knill, “Quantum computing with realistically noisy devices,” Nature 434, 39–44 (2005), arXiv:quant-ph/0410199. R. Raussendorf, J. Harrington, and K. Goyal, “Topological fault-tolerance in cluster state quantum computation,” New Journal of Physics [9]{}, 199 (2007), arXiv:quant-ph/0703143. D. P. DiVincenzo and P.
Aliferis, “Effective fault-tolerant quantum computation with slow measurements,” Phys. Rev. Lett. 98, 220501 (2007), arXiv:quant-ph/0607047. P. Aliferis and B. Terhal, “Fault-tolerant quantum computation for local leakage faults,” Quant. Inf. Comp. 7, 139–156 (2007), arXiv:quant-ph/0511065. K. M. Svore, D. P. DiVincenzo, and B. M. Terhal, “Noise threshold for a fault-tolerant two-dimensional lattice architecture,” Quantum Inf. Comp. 7, 297–318 (2007), arXiv:quant-ph/0604090. R. Alicki, “Comment on ‘Resilient Quantum Computation in Correlated Environments: A Quantum Phase Transition Perspective’ and ‘Fault-tolerant Quantum Computation with Long-range Correlated Noise’,” arXiv:quant-ph/0702050 (2007). A. P. Hines and P. C. E. Stamp, “Decoherence in quantum walks and quantum computers,” arXiv:0711.1555. R. Alicki, M. Horodecki, P. Horodecki, and R. Horodecki, “Dynamical description of quantum computing: generic nonlocality of quantum noise,” [*Phys. Rev. A*]{} 65, 062101 (2002), arXiv:quant-ph/0105115. E. Novais, E. R. Mucciolo, H. U. Baranger, “Resilient quantum computation in correlated environments: A quantum phase transition perspective,” [*Phys. Rev. Lett.*]{} 98, 040501 (2007), arXiv:quant-ph/0607155. E. Novais, E. R. Mucciolo, H. U. Baranger, “Hamiltonian formulation of quantum error correction and correlated noise: The effects of syndrome extraction in the long time limit,” [*Phys. Rev. A*]{} 78, 012314 (2008), arXiv:0710.1624. A. O. Caldeira and A. J. Leggett, “Quantum tunneling in a dissipative system,” Ann. Phys. 149, 374 (1983). P. Aliferis and A. W. Cross, “Subsystem fault tolerance with the Bacon-Shor code,” [*Phys. Rev. Lett.*]{} [98]{}, 220502 (2007), arXiv:quant-ph/0610063. P. Aliferis and J. Preskill, “Fault-tolerant quantum computation against biased noise,” arXiv:0710.1301. P. Aliferis, F. Brito, D. P. DiVincenzo, J. Preskill, M. Steffen, and B. M. Terhal, “Fault-tolerant computing with biased-noise superconducting qubits,” arXiv:0806.0383.
Threshold theorem for local noise {#sec:appendix} ================================= Here we will briefly sketch the proof of the quantum accuracy threshold theorem for local noise, following the argument in [@AGP]. We assume that the joint evolution of the quantum computer (the system $S$) and its environment (the bath $B$) is governed by the Hamiltonian $H=H_S+H_B +H_{SB}$, where the perturbation $H_{SB}$ is responsible for the deviation of the system from its ideal evolution. Although this framework can be generalized (as discussed in Sec. \[sec:generalizations\]), we also assume that the system qubits are initialized ideally in the pure state $|\Psi_S^0\rangle$ before the Hamiltonian evolution begins and measured ideally after it ends. Furthermore, the initial state of the bath is the pure state $|\Psi_B^0\rangle$. Just before the ideal measurements are performed on the system qubits, the joint state of the system and bath is $|\Psi_{SB}\rangle=U_{SB}|\Psi_{SB}^0\rangle$, where $U_{SB}$ is the time-evolution operator determined by $H$, and $|\Psi_{SB}^0\rangle=|\Psi_S^0\rangle \otimes |\Psi_B^0\rangle$. We obtain a fault-path expansion for $|\Psi_{SB}\rangle$ by expanding $U_{SB}$ as a power series in $H_{SB}$, and for each term in this expansion we declare a level-0 circuit location to be bad if $H_{SB}$ acts nontrivially somewhere within that location. For any specified set $\mathcal{I}_r$ of $r$ locations in the circuit, we denote by $|\Psi_{SB}^{\rm bad}(\mathcal{I}_r)\rangle$ the sum of all the terms in the fault-path expansion of $|\Psi_{SB}\rangle$ such that all of these $r$ locations are bad. The noise is local with strength $\varepsilon$ if $$\label{local-noise-strength-again} \| |\Psi_{SB}^{\rm bad}(\mathcal{I}_r)\rangle\| \le \varepsilon^r~.$$ Our objective is to show that scalable quantum computing is possible provided that $\varepsilon < \varepsilon_0$, where $\varepsilon_0$ is (a lower bound on) the accuracy threshold. 
Suppose that a universal set of fault-tolerant level-1 gadgets can be constructed such that a 1-gadget containing fewer than $s$ faulty level-0 gates simulates the corresponding ideal 0-gate correctly. We can estimate the effective noise strength for a level-1 simulation using the following observation: Consider a set $\mathcal{I}$ of level-0 locations in a quantum circuit. Then the sum of all fault paths such that at least $s$ of the locations in the set $\mathcal{I}$ are faulty can be expressed as $$\begin{aligned} \label{inclusion-exclusion} |\Psi_{SB}(\ge s {\rm ~faults ~in~}\mathcal{I})\rangle =\sum_{\ell=s}^{|\mathcal{I}|}(-1)^{\ell-s}{\ell-1\choose s-1} \left(\sum_{\mathcal{I}_\ell\subseteq \mathcal{I}}|\Psi_{SB}^{\rm bad}(\mathcal{I}_\ell)\rangle\right)~,\end{aligned}$$ where $\sum_{\mathcal{I}_\ell}$ denotes the sum over all subsets of $\mathcal{I}$ that contain $\ell$ elements. Eq. (\[inclusion-exclusion\]) follows from the “inclusion-exclusion principle” of combinatorics. For example, in the case $s{=}1$ it becomes $$\begin{aligned} &&|\Psi_{SB}(\ge 1 {\rm ~fault~in~}\mathcal{I})\rangle \nonumber\\ &&= \sum_{\mathcal{I}_1\subseteq \mathcal{I}}|\Psi_{SB}^{\rm bad}(\mathcal{I}_1)\rangle -\sum_{\mathcal{I}_2\subseteq \mathcal{I}}|\Psi_{SB}^{\rm bad}(\mathcal{I}_2)\rangle +\sum_{\mathcal{I}_3\subseteq \mathcal{I}}|\Psi_{SB}^{\rm bad}(\mathcal{I}_3)\rangle -\sum_{\mathcal{I}_4\subseteq \mathcal{I}}|\Psi_{SB}^{\rm bad}(\mathcal{I}_4)\rangle + \cdots~,\end{aligned}$$ whose origin is easy to understand. The first term counts correctly each fault path with exactly one fault in $\mathcal{I}$, but it double-counts each fault path with exactly two faults, and this over-counting is corrected by the second term. The first term counts three times each fault path with exactly three faults, and the second term subtracts these fault paths ${3}\choose{2}$ times; this under-counting is corrected by the third term. And so on. The norm of the left-hand side of eq. 
(\[inclusion-exclusion\]) is bounded above by the sum of the norms of the terms on the right-hand side. Using $\||\Psi_{SB}^{\rm bad}(\mathcal{I}_\ell)\rangle\| \le \varepsilon^\ell$ and denoting $|\mathcal{I}|=A$ we find $$\begin{aligned} \label{s-fault-bound} \big\||\Psi_{SB}(\ge s {\rm ~faults~in~}A{\rm ~locations})\rangle\big\| && \le \sum_{\ell=s}^{A}{\ell-1\choose s-1}{A\choose \ell}\varepsilon^\ell \le{A\choose s}\varepsilon^s\sum_{\ell=s}^A {A-s\choose \ell-s}\varepsilon^{\ell-s}\nonumber\\ && \le {A\choose s}\varepsilon^s\sum_{t=0}^\infty \frac{(A-s)^t\varepsilon^t}{t!} = {A\choose s}e^{(A-s)\varepsilon} \varepsilon^s \le \zeta {A\choose s}\varepsilon^s~;\end{aligned}$$ in the second step we have used ${\ell-1\choose s-1}{A\choose \ell}=\frac{s}{\ell}{A\choose s}{A-s\choose \ell-s}\le {A\choose s}{A-s\choose \ell-s}$, and $\zeta$ is a constant satisfying $\zeta \ge e^{(A-s)\varepsilon}$ for values of $\varepsilon$ in some specified range of interest, which typically can be chosen such that $\zeta$ is close to 1. In a level-1 simulation of a quantum circuit, let us say that a level-1 gadget is “bad” if it contains $s$ or more bad level-0 gates, and let $\mathcal{I}_r^{(1)}$ denote a set of $r$ specified level-1 gadgets. We assume, for now, that these level-1 gadgets are nonoverlapping; [*i.e.*]{}, that no 0-gate is contained in two different 1-gadgets. We denote by $|\Psi_{SB}^{\rm bad}(\mathcal{I}_r^{(1)})\rangle$ the sum of all terms in the perturbation expansion of $|\Psi_{SB}\rangle$ such that all of the $r$ 1-gadgets in $\mathcal{I}_r^{(1)}$ are bad.
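Both the counting identity behind eq. (\[inclusion-exclusion\]) and the bound eq. (\[s-fault-bound\]) can be spot-checked numerically; the sketch below uses arbitrary illustrative values of $s$, $A$ and $\varepsilon$:

```python
from math import comb, exp

# Inclusion-exclusion: a fault path with exactly m >= s faults in the set I
# is counted exactly once by the alternating sum.
for s in range(1, 5):
    for m in range(s, 12):
        count = sum((-1)**(l - s) * comb(l - 1, s - 1) * comb(m, l)
                    for l in range(s, m + 1))
        assert count == 1

# The resulting norm bound: sum_l C(l-1, s-1) C(A, l) eps^l is at most
# C(A, s) eps^s exp((A - s) eps).
eps, s, A = 1e-3, 2, 50        # hypothetical illustrative values
lhs = sum(comb(l - 1, s - 1) * comb(A, l) * eps**l for l in range(s, A + 1))
rhs = comb(A, s) * eps**s * exp((A - s) * eps)
assert lhs <= rhs
```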
By performing an “inclusion-exclusion” sum independently inside each 1-gadget, we find that $$\begin{aligned} \label{inclusion-exclusion-level1} |\Psi_{SB}^{\rm bad}(\mathcal{I}_r^{(1)})\rangle =\sum_{\ell_1=s}^{|\mathcal{I}(1)|}(-1)^{\ell_1-s}{\ell_1-1\choose s-1} \cdots \sum_{\ell_r=s}^{|\mathcal{I}(r)|}(-1)^{\ell_r-s}{\ell_r-1\choose s-1}\nonumber\\ \left(\sum_{\mathcal{I}(1)_{\ell_1}\subseteq \mathcal{I}(1)}\cdots \sum_{\mathcal{I}(r)_{\ell_r}\subseteq \mathcal{I}(r)} |\Psi_{SB}^{\rm bad}(\mathcal{I}(1)_{\ell_1}\cup\dots\cup \mathcal{I}(r)_{\ell_r})\rangle\right)~,\end{aligned}$$ where $\mathcal{I}(j)$ denotes the set of 0-gates inside the 1-gadget $j$, for $j\in \{1,2,3,\dots, r\}$, and $\sum_{\mathcal{I}(j)_{\ell_j}}$ denotes the sum over all subsets of $\mathcal{I}(j)$ that contain $\ell_j$ elements. The local noise condition $\||\Psi_{SB}^{\rm bad}(\mathcal{I}_\ell)\rangle\| \le \varepsilon^\ell$ implies that $$\begin{aligned} \label{local-noise-level1} \big\||\Psi_{SB}^{\rm bad}(\mathcal{I}(1)_{\ell_1}\cup\dots\cup \mathcal{I}(r)_{\ell_r})\rangle\big\|\le \prod_{j=1}^r\varepsilon^{\ell_j}~.\end{aligned}$$ As above, we can bound the norm of the left-hand side of eq. (\[inclusion-exclusion-level1\]) by the sum of the norms of the terms on the right-hand side. Using eq. (\[local-noise-level1\]), this upper bound factorizes into a product of $r$ sums, each of which can be bounded as in eq. (\[s-fault-bound\]).
We obtain $$\begin{aligned} \| |\Psi_{SB}^{\rm bad}(\mathcal{I}_r^{(1)})\rangle\| \le \prod_{j=1}^r\varepsilon^{(1)}_j~,\end{aligned}$$ where $$\begin{aligned} \label{noise-strength-level1} \varepsilon^{(1)}_j= \zeta_j {A_j\choose s} \varepsilon^s~,\end{aligned}$$ and hence $$\begin{aligned} \| |\Psi_{SB}^{\rm bad}(\mathcal{I}_r^{(1)})\rangle\| \le \left(\varepsilon^{(1)}\right)^r~,\end{aligned}$$ where $$\begin{aligned} \varepsilon^{(1)}= \max_j \left(\varepsilon^{(1)}_j\right)~.\end{aligned}$$ Here $A_j=|\mathcal{I}(j)|$, $\zeta_j \ge \exp\left((A_j-s)\varepsilon\right)$, and the maximum is over all 1-gadgets in the circuit. We can regard $\varepsilon^{(1)}$ as an effective noise strength for the level-1 circuit, which is conveniently expressed in the form $$\begin{aligned} \varepsilon^{(1)} = \varepsilon_0\left(\varepsilon/\varepsilon_0\right)^s~,\end{aligned}$$ where $$\label{threshold-estimate-combinatoric} \varepsilon_0 = \min \left( \zeta {A\choose s}\right)^{-1/(s-1)}~,$$ and the minimum is over all 1-gadgets in our universal set. Now let us say that a level-$k$ gadget is bad if it contains $s$ or more bad $(k{-}1)$-gadgets. The bound eq. (\[noise-strength-level1\]) on the norm of the sum $|\Psi_{SB}^{\rm bad}(\mathcal{I}_r^{(1)})\rangle$ over all fault paths that are bad at level 1 is of the same form as the bound eq. (\[local-noise-strength-again\]) on the norm of the sum over all fault paths that are bad at level 0, but with a “renormalized” value of the effective noise strength. This means that in a recursive simulation, in which $k$-gadgets are constructed using the same circuits as 1-gadgets, but with each 0-gate in the 1-gadget replaced by a ($k{-}1$)-gadget, we can use the same combinatoric argument again to estimate the effective noise strength at level $k$.
That is, suppose that $$\begin{aligned} \| |\Psi_{SB}^{\rm bad}(\mathcal{I}_r^{(k-1)})\rangle\| \le \left(\varepsilon^{(k{-}1)}\right)^r~,\end{aligned}$$ where $\mathcal{I}_r^{(k-1)}$ is any specified set of $r$ $(k{-}1)$-gadgets in a level-$(k{-}1)$ simulation, and $|\Psi_{SB}^{\rm bad}(\mathcal{I}_r^{(k-1)})\rangle$ denotes the sum of all fault paths such that all $r$ of the $(k{-}1)$-gadgets in $\mathcal{I}_r^{(k-1)}$ are bad. Then we may infer that $$\begin{aligned} \label{level-k-local-noise} \| |\Psi_{SB}^{\rm bad}(\mathcal{I}_r^{(k)})\rangle\| \le \left(\varepsilon^{(k)}\right)^r~,\end{aligned}$$ where $\mathcal{I}_r^{(k)}$ is any specified set of $r$ $k$-gadgets in a level-$k$ simulation, $|\Psi_{SB}^{\rm bad}(\mathcal{I}_r^{(k)})\rangle$ denotes the sum of all fault paths such that all $r$ of the $k$-gadgets in $\mathcal{I}_r^{(k)}$ are bad, and $$\begin{aligned} \varepsilon^{(k)}/\varepsilon_0 = \left(\varepsilon^{(k{-}1)}/\varepsilon_0\right)^s= \left(\varepsilon/\varepsilon_0\right)^{s^k}.\end{aligned}$$ The fault-path expansion of a level-$k$ simulation with $L$ $k$-gadgets in all can be expressed as $$\begin{aligned} |\Psi_{SB}\rangle = |\Psi_{SB}^{\rm good}\rangle +|\Psi_{SB}^{\rm bad}\rangle~,\end{aligned}$$ where $|\Psi_{SB}^{\rm good}\rangle$ is the sum of all fault paths such that every $k$-gadget is good, and $|\Psi_{SB}^{\rm bad}\rangle$ is the sum of all fault paths such that at least one $k$-gadget is bad. Combining the $s{=}1$ case of eq. (\[s-fault-bound\]) with eq. (\[level-k-local-noise\]), then, we see that $$\begin{aligned} \big\||\Psi_{SB}^{\rm bad}\rangle\big\| \le L \exp\left((L-1)\varepsilon^{(k)}\right)\varepsilon^{(k)}~,\end{aligned}$$ which is small for $L\varepsilon^{(k)}\ll 1$.
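The doubly-exponential suppression of the effective noise strength is easy to illustrate (the values of $\varepsilon$, $\varepsilon_0$ and $s$ below are hypothetical): iterating $\varepsilon^{(k)}=\varepsilon_0\left(\varepsilon^{(k-1)}/\varepsilon_0\right)^s$ reproduces the closed form above:

```python
eps0, eps, s = 1e-4, 5e-5, 2        # hypothetical threshold, noise strength, s

x = eps
for k in range(1, 6):
    x = eps0 * (x / eps0)**s                    # level-by-level recursion
    closed = eps0 * (eps / eps0)**(s**k)        # closed form eps0 (eps/eps0)^{s^k}
    assert abs(x - closed) <= 1e-9 * closed

# after only 5 levels the effective noise strength is already negligible
assert x < 1e-13
```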
Furthermore, the arguments in [@AGP] show that if $|\Psi_{SB}\rangle$ is replaced by its good component $|\Psi_{SB}^{\rm good}\rangle$, then the probability distribution $p^{({\rm actual})}=\{p_a^{({\rm actual})}\}$ governing the measurement outcome for the [*logical*]{} system qubits (where $p_a$ is the probability of the measurement outcome labeled by $a$) matches exactly the probability distribution $p^{({\rm ideal})}$ for the measurement outcomes in the ideal computation. Therefore, the deviation of $p^{({\rm actual})}$ from $p^{({\rm ideal})}$ in the noisy simulation arises only from the small bad component of $|\Psi_{SB}\rangle$; in fact, in the $L^1$ norm, this deviation can be bounded as $$\delta= \| p^{\rm (actual)} -p^{\rm (ideal)}\|_1=\sum_a |p_a^{\rm (actual)} -p_a^{\rm (ideal)}| \le 2 \big\||\Psi_{SB}^{\rm bad}\rangle\big\|~.$$ Thus, for $\varepsilon < \varepsilon_0$, the noisy computation becomes highly reliable as the level $k$ of the simulation increases; thus $\varepsilon_0$ is a lower bound on the accuracy threshold for quantum computation. In [@AGP], two valuable extensions of this argument were formulated that are useful for pushing the threshold estimate $\varepsilon_0$ higher. First, the argument can be applied to simulations where successive 1-gadgets [*overlap*]{}, [*i.e.*]{}, have 0-gates in common. By allowing the gadgets to overlap, we can justify the estimate eq. (\[threshold-estimate-combinatoric\]) when using properly designed gadgets based on a quantum error-correcting code that can correct $s{-}1$ errors in a code block. Second, we can refine the definition of badness, so that a 1-gadget with $s$ or more faults is declared bad only if the faults occur at a “malignant” set of locations, [*i.e.*]{}, only if the 1-gadget processes encoded information incorrectly because of the faults.
For example, for gadgets that can correct one error (the $s{=}2$ case), our estimate of the level-1 effective noise strength improves to $$\varepsilon^{(1)} = B\varepsilon^2 + D\varepsilon^3 \le \varepsilon^2/\varepsilon_0~,$$ where $B$ is the number of malignant [*pairs*]{} of fault locations in the 1-gadget (maximized over all 1-gadgets), $D\varepsilon^3$ is a correction arising from summing contributions from fault paths with three or more faults in the 1-gadget, and $$\varepsilon_0^{-1}= \frac{1}{2}B\left(1 + \sqrt{1+4D/B^2}\right)$$ is our improved threshold estimate, found by solving the equation $B\varepsilon_0^2 + D\varepsilon_0^3=\varepsilon_0$.
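The closed form for $\varepsilon_0$ can be verified by substituting it back into $B\varepsilon_0^2+D\varepsilon_0^3=\varepsilon_0$; in the sketch below the values of $B$ and $D$ are hypothetical placeholders, not actual gadget counts:

```python
from math import sqrt

B, D = 1e4, 1e6            # hypothetical malignant-pair count and cubic term

inv_eps0 = 0.5 * B * (1 + sqrt(1 + 4 * D / B**2))
eps0 = 1 / inv_eps0

# eps0 should solve B eps0^2 + D eps0^3 = eps0 up to rounding error
residual = B * eps0**2 + D * eps0**3 - eps0
assert abs(residual) < 1e-9 * eps0
```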
--- abstract: 'We propose a novel online multi-object visual tracking algorithm via a tracking-by-detection paradigm using a Gaussian mixture Probability Hypothesis Density (GM-PHD) filter and deep Convolutional Neural Network (CNN) appearance representation learning. The GM-PHD filter has a linear complexity with the number of objects and observations while estimating the states and cardinality of an unknown, time-varying number of objects in the scene. Though it handles object birth, death and clutter in a unified framework, it is susceptible to miss-detections and does not include the identity of objects. We use visual-spatio-temporal information obtained from object bounding boxes and deeply learned appearance representations to perform estimates-to-tracks data association for labelling each target. We learn the deep CNN appearance representations by training an identification network (IdNet) on large-scale person re-identification data sets. We also employ an additional prediction step for unassigned tracks after the update step to overcome the susceptibility of the GM-PHD filter towards miss-detections caused by occlusion. Our tracker, which runs in real time, is applied to track multiple objects in video sequences acquired under varying environmental conditions and object density. Lastly, we make extensive evaluations on the Multiple Object Tracking 2016 (MOT16) and 2017 (MOT17) benchmark data sets and find that our online tracker significantly outperforms several state-of-the-art trackers in terms of tracking accuracy and identification.' 
author: - '[^1]' bibliography: - 'egbib.bib' title: 'Occlusion-robust Online Multi-object Visual Tracking using a GM-PHD Filter with a CNN-based Re-identification\' --- Online visual tracking, GM-PHD filter, Prediction, CNN features, Re-identification, MOT challenge Introduction ============ Multi-target tracking is an active research field in computer vision with a wide variety of applications such as intelligent surveillance, autonomous driving, robot navigation and augmented reality. Its main purpose is to estimate the states (locations) of objects from noisy detections, recognize their identities in each video frame and produce their trajectories. The most commonly adopted paradigm for multi-target tracking in computer vision is tracking-by-detection, owing to the remarkable advances made in object detection algorithms driven by deep learning. In this tracking-by-detection paradigm, multi-target filters and/or data association are applied to the object detections obtained from the object detector(s) on each video frame to generate trajectories of tracked targets over time. To perform this, online [@MatPoiCav16][@SonJeo16] and offline (batch) [@LeaCanSch16][@MilRotSch14][@PirRamFow11] tracking approaches are the two families commonly used in the literature. The online tracking methods estimate the target state at each time instant by Bayesian filtering of the current detections, and rely on motion-model prediction to continue tracking through miss-detections. Offline tracking methods, by contrast, feed both past and future detections into (mainly global, optimization-based) data association approaches to handle miss-detections. Generally, the offline tracking approaches outperform the online tracking methods, though they are ill-suited to time-critical applications, such as autonomous driving and robot navigation, where it is crucial to provide state estimates as the detections arrive. 
Multi-target tracking algorithms generally receive a random number of detections when an object detector is applied to a video frame. These detections carry information uncertainty, usually treated as measurement origin uncertainty [@VoMalBarCorOsbMahVo15], which includes miss-detections, clutter and very near, unresolved objects. Thus, in addition to this measurement origin uncertainty, the multi-object tracking method needs to handle the targets’ births, deaths, and the process and observation noises. As surveyed in [@LuoZhaKim14][@VoMalBarCorOsbMahVo15], the three commonly known traditional data association methods used for numerous applications are Global Nearest Neighbor (GNN) [@BarWilTia11], the Joint Probabilistic Data Association Filter (JPDAF) [@BarWilTia11] and Multiple Hypothesis Tracking (MHT) [@KimLiCip15; @BarWilTia11]. While the GNN (computed using the Hungarian algorithm [@FraJea71]) is sensitive to noise, the JPDAF and the MHT are computationally very expensive. Since these methods are computationally expensive and heavily rely on heuristics to track a time-varying number of objects, another multi-target tracking approach has been proposed based on random finite set (RFS) theory [@Mah14]. This approach includes all sources of uncertainty in a unified probabilistic framework. The probability hypothesis density (PHD) filter [@Mah03] is the most commonly adopted RFS-based filter in computer vision for tracking targets in video sequences, since it has a linear complexity with the number of objects and observations. The PHD filter allows for target birth, death, clutter (false alarms) and missed detections; however, it does not naturally incorporate the identity of objects in the framework, since it is based on the indistinguishability assumption of the point process. In order to include the identity of objects, an additional technique is needed. This filter is also very susceptible to miss-detections. 
In fact, the PHD filter was originally designed for radar tracking applications, where the collected observations can contain numerous false alarms but very few miss-detections. In visual tracking applications, however, observations obtained from recent deep learning-driven object detectors contain very few false alarms (false positives) but a high level of miss-detections (false negatives) due to occlusion. The parameter which controls the detection and miss-detection part of the PHD filter is the probability of detection ($p_D$, see section \[Sec:GMPHD-Filter\] on the Gaussian mixture implementation of the PHD (GM-PHD) filter [@VoMa06]). In our experiments, the GM-PHD filter only works if $p_D$ is set to about 0.8 or higher; otherwise the covariance matrix fails to remain a square, symmetric, positive definite matrix, which in turn forces the GM-PHD filter to crash. This means that even if we set $p_D$ to 0.8, a miss-detected target cannot be maintained, since its weight drops too quickly (it is scaled by the probability of miss-detection $p_{MD}= 1.0 - 0.8 = 0.2$). This is referred to as the target death problem: targets die faster than they should when a miss-detection happens. Thus, the GM-PHD filter is naturally robust to false positives but very susceptible to miss-detections. More recently, outstanding results have been obtained on a wide range of tasks using deep Convolutional Neural Network (CNN) features, such as object recognition [@KriHin12][@KaiXiaSha15], object detection [@ShaKaiRos15] and person re-identification [@KaiTao19]. Better performance has also been obtained on multi-target tracking using deep learning [@LeaCanSch16; @AmiAleSil17], since deeply learned appearance representations of objects can discriminate an object of interest not only from the background but also from other objects of similar appearance.
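Returning to the target death problem described above: in the GM-PHD update, the miss-detection term scales a Gaussian component's weight by $(1-p_D)$ per missed frame, so the weight collapses geometrically. A minimal numeric sketch of this effect (ours, for illustration only; `weight_after_misses` is a hypothetical helper, not part of the tracker):

```python
# Under a miss-detection, the GM-PHD update scales a Gaussian component's
# weight by (1 - p_D); after n consecutive misses the weight is w * (1 - p_D)^n.
def weight_after_misses(w: float, p_d: float, n_misses: int) -> float:
    return w * (1.0 - p_d) ** n_misses

# With p_D = 0.8, a unit-weight component falls below a typical extraction
# threshold of 0.5 after a single missed frame -- the target "dies" at once.
one_miss = weight_after_misses(1.0, 0.8, 1)
two_miss = weight_after_misses(1.0, 0.8, 2)
```

Note that raising $p_D$ makes the collapse even faster, which is why an explicit mechanism for handling miss-detections is needed rather than tuning $p_D$ alone.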
However, the advantages of deep appearance representations have not been explored in Random Finite Set based filters, such as the GM-PHD filter, that work online and run fast enough to be suitable for real-time applications. In this work, we propose an online multi-object visual tracker for real-time applications, based on the GM-PHD filter and the tracking-by-detection approach, which not only runs in real-time but also jointly addresses track management (target birth, death and labelling), false positives and miss-detections. We also learn discriminative deep appearance representations of targets using an identification network (IdNet) trained on large-scale person re-identification data sets. We formulate how to combine (fuse) spatio-temporal and visual similarities, obtained from the bounding boxes of objects and their CNN appearance features, respectively, to construct a cost to be minimized (similarity maximized) by the Hungarian algorithm when labelling each target. After this association step, an additional prediction step for unassigned tracks is used to overcome the miss-detection susceptibility of the GM-PHD filter caused by occlusion. Furthermore, we use the deeply learned CNN appearance representations as a person re-identification method to re-identify lost objects and label them consistently. To the best of our knowledge, nobody has adopted this approach. The main contributions of this paper are as follows: 1. We apply the GM-PHD filter with deeply learned CNN features for tracking multiple targets in video sequences acquired under varying environmental conditions and target densities. 2. We formulate how to integrate spatio-temporal and visual similarities obtained from the bounding boxes of objects and their CNN appearance features. 3. We use additional predictions of unassigned tracks after the association step to overcome the miss-detection susceptibility of the GM-PHD filter. 4.
We use the deeply learned CNN appearance representations as a person re-identification method to re-identify lost objects and label them consistently. 5. We make extensive evaluations on the Multiple Object Tracking 2016 (MOT16) and 2017 (MOT17) benchmark data sets using the public detections provided in the benchmarks’ test sets. We presented a preliminary idea of this work in [@Nat19]. In this work, we describe our algorithm in more detail. In addition, we change from a joint-input Siamese network (StackNet) to an identification network (IdNet) to learn the deep appearance representations of targets on large-scale person re-identification data sets, as the IdNet allows us to extract features from an object once per frame in the tracking process, which speeds up the tracker significantly. We also include an additional add-on prediction step for predicting unassigned tracks after the association step to handle miss-detections caused by occlusion. The paper is organized as follows. We discuss the related work in section \[Sec:RelatedWork\]. In section \[Sec:proposedAlgorithm\], our proposed algorithm is explained in detail, including all of its components, and section \[Sec:ParameterValues\] provides some important parameter values used in the GM-PHD filter implementation. The experimental results are analyzed and compared in section \[Sec:ExperimentalResults\]. The main conclusions and suggestions for future work are summarized in section \[Sec:Conclusions\].

Related Work {#Sec:RelatedWork}
============

Numerous multi-target tracking algorithms have been introduced in the literature [@LuoZhaKim14][@VoMalBarCorOsbMahVo15][@PatPanLil18]. Traditionally, multi-target trackers have been developed by finding associations between targets and observations, mainly using the JPDAF [@BarWilTia11] and MHT [@BarWilTia11; @KimLiCip15].
However, these approaches face challenges not only in the uncertainty caused by data association but also in an algorithmic complexity that increases exponentially with the number of targets and measurements. Recently, a unified framework which directly extends single-target to multi-target tracking, by representing multi-target states and observations as an RFS, was developed by Mahler [@Mah03]. It not only addresses the problem of increasing complexity, but also estimates the states and cardinality of an unknown and time-varying number of targets in the scene by allowing for target birth, death, clutter (false alarms) and missing detections. It propagates the first-order moment of the multi-target posterior, called the intensity or the PHD [@VoMa06], rather than the full multi-target posterior. This approach is flexible; for instance, it has been used to find the detection proposal with the maximum weight as the target position estimate for tracking a target of interest in dense environments, removing the other detection proposals as clutter [@BaiBhoWal18][@Nat18]. Furthermore, the standard PHD filter was extended to a novel N-type PHD filter ($N \geq 2$) for tracking multiple targets of different types in the same scene [@NatAnd19][@BaiWal19]. However, this approach does not naturally include target identity in the framework because of the indistinguishability assumption of the point process; an additional mechanism is necessary for labelling each target. Recently, labeled RFS for multi-target tracking was introduced in [@VoVoPhu14][@VoVoHoa17][@Kim17]; however, its computational complexity is high. In general, the RFS-based filters are susceptible to miss-detections even though they are robust to clutter. The two common implementation schemes of the PHD filter are the Gaussian mixture (GM-PHD filter [@VoMa06]) and the Sequential Monte Carlo (SMC-PHD or particle PHD filter [@VoSinDou05]).
Though the PHD filter is the most widely adopted RFS-based filter in computer vision due to its computational efficiency (it has a linear complexity in the number of targets and observations), it is weak in handling miss-detections. This is because the PHD filter was originally designed for radar tracking applications, where the number of miss-detections is very low, as opposed to visual tracking applications, where a significant number of miss-detections occur due to occlusion. In this work, we overcome not only the miss-detection problem but also the labelling of targets in each frame for real-time visual tracking applications. Incorporating deep appearance information into multi-target tracking algorithms improves the tracking performance, as demonstrated in works such as [@KimLiCip15; @KimLiReh18; @Kim17; @FuSngNaq18]. A multi-output regularized least squares (MORLS) framework has been used to learn appearance models online, which are integrated into a tree-based track-oriented MHT (TO-MHT) in [@KimLiCip15]. The same author trained a bilinear long short-term memory (LSTM) on both motion and appearance and incorporated it into the MHT for gating in [@KimLiReh18]. These trackers are, however, computationally demanding and operate offline. Appearance models of objects are also learned in the same fashion as in [@KimLiCip15] and integrated into a generalized labeled multi-Bernoulli (GLMB) filter in [@Kim17]. Deep discriminative correlation filters have also been learned and integrated into the PHD filter in [@FuSngNaq18]. Though the latter two trackers work online, they are too computationally demanding to be applied to time-critical real-time applications. The two well-known CNN structures are verification and identification models [@ZhengZY16]. In general, the Siamese network, a kind of verification network (similarity metric), is the most widely used network for developing multi-target tracking methods [@LeaCanSch16][@TanAndAnd17][@Nat19][@YooKimYo19].
As discussed in [@LeaCanSch16], the Siamese topology has three types: combined at the cost function, in-network and StackNet. The StackNet, which has been used in offline [@LeaCanSch16][@TanAndAnd17] and online [@Nat19][@YooKimYo19] tracking methods, outperforms the other types of Siamese topologies. The StackNet can also be referred to as a joint-input network [@YooKimYo19]. This network takes two image patches concatenated along the channel dimension and infers their similarity. The last fully-connected layer of the StackNet models a 2-way classification problem (same and different identities), i.e. given a pair of images, the StackNet produces the probability of the pair being of the same or different identity by a forward pass. This means that in multi-target tracking applications, every pair of tracks and detections (estimates in our case) needs to be concatenated and given as input to the StackNet to obtain its similarity probability in each video frame. This leads to a high complexity, as demonstrated in [@LeaCanSch16][@TanAndAnd17][@Nat19][@YooKimYo19], which limits the trackers’ applicability to real-time scenarios. We observed this in our preliminary work [@Nat19]; thus, we change the StackNet to an identification network (IdNet), compensating for the performance gap by training the IdNet on large-scale person re-identification data sets (the StackNet generally outperforms the IdNet [@TanAndAnd17]). Using this IdNet, appearance features are extracted once per video frame from detections (output estimates from the GM-PHD filter in our case) and are copied to the assigned tracks after the data association step. This speeds up the online tracker very significantly compared to using the StackNet.
In addition to learning discriminative deep appearance representations to solve both tracks-to-estimates associations and lost-track re-identifications, we also include an additional add-on prediction step for unassigned tracks after the association step to overcome the miss-detection problem of the PHD filter due to occlusion. To date, no work has incorporated these two important components not only to improve the multi-target tracking performance but also to speed it up to the level of real-time, as is the case in our work.

The Proposed Algorithm {#Sec:proposedAlgorithm}
======================

The block diagram of our proposed multi-target tracking algorithm is given in Fig. \[fig:MOTdiagram\]. Our proposed online tracker consists of four components: 1) target state estimation using the GM-PHD filter, 2) tracks-to-estimates associations using the Hungarian algorithm, 3) add-on prediction of unassigned tracks to alleviate miss-detections, and 4) lost-track re-identification for track re-initialization. All four components are explained in detail as follows.

![image](images2/MOTdiagram.png){width="1.0\linewidth"}\

The GM-PHD Filter {#Sec:GMPHD-Filter}
-----------------

The Gaussian mixture implementation of the standard PHD (GM-PHD) filter [@VoMa06] is a closed-form solution of the PHD filter that assumes a linear Gaussian system. It has two steps: prediction and update. Before stating these two steps, certain assumptions are needed: 1) each target follows a linear Gaussian model: $$y_{k|k-1}(x|\zeta) = \mathcal{N}(x;F_{k-1}\zeta, Q_{k-1}) \label{eqn:linearState1}$$ $$f_{k}(z|x) = \mathcal{N}(z;H_{k} x, R_{k}) \label{eqn:linearObservation1}$$ where $y_{k|k-1}(.|\zeta)$ is the single-target state transition probability density at time $k$ given the previous state $\zeta$, and $f_{k}(z|x)$ is the single-target likelihood function defining the probability that $z$ is generated (observed) conditioned on state $x$.
$\mathcal{N}(.;m, P)$ denotes a Gaussian density with mean $m$ and covariance $P$; $F_{k-1}$ and $H_k$ are the state transition and measurement matrices, respectively. $Q_{k-1}$ and $R_k$ are the covariance matrices of the process and the measurement noises, respectively. The measurement noise covariance $R_k$ can be measured off-line from sample measurements, i.e. from the ground truth and detections of training data [@WelBis06], as it indicates detection performance. 2) A current-measurement-driven birth intensity, inspired by but not identical to [@RisClaVoVo12], is introduced at each time step with a non-informative zero initial velocity, removing the need for prior knowledge (specification of birth intensities) or a random model. The intensity of the spontaneous birth RFS is a Gaussian mixture of the form $$\begin{split} \gamma_{k}(x) = \sum_{v = 1}^{V_{\gamma,k}} w_{\gamma,k}^{(v)}\mathcal{N}(x; m_{\gamma,k}^{(v)}, P_{\gamma,k}^{(v)}) \label{eqn:PHDbirthassumptionLCMHT} \end{split}$$ where $V_{\gamma,k}$ is the number of birth Gaussian components, $w_{\gamma,k}^{(v)}$ is the weight accompanying Gaussian component $v$, $m_{\gamma,k}^{(v)}$ is the current measurement with zero initial velocity used as the mean, and $P_{\gamma,k}^{(v)}$ is the birth covariance of Gaussian component $v$. 3) The survival and detection probabilities are independent of the target state: $p_{S,k}(x_k) = p_{S,k}$ and $p_{D,k}(x_k) = p_{D,k}$. **Adaptive birth:** We use an adaptive measurement-driven approach for the birth of targets. Each detection $z_k \in Z_k$ is associated with a detection confidence score $s_k \in [0, 1]$. We use the more confident (strong) detections, based on their score, for the birth of targets, as they are more likely to represent a potential target. The confident detections used for the birth of targets are $Z_{b,k} = \{z_{b,k}: s_k \geq s_t\} \subseteq Z_k$, where $s_t$ is a detection score threshold.
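The adaptive, score-gated birth above can be sketched as follows. This is a sketch under assumptions: the threshold `s_t`, the base weight `w_birth` and the diagonal birth covariance are illustrative values, not the paper's exact settings.

```python
def birth_components(detections, scores, s_t=0.4, w_birth=0.1):
    """Measurement-driven birth sketch. detections are [cx, cy, w, h] boxes;
    the state is [cx, cy, vx, vy, w, h] with a non-informative zero velocity,
    per the paper. Only detections with score >= s_t spawn birth components."""
    births = []
    for (cx, cy, w, h), s in zip(detections, scores):
        if s < s_t:                 # weak detections do not spawn births
            continue
        births.append({
            "w": s * w_birth,       # birth weight scaled by detection score
            "m": [cx, cy, 0.0, 0.0, w, h],
            # diagonal birth covariance (illustrative: loose on velocity)
            "P": [25.0, 25.0, 100.0, 100.0, 25.0, 25.0],
        })
    return births
```

Note that, as in the text, the gating by `s_t` applies only to births; all measurements in $Z_k$ still participate in the update step.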
In fact, $s_t$ governs the trade-off between the number of false positives (clutter) and miss-detections (false negatives). Increasing the value of $s_t$ gives more miss-detections and fewer false positives, and vice versa. The initial birth weight $w_{\gamma,k}^{(v)}$ in Eq. (\[eqn:PHDbirthassumptionLCMHT\]) is also weighted by $s_k$ to give higher probability to more confident detections for the birth of targets, i.e. $w_{\gamma,k}^{(v)} = s_kw_{\gamma,k}^{(v)}$. However, all measurements $Z_k$ are used for the update step. **Prediction:** It is assumed that the posterior intensity at time $k-1$ is a Gaussian mixture of the form $$\begin{split} \mathcal{D}_{k-1}(x) = \mathcal{D}_{k-1|k-1}(x) = \sum_{v = 1}^{V_{k-1}} w_{k-1}^{(v)}\mathcal{N}(x; m_{k-1}^{(v)}, P_{k-1}^{(v)}), \label{eqn:PHDposterior1k-1} \end{split}$$ where $V_{k-1}$ is the number of Gaussian components of $\mathcal{D}_{k-1}(x)$, which equals the number of Gaussian components after pruning and merging at the previous iteration. Under these assumptions, the predicted intensity at time $k$ is given by $$\mathcal{D}_{k|k-1}(x) = \mathcal{D}_{S,k|k-1}(x) + \gamma_{k}(x), \label{eqn:PHDpredictionI1}$$ where $$\begin{array} {lll} \mathcal{D}_{S,k|k-1}(x) =& p_{S,k} \sum_{v = 1}^{V_{k-1}} w_{k-1}^{(v)}\mathcal{N}(x; m_{S,k|k-1}^{(v)},P_{S,k|k-1}^{(v)}), \nonumber \end{array} \label{eqn:PHDpredictionSurvival1}$$ $$m_{S,k|k-1}^{(v)} = F_{k-1} m_{k-1}^{(v)}, \nonumber \label{eqn:PHDpredictionSurvivalMean1}$$ $$P_{S,k|k-1}^{(v)} = Q_{k-1} + F_{k-1} P_{k-1}^{(v)} F^T_{k-1}, \nonumber \label{eqn:PHDpredictionSurvivalCov1}$$ and $\gamma_k(x)$ is given by Eq. (\[eqn:PHDbirthassumptionLCMHT\]).
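The prediction step above (survival scaling of each component plus the appended birth mixture) can be sketched as follows; the dict representation of a Gaussian component with keys `w`, `m`, `P` is our own illustration.

```python
import numpy as np

def gmphd_predict(components, F, Q, p_s, births):
    """GM-PHD prediction sketch: each surviving component's weight is scaled
    by p_s, its mean propagated by F and its covariance by Q + F P F^T; the
    measurement-driven birth components are then appended."""
    predicted = []
    for c in components:
        predicted.append({"w": p_s * c["w"],
                          "m": F @ c["m"],
                          "P": Q + F @ c["P"] @ F.T})
    predicted.extend(births)    # gamma_k(x) terms from the adaptive birth
    return predicted
```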
Since $\mathcal{D}_{S,k|k-1}(x)$ and $\gamma_k(x)$ are Gaussian mixtures, $\mathcal{D}_{k|k-1}(x)$ can be expressed as a Gaussian mixture of the form $$\begin{split} \mathcal{D}_{k|k-1}(x) = \sum_{v = 1}^{V_{k|k-1}} w_{k|k-1}^{(v)}\mathcal{N}(x; m_{k|k-1}^{(v)},P_{k|k-1}^{(v)}), \label{eqn:PHDpredictionki} \end{split}$$ where $w_{k|k-1}^{(v)}$ is the weight accompanying the predicted Gaussian component $v$, and $V_{k|k-1}$ is the number of predicted Gaussian components, which equals the number of born targets plus the number of persistent (surviving) components. The number of persistent components is in fact the number of Gaussian components after pruning and merging at the previous iteration. **Update:** The posterior intensity (updated PHD) at time $k$ is also a Gaussian mixture and is given by $$\begin{split} \mathcal{D}_{k|k}(x) = (1 - p_{D,k})\mathcal{D}_{k|k-1}(x) + \sum_{z\in Z_k} \mathcal{D}_{D,k}(x;z), \label{eqn:PHDupdatekiLCMHT} \end{split}$$ where $$\begin{split} \mathcal{D}_{D,k}(x;z) = \sum_{v = 1}^{V_{k|k-1}} w_{k}^{(v)}(z) \mathcal{N}(x; m_{k|k}^{(v)}(z), P_{k|k}^{(v)}), \nonumber \label{eqn:PHDupdateDetki} \end{split}$$ $$\begin{split} w^{(v)}_{k}(z) = \frac{p_{D,k} w^{(v)}_{k|k-1} q^{(v)}_{k}(z)}{c_{s_{k}}(z) + p_{D,k} \sum_{l = 1}^{V_{k|k-1}} w^{(l)}_{k|k-1} q^{(l)}_{k}(z)}, \nonumber \label{eqn:PHDupdatewwightki} \end{split}$$ $$\begin{split} q^{(v)}_{k}(z) = \mathcal{N}(z; H_k m_{k|k-1}^{(v)}, R_k + H_k P_{k|k-1}^{(v)} H^T_k), \nonumber \label{eqn:PHDupdateqki} \end{split}$$ $$\begin{split} m^{(v)}_{k|k}(z) = m^{(v)}_{k|k-1} + K^{(v)}_k (z - H_k m_{k|k-1}^{(v)}), \nonumber \label{eqn:PHDupdatemki} \end{split}$$ $$\begin{split} P^{(v)}_{k|k} = [I - K^{(v)}_k H_k] P_{k|k-1}^{(v)}, \nonumber \label{eqn:PHDupdatepki} \end{split}$$ $$\begin{split} K^{(v)}_k = P_{k|k-1}^{(v)} H^T_k [ H_k P_{k|k-1}^{(v)} H^T_k + R_k]^{-1} \nonumber \label{eqn:PHDupdateKki} \end{split}$$ The clutter intensity due to the scene, $c_{s_k}(z)$, in Eq.
(\[eqn:PHDupdatekiLCMHT\]) is given by $$c_{s_k}(z) = \lambda_t c(z) = \lambda_{c} A c(z), \label{eqn:Clutterscenei}$$ where $c(.)$ is the uniform density over the surveillance region $A$, and $\lambda_{c}$ is the average number of clutter returns per unit volume, i.e. $\lambda_t = \lambda_{c}A$. After the update, weak Gaussian components with weight $w_k^{(v)} < T = 10^{-5}$ are pruned, and Gaussian components with a Mahalanobis distance of less than $U = 4$ pixels from each other are merged. These pruned and merged Gaussian components are predicted as existing (persistent) targets in the next iteration. Finally, Gaussian components of the pruned and merged intensity whose weights are greater than a threshold of 0.5 are selected as multi-target state estimates (we use the pruned and merged intensity rather than the posterior intensity as it gives better results).

Data Association {#Sec:DataAssociation}
----------------

The GM-PHD filter distinguishes between true and false targets; however, it does not distinguish between two different targets, so an additional step is necessary to identify different targets between consecutive frames. We use both the spatio-temporal and visual similarities between the track boxes and the estimated object states (filtered output boxes) in frames $k-1$ and $k$, respectively, to label each object across frames.

### Spatio-temporal information

The spatio-temporal information is computed using track boxes and filtered output boxes in consecutive frames. Let $b^t_{i,k-1}$ be the $i^{th}$ track’s box and $b^e_{j,k}$ be the $j^{th}$ estimate’s (GM-PHD filter’s filtered output) box at frame $k$. Their spatio-temporal similarity is calculated using the Euclidean distance $D_k(b^t_{i,k-1},b^e_{j,k})$ between their centers. We use the Euclidean distance rather than the Jaccard distance (1 - Intersection-over-Union) as it gives slightly better results.
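The two candidate spatio-temporal distances compared above can be sketched for boxes in `(cx, cy, w, h)` form; this is a minimal illustration of the raw quantities, while the tracker itself uses the frame-normalized center distance given below.

```python
import math

def center_distance(box_a, box_b):
    """Euclidean distance between box centers; boxes are (cx, cy, w, h)."""
    return math.hypot(box_a[0] - box_b[0], box_a[1] - box_b[1])

def jaccard_distance(box_a, box_b):
    """1 - Intersection-over-Union for boxes given as (cx, cy, w, h)."""
    ax1, ay1 = box_a[0] - box_a[2] / 2, box_a[1] - box_a[3] / 2
    ax2, ay2 = box_a[0] + box_a[2] / 2, box_a[1] + box_a[3] / 2
    bx1, by1 = box_b[0] - box_b[2] / 2, box_b[1] - box_b[3] / 2
    bx2, by2 = box_b[0] + box_b[2] / 2, box_b[1] + box_b[3] / 2
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = box_a[2] * box_a[3] + box_b[2] * box_b[3] - inter
    return 1.0 - inter / union
```

One practical difference: the Jaccard distance saturates at 1 as soon as boxes stop overlapping, whereas the center distance keeps growing, which retains ranking information for nearby but non-overlapping boxes.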
The spatio-temporal (motion or distance) relation has been commonly used, in different forms, in many multi-object tracking works [@MatPoiCav16][@TanAndSch17][@YuLiLi16]. The normalized Euclidean distance $D_{n,k}(b^t_{i,k-1},b^e_{j,k})$ between the centers of the bounding boxes $b^t_{i,k-1}$ and $b^e_{j,k}$ is given by $$\begin{array} {lll} D_{n,k}(b^t_{i,k-1},b^e_{j,k}) =\\ \sqrt{\bigg(\frac{b^t_{i,x,k-1} - b^e_{j,x,k}}{W}\bigg)^2 + \bigg(\frac{b^t_{i,y,k-1} - b^e_{j,y,k}}{H}\bigg)^2}, \end{array} \label{eqn:Dist}$$ where $(b^t_{i,x,k-1}, b^t_{i,y,k-1})$ and $(b^e_{j,x,k}, b^e_{j,y,k})$ are the center locations of the corresponding bounding boxes at frames $k-1$ and $k$, respectively, and $W$ and $H$ are the width and height of a video frame, used for normalizing the Euclidean distance.

### Deep Appearance Representations Learning

Visual cues are crucial for associating tracks with detections (in our case, the current filtered outputs or estimated states) for robust online multi-object tracking. In this work, we propose an identification CNN network (IdNet) for computing visual affinities between image patches cropped at bounding box locations. We treat this task as a multi-class recognition problem to learn a discriminative CNN embedding. We adopt ResNet50 [@KaiXiaSha15] as the network structure, replacing the topmost (fully connected) layer to output a confidence for each of the person identities in the training data set (changing from 1000 classes to 6654 classes in our case). The rest of the ResNet50 architecture remains the same, except that we add a dropout layer with a rate of 0.75 after the last pooling layer to reduce possible over-fitting. We use a transfer learning approach, i.e. the network is pre-trained on the ImageNet data set [@ILSVRC15] consisting of 1000 classes rather than trained from scratch.
**Data preparation:** To learn discriminative deep appearance representations, we collected our training data set from numerous sources. First, we utilize publicly available person re-identification data sets, including the Market1501 data set [@ZheSheTia15] (736 identities out of 751, as we restrict the number of images per identity to at least 4), the CUHK03 data set [@LiZhaXia14] (1367 identities), the LPW data set [@sonLenLiu17] (1974 identities), and the MSMT data set [@LonShiWen17] (1141 identities). In addition to these person re-identification data sets, we also collected training data from publicly available tracking data sets such as the MOT15 [@MOT15] and MOT16/17 [@MOT16] training data sets (MOT16 and MOT17 have the same training data set, though MOT17 is claimed to have more accurate ground truth and is the one used in our experiment). From the tracking training data sets of MOT15 (TUD-Stadmitte, TUD-Campus, PETS09-S2L1, ETH-Bahnhof and ETH-Sunnyday) and MOT16/17 (5 sequences), we produce about 521 person identities. We also produce about 213 identities from the TownCentre data set [@BenRei11]. This helps the network adapt to the MOT benchmark test sequences and learn the inter-frame variations. In total, we collected about 6,654 person identities from all these data sets to train our IdNet. 10% of this training set is used for validation (per person identity, provided that identity has more than 9 images; otherwise no validation samples are held out for that class). We resize all the training images to $256\times256$ and then subtract the mean image, computed over all the training images, from each image. During training, we randomly crop all the images to $224\times224$ and then mirror them horizontally. We use a random order of images by reshuffling the data set. **Training:** We train the IdNet using a cross-entropy loss with softmax and mini-batch Stochastic Gradient Descent (SGD) with momentum. The mini-batch size is set to 20.
We trained our model on an NVIDIA GeForce GTX 1050 GPU for 200 epochs, after which it generally converges, using MatConvNet [@VedLen15]. We initialize the learning rate to $10^{-4}$ for the first 75 epochs, lower it to $10^{-5}$ for the next 75 epochs and to $10^{-6}$ for the last 50 epochs (1 epoch is one sweep over all the training data). In addition, we augment the training samples by random horizontal flipping as well as by randomly shifting the cropping positions by no more than $\pm 0.2$ of the detection box width or height in the $x$ and $y$ dimensions, respectively, to increase variation and thus reduce possible over-fitting. **Testing:** We use 2 video sequences of the MOT16/17 training data set (02 and 09) for testing our trained IdNet. For this testing set, we produce about 66 person identities. We randomly sample about 800 positive pairs (same identities) and 3200 negative pairs (different identities) from the ground truth of the MOT16/17-02 and MOT16/17-09 training sequences. We use this larger ratio of negative pairs to mimic the positive/negative distribution during tracking. We use verification accuracy as an evaluation metric. Given a pair of images, we compute the cosine distance (using Eq. (\[eq:CosineD\])) between their extracted deep appearance feature vectors utilizing our learned model. If the computed cosine distance of a positive pair is greater than or equal to 0.75, it is counted as a correctly classified pair. Similarly, if the computed cosine distance of a negative pair is less than 0.75, it is counted as a correctly classified pair. Accordingly, the IdNet trained on large-scale data sets (6,654 identities) gives about 97.5% accuracy.

### Tracks-to-Estimates Association

Here we use visual-spatio-temporal information, a fusion of both visual and spatio-temporal information, to associate tracks to the estimated (filtered output) boxes.
The visual similarity $V_{s,k}(b^t_{i,k-1},b^e_{j,k})$ between the track’s box $b^t_{i,k-1}$ and the estimate’s (filtered output) box $b^e_{j,k}$ at frame $k$ is computed using the cosine distance $C_d(\mathbf{z}^i_t, \mathbf{z}^j_e)$ between the appearance feature vectors $\mathbf{z}^i_t$ and $\mathbf{z}^j_e$, which are extracted from the track’s box $b^t_{i,k-1}$ and the filtered output box $b^e_{j,k}$, respectively. Thus, this visual similarity (cosine distance) is given, using the dot product and magnitude (norm) of the appearance feature vectors, as $$V_{s,k}(b^t_{i,k-1},b^e_{j,k}) = C_d(\mathbf{z}^i_t, \mathbf{z}^j_e) = \frac{\mathbf{z}^i_t \cdot \mathbf{z}^j_e}{\| \mathbf{z}^i_t \| \|\mathbf{z}^j_e\|}. \label{eq:CosineD}$$ When computing the cosine distance between track features and estimated state features, we use the mean of the track features weighted by the confidences of the detections corresponding to this track up to frame $k-1$. We use Munkres’s variant of the Hungarian algorithm [@FraJea71] to determine the optimal associations in case an estimate (filtered output) box could be associated with multiple tracks, using the following overall association cost $$\mathbf{C}_k = (1 - \eta) \mathbf{D}_{n,k} + \eta \mathbf{V}_{d,k}, \label{eqn:OverallSimilarity}$$ where $\mathbf{V}_{d,k} = \mathbf{1} - \mathbf{V}_{s,k}$ is the visual difference used as a cost, with each of its elements $V_{d,k} \in [0, 1]$; $\mathbf{D}_{n,k}$ is the matrix of normalized Euclidean distances, with each element $D_{n,k} \in [0, 1]$; and $\eta$ is the weight balancing the two costs. $\mathbf{C}_k \in \mathbb{R}^{N\times M}$, $\mathbf{D}_{n,k} \in \mathbb{R}^{N\times M}$ and $\mathbf{V}_{d,k} \in \mathbb{R}^{N\times M}$ are matrices, where $N$ and $M$ are the numbers of tracks and estimates (filtered outputs) at time $k$; $\mathbf{1}$ is a matrix of $1's$ of the same dimension as $\mathbf{V}_{s,k} \in \mathbb{R}^{N\times M}$.
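A compact sketch of the fused cost and the assignment step: brute-force enumeration stands in for the Munkres/Hungarian solver here (for clarity and to stay dependency-free; in practice one would use e.g. `scipy.optimize.linear_sum_assignment`). The weight `eta = 0.5` is illustrative, while $C_{ts} = 0.4$ is the paper's gating threshold.

```python
import numpy as np
from itertools import permutations

def associate(track_feats, est_feats, D_n, eta=0.5, C_ts=0.4):
    """Fuse costs per C_k = (1 - eta) * D_n + eta * (1 - V_s), then solve the
    assignment by exhaustive search (sketch; assumes #tracks <= #estimates).
    Returns (track, estimate) index pairs whose cost passes the C_ts gate."""
    Zt = track_feats / np.linalg.norm(track_feats, axis=1, keepdims=True)
    Ze = est_feats / np.linalg.norm(est_feats, axis=1, keepdims=True)
    V_s = Zt @ Ze.T                       # pairwise cosine similarities
    C = (1 - eta) * D_n + eta * (1 - V_s)
    n, m = C.shape
    best = min(permutations(range(m), n),
               key=lambda p: sum(C[i, p[i]] for i in range(n)))
    return [(i, j) for i, j in enumerate(best) if C[i, j] < C_ts]
```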
The spatio-temporal relation gives useful information for tracks-to-estimates association of targets that are in close proximity; however, its importance decreases as targets become (temporally) far apart. In contrast, the visual similarity obtained from the CNN allows long-range association, as it is robust to large temporal and spatial distances. This combination of spatio-temporal and visual information helps to resolve target ambiguities which may occur due to either target motion or visual content, and also allows long-range association of targets. The outputs of the Hungarian algorithm are assigned tracks-to-estimates, unassigned tracks and unassigned estimates, as shown in Fig. \[fig:MOTdiagram\]. A tracks-to-estimates association is confirmed if the cost $C_k(b^t_i,b^e_j)$ is lower than the cost threshold $C_{ts} = 0.4$. The associated estimate (filtered output) boxes are appended to their corresponding tracks to generate longer ones up to time $k$. The unassigned tracks are predicted using the add-on prediction step or killed accordingly, as discussed in section \[Sec:EstimatesPrediction\]. The unassigned estimates either create new tracks or are matched by re-identification against the lost (dead) tracks to re-initialize them, as discussed in section \[Sec:ReId\].

Unassigned Tracks Prediction {#Sec:EstimatesPrediction}
----------------------------

We keep the state transition matrix ($F_{k-1}$), the process noise covariance ($Q_{k-1}$) and the covariance matrices ($P_{k-1}$) from the update step of the GM-PHD filter for the unassigned tracks obtained after the Hungarian algorithm-based tracks-to-estimates association step. We therefore predict each unassigned track $X^t_{k-1}$ using its state transition matrix, while also updating its covariance matrix $P^t_{k-1}$, for a period of $T_p$ predictions (frames) as follows (Eq. \[eqn:AddOnPred\]).
$$\begin{split} X^t_k =& F_{k-1}X^t_{k-1} \\ P^t_k =& Q_{k-1} + F_{k-1} P^t_{k-1} F^T_{k-1} \label{eqn:AddOnPred} \end{split}$$ where $X^t_k$ and $P^t_k$ are the predicted state (location) and covariance matrix of the unassigned track $t$ at frame $k$, respectively. The effect of this additional add-on prediction step versus the number of predictions ($T_p$) is analyzed in Table \[tbl:MOT16TrainingTp\]. We kill the track if the number of performed predictions exceeds the prediction-count threshold ($T_{ts}$). A killed track can still be considered for re-identification in the coming frames. In our experiment, we choose $T_{ts} = 3$, as this gives a better Multiple Object Tracking Accuracy (MOTA) value, as shown in Table \[tbl:MOT16TrainingTp\] and Fig. \[fig:MOTAvsTp\]. A detailed investigation of this add-on prediction on the performance of our online tracker is given in the experimental results section \[Sec:AblationStudy\].

Re-identification for Tracking {#Sec:ReId}
------------------------------

Person re-identification in the context of multi-target tracking is very challenging due to occlusions (inter-object and background), cluttered backgrounds and inaccurate bounding box localization. Inter-object occlusions are particularly challenging in video sequences containing dense targets; hence, object detectors may miss some targets in several consecutive frames. Re-identification of targets lost due to miss-detections is crucial to keep track of the identity of each target. The tracks-to-estimates association using the Hungarian algorithm given in section \[Sec:DataAssociation\] can also yield unassigned estimates. If a past track is not associated with any estimated box at frame $k$, the tracked target might be occluded or temporarily missed by the object detector.
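The add-on prediction of Eq. \[eqn:AddOnPred\] together with the $T_{ts}$ kill rule can be sketched as follows; the dict-based track record and its field names (`X`, `P`, `n_pred`, `dead`) are our own illustration, not the paper's data structures.

```python
import numpy as np

def predict_unassigned(track, F, Q, T_ts=3):
    """Propagate an unassigned track with X_k = F X_{k-1} and
    P_k = Q + F P_{k-1} F^T (Eq. AddOnPred); after more than T_ts consecutive
    predictions the track is marked dead (a re-identification candidate)."""
    track["n_pred"] = track.get("n_pred", 0) + 1
    if track["n_pred"] > T_ts:
        track["dead"] = True        # kept around for later re-identification
        return track
    track["X"] = F @ track["X"]
    track["P"] = Q + F @ track["P"] @ F.T
    return track
```

A successful re-association at a later frame would reset `n_pred`, so only tracks missed for more than $T_{ts}$ consecutive frames are killed.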
If an estimated object box is not associated to any track, it is used to initialize a new track, unless it matches a track created earlier: we check it against the lost (dead) tracks within the last $m = 1:k-1$ frames using the visual similarity $V_{s,k}$ for re-identification. We use a visual similarity threshold of $V^s_{ts} = 0.6$ for the re-identification of targets, i.e. re-identification occurs if the visual similarity (cosine distance) is greater than $V^s_{ts} = 0.6$. If multiple dead tracks are matched to the unassigned estimate, the one with the maximum similarity score is confirmed. Re-identification using the visual similarity, together with combining the visual similarity with the spatio-temporal information to construct the cost for labelling targets, has increased the performance of our online tracker as shown in Table \[tbl:MOT16TrainingTp\] and Fig. \[fig:MOTA1601\]. An independent analysis of each component is also given in the experimental results section \[Sec:AblationStudy\]. Parameter Values in the GM-PHD Filter Implementation {#Sec:ParameterValues} ==================================================== Our state vector includes the centroid positions, velocities, width and height of the bounding boxes, i.e. $x_k = [p_{cx,xk}, p_{cy,xk}, \dot{p}_{x,xk}, \dot{p}_{y,xk}, w_{xk}, h_{xk}]^T$. Similarly, the measurement is the noisy version of the target area in the image plane, approximated by a $w \times h$ rectangle centered at $(p_{cx,xk}, p_{cy,xk})$, i.e. $z_k = [p_{cx,zk}, p_{cy,zk}, w_{zk}, h_{zk}]^T$. We set the survival probability $p_{S} = 0.99$, and we assume the linear Gaussian dynamic model of Eq. (\[eqn:linearState1\]) with matrices taking into account the box width and height at the given scale.
$$F_{k-1} = \left[ \begin{array}{ccc} I_2 & \Delta I_2 & 0_2 \\ 0_2 & I_2 & 0_2 \\ 0_2 & 0_2 & I_2 \end{array} \right],$$ $$Q_{k-1} = \sigma_v^2 \left [ \begin{array}{ccc} \frac{\Delta^4}{4}I_2 & \frac{\Delta^3}{2}I_2 & 0_2\\ \frac{\Delta^3}{2}I_2 & \Delta^2 I_2 & 0_2 \\ 0_2 & 0_2 & I_2 \end{array} \right], \label{eqn:PHDstateTransitionMatrixVideo}$$ where $F$ and $Q$ denote the state transition matrix and the process noise covariance, respectively; $I_n$ and $0_n$ denote the $n \times n$ identity and zero matrices, respectively, and $\Delta = 1$ second is the sampling period defined by the time between frames. $\sigma_v = 5$ pixels$/s^2$ is the standard deviation of the process noise. Similarly, the measurement follows the observation model of Eq. (\[eqn:linearObservation1\]) with matrices taking into account the box width and height, $$H_k = \left[ \begin{array}{ccc} I_2 & 0_2 & 0_2 \\ 0_2 & 0_2 & I_2 \end{array} \right],$$ $$R_k = \sigma_r^2 \left [ \begin{array}{cc} I_2 & 0_2 \\ 0_2 & I_2 \end{array} \right], \label{eqn:eqn:PHDobservationMatrixVideo}$$ where $H_k$ and $R_k$ denote the observation matrix and the observation noise covariance, respectively, and $\sigma_r = 6$ pixels is the measurement standard deviation. The probability of detection is assumed to be constant across the state space and through time and is set to $p_D = 0.95$. The false positives are independently and identically distributed (i.i.d.), and the number of false positives per frame is Poisson-distributed with mean $\lambda_t = 10$ (a false alarm rate of $\lambda_c \approx 4.8 \times 10^{-6}$, obtained by dividing the mean $\lambda_t$ by the frame resolution $A$; refer to Eq. (\[eqn:Clutterscenei\])). Nothing is known about the appearing targets before the first observation. The distribution after the observation is determined by the current measurement, with zero initial velocity used as the mean of the Gaussian distribution, and a predetermined initial covariance given in Eq.
(\[eqn:PHDbirthCovariance\]) for birthing of targets. $$P_{\gamma,k} = diag([100, 100, 25, 25, 20, 20]). \label{eqn:PHDbirthCovariance}$$ The birth weight $w_{\gamma,k}$, i.e. the weight given to the assumption that any potential observation represents an appearing target in Eq. (\[eqn:PHDbirthassumptionLCMHT\]), the detection score threshold $s_t$, and whether the detection score $s_k$ is used along with (multiplied by) $w_{\gamma,k}$ depend on the application, as they govern the trade-off between false positives and miss-detections, i.e. they are hyper-parameters that require tuning. For instance, we evaluated $w_{\gamma,k} \in \{0.10, 0.02, 0.0001\}$, $s_t \in \{0.0, 0.10, 0.15, 0.20, 0.25\}$, and both with and without using $s_k$ along with $w_{\gamma,k}$. Given $w_{\gamma,k} = 0.10$ and $s_t = 0.10$, using $s_k$ along with $w_{\gamma,k}$ ($w_{\gamma,k} = 0.10s_k$) gives a better MOTA value than not using $s_k$. Reducing $w_{\gamma,k}$ to 0.0001 reduces the MOTA value slightly; however, it greatly decreases false positives at the expense of increased miss-detections. In our experiments, we find that $w_{\gamma,k} = 0.1$, $s_t = 0.0$ and not using $s_k$ along with $w_{\gamma,k}$ gives the best MOTA value, at the expense of increased false positives. The influence of $s_k$ and $s_t$ partly depends on the hyper-parameter value of $w_{\gamma,k}$, and thus all these hyper-parameters need to be tuned for the application at hand. Furthermore, after evaluating $\eta \in \{0.0, 0.4, 0.6, 0.85, 1.0\}$ (in Eq. (\[eqn:OverallSimilarity\])), we set it to 0.85 as this gives the best result. The implementation parameters and their values are summarized in Table \[tbl:parameters\].
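With the stated values ($\Delta = 1$ s, $\sigma_v = 5$, $\sigma_r = 6$), the model matrices $F_{k-1}$, $Q_{k-1}$, $H_k$ and $R_k$ above can be written out directly. The following NumPy sketch is illustrative only (the paper's implementation is in Matlab); it builds the $6 \times 6$ and $4 \times 6$ block matrices from $2 \times 2$ identity and zero blocks as in the equations.

```python
import numpy as np

I2, O2 = np.eye(2), np.zeros((2, 2))
dt, sv, sr = 1.0, 5.0, 6.0   # Delta = 1 s, sigma_v = 5, sigma_r = 6

# Constant-velocity transition over [cx, cy, vx, vy, w, h]
F = np.block([[I2, dt * I2, O2],
              [O2, I2,      O2],
              [O2, O2,      I2]])
Q = sv**2 * np.block([[dt**4 / 4 * I2, dt**3 / 2 * I2, O2],
                      [dt**3 / 2 * I2, dt**2 * I2,     O2],
                      [O2,             O2,             I2]])
# Observe centroid and box size, not the velocities
H = np.block([[I2, O2, O2],
              [O2, O2, I2]])
R = sr**2 * np.eye(4)
```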
Parameters $\eta$ $s_t$ $w_{\gamma,k}$ $\sigma_r$ $\sigma_v$ $p_D$ $p_S$ $\lambda_t$ $U$ $T$ $m$ $T_{ts}$ $C_{ts}$ $V^s_{ts}$ ------------ -------- ------- ---------------- ------------ ---------------- ------- ------- ------------- ---------- ----------- -------------- ---------- ---------- ------------ Values 0.85 0.0 0.1 6 pixels 5 pixels$/s^2$ 0.95 0.99 10 4 pixels $10^{-5}$ 1:k-1 frames 3 0.4 0.6 Experimental Results {#Sec:ExperimentalResults} ==================== In this section, we discuss the experimental settings, an ablation study on the MOT benchmark training set and evaluations on the MOT benchmark test sets in detail. Experimental Settings --------------------- The experimental settings for the proposed online tracker, such as tracking data sets, evaluation metrics and implementation details, are presented as follows. **Tracking Datasets:** We make extensive evaluations of our proposed online tracker using both the MOT16 and MOT17 benchmark data sets [@MOT16], which are captured in unconstrained environments using both static and moving cameras. These data sets consist of 7 training sequences, on which we perform an ablation study as given in section \[Sec:AblationStudy\] (Table \[tbl:MOT16TrainingTp\]), and 7 testing sequences, on which we evaluate and compare our proposed online tracker with other trackers as shown in Table \[tbl:MOT16\] and Table \[tbl:MOT17\]. We use the *public detections* provided by the MOT benchmark with a non-maximum suppression (NMS) threshold of 0.3 for the DPM detector [@FelGirMcA10] (for both MOT16 and MOT17) and 0.5 for the FRCNN [@ShaKaiRos15] and SDP [@YanChoLin16] detectors (for MOT17). **Evaluation Metrics:** We use numerous evaluation metrics which are presented as follows: - Multiple Object Tracking Accuracy (MOTA): A summary of overall tracking accuracy in terms of false positives, false negatives and identity switches, which gives a measure of the tracker's performance at detecting objects as well as keeping track of their trajectories.
- Multiple Object Tracking Precision [@KasGolSou09] (MOTP): A summary of overall tracking precision in terms of bounding box overlap between ground truth and tracked location, which shows the ability of the tracker to estimate precise object positions. - Identification F1 (IDF1) score [@RisSolZou16]: The measure obtained by dividing the number of correctly identified detections by the average of the number of ground-truth and computed detections. - Mostly Tracked targets (MT): Percentage of mostly tracked targets (a target is tracked for at least 80% of its life span, regardless of maintaining its identity) with respect to the total number of ground-truth trajectories. - Mostly Lost targets (ML) [@LiHua09]: Percentage of mostly lost targets (a target is tracked for less than 20% of its life span) with respect to the total number of ground-truth trajectories. - False Positives (FP): Number of false detections. - False Alarms per Frame (FAF): Also referred to as false positives per image (FPPI); measures the false positive ratio. - False Negatives (FN): Number of miss-detections. - Identity Switches (IDSw): Number of times the given identity of a ground-truth track changes. - Fragmented trajectories (Frag): Number of times a track is interrupted (compared to the ground-truth trajectory) due to miss-detection. True positives are detections which have at least 50% overlap with their corresponding ground-truth bounding boxes. For a more detailed description of each metric, please refer to [@MOT16]. **Implementation Details:** Our proposed tracking algorithm is implemented in Matlab on an i7 2.80 GHz core processor with 8 GB RAM. We use MatConvNet [@VedLen15] for CNN feature extraction, where its forward propagation computation is transferred to an NVIDIA GeForce GTX 1050 GPU, and our tracker runs at about 20.4 frames per second (fps).
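As a concrete reading of the MOTA metric listed above: it aggregates the three error counts relative to the total number of ground-truth object instances over all frames. A one-line sketch (all counts passed in below are made up for illustration, not taken from our tables):

```python
# MOTA = 1 - (FP + FN + IDSw) / num_gt, where num_gt is the total number
# of ground-truth object instances summed over all frames.
def mota(fp, fn, idsw, num_gt):
    return 1.0 - (fp + fn + idsw) / num_gt

print(round(mota(fp=200, fn=300, idsw=10, num_gt=1000), 3))  # → 0.49
```

Note that MOTA can be negative when the error counts exceed the number of ground-truth instances.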
The forward propagation in the feature extraction step is the main computational load of our tracking algorithm, especially for constructing the cost due to the visual content in Eq. (\[eqn:OverallSimilarity\]). However, it is significantly faster than our preliminary work in [@Nat19] (3.5 fps), since appearance features are extracted once from the estimates in each frame and then copied to the associated tracks, rather than concatenating both track and estimate patches along the channel dimension and extracting the features from all tracks and estimates in every frame. Ablation Study on MOT16 Benchmark Training Set {#Sec:AblationStudy} ---------------------------------------------- We investigate the contributions of the different components of our proposed online tracker, GMPHD-ReId, on the MOT16 [@MOT16] benchmark training set using public detections. These components are motion information (Mot), appearance information (App), re-identification (ReId) and the add-on unassigned tracks prediction (AddOnPr). First, we evaluate using only the motion information (Mot) as shown in Table \[tbl:MOT16TrainingTp\]. Second, we include appearance information (App) and re-identification (ReId) in addition to the motion information, to see the effect of the learned discriminative deep appearance representations on the tracking performance. Third, we include the additional add-on prediction (AddOnPr) on top of the motion information, appearance information and re-identification, varying the number of predictions ($T_p$) as shown in Table \[tbl:MOT16TrainingTp\] using numerous tracking evaluation metrics. The graphical plot of the MOTA values in Table \[tbl:MOT16TrainingTp\] versus the number of predictions ($T_p$) is shown in Fig. \[fig:MOTAvsTp\]. Accordingly, using only the motion information provides a MOTA value of 31.1 and an IDF1 of 20.6 as shown in Table \[tbl:MOT16TrainingTp\].
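The re-identification (ReId) component evaluated in this ablation reduces to a maximum-cosine-similarity test of an unassigned estimate's CNN feature against the stored dead-track features, confirmed only above the threshold $V^s_{ts} = 0.6$ of section \[Sec:ReId\]. A minimal sketch with made-up feature vectors (the function name and vectors are ours):

```python
import numpy as np

def reidentify(est_feat, dead_feats, v_ts=0.6):
    """Index of the best-matching dead track, or None if none exceeds v_ts."""
    best_id, best_sim = None, v_ts
    for i, f in enumerate(dead_feats):
        sim = float(np.dot(est_feat, f) /
                    (np.linalg.norm(est_feat) * np.linalg.norm(f)))
        if sim > best_sim:              # keep the maximum-similarity match
            best_id, best_sim = i, sim
    return best_id
```

When several dead tracks exceed the threshold, the loop keeps the one with the maximum similarity, matching the confirmation rule stated in the text.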
Including the deeply learned appearance information for data association and re-identification increases the MOTA and IDF1 to 33.8 and 44.2, respectively. We also investigate the influence of the additional add-on prediction (AddOnPr) step by varying the number of predictions $T_p$ from 0 (no AddOnPr) to 12. The maximum MOTA value is obtained at $T_p = 3$ as shown in Table \[tbl:MOT16TrainingTp\] and Fig. \[fig:MOTAvsTp\]. Thus, including the additional add-on prediction with $T_p = 3$ in our proposed online tracker increases the MOTA and IDF1 from 33.8 to 35.8 and from 44.2 to 46.5, respectively. This means an increase of 5.92% and 5.20% in MOTA and IDF1, respectively, is obtained using a very simple additional add-on unassigned tracks prediction. Thus, each component of our proposed online tracker contributes to increasing the tracking performance. Type MOTA$\uparrow$ IDF1$\uparrow$ MOTP$\uparrow$ FAF$\downarrow$ MT (%)$\uparrow$ ML (%)$\downarrow$ FP$\downarrow$ FN$\downarrow$ IDS$\downarrow$ Frag$\downarrow$ ------------------------------- ---------------- ---------------- ---------------- ----------------- ------------------ -------------------- ---------------- ---------------- ----------------- ------------------ Mot Only 31.1 20.6 **77.7** **0.30** 4.00 26.70 **1609** 71396 3016 2794 Mot + App + ReId + 0 AddOnPr 33.8 44.2 77.6 0.34 4.10 26.20 1820 70756 509 2553 Mot + App + ReId + 2 AddOnPr 35.7 46.0 77.0 0.70 5.80 24.20 3704 66839 **442** 1396 Mot + App + ReId + 3 AddOnPr **35.8** 46.5 76.8 0.87 6.00 23.60 4604 65785 447 1274 Mot + App + ReId + 4 AddOnPr 35.6 46.9 76.6 1.06 6.70 23.30 5614 64975 461 1207 Mot + App + ReId + 5 AddOnPr 35.4 47.1 76.5 1.23 6.90 22.70 6550 64322 457 1167 Mot + App + ReId + 7 AddOnPr 34.3 **47.3** 76.3 1.64 7.30 22.40 8731 63346 471 1128 Mot + App + ReId + 10 AddOnPr 32.5 47.1 76.0 2.21 **7.60** 22.20 11767 62248 494 1078 Mot + App + ReId + 12 AddOnPr 31.2 46.5 75.9 2.59 7.50 **22.10** 13757 **61675** 508 **1061** ![MOTA
values of the proposed GMPHD-ReId tracker when the number of predictions ($T_p$) is varied on the **MOT16** [@MOT16] benchmark training set. Maximum MOTA value is obtained at $T_p = T_{ts} = 3$.[]{data-label="fig:MOTAvsTp"}](images2/MOTAvsTp.png){width="1.0\linewidth"} [\[fig:GMPHDFrame04\] ![image](images2/00011-mot-crop.png){height="14.50000%"}]{} [\[fig:GMPHDFrame17\] ![image](images2/00075-mot-crop.png){height="14.50000%"}]{} [\[fig:GMPHDFrame36\] ![image](images2/00125-mot-crop.png){height="14.50000%"}]{}\ [\[fig:GMPHDINTFrame04\] ![image](images2/00011-crop.png){height="14.50000%"}]{} [\[fig:GMPHDINTFrame04\] ![image](images2/00075-crop.png){height="14.50000%"}]{} [\[fig:GMPHDINTFrame04\] ![image](images2/00125-crop.png){height="14.50000%"}]{} Evaluations on MOT Benchmark Test Sets -------------------------------------- After validating our proposed tracker, GMPHD-ReId, on the MOT16 benchmark training set with the add-on prediction with $T_p = 3$ in section \[Sec:AblationStudy\], we compare it against state-of-the-art online and offline tracking methods as shown in Table \[tbl:MOT16\] and Table \[tbl:MOT17\]. Accordingly, a quantitative comparison of our proposed method with other trackers is given in Table \[tbl:MOT16\] on the MOT16 benchmark data set. The table shows that our algorithm outperforms both the online and offline trackers listed in the table in many of the evaluation metrics. When compared to the online trackers, our proposed online tracker outperforms all the others in MOTA, IDF1, MT, ML and FN. The number of identity switches (IDSw) is also significantly lower than for many of the online trackers. Our proposed online tracker outperforms not only many of the online trackers but also several offline trackers in terms of several evaluation metrics. In terms of IDF1 and ML, our proposed online tracker performs better than all of the trackers, both online and offline, listed in the table.
Our online tracker also runs faster than many of both the online and offline trackers, at about 20.4 fps. Our online tracker also gives promising results on the MOT17 benchmark data set, as quantitatively shown in Table \[tbl:MOT17\]. It outperforms all other online trackers in the table in all of the MOTA, IDF1, MT, ML and FN measures. The numbers of IDSw and Frag are also significantly lower than for many of the online trackers in the table. In addition to the online trackers, our proposed online tracker outperforms many of the offline trackers listed in the table. Our proposed online tracker outperforms all the trackers in the table, both online and offline, in terms of ML. The most important comparison to notice here is that of our algorithm to GM-PHD-HDA [@SonJeo16] (GMPHD-SHA for MOT17). Both trackers use the GM-PHD filter but with different approaches for labelling targets from frame to frame. While our tracker uses the Hungarian algorithm for labelling targets by postprocessing the output of the filter, using a combination of spatio-temporal and visual similarities along with visual similarity for re-identification, GM-PHD-HDA uses the approach in [@PanClaVo09] at the prediction stage, also including appearance features for re-identification to label targets. In addition to the GM-PHD-HDA tracker, our proposed tracker outperforms the other GM-PHD filter-based trackers such as GMPHD-KCF [@KutBosEis17], GM-PHD [@EisArpPat12], GM-PHD-N1T (GMPHD-N1Tr) [@BaiWal19] and GM-PHD-DAL (GMPHD-DAL) [@Nat19] (our preliminary work), as shown in Tables \[tbl:MOT16\] and \[tbl:MOT17\], in almost all of the evaluation metrics. A qualitative comparison of our proposed tracker (GMPHD-ReId) and our tracker without appearance information and additional unassigned tracks prediction is given in Fig. \[fig:MOTA1601\] for frames 11, 75 and 125.
Due to detection failures, some target labels are not consistent for our tracker without appearance information and additional unassigned tracks prediction (top row); for instance, labels 2 and 3 in frame 11 are changed to labels 9 and 36, respectively, in frame 75. Similarly, labels 31, 35 and 35 in frame 75 are changed to labels 46, 44 and 42, respectively, in frame 125. However, the labels of the targets are consistent when using the GMPHD-ReId tracker (bottom row). The effect of the additional unassigned tracks prediction is also clearly visible. For instance, a person with label 5 in frame 11 is missed in frames 75 and 125 when using our tracker without appearance information and additional unassigned tracks prediction (top row); however, this same person, with label 1 in frame 11, is tracked in both frames 75 and 125 when using our proposed online tracker which combines all the components together (bottom row): motion information, deep appearance information for both data association and re-identification, and the additional add-on unassigned tracks prediction. In our evaluations, the association cost constructed using only the CNN visual similarity gives a better result than using only the spatio-temporal relation; however, their combination using Eq. (\[eqn:OverallSimilarity\]) gives a better result than either of them. Furthermore, the weighted summation of the costs according to Eq. (\[eqn:OverallSimilarity\]) gives a slightly better result than the Hadamard product (element-wise multiplication) of the two costs. Sample qualitative tracking results are shown in Fig. \[fig:MOTA17\] using the SDP detector and the MOT17 test sequences. The tracking results are represented by bounding boxes with their color-coded identities. On the top row, MOT17-01-SDP and MOT17-03-SDP are shown from left to right. In the first and second middle rows are MOT17-06-SDP and MOT17-07-SDP, and MOT17-08-SDP and MOT17-12-SDP, respectively. Finally, MOT17-14-SDP is shown on the bottom row.
Tracker Tracking Mode MOTA$\uparrow$ MOTP$\uparrow$ IDF1$\uparrow$ MT (%)$\uparrow$ ML (%)$\downarrow$ FP$\downarrow$ FN$\downarrow$ IDSw$\downarrow$ Frag$\downarrow$ Hz $\uparrow$ -------------------------- --------------- ---------------- ---------------- ---------------- ------------------ -------------------- ---------------- ---------------- ------------------ ------------------ --------------- MHT-DAM [@KimLiCip15] offline 6,412 781 0.8 MHT-bLSTM6 [@KimLiReh18] offline 75.9 11,637 753 1,156 1.8 CEM [@MilRotSch14] offline 33.2 75.8 N/A 7.8 54.4 6,837 114,322 642 0.3 DP-NMS [@PirRamFow11] offline 32.2 31.2 5.4 62.1 121,579 972 944 SMOT [@DicCamSzn13] offline 29.7 75.2 N/A 5.3 47.7 17,426 107,552 3,108 4,483 0.2 JPDF-m [@RezMilZha15] offline 26.2 N/A 4.1 67.5 130,549 GM-PHD-HDA [@SonJeo16] **online** 30.5 75.4 33.4 4.6 59.7 5,169 120,970 13.6 GM-PHD-N1T [@BaiWal19] **online** 33.3 25.5 5.5 56.0 116,452 3,499 3,594 9.9 HISP-T [@Nat18] **online** 35.9 76.1 28.9 7.8 50.1 6,406 107,905 2,592 2,299 4.8 OVBT [@BanBaAla16] **online** 38.4 75.4 37.8 7.5 11,517 1,321 2,140 0.3 EAMTT-pub [@MatPoiCav16] **online** 75.1 49.1 8,114 102,452 965 1,657 11.8 JCmin-MOT [@BorJeo17] **online** 36.7 75.9 36.2 7.5 54.4 2,936 111,890 HISP-DAL [@Baisa2019] **online** 37.4 76.3 30.5 7.6 50.9 3,222 108,865 2,101 2,151 3.3 GM-PHD-DAL [@Nat19] **online** 35.1 26.6 7.0 51.4 111,886 4,047 5,338 3.5 **GMPHD-ReId (ours)** **online** 75.2 7,147 815 2,446 Tracker Tracking Mode MOTA$\uparrow$ MOTP$\uparrow$ IDF1$\uparrow$ MT (%)$\uparrow$ ML (%)$\downarrow$ FP$\downarrow$ FN$\downarrow$ IDSw$\downarrow$ Frag$\downarrow$ Hz $\uparrow$ -------------------------- --------------- ---------------- ---------------- ---------------- ------------------ -------------------- ---------------- ---------------- ------------------ ------------------ --------------- MHT-DAM [@KimLiCip15] offline 47.2 22,875 2,314 0.9 MHT-bLSTM [@KimLiReh18] offline 41.7 25,981 3,124 1.9 IOU17 [@BocEisSik17] offline 45.5 39.4 
15.7 281,643 5,988 7,404 SAS-MOT17 [@MakFua19] offline 44.2 76.4 16.1 44.3 29,473 283,611 4.8 DP-NMS [@PirRamFow11] offline 43.7 N/A 12.6 46.5 302,728 4,942 5,342 EAMTT [@MatPoiCav16] **online** 42.6 76.0 41.8 12.7 42.7 30,711 288,474 4,488 5,720 12.0 FPSN [@LeeKim19] **online** 44.9 76.6 33,757 7,136 14,491 10.1 GMPHD-KCF [@KutBosEis17] **online** 40.3 75.4 36.6 8.6 43.1 47,056 283,923 5,734 7,576 3.3 GM-PHD [@EisArpPat12] **online** 36.2 76.1 33.9 4.2 56.6 23,682 328,526 8,025 11,972 OTCD-1 [@LiuWuLi19] **online** 44.9 77.4 42.3 14.0 44.2 291,136 5.5 GMPHD-N1Tr [@BaiWal19] **online** 42.1 33.9 11.9 42.7 297,646 10,698 10,864 9.9 SORT17 [@BewGeOtt16] **online** 43.1 39.8 12.5 42.3 28,398 287,582 4,852 7,127 GMPHD-SHA [@SonJeo16] **online** 43.7 76.5 39.2 11.7 43.0 25,935 287,758 9.2 HISP-DAL17 [@Baisa2019] **online** 77.3 39.9 14.8 39.2 21,820 277,473 8,727 7,147 3.2 GMPHD-DAL [@Nat19] **online** 44.4 77.4 36.2 14.9 39.4 19,170 283,380 11,137 13,900 3.4 **GMPHD-ReId (ours)** **online** 76.5 37,249 4,742 8,636 20.1 \[htb\] [\[fig:DetectionFrame57\] ![image](images2/MOT17-01-SDP-37.png){height="27.20000%"}]{} [\[fig:TriPHDFrame57\] ![image](images2/MOT17-03-SDP-40.png){height="27.20000%"}]{}\ [\[fig:ThreePHDsFrame57\] ![image](images2/MOT17-06-SDP-90.png){height="31.00000%"}]{} [\[fig:ThreePHDsFrame57\] ![image](images2/MOT17-07-SDP-377.png){height="31.00000%"}]{}\ [\[fig:ThreePHDsFrame57\] ![image](images2/MOT17-08-SDP-213.png){height="27.30000%"}]{} [\[fig:ThreePHDsFrame57\] ![image](images2/MOT17-12-SDP-190.png){height="27.30000%"}]{}\ [\[fig:ThreePHDsFrame57\] ![image](images2/MOT17-14-SDP-635.png){height="38.00000%"}]{} Conclusions {#Sec:Conclusions} =========== We have developed a novel multi-target visual tracker based on the GM-PHD filter and deep CNN appearance representation learning. We apply this method to tracking multiple targets in video sequences acquired under varying environmental conditions and target densities.
We followed a tracking-by-detection approach using the public detections provided in the MOT16 and MOT17 benchmark data sets. We integrate the spatio-temporal similarity from the object bounding boxes and the appearance information from the learned deep CNN (i.e. both motion and appearance cues) to label each target in consecutive frames. We learn the deep CNN appearance representations by training an identification network (IdNet) on large-scale person re-identification data sets. We also employ an additional unassigned tracks prediction after the GM-PHD filter update step to overcome the susceptibility of the GM-PHD filter to miss-detections caused by occlusion. Results show that our method outperforms state-of-the-art trackers developed using both online and offline approaches on the MOT16 and MOT17 benchmark data sets in terms of tracking accuracy and identification. In future work, we will include an inter-object relation model for tackling the interactions of different objects. [^1]: \*N. L. Baisa is currently a research scientist at AnyVision. The main part of this work was done when the author was with the Department of Computer Science, University of Lincoln, Lincoln LN6 7TS, United Kingdom. (e-mail: NBaisa@lincoln.ac.uk).
--- abstract: | New versions and extensions of Benson’s outer approximation algorithm for solving linear vector optimization problems are presented. Primal and dual variants are provided in which only one scalar linear program has to be solved in each iteration rather than two or three as in previous versions. Extensions are given to problems with arbitrary pointed solid polyhedral ordering cones. Numerical examples are provided, one of them involving a new set-valued risk measure for multivariate positions. [**Keywords:**]{} Vector optimization, multiple objective optimization, linear programming, duality, algorithms, outer approximation, set-valued risk measure, transaction costs. [**MSC 2010 Classification:**]{} 90C29, 90C05, 90-08, 91G99. author: - 'Andreas H. Hamel' - Andreas Löhne - Birgit Rudloff date: 'February 11, 2013 (update: July 28, 2013)' title: Benson type algorithms for linear vector optimization and applications --- Introduction ============ Set-valued approaches to vector optimization are promising in theory and applications. A duality theory in this framework is important for algorithms, and the dual problems can be interpreted in certain applications, see e.g. [@EhrLoeSha12; @Hamel09; @HamHey10; @HamHeyLoeTamWin04; @HamHeyRud11; @HeyLoe08; @HeyLoeTam09; @HeyLoeTam09-1; @Loehne11book; @ShaEhr08-1]. Benson’s outer approximation algorithm is a fundamental tool for solving linear (and also convex) vector optimization problems [@Benson98a; @Benson98; @EhrLoeSha12; @ShaEhr08; @ShaEhr08-1]. It is also important for solving set-valued problems [@LoeSch12]. Recent applications of linear vector optimization concern financial markets with frictions (transaction costs). For such applications, one obtains optimization problems which are genuinely set-valued. The need to compute the values of a set-valued risk measure for multi-variate random variables was a driving force for this work. 
In this article, we introduce a primal and a dual algorithm of Benson type where only one LP has to be solved in each iteration step[^1]. In contrast, previous versions [@Benson98a; @Benson98; @EhrLoeSha12; @ShaEhr08; @ShaEhr08-1] require at least two different LPs in each step. As the main effort of Benson type algorithms in typical applications is caused by the LPs, the computational time can be reduced considerably by the new algorithms. Another advantage is that all LPs have a very similar structure and therefore the impact of warm starts can be improved. A further benefit is an improvement of the error estimation given in [@ShaEhr08; @ShaEhr08-1], i.e., in approximate variants of the algorithms: The same approximation error can be achieved with fewer iteration steps (compare Remark \[rem\_eps\] and Example \[ex07\] below). For both the primal and dual algorithm two variants (‘break’ and ‘no break’) are presented and compared (compare Example \[ex07\]). Another novelty of this article is that linear vector optimization problems with arbitrary polyhedral solid pointed ordering cones are treated, whereas in all other references [@Benson98a; @Benson98; @EhrLoeSha12; @EhrShaSch11; @Loehne11book; @ShaEhr08; @ShaEhr08-1] only the special case of the usual ordering cone $\R^q_+$ is considered. This feature will be exploited in applications involving set-valued risk measures for multi-variate random variables in markets with transaction costs. In such situations, ordering cones are usually different from $\R^q_+$ and generated by a large number of directions. A short introduction into this topic and several (numerical) examples are given. Examples \[ex\_WCtheory\] and \[ex45\] involve a new type of a set-valued risk measure which we baptized the ‘relaxed’ worst case risk measure. This article is organized as follows. In Section 2 we provide some basic notations and results. The next three sections start with short introductions. 
Section 3 contains an overview on the set-valued approach to linear vector optimization and related duality results where, in contrast to most of the literature, we allow ordering cones more general than $\R^q_+$. In Section 4 we introduce the new variants of Benson’s algorithm. We also give a detailed description of the two-phase-method to treat unbounded problems. Section 5 provides an introduction to applications involving set-valued risk measures, and in Section 6 several numerical examples are reported. Preliminaries ============= Let $A\subset \R^q$. We denote by $\cl A$, $\Int A$, $\bd A$ the closure, interior and boundary of $A$, respectively. The set $A$ is called [*solid*]{} if its interior is non-empty. A [*convex polyhedron*]{} or [*polyhedral convex set*]{} $A$ in $\R^q$ is defined to be the intersection of finitely many half spaces, that is $$A=\bigcap_{i=1}^r \cb{y \in \R^q \st (z^i)^T y \geq \gamma_i}$$ where $z^1, \ldots, z^r \in \R^q{\backslash}\cb{0}$ and $\gamma_1, \ldots, \gamma_r \in \R$. As polyhedra considered in this article are always convex, we will not mention convexity explicitly. Every non-empty polyhedron $A \subseteq \R^q$ can be expressed by means of a finite number of points $y^1, \ldots, y^s \in \R^q$ ($s\in \N\smz$) and directions $k^1, \ldots, k^t \in \R^q\setminus \cb{0}$ ($t \in \N$) through $$A = \cbgg{ \sum_{i=1}^s \lambda_i y^i + \sum_{j=1}^t \mu_j k^j \bigg|\; \lambda_i \geq 0,\; i=1,\dots,s,\; \sum_{i=1}^s \lambda_i = 1,\; \mu_j \geq 0,\;j=1,\dots,t},$$ where $k \in \R^q\smz$ is called a direction of $A$ if $A + \cb{\mu\cdot k} \subseteq A$ for all $\mu>0$. This can also be written as $$\label{eq1} A = \conv \cb{y^1, \ldots,y^s} + \cone\cb{k^1, \ldots, k^t}.$$ Note that we set $\cone \emptyset = \cb{0}$. The polyhedron $A$ is bounded if and only if the cone-part in the above formula is $\cb{0}$. The vectors and directions are called the [*generators*]{} of $A$.
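The V-representation above yields an immediate membership test: $p \in A$ if and only if the system $\sum_i \lambda_i y^i + \sum_j \mu_j k^j = p$, $\sum_i \lambda_i = 1$, $\lambda, \mu \geq 0$ is feasible. The following feasibility-LP sketch illustrates this; the function name and example generators are ours, not from the paper.

```python
import numpy as np
from scipy.optimize import linprog

def in_polyhedron(p, Y, K):
    """Y: q x s matrix of points y^i, K: q x t matrix of directions k^j."""
    q, s = Y.shape
    t = K.shape[1]
    # Equality constraints: Y lam + K mu = p and sum(lam) = 1
    A_eq = np.vstack([np.hstack([Y, K]),
                      np.hstack([np.ones((1, s)), np.zeros((1, t))])])
    b_eq = np.append(p, 1.0)
    # Zero objective: we only ask whether a feasible (lam, mu) >= 0 exists
    res = linprog(c=np.zeros(s + t), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (s + t))
    return res.status == 0

# Segment from (0,0) to (1,0) plus the direction (0,1): an unbounded strip
Y = np.array([[0.0, 1.0], [0.0, 0.0]])
K = np.array([[0.0], [1.0]])
```

Converting between this generator representation and the H-representation is the core task of the vertex-enumeration step in Benson type algorithms.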
The set $A_\infty := \cone\cb{k^1, \ldots, k^t}$ is the recession cone of $A$. A finite set of half spaces defining a polyhedron $A$ is called H-representation (or inequality representation) of $A$, whereas a finite set of points and directions defining $A$ is called V-representation (or generator representation) of $A$. A bounded polyhedron is called a [*polytope*]{}. A convex subset $F$ of a convex set $A$ is called a [*face*]{} of $A$ if $\of{\bar y,\hat y\in A \,\wedge\, \lambda\in (0,1)\,\wedge\,\lambda \bar y + (1-\lambda) \hat y\in F}$ implies $\bar y,\hat y\in F$. A set $F$ is a [*proper*]{} (i.e. $\emptyset\neq F\neq A$) face of a polyhedron $A$ if and only if there is a supporting hyperplane $H$ to $A$ with $F=H\cap A$. The proper $(r-1)$-dimensional faces of an $r$-dimensional polyhedral set $A$ are called [*facets*]{} of $A$. A point $y\in A$ is called a [*vertex*]{} of $A$ if $\cb{y}$ is a face of $A$. If $k \in \R^q\setminus \cb{0}$ belongs to a half-line face of a polyhedral set $A$, then $k$ is called [*extreme direction*]{} of $A$. A polyhedral convex cone $C \subset \R^q$ is called [*pointed*]{} if it contains no lines. Of course, a solid and pointed convex cone is [*non-trivial*]{}, that is, $\cb{0} \subsetneq C \subsetneq \R^q$. A non-trivial convex pointed cone $C \subset\R^q$ defines a partial ordering $\leq_C$ on $\R^q$ by $y^1 \leq_C y^2$ if and only if $y^2-y^1 \in C$. If $C=\R^q_+ :=\cb{y\in \R^q \st y_1 \geq 0, \ldots , y_q \geq 0}$, then the component-wise ordering $\leq_{\R^q_+}$ is abbreviated to $\leq$. A point $y \in \R^q$ is called [*$C$-minimal in $A$*]{} if $y \in A$ and $(\cb{y} -C\smz) \cap A = \emptyset$. The set of $C$-minimal elements of $A$ is denoted by $\Min_C A$. If $C$ is additionally solid (but not necessarily pointed), a point $y \in \R^q$ is called [*weakly $C$-minimal in $A$*]{} if $y \in A$ and $(\cb{y} - \Int C) \cap A = \emptyset$.
Likewise, by replacing $C$ by $-C$, [*$C$-maximal*]{} and [*weakly $C$-maximal*]{} points in a set $A \subseteq \R^q$ are introduced and we write $\Max_C A$ for the set of $C$-maximal elements of $A$. The dual cone of a cone $C \subseteq \R^q$ is the set $C^+:=\cb{y^* \in \R^q \st \forall y \in C: (y^*)^T y \geq 0}$. The $i$-th canonical unit vector in $\R^q$ is denoted by $e^i$. Linear vector optimization ========================== In this section we outline the set-valued approach to linear vector optimization including duality theory and establish a more general setting where arbitrary polyhedral ordering cones $C\subseteq\R^q$ rather than $C=\R^q_+$ are allowed. A comprehensive exposition for the case of the ordering cone $C=\R^q_+$ can be found in [@Loehne11book]. The origin of this approach is discussed in [@Loehne11book Section 4.8]. A related duality theory and an overview on other approaches to duality for linear vector optimization problems can be found in a recent paper by Luc [@Luc11]. Problem setting and solution concepts ------------------------------------- The solution concepts defined in this section are based on the idea that in vector optimization (in contrast to scalar optimization), minimality and infimum attainment are no longer equivalent concepts. In order to have an appropriate complete lattice where an infimum is defined and exists, a set-valued reformulation of the vector optimization problem is necessary. Here we just introduce the solution concepts that result from these ideas. We motivate these concepts from an application-oriented viewpoint only. More details and a theoretical motivation can be found in [@HamLoe12; @HeyLoe11; @Loehne11book]. Let matrices $B \in \R^{m\times n}$, $P \in \R^{q \times n}$, a vector $b \in \R^m$ and a solid pointed polyhedral cone $C \subseteq \R^q$ be given.
The following linear vector optimization problem is considered: $$\tag{P}\label{p} \text{ minimize } P: \R^n\to\R^q \text{ with respect to } \le_C \text{ subject to } Bx \geq b.$$ Define $$S = \cb{x \in \R^n \mid Bx \geq b} \quad \text{and} \quad S^h = \cb{x \in \R^n \mid Bx \geq 0}.$$ Of course, we have $S^h = S_\infty$, that is, the non-zero points in $S^h$ are exactly the directions of $S$. A point $\bar x \in S$ is said to be a [*minimizer*]{} for \eqref{p} if there is no $x \in S$ with $P x \leq_C P \bar x$ and $P x \neq P\bar x$, that is, $P \bar x$ is $C$-minimal in $P[S]:=\cb{Px\st x \in S}$. The set of minimizers of \eqref{p} is denoted by $\Min\eqref{p}$. A direction $k \in \R^n\smz$ of $S$ is called a [*minimizer*]{} for \eqref{p} if the corresponding point $k \in S^h\smz$ is a minimizer of the homogeneous problem $$\tag{P$^h$}\label{ph} \text{ minimize } P: \R^n\to\R^q \text{ with respect to } \le_C \text{ subject to } Bx \geq 0.$$ A set $\bar S \subseteq S$ is called a set of [*feasible*]{} points for \eqref{p} and, whenever $S \neq \emptyset$, a set $\bar S^h \subseteq S^h\smz$ is called a set of feasible directions for \eqref{p}. A pair of sets $\of{\bar S, \bar S^h}$ is called a [*finite infimizer*]{} for \eqref{p} if $\bar S$ is a non-empty finite set of feasible points for \eqref{p}, $\bar S^h$ is a (possibly empty) finite set of feasible directions for \eqref{p}, and $$\label{eq_inf_att} \conv P[\bar S] + \cone P[\bar S^h] +C = P[S] + C.$$ An infimizer can be understood as a feasible element for a set-valued extension of problem \eqref{p} (its lattice extension) where the infimum (which is well defined for this lattice extension) is attained, i.e., condition \eqref{eq_inf_att} stands for the [*infimum attainment*]{}. The set $\P:=P[S]+C$ is called the [*upper image*]{} of \eqref{p}. Clearly, if $(\bar S, \bar S^h)$ is a finite infimizer and $(\cb{0},Y)$ is a V-representation of the cone $C$, then $(P[\bar S],P[\bar S^h] \cup Y)$ is a V-representation of the upper image $\P$. 
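The last observation is purely mechanical and can be sketched in a few lines. The helper below is our own illustration (hypothetical data; `upper_image_vrep` is not part of any solver): it assembles the V-representation $(P[\bar S],\,P[\bar S^h]\cup Y)$ of the upper image from a finite infimizer and the generators of $C$.

```python
def upper_image_vrep(P, S_bar, Sh_bar, Y_cols):
    """Assemble a V-representation (points, directions) of the upper image
    P[S] + C from a finite infimizer (S_bar, Sh_bar) and generators Y_cols of C."""
    def image(x):
        # matrix-vector product P x
        return tuple(sum(P[i][j] * x[j] for j in range(len(x))) for i in range(len(P)))
    points = [image(x) for x in S_bar]
    directions = [image(x) for x in Sh_bar] + list(Y_cols)
    return points, directions

P = [[1, 0], [0, 1]]                 # identity objective (illustration)
S_bar = [(0.0, 1.0), (1.0, 0.0)]     # hypothetical infimizer points
Sh_bar = []                          # bounded case: no directions needed
Y_cols = [(1.0, 0.0), (0.0, 1.0)]    # generators of C = R^2_+
print(upper_image_vrep(P, S_bar, Sh_bar, Y_cols))
```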
The following solution concept is based on a combination of minimality and infimum attainment. \[def\_sol\] A finite infimizer $(\bar S, \bar S^h)$ of \eqref{p} is called a [*solution*]{} to \eqref{p} if $\bar S$ and $\bar S^h$ consist of only minimizers. In practice, the upper image $\P$ is one of the most important pieces of information for a decision maker. This is due to the fact that in typical applications the dimension $n$ of the decision space is considerably larger than the dimension $q$ of the outcome (or criteria) space. The problem of computing all minimizers is usually not tractable. Moreover, the overwhelming set of all minimizers is in general not suitable to support a decision. It is more natural and easier in practice to compare the criteria rather than the decisions. Furthermore, also in scalar programming it is often not necessary to know all optimal solutions. A solution as introduced above can be seen as an outcome-set-based concept which provides the information to describe the upper image $\P$. Duality ------- If vector optimization is considered in a set-valued framework, it is very natural to consider a dual problem with a hyperplane-valued objective function. First, we note that this is also the case in scalar optimization, as in $\R$ a point and a hyperplane are the same thing. Secondly, we have in mind the well-known dual description of a convex set by hyperplanes. The upper image $\P$ of a linear vector optimization problem is a convex polyhedron which can be interpreted as an infimum of the lattice extension of problem \eqref{p}, see [@Loehne11book]. It is therefore natural to ask for a dual description of this convex set which is obtained as the supremum of a suitable dual problem. As a third argument, we want to mention that there is a lack of applications of the classical approaches to duality theory in vector optimization. 
For instance, Ehrgott [@Ehrgott_NC] pointed out that “dual algorithms could not be developed because of the absence of a duality theory for MOLP that could be algorithmically exploited.” The idea of [*geometric duality*]{} [@HeyLoe08] is to transform the hyperplane-valued dual problem into a vector optimization problem. This idea is taken from the theory of convex polytopes, where an H-representation of a polytope $A$ defines a V-representation of a dual polytope. For instance, if $A$ is solid and contains zero in its interior, an H-representation of the form $$A=\cb{x\in \R^q \st B x \leq (1,\dots,1)^T }$$ exists. The row vectors of the matrix $B$ yield a V-representation of the polar set $A^\circ:=\{y^*\in\R^q \st \forall y \in A: {y^*}^T y \leq 1 \}$ of $A$, which is a dual polytope to the polytope $A$. The duality relation between $A$ and the dual polytope $A^\circ$ is given by an inclusion reversing one-to-one map $\Psi$ between the set of all faces of $A$ and the set of all faces of $A^\circ$. A similar duality map can be used to transform a hyperplane-valued optimization problem into a vector-valued problem, which is called the [*geometric dual problem*]{}. We assume throughout that there exists a vector $$\label{ass_c} c \in \Int C \quad \text{ such that } \quad c_q=1,$$ and we fix such a vector $c$. As $C$ was assumed to be a solid cone, there always exists some $c \in \Int C$ such that either $c_q=1$ or $c_q =-1$. In the latter case we can consider the problem where $C$ and $P$ are replaced by $-C$ and $-P$, which is equivalent to (P) and fulfills \eqref{ass_c}. 
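The polar-polytope construction above can be checked on a small example. The sketch below (our own illustration, not part of the paper's algorithms) takes the square $A=[-1,1]^2$ with H-representation $Bx \le (1,1,1,1)^T$ and verifies that the rows of $B$ lie in the polar set $A^\circ$:

```python
# H-representation of the square A = [-1,1]^2: rows of B with Bx <= (1,1,1,1)^T
B = [(1, 0), (-1, 0), (0, 1), (0, -1)]
# Vertices of A (a V-representation of the polytope itself)
vertices_A = [(1, 1), (1, -1), (-1, 1), (-1, -1)]

# A° = {y* : y*^T y <= 1 for all y in A}.  Since the maximum of a linear
# function over a polytope is attained at a vertex, it suffices to check
# the vertices of A for each candidate row of B.
for row in B:
    assert max(row[0] * v[0] + row[1] * v[1] for v in vertices_A) <= 1
print("rows of B lie in the polar set A°")
```

Here $A^\circ=\conv\cb{\pm e^1,\pm e^2}$, i.e., the rows of $B$ are exactly its vertices, in line with the duality map $\Psi$ exchanging facets of $A$ and vertices of $A^\circ$.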
Consider the dual problem $$\label{d} \tag{D$^*$} \text{ maximize } D^*: \R^m \times \R^q \to \R^q \text{ with respect to } \leq_K \text{ over } T,$$ with (linear) objective function $$D^*:\R^m\times\R^q \to \R^q,\quad D^*(u,w):=\of{w_1,\dots,w_{q-1},b^T u}^T,$$ ordering cone $$K:=\R_+ \cdot (0,0,\dots,0,1)^T,$$ and feasible set $$T:=\cb{(u,w)\in \R^m \times \R^q \st u \geq 0,\; B^T u = P^T w,\; c^T w = 1,\; Y^T w \geq 0},$$ where $Y$ is the matrix whose columns are generators of the ordering cone $C$. A point $(\bar u, \bar w) \in T$ is said to be a [*maximizer*]{} for \eqref{d} if there is no $(u,w) \in T$ with $D^*(u,w) \geq_K D^*(\bar u,\bar w)$ and $D^*(u,w) \neq D^*(\bar u,\bar w)$. The set of maximizers of \eqref{d} is denoted by $\Max\eqref{d}$. A non-empty finite set $\bar T$ of points being feasible for \eqref{d} is called a [*finite supremizer*]{} of \eqref{d} if $$\label{eq_sup_att} \conv D^*[\bar T] - K = D^*[T] -K.$$ The set $\D^*= D^*[T]-K$ is called the [*lower image*]{} of \eqref{d}. Similarly to above, a finite supremizer yields a V-representation of $\D^*$. Condition \eqref{eq_sup_att} can be interpreted as the attainment of the supremum in a suitable complete lattice, see e.g. [@Loehne11book]. The combination of maximality and supremum attainment leads to a solution. \[def\_sold\] A finite supremizer $\bar T$ of \eqref{d} is called a [*solution*]{} to \eqref{d} if it consists of only maximizers. Note that, in contrast to \eqref{p}, we do not need directions in \eqref{d}. This is due to the simplicity of the cone $K$ in contrast to $C$. A duality mapping $\Psi$ for the vector optimization problems \eqref{p} and \eqref{d} is now introduced. 
The bi-affine function $$\varphi:\; \R^q\times\R^q \to \R,\quad \varphi(y,y^*):=\sum_{i=1}^{q-1} y_i y^*_i + y_q \of{1-\sum_{i=1}^{q-1} c_i y^*_i} - y^*_q$$ is used to define the following two injective hyperplane-valued maps $$H: \R^q \rightrightarrows \R^q,\quad H(y^*):= \cb{y \in \R^q \st \varphi(y,y^*)=0},$$ $$H^*:\R^q \rightrightarrows \R^q,\quad H^*(y):= \cb{y^* \in \R^q \st \varphi(y,y^*)=0}.$$ The mapping $H$ yields the duality map $$\Psi: 2^{\R^q} \to 2^{\R^q}, \quad\Psi( F^*):=\bigcap_{y^* \in F^*} H(y^*) \cap \P.$$ By setting $$\label{eq_w} w(y^*):= \of{y^*_1,\dots,y^*_{q-1},1-\sum_{i=1}^{q-1} c_i y^*_i}$$ and $$w^*(y):=\of{y_1-y_q c_1\,,\dots,\,y_{q-1}-y_q c_{q-1}\,,\,-1} ,$$ we can write $$\label{eq_phiw} \varphi(y,y^*)= w(y^*)^T y -y^*_q = w^*(y)^T y^* + y_q,$$ which is useful for the geometric interpretation of duality. Weak duality reads as follows. \[th\_wgd\] The following implication is true: $$\sqb{y \in \P \wedge y^* \in \D^*} \implies \varphi(y,y^*)\geq 0.$$ Note that weak duality implies the inclusions $$\D^* \subseteq \cb{y^* \in \R^q \st \forall y \in \P: \; \varphi(y,y^*)\geq 0} \quad \text{and} \quad \P \subseteq \cb{y \in \R^q \st \forall y^* \in \D^*: \; \varphi(y,y^*)\geq 0},$$ whereas the following strong duality theorem yields even equality. \[th\_fsd\] Let the feasible sets $S$ of \eqref{p} and $T$ of \eqref{d} be non-empty. Then $$\begin{aligned} \sqb{\forall y^* \in \D^*: \varphi(y,y^*)\geq 0} &\implies y \in \P ,\\ \sqb{\forall y \in \P: \varphi(y,y^*)\geq 0} &\implies y^* \in \D^*.\end{aligned}$$ \[rem\_3.7\] Theorems \[th\_wgd\] and \[th\_fsd\] are formulated and proved in the mentioned references only for the special case $C=\R^q_+$ and $c=(1,\dots,1)^T$. However, using the generalized variants of the scalarized problems stated below, the general case can be obtained in the same way. The following [*geometric duality theorem*]{} provides a third type of duality relation. 
It takes into account the facial structure of the sets $\P$ and $\D^*$. Note that geometric duality does not play any role in scalar optimization because the structure of polyhedra in the objective space $\R$ is very simple. \[th\_mr\] $\Psi$ is an inclusion reversing one-to-one map between the set of all $K$-maximal proper faces of $\D^*$ and the set of all proper faces of $\P$. The inverse map is given by $$\Psi^{-1}(F)=\bigcap_{y \in F} H^*(y) \cap \D^*.$$ Moreover, if $F^*$ is a $K$-maximal proper face of $\D^*$, then $$\dim F^* + \dim \Psi( F^*) = q-1.$$ \[rem\_3.9\] The proof of the special case $c=(1,\dots,1)^T$ and $C=\R^q_+$ can be found in [@HeyLoe08]. The general case can be proved in the same way using the generalized versions of the scalarized problems defined below. Theorem \[th\_mr\] (in the general setting) is also a special case of the geometric duality theorem for convex vector optimization problems, see Example 3 in [@Heyde11]. \[rem\_3.10\] Non-$K$-maximal proper facets (faces of dimension $q-1$) of $\D^*$ correspond to the extreme directions of $\P$ by a similar duality relation, where the coupling function $\varphi$ has to be replaced by $\hat \varphi :\R^q\times\R^q\to \R, \quad \hat\varphi(y,y^*):=\varphi(y,y^*)+y^*_q$. The case $C=\R^q_+$, $c=(1,\dots,1)^T$ has been studied in [@Loehne11book] and the general case is obtained likewise using the generalized variants of the scalarized problems defined below. The following scalarization techniques are fundamental for the algorithms described in the next section. As mentioned in Remarks \[rem\_3.7\], \[rem\_3.9\] and \[rem\_3.10\], they can also be used to prove weak, strong and geometric duality. The [*weighted sum*]{} scalarized problem for a parameter vector $w \in C^+$ satisfying $c^T w = 1$ is $$\tag{P$_1(w)$}\label{p1} \min w^T P x \quad \text{ subject to } \; Bx \geq b.$$ Its dual problem is $$\tag{D$_1(w)$}\label{d1} \max b^T u \quad \text{ subject to } \; \left\{ \begin{array}{rcl} B^T u &=& P^T w\\ u &\geq& 0. 
\end{array}\right.$$ The [*translative scalarization*]{} (or scalarization by reference variable) is based on the problem $$\tag{P$_2(y)$} \label{p2} \min z \quad \text{ subject to } \; \left\{ \begin{array}{rcl} Bx &\geq& b\\ Z^T P x &\leq& Z^T y + z \cdot Z^T c, \end{array}\right.$$ where $Z$ is the matrix whose columns are the generating directions of the dual cone $C^+$ of the ordering cone $C$. The dual program is $$\tag{$\bar {\rm D}_2(y)$} \label{d2a} \max b^T u - y^T Z v \quad \text{ subject to } \; \left\{ \begin{array}{rcl} B^T u - P^T Z v&=& 0\\ c^T Z v&=& 1\\ (u,v) &\geq& 0. \end{array}\right.$$ This problem can be equivalently expressed as $$\tag{D$_2(y)$}\label{d2} \max b^T u - y^T w \quad \text{ subject to } \; \left\{ \begin{array}{rcl} B^T u - P^T w&=& 0\\ c^T w&=& 1\\ Y^T w&\geq& 0\\ u &\geq& 0, \end{array}\right.$$ where $Y$ is the matrix of generating directions of the ordering cone $C$. The equivalence of \eqref{d2a} and \eqref{d2} is a consequence of the following assertion. For vectors $w \in \R^q$, we have $$Y^T w \geq 0 \iff \forall y \in C:\; y^T w \geq 0 \iff w \in C^+ \iff \exists v \geq 0:\; w = Z v.$$ Benson’s algorithm and its dual variant {#sec_alg} ======================================= \[sec\_Bensonalgo\] Benson [@Benson98a; @Benson98] motivated his outer approximation algorithm by practical problems that typically involve a huge number of variables and constraints but just a few objective functions. He identified three advantages of “outcome set-based approaches” in comparison to “decision set-based approaches”. First, he observed that the set of minimal elements (in the outcome space $\R^q$) has a simpler structure than the set of minimizers (in the decision space $\R^n$) because, typically, $q \ll n$. This is beneficial for computational reasons but also for the decision maker. Second, in practice, decision makers prefer to base their decisions (at least in a first stage) on objectives rather than directly on a set of efficient decisions. 
Third, it is generic that many feasible points are mapped to a single image point, which may lead to redundant calculations of “little or no use to the decision maker” [@Benson98]. Comparing this motivation with the notions of the previous section, we see that the solution concepts are based on exactly the same motivation (but additionally there is a theoretical motivation, see [@HeyLoe11; @Loehne11book]). It is therefore not surprising that the variants of Benson’s algorithm presented here just compute solutions in the sense of the previous section. The dual variant of the algorithm (based on geometric duality) has been established in [@EhrLoeSha07; @EhrLoeSha12]. It was followed by approximating variants [@ShaEhr08; @ShaEhr08-1] and by a generalization of the primal algorithm to convex problems [@EhrShaSch11]. Unbounded problems were first treated in [@Loehne11book]. Problem \eqref{p} is said to be [*bounded*]{} if $$\exists y \in \R^q:\quad P[S] \subset \cb{y} + C.$$ The generalization from $\R^q_+$ to arbitrary pointed solid polyhedral convex cones is new in this article but has already been used in [@LoeRud11]. We will present simplified variants where only one linear program (rather than two or three) has to be solved in each iteration[^2]. The idea of the primal algorithm is to evaluate the upper image $\P = P[S] + C$ of problem \eqref{p} by constructing appropriate cutting planes. This leads to an iterative refinement of an outer approximation $\T \supset \P$ by a decreasing sequence of polyhedral supersets $$\T^0 \supset \T^1 \supset \dots \supset \T^k = \P.$$ Both an H-representation and a V-representation of the approximating supersets $\T^i$ are stored. The algorithm terminates after finitely many steps (say $k$ steps) when the outer approximation coincides with $\P$. Unbounded problems are treated by a two-phase method. 
First, one solves the homogeneous problem \eqref{ph} (which is unbounded, too) and its dual problem $$\label{dh} \tag{D$^{*h}$} \text{ maximize } D^{*h}: \R^m \times \R^q \to \R^q \text{ with respect to } \leq_K \text{ over } T$$ with objective function $$D^{*h}:\R^m\times\R^q \to \R^q,\quad D^{*h}(u,w):=\of{w_1,\dots,w_{q-1},0}^T.$$ To this end, \eqref{ph} is transformed into an equivalent bounded problem $$\tag{P$^\eta$}\label{p_eta} \text{ minimize } P: \R^n\to\R^q \text{ with respect to } \le_C \text{ subject to } Bx \geq 0, \; \eta^T P x \leq 1,$$ where $\eta \in \Int (\D^{*h}+K)$ with $c^T \eta=1$ ($\D^{*h}$ denotes the lower image of \eqref{dh}). In the second phase, a primal and a dual solution of the homogeneous problem are used to calculate a primal and a dual solution of the original (inhomogeneous and unbounded) problem \eqref{p}. The two-phase method requires an algorithm that works whenever an H-representation of an initial outer approximation $\T^0 \supset \P$ with $\T^0_\infty = \P_\infty$ is known. If an H-representation of $\P_\infty$ is known, that is, $$\P_\infty=\cb{y \in \R^q \st (w^i)^T y \geq 0, i=1,\dots,r},$$ and if $\gamma_i$ denotes the optimal value of (P$_1(w^i)$) for $i=1,\dots,r$, then $$\T^0=\cb{y \in \R^q \st (w^i)^T y \geq \gamma_i, i=1,\dots,r}$$ is the desired outer approximation of $\P$ satisfying $\T^0_\infty=\P_\infty$. If problem \eqref{p} is bounded, we have $C=\P_\infty$, i.e., an H-representation of the ordering cone $C$ is required. Otherwise, whenever $\eqref{p}$ is feasible, the upper image $\P^h:=P[S^h]+C$ of the homogeneous problem coincides with $\P_\infty$. By geometric duality, a dual solution to \eqref{dh} yields an H-representation of $\P^h=\P_\infty$. The idea of such an algorithm can be explained geometrically. Let $\T^0 \supset \P$ be an initial outer approximation of $\P$, i.e., $\T^0_\infty = \P_\infty$. First, the vertices of $\T^0$ are computed from its H-representation. 
This can be realized by [*vertex enumeration*]{}, which is a standard method in Computational Geometry, see e.g. [@BarDobHuh96; @BreFukMar98]. Secondly, for a vertex $t^0 \in \T^0$, problem (P$_2(t^0)$) is solved. Usually, LP solvers yield simultaneously a solution of both the primal and the dual problem. If the optimal value of (P$_2(t^0)$) is zero, then $t^0$ belongs to $\P$ and one proceeds with the next vertex of $\T^0$. If every vertex of $\T^0$ belongs to $\P$, we have $\T^0=\P$. Otherwise, for $t^0 \not\in \P$, a solution of (P$_2(t^0)$) yields a point $s^0 \in \bd\P \cap \Int \T^0$, see Proposition \[prop\_1\] below. The solution of the dual problem defines a supporting hyperplane $H^0$ of $\P$ that contains $s^0$. The corresponding halfspace $H^0_+$ contains $\P$ but not $t^0$. An H-representation of an improved outer approximation $\T^1 := \T^0 \cap H^0_+$ is obtained immediately. This procedure is repeated until, after finitely many steps, $\T^k = \P$. A solution $(\bar S,\bar S^h)$ of \eqref{p} is obtained by collecting those points $x$ that arise during the procedure from a solution $(x,z)$ of (P$_2(t^i)$) with zero optimal value. In this case we have $t=Px$ for some vertex $t$ of $\T^i$. Hence $Px$ is a vertex of $\P$, which implies that $x$ is a minimizer for \eqref{p}. In the unbounded case, $\bar S^h$ contains directions that originate from a solution to the homogeneous problem. A solution of the dual vector optimization problem \eqref{d} is obtained by collecting those dual solutions of (P$_2(t^i)$) with non-zero optimal value. \[prop\_1\] Let $S \neq \emptyset$, let $C\subseteq\R^q$ be a solid pointed polyhedral cone and let $c \in \Int C$. For every $t \in \R^q$, there exist optimal solutions $(\bar x,\bar z)$ to $({\rm P}_2(t))$ and $(\bar u,\bar w)$ to $({\rm D}_2(t))$. Each solution $(\bar u, \bar w)$ to $({\rm D}_2(t))$ defines a supporting hyperplane $H:=\cb{y \in \R^q \st \bar w^T y = b^T \bar u}$ of $\P$ such that $s := t + \bar z\cdot c \in H\cap \P$. 
We have $$t \not\in \P \iff \bar z > 0, \qquad t \in \wMin\P \iff \bar z = 0,\qquad t \in \Int \P \iff \bar z < 0.$$ Fix $t\in \R^q$. Since $S\neq \emptyset$ and $c \in \Int C$, $({\rm P}_2(t))$ is feasible. Assuming $({\rm P}_2(t))$ is unbounded, we obtain $t + (z-n)c - P x \in C$ for all $n \in \N$. Dividing by $n$ and letting $n\to \infty$, we conclude $-c \in C$. As $c \in \Int C$, convexity of $C$ implies $0 \in \Int C$. Thus $C=\R^q$, a contradiction. Consequently, $({\rm P}_2(t))$ and, by duality, also $({\rm D}_2(t))$ have optimal solutions $(\bar x,\bar z)$ and $(\bar u, \bar w)$, respectively. Duality yields $b^T \bar u - t^T \bar w = \bar z$ and thus $s=t+ \bar z c$ belongs to $H$. Of course, $H$ is a hyperplane, as the constraint $\bar w^T c =1$ of $({\rm D}_2(t))$ implies $\bar w \neq 0$. From $P \bar x \leq_C t + \bar z\cdot c$ we conclude that $s=t+ \bar z c$ belongs to $\P$. For arbitrary $y \in \P$, there exists $x \in S$ such that $y \geq_C P x$. Hence $(x,0)$ is feasible for $({\rm P}_2(y))$. Weak duality between $({\rm P}_2(y))$ and $({\rm D}_2(y))$ implies that $y^T w \geq b^T u$ for every $(u,w)\in T$, in particular, $y^T \bar w \geq b^T \bar u$. Hence $H = \cb{y \in \R^q \st y^T \bar w = b^T \bar u}$ is a supporting hyperplane to $\P$. The remaining statements are now obvious, where the fact $\wMin \P = \bd \P$ can be used. \[prop\_2\] Every vertex of $\P$ is minimal. Let $y \in \P = P[S] + C$ be not minimal for $\P$. Then there are $z \in \P$ and $k \in C\setminus \cb{0}$ such that $z = y - k$. The points $y - k$ and $y + k$ belong to $\P$ and we have $y = \frac{1}{2}(y - k) + \frac{1}{2}(y + k)$. Hence $y$ is not a vertex of $\P$. Two functions are used in the following algorithm. The function [*dual()*]{} computes a V-representation of an outer approximation $\T$ from an H-representation of $\T$, i.e., this function consists essentially of vertex enumeration. 
This H-representation of $\T$, however, is stored in a dual format, namely, as a V-representation of an inner approximation $\T^*$ of the lower image $\D^*$ of \eqref{d}, where $$\label{tts} \T^*=\cb{y^*\in \R^q \st \varphi(y,y^*) \geq 0, y \in \T}.$$ The following duality relation holds. \[prop\_tst\] If $\emptyset\neq\T\subsetneq\R^q$ is closed and convex and $\T_\infty\supseteq C$, then $$\label{tst} \T=\cb{y \in \R^q \st \varphi(y,y^*) \geq 0, y^* \in \T^*}.$$ The inclusion $\subseteq$ is obvious. Assume that the inclusion $\supseteq$ does not hold, i.e., there exists $\bar y \in \R^q\setminus \T$ with $\varphi(\bar y,y^*)\geq 0$ for all $y^* \in \T^*$. Applying separation arguments, we get $\eta \in C^+\smz$ with $\eta ^T \bar y < \inf_{y \in \T} \eta^T y := \gamma$. Since $c \in \Int C$, we have $\eta^T c > 0$, so we can assume $\eta^T c = 1$. Setting $\bar y^*:=(\eta_1,\dots,\eta_{q-1},\gamma)$, we get $w(\bar y^*)=\eta$ and $\varphi(y,\bar y^*)=w(\bar y^*)^T y-\bar y^*_q=\eta^T y-\gamma$. For all $y \in \T$, we have $\eta^T y-\gamma \geq 0$, i.e., $\bar y^* \in \T^*$. But $\varphi(\bar y,\bar y^*)=\eta^T \bar y-\gamma< 0$, a contradiction. In the following algorithm, a V-representation of a polyhedron $\T$ (that contains no lines) is denoted by $(\T_{points},\T_{dirs})$, i.e., $\T = \conv \T_{points} + \cone \T_{dirs}$. We assume further that a V-representation returned by the function [*dual()*]{} is [*minimal*]{}, i.e., $\T_{points}$ consists of only vertices of $\T$ and $\T_{dirs}$ consists of only extreme directions of $\T$. The function [*solve()*]{} returns an optimal solution $(x,z)$ of (P$_2(t)$) and an optimal solution $(u,w)$ of (D$_2(t)$). Since (D$_2(t)$) is, up to a substitution, the dual program of (P$_2(t)$), only one linear program has to be solved. The variables in the following algorithm are arrays of vectors. By $\abs{A}$ we denote the number of vectors in an array $A$ and by $A[i]$ we refer to the $i$-th vector in $A$. The command [*break*]{} exits the inner-most loop. ### Algorithm 1. 
{#algorithm-1. .unnumbered} Input:\ $B, b, P, Z$ (data of \eqref{p});\ a solution $(\cb{0},\bar S^h)$ to \eqref{ph};\ a solution $\bar T^h$ to \eqref{dh}; Output:\ $(\bar S,\bar S^h)$ ... a solution to \eqref{p};\ $\bar T$ ... a solution to \eqref{d};\ $(\T_{points},\T_{dirs})$ ... a V-representation of $\P$;\ $(\T^*_{points},\cb{-e^q})$ ... a V-representation of $\D^*$; \ $\bar T \leftarrow \cbg{\ofg{\text{solve(D$_1$($w$))},w} \big|\; (u,w) \in \bar T^h}$;\ [**repeat**]{}\ $flag \leftarrow 0$;\ $\bar S \leftarrow \emptyset$;\ $\T^*_{points} \leftarrow \cb{D^*(u,w)\st (u,w) \in \bar T}$;\ $(\T_{points},\T_{dirs}) \leftarrow \dual(\T^*_{points},\cb{-e^q})$;\ [**for**]{} $i=1$ [**to**]{} $\abs{\T_{points}}$ [**do**]{}\ $t \leftarrow \T_{points}[\,i\,]$;\ $(x,z,u,w)\leftarrow$ solve(P$_2$($t$)/D$_2$($t$));\ [**if**]{} $z > 0$ [**then**]{}\ $\bar T \leftarrow \bar T\cup \cb{(u,w)}$;\ $flag \leftarrow 1$;\ break; (optional)\ [**else**]{}\ $\bar S \leftarrow \bar S \cup \cb{x}$;\ [**end if**]{};\ [**end for**]{};\ [**until**]{} $flag = 0$;\ \[corr\_fin\_1\] Let $S\neq \emptyset$, suppose $\P^h$ has a vertex and assume that the command $$(\T_{points},\T_{dirs}) \leftarrow \dual(\T^*_{points},\T^*_{dirs})$$ generates a minimal V-representation of $\T$ from a given V-representation of $\T^*$ according to \eqref{tst}. Then Algorithm 1 is correct and finite. As $\bar T^h$ is non-empty (by the definition of a solution), we can choose some $(u,w)\in \bar T^h$. Then $D^{*h}(u,w)=(w_1,\dots,w_{q-1},0)$ is $K$-maximal in $\D^{*h}$. Hence $u$ solves the homogeneous variant (i.e., we set $b=0$) of \eqref{d1}. Consequently, (D$_1$($w$)) (for arbitrary $b$) is feasible. Since $S \neq \emptyset$, (P$_1$($w$)) is feasible, too. Thus, by linear programming duality, (D$_1$($w$)) has a solution. The set $\T^*:=\conv \T^*_{points} + \cone \cb{-e^q}$ is a non-empty subset of $\D^*$. Hence, by Theorem \[th\_wgd\], after calling the function [*dual()*]{}, $\T:=\conv \T_{points} + \cone \T_{dirs}$ is a superset of $\P$. As $\bar T^h$ solves the dual of the homogeneous problem, we have $\T_\infty=\P_\infty=\P^h$, see [@Loehne11book Section 4.6] for more details. 
As $\P^h$ is assumed to have a vertex, $\T$ must have a vertex, hence the array $\T_{points}$ is non-empty. By Proposition \[prop\_1\], solutions to (P$_2$($t$)) and (D$_2$($t$)) exist. The vectors $x \in \bar S$ are minimizers of \eqref{p}. Indeed, $x$ is added to $\bar S$ only if $z=0$. In this case, we have $t \in \P$, where $t$ is a vertex of $\T\supseteq \P$ because, by assumption, $\T_{points}$ contains only vertices of $\T$. Hence $t$ is a vertex of $\P$ and, by Proposition \[prop\_2\], $t = Px$ is $C$-minimal, i.e., $x$ is a minimizer for \eqref{p}. The algorithm terminates if all vertices of $\T$ belong to $\P$. Since $\P_\infty=\T_\infty$, we conclude $\P=\T$, i.e., $(\bar S,\bar S^h)$ is an infimizer of \eqref{p} and $(\T_{points},\T_{dirs})$ is a V-representation of $\P$. A solution $(u,w)$ to (D$_2$($t$)) is always a maximizer of $\eqref{d}$, i.e., $\bar T$ consists of only maximizers. Since at termination $\T=\P$, Theorem \[th\_fsd\] implies $\T^*=\D^*$ and thus $\bar T$ is a supremizer for \eqref{d} and $(\T^*_{points},\cb{-e^q})$ is a V-representation of $\D^*$. Finally we show that the algorithm terminates after a finite number of steps. The point $s^k:=t^k + z^k \cdot c$ computed in iteration $k$ (consider the ‘repeat’ loop) by solving (P$_2$($t^k$)/D$_2$($t^k$)) belongs to $\Int \T^{k-1}$ whenever $z^k>0$. We have $ \T^k := \T^{k-1} \cap \{ y \in \R^q \st (w^k)^T y \geq (u^k)^T b\}$ and by Proposition \[prop\_1\] we know that $F:=\{ y \in \P \st (w^k)^T y = (u^k)^T b\}$ is a face of $\P$ with $s^k \in F$, where $F \subset \bd \T^k$. This means for the next iteration that $s^{k+1} \not \in F$ (because $s^{k+1} \in \Int \T^k$), and therefore $s^{k+1}$ belongs to another face of $\P$. Since $\P$ is polyhedral, it has a finite number of faces, hence the algorithm is finite. We now turn to the dual variant of Algorithm 1. An analogous construction is now applied to the lower image $\D^*$, i.e., a finite sequence of polyhedral sets $$\T^{*0} \supseteq \T^{*1} \supseteq \dots \supseteq \T^{*k} = \D^*$$ is calculated. 
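Both the primal and the dual variant delegate the conversion between representations to the function [*dual()*]{}, i.e., essentially to vertex enumeration. For $q=2$ this can be sketched by brute force; the function below (our own illustration, not one of the production methods cited above) intersects boundary lines pairwise and keeps the feasible intersection points:

```python
from itertools import combinations

def vertices_2d(halfspaces, tol=1e-9):
    """Naive vertex enumeration for T = {y in R^2 : a^T y >= g}, given as a
    list of pairs ((a1, a2), g).  Intersect boundary lines pairwise and keep
    the intersection points that satisfy all inequalities."""
    verts = []
    for ((a1, a2), g1), ((b1, b2), g2) in combinations(halfspaces, 2):
        det = a1 * b2 - a2 * b1
        if abs(det) < tol:
            continue  # parallel boundary lines: no unique intersection
        y = ((g1 * b2 - g2 * a2) / det, (a1 * g2 - b1 * g1) / det)  # Cramer's rule
        if all(c1 * y[0] + c2 * y[1] >= g - tol for (c1, c2), g in halfspaces):
            verts.append(y)
    return verts

# T = {y : y1 >= 0, y2 >= 0, y1 + y2 >= 1} has exactly the vertices (1,0), (0,1)
hs = [((1.0, 0.0), 0.0), ((0.0, 1.0), 0.0), ((1.0, 1.0), 1.0)]
print(sorted(vertices_2d(hs)))
```

For higher dimensions and for numerically robust output one uses the dedicated vertex enumeration methods referenced earlier.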
Using the upper image $\P^h$ (which is a polyhedral cone) of the homogeneous problem \eqref{ph}, we define the set $$\Delta := \cb{y^* \in \R^q \st w(y^*) \in (\P^h)^+}.$$ The counterpart of Proposition \[prop\_1\] is the following. \[prop\_d1\] Let $S\neq \emptyset$ and $t^* \in \Delta$. For $w:=w(t^*)$, $({\rm P}_1(w))$ has a solution and for every such solution $\bar x$, $H^*(P \bar x)$ is a supporting hyperplane of $\D^*$ that contains $$\label{eq_sst} s^*:=(t^*_1,\dots,t^*_{q-1},w^T P \bar x) \in \textstyle\Max_K \D^*.$$ Moreover, we have $$t^* \not\in \D^* \iff w^T P\bar x < t^*_q, \quad\qquad t^* \in \textstyle\Max_K \D^* \iff w^T P\bar x = t^*_q.$$ Since $t^* \in \Delta$, for all $k \in \P^h$, we have $w^T k \geq 0$. This means that the homogeneous variant of the linear program \eqref{p1} (i.e., we set $b=0$ in \eqref{p1}) is bounded (and feasible, as $0$ is feasible). Consequently, its dual program is feasible, and since the feasible set of \eqref{d1} does not depend on $b$, \eqref{d1} is feasible for arbitrary $b$. On the other hand, \eqref{p1} is feasible, since we assumed $S \neq \emptyset$. Altogether this implies that both \eqref{p1} and \eqref{d1} have optimal solutions denoted, respectively, by $\bar x$ and $\bar u$. Strong duality implies $w^TP\bar x = b^T \bar u$. Thus, \eqref{eq_sst} holds. We have $s^* \in H^*(P\bar x)$ because this can be written as $w(s^*)^T P \bar x = s^*_q$, where we have $w=w(t^*)=w(s^*)$. Together with Theorem \[th\_wgd\], we obtain that $H^*(P\bar x)$ is a supporting hyperplane of $\D^*$ that contains $s^*$. The remaining statements are now obvious. The following consequence of Proposition \[prop\_tst\] is useful to characterize the condition $t^* \in \Delta$. \[cor\_del\] Let the assumptions of Proposition \[prop\_tst\] be satisfied. Then, $w(y^*) \in (\T_\infty)^+$ for all $y^* \in \T^*$. Assuming the contrary, there are $\bar y^*\in \T^*$ and $k \in \T_\infty$ with $w(\bar y^*)^T k < 0$. Let $\bar y \in \T$. For sufficiently large $\lambda>0$, using \eqref{eq_phiw}, we obtain $\varphi(\bar y+\lambda k,\bar y^*)< 0$, which contradicts Proposition \[prop\_tst\]. 
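The identity \eqref{eq_phiw}, used in the proof above, is easy to verify numerically. The following sketch (with illustrative data; the helper names are ours) implements $\varphi$, $w(\cdot)$ and $w^*(\cdot)$ straight from their definitions and checks both equalities:

```python
def phi(y, ystar, c):
    """Coupling function: phi(y,y*) = sum_{i<q} y_i y*_i + y_q (1 - sum_{i<q} c_i y*_i) - y*_q."""
    q = len(y)
    return (sum(y[i] * ystar[i] for i in range(q - 1))
            + y[q - 1] * (1 - sum(c[i] * ystar[i] for i in range(q - 1)))
            - ystar[q - 1])

def w_of(ystar, c):
    """w(y*) = (y*_1, ..., y*_{q-1}, 1 - sum_{i<q} c_i y*_i)."""
    q = len(ystar)
    return list(ystar[:q - 1]) + [1 - sum(c[i] * ystar[i] for i in range(q - 1))]

def wstar_of(y, c):
    """w*(y) = (y_1 - y_q c_1, ..., y_{q-1} - y_q c_{q-1}, -1)."""
    q = len(y)
    return [y[i] - y[q - 1] * c[i] for i in range(q - 1)] + [-1.0]

def dot(a, b):
    return sum(x * z for x, z in zip(a, b))

# check phi(y,y*) = w(y*)^T y - y*_q = w*(y)^T y* + y_q on sample data
c = [0.5, 1.0]                 # c_q = 1, as required by (ass_c)
y, ystar = [2.0, 3.0], [0.25, 1.5]
val = phi(y, ystar, c)
assert abs(val - (dot(w_of(ystar, c), y) - ystar[-1])) < 1e-12
assert abs(val - (dot(wstar_of(y, c), ystar) + y[-1])) < 1e-12
print(val)
```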
The following dual algorithm has the same input and output as Algorithm 1. Similar functions are used. The function [*dual()*]{} computes a V-representation of an outer approximation $\T^*$ of $\D^*$ from a V-representation of an inner approximation $\T$ of $\P$. In contrast to Algorithm 1, it is not necessary that [*dual()*]{} returns a minimal V-representation. The recession cone of the sets $\T^*$ occurring in the algorithm is known; in fact, we always have $\T^*_\infty=-K=\R_+\cb{-e^q}$. Therefore we denote the return value of the function [*dual()*]{} by $(\T^*_{points},\sim)$, indicating that the second return value (the array containing the extreme directions of $\T^*$) is not used. The function [*solve()*]{} returns an optimal solution $x$ of (P$_1(w)$) and an optimal solution $u$ of (D$_1(w)$). Again, only one linear program has to be solved. ### Algorithm 2. {#algorithm-2. .unnumbered} Input:\ $B, b, P, Y$ (data of Problem \eqref{p});\ a solution $(\cb{0},\bar S^h)$ to \eqref{ph};\ a solution $\bar T^h$ to \eqref{dh}; Output:\ $(\bar S,\bar S^h)$ ... a solution to \eqref{p};\ $\bar T$ ... a solution to \eqref{d};\ $(\T_{points},\T_{dirs})$ ... a V-representation of $\P$;\ $(\T^*_{points},\cb{-e^q})$ ... 
a V-representation of $\D^*$; \ $\T_{dirs} \leftarrow \cb{Px \st x \in \bar S^h}\cup \cb{y \st y \text{ is a column of } Y}$;\ $\bar w \leftarrow \text{ mean} \cb{ w \st (u,w) \in \bar T^h}$;\ $\bar S \leftarrow \cb{\text{solve(P}_1\text{(}\bar w\text{))}}$;\ [**repeat**]{}\ $flag \leftarrow 0$;\ $\bar T \leftarrow \emptyset$;\ $\T_{points} \leftarrow \cb{Px \st x \in \bar S}$;\ $(\T^*_{points},\sim) \leftarrow \dual(\T_{points},\T_{dirs})$;\ [**for**]{} $i=1$ [**to**]{} $\abs{\T^*_{points}}$ [**do**]{}\ $t^* \leftarrow \T^*_{points}[i]$;\ $w \leftarrow w(t^*)$;\ $(x,u)\leftarrow$ solve(P$_1$($w$)/D$_1$($w$));\ [**if**]{} $t^*_q - b^T u > 0$ [**then**]{}\ $\bar S \leftarrow \bar S \cup \cb{x}$;\ $flag \leftarrow 1$;\ break; (optional)\ [**else**]{}\ $\bar T \leftarrow \bar T \cup \cb{(u,w)}$;\ [**end if**]{};\ [**end for**]{};\ [**until**]{} $flag = 0$;\ delete points $x \in \bar S$ whenever $Px$ is not a vertex of $\T$;\ The last line in the algorithm is easy to realize, for instance, by computing a minimal V-representation using the command $$(\T_{points},\T_{dirs}) \leftarrow \dual(\T^*_{points},\T^*_{dirs})$$ from Algorithm 1, i.e., by standard vertex enumeration methods. Then one has to test whether, for $x \in \bar S$, $Px$ belongs to $\T_{points}$; if not, $x$ is deleted from $\bar S$. In particular, it is not necessary to solve a linear program. \[corr\_fin\_2\] Let $S\neq \emptyset$ and assume that $\P^h$ has a vertex. Then, Algorithm 2 is correct and finite. By similar arguments as in the proof of Theorem \[corr\_fin\_1\] one can show that (P$_1$($\bar w$)) has a solution. The set $\T:=\conv \T_{points} + \cone \T_{dirs}$ is a subset of $\P$. Hence, by Theorem \[th\_wgd\], after calling the function [*dual()*]{}, $\T^*:=\conv \T^*_{points} + \cone \T^*_{dirs}$ is a superset of $\D^*$. Since $\T \neq \R^q$, $c \in \Int C$ and $\T_\infty \supseteq C$, we have $\T^*_\infty = \R_+ \cdot \cb{-e^q}$, i.e., we can set $\T^*_{dirs}=\cb{-e^q}$ and we know that $\T^*_{points} \neq \emptyset$. 
The array $(\cb{0},\T_{dirs})$ provides a V-representation of $\P^h$, i.e., $\T_\infty=\P^h$. Corollary \[cor\_del\] yields that $\T^*_{points} \subset \Delta$. Hence, by Proposition \[prop\_d1\], solutions to (P$_1$($w$)) and (D$_1$($w$)) exist. It can be easily shown that the vectors $(u,w) \in \bar T$ are maximizers of \eqref{d}, see also [@Loehne11book Lemma 4.51]. The algorithm terminates if $\T^*_{points} \subseteq \D^*$. Since $\D^*_\infty=\T^*_\infty=\R_+\cb{-e^q}$, we conclude $\D^*=\T^*$, i.e., $\bar T$ is a supremizer of \eqref{d} and $(\T^*_{points},\cb{-e^q})$ is a V-representation of $\D^*$. Since at termination $\T^*=\D^*$, Theorem \[th\_fsd\] implies $\T=\P$. Thus $(\T_{points},\T_{dirs})$ is a (not necessarily minimal) V-representation of $\P$. A solution $x$ to (P$_1$($w$)) is in general not a minimizer for $\eqref{p}$ (but only “weakly efficient”, compare e.g. [@Loehne11book Theorem 4.1]). Therefore, in the last line of the algorithm, $x$ is deleted from $\bar S$ whenever $Px$ is not a vertex of $\T$. According to Proposition \[prop\_2\], the remaining set $\bar S$ consists of only minimizers. It is non-empty because, by assumption, $\P^h$ has a vertex and hence $\T=\P$ must have a vertex. As non-vertex points are redundant in a V-representation of a set which has a vertex, the property of $\bar S$ being an infimizer for \eqref{p} is maintained by deleting the non-minimizers in $\bar S$. Finally we show that the algorithm terminates after finitely many steps. We consider the ‘repeat’ loop in iteration $k$. We set $w=w^k$ and $t^*={t^{*k}}$ and denote the solutions of (P$_1$($w^k$)) and (D$_1$($w^k$)) by $x^k$ and $u^k$, respectively. The point $s^{*k}:=t^{*k} + z^{*k} \cdot (-e^q)$, where $z^{*k}:=t^{*k}_q - b^T u^k$, belongs to $\T^{*(k-1)}\setminus \Max_K \T^{*(k-1)}$ whenever $z^{*k}>0$. 
We have $ \T^{*k} := \T^{*(k-1)} \cap \{ y^* \in \R^q \st \varphi(Px^k,y^*)\geq 0\}$ and by Proposition \[prop\_d1\] we know that $F^*:=\{ y^* \in \D^* \,\st\varphi\ofg{Px^k,y^*}= 0\}$ is a face of $\D^*$ with $s^{*k} \in F^*$. As in [@Loehne11book Lemma 4.48], we see that $F^*\subseteq \Max_K \T^{*k}$. This means for the next iteration that $s^{*(k+1)} \not \in F^*$ (because $s^{*(k+1)} \in \T^{*k}\setminus\Max_K \T^{*k}$), and therefore $s^{*(k+1)}$ belongs to another face of $\D^*$. Since $\D^*$ is polyhedral, it has a finite number of faces, hence the algorithm is finite.

Let us summarize the two-phase method for solving unbounded problems. We consider an arbitrary linear vector optimization problem, where we only assume that $C$ is a solid pointed polyhedral cone. We fix some $c$ according to , which is always possible in the way described after . In phase 1, we first try to compute some $\eta \in \Int(\D^* + K)$ with $\eta^T c = 1$. This can be realized by Algorithm 3 in [@Loehne11book Section 5.5], where the set $T$ has to be adapted to the more general setting of this article. Note that $c$ has a different meaning in [@Loehne11book Section 5.5]. The first LP solved by the mentioned algorithm is $$\min 0^T w + 0^T u \quad\text{ s.t. }\quad (u,w) \in T.$$ If this linear program is infeasible, then is infeasible. Otherwise one obtains either some $\eta \in \Int(\D^* +K)$ or the algorithm indicates that $\Int(\D^* +K)$ is empty. In the latter case, we know that $\P^h$ has no vertex. This means that $\P$, if non-empty, contains a line. This case has not been treated so far. Since $c_q=1$ according to , the condition $\eta^T c = 1$ can always be realized by an appropriate choice of $\eta_q$. Next, we solve by either Algorithm 1 or Algorithm 2. Since is bounded, a solution of the primal and dual homogeneous problem of can be easily obtained. However, this is not necessary as the $u$-components of $(u,w)\in \bar T^h$ are not used in Algorithms 1 and 2.
Therefore we can use $$\bar S^h = \emptyset \qquad \text{and}\qquad\bar T^h = \cb{\of{0,\frac{z}{z^T c}} \bigg|\; z \text{ is a column of } Z }$$ as an input of Algorithm 1 or 2 to solve , compare also [@Loehne11book Theorem 5.20]. Let $(\bar S_\eta,\bar S^h_{\eta})$ be a solution of $\eqref{p_eta}$ and let $\bar T_\eta$ be a solution of the dual problem of . Then, a solution of is obtained by setting $$(\bar S,\bar S^h) := (\cb{0},\bar S_\eta\smz),$$ compare [@Loehne11book Theorem 5.23]. A solution $\bar T^h$ of $\eqref{dh}$ can be obtained from $\bar T_{\eta}$, but again only the $w$-components are required by Algorithms 1 or 2 in phase 2. As a consequence of [@Loehne11book Theorem 5.25], we can use $$\bar T^h:=\cb{\of{0,w(y^*)}\st y^* \in \T^*_{points},\; y^*_q=0}$$ as an input of Algorithm 1 or 2 in the second phase, where $\T^*_{points}$ is the result from the first phase, i.e., $(\T^*_{points},\cb{-e^q})$ is a V-representation of the lower image $\D^{*\eta}$ of the dual problem of . In the second phase, the first LP to be solved in Algorithm 1 is . If turns out to be unbounded, we know that is infeasible. Likewise, if the first LP in Algorithm 2, namely (P$_1$($\bar w$)), is infeasible, we know that is infeasible. Otherwise, according to Theorems \[corr\_fin\_1\] and \[corr\_fin\_2\], solutions of and are computed. \[rem\_eps\] In practice the condition $z>0$ in Algorithm 1 is replaced by $z>\varepsilon$ for some $\varepsilon>0$. Assume that the results of the first phase are always exact.
Then, in the second phase, Algorithm 1 yields an $\varepsilon$-solution of in the sense that in Definition \[def\_sol\] the finite infimizer is replaced by a finite $\varepsilon$-infimizer, i.e., condition is replaced by $$\label{eq_inf_att_eps} \conv P[\bar S] + \cone P[\bar S^h] + C -\varepsilon\cb{c} \supseteq P[S] + C.$$ Taking into account that (using the assumption $c_q=1$ in ) $$\varphi(y-\varepsilon c,y^*)=\varphi(y,y^*+\varepsilon e^q),$$ we see that Algorithm 1 yields an $\varepsilon$-solution of in the sense that in Definition \[def\_sold\] a finite supremizer is replaced by a finite $\varepsilon$-supremizer, i.e., condition is replaced by $$\label{eq_sup_att_eps} \conv D^*[\bar T] - K +\varepsilon\cb{e^q} \supseteq D^*[T] -K.$$ Likewise, in Algorithm 2 the condition $t^*_q - b^T u > 0$ is replaced by $t^*_q - b^T u > \varepsilon$ for some $\varepsilon > 0$. Consequently, Algorithm 2 yields an $\varepsilon$-solution of . It also yields an $\varepsilon$-infimizer of , but in general not an $\varepsilon$-solution of . The reason is that the last line in Algorithm 2 only works for the exact algorithm. Note further that an $\varepsilon$-solution $(\bar S,\bar S^h)$ of refers to an inner and an outer approximation of the upper image $\P$ in the sense that $$\conv P[\bar S] + \cone P[\bar S^h] + C \subseteq \P \subseteq \conv P[\bar S] + \cone P[\bar S^h] + C -\varepsilon\cb{c}.$$ Likewise, an $\varepsilon$-solution $\bar T$ of refers to an inner and an outer approximation of the lower image $\D^*$ in the sense that $$\conv D^*[\bar T] - K \subseteq \D^* \subseteq \conv D^*[\bar T] - K +\varepsilon\cb{e^q}.$$ Note that the approximation error of the classical variant of Benson’s algorithm and its dual variant has been studied in [@ShaEhr08; @ShaEhr08-1]. 
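The exact deletion step in the last line of Algorithm 2 (keep $x \in \bar S$ only if $Px$ is a vertex of $\T$), which the remark above points out only works in the exact setting, amounts to a simple membership test once the vertex list of $\T$ is available. A minimal pure-Python sketch, with illustrative data and a numerical tolerance of our own choosing:

```python
# Sketch of the cleanup step of Algorithm 2: keep x in S_bar only if P x
# is a vertex of T. The vertex list T_points is assumed to come from a
# vertex enumeration; the tolerance tol is an illustrative assumption.

def matvec(P, x):
    # multiply a matrix (list of rows) with a vector
    return [sum(p * v for p, v in zip(row, x)) for row in P]

def close(y, z, tol=1e-9):
    return all(abs(a - b) <= tol for a, b in zip(y, z))

def filter_minimizers(S_bar, P, T_points, tol=1e-9):
    # delete x from S_bar whenever P x is not a vertex of T
    return [x for x in S_bar
            if any(close(matvec(P, x), t, tol) for t in T_points)]

# toy data: P projects R^3 onto the first two coordinates
P = [[1, 0, 0], [0, 1, 0]]
S_bar = [[0, 0, 5], [1, 2, 0], [3, 3, 1]]
T_points = [[0, 0], [1, 2]]          # vertices of T (assumed known)
assert filter_minimizers(S_bar, P, T_points) == [[0, 0, 5], [1, 2, 0]]
```

In floating-point arithmetic this membership test inherits the inexactness discussed above, which is one way to see why the deletion step is only valid for the exact algorithm.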
Computation of polyhedral set-valued risk measures
==================================================

Set-valued risk measures evaluate the risk of multi-variate random portfolios $X \colon \Omega \to \R^d$, the components $X_i\of{\omega}$ of which represent the number of units of the $i$-th asset in the portfolio, $i = 1, \ldots, d$. If transaction costs are present, such risk measures are more appropriate than real-valued functions, which always represent a complete risk preference and thus cannot account for incomparable portfolios. The theory of set-valued risk measures was initiated in [@JMT04] and systematically developed in [@HamHey10] and [@HamHeyRud11]. We refer the reader to these references for further motivation and information. Here, we restrict ourselves to the case of finite probability spaces and the question of how the values of a set-valued risk measure can be computed. It will turn out that this leads to problems of type (P); hence one can apply the algorithm presented in Section \[sec\_Bensonalgo\]. The basic idea is as follows. The value of a set-valued risk measure at some random future portfolio $X$ consists of initial deterministic portfolios $u$ which can be given as deposits for the ‘risky payoff’ $X$, thus making the overall position ‘risky payoff plus deposit’ a non-risky one. Usually it is not possible to use all assets as deposits, but rather only a small subset including cash in a few currencies, bonds, gold or similar risk-free or low-risk assets. These ‘eligible’ assets are assumed to span the linear subspace $M$ of $\R^d$ with $1 \leq \dim M = m \leq d$. A typical example, already used in [@JMT04], is $\R^m \times \cb{0}^{d-m}$, i.e. the first $m$ assets are eligible. Let $\of{\Omega, P}$ be a finite probability space and $N \geq 2$ be the number of elements in $\Omega = \cb{\omega_1, \ldots, \omega_N}$. We assume $p_n = P\of{\cb{\omega_n}} > 0$ for all $n \in \cb{1, \ldots, N}$.
The space of all multi-variate random variables $X \colon \Omega \to \R^d$ is denoted by $L^0_d\of{\Omega, P}$. A random variable $X \in L^0_d\of{\Omega, P}$ can be identified with an element $\hat x \in \R^{dN}$ through $$\hat{x} = \of{X_1\of{\omega_1}, \ldots, X_d\of{\omega_1}, X_1\of{\omega_2}, \ldots, X_d\of{\omega_N}}^T \in \R^{dN}$$ and vice versa. Thus, the function $T \colon L^0_d\of{\Omega, P} \to \R^{dN}$ defined by $TX = \hat x$ is a linear bijection. If $A \subseteq L^0_d\of{\Omega, P}$ then we set $\hat A = \cb{\hat x \in \R^{dN} \mid \hat x = TX, \; X \in A}$. Let $K_0 \subseteq \R^d$ be a finitely generated convex cone satisfying $\R^d_+ \subseteq K_0 \neq \R^d$. Such a ‘solvency’ cone models the market at initial time. We set $K_0 ^M = K_0 \cap M$ and $\mathcal P\of{M, K_0^M} = \cb{D \subseteq M \mid D = D + K_0^M}$. A risk measure is a function $R \colon L^0_d\of{\Omega, P} \to \mathcal P\of{M, K_0^M}$ satisfying $$\label{EqMTranslative} \forall u \in M, \; \forall X \in L^0_d\of{\Omega, P} \colon R\of{X + u{\mathrm{1\negthickspace I}}} = R\of{X} - u$$ where ${\mathrm{1\negthickspace I}}\colon \Omega \to \R$ with ${\mathrm{1\negthickspace I}}\of{\omega} = 1$ for all $\omega \in \Omega$ is the uni-variate random variable with constant value 1. With $R$, we associate a risk measure $\hat R \colon \R^{dN} \to \mathcal P\of{M, K_0^M}$ by means of $\hat R\of{\hat x} = R\of{X}$ for $\hat x = TX$. 
Consequently, $\hat R$ satisfies $$\label{EqMTranslativeDiscrete} \forall u \in M, \; \forall \hat x \in \R^{dN} \colon \hat R\of{\hat x + \hat I_d u} =\hat R\of{\hat x} - u$$ where $$\hat{I}_d = \left( \begin{array}{c} I_d \\ \vdots \\ I_d \end{array} \right) \in \R^{dN \times d} \quad \text{and} \quad I_d = \left( \begin{array}{cccc} 1 & 0 & \ldots & 0 \\ 0 & 1 & \ldots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & \ldots & 0 & 1 \\ \end{array} \right) \in \R^{d \times d}.$$ The most common way to generate a risk measure is by means of a set $A \subseteq L^0_d\of{\Omega, P}$ of random variables which are considered to be ‘acceptable’ by the decision maker. The value of a risk measure generated by $A$ then consists of all deterministic (available at time $0$) portfolios $u \in M \subseteq \R^d$ which, when added to the uncertain future position $X$, make the overall position acceptable. Thus, $$R_A\of{X} = \cb{u \in M \mid X + u{\mathrm{1\negthickspace I}}\in A}.$$ This function indeed satisfies . Correspondingly, $$R_{\hat A}\of{\hat x} = \cb{u \in M \mid \hat x + \hat I_d u \in \hat A}$$ satisfies , and we have $R_{\hat A} = \widehat{R_A}$. Vice versa, with risk measures $R \colon L^0_d\of{\Omega, P} \to \mathcal P\of{M, K_0^M}$ and $\hat R \colon \R^{dN} \to \mathcal P\of{M, K_0^M}$ we associate the sets $$A_R = \cb{X \in L^0_d\of{\Omega, P} \mid 0 \in R\of{X}} \; \text{and} \; A_{\hat R} = \cb{\hat x \in \R^{dN} \mid 0 \in \hat R\of{\hat x}},$$ respectively. A basic fact about risk measures is a one-to-one correspondence between closed acceptance sets $A \subseteq L^0_d\of{\Omega, P}$ which satisfy $A + K_0^M{\mathrm{1\negthickspace I}}\subseteq A$ and risk measures $R \colon L^0_d\of{\Omega, P} \to \mathcal P\of{M, K_0^M}$ with a closed graph by means of the above formulas. In particular, the relationships $\hat A = A _{R_{\hat A}}$ and $\hat R = R_{A_{\hat R}}$ hold true. See [@HamHeyRud11] for further details.
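As a small sanity check of the identification $\hat x = TX$ and of the role of $\hat I_d$, the following pure-Python sketch (the toy dimensions $d=2$, $N=3$ are our own choice) flattens a random variable in the component order given above and verifies that adding $\hat I_d u$ shifts every scenario by the same vector $u$:

```python
# Sketch: the linear bijection T (flattening in the order
# X_1(w_1),...,X_d(w_1),X_1(w_2),...,X_d(w_N)) and the action of the
# stacked identity hat_I_d. Toy dimensions are our own choice.

def vectorize(X):
    # T: list of N scenario vectors of length d -> vector in R^{dN}
    return [x_i for scenario in X for x_i in scenario]

def devectorize(hat_x, d):
    # inverse of T, showing that T is a bijection
    return [hat_x[n * d:(n + 1) * d] for n in range(len(hat_x) // d)]

def add_deposit(hat_x, u):
    # hat_x + hat_I_d u: the deposit u is added in every scenario
    d = len(u)
    return [v + u[i % d] for i, v in enumerate(hat_x)]

X = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]   # d = 2 assets, N = 3 scenarios
hat_x = vectorize(X)
assert hat_x == [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
assert devectorize(hat_x, 2) == X
assert add_deposit(hat_x, [10.0, 0.0]) == [11.0, 2.0, 13.0, 4.0, 15.0, 6.0]
```

The last assertion is exactly the translation property above: a deposit $u$ made once at time $0$ acts uniformly across all scenarios.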
A risk measure $R$ is called polyhedral if the associated risk measure $\hat R$ is polyhedral, i.e., if $$\gr \hat R = \cb{\of{\hat x, u} \in \R^{dN} \times M \mid u \in \hat R\of{\hat x}}$$ is a polyhedral subset of $\R^{dN} \times \R^d$. The one-to-one correspondence between risk measures and their acceptance sets extends to the polyhedral case: $\hat A \subseteq \R^{dN}$ is polyhedral if, and only if, $R_{\hat A}$ is polyhedral, and $\hat R$ is polyhedral if and only if $A_{\hat R}$ is polyhedral. The above discussion leads to the following conclusion.

\[RemPRMvsBenson\] Since each polyhedral risk measure $\hat R$ has the representation $$\hat R\of{\hat x} = \cb{u \in M \mid \hat x + \hat I_d u \in A_{\hat R}}$$ where $A_{\hat R}$ is a polyhedral set, the set $\hat R\of{\hat x}$ is the upper image of a linear vector optimization problem. Indeed, if $A_{\hat R} \subseteq \R^{dN}$ has the H-representation $A_{\hat R} = \cb{\hat y \in \R^{dN} \mid \hat B \hat y \geq \hat b}$ where $\hat B$ is a matrix and $\hat b$ a vector of appropriate dimensions then $$\hat R\of{\hat x} = \cb{u \in M \mid \hat B\of{\hat x + \hat I_d u} \geq \hat b} = \cb{u \in M \mid \hat B \hat I_d u \geq \hat b - \hat B\hat x}.$$ Let $P \in \R^{d \times m}$ be a matrix with column vectors $\mu^1, \ldots, \mu^m$ forming a basis of $M$ and define $B = \hat B \hat I_dP$, $b = \hat b - \hat B\hat x$. Then, observing that $\hat R\of{\hat x} + K_0^M = \hat R\of{\hat x}$ and substituting $u = Pz$ we obtain that $\hat R\of{\hat x}$ is the upper image of the problem $$\text{ minimize } P \colon \R^m \to M \text{ with respect to } \le_{K_0^M} \text{ subject to } Bz \geq b.$$ However, this is mostly a theoretical result since, in practice, things are not as straightforward: usually, the constraints describing $\hat A$ involve a large number of auxiliary variables, and $u$ is given as a linear function of those (see Example \[exAVARtheory\] below).
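The matrix manipulations in the remark above can be sketched directly; the toy data below ($d=N=2$, $m=1$, and the particular $\hat B$, $\hat b$, $P$, $\hat x$) are our own illustrative assumptions:

```python
# Sketch of assembling the vector LP data from the remark:
# B = hat_B hat_I_d P and b = hat_b - hat_B hat_x. All inputs are toy
# assumptions; pure-Python matrix helpers keep the example self-contained.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def matvec(A, x):
    return [sum(a * v for a, v in zip(row, x)) for row in A]

d, N, m = 2, 2, 1
hat_I = [[1, 0], [0, 1], [1, 0], [0, 1]]   # N copies of I_d stacked
P = [[1], [0]]                             # basis of M = R x {0}
hat_B = [[1, 0, 0, 0], [0, 0, 1, 0]]       # toy acceptance set: first
hat_b = [0.0, 0.0]                         # component >= 0 in each scenario
hat_x = [-1.0, 5.0, 2.0, 7.0]

B = matmul(matmul(hat_B, hat_I), P)
b = [hb - hx for hb, hx in zip(hat_b, matvec(hat_B, hat_x))]
assert B == [[1], [1]] and b == [1.0, -2.0]
# reading: u = P z is acceptable iff z >= 1, i.e. a deposit of 1 unit of
# the first asset repairs the worst scenario
```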
Therefore, the algorithm presented in Section \[sec\_Bensonalgo\] is an appropriate tool to compute the values of a polyhedral set-valued risk measure because the dimension of the pre-image space usually is much greater than the dimension of the image space, which is $\dim M = m \leq d$. Compare Examples \[ex\_WCtheory\] and \[exAVARtheory\] below. In the following, we will discuss two examples which will be used for the numerical computations reported in Section \[section\_numeric\].

\[ex\_WCtheory\] For the worst case risk measure, the regulator/decision maker accepts only positions with non-negative components. Thus, the acceptance set is $A = \of{L^0_d}_+$ which is the set of all component-wise non-negative random variables. The market extension of the worst case risk measure, i.e. when trading is allowed, is related to the set of superhedging portfolios, see [@LoeRud11]. Its acceptance set in a one-period market is $A = \of{L^0_d}_+ +K_0{\mathrm{1\negthickspace I}}+L^0_d\of{K_T}$ where the cone $K_0$ and the random cone $K_T$ model market conditions with a potential bid-ask price spread at initial and terminal time, respectively, and $L^0_d\of{K_T} = \cb{X \in L^0_d \mid \forall \omega \in \Omega \colon X\of{\omega} \in K_T\of{\omega}}$. The cones $K_T\of{\omega}$ are also finitely generated convex cones satisfying $\R^d_+ \subseteq K_T\of{\omega} \neq \R^d$ for all $\omega \in \Omega$. The market extension of the worst case risk measure is still very conservative since a payoff $X$ is acceptable only if there is a trading strategy such that its result, when added to $X$, is non-negative in all components in all possible scenarios, even those with a very small probability. Therefore, we introduce a ‘relaxed’ variant as follows.
Let $G \subseteq \R^d$ be a finitely generated convex cone with $\R^d_+ \subseteq G \neq \R^d$ and consider the following acceptance set $$A^{RWC} = \cb{X \in L^0_d\of{\Omega, P} \mid \forall \omega \in \Omega \colon X\of{\omega} \in \of{-\epsilon + \R^d_+} \cap G}+K_0{\mathrm{1\negthickspace I}}+L^0_d\of{K_T}$$ where $\epsilon \in \R^d$ such that $\epsilon_i \geq 0$ for all $i \in \cb{1, \ldots, d}$. Compared to the ‘true’ worst case risk measure, the set $\of{L^0_d}_+$ is replaced by a slightly bigger set. Thus, payoffs with ‘small’ negative components may still be considered acceptable, and the size of the risk related to such payoffs is controlled by $\epsilon$ and $G$. The cone $G$ may serve as a conservative estimate of a market model which the regulator/supervisor thinks is robust enough to cover most market scenarios. For example, $G$ can be chosen such that $P(G\subseteq K_T)\geq 1-\alpha$ for some significance level $\alpha\in[0,1]$. Then, the probability of a loss is bounded by $\alpha$, and a potential loss (in physical units) is bounded by $\epsilon$. Note that in the scalar case $d=m=1$, the relaxed worst case risk measure reduces to the scalar worst case risk measure. The market extension of the relaxed worst case risk measure is given by $$\begin{aligned} \label{WCMar} RWC\of{X} \nonumber = & \big\{u \in M \mid z \in K_0, \; Z \in L^0_d\of{K_T}, \; \forall \omega \in \Omega \colon \\ & \;\;X\of{\omega} -z - Z\of{\omega} + u \in \of{-\epsilon + \R^d_+} \cap G\big\}\end{aligned}$$ and can be seen as a relaxation of the superhedging set and thus as a good deal price bound of $-X$, see Example \[ex45\] for details. It is polyhedral (convex) as $A^{RWC}$ is polyhedral (convex), but not sublinear. This is a new feature since the classical worst case risk measure is always sublinear.
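The pointwise condition $X\of{\omega} \in \of{-\epsilon + \R^d_+} \cap G$ in the definition of $A^{RWC}$ (ignoring the market part $K_0{\mathrm{1\negthickspace I}}+L^0_d\of{K_T}$) is easy to test scenario by scenario. The sketch below assumes, purely for illustration, that $G$ is given in H-representation $G=\cb{x \st Ax \geq 0}$, whereas the text works with generators:

```python
# Sketch: pointwise part of the relaxed worst case acceptability test.
# Assumption (ours): G is given by an H-representation G = {x : A x >= 0}.

def pointwise_acceptable(X, eps, A):
    def in_G(x):
        return all(sum(a * v for a, v in zip(row, x)) >= 0 for row in A)
    def above_eps(x):
        return all(x_i >= -e_i for x_i, e_i in zip(x, eps))
    return all(above_eps(x) and in_G(x) for x in X)

# toy G = {x : x_1 + x_2 >= 0, x_2 >= 0}, a cone containing R^2_+
A = [[1, 1], [0, 1]]
eps = [0.5, 0.0]
assert pointwise_acceptable([[-0.4, 1.0], [0.0, 2.0]], eps, A)   # small loss ok
assert not pointwise_acceptable([[-0.6, 1.0]], eps, A)           # exceeds eps
```

With $\epsilon = 0$ and $G = \R^d_+$ this reduces to the component-wise non-negativity test of the plain worst case acceptance set.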
In order to describe $RWC\of{\hat x}$, let $g^1, \ldots, g^L$ be the generating vectors of the cone $G$ and let $\hat G$ be the $dN \times LN$ matrix which contains $N$ blocks on its diagonal, where each block consists of the matrix with $g^1, \ldots, g^L$ as columns. Then $$\forall n \in \cb{1, \ldots, N} \colon X\of{\omega_n} \in G \quad \Leftrightarrow \quad \exists \gamma \in \R^{LN}_+ \colon \hat x = \hat G \gamma.$$ Similarly, let $\hat K_T$ be the $dN \times J$ matrix which contains $N$ blocks on its diagonal, where the first block consists of the matrix with the generating vectors of $K_T(\omega_1)$ as columns, the second block contains the generating vectors of $K_T(\omega_2)$ and so forth to the last block with the generating vectors of $K_T(\omega_N)$. $J$ is the sum of the number of generating vectors of all $K_T$’s. Let $\hat K_0$ denote the matrix containing the $I$ generating vectors of $K_0$ as columns. Then, $$\begin{aligned} RWC\of{\hat x} = \{&Pz \mid \hat x + \hat I_d\of{Pz + \epsilon-\hat K_0r}- \hat K_Ts\in \R^{dN}_+,\\ & \hat I_d\of{ Pz-\hat K_0r}- \hat K_Ts - \hat G \gamma = -\hat x, z \in \R^m, \; \gamma \in \R^{LN}_+, \; r\in \R^{I}_+, \; s \in \R^{J}_+\},\end{aligned}$$ with ordering cone $K_0^M$. Thus, the dimension of the pre-image space is $m + LN+I+J$ whereas the dimension of the image space is just $m$. \[exAVARtheory\] The following set-valued function is a generalization of the scalar average value at risk (see [@FS11 p. 210]) which is probably the most important and most studied example of a sublinear coherent measure of risk as introduced by [@ADEH99]. Let $\alpha \in (0,1]^d$. 
Define for $X \in L^0_d$ $$\begin{aligned} \label{AVaRMar} AV@R_\alpha\of{X} \nonumber = &\big\{{{\rm diag}}\of{\alpha}^{-1}E\sqb{Z} - z \mid Z \in \of{L^0_d}_+, \\ &\;\; X + Z - z{\mathrm{1\negthickspace I}}\in K_0{\mathrm{1\negthickspace I}}+ L^0_d\of{K_T}, \; z \in \R^d\big\} \cap M\end{aligned}$$ where ${{\rm diag}}\of{\alpha}^{-1}$ is the inverse of the diagonal matrix with the components of $\alpha$ on its main diagonal and zero elsewhere, and the cones $K_0, K_T\of{\omega}$ modeling the market conditions are as described above. Therefore this risk measure is also called the market extension of a simpler ‘regulator’ version, see [@HamelRudloffYankova12]. We also refer to this paper for further motivation, interpretation and more details. It is immediately clear that $AV@R_\alpha$ is not given in the form of $R_A$ above, but it is a polyhedral convex (even sublinear) risk measure. Its ‘hat’ variant can be derived as follows. Replace $Z$ by $\hat z \in \R^{dN}$ and introduce auxiliary variables which allow one to write $\hat x + \hat z - \hat I_d z$ as a non-negative linear combination of the generating vectors of the cones $K_0$ and $K_T\of{\omega_n}$. Transform the objective into matrix form to get $$AV@R_\alpha\of{\hat x} = \cb{Px \mid Bx \geq b} + K_0^M,$$ with appropriate matrices $B$, $P$ and a vector $b$, where $K_0^M = K_0 \cap M$ as before. The dimension of the pre-image space is $d(N+1)+I+J$ whereas the dimension of the image space is just $m$. It is worth mentioning that in the scalar case (i.e., without transaction costs) Rockafellar and Uryasev observed that the AV@R can be computed by solving a linear optimization problem, see [@RockafellarUryasev00].

Numerical examples {#section_numeric}
==================

The algorithms have been implemented in MATLAB using the GNU Linear Programming Kit (GLPK) to solve the LPs and the CDDLIB package [@BreFukMar98] for vertex enumeration. The graphics have been generated by JavaView[^3] and OpenOffice (Figure \[fig\_5\]).
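As a warm-up for the computations below, the scalar representation of Rockafellar and Uryasev mentioned at the end of the preceding section can be checked in a few lines. On a finite probability space the infimum in $AV@R_\alpha(X)=\inf_z \cb{z + \tfrac{1}{\alpha} E[(-X-z)^+]}$ is attained at $z=-x_n$ for one of the realizations $x_n$, so a brute-force scan (our simplification; the text refers to an LP formulation) suffices:

```python
# Sanity check of the scalar (d = 1, no transaction costs) AV@R via the
# Rockafellar-Uryasev representation; brute force instead of an LP. The
# objective is piecewise linear and convex in z with breakpoints at the
# points z = -x_n, so the minimum is attained at one of them.

def avar(values, probs, alpha):
    # values/probs: realizations x_n and weights p_n of X, alpha in (0, 1]
    def objective(z):
        return z + sum(p * max(-x - z, 0.0)
                       for x, p in zip(values, probs)) / alpha
    return min(objective(-x) for x in values)

values, probs = [1.0, -1.0], [0.5, 0.5]
assert avar(values, probs, 1.0) == 0.0   # alpha = 1 gives E[-X]
assert avar(values, probs, 0.5) == 1.0   # mean of the worst 50% of losses
```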
By a straightforward extension of the above results we can also solve linear vector optimization problems with constraints of the form $$\label{eq_constr} a \leq Bx \leq b \qquad lb\leq x \leq ub,$$ where the components of $a,b,lb,ub$ belong to $\R \cup\cb{-\infty,+\infty}$. All examples were computed on a MacBook Pro with 2.26 GHz clock and 8 GB memory. We made use of the fact that all the LPs have a very similar form. This means that the matrix $B$ does not need to be changed during the algorithm (except for one line, because $\eta$ is not yet known at the beginning). This allows us to initialize LPs with appropriate basis solutions of LPs solved in previous steps (warm starts). In the following examples we provide tables with a few computational data, such as the total time and the number of LPs solved (\# LPs). Note that we compute an $\varepsilon$-solution $(\bar S,\bar S^h)$ of and an $\varepsilon$-solution $\bar T$ of , compare Remark \[rem\_eps\]. We provide the cardinality $|\,.\,|$ of the sets $\bar S$, $\bar S^h$ and $\bar T$. Recall that we have $|\bar S^h|=0$ whenever the problem is bounded. Note that $|\bar S|$ and $|\bar T|$ ‘correlate’ with the number of, respectively, vertices and facets of $\P$ (but do not need to coincide exactly). One reason for possible differences is degeneracy as discussed in [@Loehne11book Section 5.6], another one is numerical inaccuracy. Further we denote by $t_{max}$ the maximum time used to solve one LP and by $t_{aver}$ the average time to solve one LP. The quotient $t_{max}/t_{aver}$ indicates the impact of using warm starts. We start with two numerical examples from the literature.
![Illustration of the upper image $\P$ in Example \[ex07\]; top left: primal algorithm, variant ‘break’, $\varepsilon=0.3$; top right: primal algorithm, variant ‘no break’, $\varepsilon=0.3$; bottom left: dual algorithm, variant ‘break’, $\varepsilon=0.3$; bottom right: primal algorithm, variant ‘break’, $\varepsilon=0.05$.[]{data-label="fig1"}](ex07p1.pdf "fig:") ![](ex07p5.pdf "fig:") ![](ex07d1.pdf "fig:") ![](ex07p7.pdf "fig:")

\[ex07\] Shao and Ehrgott [@ShaEhr08] used extended variants of Benson’s algorithm to solve linear vector optimization problems occurring in radiotherapy treatment planning. We compute Example (PL) in [@ShaEhr08], which has three objectives and a matrix $B$ of size $1211 \times 1143$ with $153\,930$ nonzeros. The ordering cone is $C=\R^3_+$, and the problem is known to be bounded, which means that the first phase of our algorithms as well as the computation of $\eta$ can be skipped. Further we set $c=(1,1,1)^T$.
The following table shows some results obtained by Algorithm 1. The second column in the table concerns the optional break command in the algorithm. One can observe that more LPs have to be solved when the break command is disabled. On the other hand, fewer vertex enumerations are required. This explains why the variant ‘no break’ becomes faster than the ‘break’ variant when $\varepsilon>0$ is small enough.

  $\varepsilon$   variant    total time   $|\bar S|$   $|\bar T|$   \# LPs   $t_{max}$   $t_{max}/t_{aver}$
  --------------- ---------- ------------ ------------ ------------ -------- ----------- --------------------
  $0.3$           break      47 secs      46           29           75       0.84 secs   1.8
  $0.1$           break      91 secs      104          61           165      0.87 secs   2.0
  $0.05$          break      144 secs     176          94           270      0.86 secs   2.0
  $0.005$         break      1596 secs    1456         597          2053     0.84 secs   1.9
  $0.3$           no break   54 secs      54           34           88       0.85 secs   1.8
  $0.1$           no break   114 secs     134          78           212      0.84 secs   1.9
  $0.05$          no break   205 secs     264          129          393      0.85 secs   1.9
  $0.005$         no break   1411 secs    1945         804          2749     0.84 secs   1.9

Although we need less computational time than in [@ShaEhr08], it is difficult to compare the results as we use a faster computer, a different (open source) LP solver, and we utilize warm starts. Moreover, in [@ShaEhr08] an online vertex enumeration method is used, which is preferable if the number of vertices and facets of $\P$ is large. Furthermore, our method yields the same approximation error as in [@ShaEhr08] with fewer vertices and facets of $\P$. See Figure \[fig1\] for an illustration of part of the results.
![Illustration of the upper image $\P$ in Example \[ex06\] for $\varepsilon/\norm{c}=10^{-4}$ (inner and outer approximation, left) and for $\varepsilon/\norm{c}=10^{-6}$ (right).[]{data-label="fig_2"}](ex06p2.pdf "fig:") ![](ex06p4.pdf "fig:")

\[ex06\] Ruszczyński and Vanderbei [@RusVan03] developed a specialized parametric method for computing all minimizers of bi-criteria problems. Using intermediate results of the parametric simplex method, they solved in [@RusVan03], for instance, a mean-risk model with a dense matrix $B$ of size $6161 \times 3799$ having $4\,435\,919$ nonzero entries. They pointed out that computing all the 5017 minimizers takes only a little more time than solving one single LP. As this problem is known to be bounded, we can skip in our algorithms the first phase as well as the computation of $\eta$. For $c=(1,1)^T$, our primal algorithm yields approximate solutions as shown in the following table.

  $\varepsilon$             total time   $|\bar S|$   $|\bar T|$   \# LPs   $t_{max}$   $t_{max}/t_{aver}$
  ------------------------- ------------ ------------ ------------ -------- ----------- --------------------
  $\sqrt{2}\cdot 10^{-4}$   946 secs     6            7            13       347 secs    4.4
  $\sqrt{2}\cdot 10^{-5}$   1648 secs    22           23           45       304 secs    8.9
  $\sqrt{2}\cdot 10^{-6}$   3085 secs    62           63           125      310 secs    14.1

We see that a ‘good’ approximation with $\varepsilon/\norm{c}=10^{-6}$ can be obtained in about ten times the time required to solve a single LP. This means our general method needs much more time for an $\varepsilon$-solution (compare Remark \[rem\_eps\]) than the parametric method for bounded bi-criteria problems in [@RusVan03] needs for the exact solution. On the one hand, approximate solutions are often sufficient for a decision maker in practice, compare Figure \[fig\_2\].
On the other hand, we think that the ideas of the algorithm by Ruszczyński and Vanderbei are promising for further improvements of Benson type algorithms for arbitrary linear vector optimization problems. The following three numerical examples refer to Example \[exAVARtheory\] in the previous section.

![Illustration of the upper image $\P$ (left) and the lower image $\D^*$ (right) in Example \[AVAR1\] for $\varepsilon=10^{-4}$.[]{data-label="fig_3"}](ex26p1.pdf "fig:") ![](ex26d1.pdf "fig:")

\[AVAR1\] Let us consider $d=12$ assets, the first one being a risk-free USD bond with annual interest rate $5\%$. Given are the vector of today’s asset prices, the vector of the expected returns and a covariance matrix for the other $11$ correlated risky assets denoted in USD. Then, one can set up a one-period tree for the asset prices $S_T$ with time horizon $T=1$ year as in [@KorMue09] to reflect the drift and covariance structure. The resulting number of scenarios is $N=2^{d-1}=2048$. We consider proportional transaction costs for the bond to be $\lambda_0=3\%$, for the first risky asset (usually another currency) to be $\lambda_1=7\%$, for the second risky asset to be $\lambda_2=5\%$ and for all other risky assets to be $1\%$. Then, the bid and ask prices of the assets are $(S_t^a)_i=(1+\lambda_i)(S_t)_i$ and $(S_t^b)_i=(1-\lambda_i)(S_t)_i$ for $i=0,\dots,11$ and $t\in\{0,T\}$. Furthermore, let us assume that an exchange between any two risky assets cannot be made directly, only via cash in USD by selling one asset and buying the other. Since the risk-free bond has strictly positive transaction costs $\lambda_0$, the cones $K_0$ and $K_T(\omega_n)$ for $n=1,\dots,N$ have $d(d-1)=132$ generating vectors each. Thus $I=132$ and $J=270\,336$. We want to evaluate the risk of an outperformance option with physical delivery and maturity $T$.
This option gives the right to buy the asset that performed best out of a basket of assets at a given strike price. Let the strike be $K=(1+\lambda_1)(S_0)_1$. To normalize to today’s prices, let a vector $g$ be defined by $(S_0^a)_1=g_i(S_0^a)_i$ for $i\in\{1,\dots,11\}$. The payoff $X$ of the option is $-K$ in the risk-free asset, $g_i$ units of asset $i$ for the smallest $i$ satisfying $g_i(S_T^a)_i=\max_{j\in\{1,...,11\}}(g_j(S_T^a)_j)\geq K$ and zero in the other assets. If $\max_{j\in\{1,...,11\}}(g_j(S_T^a)_j)< K$, the payoff is the zero vector. Let us calculate $AV@R_\alpha(X)$ as described in Example \[exAVARtheory\] with significance levels $$\alpha=(0.1, 0.08, 0.09, 0.1, 0.05, 0.05, 0.04, 0.05, 0.03, 0.04, 0.03, 0.04 )^T.$$ As the space of eligible assets we choose the space spanned by the first and the second asset, i.e. $M = \R^2 \times \{0\}^{10}$. Formula leads to a linear vector optimization problem with 2 objectives and constraints of the form where the matrix $B$ is of size $24\,586 \times 295\,056$. $B$ is sparse, having $1\,150\,986$ nonzero entries. The ordering cone is $K_0^M$, which is strictly larger than $\R^2_+$ and is generated by $2$ vectors. The vertices of $AV@R_\alpha(X)$ are minimal deposits in the bond and the second asset that compensate for the risk of $X$ measured by $AV@R_\alpha$. The following table shows some computational data of the primal algorithm.

  $\varepsilon$   total time   $|\bar S|$   $|\bar S^h|$   $|\bar T|$   \# LPs   $t_{max}$   $t_{max}/t_{aver}$
  --------------- ------------ ------------ -------------- ------------ -------- ----------- --------------------
  $10^{-4}$       3529 secs    20           1              21           46       592 secs    8.4
  $10^{-5}$       4716 secs    47           1              48           100      671 secs    17.1
  $10^{-6}$       7905 secs    122          1              123          253      449 secs    22.0

We can see that the problem is unbounded. In Figure \[fig\_3\] the upper image $\P$ and the lower image $\D^*$ for $\varepsilon=10^{-4}$ are shown.
We observe that the vertices of $\P$ are almost on a line and that the lower image $\D^*$ is more suitable to illustrate the example.

![The lower image $\D^*$ in Example \[ex39\] for $\varepsilon=10^{-3}$, two different view points.[]{data-label="fig_4"}](ex39d02.pdf "fig:") ![](ex39d02a.pdf "fig:")

\[AVAR2\]\[ex39\] Now consider $d=11$ assets with a given correlation structure and all other input parameters as for the first $11$ assets in Example \[AVAR1\] above. We have $N=2^{d-1}=1024$ and the number of generating vectors of each cone $K_0$ and $K_T(\omega_n)$ for $n=1,\dots,N$ is $d(d-1)=110$. Consider a basket call option with physical delivery and strike price $K=\sum_{i=1}^{10} (S_0)_i$. If at maturity ($T=1$ year) the value of the basket of risky assets is greater than or equal to the strike, i.e., $\sum_{i=1}^{10} (S_T)_i\geq K$, then one would exercise the option and buy the $10$ risky assets at strike $K$ by delivering $K$ times the bond, i.e. $X=(-K,1,\dots,1)^T$ in this case. If the value is less, the payoff vector is the zero vector. As the space of eligible assets we choose the space spanned by the first three assets, i.e. $M = \R^3 \times \{0\}^{8}$. The ordering cone is $K_0^M$, which is strictly larger than $\R^3_+$ and generated by $6$ vectors. The linear vector optimization problem has 3 objectives and a matrix of size $11\,272\times124\,025$, which is sparse, having $481\,288$ nonzero entries. For $\varepsilon=10^{-3}$, the computational time of the primal algorithm was 1748 seconds. The result is illustrated in Figure \[fig\_4\]. As the upper image $\P$ is difficult to visualize (a polyhedron containing no lines but being ‘close’ to a halfspace) we only provide the lower image $\D^*$.
![Visualization of the 18 vertices of the upper image $\P\subset \R^4$ in Example \[ex36\] by a radar chart.[]{data-label="fig_5"}](ex36.pdf) \[ex36\] Consider $d=10$ assets with a given correlation structure and all other input parameters as in Example \[AVAR2\] above. Let $X$ be the payoff of an outperformance option with physical delivery as described in Example \[AVAR1\]. As the space of eligible assets we choose the space spanned by the first four assets, i.e. $M = \R^4 \times \{0\}^{6}$. The corresponding linear vector optimization problem has $4$ objectives and a matrix of size $5\,126 \times 51\,300$ with $197\,638$ nonzero entries. The ordering cone is $K_0^M$, which is strictly larger than $\R^4_+$ and is generated by $12$ vectors. Then, $AV@R_\alpha(X)$ with $\alpha$ as in Example \[AVAR2\], obtained as the upper image of the linear vector optimization problem computed with the primal algorithm and $\varepsilon=10^{-2}$, has 18 vertices and 12 extreme directions. The vertices of $\P$ are visualized by a radar chart in Figure \[fig\_5\]. The following numerical example refers to Example \[ex\_WCtheory\] in the previous section. \[ex45\] Let us consider $d=9$ assets with a given correlation structure, all other input parameters as in Example \[AVAR2\] above (i.e. $m=3$), and the same basket option (basket call) with payoff $X$ as in Example \[AVAR2\]. We want to calculate $RWC(-X)$, the relaxed worst case risk measure at $-X$, as described in Example \[ex\_WCtheory\] with parameters $\epsilon_i=500$ for $i\in\{3,\dots,9\}$ and $\epsilon_i=0$ otherwise. The cone $G$ can be seen as a worst-case solvency cone and is chosen to be a conservative modification of $K_0$, where $\lambda$ is replaced by the larger transaction cost $\lambda_{wc}=\lambda+0.2$.
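The cones $K_0$, $K_T(\omega_n)$ and the worst-case cone $G$ above are solvency cones of a market with proportional transaction costs. In a Kabanov-type model with a single proportional cost $\lambda$, such a cone is generated by the $d(d-1)$ vectors $\pi_{ij}e_i-e_j$, $i\neq j$, where $\pi_{ij}$ is the frictional number of units of asset $i$ needed to acquire one unit of asset $j$. The following is a minimal sketch under that assumption (our own illustration with hypothetical names; the paper's exact cone construction may differ in detail):

```python
def solvency_cone_generators(S, lam):
    """Generators pi_ij * e_i - e_j (i != j) of a solvency cone in physical
    units, with pi_ij = (1 + lam) * S[j] / S[i]: exchanging pi_ij units of
    asset i for one unit of asset j is a solvent trade."""
    d = len(S)
    gens = []
    for i in range(d):
        for j in range(d):
            if i != j:
                g = [0.0] * d
                g[i] = (1.0 + lam) * S[j] / S[i]
                g[j] = -1.0
                gens.append(g)
    return gens

# The worst-case cone G of Example [ex45] would correspond to calling this
# with lam + 0.2 instead of lam.
```

For $d=11$ this reproduces the generator count $d(d-1)=110$ quoted in Example \[AVAR2\]; enlarging $\lambda$ enlarges each generator's positive component, i.e. makes the cone more conservative.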
![The lower image $\D^*$ in Example \[ex45\] for $\varepsilon=10^{-2}$ (left) and $\varepsilon=10^{-3}$ (right) computed with the dual algorithm.[]{data-label="fig_6"}](ex45_1_d01_d.pdf "fig:") ![The lower image $\D^*$ in Example \[ex45\] for $\varepsilon=10^{-2}$ (left) and $\varepsilon=10^{-3}$ (right) computed with the dual algorithm.[]{data-label="fig_6"}](ex45_1_d02_d.pdf "fig:") $RWC(-X)$ corresponds to an upper good deal bound, as it is a relaxed version of the set of superhedging portfolios. By considering certain small risks, controlled by $\epsilon$ and $G$, as acceptable, the scalar superhedging price of $34.942464$ units of bond is reduced to $34.830995$ units of bond for $RWC(-X)$. The linear vector optimization problem to calculate $RWC(-X)$ has $3$ objectives and a matrix $B$ of size $4\,608 \times 36\,939$, which is sparse with $185\,856$ nonzero entries. The above prices in units of bond were obtained by solving linear vector optimization problems with both the primal and the dual algorithm for $\varepsilon=10^{-4}$. The following table shows some computational data and a comparison of the primal and dual algorithm. In Figure \[fig\_6\], parts of the results are visualized.

  $\varepsilon$   variant   total time   $|\bar S|$   $|\bar T|$   \# LPs   $t_{max}$   $t_{max}/t_{aver}$
  --------------- --------- ------------ ------------ ------------ -------- ----------- --------------------
  $10^{-2}$       primal    113 secs     13           12           37       8.8 secs    3.8
  $10^{-3}$       primal    239 secs     68           37           123      9.0 secs    6.8
  $10^{-4}$       primal    506 secs     153          82           308      8.8 secs    8.6
  $10^{-2}$       dual      86 secs      7            20           37       5.8 secs    3.8
  $10^{-3}$       dual      193 secs     30           73           113      8.2 secs    4.8
  $10^{-4}$       dual      404 secs     74           136          256      5.8 secs    5.1

**Acknowledgements.** We thank Dr Lizhen Shao for providing the data of Example \[ex07\] taken from [@ShaEhr08] and we thank Professor Robert Vanderbei for supplying the data of Example \[ex06\] taken from [@RusVan03].

P. Artzner, F. Delbaen, J.-M. Eber, and D. Heath. Coherent measures of risk.
Mathematical Finance, 9(3):203–228, 1999.
C. Barber, D. P. Dobkin, and H. Huhdanpaa. The quickhull algorithm for convex hulls. ACM Transactions on Mathematical Software, 22(4):469–483, 1996.
H. Benson. Further analysis of an outcome set-based algorithm for multiple-objective linear programming. , 97(1):1–10, 1998.
H. Benson. An outer approximation algorithm for generating all efficient extreme points in the outcome set of a multiple objective linear programming problem. , 13:1–24, 1998.
D. Bremner, K. Fukuda, and A. Marzetta. , 20(3):333–357, 1998.
L. Csirmaz, , March 2013.
M. Ehrgott. Solving multiobjective linear programmes - from primal methods in decision space to dual methods in outcome space. . http://pages.univ-nc.nc/ bonnel/spcm-2010/confspcm10.htm.
M. Ehrgott, A. L[ö]{}hne, and L. Shao. A dual variant of [B]{}enson’s outer approximation algorithm. Report 654, University of Auckland School of Engineering, 2007.
M. Ehrgott, A. L[ö]{}hne, and L. Shao. A dual variant of [B]{}enson’s outer approximation algorithm. , 52(4):757–778, 2012.
M. Ehrgott, L. Shao, and A. Sch[ö]{}bel. , 50(3):397–416, 2011.
H. F[ö]{}llmer and A. Schied. . Walter de Gruyter & Co., Berlin, extended edition, 2011.
A. Hamel. , 17:153–182, 2009.
A. Hamel and F. Heyde. , 1:66–95, 2010.
A. Hamel, F. Heyde, A. L[ö]{}hne, C. Tammer, and K. Winkler. , 11(1):163–178, 2004.
A. Hamel, F. Heyde, and B. Rudloff. Set-valued risk measures for conical market models. , 5:1–28, 2011.
A. Hamel, B. Rudloff, and M. Yankova. . , 7(2):229–246, 2013.
A. H. Hamel. , 60(7-9):1023–1043, 2011.
A. H. Hamel and A. L[ö]{}hne. Lagrange duality in set optimization. , 2012. arXiv:1207.4433.
F. Heyde. Geometric duality for convex vector optimization problems. , 2011. arXiv:1109.3592v1.
F. Heyde and A. L[ö]{}hne. , 19(2):836–845, 2008.
F. Heyde and A. L[ö]{}hne. Solution concepts in vector optimization: a fresh look at an old story. , 60(12):1421–1440, 2011.
F. Heyde, A. L[ö]{}hne, and C. Tammer. , 69(1):159–179, 2009.
F. Heyde, A. L[ö]{}hne, and C. Tammer. , 2009.
E. Jouini, M. Meddeb, and N. Touzi. Vector-valued coherent risk measures. Finance and Stochastics, 8:531–552, 2004.
R. Korn and S. M[ü]{}ller. , 12(3):1–30, 2009.
A. L[ö]{}hne. , 2011.
A. L[ö]{}hne and B. Rudloff. An algorithm for calculating the set of superhedging portfolios and strategies in markets with transaction costs. , 2011. arXiv:1107.5720v1.
A. L[ö]{}hne and C. Schrage. An algorithm to solve polyhedral convex set optimization problems. , 62(1):131–141, 2013.
D. T. Luc. , 210:158–168, 2011.
R. T. Rockafellar and S. P. Uryasev. Optimization of conditional value-at-risk. Journal of Risk, 2:21–42, 2000.
A. Ruszczyński and R. J. Vanderbei. , 71(4):1287–1297, 2003.
L. Shao and M. Ehrgott. , 68(2):257–276, 2008.
L. Shao and M. Ehrgott. , 68(3):469–492, 2008.

[^1]: A similar variant has been developed independently in [@Csirmaz13].

[^2]: This simplification was initiated by an idea of Kevin Webster. During a lecture in the Ph.D. course in spring 2011 at ORFE, Princeton University, where the classical variant of Benson’s algorithm was introduced, he proposed a variant with two LPs. The advantage over the classical version is that these two LPs are dual to each other. All further improvements of this article are based on this idea.

[^3]: by Konrad Polthier, http://www.javaview.de
--- abstract: 'We study mixed weighted weak-type inequalities for families of functions, which can be applied to study classical operators in harmonic analysis. Our main theorem extends the key result from [@CMP2].' address: - 'Department of Mathematics, University of the Basque Country UPV/EHU, Bilbao, Spain' - 'IKERBASQUE, Basque Foundation for Science, Bilbao, Spain' - 'BCAM, Basque Center for Applied Mathematics, Bilbao, Spain' - 'Department of Mathematics, Universidad Nacional del Sur, Bahía Blanca, Argentina.' author: - Carlos Pérez - Sheldy Ombrosi title: 'Mixed weak type estimates: Examples and counterexamples related to a problem of E. Sawyer' --- Introduction and main results ============================= In this work we consider mixed weighted weak-type inequalities of the form $$\label{desigualdad} uv\bigg(\bigg\{x\in {\mathbb{R}}^n : \frac{|T(fv)(x)|}{v(x)} >t \bigg\}\bigg) \leq \frac{C}{t}\int_{{\mathbb{R}}^n} |f(x)|\,Mu(x)v(x)\,dx,$$ where $T$ is either the Hardy-Littlewood maximal operator or any Calderón-Zygmund operator. Similar inequalities were studied by Sawyer in [@Sa], motivated by the work of Muckenhoupt and Wheeden [@MW] (see also the works [@AM] and [@MOS]). E. Sawyer proved that inequality $(\ref{desigualdad})$ holds in ${\mathbb{R}}$ when $T=M$ is the Hardy-Littlewood maximal operator, assuming that the weights $u$ and $v$ belong to the class $A_1$. This result can be seen as a very delicate extension of the classical weak type $(1,1)$ estimate. However, the reason why E. Sawyer considered $(\ref{desigualdad})$ is the following interesting observation: inequality $(\ref{desigualdad})$ yields a new proof of the classical Muckenhoupt theorem for $M$, assuming that the $A_p$ weights can be factored (P. Jones’s theorem). This means that if $w\in A_p$ then $w=uv^{1-p}$ for some $u,v \in A_1$. Now, define the operator $f\rightarrow \frac{M(fv)}{v}$, which is bounded on $L^{\infty}(uv)$ and is of weak type $(1,1)$ with respect to the measure $uv\,dx$ by $(\ref{desigualdad})$.
Hence, by the Marcinkiewicz interpolation theorem, we recover Muckenhoupt’s theorem. In the same paper, Sawyer conjectured that if $T$ is instead the Hilbert transform, inequality $(\ref{desigualdad})$ also holds with the same hypotheses on the weights $u$ and $v$. This conjecture was proved in [@CMP2]. In fact, it is proved in that paper that the inequality holds for both the Hardy-Littlewood maximal operator and any Calderón-Zygmund operator in any dimension, if either the weights $u$ and $v$ both belong to $A_1$, or $u$ belongs to $A_1$ and $uv \in A_{\infty}$. The method of proof is quite different from that in [@Sa] (also from [@MW]) and is based on certain ideas from extrapolation that go back to the work of Rubio de Francia (see [@CMP2] and also the expository paper [@CMP3]). Applications of these results can be found in [@LOPTT]. The authors conjectured in [@CMP2] that their results may hold under weaker hypotheses on the weights. To be more precise, they proposed that inequality $(\ref{desigualdad})$ is true if $u\in A_{1}$ and $v\in A_{\infty}$. Very recently, some quantitative estimates in terms of the relevant constants of the weights have been obtained in [@OPR], and some new conjectures have been formulated. Inequalities like $(\ref{desigualdad})$, when $T$ is the Hardy-Littlewood maximal operator, can also be seen as generalizations of the classical Fefferman-Stein inequality $$\left\| M(f) \right\|_{L^{1, \infty}(u)} \leq c\;\|f\|_{L^1(Mu)},$$ where $c$ is a dimensional constant. However, in Section 3 we will see that $(\ref{desigualdad})$ does not hold in general, even for weights satisfying strong conditions like $v\in RH_{\infty} \subset A_{\infty}$. In this work we generalize the extrapolation result in [@CMP3] to a larger class of weights (see Theorem \[theor:extrapol\] below). This method of extrapolation is flexible, with scope reaching beyond the classical linear operators. Indeed, it can be applied to square functions and vector-valued operators, as well as to multilinear singular integral operators.
See Section \[applications\] for some of these applications. In fact, the best way to state the extrapolation theorem is without considering operators: the result can be seen as a property of families of functions. Hereafter, ${\mathcal{F}}$ will denote a family of ordered pairs of non-negative, measurable functions $(f,g)$. We are also going to assume that this family ${\mathcal{F}}$ satisfies the following property: for **some** $p_0$, $0<p_0<\infty$, and every $w\in A_\infty$, $$\label{extrapol1} \int_{{\mathbb{R}}^n} f(x)^{p_0} w(x)\,dx \leq C\int_{{\mathbb{R}}^n} g(x)^{p_0} w(x)\,dx,$$ for all $(f,g)\in{\mathcal{F}}$ such that the left-hand side is finite, and where $C$ depends only on the $A_\infty$ constant of $w$. By the main theorem in [@CMP1], this assumption then holds for [**any**]{} exponent $p\in(0,\infty)$ and **every** $w\in A_\infty$: $$\label{extrapol2} \int_{{\mathbb{R}}^n} f(x)^p w(x)\,dx \leq C\int_{{\mathbb{R}}^n} g(x)^p w(x)\,dx,$$ for all $(f,g)\in{\mathcal{F}}$ such that the left-hand side is finite, and where $C$ depends only on the $A_\infty$ constant of $w$. See the papers [@CMP1], [@CGMP] and [@CMP3] for more information and applications, and the book [@CMP4] for a general account. It is also interesting that both $(\ref{extrapol1})$ and $(\ref{extrapol2})$ are equivalent to the following vector-valued version: for all $0<p,q<\infty$ and for all $w\in A_\infty$ we have $$\begin{aligned} \Big\| \Big(\sum_j (f_j)^q\Big)^{\frac{1}{q}} \Big\|_{L^{p}(w)} &\le& C\, \Big\| \Big(\sum_j (g_j)^q\Big)^{\frac{1}{q}} \Big\|_{L^{p}(w)}, \label{CMP:v-v}\end{aligned}$$ for any $\{(f_j,g_j)\}_j\subset {\mathcal{F}}$, where these estimates hold whenever the left-hand sides are finite. The next theorem improves the corresponding theorem from [@CMP2]. \[theor:extrapol\] Let ${\mathcal{F}}$ be a family of functions satisfying $(\ref{extrapol1})$ and let $\theta\geq 1$. Suppose that $u\in A_1$ and that $v$ is a weight such that $v^{\delta} \in A_{\infty}$ for some $\delta>0$.
Then, there is a constant $C$ such that $$\label{extrapolweak} \Big\| \frac{f}{v^{\theta} }\Big\|_{L^{1/\theta,\infty}(uv)} \leq C\,\Big\| \frac{g}{v^{\theta} }\Big\|_{L^{1/\theta,\infty}(uv)} , \qquad (f,g)\in{\mathcal{F}}.$$ Similarly, the following vector-valued extension holds: if $0<q<\infty$, $$\label{Lp,s:v-v} \Big\| \frac{\Big(\sum_j (f_j)^q \Big)^\frac1q}{v^{\theta}}\Big\|_{L^{1/\theta,\infty}(uv)} \leq C\,\Big\| \frac{\Big(\sum_j (g_j)^q \Big)^\frac1q}{v^{\theta} }\Big\|_{L^{1/\theta,\infty}(uv)},$$ for any $\{(f_j,g_j)\}_j\subset {\mathcal{F}}$. Observe that the singular class of weights $v(x)=|x|^{-nr}$, $r\geq1$, is covered by the hypotheses of the theorem but not by the corresponding theorem from [@CMP2]. The proof of $(\ref{Lp,s:v-v})$ is immediate, since we can extrapolate using $(\ref{CMP:v-v})$ as initial hypothesis and then apply $(\ref{extrapolweak})$. \[corol:extrapol\] Let ${\mathcal{F}}$, $u$ and $\theta\geq 1$ be as in the theorem. Suppose now that $v_i$, $i=1, \cdots, m$, are weights such that $v_i^{\delta_i} \in A_{\infty}$ for some $\delta_i>0$, $i=1, \cdots, m$. Then, if we denote $v=\prod_{i=1}^{m} v_i$, $$\Big\| \frac{f}{v^{\theta} }\Big\|_{L^{1/\theta,\infty}(uv)} \leq C\, \Big\| \frac{g}{v^{\theta} }\Big\|_{L^{1/\theta,\infty}(uv)} , \qquad (f,g)\in{\mathcal{F}},$$ and similarly, for $0<q<\infty$, $$\Big\| \frac{\Big(\sum_j (f_j)^q \Big)^\frac1q}{v^{\theta}}\Big\|_{L^{1/\theta,\infty}(uv)} \leq C\,\Big\| \frac{\Big(\sum_j (g_j)^q \Big)^\frac1q}{v^{\theta} }\Big\|_{L^{1/\theta,\infty}(uv)},$$ for any $\{(f_j,g_j)\}_j\subset {\mathcal{F}}$. The proof reduces to the theorem by choosing $\delta>0$ small enough so that $v^{\delta}=\prod_{i=1}^{m} v_i^{\delta} \in A_{\infty}$, which follows by convexity since $v_i^{\delta_i} \in A_{\infty}$, $i=1, \cdots, m$. To apply Theorem \[theor:extrapol\] above to some of the classical operators we need a mixed weak type estimate for the Hardy-Littlewood maximal operator.
This is the content of the next theorem, which was obtained in dimension one by Andersen and Muckenhoupt in [@AM] and by Mart[í]{}n-Reyes, Ortega Salvador and Sarri[ó]{}n Gavi[á]{}n [@MOS] in higher dimensions. In each case the proof follows as a consequence of a more general result under the additional hypothesis that $u\in A_1$. For completeness we will give an independent and direct proof, with the advantage that no condition on the weight $u$ is assumed. \[theor:max\] Let $u\ge 0$ and $v(x)=|x|^{-nr}$ for some $r>1$. Then there is a constant $C$ such that for all $t>0$, $$\label{max1} uv\bigg(\bigg\{x\in {\mathbb{R}}^n : \frac{M(fv)(x)}{v(x)} >t \bigg\}\bigg) \leq \frac{C}{t}\int_{{\mathbb{R}}^n} |f(x)|\,Mu(x)v(x)\,dx.$$ \[falso\] We remark that the theorem may fail when $r=1$, even in the case $u=1$; see [@AM]. However, we already mentioned that the singular weight $v(x)=|x|^{-n}$ is included in the extrapolation Theorem \[theor:extrapol\]. [**Acknowledgement.**]{} The authors are grateful to F. J. Martín-Reyes and P. Ortega-Salvador for pointing out reference [@MOS]. Some applications {#applications} ================= In this section we show the flexibility of the method by giving two applications. The vector-valued case ---------------------- Let $T$ be any singular integral operator with standard kernel and let $M$ be the Hardy-Littlewood maximal function. We start from the following inequality due to Coifman [@Coi]: for $0<p<\infty$ and $w\in A_\infty$, $$\label{coifman} \int_{{\mathbb{R}}^n} |Tf(x)|^p\, w(x)\,d x \le C\,\int_{{\mathbb{R}}^n} Mf(x)^p\, w(x)\,d x.$$ Combining it with the extrapolation Theorem \[theor:extrapol\] and with Theorem \[theor:max\] yields the following corollary. \[corol:vv\] Let $u\in A_1$ and $v(x)=|x|^{-nr}$ for some $r>1$. Also let $1<q<\infty$.
Then, there is a constant $C$ such that for all $t>0$, $$\begin{aligned} uv \bigg(\bigg\{ x\in {\mathbb{R}}^n : \frac{\Big(\sum_j M(f_{j}v)(x)^q\Big)^{\frac1q}}{v(x)} >t \bigg\}\bigg)\!\!\! &\leq& \frac{C}{t}\int_{{\mathbb{R}}^n} \Big(\sum_j |f_{j}(x)|^q\Big)^\frac1q \,u(x)v(x)\,dx, \label{max-vv} \\ uv \bigg(\bigg\{ x\in {\mathbb{R}}^n : \frac{\Big(\sum_j |T(f_{j}v)(x)|^q\Big)^{\frac1q}}{v(x)} >t \bigg\}\bigg)\!\!\! &\leq& \frac{C}{t}\int_{{\mathbb{R}}^n} \Big(\sum_j |f_j(x)|^q\Big)^\frac1q \,u(x)v(x)\,dx. \label{T-vv}\end{aligned}$$ Observe that in particular we have the following scalar version, $$uv\bigg(\bigg\{x\in {\mathbb{R}}^n : \frac{|T(fv)(x)|}{v(x)} >t \bigg\}\bigg) \leq \frac{C}{t}\int_{{\mathbb{R}}^n} |f(x)|\,u(x)v(x)\,dx.$$ This scalar version was proved in [@MOS]. The second inequality of the corollary follows from the first one by applying inequality $(\ref{extrapolweak})$ of Theorem \[theor:extrapol\] with initial hypothesis $(\ref{coifman})$: $$\sup_{t>0}t uv \bigg(\bigg\{ x\in {\mathbb{R}}^n : \frac{\Big(\sum_j |T(f_j)(x)|^q\Big)^{\frac1q}}{v(x)} >t \bigg\}\bigg)\le$$ $$C \sup_{t>0}t uv \bigg(\bigg\{ x\in {\mathbb{R}}^n : \frac{\Big(\sum_j M(f_j)(x)^q\Big)^{\frac1q}}{v(x)} >t \bigg\}\bigg).$$ To prove the first inequality in Corollary \[corol:vv\] we first note that in [@CGMP] it was shown that, for $1<q<\infty$, for all $0<p<\infty$ and $w\in A_\infty$, $$\Big\|\Big(\sum_j (M(f_j))^q\Big)^\frac1q\Big\|_{L^p(w)} \le C\, \Big\| M \Big( \big(\sum_j |f_j|^q \big)^\frac1q \Big) \Big\|_{L^p(w)}.$$ To conclude, we apply Theorem \[theor:extrapol\] combined with Theorem \[theor:max\]. Multilinear Calderón-Zygmund operators --------------------------------------- We now apply our main results to multilinear Calderón-Zygmund operators.
We follow here the theory developed by Grafakos and Torres in [@GT1]; that is, $T$ is an $m$-linear operator such that $T:L^{q_1}\times\cdots\times L^{q_m} \longrightarrow L^q$, where $1< q_1,\dots,q_m<\infty$, $0<q<\infty$ and $$\label{exponents} \frac1q=\frac1{q_1}+\cdots+\frac1{q_m}.$$ The operator $T$ is associated with a Calderón-Zygmund kernel $K$ in the usual way: $$T (f_1,\dots, f_m)(x) = \int_{{\mathbb{R}}^n}\cdots \int_{{\mathbb{R}}^n} K(x,y_1,\dots, y_m)\, f_1(y_1)\dots f_m(y_m)\,dy_1\dots dy_m,$$ whenever $f_1,\dots,f_m$ are in $C_0^\infty$ and $x\notin \bigcap_{j=1}^m{\operatorname{supp}}f_j$. We assume that $K$ satisfies the appropriate decay and smoothness conditions (see [@GT1] for complete details). Such an operator $T$ is bounded on any product of Lebesgue spaces with exponents $1<q_1,\dots, q_m<\infty$, $0<q<\infty$ satisfying $(\ref{exponents})$. Further, it also satisfies weak endpoint estimates when some of the $q_i$’s are equal to one. There are also weighted norm inequalities for multilinear Calderón-Zygmund operators; these were first proved in [@GT2] using a good-$\lambda$ inequality, and fully characterized in [@LOPTT] using the sharp maximal function and a new maximal type function ${\mathcal M}$, which plays a central role in the theory: $$\label{first11} {\mathcal M}(f_1,\dots , f_m)(x)=\sup_{\substack{Q\ni x\\ Q\,\,\textup{cube}} } \prod_{i=1}^m\frac{1}{|Q|}\int_Q|f_i(z)|\, dz,$$ where the supremum is taken over cubes with sides parallel to the axes. Indeed, one of the main results from [@LOPTT] is that for any $0<p<\infty$ and for any $w\in A_\infty$, $$\label{GT2:T-M} \Big\|T(f_1,\dots,f_m)\Big\|_{L^p(w)} \le C\, \Big\| {\mathcal M}(f_1,\dots , f_m) \Big\|_{L^p(w)}.$$ Beginning with these inequalities, we can apply Theorem \[theor:extrapol\] to the family ${\mathcal{F}}$ of pairs $\Big(T(f_1,\dots,f_m),\, {\mathcal M}(f_1,\dots , f_m) \Big)$. Hence, if $u\in A_1$ and $v(x)=|x|^{-nr}$ for some $r>1$,
$$\label{multi:v-v-strong} \Big\| \frac{ T(f_1,\dots,f_m) }{v^m} \Big\|_{L^{1/m,\infty}(uv)} \le C\, \Big\| \frac{ {\mathcal M}(f_1,\dots , f_m) }{v^m} \Big\|_{L^{1/m,\infty}(uv)}.$$ \[corol:mult-v-v\] Let $T$ be a multilinear Calderón-Zygmund operator as above. Let $u\in A_1$ and $v(x)=|x|^{-nr}$ for some $r>1$. Then $$\Big\| \frac{ T(f_1,\dots,f_m) }{v^m} \Big\|_{L^{1/m,\infty}(uv)} \le C\,\prod_{j=1}^m \int_{{\mathbb{R}}^n} |f_j|\,u\,dx.$$ To prove this corollary we will use the following version of the generalized Hölder inequality: for $1\leq q_1, \dots, q_m < \infty$ with $$\frac{1}{q_1}+\dots +\frac{1}{q_m}=\frac{1}{q},$$ there is a constant $C$ such that $$\| \prod_{j=1}^m h_j \|_{ L^{q,\infty}(w)} \leq C\, \prod_{j=1}^m \|h_j\|_{L^{ q_j,\infty}(w)}.$$ The proof of this inequality is similar to that of the classical generalized Hölder inequality in $L^{p}$ theory. Now, if we combine this with $(\ref{multi:v-v-strong})$ and with the trivial observation that $$\label{e4} {\mathcal M}(f_1,\dots , f_m)(x) \le \prod_{i=1}^m M(f_i) \, ,$$ we have $$\Big\| \frac{ T(f_1,\dots,f_m) }{v^m} \Big\|_{L^{1/m,\infty}(uv)} \le C\, \prod_{j=1}^m \Big\| \frac{ M f_j}{v} \Big\|_{ L^{1,\infty}(uv)}.$$ Finally, an application of Theorem \[theor:max\] concludes the proof of the corollary. Counterexamples {#contraejemplo} =============== An interesting point of Theorem \[theor:max\] is that if $v(x)=|x|^{-nr}$, $r>1$, the estimate $$\label{desigualdad2} uv\bigg(\bigg\{x\in {\mathbb{R}}^n : \frac{M(fv)(x)}{v(x)} >t \bigg\}\bigg) \leq \frac{C}{t}\int_{{\mathbb{R}}^n} |f(x)|\,Mu(x)v(x)\,dx$$ holds for any $u\ge 0$. On the other hand, we have already mentioned that the same inequality holds if $u, v \in A_{1}$, or if $u\in A_1$ and $uv\in A_{\infty}$ [@CMP2]. In particular, this is the case if $u\in A_{1}$ and $v\in RH_{\infty}$. Assuming that $v\in RH_{\infty}$, a natural question is whether inequality $(\ref{desigualdad2})$ holds with [**no**]{} assumption on $u$.
This would improve the classical Fefferman-Stein inequality. However, we will show in the next example that this is **false** in general. On the real line we let $v(x)=\sum_{k\in Z}\left| x-k\right| \chi _{ I_{k}}\left( x\right)$, where $I_{k}$ denotes the interval $\left| x-k\right| \leq 1/2$. It is not difficult to see that $v\in RH_{\infty }$. If we choose $$u(x)=\sum _{\substack{ k\in N \\ k>10}}\frac{k}{\log(k)}\chi _{ J_{k}}\left( x\right),$$ where $J_{k}= \left[ k+\frac{1}{4k},k+\frac{1}{k}\right]$, and $f=\chi _{ \left[ -1,1 \right] }$, then there is no finite constant $C$ such that the inequality $$uv(\left\{ x:Mf\left( x\right) >v(x)\right\} )\leq C\int \left| f\right| M^{2}u \label{falsa}$$ holds. To prove this we will make use of the following observation: > There is a dimensional constant such that $$M^{2}w(x) \approx M_{ L\log L}w(x), \qquad x \in {\mathbb{R}}^n,$$ where $$M_{ L\log L}f(x) = \sup_{Q\ni x} \|f\|_{ L\log L,Q}$$ and $$\|f\|_{L\log L,Q} =\inf\{\lambda >0: \frac{1}{|Q|}\int_{Q} \Phi(\frac{ |f|}{ \lambda }) \, dx \le 1\},$$ with $\Phi(t)=t\log(e+t)$; see [@PW] or [@G]. Now, a computation shows that $M^{2}u(x)\approx M_{ L\log L}u(x) \leq C$ for $x\in [-1,1]$, so the right-hand side of $\left( \ref{falsa}\right)$ is finite, while the left-hand side is infinite. Let us check this.
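Before the verification, a quick numerical aside (our own addition, not part of the argument): the computation below reduces matters to a series of the form $\sum_k \frac{c}{k\log k}$, which diverges by the integral test. Numerically, the sum over each block $N\le k\le N^2$ is bounded below by a fixed constant, roughly $\log 2$, so the tails never vanish:

```python
import math

def block_sum(N):
    """Sum of 1/(k*log k) over N <= k <= N**2.  Since the integral of
    1/(x*log x) from N to N**2 equals log 2 for every N, each block
    contributes about log(2) ~ 0.693, forcing divergence of the series."""
    return sum(1.0 / (k * math.log(k)) for k in range(N, N * N + 1))
```

Since the block sums do not tend to zero, the Cauchy criterion fails and $\sum_k \frac{1}{k\log k}=\infty$.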
For $\left| x\right| >2$ we have $Mf\left( x\right) \geq \frac{1}{\left| x\right| }$ and, if $x\in J_{k}\subset I_{k}$ for $k>10$, then $\frac{1}{\left| x\right| }>\frac{1}{2k}$; hence it is easy to see that $(k+\frac{1}{4k},k+\frac{1}{2k})\subset \{x\in J_{k}: Mf(x)>v(x)\}$, and therefore we obtain $$\begin{aligned} uv(\left\{ x:Mf\left( x\right) >v(x)\right\} ) &>&\sum_{\substack{ k\in N \\ k>10}}\frac{k}{\log(k)}\int_{k+\frac{1}{4k}}^{k+\frac{1}{2k}}\left( x-k\right) dx \\ &=&\sum_{\substack{ k\in N \\ k>10}}\frac{3}{32\, k \log(k)}=\infty .\end{aligned}$$ Proof of Theorem \[theor:extrapol\] =================================== The following lemmas will be useful. \[lem:epsilon\] If $u\in A_1$ and $w\in A_1$, then there exists $0<\epsilon_0<1$, depending only on $[u]_{A_1}$, such that $uw^\epsilon \in A_1$ for all $0<\epsilon<\epsilon_0$. Since $u\in A_1$, $u\in RH_{s_0}$ for some $s_0>1$ depending on $[u]_{A_1}$. Let $\epsilon_0=1/{s_0}'$ and $0<\epsilon<\epsilon_0$. This implies that $u\in RH_s$ with $s=(1/\epsilon)'$. Then, since $u,\,w\in A_1$, for any cube $Q$ and almost every $x\in Q$, $$\begin{aligned} \lefteqn{\hskip-.3cm \frac{1}{|Q|}\int_Q u(y)w(y)^\epsilon\,dy \leq \left(\frac{1}{|Q|}\int_Q u(y)^s\,dy\right)^{1/s} \left(\frac{1}{|Q|}\int_Q w(y)\,dy\right)^{1/s'}} \\ &\leq& \frac{[u]_{RH_s}}{|Q|}\int_Q u(y)\,dy \left(\frac{1}{|Q|}\int_Q w(y)\,dy \right)^{1/s'} \leq [u]_{RH_s}[u]_{A_1}[w]_{A_1}^\epsilon u(x)w(x)^\epsilon.\end{aligned}$$ Hence $uw^\epsilon \in A_1$ with $[uw^\epsilon]_{A_1}\le [u]_{RH_s}[u]_{A_1}[w]_{A_1}^\epsilon$. We also need the following version of the Marcinkiewicz interpolation theorem in the scale of Lorentz spaces; in fact, we need a version of this theorem with precise constants. The proof can be found in [@CMP2].
\[prop:interpolation\] Given $p_0$, $1<p_0<\infty$, let $T$ be a sublinear operator such that $$\|T f\|_{L^{p_0,\infty}} \le C_0\,\|f\|_{L^{p_0,1}} \qquad \text{ and } \qquad \|T f\|_{L^{\infty}} \le C_1\,\|f\|_{L^{\infty}}.$$ Then for all $p_0<p<\infty$, $$\|T f\|_{L^{p,1}} \le 2^{1/p}\,\big(C_0\,(1/p_0-1/p)^{-1}+C_1\big)\,\|f\|_{L^{p,1}}.$$ Fix $u\in A_1$ and $v$ such that $v^{\delta} \in A_{\infty}$ for some $\delta>0$. Then by the factorization theorem $v^{\delta}=v_1 v_2$ for some $v_1 \in A_1$ and $v_2 \in RH_{\infty}$. Define the operator $S_{\lambda}$ by $$S_{\lambda}f(x)= \frac{M(fuv_1^{1/\lambda\delta})}{uv_1^{1/\lambda\delta}}$$ for some large enough constant $\lambda>1$ that will be chosen soon. By Lemma \[lem:epsilon\], there exists $0<\epsilon_0<1$ (depending only on $[u]_{A_1}$) such that $u\,w^{\epsilon}\in A_1$ for all $w\in A_1$ and $0<\epsilon<\epsilon_0$. Choose $\lambda>\frac{1}{\delta \epsilon_0}$, so that $uv_1^{1/\lambda\delta} \in A_1$. Hence, $S_{\lambda}$ is bounded on $L^{\infty}(uv)$ with constant $C_1=[u]_{A_1}$. We will now show that, for some larger $\lambda$, $S_{\lambda}$ is bounded on $L^{\lambda}(uv)$. Observe that $$\int_{{\mathbb{R}}^n}\! S_{\lambda}f(x)^{\lambda}\, u(x)\,v(x)\,dx = \int_{{\mathbb{R}}^n}\! M(fuv_1^{1/\lambda\delta})(x)^{\lambda}\,u(x)^{1-\lambda}\,v_2(x)^{1/\delta} \,dx.$$ Since $v_2=\tilde{v}_2^{1-t}$ for some $\tilde{v}_2 \in A_{1}$ and $t>1$, we have $$u^{1-\lambda}\,v_2^{1/\delta}= u^{1-\lambda}\,\tilde{v}_2^{\frac{1-t}{\delta}}= \big(u\, \tilde{v}_2^{\frac{t-1}{\delta(\lambda-1)}} \big)^{1-\lambda}.$$ By Lemma \[lem:epsilon\] there exists $\lambda$ sufficiently large ($\lambda>1+\frac{t-1}{\delta \epsilon_0}$) such that $ u\, \tilde{v}_2^{\frac{t-1}{\delta(\lambda-1)}} \in A_1$ and hence $u^{1-\lambda}\,v_2^{1/\delta} \in A_{\lambda}$. By Muckenhoupt’s theorem, $M$ is bounded on $L^{\lambda}(u^{1-\lambda}v_2^{1/\delta} )$ and therefore $S_{\lambda}$ is bounded on $L^{\lambda}(uv)$ with some constant $C_0$.
Observe that $\lambda$ depends on the $A_1$ constant of $u$. We fix one such $\lambda$ from now on. By Proposition \[prop:interpolation\] above, $S_{\lambda}$ is bounded on $L^{q,1}(uv)$ for $q>\lambda$. Hence, $$\|S_{\lambda} f\|_{L^{q,1}(uv)} \le 2^{1/q}\,\big(C_0\,(1/\lambda-1/q)^{-1}+C_1\big)\,\|f\|_{L^{q,1}(uv)}.$$ Thus, for all $q\ge 2\lambda$ we have that $\|S_{\lambda} f\|_{L^{q,1}(uv)}\le K_0\,\|f\|_{L^{q,1}(uv)}$ with $K_0=4\lambda\,(C_0+C_1)$. We emphasize that the constant $K_0$ is valid for every $q\ge 2\lambda$. Fix $(f,g)\in{\mathcal{F}}$ such that the left-hand side of $(\ref{extrapolweak})$ is finite. We let $r$ be such that $\theta<r<\theta (2\lambda)'$, to be chosen soon. Now, by the duality between $L^{s,\infty}$ and $L^{s',1}$ (applied with $s=r/\theta$), $$\big\|f\,v^{-\theta}\big\|_{L^{1/\theta,\infty}(uv)}^\frac{1}{r} = \big\| (f\,v^{-\theta})^\frac{1}{r} \big\|_{L^{r/\theta,\infty}(uv)} = \sup \int_{{\mathbb{R}}^n} f(x)^{\frac{1}{r}}\,h(x)\,u(x)\,v(x)^{1-\theta/r}\,dx,$$ where the supremum is taken over all non-negative $h \in L^{(\frac{r}{\theta})',1}(uv)$ with $\|h\|_{L^{(\frac{r}{\theta})',1}(uv)}=1$. Fix such a function $h$. We are going to build a larger function ${\mathbb{R}}h$, using Rubio de Francia's method, such that ${\mathbb{R}}h\, uv^{1-\theta/r}\in A_\infty$. We will then be able to use the hypothesis with $p=1/r$ (recall that $(\ref{extrapol1})$ is equivalent to $(\ref{extrapol2})$) with the weight ${\mathbb{R}}h\, uv^{1-\theta/r}$. We let $r$ be such that $(\frac{r}{\theta})'> 2\lambda$, and hence $S_{(\frac{r}{\theta})'}$ is bounded on $L^{(\frac{r}{\theta})',1}(uv)$ with constant bounded by $K_0$.
Now apply the Rubio de Francia algorithm (see [@GCRdF]) to define the operator ${\mathbb{R}}$ on $h\in L^{(\frac{r}{\theta})',1}(uv)$, $h\geq 0$, by $${\mathbb{R}}h(x) = \sum_{j=0}^\infty \frac{S_{(\frac{r}{\theta})'}^j h(x)}{2^j\, K_0^j}.$$ Recall that the operator $S_{(\frac{r}{\theta})'}$ is defined by $$S_{(\frac{r}{\theta})'}f(x)= \frac{M(fuv_1^{1/(\frac{r}{\theta})'\delta})}{uv_1^{1/(\frac{r}{\theta})'\delta}}.$$ Also, recall that, by the choice of $r$, $uv_1^{1/(\frac{r}{\theta})'\delta} \in A_1$. It follows immediately from this definition that:

(a) $h(x)\le {\mathbb{R}}h(x)$;

(b) $\|{\mathbb{R}}h\|_{L^{(\frac{r}{\theta})',1}(uv)}\le 2\,\|h\|_{L^{(\frac{r}{\theta})',1}(uv)}$;

(c) $S_{(\frac{r}{\theta})'}({\mathbb{R}}h)(x)\le 2\,K_0\, {\mathbb{R}}h(x)$.

In particular, it follows from $(c)$ and the definition of $S_{(\frac{r}{\theta})'}$ that ${\mathbb{R}}h\, uv_1^{1/(\frac{r}{\theta})'\delta} \in A_1$ and therefore ${\mathbb{R}}h\, uv^{1/(\frac{r}{\theta})'}= {\mathbb{R}}h\, uv_1^{1/\delta (\frac{r}{\theta})'}v_2^{1/\delta (\frac{r}{\theta})'} \in A_\infty$.
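Properties (a), (b) and (c) depend only on $S$ being a positive sublinear operator with norm at most $K_0$ on the relevant space. A toy discrete sketch of the algorithm (our own illustration; the operator `S` below is a stand-in acting on lists, not the maximal-type operator of the proof, and the series is truncated after finitely many terms):

```python
def rubio_de_francia(S, h, K0, terms=30):
    """Rh = sum_j S^j h / (2*K0)**j for a positive sublinear operator S with
    norm at most K0 (here on the sup norm), truncated after `terms` terms."""
    Rh = [0.0] * len(h)
    power = list(h)                       # S^0 h = h
    for j in range(terms):
        Rh = [a + b / (2.0 * K0) ** j for a, b in zip(Rh, power)]
        power = S(power)                  # S^{j+1} h
    return Rh

def S(f):
    """Toy positive sublinear operator: pointwise max of f and its 3-point
    moving average; its sup-norm operator norm is at most K0 = 1."""
    n = len(f)
    out = []
    for i in range(n):
        window = f[max(i - 1, 0):i + 2]
        out.append(max(f[i], sum(window) / len(window)))
    return out
```

With this `S` and $K_0=1$, the three properties can be verified pointwise: the $j=0$ term gives (a), the geometric factor gives (b) with constant $2$, and applying `S` term by term gives (c) up to a negligible truncation error.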
To apply the hypothesis we must first check that the left-hand side is finite, but this follows at once from Hölder’s inequality and $(b)$: $$\begin{gathered} \int_{{\mathbb{R}}^n} f(x)^{\frac{1}{r}}\,{\mathbb{R}}h(x)\,u(x)\,v(x)^{1-\frac{\theta}{r}}\,dx \le \big\|(f\,v^{-\theta})^{\frac1r}\big\|_{L^{r/\theta,\infty}(uv)}\,\|{\mathbb{R}}h\|_{L^{(r/\theta)',1}(uv)} \\ \le 2\,\big\|f\,v^{-\theta}\big\|_{L^{1/\theta,\infty}(uv)}^{\frac1r}\|h\|_{L^{(\frac{r}{\theta})',1}(uv)} < \infty.\end{gathered}$$ Thus, since ${\mathbb{R}}h\, uv^{1/(\frac{r}{\theta})'} \in A_\infty$, by $(\ref{extrapol2})$ $$\begin{aligned} \int_{{\mathbb{R}}^n} f(x)^{\frac{1}{r}}\,h(x)\,u(x)\,v(x)^{1-\frac{\theta}{r}}\,dx & \leq \int_{{\mathbb{R}}^n} f(x)^{\frac{1}{r}}\,{\mathbb{R}}h(x)\,u(x)\,v(x)^{1-\frac{\theta}{r}}\,dx \\ &\leq C\,\int_{{\mathbb{R}}^n} g(x)^{\frac{1}{r}}\,{\mathbb{R}}h(x)\,u(x)\,v(x)^{1-\frac{\theta}{r}}\,dx \\ & \leq C\,\big\| (g\,v^{-\theta})^\frac{1}{r} \big\|_{L^{r/\theta,\infty}(uv)} \, \|{\mathbb{R}}h\|_{L^{(\frac{r}{\theta})',1}(uv)}\\ & \le 2\,C\,\big\| g\,v^{-\theta}\big\|_{L^{1/\theta,\infty}(uv)}^{\frac{1}{r}}.\end{aligned}$$ Since $C$ is independent of $h$, inequality $(\ref{extrapolweak})$ follows, finishing the proof of the theorem. Proof of Theorem \[theor:max\] {#section:max} ============================== Proof of $(\ref{max1})$ --------- The following lemma is important in the proof. \[local\] Let $f$ be a positive and locally integrable function. Then, for every $r>1$ and $\lambda>0$, there exists a positive real number $a$, depending on $f$ and $\lambda$, such that the inequality $$\left( \int_{\left| y\right| \le a^{\frac{1}{r-1}}}f(y)dy\right) a^n=\lambda$$ holds. Consider the function $$g(a)=\left( \int_{\left| y\right| \le a^{\frac{1}{r-1}}}f(y)dy\right) a^n\text{, for }a\geq 0;$$ by the hypothesis, $g$ is a continuous and nondecreasing function. Furthermore, $g(0)=0$ and $g(+\infty)=+\infty$, and therefore by the intermediate value theorem there exists $a$ which satisfies the conditions of the lemma.
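The intermediate value argument in the lemma is constructive enough to mirror numerically, since $g$ is nondecreasing. A sketch in dimension $n=1$ (our own illustration; `f_integral_on_ball` is a hypothetical callback returning $\int_{|y|\le R} f$):

```python
import math

def solve_lemma_a(f_integral_on_ball, lam, r, lo=1e-9, hi=1e9, iters=200):
    """Bisection (on a log scale) for the number a > 0 of the lemma:
    g(a) = (integral of f over |y| <= a**(1/(r-1))) * a**n = lam, with
    n = 1.  g is continuous and nondecreasing, g(0) = 0, g(inf) = inf."""
    g = lambda a: f_integral_on_ball(a ** (1.0 / (r - 1.0))) * a
    for _ in range(iters):
        mid = math.sqrt(lo * hi)          # geometric midpoint
        if g(mid) < lam:
            lo = mid
        else:
            hi = mid
    return hi
```

For instance, for $f\equiv 1$ on $\mathbb{R}$ one has $\int_{|y|\le R}f\,dy = 2R$, so with $r=2$ the equation becomes $2a^2=\lambda$, which the bisection recovers.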
Let $u\ge 0$ and $v(x)=|x|^{-nr}$ with $r>1$. By homogeneity we can assume that $t=1$. Also, for simplicity, we denote $g=fv$. Now, for each integer $k$ we denote $G_{k}= \left\{ 2^{k}<\left| x\right| \leq 2^{k+1}\right\}$, $I_{k}= \left\{ 2^{k-1}<\left| x\right| \leq 2^{k+2}\right\}$, $L_{k}= \left\{ 2^{k+2}<\left| x\right| \right\}$, $C_{k}=\left\{ \left| x\right| \leq 2^{k-1}\right\}.$ It will be enough to prove the following estimates: $$\sum_{k\in Z}uv\left\{ x\in G_{k}: M(g\chi _{I_{k}}) (x) >\frac{1}{\left| x\right| ^{nr}}\right\} \leq C_{r,n} \int g\,Mu, \label{fac}$$ $$\sum_{k\in Z}uv\left\{ x\in G_{k}: M(g\chi _{L_{k}})(x) >\frac{1 }{\left| x\right| ^{nr}}\right\} \leq C_{r,n}\int g\,Mu, \label{med}$$ $$\sum_{k\in Z}uv\left\{ x\in G_{k}:M(g\chi _{C_{k}})(x) >\frac{1}{\left| x\right| ^{nr}}\right\} \leq C_{r,n}\int g\,Mu. \label{dif}$$ Taking into account that in $G_{k}$, $v(x)=\frac{1}{\left|x\right| ^{nr}}\sim 2^{-knr}$, using the weak type $(1,1)$ inequality of $M$ with respect to the pair of weights $(u,Mu)$, and since the subsets $I_{k}$ overlap at most three times, we obtain $\left( \ref{fac}\right)$. To prove inequality $\left( \ref{med}\right)$ we will estimate $M(g\chi _{L_{k}}) (x)$.
Observe that if $x$ belongs to $G_{k}$ and $y\in L_{k}=\left\{ 2^{k+2}<\left| y\right| \right\}$, and if $\left| y-x\right| \leq \rho $, then $\frac{\left| y\right| }{2}\leq \rho$, and hence $$\frac{1}{\rho ^{n}}\int_{\left| y-x\right| \leq \rho }g(y)\chi _{L_{k}}\left( y\right) dy\leq C_{n}\int_{2^{k+2}<\left| y\right| }\frac{g(y)}{ \left| y\right|^n }dy \leq C_{n}\int_{|x|<\left| y\right| }\frac{g(y)}{ \left| y\right|^n }dy.$$ If we denote $F(x)= \int_{ |x|<|y| }\frac{g(y)}{ \left| y\right|^n }dy$, the left-hand side of $\left( \ref{med}\right)$ is bounded by $$\sum_{k\in Z}2^{-krn} u\left\{ x\in \mathbb{R}^{n}: F(x) >C\,2^{-knr} \right\} \approx \int_{0}^{\infty} t u\left\{ x\in \mathbb{R}^{n}: F(x) >t \right\} \frac{dt}{t}$$ $$= \int_{\mathbb{R}^{n}} F(x)\,u(x)dx = \int_{\mathbb{R}^{n}} \int_{ |x|<|y| }\frac{g(y)}{ \left| y\right|^n }dy \,u(x)dx$$ $$=\int_{\mathbb{R}^{n}} g(y)\,\frac{1}{ \left| y\right|^n }\int_{ |x|<|y| } \,u(x)dx\, dy \le C\int_{\mathbb{R}^{n}} g(y)\,Mu(y)dy.$$ To prove $\left( \ref{dif}\right)$ we estimate $M(g\chi _{C_{k}}) (x) $ for $x\in G_{k}$. Indeed, if $y\in C_{k}$ then $2\left| y\right| <\left| x\right|$, and since $M(g\chi _{C_{k}})(x) \leq \frac{c_n}{\left| x\right| ^{n}}\int_{C_{k}}g(y)dy$, we obtain $$M(g\chi _{C_{k}})(x) \leq \frac{C}{\left| x\right| ^{n}}\int_{C_{k}}g\leq \frac{C}{\left| x\right| ^{n}}\int_{\left| y\right| \leq \frac{\left| x\right| }{2}}g.$$ Thus, since the subsets $G_{k}$ are disjoint, the left-hand side in $\left( \ref{dif}\right)$ is bounded by $$uv\left\{ x\in \mathbb{R}^{n}:\frac{C}{\left| x\right| ^{n}}\int_{\left| y\right| \leq \frac{\left| x\right| }{2}}g>\frac{1 }{\left| x\right| ^{nr}}\right\}.$$ Now, if $a$ denotes the positive real number that appears in Lemma \[local\] (i.e., $a$ satisfies $1 =\left( \int_{\left| y\right| \leq a^{\frac{1}{r-1}}}g\right) a^n$), we express the last integral in the following way: $$\begin{gathered} uv\left( \left\{ x:\frac{C}{\left| x\right| ^{n}}\int_{\left| y\right| \leq \frac{\left| x\right| }{2}}g>\frac{1 }{\left| x\right| ^{nr}}\right\}
\right) =uv\left( \left\{ \left| x\right| \leq a^{\frac{1}{r-1}}:\frac{C}{\left| x\right| ^{n}}\int_{\left| y\right| \leq \frac{\left| x\right| }{2}}g>\frac{1 }{\left| x\right| ^{nr}}\right\} \right) + \label{serie} \\ +\sum_{k=0}^{\infty }uv\left( \left\{ x: 2^{k}a^{\frac{1}{r-1}}<\left| x\right| \leq 2^{k+1}a^{\frac{1}{r-1}} \,\,\mbox{and}\,\, \frac{C}{\left| x\right| ^{n}}\int_{\left| y\right| \leq \frac{\left| x\right| }{2}}g>\frac{1 }{\left| x\right| ^{nr}}\right\} \right) \notag\end{gathered}$$ If $\left| x\right| \leq a^{\frac{1}{r-1}}$, since $\left| y\right| \leq \frac{\left| x\right| }{2}$ we have that $\left| y\right| \leq a^{\frac{1}{r-1}}$, and thus the set $$\left\{ \left| x\right| \leq a^{\frac{1}{r-1}}:\frac{C}{\left| x\right| ^{n}}\int_{\left| y\right| \leq \frac{\left| x\right| }{2}}g>\frac{1 }{\left| x\right| ^{nr}}\right\} \subset \left\{ \left| x\right| \leq a^{\frac{1}{r-1}}:\left| x\right| ^{n(r-1)}>C\left( \int_{\left| y\right| \leq a^{\frac{1}{r-1}}}g\right) ^{-1} \right\}.$$ Taking into account the last inclusion and since $\left( \int_{\left| y\right| \leq a^{\frac{1}{r-1}}}g\right) ^{-1}=a^n$, the first term on the right-hand side of $\left( \ref{serie}\right) $ is bounded by $$uv(\{ \left| x\right| ^{r-1}>Ca \} ) = uv(\{ |x| >ca^{r'-1}\} ).$$ Using again Lemma \[local\], the last term can be estimated by $$\int_{|x|>C\,a^{r'-1}} uv\,dx \le C\, \sum_{k=1}^{\infty }\frac{ 1 }{(2^ka^{r'-1})^{nr}} \int_{ c2^{k-1}a^{r'-1} \le |x|< c2^{k}a^{r'-1}} u(x)\,dx \le$$ $$\le C\,\sum_{k=1}^{\infty }\frac{ 1 }{2^{k(r-1)n}}\frac{1}{a^n} \frac{1}{ (c2^ka^{r'-1})^n }\int_{|x|\le c2^ka^{r'-1}} u(x)\,dx$$ $$=C \sum_{k=1}^{\infty }\frac{ 1 }{2^{k(r-1)n}} \int_{|y|\leq a^{r'-1}} g(y)\,dy \frac{1}{ (c2^ka^{r'-1})^n }\int_{|x|\le c2^ka^{r'-1}} u(x)\,dx,$$ and this is bounded by $$\le C\sum_{k=1}^{\infty }\frac{ 1 }{2^{k(r-1)n}} \int_{|y|\leq a^{r'-1}} g(y)Mu(y)\,dy \le C\, \int g\,Mu.$$ To finish, we must estimate the series in $\left( \ref{serie}\right)$.
It is clear that the series is bounded by $$\sum_{k=0}^{\infty }uv\left( \left\{ x: 2^{k}a^{r'-1}<| x| \leq 2^{k+1}a^{r'-1} \right\} \right) \le C\, \sum_{k=0}^{\infty }\frac{ 1 }{(2^ka^{r'-1})^{nr}} \int_{ 2^{k-1}a^{r'-1} \le |x|< 2^{k}a^{r'-1}} u\,dx,$$ and arguing as before we conclude the proof of $\left(\ref{dif}\right) $. We observe that the proof only uses the following conditions on a sublinear operator $T$: a) $T$ is of weak type $(1,1)$ with respect to the pair of weights $(u,Mu)$, and b) $T$ is a convolution type operator whose kernel satisfies the standard size condition $$|K(x)|\le \frac{c}{|x|^n}.$$ In particular, if $u \in A_1$, this observation applies to the usual Calder[ó]{}n-Zygmund singular integral operators and, moreover, to the strongly singular integral operators (see [@Ch] and [@F]). References {#references .unnumbered} ========== [00]{} K. Andersen and B. Muckenhoupt, [*Weighted weak type Hardy inequalities with applications to Hilbert transforms and maximal functions*]{}, Studia Math. 72 (1982), no. 1, 9-26. S. Chanillo, [*Weighted norm inequalities for strongly singular convolution operators*]{}, Trans. Amer. Math. Soc. 281 (1984). R.R. Coifman, [*Distribution function inequalities for singular integrals*]{}, Proc. Nat. Acad. Sci. U.S.A. **69** (1972), 2838–2839. D. Cruz-Uribe, J.M. Martell and C. Pérez, [*Extrapolation results for $A_\infty$ weights and applications*]{}, J. Funct. Anal. 213 (2004), 412–439. D. Cruz-Uribe, J.M. Martell and C. Pérez, [*Weighted weak-type inequalities and a conjecture of Sawyer*]{}, Int. Math. Res. Not. [**30**]{} (2005), 1849-1871. D. Cruz-Uribe, J.M. Martell and C. Pérez, [*Extensions of Rubio de Francia’s extrapolation theorem*]{}, Collect. Math. **57** (2006), 195-231. D. Cruz-Uribe, J.M. Martell and C. Pérez, [*Weights, Extrapolation and the Theory of Rubio de Francia*]{}, Birkhäuser, Basel, 2011. G.P. Curbera, J. Garc[í]{}a-Cuerva, J.M. Martell and C.
Pérez, [*Extrapolation with Weights, Rearrangement Invariant Function Spaces, Modular inequalities and applications to Singular Integrals*]{}, Adv. Math. **203** (2006), 256-318. C. Fefferman, [*Inequalities for strongly singular convolution operators*]{}, Acta Math. 124 (1970), 9-36. J. García-Cuerva and J.L. Rubio de Francia, [*Weighted Norm Inequalities and Related Topics*]{}, North-Holland Math. Studies 116, North-Holland, Amsterdam, 1985. L. Grafakos, [*Modern Fourier Analysis*]{}, Graduate Texts in Mathematics 250, Springer-Verlag, Third Edition, 2014. L. Grafakos and R. Torres, [*Multilinear Calderón-Zygmund theory*]{}, Adv. Math. 165 (2002), 124–164. L. Grafakos and R. Torres, [*Maximal operator and weighted norm inequalities for multilinear singular integrals*]{}, Indiana Univ. Math. J. 51 (2002), no. 5, 1261–1276. A.K. Lerner, S. Ombrosi, C. Pérez, R.H. Torres and R. Trujillo-González, [*New maximal functions and multiple weights for the multilinear Calderón-Zygmund theory*]{}, Adv. Math. **220** (2009), 1222-1264. F.J. Mart[í]{}n-Reyes, P. Ortega Salvador and M.D. Sarri[ó]{}n Gavil[á]{}n, [*Boundedness of operators of Hardy type in $\Lambda^{p,q}$ spaces and weighted mixed inequalities for singular integral operators*]{}, Proc. Roy. Soc. Edinburgh Sect. A 127 (1997), no. 1, 157–170. B. Muckenhoupt and R. Wheeden, [*Some weighted weak-type inequalities for the Hardy-Littlewood maximal function and the Hilbert transform*]{}, Indiana Univ. Math. J. **26** (1977), 801–816. S. Ombrosi, C. Pérez and J. Recchi, [*Quantitative weighted mixed weak-type inequalities for classical operators*]{}, Indiana Univ. Math. J. [**65**]{} (2016), 615–640. C. Pérez and R. Wheeden, [*Uncertainty principle estimates for vector fields*]{}, J. Funct. Anal. [**181**]{} (2001), 146–188. E.T. Sawyer, [*A weighted weak type inequality for the maximal function*]{}, Proc. Amer. Math. Soc. 93 (1985), 610–614.
--- abstract: 'The problem of Schrödinger propagation of a discontinuous wavefunction $-$ diffraction in time $-$ is studied in a new light. It is shown that the evolution map in phase space induces a set of affine transformations on discontinuous wavepackets, generating expansions similar to those of wavelet analysis. Such transformations are identified as the cause of the infinitesimal details in diffraction patterns. A simple case of an evolution map, such as $SL(2)$ in a two-dimensional phase space, is shown to produce an infinite set of space-time trajectories of constant probability. The trajectories emerge from a breaking point of the initial wave.' address: 'Institut für Quantenphysik, Universität Ulm, Albert-Einstein-Allee 11, 89081 Ulm, Germany.' author: - 'E. Sadurní' title: Phase Space Evolution and Discontinuous Schrödinger Waves --- Introduction ============ What are the consequences of breaking a Schrödinger wave? In a seminal paper [@mosh], Moshinsky studied the evolution problem of a quantum-mechanical wave blocked by a shutter and identified his results with Fresnel diffraction. Since then, the so-called ’diffraction in time’ has been widely studied [@zeilinger] and experimentally pursued [@nussensveig]. Needless to say, the spatial side of the analogy (in the paraxial approximation [@hecht]) has a long tradition of its own, mostly in the context of electromagnetic theory [@hannay; @nye]. It is rarely recognized, however, that the discontinuities of the initial condition are the true origin of Schrödinger diffraction, as well as of the peculiar intensity patterns accompanying the effect. The present paper delves into the subject by revealing its mathematical structure through the use of symmetries and self-similar relations obeyed by the wave functions. It is shown that the aforementioned patterns in space and time can be explained in a very general framework dealing with the evolution of discontinuous initial conditions.
The role of symmetry in these quantum-mechanical problems will be crucial, as its influence in the evolution of the system will give rise to non-classical trajectories of constant probability. Although diffractive effects are described by old and simple wave theories, the underlying explanation of the complexity in the emerging patterns is a subject of current discussion [@muga], and the resulting intricacies are the center of attention of other studies in connection with fractality [@berry]. It is worth mentioning that the study of discontinuities in quantum-mechanical wave functions is also motivated by current physical applications, ranging from molecular Talbot interferometers [@bill] to matter-wave optics based on cold atoms [@phillips; @turlapov]. Structure of the paper: In the following section, we study the old problem of diffraction by edges, in order to show all the relevant features of the patterns by means of elementary techniques, including self-similarity and caustics. In section 3 we present a generalization of the method and illustrate it with a few examples in section 4. These include the diffraction of a square packet in a parametric harmonic oscillator, the evolution of discontinuities under non-linear canonical evolution and their relation to wavelet expansions. As a final example, the evolution of a discontinuous packet under the Gross-Pitaevskii equation is shown numerically. In section 5 we give a brief conclusion. Diffraction by edges ==================== It is usually considered that, in the paraxial approximation, the problem of diffraction by edges in the $x,z$ plane can be described by a plane wave in $z>0$ propagating parallel to the $z$ axis, a discontinuous opaque screen placed at $z=0$ parallel to the $x$ axis and the corresponding solution at $z>0$, subject to the boundary conditions mentioned before.
The solution at $z>0$ can be found by propagating the wave at $z=0$, which is taken as the original plane wave but forced to vanish at the blocking screen: a discontinuous initial condition. In the following we solve the corresponding Schrödinger propagation problem in natural units ($\hbar=1=m$) using the time variable $t$ instead of $z$. We have $$-\frac{1}{2}\frac{\partial^{2}\psi(x,t)}{\partial x^{2}} = i\,\frac{\partial \psi(x,t)}{\partial t}\,, \qquad \psi(x,0) = \psi_0(x), \label{0.1}$$ where $\psi_0$ has, in principle, an arbitrary number of discontinuities. See the single-slit example in figure \[cero\]. ![\[cero\] Probability density as a function of space and time. Two regions can be distinguished: 1) the near zone or short-time regime, where infinite oscillations and a main lobe can be identified, and 2) the far zone or long-time regime, where the wave function spreads](beamslit) Let us consider a variant of the problem treated by Moshinsky [@mosh], revisited by Nussenzveig [@nussensveig]. Consider the initial condition $$\psi(x,t=0) = \Theta(1/2-|x|) \label{uno}$$ corresponding to a square packet of unit width, where $\Theta$ denotes the Heaviside step function. The wave function at time $t$ can be obtained via the free propagator [@grosche], which is a Gaussian of the form $$K(x,x';t,0) = \frac{1}{\sqrt{2\pi i t}}\,\exp\left( \frac{i(x-x')^{2}}{2t} \right). \label{prop}$$ The wavefunction reads $$\psi(x,t) = \frac{1}{2}\,{\rm erf}\!\left(\frac{x+1/2}{\sqrt{2it}}\right) - \frac{1}{2}\,{\rm erf}\!\left(\frac{x-1/2}{\sqrt{2it}}\right). \label{tres}$$ In this form we can obtain the intensity pattern in space and time, as shown in figure \[cero\]. It is worth mentioning that the properties of such a pattern can be inferred from the Cornu spiral, as it was pointed out long ago [@mosh]. However, the applicability of this geometrical picture is restricted to the appearance of Fresnel integrals in the solution. More general initial conditions demand the use of more powerful methods. Self-similarity in single-slit diffraction using special functions ------------------------------------------------------------------ The origin of a self-similar pattern near the edges of the initial condition can be described alternatively by a [*replication formula*]{}, a relation describing the wave function in terms of itself.
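Before turning to the replication formula, the closed form (\[tres\]) can be sanity-checked against direct quadrature of the propagator (\[prop\]). The sketch below is illustrative only (grid sizes and sample points are arbitrary choices, not from the paper); it assumes the standard representation of the Fresnel integrals through error functions of complex argument.

```python
import numpy as np
from scipy.special import erf  # scipy's erf accepts complex arguments

def psi_closed(x, t):
    """Free evolution of the unit square packet as a difference of
    complex-argument error functions (equivalent to Fresnel integrals)."""
    s = np.sqrt(2j * t)                       # = e^{i pi/4} sqrt(2 t)
    return 0.5 * (erf((x + 0.5) / s) - erf((x - 0.5) / s))

def psi_quadrature(x, t, m=200001):
    """Direct trapezoidal quadrature of psi = int K(x,x';t,0) psi_0(x') dx'."""
    xp = np.linspace(-0.5, 0.5, m)
    K = np.exp(1j * (x - xp) ** 2 / (2 * t)) / np.sqrt(2j * np.pi * t)
    h = xp[1] - xp[0]
    return (K.sum() - 0.5 * (K[0] + K[-1])) * h

err = max(abs(psi_closed(x, t) - psi_quadrature(x, t))
          for x in (0.0, 0.3, 1.2) for t in (0.1, 0.5))
```

The agreement holds both inside the packet and beyond the geometric shadow, which is where the diffraction-in-time oscillations live.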
Using a trigonometric expansion of the initial condition, $$\Theta(1/2-|x|) = \frac{1}{2} + \frac{2}{\pi}\sum_{n=0}^{\infty} \frac{1}{2n+1}\,\sin\big( (2n+1)\,\pi\,(x+1/2) \big), \label{4}$$ and applying the Gaussian propagator to this expansion, we can prove in a very simple way the identity $$\psi(x,t) = \sum_{n=0}^{\infty} C_n\, e^{i \xi_n(x,t)}\, M\big(\eta_n(x,t)\big) + \big(x \rightarrow -x\big), \label{aa}$$ where $M$ denotes a Moshinsky-type (Fresnel) function and we have used the definitions $$k_n=(2n+1)\,\pi\,, \qquad C_n = \frac{1}{i\, k_n}\,,$$ $$\xi_n(x,t) = -k_n^{2}\, t/2 + k_n\,(1/2 + x)\,,$$ $$\eta_n(x,t) = e^{-i3\pi/2}\, \frac{1/2 - x - k_n t}{\sqrt{2t}}\,. \label{bb}$$ Here we can infer the shape of the pattern near the edges by using a short-time approximation of the functions on the r.h.s. of (\[aa\]). We simply take the individual packets to be the initial conditions whose argument is displaced according to $\eta_n(x,t)$, that is $$\psi(x,t) = \sum_{n=0}^{\infty} C_n\, e^{i \xi_n(x,t)}\,\Theta\big(1/2-|x-k_n t|\big) + \big(x \rightarrow -x\big) + \frac{1}{2}\,\Theta(1/2-|x|) + O(t^2), \label{13}$$ where $O(t^2)$ is a small correction for short times. It is interesting to note that in this expansion, the emergence of the pattern in space-time can be seen as the superposition of many individual square packets moving along the trajectories $\eta_n(\pm x,t)$, with plane wave factors propagating along $\xi_n(\pm x,t)$. The alternate signs indicate the contribution from each discontinuity or edge, while the index $n$ parameterizes the velocity. This velocity also alternates sign according to the source of the rays, propagating to the left from the right edge and vice versa. The individual functions preserving their integrity on the r.h.s. of (\[13\]) can be regarded as building blocks, while the coefficients $C_n$ weight the contribution of each block. For the resulting probability density plots, see figures \[fig:once\], \[fig:doce\], \[fig:trece\]. Furthermore, the building blocks can be replaced by any other localized shape such as that of a Gaussian or a triangular packet, and the resulting pattern generated by them will have the same features. See, in particular, figure \[fig:trece\].
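The mode-by-mode picture can be probed numerically. The sketch below (parameter values are arbitrary illustrative choices) evolves each mode of the expansion (\[4\]) by its free phase $e^{-ik_n^2 t/2}$ and checks that the truncated series reproduces the square packet at $t=0$: unity in the interior, zero outside, within one period of the expansion.

```python
import numpy as np

def square_series(x, t=0.0, nmax=20000):
    """Truncated expansion (4): each mode sin(k_n (x+1/2)), k_n = (2n+1) pi,
    acquires the free-evolution phase exp(-i k_n^2 t / 2)."""
    n = np.arange(nmax)
    kn = (2 * n + 1) * np.pi
    modes = np.sin(kn * (x + 0.5)) / (2 * n + 1)
    phases = np.exp(-0.5j * kn ** 2 * t)
    return 0.5 + (2.0 / np.pi) * np.sum(phases * modes)

inside = square_series(0.0)    # interior of the packet, expect ~ 1
outside = square_series(1.0)   # outside the packet (same period), expect ~ 0
```

For $t>0$ the same sum generates the oscillatory near-edge pattern; truncating it at a finite $n_{max}$ limits the number of visible oscillations, as discussed below.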
We distinguish the following properties:

- When the limit of the sums is truncated (say at $n=n_{max}$), the pattern has a finite number of oscillations near the discontinuities.

- The second term, $n_{max}=1$, already shows a main peak: the original pattern to be replicated. Increasing $n$ implies finer detail in the pattern near the edges.

- Rays are described by $\xi_n(\pm x,t)= const$. The envelopes of such a system of rays are caustics (Legendre transform) and can be obtained by eliminating $n$ from the condition $\partial_n \xi_n(x,t)=0$. It turns out that such envelopes are parabolas centered at the edges. See figure \[fig:diez\].

- Classical methods from geometric optics produce a non-classical result. In this respect we should point out that the trajectories of (approximately) constant probability obtained from the caustics should not be interpreted as classical trajectories. In free evolution, the latter are simply rays, while the former are parabolas.

- The short-time approximation used in (\[13\]) can always be improved. For example, in the case of initial space-dependent phases, a velocity field given by the gradient of the phase produces additional motion of the initial packets or building blocks. Using the continuity equation for the initial probability density, one can infer the specific form of the motion and obtain the correct approximation.

![\[fig:once\] Replication process using square packets as building blocks. The red circle indicates the pattern to be replicated.](edgepanel1.jpg "fig:")

![\[fig:doce\] Replication process using gaussian packets as building blocks. The red circle indicates the pattern to be replicated.](gausspanel1.jpg "fig:")

![\[fig:trece\] A comparison between the pattern obtained through the replication formula and the numerical calculation using Moshinsky functions. The expansions contain 30 terms.](edge30.jpg "fig:")

![The emergence of caustics as the culprits for the constant-intensity lines in the pattern. As indicated in the text, the caustics can be computed explicitly in the form of parabolas.[]{data-label="fig:diez"}](caustic1.jpg "fig:")

Generalizations
===============

Many of the results above can be produced in more general settings. Now that we have revealed the origin of the patterns, we may consider initial conditions with an arbitrary number of discontinuities, since such a $\psi_0$ can again be written in terms of individual square pulses, $$\psi_0(x) = \sum_{j} \Theta_j(x)\,\psi_0(x), \label{1}$$ where the $\Theta_j$ denote the step functions of the intervals supporting $\psi_0$. It is also possible to treat similar problems in many dimensions. Take, for example, the free evolution of a packet in three dimensions written as the product of pulses in each coordinate $x,y,z$. Even more general initial conditions with compact support can be considered, as long as we use the appropriate trigonometric expansions.
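The higher-dimensional remark can be verified directly: the free kinetic phase $e^{-i(k_x^2+k_y^2)t/2}$ separates in the coordinates, so the evolution of a product of pulses equals the product of the one-dimensional evolutions. A minimal sketch with arbitrary grid parameters:

```python
import numpy as np

N, L, t = 256, 20.0, 0.15
x = (np.arange(N) - N // 2) * (L / N)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)

pulse = (np.abs(x) <= 0.5).astype(complex)       # 1d square packet

def evolve1d(psi):
    """Spectral free evolution over time t."""
    return np.fft.ifft(np.exp(-0.5j * k ** 2 * t) * np.fft.fft(psi))

# 2d evolution of the product packet psi0(x) psi0(y)
psi2d0 = np.outer(pulse, pulse)
kx, ky = np.meshgrid(k, k, indexing="ij")
psi2d = np.fft.ifft2(np.exp(-0.5j * (kx ** 2 + ky ** 2) * t)
                     * np.fft.fft2(psi2d0))

# product of the two 1d evolutions, which should agree to rounding error
product = np.outer(evolve1d(pulse), evolve1d(pulse))
err = np.max(np.abs(psi2d - product))
```

The same factorization carries over verbatim to three dimensions, so the edge patterns of the one-dimensional problem appear along each coordinate independently.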
In this section we discuss not only the generalizations of the method, but also the role of symmetry in the evolution of discontinuities. Most of our treatment rests on the fact that a wave function can be written in terms of itself, and on the possibility of representing its evolution as the superposition of wave packets which move and leave their mark in space-time, but preserve their integrity in the process. Replication formula from canonical evolution -------------------------------------------- A discontinuous wave can be written in terms of itself using step functions. We resort to a trivial but remarkable identity for an individual pulse: $$\psi_0(x)=\Theta(1/2-|x|)\,\psi_0(x).$$ The evolution can be computed straightforwardly: $$\psi(x,t) = U_t\, \psi_0(x) = U_t\, \Theta(1/2-|x|)\, U_{-t}\, U_t\, \psi_0(x) = \Theta\big(1/2-|\hat x (t)|\big)\, \psi(x,t), \label{cc}$$ where we have used the reversed Heisenberg picture to denote our evolved position operator $\hat x(t)$. As before, we use a trigonometric expansion of the step function in the form $$\Theta\big(1/2-|\hat x(t)|\big) = {\sum_{n,\pm}}'\, C_n\, e^{\pm ik_n(\hat x(t)+1/2)} \equiv {\sum_{n,\pm}}'\, C_n\, U_{n}^{\pm}, \label{expansion}$$ with a convenient abbreviation of the sum symbol given by $${\sum_{n,\pm}}' \equiv \sum_{n=0}^{\infty}\sum_{\pm}\; +\; \frac{1}{2}\,,$$ where the additional $1/2$ stands for the constant term of the expansion. Finally, the wave can be written in terms of itself by noting that each exponential in the expansion is itself a unitary operator to be applied to the wave on its right. Such exponentials depend on the evolved position operator containing both $x$ and $p$ at $t=0$. In this way, many coordinate transformations can be induced on $\psi$: $$\psi(x,t) = {\sum_{n,\pm}}'\, C_n\, U_{n}^{\pm}\, \psi(x,t) = {\sum_{n,\pm}}'\, C_n\, e^{i \xi_n(x,t)}\, \psi\big(\eta_n(x,t),t\big).$$ There are many choices of $\hat x(t)$ for which $U_{n}^{\pm}$ induces coordinate transformations. In the most general form (in a two-dimensional phase space), take $\hat x(t)$ as a function of $x,p$ of the type $$\hat x(t) = f_t(x) + \{g_t(x),\,p\,\}. \label{evolve}$$ This operator induces a change of variables in $x$ and an affine transformation for each $n$ in the trigonometric sums above.
To fulfill the initial data problem, we set $f_0(x)-ig_0'(x)=x$ and $g_0(x)=1/2$. Obviously, the function $\eta_n(x,t)$ producing the change of variables is to be determined for our evolution of choice (\[evolve\]). Rotations, shearings and dilations can be obtained in this form. What we have achieved so far can be summarized as follows: - The elements $G_t$ of an evolution group act on Hilbert space as $G_t : \hcal \rightarrow \hcal$. - For $\psi \in \hcal$ with discontinuities in the form of pulses, one has $\psi = \Theta \psi$. The operator $\Theta$ is a projector. - In the reversed Heisenberg picture one has $\Theta^{\small{(H)}} \equiv G_t \Theta G_{-t} $, which admits an expansion in terms of unitary operators $\Theta^{\small{(H)}}= \sum_n C_n U_n$. - For each $U_n$, a new transformation is produced: $x^{\small{(H)}} \equiv G_t x G_{-t} = f_t(x) + \{g_t(x),p\} $. The transformation is canonical, with $p^{\small{(H)}} = G_t p G_{-t}$. - The overall result becomes $\psi(x,t)= \sum_n C_n \psi(\eta_n(x,t),t)$. In the following we present some examples. Examples ======== Diffraction in a parametric harmonic oscillator ----------------------------------------------- Take as elements of the evolution group the linear symplectic matrices with blocks $A,B,C,D$ acting on the phase space $x,p$ [@quesne]. In the simplest two-dimensional application, these blocks can be considered scalars with the property $A(t)D(t)-B(t)C(t) =1$ and such that $$\hat x(t) = A(t)\, x + B(t)\, p\,, \qquad \hat p(t) = C(t)\, x + D(t)\, p\,.$$
After substitution and the use of the Baker-Campbell-Hausdorff formula, the replication formula becomes $$\psi(x,t) = {\sum_{n,\pm}}'\, C_n\, \exp\!\big(-ik_n^{2} A(t)B(t)/2 \pm ik_n(A(t)\,x + 1/2)\big)\; \psi\big(x \mp k_n B(t),\,t\big).$$ For short times, we have an expansion in terms of building blocks or a [*mother wave*]{}, given by the initial condition: $$\psi(x,t) = {\sum_{n,\pm}}'\, C_n\, \exp\!\big(-ik_n^{2} A(t)B(t)/2 \pm ik_n(A(t)\,x + 1/2)\big)\; \psi_0\big(x \mp k_n B(t)\big) + O(t^2). \label{osc}$$ Furthermore, the short-time approximation can be improved by using a relation between harmonic and free evolution wavefunctions, coming from the fact that both propagators are Gaussians. In the case of an initial condition given by the square packet, one obtains an interesting intensity pattern in agreement with the numerical evaluation. The trajectories are obtained from the phase factors in (\[osc\]), parameterized by $A(t),B(t)$. For a time-independent harmonic oscillator, such functions are taken as $A(t)= \cos (\omega t)$, $B(t) = - \sin(\omega t)/\omega$. See figure \[fig:osc\] for a comparison.
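The $SL(2)$ blocks can be cross-checked against a direct integration of the classical equations of motion $\dot x = p$, $\dot p = -\omega^2 x$. The sketch below (sample values are arbitrary) verifies the forward flow $x(t)=\cos(\omega t)\,x_0+\sin(\omega t)\,p_0/\omega$ and the determinant condition $AD-BC=1$; in the reversed Heisenberg picture used in the text the sign of the off-diagonal block is flipped.

```python
import math

def rk4_oscillator(x0, p0, omega, t, steps=20000):
    """Integrate xdot = p, pdot = -omega^2 x with a classical RK4 scheme."""
    h = t / steps
    x, p = x0, p0
    f = lambda x, p: (p, -omega ** 2 * x)
    for _ in range(steps):
        k1x, k1p = f(x, p)
        k2x, k2p = f(x + 0.5 * h * k1x, p + 0.5 * h * k1p)
        k3x, k3p = f(x + 0.5 * h * k2x, p + 0.5 * h * k2p)
        k4x, k4p = f(x + h * k3x, p + h * k3p)
        x += h * (k1x + 2 * k2x + 2 * k3x + k4x) / 6
        p += h * (k1p + 2 * k2p + 2 * k3p + k4p) / 6
    return x, p

omega, t, x0, p0 = 2.0, 1.3, 0.7, -0.4
A, B = math.cos(omega * t), math.sin(omega * t) / omega
C, D = -omega * math.sin(omega * t), math.cos(omega * t)
x_num, p_num = rk4_oscillator(x0, p0, omega, t)
```

Since the blocks are entire functions of $t$, the same check works for a parametric oscillator once $A,B,C,D$ are obtained from the corresponding classical fundamental solutions.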
![\[fig:osc\] A comparison of diffraction patterns under a harmonic oscillator potential: numerics, theory (with improved individual packets) and trajectories.](ribbonexact "fig:"){height="5.0cm" width="5.0cm"}

The emergence of wavelets
-------------------------

Another
example of interest emerges from the application of squeezing operations on wave packets. This leads to expansions similar to those of wavelet analysis [@daubechies]. Consider the transformation $$\hat x(t) = \hat x + \frac{t}{2}\,\{\hat x,\hat p\}\,, \qquad \hat p(t) = \frac{1}{t}\,\ln\big( t \hat p + 1 \big)\,,$$ which is an unusual (but valid) example of a canonical, non-linear transformation generated by a hamiltonian which can be written down explicitly: $$H= \Big\{\, \hat x \,,\; \big(1-e^{-t \hat p}\big)/t^{2} - \hat p/t \,\Big\}.$$ The replication formula now reads $$\psi(x,t) = {\sum_{n,\pm}}'\, C_n\, \exp\!\big(-i(e^{k_n t}-1)\,x/t \big)\, e^{k_n t/2}\, \psi\big( e^{k_n t /2}\, x,\,t \big)\,,$$ which implies a true scaling of the wave packets by powers of the parameters. These are visible in the exponentials accompanying the argument $x$. Each power can be understood as a new scale factor ’zooming’ into the pattern (this can be regarded as a microscope of increasing power). The function can be written for short times as $$\psi(x,t) = {\sum_{n,\pm}}'\, C_n\, \exp\!\big(-i(e^{k_n t}-1)\,x/t \big)\, e^{k_n t/2}\, \psi_0\big( e^{k_n t /2}\, x \big)\,.$$ This is nothing other than a wavelet expansion, where the [*mother wavelet*]{} is given by the initial condition $\psi_0(x)$. Diffraction and the Gross-Pitaevskii equation --------------------------------------------- Here, as a final example, we present the [*numerical*]{} solution to the evolution problem of a square packet governed by the Gross-Pitaevskii equation, $$-\frac{1}{2}\frac{\partial^{2}\psi(x,t)}{\partial x^{2}} + g\,|\psi(x,t)|^{2}\,\psi(x,t) = i\,\frac{\partial \psi(x,t)}{\partial t}\,, \qquad \psi(x,0) = \psi_0(x). \label{gp}$$ We have chosen distributions with slightly smooth edges, showing thus that the effect in question is robust. It is interesting to note that many of the features present in the linear Schrödinger equation appear also in the non-linear case [@gp]. This occurs despite the fact that strong values of the coupling parameter, $g=50,100$, have been used in the numerical calculations. See figure \[fig:gp\].
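The numerical scheme behind the calculation is not spelled out in the text; a standard choice for (\[gp\]) is the split-step Fourier method, sketched below with arbitrary grid parameters. Each substep is a pointwise multiplication by a unimodular phase (in $k$-space for the kinetic part, in $x$-space for the nonlinearity), so the discrete norm is conserved to rounding accuracy.

```python
import numpy as np

# split-step Fourier evolution of  i psi_t = -psi_xx/2 + g |psi|^2 psi
N, L, g, t_final, steps = 2048, 40.0, 50.0, 0.5, 2000
x = (np.arange(N) - N // 2) * (L / N)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
dt = t_final / steps

psi = (np.abs(x) <= 0.5).astype(complex)           # square packet
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * (L / N))  # normalize
norm0 = np.sum(np.abs(psi) ** 2) * (L / N)

kinetic = np.exp(-0.5j * k ** 2 * dt)              # kinetic step in k-space
for _ in range(steps):
    psi = np.fft.ifft(kinetic * np.fft.fft(psi))        # free propagation
    psi *= np.exp(-1j * g * np.abs(psi) ** 2 * dt)      # nonlinear phase

norm_drift = abs(np.sum(np.abs(psi) ** 2) * (L / N) - norm0)
```

Recording $|\psi|^2$ at each step and stacking the rows reproduces density plots of the kind shown in figure \[fig:gp\].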
![ \[fig:gp\] Probability density for increasing values of $g=0,50,100$. Courtesy of S. Arnold [@gp].](g0 "fig:"){height="5.0cm" width="5.0cm"} ![ \[fig:gp\] Probability density for increasing values of $g=0,50,100$. Courtesy of S. Arnold [@gp].](g+50 "fig:"){height="5.0cm" width="5.0cm"} ![ \[fig:gp\] Probability density for increasing values of $g=0,50,100$. Courtesy of S. Arnold [@gp].](g+100 "fig:"){height="5.0cm" width="5.0cm"}

A brief explanation of the effect in question can be given in terms of the Gross-Pitaevskii equation plus the previously obtained replication formula for free evolution. For $g=0$ we have $$\psi(x,t) = \sum_{n=0}^{\infty} C_n e^{ik_n^2 t/2} e^{ik_n(x \mp 1/2)}\, \psi_0(x \mp k_n t) + o(t^2). \label{replication}$$ The non-linear potential can be incorporated in the evolution by means of the interaction picture. For short times and non-vanishing coupling, the corresponding operator can be replaced by a simple phase factor in the propagation problem. This is indicated in the following expression.
$$\psi_g(x,t) = \int_{-\infty}^{\infty} dx'\, K(x,x';t,0)\, e^{-igt |\psi_0(x')|^2}\, \psi_0(x') + o(t^2), \label{grosspitaevskii}$$ which is in the form of Schrödinger propagation with an [*effective*]{} initial condition $e^{-igt |\psi_0(x)|^2} \psi_0(x)$. These initial space-dependent phases produce additional focusing or defocusing in the previously obtained patterns (linear case). In the case of positive coupling $g$ we expect defocusing, preceded by a self-similar regime whose duration is shorter than in the free case.

Concluding remarks
==================

In this contribution, devoted to the role of symmetries and self-similarity in diffraction patterns, we have shown that the Schrödinger evolution of discontinuities can be described by a wave function in terms of itself. This gave rise to a replicating pattern near the edges of an initial distribution. The notion of replication can be extended to other parabolic equations, including non-linear terms. A formal theory of the present treatment remains to be developed: envelope curves live in manifolds, while the rays in the exponents live in tangent spaces. It has been stressed that the symmetry groups producing the evolution in phase space [*induce*]{} transformations of wavepackets through the exponentiation of the evolved coordinate operators. It is desirable to find all these patterns in experimental setups, either using classical light or the more ambitious manipulation of Bose-Einstein condensates. The current technology suggests [@jaouadi] that the needed edges in the initial distributions can be produced under special configurations of the confining electromagnetic traps.

The author is indebted to Stefan Arnold, William Case and Manuel Goncalves for fruitful discussions. Financial support from the DLR project QUANTUS is acknowledged.

References {#references .unnumbered}
==========

[99]{} Moshinsky M 1952 [*Phys Rev *]{} [**88**]{} 625. Brukner C and Zeilinger A 1997 [*Phys Rev A *]{} [**56**]{} 5. Goldemberg J and Nussenzveig H M 1957
*Rev. Mex. Fis.* [**VI.3**]{} 117. Hecht E 2002 [*Optics *]{} Fourth edition, Addison-Wesley. Hannay J H 1995 [*Proc. R. Soc. Lond. A *]{} [**450**]{} 51-65. Nye J F and Liang W 1998 *Near-Field Diffraction by Two Slits in a Black Screen*, [*Proceedings: Mathematical, Physical and Engineering Sciences *]{} [**454**]{} No. 1974, 1635-1658. Torrontegui E 2011 [*Phys. Rev. A *]{} [**83**]{} 043608. Berry M V 1996 [*J. Phys. A *]{} [**29**]{} 6617-6629; Berry M V and Bodenschatz E 1999 [*J. Mod. Optics *]{} [**46**]{} 349-365; Berry M V and Klein S 1996 [*J. Mod. Optics *]{} [**43**]{} 2139-2164. Case W 2009 [*Optics Express *]{} [**17**]{} 23 20966. Hagley E W 1999 [*Science *]{} [**286**]{} 1706. Turlapov A 2005 [*Phys Rev A *]{} [**71**]{} 043612. Grosche C and Steiner F 1998 [*Handbook of Feynman Path Integrals *]{} Springer. Moshinsky M and Quesne C 1971 [*J. Math. Phys. *]{} [**12**]{} 1772-1780. Daubechies I 1992 [*Ten lectures on wavelets *]{} CBMS-NSF Regional Conference Series in Applied Mathematics 61. Arnold S 2011. The calculation has been done using a numerical method based on plane wave expansions with a cut-off frequency. Private communication. Jaouadi A 2010 [*Phys. Rev. *]{} [**82**]{} 023613.
--- abstract: 'In this paper, we construct thin-shell wormholes from a charged black string through the cut and paste procedure and investigate their stability. We assume a modified generalized Chaplygin gas as a dark energy fluid (exotic matter) present in the thin layer of the matter shell. The stability of these constructed thin-shell wormholes is investigated in the scenario of linear perturbations. We conclude that both stable and unstable static configurations are possible for cylindrical thin-shell wormholes.' author: - | M. Sharif$^1$ [^1] and M. Azam$^{1,2}$ [^2]\ $^1$ Department of Mathematics, University of the Punjab,\ Quaid-e-Azam Campus, Lahore-54590, Pakistan.\ $^2$ Division of Science and Technology, University of Education,\ Township Campus, Lahore-54590, Pakistan. title: '**Stability Analysis of Thin-Shell Wormholes from Charged Black String**' --- [**Keywords:**]{} Israel junction conditions; Stability; Black strings.\ [**PACS:**]{} 04.20.Gz; 04.40.Nr; 04.70.Bw. Introduction ============ Wormholes are hypothetical objects with the peculiar property of containing exotic matter (which violates the null energy condition). The first wormhole model was the Einstein-Rosen bridge [@1], obtained as a part of the maximally extended Schwarzschild solution. The main problem with this wormhole is the existence of an event horizon, which prevents observers from moving freely from one universe to the other. Later, Morris and Thorne [@2] presented the first traversable Lorentzian wormhole as a solution of the Einstein field equations. The key feature of this wormhole is that it does not contain an event horizon, so an observer may move freely between both universes through a handle (tunnel) known as the wormhole throat. Traversable wormholes have some issues, such as their mechanical stability, the unavoidable amount of exotic matter present at the wormhole throat, etc.
The violation of energy conditions due to the presence of exotic matter in these configurations is a debatable issue in general relativity, and is the main hurdle to its observational evidence. To minimize the violation of energy conditions, Visser [@3; @4] used the cut and paste technique on a black hole to build a thin-shell wormhole. He used the Darmois-Israel formalism [@5; @6] to study the dynamical behavior of a thin-shell wormhole made of two identical geometries. Many authors have studied the stability of thin-shell wormholes against linear perturbations through a standard potential approach. Poisson and Visser [@6a] explored the stability of the Schwarzschild thin-shell wormhole. Eiroa and Romero [@7] generalized this analysis to the Reissner-Nordström thin-shell wormholes, while Lobo and Crawford [@8] included the cosmological constant in the same analysis. In the search for stable thin-shell wormhole configurations, wormhole solutions have also been studied in modified theories of gravity. For instance, Thibeault et al. [@9] found a stable thin-shell wormhole in Einstein-Maxwell theory with a Gauss-Bonnet term. Rahaman et al. [@10; @10a] explored thin-shell wormhole solutions in heterotic string theory and in the Randall-Sundrum scenario. Mazharimousavi et al. [@11; @11a] found viable thin-shell wormhole solutions in the Einstein-Hoffmann Born-Infeld theory and Einstein-Yang-Mills Dilaton gravity. In recent papers [@11b], we have investigated the stability of cylindrical and spherical geometries in Newtonian and post-Newtonian approximations, and also of spherically symmetric thin-shell wormholes. Thorne [@12] emphasized that models with cylindrical symmetry are the ideal ones. These objects have been widely used to study cosmic strings [@12a], which play a vital role in different physical phenomena such as gravitational lensing, galaxy formation, thin-shell wormholes, etc.
Some literature [@13]-[@16] indicates keen interest in the study of cylindrical thin-shell wormholes. In a sequence of papers [@17]-[@20], the stability of cylindrical thin-shell wormholes associated with local and global cosmic strings has been studied. It was found that the wormhole throat would expand or collapse according to the velocity sign, and that stable cylindrical thin-shell wormhole configurations were not possible. Recently, we have explored stable thin-shell wormhole configurations supported by the Chaplygin equation of state [@a]. In this paper, we construct thin-shell wormholes from a charged black string using the cut and paste procedure and investigate their stability through linear perturbations. We consider the Darmois-Israel formalism for the dynamical analysis of the system, with modified generalized Chaplygin gas (MGCG) on the matter shell. The paper is organized as follows. Section **2** deals with the general formalism for the construction of thin-shell wormholes. In section **3**, we discuss the linearized stability analysis and apply it to charged black string thin-shell wormholes. In the last section, we summarize our results. Thin-Shell Wormholes: General Formalism ======================================= The charged static cylindrically symmetric spacetime is given by [@21] $$\label{3} ds^2=-\Phi(r)dt^{2}+\Phi^{-1}(r)dr^{2}+h(r)(d\phi^{2}+\alpha^2{dz^2}),$$ where $$\Phi(r)=\left(\alpha^2r^2-\frac{4M}{\alpha{r}}+\frac{4Q^2}{\alpha^2r^2}\right),~h(r)=r^2,$$ with the following constraints on the coordinates $$-\infty<t<\infty,\quad 0< {r}<\infty, \quad -\infty<{z}<{\infty},\quad 0\leq{\phi}\leq{2\pi}.$$ Here, the parameters $M,~Q$ are the ADM mass and charge density, respectively, and $\alpha^2=-\frac{\Lambda}{3}>0$, where $\Lambda$ is the cosmological constant.
The inner and outer event horizons of the charged black string are given as $$\label{4} r_{\pm}=\frac{(4M)^\frac{1}{3}}{2\alpha}\left[\sqrt{s}\pm \sqrt{{2}\sqrt{s^2-Q^2\left(\frac{2}{M}\right)^\frac{4}{3}}-s}\right],$$ provided that the inequality $Q^2\leq\frac{3}{4}M^\frac{4}{3}$ holds, where $s$ is given by $$\begin{aligned} \label{5} s&=&\left(\frac{1}{2}+\frac{1}{2}\sqrt{1-\frac{64Q^6}{27M^4}} \right)^\frac{1}{3} +\left(\frac{1}{2}-\frac{1}{2}\sqrt{1-\frac{64Q^6}{27M^4}} \right)^\frac{1}{3}.\end{aligned}$$ For $Q^2>\frac{3}{4}M^\frac{4}{3}$, the given metric has no event horizon and represents a naked singularity. If $Q^2=\frac{3}{4}M^\frac{4}{3}$, the inner and outer horizons merge, which corresponds to an extremal black string. We follow the Darmois-Israel formulation [@5; @6] for the dynamical analysis of mathematically constructed thin-shell wormholes. For this purpose, we assume a throat radius $a$ greater than the event horizon $r_h$, to avoid singularities and horizons in the wormhole configuration. We take two identical copies $\mathcal{W}^{\pm}$ with $r\geq{a}$ of the cylindrical vacuum solution defined as $$\label{6} \mathcal{W}^{\pm}=\{x^{\mu}=(t,r,\phi,z)/r\geq{a}\}.$$ We join these geometries at the timelike hypersurface $\Sigma=\Sigma^\pm=\{r-a=0\}$ to get a geodesically complete manifold, i.e., $\mathcal{W}=\mathcal{W}^{+}\cup{\mathcal{W}^{-}}$, satisfying the radial flare-out condition $h'(a)=2a>0$ [@16]. The two regions are connected at the surface $\Sigma$ (surface of minimal area) with the throat radius $a$. The induced metric at the throat $\Sigma$ with coordinates $\eta^i=(\tau, \phi, z)$ is defined as $$\label{7} ds^2=-d\tau^2+a^2(\tau)(d\phi^2+\alpha^2dz^2).$$ We take the throat radius $a$ as a function of $\tau$ to understand the dynamics of the thin-shell wormhole.
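As a quick numerical check of Eqs.(\[4\]) and (\[5\]) (our own illustration, with arbitrarily chosen values of $M$, $Q$ and $\alpha$), the horizon radii can be computed and verified against the defining condition $\Phi(r_\pm)=0$, which is equivalent to the quartic $\alpha^4r^4-4M\alpha{r}+4Q^2=0$:

```python
import math

def horizons(M, Q, alpha):
    """Inner/outer horizon radii of the charged black string, Eqs.(4)-(5).
    Returns None in the naked-singularity regime Q^2 > (3/4) M^(4/3)."""
    if Q ** 2 > 0.75 * M ** (4.0 / 3.0):
        return None
    disc = math.sqrt(1.0 - 64.0 * Q ** 6 / (27.0 * M ** 4))
    s = (0.5 + 0.5 * disc) ** (1.0 / 3.0) + (0.5 - 0.5 * disc) ** (1.0 / 3.0)
    inner = 2.0 * math.sqrt(s ** 2 - Q ** 2 * (2.0 / M) ** (4.0 / 3.0)) - s
    pre = (4.0 * M) ** (1.0 / 3.0) / (2.0 * alpha)
    r_plus = pre * (math.sqrt(s) + math.sqrt(inner))
    r_minus = pre * (math.sqrt(s) - math.sqrt(inner))
    return r_minus, r_plus

def Phi(r, M, Q, alpha):
    # Metric function of Eq.(3); it must vanish at both horizons.
    return (alpha ** 2 * r ** 2 - 4.0 * M / (alpha * r)
            + 4.0 * Q ** 2 / (alpha ** 2 * r ** 2))

M, Q, alpha = 1.0, math.sqrt(0.5), 1.0   # satisfies Q^2 <= (3/4) M^(4/3)
r_minus, r_plus = horizons(M, Q, alpha)
```

For $Q=0$ the expressions collapse to $s=1$, $r_+=(4M)^{1/3}/\alpha$ and $r_-=0$, recovering the uncharged black string horizon.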
The presence of a thin layer of matter at the shell leads to a discontinuity in the extrinsic curvatures across the junction surface, $K^{+}_{ij}-K^{-}_{ij}=\kappa_{ij}$, where the extrinsic curvature $K^{\pm}_{ij}$ is defined on $\Sigma$ as $$\label{8} K^{\pm}_{ij}=-n^{\pm}_{\gamma}\left(\frac{{\partial}^2x^{\gamma}_{\pm}} {{\partial}{\eta}^i{\partial}{\eta}^j}+{\Gamma}^{\gamma}_{{\mu}{\nu}} \frac{{\partial}x^{\mu}_{\pm}}{{\partial}{\eta}^i} \frac{{\partial}x^{\nu}_{\pm}}{{\partial}{\eta}^j}\right),\quad(i, j=0,2,3).$$ The $4$-vector unit normals $n^{\pm}_{\gamma}$ to $\mathcal{W}^{\pm}$ are $$\label{9} n^{\pm}_{\gamma}=\pm\left|g^{\mu\nu}\frac{\partial{f}}{\partial{x^{\mu}}} \frac{\partial{f}}{\partial{x^{\nu}}}\right|^{-\frac{1}{2}}\frac{\partial{f}}{\partial{x^{\gamma}}} =\left(-\dot{a},\frac{\sqrt{\Phi(r)+\dot{a}^2}}{\Phi(r)},0,0\right),$$ where $f=r-a(\tau)$ defines the hypersurface $\Sigma$, satisfying the relation $n^{\gamma}n_{\gamma}=1$. Using Eqs.(\[3\]) and (\[8\]), the non-trivial components of the extrinsic curvature are $$\label{10} K^{\pm}_{\tau\tau}=\mp\frac{\Phi'(a)+2\ddot{a}}{2\sqrt{\Phi(a)+\dot{a}^2}}, \quad K^{\pm}_{\phi\phi}= \pm \frac{1}{a}\sqrt{\Phi(a)+\dot{a}^2},\quad K^{\pm}_{zz}=\alpha^2K^{\pm}_{\phi\phi},$$ where dot and prime denote derivatives with respect to $\tau$ and $r$, respectively. Now, using the relations between the extrinsic curvatures $$[K_{ij}]=K^{+}_{ij}-K^{-}_{ij},\quad K=tr[K_{ij}]=[K^{i}_{i}],$$ the Einstein equations on the shell (the Lanczos equations) read $$\label{11} {S_{ij}}=\frac{1}{8\pi}\left\{g_{ij}K-[K_{ij}]\right\},$$ where $S_{ij}=diag(\sigma,p_\phi,p_z)$ is the surface energy-momentum tensor, with $\sigma$ the surface energy density and $p_\phi,~p_z$ the surface pressures.
With Eqs.(\[10\]) and (\[11\]), we obtain $$\begin{aligned} \label{12} \sigma&=&-\frac{1}{2\pi{a}}\sqrt{\Phi(a)+\dot{a}^2},\\\label{12a} p&=&p_{\phi}=p_z=\frac{1}{8\pi{a}}\frac{2a\ddot{a}+2\dot{a}^2 +2\Phi(a)+a\Phi'(a)}{\sqrt{\Phi(a)+\dot{a}^2}}.\end{aligned}$$ The negative surface energy density (\[12\]) indicates the presence of exotic matter at the throat. For the dynamical characterization of the shell, we consider the MGCG as exotic matter on the shell. The equation of state for the MGCG is defined as $$\label{13} p=A{\sigma}-\frac{B}{\sigma^\beta},$$ where $A,~B$ are positive constants and $0<{\beta}\leq1$. This equation combines various equations of state and reduces to the following classes for different values of the parameters $A,~B$ and $\beta$: - [for $A=0,~\beta=1$, it corresponds to the usual Chaplygin gas.]{} - [for $A=0$, it corresponds to the pure generalized Chaplygin gas (GCG).]{} - [for $\beta=1$, it is another form, the modified Chaplygin gas (MCG).]{} Debnath [@u] has generalized the MGCG to a variable MGCG by assuming $B$ to be a function of the scale factor $a$, i.e., $B=B(a)=B_{0}a^{-m}$, where $B_0,~m$ are positive constants. In this work, we assume $B$ to be a positive constant. Inserting Eqs.(\[12\]) and (\[12a\]) in (\[13\]), we obtain a second order differential equation describing the evolution of the wormhole throat $$\begin{aligned} \label{14} &&\left\{\left[2\ddot{a}+\Phi'(a)\right]a^2+\left[\left(\Phi(a)+\dot{a}^2\right) \left(1+2A\right)\right]2a\right\}\left[2a\right]^{\beta}\\\nonumber&-&2B(4\pi{a^2})^{1+\beta} \left[\Phi(a)+\dot{a}^2\right]^\frac{1-\beta}{2}=0.\end{aligned}$$ Linearized Stability Analysis: A Standard Approach ================================================== In this section, we analyze the stability of static solutions of the thin-shell wormhole under the standard potential approach [@6a; @7].
For this purpose, in the static configuration the surface energy density, surface pressure and dynamical equation of the thin-shell wormhole become $$\begin{aligned} \label{15} \sigma_0=-\frac{\sqrt{\Phi(a_0)}}{2\pi{a_0}},\quad p_0=\frac{2\Phi(a_0)+a_0\Phi'(a_0)}{8\pi{a_0}\sqrt{\Phi(a_0)}},\end{aligned}$$ $$\begin{aligned} \label{16} \left\{a^2_0\Phi'(a_0)+2a_0 \left(1+2A\right)\Phi(a_0)\right\}\left[2a_0\right]^{\beta}-2B(4\pi{a^2_0})^{1+\beta} \left[\Phi(a_0)\right]^\frac{1-\beta}{2}=0.\end{aligned}$$ The surface energy density and pressure satisfy the conservation equation $$\begin{aligned} \label{17} \frac{d}{d\tau}(\sigma{\Omega})+p\frac{d\Omega}{d\tau}=0,\end{aligned}$$ where $\Omega=4\pi{a^2}$ is the area of the wormhole throat. This equation describes the change in internal energy of the throat plus the work done by the throat’s internal forces. We can write this equation as follows $$\begin{aligned} \label{18} \dot{\sigma}=-2(\sigma+p)\frac{\dot{a}}{a}.\end{aligned}$$ Defining ${\sigma}'=\frac{\dot{\sigma}}{\dot{a}}$, this equation takes the form $$\label{19} a{\sigma}'=-2(\sigma+p).$$ For the stability of the static configuration under radial perturbations around $a=a_0$, we rearrange Eq.(\[12\]) to obtain the thin-shell equation of motion $$\label{20} \dot{a}^2+V(a)=0.$$ This completely determines the dynamics of the thin-shell wormhole, where the potential function $V(a)$ is given by $$\label{21} V(a)=\Phi(a)-\left[2\pi{a}{\sigma(a)}\right]^2.$$ The stability of static solutions requires $V(a_0)=0=V'(a_0)$ and $V''(a_0)>0$.
For this purpose, we apply the Taylor series expansion to $V(a)$ up to second order around $a_0$ $$\begin{aligned} \label{22} V(a)=V(a_0)+V'(a_0)(a-a_0)+\frac{1}{2}V''(a_0)(a-a_0)^2+O[(a-a_0)^3].\end{aligned}$$ Taking the first derivative of Eq.(\[21\]) and using (\[19\]), we obtain $$\label{23} V'(a)=\Phi'(a)+8{\pi}^2a\sigma(a)\left[\sigma(a)+p(a)\right].$$ We have another useful relation from the equation of state $$\label{24} p'(a)={\sigma'(a)}\left[(1+\beta)A-\frac{\beta{p(a)}}{\sigma(a)}\right],$$ which may then be written as $$\label{25} {\sigma'(a)}+2p'(a)={\sigma}'(a)\left[1+2\{(1+\beta)A-\frac{\beta{p(a)}}{\sigma(a)}\}\right].$$ The second derivative of the potential function, along with the above equation, leads to $$\begin{aligned} \nonumber V''(a)&=&\Phi''(a)-8{\pi}^2\left\{[\sigma(a)+2p(a)]^2+2\sigma(a) \left[\sigma(a)+p(a)\right]\right.\\\label{26}&\times&\left.\left[1+2\left((1+\beta)A -\frac{\beta{p(a)}}{\sigma(a)}\right)\right]\right\}.\end{aligned}$$ Using Eq.(\[15\]), both $V(a)$ and $V'(a)$ vanish at $a=a_0$, while $V''(a_0)$ becomes $$\begin{aligned} \nonumber V''(a_0)&=&\Phi''(a_0)+\frac{(\beta-1){\Phi'(a_0)}^2}{2\Phi(a_0)}+\frac{{\Phi'(a_0)}}{a_0} \left[1+2(1+\beta)A\right]\\\label{27}&-&\frac{2{\Phi(a_0)}(1+\beta)}{a_0}\left(1+A\right).\end{aligned}$$ Charged Black String Thin-Shell Wormholes ----------------------------------------- In this section, we formulate the charged black string thin-shell wormholes and discuss the stability of their static solutions.
The surface energy density and pressure for the charged black string wormhole with Eq.(\[15\]) become $$\begin{aligned} \label{28} \sigma_0=-\frac{\sqrt{\alpha^4{a^4_0}-4M\alpha{a_0}+4Q^2}}{2\pi\alpha{a_0}},\quad p_0=\frac{\alpha^3{a^3_0}-4M}{2\pi{a_0}\sqrt{\alpha^4{a^4_0}-4M\alpha{a_0}+4Q^2}}.\end{aligned}$$ Using these values in Eqs.(\[13\]) and (\[26\]), the dynamical equation and the second derivative of the potential satisfied by the throat radius become $$\begin{aligned} \nonumber &&\alpha^4{a^4_0}-M\alpha{a_0}+A\left(\alpha^4{a^4_0}-4M\alpha{a_0}+4Q^2\right) -B\left(2\pi\alpha{a^2_0}\right)^{1+\beta} \\\label{29}&\times&\left(\alpha^4{a^4_0}-4M\alpha{a_0}+4Q^2\right)^{\frac{1-\beta}{2}}=0,\end{aligned}$$ and $$\begin{aligned} \nonumber V''(a_0)&=&\frac{4}{\alpha^2a^4_0(\alpha^4a^4_0-4\alpha{a_0}M+4Q^2)} \left\{(1+\beta)\left[-6\alpha^2a^2_0{M^2}\right.\right.\\\nonumber &\times&\left.\left.(1+4A)-32AQ^2+(1+A)\alpha^5a^5_0M+8\alpha{a_0}MQ^2(1+7A) \right.\right.\\\label{30} &-&\left.\left.8(1+A)\alpha^4{a^4_0}Q^2\right]+16\alpha^4{a^4_0}Q^2-9\alpha^5{a^5_0}M\right\}.\end{aligned}$$ Now we explore whether the static solutions are stable or unstable. The existence of a static solution is constrained by the condition $a_0>{r_h}$, i.e., the throat radius must be greater than the horizon radius. On the other hand, for $a_0\leq{r_h}$, static solutions do not exist; this corresponds to the non-physical zone (grey zone) shown in Figures **1-6**. Due to the complicated nature of Eq.(\[29\]), we solve this equation numerically, find $a_0$ for $\beta=0.2,~0.6,~1$, and then substitute the solution in Eq.(\[30\]). If $V''(a_0)>0$, we have a stable static solution, represented by the solid curve, whereas for $V''(a_0)<0$ the solution is unstable, represented by the dotted curve. The behavior of the static solutions depends upon the critical value of the charge, $Q_c=0.866025$.
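The numerical procedure just described can be sketched as follows (a self-contained illustration with arbitrarily chosen parameter values, not the actual values behind Figures **1-6**): Eq.(\[29\]) is solved for $a_0$ by bisection on an interval above the horizon, and the sign of $V''(a_0)$, evaluated through Eqs.(\[15\]) and (\[26\]) with numerical derivatives of $\Phi$, decides stability:

```python
import math

def Phi(r, M=1.0, Q=0.0, alpha=1.0):
    return alpha**2 * r**2 - 4*M/(alpha*r) + 4*Q**2/(alpha**2 * r**2)

def static_eq(a, A, B, beta, M=1.0, Q=0.0, alpha=1.0):
    """Left-hand side of Eq.(29); a root a0 is a static throat radius."""
    q = alpha**4 * a**4 - 4*M*alpha*a + 4*Q**2
    return (alpha**4 * a**4 - M*alpha*a + A*q
            - B * (2*math.pi*alpha*a**2)**(1 + beta) * q**((1 - beta)/2))

def bisect(f, lo, hi, tol=1e-12):
    """Simple bisection; assumes f(lo) and f(hi) have opposite signs."""
    flo = f(lo)
    for _ in range(200):
        mid = 0.5*(lo + hi)
        fm = f(mid)
        if flo*fm <= 0:
            hi = mid
        else:
            lo, flo = mid, fm
        if hi - lo < tol:
            break
    return 0.5*(lo + hi)

def V2(a0, A, beta, M=1.0, Q=0.0, alpha=1.0, B=None):
    """V''(a0) via Eq.(26), with sigma, p from Eq.(15) and central differences."""
    h = 1e-6
    Pa = Phi(a0, M, Q, alpha)
    P1 = (Phi(a0 + h, M, Q, alpha) - Phi(a0 - h, M, Q, alpha)) / (2*h)
    P2 = (Phi(a0 + h, M, Q, alpha) - 2*Pa + Phi(a0 - h, M, Q, alpha)) / h**2
    sig = -math.sqrt(Pa) / (2*math.pi*a0)
    p = (2*Pa + a0*P1) / (8*math.pi*a0*math.sqrt(Pa))
    brk = 1 + 2*((1 + beta)*A - beta*p/sig)
    return P2 - 8*math.pi**2 * ((sig + 2*p)**2 + 2*sig*(sig + p)*brk)

# Example: uncharged string (Q=0, horizon r_h=(4M)^{1/3}/alpha), MCG case beta=1.
A, B, beta = 0.0, 0.02, 1.0
r_h = 4.0 ** (1.0/3.0)
a0 = bisect(lambda a: static_eq(a, A, B, beta), 1.6, 2.0)
stable = V2(a0, A, beta) > 0
```

For these (hypothetical) parameters the root lies just outside the horizon, so the solution is physical; whether it is stable or unstable is read off from the sign of `V2(a0, ...)`.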
We can discuss the solutions given in Figures **1-2** for $\beta=0.2,~0.6$ as follows: - [For $|Q|=0$, there exist both stable and unstable solutions for the black string thin-shell wormhole. The unstable solution approaches the horizon radius for large values of $B\alpha^{-(1+\beta)}.$]{} - [For $|Q|=0.7Q_c$, the behavior is similar to the case $|Q|=0$.]{} - [For $|Q|=0.999Q_c$, i.e., $|Q|$ nearly equal to the critical charge: when $\beta=0.2$, both stable and unstable solutions exist, while for $\beta=0.6$, a stable solution exists only for small values of $B\alpha^{-(1+\beta)}$. Moreover, the horizon radius decreases with the increase of the charge.]{} - [For $|Q|=1.1Q_c$, i.e., $|Q|$ greater than the critical value of the charge: in this case, stable and unstable solutions exist in each case for increasing values of $B\alpha^{-(1+\beta)}$, and the horizon radius gradually disappears for $|Q|>Q_c$.]{} We also explore the stability of static solutions corresponding to $\beta=1,$ which corresponds to the MCG, as shown in Figure **3**. Notice that for $|Q|<Q_c,$ there always exists a stable static solution for small values of $B\alpha^{-(1+\beta)}$, which vanishes for large values of $B\alpha^{-(1+\beta)}$. When $|Q|>Q_c$, we again have two solutions: stable for small values of $B\alpha^{-(1+\beta)}$ and unstable for large values of $B\alpha^{-(1+\beta)}$. Also, similar to the above cases, the horizon radius decreases and eventually disappears for increasing values of $|Q|$. Now we analyze the stability of static solutions corresponding to the GCG and the usual Chaplygin gas. For this purpose, we take $A=0$ and $\beta=0.2,~0.6,~1$ in Eq.(\[13\]); the results are shown in Figures **4-6**. When $\beta=0.2$, we have only an unstable solution for $|Q|=0,~0.7Q_c$, while for $|Q|=0.999Q_c$ there exist three solutions: two unstable and one stable.
Finally, for $|Q|>Q_c$, the horizon radius disappears and both stable and unstable solutions exist, similar to the above cases. For $\beta=0.6$, the behavior of solutions presented in Figure **5** is similar to the case $\beta=0.6$ for the MGCG shown in Figure **2** for $|Q|=0,~0.7Q_c, ~0.999Q_c$, while for $|Q|>Q_c$, only a stable solution exists in this case. When $\beta=1$, we have only stable solutions for all values of $|Q|$, as shown in Figure **6**. Summary ======= The study of thin-shell wormholes has been the subject of interest due to the presence of exotic matter, which violates the null energy condition. The aim of this study is to construct cylindrical thin-shell wormholes and investigate the stability of these configurations. We have developed cylindrical thin-shell wormholes by joining two identical copies of the cylindrical manifold using the cut and paste method. In order to explore the dynamics of the thin-shell wormhole, we have applied the Darmois-Israel junction conditions along with the MGCG for the description of exotic matter. We have explored the stability of static solutions numerically, satisfying the condition $a_0>r_h$, under linear perturbations. The stability of thin-shell wormhole solutions is examined for different values of the parameter $\beta=0.2,~0.6,~1$. It is found that for $\beta=0.2,~0.6$, there always exist stable and unstable solutions, except for $\beta=0.6$ and $|Q|=0.999Q_c$, for which only a stable static solution exists. Moreover, in both cases the horizon radius decreases with the increase of $|Q|$. When $\beta=1$, we have stable configurations for small values of $B\alpha^{-(1+\beta)}$ with $|Q|<Q_c$, which approach the horizon of the manifold, whereas for $|Q|>Q_c$, we have obtained stable as well as unstable solutions, as in the other cases $\beta=0.2,~0.6.$ Further, we have examined the stability of solutions for the GCG.
It is found that for $\beta=0.2$, there exists one unstable solution for $|Q|=0,~0.7Q_c$, and two unstable and one stable solutions for $|Q|=0.999Q_c$. Also, when $|Q|>Q_c$, there exist both stable and unstable solutions. For $\beta=0.6$, the solutions are similar to the case of the MGCG with $|Q|=0,~0.7Q_c,~0.999Q_c$, while only a stable solution exists for $|Q|>Q_c$. Recently, we have found stable configurations for the uncharged and charged black string wormholes supported by Chaplygin gas [@a]. We would like to mention here that the results of this work reduce to [@a] for $\beta=1$ and $A=0$, as shown in Figure **6**. It is worth mentioning here that the literature [@17]-[@20] indicates only unstable solutions for cylindrical thin-shell wormholes. The apparent discrepancy of our results with those presented in the literature comes from the Einstein field equations and the choice of the equation of state for the exotic matter. We have concluded that there is a possibility of stable configurations for cylindrical thin-shell wormholes.\ [**Acknowledgment**]{} We would like to thank the Higher Education Commission, Islamabad, Pakistan, for its financial support through the [*Indigenous Ph.D. 5000 Fellowship Program Batch-VII*]{}. One of us (MA) would like to thank University of Education, Lahore for the study leave. [40]{} Einstein, A. and Rosen, N.: Phys. Rev. **48**(1935)73. Morris, M. and Thorne, K.: Am. J. Phys. **56**(1988)395. Visser, M.: Phys. Rev. D **39**(1989)3182; Visser, M.: Nucl. Phys. B **328**(1989)203. Visser, M.: *Lorentzian Wormholes* (AIP Press, New York, 1996). Darmois, G.: Memorial des Sciences Mathematiques (Gauthier-Villars, 1927) Fasc. 25; Israel, W.: Nuovo Cimento B **44**(1966)1. Musgrave, P. and Lake, K.: Class. Quantum Grav. **13**(1996)1885. Poisson, E. and Visser, M.: Phys. Rev. D **52**(1995)7318. Eiroa, E.F. and Romero, G.E.: Gen. Relativ. Gravit. **36**(2004)651. Lobo, F.S.N. and Crawford, P.: Class. Quantum Grav. **21**(2004)391.
Thibeault, M., Simeone, C. and Eiroa, E.F.: Gen. Relativ. Gravit. **38**(2006)1593. Rahaman, F., Kalam, M. and Chakraborty, S.: Int. J. Mod. Phys. D **16**(2007)1669. Rahaman, F., Kalam, M., Rahman, K.A. and Chakraborty, S.: Gen. Relativ. Gravit. **39**(2007)945. Mazharimousavi, S.H., Halilsoy, M. and Amirabi, Z.: Phys. Lett. A **375**(2011)3649. Mazharimousavi, S.H., Halilsoy, M. and Amirabi, Z.: Phys. Lett. A **375**(2011)231. Sharif, M. and Azam, M.: JCAP **02**(2012)043; Gen. Relativ. Gravit. **44**(2012)1181; J. Phys. Soc. Jpn. **81**(2012)124006; Mon. Not. R. Astron. Soc. **430**(2013)3048; Chinese Phys. B **22**(2013)050401. Thorne, K.S.: *in Magic without Magic* (Freeman, San Francisco, 1972). Vilenkin, A. and Shellard, E.P.S.: *Cosmic Strings and Other Topological Defects* (Cambridge University Press, 1994). Clément, G.: Phys. Rev. D **51**(1995)6803. Aros, R.O. and Zamorano, N.: Phys. Rev. D **56**(1997)6607. Kuhfittig, P.K.F.: Phys. Rev. D **71**(2005)104007. Bronnikov, K.A. and Lemos, J.P.S.: Phys. Rev. D **79**(2009)104019. Eiroa, E.F. and Simeone, C.: Phys. Rev. D **70**(2004)044008. Bejarano, C., Eiroa, E.F. and Simeone, C.: Phys. Rev. D **75**(2007)027501. Richarte, M.G. and Simeone, C.: Phys. Rev. D **79**(2009)127502. Eiroa, E.F. and Simeone, C.: Phys. Rev. D **81**(2010)084022. Sharif, M. and Azam, M.: Eur. Phys. J. C **73**(2013)2407. Lemos, J.P.S. and Zanchin, V.T.: Phys. Rev. D **54**(1996)3840. Debnath, U.: Astrophys. Space Sci. **312**(2007)295. [^1]: msharif.math@pu.edu.pk [^2]: azammath@gmail.com
--- abstract: 'We consider the effects of eccentricity on the fragmentation of gravitationally unstable accretion disks, using numerical hydrodynamics. We find that eccentricity does not affect the overall stability of the disk against fragmentation, but significantly alters the manner in which such fragments accrete gas. Variable tidal forces around an eccentric orbit slow the accretion process, and suppress the formation of weakly-bound clumps. The “stellar” mass function resulting from the fragmentation of an eccentric disk is found to have a significantly higher characteristic mass than that from a corresponding circular disk. We discuss our results in terms of the disk(s) of massive stars at $\simeq0.1$pc from the Galactic Center, and find that the fragmentation of an eccentric accretion disk, due to gravitational instability, is a viable mechanism for the formation of these systems.' author: - 'Richard D. Alexander, Philip J. Armitage, Jorge Cuadra, and Mitchell C. Begelman' title: 'Self-gravitating fragmentation of eccentric accretion disks' --- Introduction {#sec:intro} ============ The relative proximity of the Galactic Center (henceforth GC) provides a unique opportunity for “close-up” study of the processes that influence the formation of galaxies and black holes. Recent advances in telescope technology have enabled the resolution of individual stars in the crowded GC environment, and these new data have produced several puzzles [e.g., @ghez98; @genzel03; @ghez05; @paumard06]. In particular, the detection of large numbers of young, massive stars is strongly suggestive of recent star formation activity at or close to the GC, despite the fact that the extreme physical conditions at the GC (temperature, tidal shear, etc.) prohibit the formation of stars by the same mechanisms that form stars in the solar neighborhood. 
The data show that a small number (few tens) of B-type stars exist very close to the GC (within $\simeq0.01$pc; these are the so-called “S-stars”), and a larger population ($>100$) of massive O, B and Wolf-Rayet (WR) stars exist at slightly larger radii ($\simeq0.1$pc). Many of this second population of stars are known to form a coherent ring or disk [e.g., @genzel03] and, while the existence of a second such disk of stars remains controversial [e.g., @paumard06; @lu06], the existence of at least one stellar disk seems secure. A popular theory posits that the stellar disk(s) formed via fragmentation of an accretion disk due to gravitational instability [e.g., @ps77; @lb03; @sb87; @sb89; @gt04; @nayak06; @levin07]. However, we have recently demonstrated that the observed eccentricities of the stellar orbits [typically $\gtrsim 0.3$, @bel06] are inconsistent with dynamical relaxation from a circular initial configuration [@aba07], unless the stellar mass function (MF) is extremely top-heavy, much more so than is inferred from observations [@ns05; @paumard06]. We suggested that fragmentation of an eccentric accretion disk could explain this discrepancy, and here seek to investigate the effects of eccentricity on disk fragmentation. In recent years gravitational instabilities in accretion disks have been the subject of much theoretical and numerical research, primarily in the context of protoplanetary disks and planet formation [see, e.g., the review by @durisen_ppv]. Some details remain contested, but the basic physical processes are now well understood. In order for a gravitationally unstable disk to fragment, two conditions must be satisfied. Firstly, the disk must be sufficiently massive and/or cold that the @toomre64 criterion is satisfied: $$Q = \frac{c_{\mathrm s} \Omega}{\pi G \Sigma} \lesssim 1$$ Here $c_{\mathrm s}$ is the local sound speed, $\Omega$ the local orbital frequency, and $\Sigma$ the disk surface density. 
This criterion ensures that the disk is gravitationally unstable, but in order for the instability to lead to fragmentation the disk must also cool rapidly, or else compressional heating will stabilize the disk against fragmentation. @gammie01 used a local analysis to show that the cooling timescale of the disk must satisfy $$t_{\mathrm {cool}} \lesssim 3 \Omega^{-1} \, ,$$ and subsequent work using global simulations has verified the validity of this criterion [@rice03; @lr04; @lr05]. The criterion also depends weakly on the adopted equation of state [@gammie01; @rice05], and studies of protoplanetary disks have shown that this dependence can be non-trivial in real disks [@boley07]. In the case of an eccentric disk, the circularization energy provides an additional, and potentially important, source of heating, and this remains largely unstudied to date. Theoretical models of disk fragmentation around black holes typically consider circular disks with near-equilibrium initial conditions, implicitly assuming that such a configuration results from some sort of “accretion event” [such as the capture of a molecular cloud, e.g., @nayak06; @nayak07; @levin07]. In the case of the GC, the so-called circumnuclear ring at a few pc from Sgr A$^*$ is thought to be the likely gas reservoir for such accretion events [@morris93; @sanders98], but the dynamics of the infall process is not well understood. Moreover, some models of cloud infall predict the formation of coherent, eccentric disks around the central black hole [e.g., @sanders98]. We expect such disks to be gravitationally unstable, and star formation via fragmentation of an eccentric disk would provide an elegant solution to the problem of the stellar orbits discussed in @aba07, although it may leave some issues, such as the $z$-distribution of the orbits, unresolved. In this paper, therefore, we seek to understand the dynamics of gravitational instability and fragmentation in eccentric disks. 
In Section \[sec:sims\] we describe our numerical simulations, and present the results in Section \[sec:res\]. We discuss the implications of our results for models of star formation at the GC, as well as the limitations of our analysis, in Section \[sec:dis\], and summarize our conclusions in Section \[sec:summary\]. Simulations {#sec:sims} =========== Our simulations make use of the publicly-available smoothed-particle hydrodynamics (SPH) code [Gadget2]{} [@springel05]. The code has been modified to allow the use of a simple cooling prescription (as outlined below), which is valid only in a disk geometry. We adopt the standard @mg83 prescription for the SPH viscosity (with $\alpha_{\mathrm {SPH}}=1.0$), using the “Balsara-switch” [@balsara95] to limit the artificial shear viscosity (as specified in Equations 11–12 of @springel05), and adopt sufficiently high numerical resolution that the transport of angular momentum due to numerical dissipation is much less than that expected from self-gravitating angular momentum transport [@lr04; @lr05; @nelson06]. As demanded by @nelson06, we allow for a variable gravitational softening length, and fix the SPH smoothing and gravitational softening lengths to be equal throughout. We use the standard Barnes-Hut formalism to compute the gravitational force tree [as described in @springel05], and use $N_{\mathrm {ngb}}=64\pm2$ as the number of SPH neighbours. Lastly, in order to avoid close orbits around the “black hole” limiting the time-step to an unreasonably small value, we use a single sink particle as the central gravitating mass. This particle simply accretes all gas particles that pass within its sink radius [as described in @cuadra06], and is merely a numerical convenience that has no physical effect on the simulation. We set the sink radius of the “black hole” to be 1/4 of the inner disk radius (or semi-major axis, if $e\ne0$), which has a value of 1 in our scale-free simulations (see Section \[sec:sim\_dets\]). 
In practice fewer than 0.1% of the particles are swallowed by the sink particle in any of our simulations. The simulations were run on the [tungsten]{} Xeon Linux cluster at NCSA[^1], using 64 parallel CPUs for the highest-resolution runs. Initial conditions {#sec:ics} ------------------ We set up our initial disk as follows, using an approach that closely mirrors that of @rice05. We adopt a surface density profile $$\Sigma(a) \propto a^{-1}$$ and a disk temperature that scales as $$T(a) \propto a^{-1/2} \, ,$$ where $a$ is the orbital semi-major axis. The disk sound speed $c_s \propto T^{1/2}$ is normalized so that the Toomre parameter $$Q = \frac{c_{\mathrm s} \Omega}{\pi G \Sigma}$$ is equal to 2.0 at the outer boundary. (Here $\Omega$ is the orbital frequency.) Using this prescription, the Toomre parameter scales approximately as $$Q \propto a^{-3/4}$$ in the initial disk. With this set-up the initial disk is marginally gravitationally stable, and cools into instability with time. In order to set up such a disk in SPH, we first define a disk mass $M_{\mathrm d}$, and set the mass of each gas particle to be $m = M_{\mathrm d}/N_{\mathrm {SPH}}$, where $N_{\mathrm {SPH}}$ is the number of gas particles. We then divide the disk into concentric annuli, and distribute the particles in the annuli according to the surface density profile. Within each annulus, particles are distributed such that the mass flux around the annulus is constant. For an orbit of given semi-major axis $a$, eccentricity $e$, and azimuthal angle $\phi$, the radius is given by $$r(\phi) = \frac{1-e^2}{1+e\cos\phi} a \,$$ and the radial and azimuthal components of the velocity are given by $$v_r = e \sin \phi \sqrt{\frac{G M_{\mathrm {enc}}}{r(1+e\cos\phi)}}\, ,$$ $$v_{\phi} = \sqrt{\frac{G M_{\mathrm {enc}} (1+e\cos\phi)}{r}}$$ respectively, where $M_{\mathrm {enc}}$ is the mass enclosed by the orbit (the mass of the central object plus the mass of the disk at semi-major axis $<a$). 
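The orbital expressions above translate directly into code. The following helper is a sketch (not the paper's actual set-up code) that returns the radius and velocity components for a particle at true anomaly $\phi$ on an orbit of given elements:

```python
import math

def orbit_state(a, e, phi, m_enc, G=1.0):
    """Radius and (radial, azimuthal) velocity components at azimuthal
    angle phi on an orbit of semi-major axis a and eccentricity e,
    following the expressions in the text; m_enc is the enclosed mass."""
    r = a * (1.0 - e**2) / (1.0 + e * math.cos(phi))
    v_r = e * math.sin(phi) * math.sqrt(G * m_enc / (r * (1.0 + e * math.cos(phi))))
    v_phi = math.sqrt(G * m_enc * (1.0 + e * math.cos(phi)) / r)
    return r, v_r, v_phi
```

As a consistency check, the specific orbital energy $\frac{1}{2}(v_r^2+v_\phi^2) - GM_{\mathrm {enc}}/r$ evaluates to $-GM_{\mathrm {enc}}/2a$ at every $\phi$, as it must for a Keplerian orbit.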
Note that, as our disks are sufficiently massive to be self-gravitating, the disk mass can cause a significant perturbation to the Keplerian potential of the central mass. (Strictly these expressions only apply to a spherically symmetric mass distribution, but they are sufficiently accurate for our purposes.) The particles are then distributed vertically by randomly sampling a hydrostatic equilibrium (Gaussian) density distribution with scale-height $H = c_{\mathrm s}/\Omega(r)$ (although we note that this is not strictly an equilibrium configuration when $e\ne0$). Cooling {#sec:cooling} ------- In order to draw comparisons with previous studies, we adopt a simple cooling prescription of the form $$\left(\frac{du}{dt}\right)_{\mathrm {cool},i} = -\frac{u_i}{t_{\mathrm {cool}}} \, ,$$ where $u_i$ is the internal energy of particle $i$, and the cooling timescale $t_{\mathrm {cool}}$ is given by $$t_{\mathrm {cool}} = \frac{\beta}{\Omega} \, .$$ Here the constant $\beta$ is an input parameter: previous studies have found that the fragmentation boundary in circular disks lies at $\beta \simeq 3$, with a weak dependence on the equation of state adopted [@gammie01; @rice03; @rice05]. In the case of an eccentric disk, however, there is an ambiguity in this definition of the cooling timescale. One can choose either to use $\Omega(r) = \sqrt{G M_{\mathrm {bh}}/r^3}$, or $\Omega(a) = \sqrt{G M_{\mathrm {bh}}/a^3}$. The first case results in a cooling timescale which varies around the orbit, while in the second case $t_{\mathrm {cool}}$ is constant for individual orbital streamlines. It is not entirely clear which of these is most appropriate when compared to a real disk, as this depends on the coolants involved. However, it seems unlikely that the rate at which an individual fluid element cools will vary on a timescale shorter than the dynamical timescale, so we adopt the second prescription for most of our modelling. 
We do, however, run a model with varying cooling time as a test case. Consequently, we define the cooling timescale as $$t_{\mathrm {cool}} = \frac{\beta}{\Omega(a)} = \beta\sqrt{\frac{a^3}{G M_{\mathrm {bh}}}}\, ,$$ and compute the semi-major axis $a$ of a given particle directly from the instantaneous orbital elements (based on the assumption of Keplerian orbits). We adopt an adiabatic equation of state throughout, with adiabatic index $\gamma=5/3$.

Simulation details {#sec:sim_dets}
------------------

| Simulation | $N_{\mathrm {SPH}}$ | $e$ | $\beta$ | $a_{\mathrm{max}}$ | $M_{\mathrm d}/M_{\mathrm {bh}}$ | Min. smoothing length? | $N_{\mathrm {runs}}$ |
|:--------------|:--------------------|:-----|:--------|:-------------------|:---------------------------------|:-----------------------|:---------------------|
| [circ]{} | 500,000 | 0 | 3 | 25.0 | 0.2 | No | 3 |
| [circb5]{} | 500,000 | 0 | 5 | 25.0 | 0.2 | No | 1 |
| [ecc.25]{} | 500,000 | 0.25 | 3 | 25.0 | 0.2 | No | 1 |
| [ecc.50]{} | 500,000 | 0.5 | 3 | 25.0 | 0.2 | No | 3 |
| [ecc.50b5]{} | 500,000 | 0.5 | 5 | 25.0 | 0.2 | No | 1 |
| [ecc.50var]{} | 500,000 | 0.5 | 3 | 25.0 | 0.2 | No | 1 |
| [ecc.75]{} | 500,000 | 0.75 | 3 | 25.0 | 0.2 | No | 1 |
| [circhr]{} | $5\times10^6$ | 0 | 3 | 5.0 | 0.05 | Yes | 1 |
| [ecchr]{} | $5\times10^6$ | 0.5 | 3 | 5.0 | 0.05 | Yes | 1 |

: [List of simulations run, showing resolution and physical parameters for each simulation. $N_{\mathrm {SPH}}$ is the number of SPH particles used to model the gas disk, and $N_{\mathrm {runs}}$ is the number of different random realizations of the initial conditions that were used in each case. The simulation [ecc.50var]{} used the cooling prescription that results in a cooling time which varies around the eccentric orbit; all the others used a cooling time that was constant along the orbital streamlines.]{}[]{data-label="tab:sims"}

In order to study the effects of eccentricity on disk fragmentation, we have conducted a number of simulations.
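Recovering the semi-major axis from the instantaneous orbital elements amounts to inverting the vis-viva relation; a sketch of this step and the resulting cooling time (assuming purely Keplerian motion, in arbitrary consistent units):

```python
import math

def semi_major_axis(r, v, GM):
    """Invert the vis-viva relation v^2 = GM * (2/r - 1/a) for a,
    given the instantaneous distance r and speed v of a particle."""
    return 1.0 / (2.0 / r - v * v / GM)

def cooling_time(a, beta, GM):
    """t_cool = beta / Omega(a) = beta * sqrt(a^3 / (G * M_bh))."""
    return beta * math.sqrt(a**3 / GM)
```

For a circular orbit ($v^2 = GM/r$) this recovers $a = r$, as expected.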
The parameters of these simulations are specified in Table \[tab:sims\], and are summarized below. The simulations are scale-free: we adopt a system of units where the length unit is the semi-major axis of the inner disk edge, the mass unit is that of the central black hole, and the time unit is the orbital period at $a=1$. Firstly, we conducted a large suite of simulations at moderate resolution, using 500,000 gas particles. In order to resolve the fragmentation process properly we adopted a rather massive (and therefore thick) disk, with $M_{\mathrm d}/M_{\mathrm {bh}}=0.2$, and simply stopped the calculations when the density of collapsed regions became high enough to restrict the timestep to an unfeasibly small value. We adopted a dynamic range of 25 in semi-major axis for these simulations. For a circular disk ($e=0$) we conducted three otherwise-identical simulations with $\beta=3$ (i.e., marginally unstable to fragmentation), each using a different random realization of the initial distribution of gas particles (i.e., a different noise field), and similarly performed three such simulations with $e=0.5$. In addition, we performed a single simulation with $e=0.25$, another with $e=0.75$, and a further simulation with $e=0.5$ using the cooling prescription that varies around the disk orbit (see Section \[sec:cooling\]). Lastly, we performed two simulations with a longer cooling time of $\beta=5$ [which should be stable against fragmentation in the circular case; @rice03]: one with $e=0$, and one with $e=0.5$. As seen in Section \[sec:res\_low\] below, these low-resolution simulations allow us to infer a great deal about the effects of eccentricity on disk fragmentation, but do not run for long enough, or produce enough fragments, to allow measurement of a MF.
Consequently, we also ran two further simulations at much higher resolution, using $5\times10^6$ gas particles and a somewhat smaller radial range (with the outer edge of the disk at 5 times the semi-major axis of the inner edge). We adopted the lightest disk that could be resolved safely throughout (at a factor of $\simeq2$ better than the @nelson06 criterion for spatial resolution in the vertical direction), and so adopted $M_{\mathrm d}/M_{\mathrm {bh}}=0.05$. If we scale this to the GC system, this gives an SPH particle mass of 0.03M$_{\odot}$ (assuming a black hole mass of $3\times10^6$M$_{\odot}$). We note that this is slightly more massive than the disks adopted by previous such simulations [e.g., @nayak07 who adopt $M_{\mathrm d}/M_{\mathrm {bh}}=0.01$], and also slightly more massive than current observational estimates of the total stellar mass in the GC disk(s) [which range from $\simeq0.003$–0.03$M_{\mathrm {bh}}$, e.g., @ndcg06; @paumard06]. However, such observations may under-estimate the mass of the initial gas disk as, given the age of the stellar disk(s) [$\simeq 6$Myr, @paumard06], it is reasonable to assume that the most massive stars that formed are no longer present. Moreover, given the scale-free nature of our simulations, it is unlikely that this small over-estimate of the disk mass will have a significant effect on our results. Additionally, in order to follow the fragmentation process beyond the initial collapse phase in these high-resolution simulations, we imposed a minimum SPH smoothing length of 0.001. This is well below the Hill radius expected for even the least massive clumps, and merely acts to limit collapse to densities that would stop the calculation; the clumps essentially have density profiles which are “flat-topped” within this radius. 
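The quoted particle mass follows directly from the chosen disk mass and particle number; a quick check, assuming (as in the text) $M_{\mathrm {bh}}=3\times10^6$M$_{\odot}$:

```python
# Scaling the high-resolution runs to the GC system:
M_bh = 3.0e6                  # assumed black hole mass [Msun]
M_disk = 0.05 * M_bh          # M_d / M_bh = 0.05
N_sph = 5.0e6                 # number of SPH particles
m_particle = M_disk / N_sph   # SPH particle mass, ~0.03 Msun
```

The 192-bound-particle threshold quoted later for well-resolved clumps then corresponds to $192 \times 0.03 \simeq 6$M$_{\odot}$, consistent with the stated mass resolution limit.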
We prefer this to a sink particle approach because bound material can become unbound from fragments as they move around an eccentric orbit (due to the variable shear), and sink particles of the type used by, for example, @nayak07 cannot lose mass in this manner (by construction). We conducted two of these high-resolution simulations: one of a circular disk, and one with $e=0.5$.

Results {#sec:res}
=======

Code Tests
----------

As a test of our numerical method, we first set out to reproduce previously obtained results. In the circular case, we find that the simulation with $\beta=5$ was indeed stable against fragmentation, and instead produced quasi-steady angular momentum transport via spiral density waves. This simulation was allowed to run for 5 [*outer*]{} orbital periods (i.e., 625 inner orbital periods) and showed no evidence of fragmentation, suggesting that this configuration is indeed stable against fragmentation over long timescales. By contrast, all of the simulations with $\beta=3$ were unstable to fragmentation, typically breaking into fragments after around 90 inner orbital periods[^2]. Reflections of density waves from the inner edge of the disk act to stabilize the region within a few $H$ of the boundary, so that the first fragments form at radii of $\simeq3$–5 (where the orbital timescale is approximately 10 times that at the inner boundary). Thus we find that our numerical procedure was successful in reproducing previous results, namely that the fragmentation boundary for a gravitationally unstable disk with $\gamma=5/3$ lies at $t_{\mathrm {cool}}\Omega \simeq 3$ [e.g., @gammie01; @rice03].

Low-resolution runs {#sec:res_low}
-------------------

The low-resolution runs make use of a rather massive (and therefore thick) disk and, if they fragment at all, are not expected to produce very many fragments (as the most unstable length scale is always of order $H$).
Consequently, the analysis of these low-resolution runs is limited to whether, when, and where they fragment, and the number of fragments produced: there are too few fragments to allow detailed analysis of their properties. In these runs the density is allowed to increase without limit, so the simulations simply stop when the density becomes sufficiently high to force a very small timestep. Typically this occurs 1–2 (local) orbital periods after the first fragments form, and only a small fraction of the gas is accreted into clumps before the simulations stop. In order to compare the fragmentation properties of runs with different eccentricities, we require a simple method to quantify the behavior (masses, velocities, positions, etc.) of the fragments that form. We define fragments (or “clumps”) to be bound objects containing at least 128 SPH particles (i.e., with at least double the number of nearest neighbors). The clumps are first identified by locating peaks in the density distribution, and are then tested for boundedness. For simplicity we calculate the potential energy of pairs of particles using a slightly simplified gravitational potential of the form $Gm/(r+h)$, where $h$ is the local SPH smoothing length: this is sufficiently accurate for our purposes, and is faster to compute than convolution with the rather complex smoothing function used by [Gadget2]{}. The masses of individual clumps are determined by ranking particles by total energy (potential plus kinetic plus thermal) and iterating outwards until the first unbound particle is reached. We define the position and velocity of each clump as the mean values of all of the SPH particles bound to that clump. While somewhat cumbersome, we prefer this method of identifying clumps to a geometric one (using, for example, concentric shells) as it allows accurate treatment of non-spherical clumps, and we apply this method identically to all of our simulations (both here and in the high-resolution runs discussed below).
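The membership test can be sketched as follows. This is a simplified, single-pass version: the potential below uses only the mass already accepted as bound, rather than the full pairwise sum over particles used in the simulations.

```python
import math

def clump_mass(particles, center, v_center, h, G=1.0):
    """Single-pass sketch of the clump-membership test described in the
    text: starting from the density peak, accrete particles outwards and
    stop at the first unbound one. Each particle is a tuple
    (mass, position, velocity, u), with u the specific thermal energy;
    the pair potential is softened as -G*m*M/(r + h)."""
    def dist(p, q):
        return math.sqrt(sum((pi - qi) ** 2 for pi, qi in zip(p, q)))

    ordered = sorted(particles, key=lambda p: dist(p[1], center))
    m_bound = ordered[0][0]  # seed with the particle at the density peak
    for m, pos, vel, u in ordered[1:]:
        kinetic = 0.5 * m * sum((vi - vc) ** 2 for vi, vc in zip(vel, v_center))
        potential = -G * m * m_bound / (dist(pos, center) + h)
        if kinetic + m * u + potential >= 0.0:
            break  # first unbound particle terminates the clump
        m_bound += m
    return m_bound
```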
### Fragmentation boundary for $e\ne0$ ![Snapshots of disk midplane density in the central region of the low-resolution runs with $e=0.5$: the upper panel shows a model with $\beta=3$ (simulation [ecc.50]{}), the lower $\beta=5$ (simulation [ecc.50b5]{}). Both are plotted on the same (logarithmic) density scale, and both are shown for $t=112.5$. The $\beta=3$ run has fragmented, while the $\beta=5$ run remains stable. The major axis of the disk was initially aligned with the $x$-axis: the precession of the inner disk is clearly seen in both snapshots.[]{data-label="fig:b3_b5"}](f1a.ps){width="\hsize"} ![Snapshots of disk midplane density in the central region of the low-resolution runs with $e=0.5$: the upper panel shows a model with $\beta=3$ (simulation [ecc.50]{}), the lower $\beta=5$ (simulation [ecc.50b5]{}). Both are plotted on the same (logarithmic) density scale, and both are shown for $t=112.5$. The $\beta=3$ run has fragmented, while the $\beta=5$ run remains stable. The major axis of the disk was initially aligned with the $x$-axis: the precession of the inner disk is clearly seen in both snapshots.[]{data-label="fig:b3_b5"}](f1b.ps){width="\hsize"} Our first aim was to study the effect of eccentricity on the overall stability of the disk against fragmentation. The energy liberated by circularization of an eccentric orbit (with a fixed angular momentum) is given simply by the product of $e^2$ and the total energy of the corresponding circular orbit, so in principle this represents a large energy reservoir that could stabilize the disk against fragmentation. However, in our simulations this energy is not liberated sufficiently quickly to prevent fragmentation: all of the eccentric disks with $\beta=3$ fragmented (apart from that with $e=0.75$, see below), while none of the models with $\beta=5$ showed any evidence for fragmentation despite running for several outer orbital times (see Fig.\[fig:b3\_b5\])[^3]. 
Moreover, the fact that the disk with $e=0.75$ and $\beta=3$ did not fragment is likely an artefact of our numerical set-up. In this simulation the major-to-minor axis ratio of the disk is so large that when the inner disk precesses it approaches the outer edge of the disk. At this point density waves are reflected back off the outer edge of the disk, and appear to stabilize the disk against fragmentation. In principle this could be a physical effect, but it is not clear that a real disk would have such a sharp outer edge with such a large major-to-minor axis ratio. Additional simulations with a much larger dynamical range in semi-major axis (and therefore a much larger particle number) are required to investigate this further. We therefore conclude that the disk eccentricity has little or no effect on the location of the fragmentation boundary, except possibly at very high eccentricity ($e\gtrsim 0.75$). In a sense, this can be attributed to our choice of initial conditions. Our disks were set up so that the angle of pericenter was constant with radius, and consequently none of the initial streamlines intersect. We expect the circularization energy to be liberated by shocks, which arise when differential precession of the disk causes different orbital streamlines to intersect. In our models the central body provides a point-mass potential, and the only non-point-mass contribution to the potential (i.e., the part that causes precession) comes from the disk itself. The precession timescale is approximately $(M_{\mathrm {bh}}/M_{\mathrm d}) t_{\mathrm {orb}}$, but the disk becomes unstable to fragmentation after approximately 1–2 local cooling times. Consequently our disks become unstable to fragmentation before they are able to liberate a significant fraction of their circularization energy (as seen in Fig.\[fig:b3\_b5\]), and the eccentricity has no significant effect on the overall stability criteria. 
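The competition between precession and fragmentation can be made concrete with a rough numerical comparison, for the low-resolution parameters $M_{\mathrm d}/M_{\mathrm {bh}}=0.2$ and $\beta=3$ (an order-of-magnitude sketch only):

```python
import math

# Timescales in units of the local orbital period t_orb:
t_precess = 1.0 / 0.2                      # ~ (M_bh / M_d) t_orb = 5 t_orb
t_fragment = 2.0 * 3.0 / (2.0 * math.pi)   # ~ 2 cooling times, t_cool = beta / Omega
```

Fragmentation (within roughly one orbit) comfortably precedes precession (about five orbits), so little of the circularization energy is liberated before the disk fragments.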
We find that a disk with uniform eccentricity is just as likely to be unstable to fragmentation as a circular disk with the same thermodynamic properties, but note that our disks are artificially constructed to have such a uniform eccentricity. The question of whether or not these initial conditions are realistic is discussed in Section \[sec:dis\_ics\]. ### The effects of disk eccentricity on the fragmentation process ![Evolution of the number of bound clumps in simulations with different eccentricity: the time axis is normalised to the point where the first bound clump forms in each simulation. The solid black ($e=0$) and red ($e=0.5$) lines show the mean of three different simulations; the (noisier) dashed green line ($e=0.25$) is for a single simulation only. The eccentric disks clearly form fewer bound clumps than are seen in the circular case. The decline in clump number at $t\simeq15$ in the circular disks is due to mergers.[]{data-label="fig:number_low"}](f2.ps){width="\hsize"} ![Evolution of the mass of a typical clump, formed in a disk with $e=0.5$ (simulation [ecc.50]{}). The horizontal axis shows the orbital phase, with the zero-point taken at pericenter; the vertical axis shows the mass bound to the clump, in units of the mass of the central black hole. The plot begins at the point where the clump first becomes bound, and ends where the simulation stops (giving a total period of approximately one orbit for this particular clump). The decrease in mass caused by tidal stripping during pericenter passage is clearly visible. The small variations in mass close to apocenter arise when the clump passes through local enhancements in the gas density, caused by global spiral density waves.[]{data-label="fig:clump39"}](f3.ps){width="\hsize"} While it seems that eccentricity has little or no effect on the overall stability of a self-gravitating disk, our simulations do show that the details of the fragmentation process are strongly affected by eccentricity. 
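The tidal-stripping mechanism invoked here can be illustrated with the standard Hill-radius estimate (a back-of-the-envelope scaling, not a quantity computed in the simulations):

```python
def hill_radius(r, m_clump, m_bh):
    """Standard Hill (tidal) radius of a clump of mass m_clump at
    instantaneous distance r from a central mass m_bh."""
    return r * (m_clump / (3.0 * m_bh)) ** (1.0 / 3.0)

# For a clump on an a = 1, e = 0.5 orbit (masses in units of M_bh), the
# tidal radius shrinks by a factor (1 - e)/(1 + e) = 1/3 between apocenter
# and pericenter, so gas bound near apocenter can be stripped at pericenter.
a, e, m_clump = 1.0, 0.5, 1.0e-5
r_h_apo = hill_radius(a * (1.0 + e), m_clump, 1.0)
r_h_peri = hill_radius(a * (1.0 - e), m_clump, 1.0)
```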
This is clearly seen in Fig.\[fig:number\_low\], which shows the number of bound clumps as a function of time for simulations with different eccentricity. Although all of the disks with $\beta=3$ fragment, clump formation is clearly suppressed in the cases where $e\ne0$, suggesting that disk eccentricity has a significant influence on the accretion of gas on to individual clumps. This is presumably due to the variable tidal forces around an eccentric orbit: gas which is bound to a particular clump at apocenter is not necessarily bound at pericenter. In the simulations with $e\ne0$, the most weakly-bound clumps do not survive their first pericenter passage, and instead are sheared apart by tidal forces. The simulation with a variable cooling time (simulation [ecc.50var]{}, not shown in Fig.\[fig:number\_low\]) shows less pronounced tidal stripping than those with constant cooling time (as the cooling is faster near pericenter), but still results in many fewer fragments than seen in the circular disks. The effect of tidal forces is highlighted further in Fig.\[fig:clump39\], which shows the mass of a particular clump in one of the $e=0.5$ runs during the first orbit following its formation. The mass of the clump is approximately constant around much of the orbit, but decreases by around 70% as the clump passes pericenter. This clearly demonstrates that the variable tidal forces in an eccentric disk have a strong effect on the accretion process, and that tidal stripping of this kind must be taken into account when modelling this process. High-resolution runs -------------------- In order to study the effect of eccentricity on the fragmentation process in more detail, we conducted two additional simulations with much higher numerical resolution. As described in Section \[sec:sim\_dets\], these simulations used 5 million SPH particles and $M_{\mathrm d}/M_{\mathrm {bh}}=0.05$, resulting in an SPH particle mass of $10^{-8}M_{\mathrm {bh}}$. 
In addition, we imposed a minimum SPH smoothing length on these simulations [@monaghan92; @nayak07], limiting collapse at the highest densities and allowing us to follow the simulations for much longer after the formation of the first fragments. The thinner disk used here results in many more fragments being formed, and following the simulations for longer allows us to construct MFs of the formed clumps with good number statistics. We note that the recent work of @pd07 found, using an isothermal equation of state, that the artificial viscosity used in SPH can increase the survival times of weakly-bound clumps in simulations such as these. In principle this could influence our results, so we adopt a resolution limit which ensures that all our clumps are well-resolved, and this should minimize the effects of artificial dissipation on our derived mass functions. We use the algorithm described in Section \[sec:res\_low\] above to identify clumps, but reject clumps with fewer than 192 bound SPH particles (i.e., $3N_{\mathrm {ngb}}$) from our subsequent analysis as such clumps are only marginally resolved. Scaled to the GC system, this corresponds to a resolution limit of $\simeq6$M$_{\odot}$ in mass and $\simeq20$AU in length (assuming a black hole mass of $3\times10^6$M$_{\odot}$ and a characteristic disk radius of 0.1pc). We compute the number of bound clumps for every output snapshot (every 0.05 time units), but for computational reasons we only evaluate the MF for a handful of these snapshots (every 1 time unit, starting from the point where the first bound clump forms). ### Circular disk ![Final MF for the circular disk, measured at the point where the simulation stopped ($t=12.7$). The solid histogram shows the measured MF; the dashed lines show the Salpeter slope and best-fitting log-normal function (with a width of 0.79dex). 
The vertical dotted line denotes the resolution limit.[]{data-label="fig:MF_circ"}](f4.ps){width="\hsize"} The reference simulation of a circular disk ([circhr]{}) ran for 12.7 inner orbital periods. The first bound fragment was formed at $t=6.2$, and most of the fragmentation occurred at radii $<2$ (simply because the dynamical time in the outer disk is too long for fragmentation to occur). At the point where the calculation stopped it had formed 313 bound clumps, of which 263 were above the (mass) resolution limit. Within $r=2$ the bound clumps account for 40.5% of the total disk mass, suggesting that much of the gas has been accreted into bound objects and the resulting MF is indeed representative. The final MF is shown in Fig.\[fig:MF\_circ\]. The slope is close to Salpeter at high masses, but the MF shows a well-resolved turnover at $\simeq 10^{-5}M_{\mathrm {bh}}$ and is in fact well-fit by a @ms79 log-normal distribution function, with a width of $\simeq 0.8$dex and a characteristic mass of $1.1\times10^{-5}M_{\mathrm {bh}}$. Note that this is essentially the Toomre mass, which for this simulation is $\pi \Sigma H^2 \simeq 10^{-5}M_{\mathrm {bh}}$: this corresponds to a characteristic mass of $\simeq 30$M$_{\odot}$ when scaled to the GC. The MF is clearly not consistent with the Salpeter slope at even moderate stellar masses, and the low-mass turnover is consistent with previous numerical simulations [@nayak07], and also with observations of the GC stellar disks [@ns05; @paumard06]. Thus we are confident that our reference model is both numerically accurate and physically plausible, and now look to the differences that result when the disk is eccentric.

### Eccentric disk

![Snapshot of disk midplane density in the central region of the high-resolution run with $e=0.5$ (simulation [ecchr]{}). The snapshot is taken at the point where the simulation ended, $t=13.5$, and many bound clumps are clearly visible.
The major axis of the disk was again initially aligned with the $x$-axis: the less massive disk results in considerably less precession than was seen in the low-resolution runs.[]{data-label="fig:HR_snap"}](f5.ps){width="\hsize"} The simulation [ecchr]{} ran for 13.5 inner orbital periods, and the first fragment formed at $t=6.75$. As in the circular disk, the fragmentation was confined to $a\lesssim2$, and both simulations ran for $\simeq7$ inner orbital periods after the formation of the first fragment. As we saw in the low resolution runs, the eccentric disk formed fewer clumps: at the point where this calculation stopped it had formed 189 clumps, of which 178 had masses above the resolution limit. Within $a=2$ the bound clumps had accreted 26.5% of the available gas. The clumps follow orbits very similar to the disk: while there is some scatter in the instantaneous values of the clump eccentricities, the rms eccentricity of all of the bound clumps at the end of the simulation was $\simeq 0.49$ (essentially equal to the disk eccentricity of 0.5). The final midplane density profile is shown in Fig.\[fig:HR\_snap\]. ### Mass function evolution In order to look at the effects of eccentricity on the fragmentation and accretion processes, it is instructive to look at the evolution of the MF in both the circular and eccentric disks. Fig.\[fig:number\_high\] shows the evolution of both clump number and accreted mass in the inner part ($a\le2$) of both the circular and eccentric disks. Both disks show a “two-phase” formation process: a fragmentation phase, where the clump number increases rapidly and most of the bound mass is in small clumps; followed by an accretion phase, where the clump number remains approximately constant but the clumps continue to accrete gas from their surroundings. Once again we see that the eccentric disk forms fewer clumps than the circular disk, and we again attribute this to the tidal destruction of weakly-bound clumps in the eccentric disk. 
Moreover, the accretion of mass is much slower in the eccentric disk, suggesting that tidal forces affect the accretion process across the full range of clump masses. This is reinforced by the measured MFs: the eccentric disk MF consistently shows a larger characteristic clump mass than seen in the circular disk. ![Evolution of bound clumps in the high-resolution simulations. The solid lines show the evolution of the number of bound clumps (left-hand scale), while the points show the fraction of disk mass within $a\le2$ accreted into bound clumps (right-hand scale). The dotted lines between the points are added as a guide to the eye. As in Fig.\[fig:number\_low\], the time axis is normalized to the point where the first bound clump forms in each simulation.[]{data-label="fig:number_high"}](f6.ps){width="\hsize"} The MFs are compared directly in Fig.\[fig:MF\_comp\], measured at a point where both simulations have accreted the same fraction of gas into bound clumps. The eccentric disk MF is taken at the end of the simulation (when 26.5% of the disk mass has been accreted into clumps), and the “reference” circular disk MF is taken 4.4 time units after the formation of the first clump (26.8% accreted). Both are reasonably well-fit by log-normal distribution functions: the best-fitting log-normal in the circular case has a characteristic mass of $6.6\times10^{-6}M_{\mathrm {bh}}$ and a width of 0.6 dex; in the eccentric case the best-fitting characteristic mass is $1.2\times10^{-5}M_{\mathrm {bh}}$, and the width is 0.5 dex. A KS test gives a probability of $7\times10^{-6}$ that the two MFs are drawn from the same underlying distribution, and similar comparisons at earlier times in the simulations (with the same accreted mass fractions) all result in probabilities of $<1$% that the two MFs are the same. Thus we conclude that the disk eccentricity alters the accretion process significantly, primarily because of tidal effects around the eccentric orbit.
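The two-sample comparison uses the standard Kolmogorov-Smirnov statistic, the maximum distance between the two empirical CDFs; a self-contained pure-Python sketch (not the analysis code used for the paper):

```python
import bisect

def ks_statistic(sample1, sample2):
    """Two-sample Kolmogorov-Smirnov statistic D: the maximum absolute
    difference between the empirical CDFs of the two samples."""
    s1, s2 = sorted(sample1), sorted(sample2)
    n1, n2 = len(s1), len(s2)
    d = 0.0
    for x in s1 + s2:  # the maximum is attained at a data point
        cdf1 = bisect.bisect_right(s1, x) / n1
        cdf2 = bisect.bisect_right(s2, x) / n2
        d = max(d, abs(cdf1 - cdf2))
    return d
```

The quoted probability then follows from evaluating the Kolmogorov distribution at $D\sqrt{n_1 n_2/(n_1+n_2)}$.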
Mass is tidally stripped from bound clumps during pericenter passage, with two notable effects. Firstly, the most weakly-bound, lowest-mass clumps fail to survive pericenter passage, resulting in a relative dearth of low-mass objects in the clump MF. Secondly, the overall accretion rate is slowed, suggesting that an eccentric disk will take significantly longer than a circular one to accrete all of its mass into bound clumps. Discussion {#sec:dis} ========== ![Comparison of clump MFs in the circular (black) and eccentric (red) disks. As in Fig.\[fig:MF\_circ\], the dotted line shows the resolution limit, and the dashed line the Salpeter slope. The MF for the eccentric disk is measured at the end of the simulation; in the circular case it is measured where the same fraction of the disk mass has been accreted into bound clumps. The eccentric disk clearly forms fewer low-mass objects than the circular disk, and a KS test rejects the possibility that the two MFs are drawn from the same underlying distribution.[]{data-label="fig:MF_comp"}](f7.ps){width="\hsize"} While our simulations are scale-free, they have obvious applications to the formation of the stellar disk(s) observed at the GC. Observations of the GC stellar disks find that the stellar MF must be much more “top-heavy” than the standard Salpeter MF. The relatively low X-ray flux from the GC region suggests a significant deficit of low-mass ($\lesssim 5$M$_{\odot}$) stars compared to local star-forming regions [@ns05], and estimates of the stellar MF from the $K$-band luminosity function suggest a MF slope that is 1.0–1.5 dex flatter than the Salpeter slope [@paumard06]. Also, given that the total stellar mass in the disk(s) is $\sim10^4$–$10^5$M$_{\odot}$ [e.g., @ndcg06; @paumard06], the large number ($\gtrsim 100$) of observed massive O- & WR-stars suggests a characteristic stellar mass of $\simeq 10$–50M$_{\odot}$, much larger than that observed in the solar neighbourhood.
This is consistent with our results [and those of previous studies such as @nayak07], which predict a characteristic stellar mass of $\simeq 30$M$_{\odot}$ (when scaled to the GC), and a MF that is significantly deficient in low-mass objects. Previous studies have not been able to reproduce all of the observed properties of the disks (notably the eccentric stellar orbits), but our results suggest that fragmentation of moderately eccentric accretion disks can form disks of stars with masses and orbits consistent with those observed. Our simulations suggest that eccentric disks fragment to produce MFs that are slightly more top-heavy than expected in the circular case (due to the tidal destruction of the lowest-mass clumps), but not so top-heavy as to be inconsistent with the observed stellar populations. Moreover, previous simulations have shown that changes in the thermodynamic properties of the disk, such as the cooling rate, can alter the MF at a level at least as significant as that seen in our simulations [@nayak07]. Consequently, we are satisfied that our results are robust, within the range of parameter space explored by our simulations. However, there are three major areas in which our simulations differ from real systems, and we discuss each of these in turn below. In addition, we note that angular momentum transport (by spiral waves) is negligible in the simulations that exhibit fragmentation, as the fragmentation and subsequent accretion of gas occurs on the dynamical timescale (which is much shorter than the timescale for angular momentum transport). It has long been thought that angular momentum transport due to gravitational instabilities can be responsible for significant accretion in disks around black holes [e.g., @sfb89].
This is indeed likely, but our results confirm those of @nayak07, that star formation due to gravitational instability in accretion disks occurs on a timescale that is significantly shorter than the timescale for angular momentum transport. Consequently, if star formation occurs at all radii in the disk, it seems likely that little or no gas will be accreted on to the central black hole. We note, however, that real disks around black holes are likely only unstable to fragmentation in their outer regions [e.g., @levin07; @kp07], and that whether or not gravitational instability in such disks results in fragmentation or transport depends critically on the issues of cooling discussed in Section \[sec:dis\_cool\]. Our simulations are most applicable to the GC stellar disk(s), but their scale-free nature allows us to consider the results in terms of other systems also. It has been suggested that the young stellar population at the GC may be characteristic of more generic episodes of black hole (AGN) activity in other galaxies [e.g., @lb03; @tqm05], and this has implications for models of AGN accretion and the growth of supermassive black holes. The consequences of our models for AGN feeding depend on the disk thermodynamics (see Section \[sec:dis\_cool\]), but we note in passing that our models fit naturally into, for example, the black hole growth scenario presented by @kp06 [@kp07]. The applicability of our results to protoplanetary discs, however, is less clear. It seems likely that protoplanetary disks are gravitationally unstable at early times in their evolution [e.g., @durisen_ppv], but it is less clear that such disks possess significant eccentricities. If eccentric, self-gravitating disks do exist around young stars then our results could easily be applied to such systems, but at present there is no strong evidence for the existence of such systems. 
Initial configuration {#sec:dis_ics} --------------------- The first area of concern is in our initial conditions. We choose an initial configuration that is marginally stable, and allow the disks to cool into instability. This approach has been followed by previous studies [e.g., @rice03; @rice05], and recent work suggests that use of these artificial initial conditions does not affect the fragmentation boundary in circular disks [@clarke07]. What is less clear, however, is whether or not our choice of uniform angles of pericenter is similarly valid. As discussed in Section \[sec:intro\], in systems like the GC we expect the disk to form via some type of accretion event, and while previous simulations [e.g., @sanders98] have found that an eccentric configuration can result, there is no particular reason to expect a uniform eccentricity. If the disk does not have uniform eccentricity then neighboring orbital streamlines will intersect, and the resulting shocks have the potential to provide significant additional heating. Such heating may act to stabilize the disk against fragmentation, but a realistic treatment of this problem requires full (magneto-)hydrodynamic simulations of the infall and capture of a gas cloud by the black hole, and such modeling is clearly beyond the scope of this work. What we have demonstrated, however, is that if an eccentric disk is able to evolve towards a configuration where it satisfies the (circular) conditions for fragmentation, then it will fragment before any such shock heating is able to stabilize the disk. Cooling and thermodynamics {#sec:dis_cool} -------------------------- The second major area of uncertainty in our models is our simplified cooling prescription. Our cooling prescription (Section \[sec:cooling\]) parametrizes the cooling rate by semi-major axis only, and while this is ideal for the sort of numerical experiments conducted here, it is far from realistic.
In the case of protoplanetary disks, the subject of heating and cooling rates in gravitationally unstable disks is an area of on-going debate [e.g., the review by @durisen_ppv], but in the case of a disk around a black hole we can justify the approximate form of our cooling prescription by a few simple scaling arguments. Similar arguments have been discussed previously by other authors [e.g., @lb03; @jg03; @gt04; @levin07; @rafikov07], but we re-state them here for completeness. A self-gravitating disk has $Q\simeq 1$, and therefore if we know the disk surface density we can estimate the sound speed, and therefore the temperature, in the disk. If we scale our model to the GC system (by adopting a characteristic radius of 0.1pc and $M_{\mathrm {bh}}=3\times10^6$M$_{\odot}$), we find a typical disk surface density of $\sim 100$g cm$^{-2}$ and a corresponding disk temperature of $\sim 100$K (estimated at $a=2$, but these quantities do not vary significantly over the limited radial range of our simulations). At such temperatures the dominant source of opacity is dust grains, and typical values for the Rosseland mean opacity are $\kappa \simeq 1$–10cm$^2$g$^{-1}$ [e.g., @semenov03]. The opacity does not change dramatically at temperatures below that at which the dust grains sublime (typically $\simeq 1500$K). As seen above, scaling our models to the GC system (assuming that the initial gas mass in the disk is comparable to the observed stellar mass) results in a much lower temperature than this, so significant variations in the disk opacity (such as the opacity gap discussed by, e.g., @jg03 [@tqm05]) seem unlikely. Only if star formation in such disks is very inefficient, contrary to what is seen in our (and other) simulations, would we expect the disk temperatures to be high enough for dust sublimation to be significant. 
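A rough numerical version of this scaling argument follows. This is an order-of-magnitude sketch only: the disk mass $\sim10^4$M$_{\odot}$, the evaluation radius, and the mean molecular weight $\mu=2.3$ are assumptions layered on top of the numbers quoted above, and with these round inputs the temperature comes out at a few hundred K rather than exactly $\sim100$K.

```python
import numpy as np

# CGS constants
G     = 6.674e-8    # gravitational constant [cm^3 g^-1 s^-2]
k_B   = 1.381e-16   # Boltzmann constant [erg K^-1]
m_H   = 1.673e-24   # hydrogen mass [g]
M_sun = 1.989e33    # solar mass [g]
pc    = 3.086e18    # parsec [cm]

# GC scaling adopted in the text: M_bh = 3e6 M_sun, characteristic
# radius 0.1 pc.  Disk mass ~1e4 M_sun and mu = 2.3 (molecular gas)
# are assumptions made for this sketch.
M_bh = 3e6 * M_sun
R    = 0.1 * pc
M_d  = 1e4 * M_sun
mu   = 2.3

# Surface density from spreading the disk mass over pi R^2.
Sigma = M_d / (np.pi * R**2)

# Marginal stability, Q = c_s Omega / (pi G Sigma) ~ 1, fixes the
# sound speed and hence the (isothermal) gas temperature.
Omega = np.sqrt(G * M_bh / R**3)
c_s   = np.pi * G * Sigma / Omega
T     = mu * m_H * c_s**2 / k_B

print(f"Sigma ~ {Sigma:.0f} g cm^-2, c_s ~ {c_s / 1e5:.1f} km s^-1, T ~ {T:.0f} K")
```

The key point survives the crudeness of the estimate: the equilibrium temperature sits far below the $\simeq 1500$K dust-sublimation threshold, so dust remains the dominant opacity source.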
Therefore, we expect such a disk to be marginally optically thick[^4], and the resulting cooling timescale is typically comparable to the orbital period [e.g., @rafikov07]. However, we also note that our cooling model takes no account of changes in the cooling function as the local density increases (due to gravitational collapse). A more realistic model would make use of an opacity-based cooling model that accounts for the density structure of the disk, and would also adopt a somewhat stiffer equation of state in the collapsing clumps. The application of such models to protoplanetary disks is still mired in controversy [e.g., @boley06; @mayer07], but the physical conditions in a GC disk are known to be much more conducive to fragmentation than those in protoplanetary disks [see e.g., the discussion in @rafikov07]. Consequently we expect that our disks will cool rapidly enough to be unstable to fragmentation. Moreover, the uncertainties discussed here are not specific to our models of eccentric disks, so we do not expect our results regarding the differences between eccentric and circular disks to be affected by these simplifications. A further simplification is that our cooling prescription depends only on the local conditions in the disk, and is not influenced at all by the global disk structure. Previous work using this scale-free, local cooling prescription has found that it gives rise to instabilities that are well-approximated by a local description [@lr04; @lr05], and therefore the global disk structure is unlikely to have a significant effect on the spatial distribution of clumps in our simulations. However, in a real disk the cooling time is not scale-free and, in general, gravitational instabilities cannot be well-described locally [@bp99]. Numerical simulations of protoplanetary disks have found that transport can be dominated by low-order global modes in the non-scale-free case [@boley06; @cai07; @durisen_ppv].
It seems likely, however, that the local approximation will be more accurate for the very thin disks considered here than in the case of protoplanetary disks (where $M_{\mathrm d}/M_* \sim 0.1$). What is less certain is whether or not a realistic system is able to evolve into the configuration discussed here. The virial temperature of a gas cloud that falls towards the GC from, for example, the circumnuclear ring is orders of magnitude larger than the disk temperatures considered here [e.g., @morris93; @sanders98]. An infalling cloud may not virialize before it forms a coherent disk structure, but there is clearly sufficient energy available to heat the gas significantly. At temperatures from $\simeq 1000$–$10^4$K the typical Rosseland opacities are much smaller than those at $\sim 100$K, as the primary coolants (dust, and also molecular species) are destroyed at such temperatures [@semenov03]. Thus, we expect the cooling rates at higher temperatures to be much lower than those considered here, and it may be difficult for such a disk to cool into instability in a more realistic model [see also the discussion in @jg03]. Our simple thermodynamic model is viable once the disk reaches the state considered here but, as discussed above, we do not address the issue of whether or not a real system will be able to reach such a configuration. Accretion physics at stellar scales ----------------------------------- The last major area neglected by our simulations is the small-scale physics of star formation. Our simulations have sufficient numerical resolution to treat the fragmentation process correctly, but when scaled to the GC our minimum length resolution is a few tens of AU. This is 2–3 orders of magnitude larger than the typical stellar radius, and we make no attempt to model the physics of star formation on small scales. Essentially our MFs are MFs of pre-stellar cores, and as such they may not be representative of the stellar MFs that result from disk fragmentation.
However, we note that the fragmentation into bound clumps occurs on orbital timescales, and the results presented in Fig.\[fig:number\_high\] suggest that all of the disk gas will be bound to clumps within $\simeq10$ orbital periods. The orbital period of the GC stellar disks is $\sim 1000$yr, but the typical timescales for such cores to collapse to form stars are $\sim 10^4$–$10^5$yr [e.g., @mt03]. Consequently we do not expect stellar feedback processes to have a significant effect on the process of disk fragmentation. However, individual bound clumps may well form more than one star each, and we also expect to see some mergers between clumps [@levin07]. More detailed simulations, using higher numerical resolution and more realistic thermal physics, will be required to address these issues further. Summary {#sec:summary} ======= We have constructed hydrodynamic models of gravitationally unstable disks, and investigated the effects of eccentricity on disk fragmentation. We find that the fragmentation boundary is unaffected by eccentricity, and that conditions which lead to fragmentation in circular disks do likewise in eccentric disks. However, we find that the fragmentation process is altered by eccentricity, as the variable tidal forces around an eccentric orbit have a strong effect on the accretion of gas on to bound fragments. We find that the formation of low-mass fragments is suppressed, and that the growth (via accretion) of more massive fragments proceeds much more slowly in an eccentric disk than in a circular one. We consider our results in the context of the GC stellar disks, and find good agreement between the MFs in our simulations and those observed. We find that fragmentation of eccentric accretion disks is a viable mechanism for the formation of the GC stellar disks, with the resulting stellar MF and orbits consistent with observations. 
We thank Volker Springel for providing us with some subroutines that are not part of the public version of [gadget2]{}, and thank an anonymous referee for several valuable comments. This work made use of NSF (NCSA) super-computing allocations AST070006N and AST070011, and was supported by NASA under grants NAG5-13207, NNG04GL01G and NNG05GI92G from the Origins of Solar Systems, Astrophysics Theory, and Beyond Einstein Foundation Science Programs, and by the NSF under grants AST–0307502 and AST–0407040. Alexander, R.D., Begelman, M.C., Armitage, P.J., 2007, ApJ, 654, 907 Balbus, S.A., & Papaloizou, J.C.B., 1999, ApJ, 521, 650 Balsara, D.S., 1995, J. Comput. Phys., 121, 357 Boley, A.C., Mejía, A.C., Durisen, R.H., Cai, K., Pickett, M.K., & d’Alessio, P., 2006, ApJ, 651, 517 Boley, A.C., Hartquist, T.W., Durisen, R.H., Michael, S., 2007, ApJ, 656, L89 Beloborodov, A.M., Levin, Y., Eisenhauer, F., Genzel, R., Paumard, T., Gillessen, S., Ott, T. 2006, ApJ, 648, 405 Cai, K., Durisen, R.H., Boley, A.C., Pickett, M.K., Meija, A.C., 2007, ApJ in press (arXiv:0706.4046) Clarke, C.J., Harper-Clark, E., Lodato, G., 2007, MNRAS, in press (arXiv:0708.0742) Cuadra, J., Nayakshin, S., Springel, V., di Matteo, T., 2006, MNRAS, 366, 358 Durisen, R.H., Boss, A.P., Mayer, L., Nelson, A.F., Quinn, T., Rice, W.K.M., 2007, in Reipurth, B., Jewitt, D., Keil, K., eds, [*Protostars & Planets V*]{}, Univ. Arizona Press, Tuscon, 607 Gammie, C.F., 2001, ApJ, 553, 174 Genzel, R., et al. 2003, ApJ, 594, 812 Ghez, A.M., Klein, B.L., Morris, M., Becklin, E.E. 1998, ApJ, 509, 678 Ghez, A.M., Salim, S., Hornstein, D., Tanner, A., Lu, J.R., Morris, M., Becklin, E.E., Duchêne, G. 2005, ApJ, 620, 744 Goodman, J., & Tan, J.C. 2004, ApJ, 608, 108 Johnson, B.M., & Gammie, C.F., 2003, ApJ, 597, 131 King, A.R., Pringle, J.E., 2006, MNRAS, 373, L90 King, A.R., Pringle, J.E., 2007, MNRAS, 377, L25 Levin, Y., & Beloborodov, A.M. 
2003, ApJ, 590, L33 Levin, Y., 2007, MNRAS, 374, 515 Lodato, G., Rice, W.K.M., 2004, MNRAS, 351, 630 Lodato, G., Rice, W.K.M., 2005, MNRAS, 358, 1489 Lu, J., Ghez, A.M., Hornstein, S.D., Morris, M., Matthews, K., Thompson, D.J., Becklin, E.E., 2006, J. Phys. Conf. Series, 54, 279 Mayer, L., Lufkin, G., Quinn, T., & Wadsley, J., 2007, ApJ, 661, L77 McKee, C.F., & Tan, J.C., 2003, ApJ, 585, 850 Miller, G.E., & Scalo, J.M., 1979, ApJS, 41, 513 Monaghan, J.J., 1992, ARA&A, 30, 543 Monaghan, J.J., Gingold, R.A., 1983, J. Comput. Phys., 52, 374 Morris, M., 1993, ApJ, 408, 496 Nayakshin, S. 2006, MNRAS, 372, 143 Nayakshin, S., & Cuadra, J. 2005, A&A, 437, 437 Nayakshin, S., & Sunyaev, R. 2005, MNRAS, 364, L23 Nayakshin, S., Dehnen, W., Cuadra, J., Genzel, R. 2006, MNRAS, 366, 1410 Nayakshin, S., Cuadra, J., Springel, V., 2007, MNRAS, 379, 21 Nelson, A.F., 2006, MNRAS, 373, 1039 Paumard, T., et al. 2006, ApJ, 643, 1011 Pickett, M.K., & Durisen, R.H., 2007, ApJ, 654, L155 Polyachenko, V.L., & Shukman, I.G. 1977, Soviet Ast. Letters, 3, 134 Price, D.J., 2007, PASA, in press (arXiv:0709.0832) Rafikov, R.R., 2007, ApJ, 662, 642 Rice, W.K.M., Armitage, P.J., Bate, M.R., Bonnell, I.A., 2003, MNRAS, 339, 1025 Rice, W.K.M., Lodato, G., Armitage, P.J., 2005, MNRAS, 364, L56 Sanders, R.H., 1998, MNRAS, 294, 35 Semenov, D., Henning, Th., Helling, Ch., Ilgner, M., Sedlmayr, E., 2003, A&A, 410, 611 Shlosman, I., & Begelman, M.C., 1987, Nature, 329, 801 Shlosman, I., & Begelman, M.C., 1989, ApJ, 341, 685 Shlosman, I., Frank, J., Begelman, M.C., 1989, Nature, 338, 45 Springel, V., 2005, MNRAS, 364, 1105 Thompson, T.A., Quataert, E., Murray, N., 2005, ApJ, 630, 167 Toomre, A., 1964, ApJ, 139, 1217 [^1]: See [http://www.ncsa.uiuc.edu/]{} [^2]: The exact time at which fragmentation occurs is somewhat arbitrary, as it merely reflects the time required for the initial configuration to cool into instability (the disk initially has $Q\simeq 20$ in the inner regions).
However, the fact that different random realizations of the initial conditions result in fragmentation at the same time supports our assertion that the simulations are numerically converged. [^3]: Visualizations of our SPH simulations were created using [splash]{}: see @price07 for details. [^4]: Note that this also verifies our earlier assumption that the cooling rate is unlikely to change significantly around the orbit.
--- abstract: 'Super-diffusion, characterized by a spreading rate $t^{1/\alpha}$ of the probability density function $p(x,t) = t^{-1/\alpha} p \left( t^{-1/\alpha} x , 1 \right)$, where $t$ is time, may be modeled by space-fractional diffusion equations with order $1 < \alpha < 2$. Some applications in biophysics (calcium spark diffusion), image processing, and computational fluid dynamics utilize integer-order and fractional-order exponents beyond this range ($\alpha > 2$), known as high-order diffusion, or hyperdiffusion. Recently, space-time duality, motivated by Zolotarev’s duality law for stable densities, established a link between time-fractional and space-fractional diffusion for $1 < \alpha \leq 2$. This paper extends space-time duality to fractional exponents $1<\alpha \leq 3$, and several applications are presented. In particular, it will be shown that space-fractional diffusion equations with order $2<\alpha \leq 3$ model sub-diffusion and have a stochastic interpretation. A space-time duality for tempered fractional equations, which models transient anomalous diffusion, is also developed.' author: - 'James F. Kelly' - 'Mark M. Meerschaert' title: 'Space-Time Duality and High-Order Fractional Diffusion' --- Introduction ============ Non-Fickian, or anomalous, diffusion is observed in many areas of physics, including hydrology [@deng2006parameter; @phanikumar2007separating; @haggerty2002power], turbulent transport [@del2006fractional], and biophysics [@fedotov2008non; @jeon2011vivo]. Anomalous *super-diffusion* is characterized by a spreading rate $t^{1/\alpha}$ of the probability density function $p(x,t) = t^{-1/\alpha} p \left( t^{-1/\alpha} x , 1 \right)$ that is faster than the classical $t^{1/2}$ rate predicted by Fickian diffusion [@metzler2000random], where $t$ is time, while anomalous *sub-diffusion* is characterized by a spreading rate that is slower than $t^{1/2}$.
Fractional PDEs (FPDEs), where local time- and space-derivatives are replaced by non-local fractional derivatives, are often used to study anomalous diffusion. FPDEs with a $\gamma$-fractional derivative in time and an $\alpha$-fractional derivative in space lead to a scaling rate of $t^{\gamma/\alpha}$. Sub-diffusion may be modeled by a time-fractional derivative (e.g., Caputo derivative) with order $\gamma<1$ and a second derivative in space ($\alpha =2$) [@metzler2000random], whereas super-diffusion may be modeled by a space-fractional derivative (e.g., Riemann-Liouville derivative) of order $1<\alpha <2$ and a first order derivative in time ($\gamma =1$) [@metzler2004restaurant]. These FPDEs may be derived from a continuous time random walk (CTRW) framework: time-fractional diffusion equations involve long-waiting times between particle jumps, where the chance of waiting longer than some time $t>0$ is proportional to $t^{-\gamma}$, while space-fractional diffusion equations involve long particle jumps, where the chance of jumping longer than some distance $x>0$ is proportional to $x^{-\alpha}$. Transient anomalous sub- and super-diffusion, which transition from early-time anomalous behavior to late-time diffusive behavior, may be modeled with tempered time-fractional [@meerschaert2008tempered] and space-fractional [@baeumer2010tempered; @cartea2007] derivatives, respectively. Recently, we have established a link between time-fractional and space-fractional diffusion equations, called space-time duality [@baeumer2009space; @kellyduality]. Zolotarev [@zolotarev1961expression; @zolotarev1986one] first proved a duality law between stable densities with indices $1<\alpha \leq 2$ and $1/2 \leq 1/\alpha < 1$. The duality principle was applied to the space-fractional diffusion equation in [@baeumer2009space], and later to the space-fractional advection-dispersion equation in [@kellyduality]. 
The latter study was motivated by a controversy in river-flow hydrology: both space-fractional dispersion (diffusion) equations and time-fractional PDEs provide reasonably good fits to breakthrough curve (BTC) measurements [@kelly_fracfit]. From a stochastic point of view, space-time duality established a connection between long, power-law waiting times and long negative jumps, thereby justifying a space-fractional PDE for modeling retention of contaminant particles. In short, a particle that rests while the plume moves downstream ends up in the same position as a particle that moves downstream, but then makes a long upstream jump. In both [@baeumer2009space] and [@kellyduality], the equivalence was restricted to space-fractional PDEs modeling super-diffusion $(1 < \alpha < 2)$. The equivalent time-fractional equation has order $\gamma = 1/\alpha$. Space-fractional derivatives of order $\alpha > 2$ have recently been used to model sub-diffusion of calcium sparks in cardiac myocytes by Chen et al. [@chen2013] and Tan et al. [@tan2007anomalous], exhibiting good agreement with experimental data. This sub-diffusion results from the multi-scale nature of cytoplasm, which has polymer networks and complex macro-molecules that immobilize diffusing particles. Recall that time-fractional PDEs are often used to model sub-diffusion since the time-fractional Caputo derivative results from long waiting times in the CTRW formalism. A question arises: can the space-fractional model with order $2<\alpha \leq 3$ proposed in [@tan2007anomalous; @chen2013] be linked with time-fractional [@fedotov2008non] and CTRW [@jeon2011vivo] diffusion models also used in biophysics? Space-fractional exponents with $\alpha >2$ (high-order diffusion, or hyperdiffusion) are also found in fluid mechanics [@frisch2008hyperviscosity], image processing [@wei1999generalized], and transport of cosmic rays [@tawfik2018].
The goal of this paper is to extend space-time duality to fractional (and integer) spatial derivatives of order $1<\alpha \leq 3$. Our duality result shows how both super-diffusion and sub-diffusion can be modeled by a space-fractional PDE. Then we illustrate the method with applications to the time-fractional diffusion wave equation, multi-dimensional time-changed Brownian motion, and tempered fractional diffusion. In Section II, we briefly review the space-fractional diffusion equation and hyperdiffusion. Section III generalizes the space-time duality argument presented in [@kellyduality] to all space-fractional exponents $1 <\alpha \leq 3$. Section IV connects solutions of the time-fractional diffusion-wave equation to a corresponding system of space-fractional diffusion equations. A governing equation for subordinated multi-dimensional Brownian motion is proposed in Section V using a vector space-fractional PDE. Section VI extends space-time duality to tempered fractional diffusion, followed by conclusions in Section VII. Space-Fractional Diffusion ========================== The two-sided space-fractional diffusion equation is given by [@meerschaert2012 Equation (1.26)] $$\frac{\partial}{\partial t} u(x,t) = \left(\frac{1 + \theta}{2}\right) C \frac{\partial^{\alpha}}{\partial x^{\alpha}} u(x,t)+ \left(\frac{1 - \theta}{2}\right) C \frac{\partial^{\alpha}}{\partial (-x)^{\alpha}} u(x,t) \label{fde}$$ where $C$ is a fractional diffusion coefficient, the fractional index is $\alpha >1$, and the skewness is $\theta \in[-1,1]$. The positive (left) and negative (right) Riemann-Liouville (RL) fractional derivatives are defined by [@kilbas2006theory p. 
87] \[rlderivs\] $$\frac{\partial^{\alpha}}{\partial x^{\alpha}} f(x) = \frac{1}{\Gamma(n - \alpha)} \frac{\partial^n}{\partial x^n} \int_{-\infty}^{x} f(y) (x-y)^{n -1 - \alpha} \, dy \label{posrl}$$ $$\frac{\partial^{\alpha}}{\partial (-x)^{\alpha}} f(x) = \frac{(-1)^n}{\Gamma(n - \alpha)} \frac{\partial^n}{\partial x^n} \int_x^{\infty} f(y) (y-x)^{n-1 - \alpha} \, dy , \label{negrl}$$ where $n = {\lceil{\alpha}\rceil}$ and $\Gamma(z)$ is the Gamma function. For $1< \alpha \leq 2$ subject to an impulse initial condition $u(x,0) = \delta(x)$, the fundamental solution of is a stable probability density function (PDF) with index $\alpha$ and skewness $\theta$ [@mainardi2001fundamental]. In river-flow hydrology, breakthrough curve measurements of relative concentration $u(x,t)$ with $x$ fixed are well fit by negatively-skewed ($\theta = -1$) PDFs [@kelly_fracfit]. For the special case of $\theta = -1$, reduces to the negatively skewed space-fractional diffusion equation $$\frac{\partial}{\partial t} u(x,t) = C \frac{\partial^{\alpha}}{\partial (-x)^\alpha} u(x,t). \label{fde2}$$ The coefficient $C$ is chosen such that the eigenvalues of have a non-positive real part so energy is not created. Denote the Fourier transform (FT) of $u(x,t)$ by $\hat{u}(k,t)$ and apply a FT to , yielding $$\frac{\partial}{\partial t} \hat{u}(k,t) = C (-ik)^{\alpha} \hat{u}(k,t) . \label{fde2hat}$$ Since the real part of $C (-ik)^{\alpha}$ is $C \cos ( \pi \alpha /2)$, we take $C = (-1)^{m+1}$ where $2m - 1 < \alpha < 2m + 1$ and $m \in \mathbb{N}$ to produce eigenvalues with non-positive real part. In particular, $C=1$ if $1 < \alpha \leq 3$ and $C=-1$ if $3 < \alpha \leq 5$. Under this condition, reduces to a *hyperdiffusion equation* [@frisch2008hyperviscosity; @baeumer2015higher] $$\frac{\partial}{\partial t} u(x,t) = (-1)^{m+1} \frac{\partial^{\alpha}}{\partial (-x)^{\alpha}} u(x,t) . 
\label{hyperdiff}$$ For integer $\alpha = 2m$, is used in turbulence modeling [@frisch2008hyperviscosity], stabilizing numerical methods such as the spectral element method [@ullrich2018impact], and modeling the transport of cosmic rays [@malkov2015cosmic]. In the remainder of this paper, we consider with $C= 1$ for $1 < \alpha \leq 3$, which is a special case of . We consider solutions with an impulse initial condition $u(x,0) = \delta(x)$. For $1 < \alpha \leq 2$, solutions to are negatively skewed stable densities [@mainardi2001fundamental], which model anomalous diffusion where particles experience large jumps in the negative direction. This equation, complemented with a drift term, successfully models contaminant transport in rivers [@phanikumar2007separating; @chakraborty2009parameter], as well as source identification problems in groundwater hydrology [@zhang2016backward], where $u(x,t)$ is the release location/time PDF. These hydrology applications assume a fractional exponent $1 < \alpha \leq 2$, so that the contaminant particles experience super-diffusion and there is stochastic interpretation to $u(x,t)$. The term “hyperdiffusion” has several usages in the literature. For example, Metzler et al. [@metzler2012] define hyperdiffusion as a process with mean-squared displacement that has a scaling rate of $t^{\alpha}$, where $\alpha > 2$. In this paper, the term “hyperdiffusion” refers to the FPDE with $\alpha > 2$ and its solutions. Hyperdiffusion (or hyperviscosity) is popular in turbulence modeling and computational fluid dynamics (CFD), where integer powers greater than two are used to stabilize numerical methods by reducing the range of scales over which dissipation acts [@frisch2008hyperviscosity]. Hyperdiffusion is used in spectral element models to damp high-order modes and eliminate numerical noise [@ullrich2018impact]. The most commonly used value for hyperdiffusion is $\alpha = 4$ ($m=2$) [@satoh2008; @ullrich2018impact]. 
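The sign convention $C = (-1)^{m+1}$ can be verified numerically. The sketch below (a check only; it assumes NumPy's principal branch for complex powers, which matches the phase convention used above) confirms that the Fourier symbol $C(-ik)^{\alpha}$ has non-positive real part for every real wavenumber, so no Fourier mode grows in time.

```python
import numpy as np

k = np.linspace(-10.0, 10.0, 401)
k = k[k != 0.0]                       # the symbol vanishes at k = 0 anyway

for alpha in [1.2, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0]:
    # 2m - 1 < alpha <= 2m + 1  gives  m = ceil((alpha - 1)/2)
    m = int(np.ceil((alpha - 1.0) / 2.0))
    C = (-1.0) ** (m + 1)             # +1 for 1 < alpha <= 3, -1 for 3 < alpha <= 5
    symbol = C * (-1j * k) ** alpha   # Fourier symbol C (-ik)^alpha
    # Dissipativity: Re(symbol) = C |k|^alpha cos(pi alpha / 2) <= 0,
    # up to floating-point rounding at the boundaries alpha = 3, 5.
    assert np.all(symbol.real <= 1e-8 * np.abs(k) ** alpha)
    print(f"alpha = {alpha:3.1f}:  C = {C:+.0f},  max Re(symbol) = {symbol.real.max():9.2e}")
```

At the odd-integer boundaries $\alpha = 3$ and $\alpha = 5$ the symbol is purely imaginary (purely dispersive), consistent with $\cos(\pi\alpha/2) = 0$, which is why either sign of $C$ is admissible there.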
Wei [@wei1999generalized] applied integer-order hyperdiffusion for image denoising and edge detection problems, while Malkov and Sagdeev [@malkov2015cosmic] derived a hyperdiffusion model with $\alpha = 4$ ($m=2$) for cosmic ray transport. Fractional-order hyperdiffusion with orders larger than two has also been used in the surface generation of proteins by Hu et al. [@hu2013high] and modeling calcium sparks in cardiac myocytes by Tan et al. [@tan2007anomalous]. Recently, Tawfik et al. [@tawfik2018] used a space-time hyperdiffusion equation with a Riesz derivative in space of order $\alpha > 2$ and Caputo derivative in time of order $0 < \gamma < 1$ to model cosmic rays. Space-Time Duality ================== Although space-time duality was first noted using stable PDFs [@zolotarev1961expression; @zolotarev1986one], the basic idea may be illustrated using Fourier transforms and dispersion relationships. Applying a space-time FT to using the relationship $\int_{-\infty}^{\infty} \frac{\partial^{\alpha}}{\partial (-x)^{\alpha}} f(x) e^{-ikx} \, dx = (-ik)^{\alpha} \hat{f} (k)$ yields a dispersion relationship $i \omega = (-ik)^{\alpha}$, where $\omega$ is angular frequency and $k$ the wavenumber, and $\hat{f}(k)$ is the spatial FT of $f(x)$. Formally take the $\alpha$-th root, yielding an equivalent dispersion relationship $(i \omega)^{\gamma} = -ik$, where $\gamma = 1/\alpha$, which characterizes a *time-fractional* PDE of order $\gamma < 1$. Although this argument is heuristic, it motivates a Fourier-Laplace transform (FLT) argument first presented in [@kellyduality]. In [@kellyduality], we restricted our attention to fractional orders $1 < \alpha \leq 2$ in with $C = 1$. In this section, this restriction on $\alpha$ is relaxed, allowing the fractional order to be larger than two and less than or equal to three and providing a stochastic model for hyperdiffusion. Our motivation comes from Hu et al.
[@hu2013high]: “Currently, most attention in the field is paid to the fractional derivatives of order less than 2. High-order fractional derivatives are hardly used, partly due to the limited understanding of their physical meanings." In this section, we assign a physical meaning to \[hyperdiff\] with $2 < \alpha \leq 3$ using a space-time duality argument. Define the FLT of $u(x,t)$ via $$\overline{u}(k,s) = \int_0^{\infty} \int_{-\infty}^{\infty} u(x,t) e^{-s t} e^{-i k x} \, dx \, dt \label{flt}$$ and denote the Laplace transform by $\tilde{u} (x,s)$. Then apply \[flt\] to \[hyperdiff\] with $C=1$, yielding $$\overline{u} (k,s) = \frac{1}{s - (-ik)^{\alpha}} . \label{flt1}$$ The inverse FT of \[flt1\] can be expressed as [@morse1953methods (4.8.18)] $$\tilde{u} (x, s) = \frac{1}{2 \pi} \lim_{R\to\infty} \int_{-R + i\tau}^{R + i\tau}\frac{e^{ikx}}{s - (-ik)^{\alpha}} \, dk , \label{invfourier}$$ where $\tau>0$ is chosen to avoid the branch cut along the negative real axis. For $1 < \alpha \leq 3$, the integrand of \[invfourier\] has a single, simple pole at $k^{*}= is^{1/\alpha}$ and is analytic at every other point of the upper half-plane (UHP). To prove this, write the wavenumber in polar form $k = |k| e^{i \theta}$, where $|\theta| \leq \pi$ is the phase angle. The poles $k^{*}$ then satisfy $$|k^{*}|^{\alpha} e^{i \alpha ( \theta - \pi /2)} = s \label{poleeqn}$$ where $s$ is positive and real. Hence, the phase angle satisfies $\alpha (\theta - \pi /2) = 2 \pi n$ with $n \in \mathbb{Z}$. Since we are only interested in poles that reside in the UHP, take $0 < \theta < \pi$. Solving for $n$ yields $-\alpha / 4 < n < \alpha /4$. Hence, if $1<\alpha \leq 3$, the only integer solution is $n=0$, implying that only one pole lies in the UHP. If $3<\alpha \leq 5$, then the coefficient on the right hand side of \[hyperdiff\] is negative, yielding a FLT of $\overline{u}(k,s) = \left(s + (-ik)^{\alpha}\right)^{-1}$. 
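The pole count behind this argument is easy to check numerically: for each integer $n$ allowed by the phase condition, reconstruct the candidate pole and test whether it actually satisfies $(-ik)^{\alpha} = s$. A minimal sketch (the function name `uhp_poles` is ours):

```python
import cmath
import math

def uhp_poles(alpha, s=1.0):
    """Poles of 1/(s - (-ik)^alpha) in the upper half k-plane.

    The phase condition alpha*(theta - pi/2) = 2*pi*n with 0 < theta < pi
    restricts the integer n to -alpha/4 < n < alpha/4.
    """
    poles = []
    nmax = int(alpha // 4) + 2
    for n in range(-nmax, nmax + 1):
        theta = math.pi / 2 + 2 * math.pi * n / alpha
        if 0 < theta < math.pi:
            k = s ** (1 / alpha) * cmath.exp(1j * theta)
            if abs((-1j * k) ** alpha - s) < 1e-9:  # verify (-ik)^alpha = s
                poles.append(k)
    return poles

# exactly one UHP pole, at k* = i s^(1/alpha), for every 1 < alpha <= 3
for alpha in [1.1, 1.5, 2.0, 2.5, 3.0]:
    poles = uhp_poles(alpha)
    assert len(poles) == 1
    assert abs(poles[0] - 1j) < 1e-9  # s = 1 here
```

The scan over $n$ is deliberately wider than the admissible window, so the interval $0 < \theta < \pi$ itself does the filtering.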
Repeating the pole calculation yields at least *two* poles in the UHP for $3 < \alpha \leq 5$, while for $\alpha > 5$, there are at least *three* poles in the UHP. Hence, the complex plane argument described below is not applicable and we cannot assign a stochastic interpretation to the space-fractional diffusion equation for $\alpha > 3$. Converting the path of integration in \[invfourier\] into a closed contour in the upper half-plane by attaching a semi-circle of radius $R$ (see Appendix A in [@kellyduality] for details) and applying the Cauchy residue theorem yields $$\tilde{u} (x, s) = \gamma s^{\gamma -1} \exp \left( -x s^{\gamma} \right) \label{flt2}$$ where $1/3 \leq \gamma = 1 / \alpha < 1$. The contribution along the semi-circle $C_R$ vanishes as $R \rightarrow \infty$ using the bounds in Appendix A of [@kellyduality]. Inverting the LT yields $$u(x,t) = \gamma h_{\gamma}(x,t) \label{usol1}$$ where $h_{\gamma}(x,t)$ is the inverse stable density (see Remark \[invstblrem\] below) with index $\gamma$ [@meerschaert2013inverse]. To derive the governing equation of the inverse stable density, take the FT of \[flt2\], yielding $$\overline{u} (k, s) = \frac{ \gamma s^{\gamma -1}}{ik + s^{\gamma}} . \label{flt3}$$ Recall that the LT of the Caputo derivative $\partial^{\gamma}_t u(x,t)$ is given by $\mathcal{L}_t \left[ \partial^{\gamma}_t u(x,t) \right] = s^{\gamma} \tilde{u} (x,s) - s^{\gamma -1} u(x,0)$ for $0 < \gamma < 1$ [@mainardi2010fractional Equation (1.27)]. Cross-multiplying and inverting \[flt3\] yields $$\partial^{\gamma}_t u(x,t) = -\frac{\partial}{\partial x} u(x,t); \quad u(x,0) = \gamma \delta(x) , \label{disp2xt}$$ which is valid for any $1/3 \leq \gamma < 1$. Hence, we have transformed the space-fractional equation into an equivalent time-fractional equation on the half-axis. This result extends the results of [@baeumer2009space] and [@kellyduality] to a larger range of fractional (and integer) exponents $1<\alpha \leq 3$ and time-fractional exponents $1/3 \leq \gamma<1$. 
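The Laplace-domain solution \[flt2\] can be sanity-checked without any contour integration: for $x>0$ it must satisfy the transformed version of \[disp2xt\], namely $s^{\gamma}\,\tilde{u} = -\partial_x \tilde{u}$. A quick finite-difference check (step size is our choice):

```python
import math

def u_tilde(x, s, gamma):
    """Laplace-domain solution gamma * s**(gamma - 1) * exp(-x * s**gamma)."""
    return gamma * s ** (gamma - 1) * math.exp(-x * s ** gamma)

# For x > 0 the transformed time-fractional equation reads
#     s**gamma * u_tilde(x, s) = -d/dx u_tilde(x, s).
for gamma in [1 / 3, 0.5, 0.9]:
    for x in [0.5, 1.0, 2.0]:
        s, h = 1.7, 1e-6
        dudx = (u_tilde(x + h, s, gamma) - u_tilde(x - h, s, gamma)) / (2 * h)
        assert abs(s ** gamma * u_tilde(x, s, gamma) + dudx) < 1e-6
```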
For $\alpha \neq 2,3$, the spatial nonlocality of the negative RL derivative is exchanged for the temporal nonlocality of the Caputo derivative. The time-fractional equation \[disp2xt\] governs the long term limit of a random walk where the particles experience power-law waiting times $T$ with tail probability $P(T > t) \approx t^{-\gamma}$ for $t \gg 1$. Hence, we can assign a stochastic interpretation to \[hyperdiff\] for $2<\alpha \leq 3$: the fractional order $\alpha$ codes long, power-law waiting times whose tail probability scales like $t^{-1/\alpha}$. Note that the tail of the waiting time distribution associated with \[hyperdiff\] is *heavier* than those considered in [@kellyduality], indicating a higher probability of very long waiting times. \[invstblrem\] The time-fractional equation \[disp2xt\] is the governing equation of the inverse stable subordinator [@meerschaert2013inverse] $$E_t = \mbox{inf} \left\{ x>0 : D_x > t \right\} \label{invsub}$$ that models the first passage times of the stable subordinator $t= D_x$, where $D_x$ has density $g(t,x)$ with Laplace transform $e^{-xs^{\gamma}}$. From a CTRW perspective, the inverse process $E_t$ models the local times of particles undergoing long waiting times. The inverse $\gamma$-stable subordinator of order $\gamma = 1/3$ satisfies the integer-order PDE $$\frac{\partial}{\partial t} h_{1/3}(x,t) = -\frac{\partial^{3}}{\partial x^3} h_{1/3}(x,t) \label{orderonethird}$$ which is a linearized KdV equation [@whitman] used to model long wavelength water waves. Equation \[orderonethird\] may be solved in closed form [@zolotarev1986one Equation (2.10.3)]: $$\begin{aligned} h_{1/3}(x,t) =& \frac{1}{\gamma} \mathcal{F}_x^{-1} \left[ \exp \left( t(-ik)^3 \right) \right] \nonumber \\ =& \frac{3}{2 \pi} \int_{-\infty}^{\infty} \exp \left(i (kx + tk^3) \right) \, dk \nonumber \\ =& \frac{3}{(3t)^{1/3}} \mbox{Ai} \left( \frac{x}{(3t)^{1/3}} \right) \label{airysol} \end{aligned}$$ where $\mbox{Ai} (z)$ is the Airy function. 
Hence, $h_{1/3}(x,t)$ spreads at rate $t^{1/3}$, which is clearly sub-diffusive. The density $h_{\gamma}(x,t)$ is self-similar with a scaling relationship $h_{\gamma}(x,t) = t^{-1/\alpha} h_{\gamma}(x t^{-1/\alpha},1)$ [@meerschaert2013inverse]. We can distinguish three types of behavior: (i) if $1<\alpha < 2$, the plume spreads faster than the diffusive rate of $t^{1/2}$; (ii) if $\alpha =2$, the solution is classically diffusive; and (iii) if $2<\alpha \leq 3$, the solution spreads slower than the diffusive rate of $t^{1/2}$. Hence, a wide range of anomalous diffusion may be modeled with a negatively skewed space-fractional diffusion equation. \[positiveskew\] Space-time duality may be applied to the positively-skewed case $\theta = 1$ on the *negative* half-axis $x<0$ by the same argument. Zolotarev wrote a general duality law involving stable PDFs for $\alpha \leq 2$ [@zolotarev1986one Equation 2.3.3] and trans-stable distributions for $\alpha > 2$ [@zolotarev1986one Equation 2.11.7]. This duality law is valid for a range of skewness parameters $\theta$. These duality relations may be extended to the negative half-axis using the reflection property of stable and trans-stable PDFs. Using these relationships, we derived a time-fractional equation involving both positive and negative temporal RL derivatives that is equivalent to \[hyperdiff\] for $x<0$ in Appendix C of [@kellyduality]. It should be possible to extend this result to the two-sided diffusion equation by a similar argument. Unlike the negative spatial RL derivative, it is not known how to assign any physical meaning to a negative (right) temporal RL derivative, which models temporal nonlocality into the future. It is also interesting to consider the physical meaning of a time derivative of order $\gamma>1$. Some results in this direction can be found in [@baeumer2007] for the case $1<\gamma<2$. 
For a diffusion with drift, introducing a fractional time derivative of order $1<\gamma<2$ results in a kind of superdiffusion, where the plume variance spreads like $t^{3-\gamma}$, see [@baeumer2007 Section 6.2]. We do not know whether there is a duality result for $\gamma>1$. \[numerics\] Conservative explicit Euler [@baeumer2017boundary] and implicit Euler [@kellybcs2018] methods are available to solve \[hyperdiff\] subject to the reflecting boundary condition \[nofluxbc\]. Feng [@feng2018] proposed an unconditionally stable Crank-Nicolson scheme for fractional orders $2 < \alpha < 3$ that is first-order accurate in space and second-order accurate in time. Baeumer et al. [@baeumer2015higher Proposition 4.2] proposed a stable scheme for \[hyperdiff\] for any $\alpha$ that is high-order in space, based on a Grünwald discretization [@meerschaert2006finite] with shift $m$, where $m$ is given by $2m -1 < \alpha < 2m + 1$. Time-Fractional Diffusion-Wave Equation ======================================= A wide variety of anomalous phenomena can be modeled by the time-fractional diffusion-wave equation on the real line $$\partial_t^{\beta} u(x,t) = \frac{\partial^2}{\partial x^2} u(x,t) , \label{timefrac}$$ where $0 < \beta \leq 2$, $\beta = 2 \gamma$, and the left hand side is the Caputo derivative of order $\beta$. Equation \[timefrac\] interpolates between the diffusion equation ($\beta = 1$) and the wave equation ($\beta =2$). For $0 < \beta <1$, \[timefrac\] models anomalous sub-diffusion and Hamiltonian chaos [@zaslavsky1994fractional]. In particular, $u(x,t)$ is the limiting density of a CTRW with a Pareto (power-law) waiting time distribution $P(J_n > t) = Bt^{-\beta}$ [@baeumer2001stochastic]. For $1 < \beta < 2$, \[timefrac\] models wave propagation in viscoelastic materials [@mainardi2010fractional], including seismic waves [@mainardi1997seismic] and acoustic waves in biological media [@meerschaert2015stochastic]. 
Analytical Solution ------------------- Fundamental solutions to \[timefrac\] on the real line are computed using the initial condition $u(x,0) = \beta \delta(x)$. For $1 < \beta \leq 2$, we impose the additional initial condition $u_t(x,0) = 0$. The Laplace transform of the Caputo derivative of order $1 < \beta \leq 2$ is given by [@mainardi2010fractional Equation (1.27)] $$\mathcal{L}_t \left[ \partial_t^{\beta} u(x,t) \right] = s^{\beta} \tilde{u} (x,s) - s^{\beta -1} u(x,0) - s^{\beta -2} u_t(x,0) \label{ltcaputo}$$ while for $0 < \beta \leq 1$, the Laplace transform consists of just the first two terms. Applying a FLT to \[timefrac\] yields $$\overline{u} (k,s) = \frac{\beta s^{\beta -1}}{k^2 + s^{\beta}} . \label{flt11}$$ Factoring the denominator into $(s^{\beta/2} + ik)(s^{\beta/2} - ik)$ and expanding in partial fractions yields $$\overline{u} (k,s) = \frac{\gamma s^{\gamma -1}}{ik + s^{\gamma}} + \frac{\gamma s^{\gamma -1}}{-ik + s^{\gamma}} \label{flt22}$$ where $1/2 < \gamma = \beta/2 \leq 1$. Noting that the first term has a pole $k^{*} = i s^{\gamma}$ in the upper-half $k$ plane, and the second term has a pole $k^{*} =- i s^{\gamma}$ in the lower-half $k$ plane, we see that the first term has support on $x>0$ while the second term has support on $x<0$. Applying an inverse FLT to each term in \[flt22\] yields a pair of *one way fractional wave equations* $$\partial_t^{\gamma} u_{+}(x,t) = -\frac{\partial}{\partial x} u_{+}(x,t) \quad\text{for $x>0$ and } \label{rightmoving}$$ $$\partial_t^{\gamma} u_{-}(x,t) = \frac{\partial}{\partial x} u_{-}(x,t) \quad\text{for $x<0$.} \label{leftmoving}$$ \[factoredeqns\] Much like the classical wave equation, \[timefrac\] consists of left and right moving components. A similar decomposition was reported in [@meerschaert2015stochastic Equation (5.4)] for $1 \leq \beta \leq 2$. 
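The partial-fraction split can be checked numerically at sample points; with $\gamma=\beta/2$ the two one-sided terms carry numerators $\gamma s^{\gamma-1}$ (function names below are ours):

```python
import cmath

def whole(k, s, beta):
    """FLT of the diffusion-wave solution: beta * s**(beta-1) / (k**2 + s**beta)."""
    return beta * s ** (beta - 1) / (k ** 2 + s ** beta)

def split(k, s, beta):
    """Sum of the two one-sided partial-fraction terms, gamma = beta / 2."""
    g = beta / 2
    num = g * s ** (g - 1)
    return num / (1j * k + s ** g) + num / (-1j * k + s ** g)

for beta in [1.2, 1.5, 1.9]:
    for k in [0.3, 1.0, 4.0]:
        s = 0.7
        assert abs(whole(k, s, beta) - split(k, s, beta)) < 1e-12
```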
The solution of \[rightmoving\] is $u_{+}(x,t) = \gamma h_{\gamma}(x,t)$, where $h_{\gamma}$ is the density of the inverse $\gamma$-stable subordinator $$h_{\gamma}(x,t) = \frac{t}{\gamma x^{1+ 1/\gamma}} g_{\gamma} \left( t x^{-1/\gamma} \right) , \label{invstablesub}$$ where $g_{\gamma}(x)$ is the density of the $\gamma$-stable subordinator with Laplace transform $e^{-s^{\gamma}}$. The left-moving component is given by $u_{-}(x,t) = \gamma h_{\gamma}(-x,t)$. Combining these two components yields $$u(x,t) = \gamma h_{\gamma} (|x|,t) , \label{waveeqsol}$$ which agrees with the solution given in Mainardi et al. [@mainardi2001fundamental Equation (4.23)] using the Wright function. Note that \[waveeqsol\] is continuous but not differentiable at $x=0$, with a “cusp" at $x=0$ [@alrawashdeh2017applications Proposition 6.1]. See also [@metzler2000random; @carnaffan2017cusping]. Duality Solution {#waveeqdual} ---------------- By duality, the system of one way time-fractional equations may be converted into a system of space-fractional equations on the real line. We see that $u_{+}(x,t)$ also solves \[hyperdiff\] with $\alpha = 1/\gamma = 2/\beta$ and $C=1$. Applying Remark \[positiveskew\], the solutions to \[timefrac\] also solve a system of space-fractional PDEs \[spacefrac2\] $$\frac{\partial}{\partial t} u(x,t) = \frac{\partial^{\alpha}}{\partial (-x)^{\alpha}} u(x,t) \quad\text{for $x>0$ and } \label{spacefrac2a}$$ $$\frac{\partial}{\partial t} u(x,t) = \frac{\partial^{\alpha}}{\partial x^{\alpha}} u(x,t) \quad\text{for $x<0$, } \label{spacefrac2b}$$ which may be expressed for any real $x$ via $$\frac{\partial}{\partial t} u(x,t) = A_x^{\alpha} u(x,t) \label{ucomp}$$ using the operator $$A^{\alpha}_x f(x) = \begin{cases} \frac{\partial^{\alpha}}{\partial (-x)^{\alpha}} f(x) & x > 0 \\ \frac{\partial^{\alpha}}{\partial x^{\alpha}} f(x) & x < 0. \\ \end{cases} \label{Aoperator}$$ In the case of sub-diffusion ($2/3 \leq \beta < 1$), $2 <\alpha \leq 3$, while for super-diffusion ($1 < \beta < 2$), $1 < \alpha < 2$. 
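The key ingredient here is $g_{\gamma}$, the stable subordinator density with Laplace transform $e^{-s^{\gamma}}$. For $\gamma = 1/2$ it has the closed form $g_{1/2}(t) = (2\sqrt{\pi})^{-1} t^{-3/2} e^{-1/(4t)}$ (the Lévy distribution), which gives a concrete check of that transform pair by direct quadrature (step size and truncation below are our choices):

```python
import math

def g_half(t):
    """Stable subordinator density for gamma = 1/2 (Levy distribution)."""
    return math.exp(-1.0 / (4 * t)) / (2 * math.sqrt(math.pi) * t ** 1.5)

def laplace_g_half(s, h=2e-4, tmax=60.0):
    """Riemann-sum quadrature of int_0^inf exp(-s*t) g_half(t) dt."""
    total, t = 0.0, h
    while t < tmax:
        total += math.exp(-s * t) * g_half(t)
        t += h
    return total * h

# the quadrature should reproduce exp(-sqrt(s))
for s in [0.5, 1.0, 2.0]:
    assert abs(laplace_g_half(s) - math.exp(-math.sqrt(s))) < 1e-3
```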
\[stochasticinterp\] For $1 < \alpha \leq 2$, \[spacefrac2a\] complemented by the boundary condition \[nofluxbc\] governs spectrally negative Lévy motion conditioned to stay positive [@baeumer2016reflected], while \[spacefrac2b\] governs spectrally positive Lévy motion conditioned to stay negative. On the positive half-axis, particles may drift to the right or jump to the left. On the negative half-axis, particles may drift left or jump to the right. \[nopdf\] Note that solutions to either \[spacefrac2a\] or \[spacefrac2b\] on the *entire* real line are not positive for $\alpha > 2$, which may be shown by calculating moments using $\int_{-\infty}^{\infty} x^n u(x,t) \, dx = i^n \hat{u}^{(n)} (0,t)$. Hence, these solutions on the real line are not PDFs. Numerical solutions to \[spacefrac2a\] on the real line are shown in Figure \[realaxisfig\] for $\alpha = 2.5$ and 3, illustrating this non-positivity. \[myo\] The space-fractional diffusion equation of order $\alpha = 2.25$ was proposed by Tan et al. [@tan2007anomalous] to model sub-diffusion of calcium sparks in the heart. Since the space-fractional diffusion equation of order $2<\alpha \leq 3$ is mathematically equivalent to a time-fractional diffusion equation of order $1/\alpha$, \[hyperdiff\] is the limit of a CTRW with waiting times $J_n$ that are asymptotically Pareto with index $1/3 \leq \gamma < 1/2$. Hence the space-fractional PDE with $2<\alpha \leq 3$ models anomalous sub-diffusion caused by particle sticking or trapping. Numerical Experiments --------------------- As noted in [@baeumer2009space Section 5], Equation \[usol1\] is the solution of the space-fractional PDE on the half-line $x>0$. To make the problem well-posed on the half-line [@baeumer2016reflected Theorem 2.3], it is necessary to impose a fractional reflecting boundary condition, given by \[nofluxbc\], at $x=0$ (see Appendix). 
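As a toy illustration of a mass-conserving discretization for this reflecting problem (an explicit Euler sketch with shifted Grünwald weights; the lumped, mass-returning boundary row is our own simplification, not the conservative schemes of the cited references):

```python
def grunwald_weights(alpha, n):
    """w_j = (-1)**j * binom(alpha, j), via w_j = w_{j-1} * (j-1-alpha) / j."""
    w = [1.0]
    for j in range(1, n + 1):
        w.append(w[-1] * (j - 1 - alpha) / j)
    return w

def euler_matrix(alpha, N, h, dt, m=1):
    """One explicit Euler step of u_t = d^alpha u / d(-x)^alpha on N cells.

    Shifted Gruenwald rows; any mass the truncated stencil would push
    through the left boundary is returned to cell 0 by forcing every
    column of the generator to sum to zero (a lumped reflecting wall).
    """
    w = grunwald_weights(alpha, N)
    A = [[0.0] * N for _ in range(N)]
    for i in range(N):
        for j in range(N + 1):
            col = i + j - m                # stencil point u_{i+j-m}
            if 0 <= col < N:
                A[i][col] += w[j]
    for col in range(N):
        A[0][col] -= sum(A[row][col] for row in range(N))
    r = dt / h ** alpha
    return [[(i == j) + r * A[i][j] for j in range(N)] for i in range(N)]

alpha, N, h, dt = 2.5, 60, 0.05, 1e-6
M = euler_matrix(alpha, N, h, dt)
u = [0.0] * N
u[N // 3] = 1.0 / h                        # discrete impulse
mass0 = sum(u) * h
for _ in range(200):
    u = [sum(M[i][j] * u[j] for j in range(N)) for i in range(N)]
assert abs(sum(u) * h - mass0) < 1e-8      # mass conserved to rounding
```

Because every column of the Euler matrix sums to one, total mass is preserved exactly at each step, which is the discrete analogue of the no-flux condition.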
We numerically solved the negatively skewed space-fractional equation subject to the reflecting boundary condition \[nofluxbc\] at $x=0$ on the domain $[0, 3]$, with an impulse initial condition, using the implicit Euler scheme with reflecting (Neumann) boundary conditions outlined in [@baeumer2017boundary] and [@kellybcs2018]. Since $2 \leq \alpha \leq 3$ in these examples, a shift of $m=1$ was applied to the Grünwald discretization. The simulation was stopped before the signal reached the right boundary in order to mimic an infinite domain. A total of $n=1501$ grid-points and a time step of $\Delta t = 0.00001$ were used to ensure sufficient accuracy. Figure \[stablesubfig\] displays these numerical solutions evaluated at $t = $ 0, 0.001, 0.002, 0.005, and 0.01 for $\alpha = 2$, 2.5, and 3, while the analytical solution is shown as circles. For $\alpha =2$, the solution is a normal density, while for $\alpha =3$, the dual solution is given by the Airy function \[airysol\]. For $\alpha = 2.5$, the solution was checked against a numerical inverse Fourier transform $$h_{\gamma}(x,t) = \frac{\alpha}{2 \pi} \int_{-\infty}^{\infty} \exp \left( t (-ik)^{\alpha} \right) \exp(i k x) \, dk \label{invtransnum}$$ evaluated using adaptive quadrature. There is excellent agreement between the inverse stable density $h_{\gamma}(x,t)$ and these numerical solutions. Subordinated Brownian Motion in Multiple Dimensions =================================================== All of the above examples are limited to one spatial dimension. In this section, we show that multi-dimensional Brownian motion subordinated to a vector of independent inverse stable subordinators, defined by \[invsub\] in each dimension, is governed by a vector space-fractional PDE. This multi-dimensional subordinated Brownian motion model may be useful for modeling contaminant transport in anisotropic media (multiscaling anomalous subdiffusion) [@zhang2006random], where the retardation rate differs along each coordinate axis. 
Inverse stable subordinator vector ---------------------------------- Let $(u,v) = (E_t^1, E_t^2)$ be a pair of independent, inverse stable subordinators with densities $h_{\gamma_1}(u,t)$ and $h_{\gamma_2}(v,t)$ with indices $1/3 \leq \gamma_1 , \gamma_2<1$. Physically, the indices $\gamma_1$ and $\gamma_2$ code the retention (retardation) that particles experience due to heterogeneity. By independence, the joint density of $(E_t^1, E_t^2)$ is given by $$h(u,v,t) = h_{\gamma_1}(u,t) h_{\gamma_2}(v,t) . \label{jointdensity}$$ Since the FLT of each density is $s^{\gamma_1 -1}/(s^{\gamma_1} + ik_u)$ and $s^{\gamma_2 -1}/(s^{\gamma_2} + ik_v)$, respectively, the convolution theorem [@howell_fourier] yields $$\overline{h} (k_u, k_v, s) = s^{\gamma_1 -1}/(s^{\gamma_1} + ik_u) * s^{\gamma_2 -1}/(s^{\gamma_2} + ik_v) \label{jointdensityconv}$$ where the convolution “\*" is with respect to $s$. Since the FLT is not a simple algebraic expression, it is difficult to find a simple time-nonlocal governing equation for the joint density \[jointdensity\]. Although the order of the time fractional derivative in one dimension determines the retardation factor, here there are two different retardation factors, and only one time variable. Hence, a time-fractional operator does not have enough degrees of freedom to code for both retardation factors. However, we may find a vector space-fractional equation. By space-time duality, each factor in \[jointdensity\] satisfies a space-fractional PDE $$\frac{\partial}{\partial t} h_{\gamma_1}(u,t) = \frac{\partial^{\alpha_1}}{\partial (-u)^{\alpha_1} } h_{\gamma_1} (u,t) \quad\text{for $u>0$ and } \label{h1sp}$$ $$\frac{\partial}{\partial t} h_{\gamma_2}(v,t) = \frac{\partial^{\alpha_2}}{\partial (-v)^{\alpha_2} } h_{\gamma_2} (v,t) \quad\text{for $v>0$.} \label{h2sp}$$ \[hsp\] where $\alpha_1 = 1/\gamma_1$ and $\alpha_2 = 1/\gamma_2$. 
Applying a time derivative to \[jointdensity\] and using the product rule together with \[h1sp\] and \[h2sp\] yields $$\frac{\partial}{\partial t} h(u,v,t) = \frac{\partial^{\alpha_1} }{\partial (-u)^{\alpha_1} } h(u,v,t) + \frac{\partial^{\alpha_2} }{\partial (-v)^{\alpha_2} } h(u,v,t) \label{hgov}$$ for $u>0$ and $v>0$, which is the space-fractional governing equation of the process $(E_t^1, E_t^2)$. Note that \[hgov\] also governs *operator stable Lévy motion* [@meerschaert2012] with backward, independent jumps in both the $u$ and $v$ directions. Application to 2D Independent Brownian Motion --------------------------------------------- Next, we consider a pair of independent Brownian motions subordinated (time-changed) by a pair of independent inverse stable subordinators. Anisotropic super-diffusion may be modeled with the multi-dimensional fractional advection dispersion equation (FADE) [@meerschaert1999multidimensional]; however, we are not aware of any FPDE that models sub-diffusion in anisotropic media where the retardation factor in each coordinate is different. In this section, we write the density of this 2D process and determine the corresponding governing equation. Let $x = B^{1}(u)$ and $y = B^{2}(v)$ be independent Brownian motions with densities $p(x,u)$ and $p(y,v)$, respectively. Let $(u,v) = (E_t^1, E_t^2)$ be a pair of independent, inverse stable subordinators with densities $h_{\gamma_1}(u,t)$ and $h_{\gamma_2}(v,t)$ with indices $0 < \gamma_1 , \gamma_2 \leq 1$, respectively. By a conditioning argument, we can write the joint density $q(x,y,t)$ of $\left( x , y \right) = \left( B^{1} \left( E_t^1 \right) , B^{2} \left( E_t^2 \right) \right)$ as $$\begin{aligned} q(x,y,t) =& \left( \int_0^{\infty} p(x,u) h_{\gamma_1}(u,t) \, du \right) \left( \int_0^{\infty} p(y,v) h_{\gamma_2}(v,t) \, dv \right) \\ =& \int_0^{\infty} \int_0^{\infty} p(x,u) p(y,v) h(u,v,t) \, du \, dv .\end{aligned}$$ The variables $u$ and $v$ are the temporal scales of $B^{1}(u)$ and $B^{2}(v)$. 
Hence, $q(x,y,t)$ is characterized by two time-scales. Using a partial fraction expansion, we may evaluate the subordination integrals above in closed form. For example, $$\int_0^{\infty} p(x,u) h_{\gamma} (u,t) \,du = \frac{1}{2} h_{\gamma/2} \left( |x|,t \right) \label{eval1}$$ and similarly for the $v$ integral. Alternatively, one may use the composition formulas in Mainardi et al. [@mainardi2001fundamental Section 5] to derive \[eval1\]. Applying \[eval1\] yields $$q(x,y,t) = \frac{1}{4} h_{\gamma_1 / 2} \left(|x|,t \right) h_{\gamma_2 / 2} \left(|y|,t \right). \label{subbrownsol}$$ Hence, the density $q(x,y,t)$ is symmetric about the $x$ and $y$ axes, but is not radially symmetric in general. Now let $2/3 \leq \gamma_1 , \gamma_2 \leq 1$ and $\alpha_1 = 2/ \gamma_1$ and $\alpha_2 = 2/ \gamma_2$. For $x > 0$ and $y > 0$, space-time duality implies that each factor in \[subbrownsol\] satisfies $$\frac{\partial}{\partial t} h_{\gamma_1 / 2} \left(x ,t \right) = \frac{\partial^{\alpha_1}}{\partial (-x)^{\alpha_1} } h_{\gamma_1 / 2} \left(x ,t \right) \quad\text{for $x>0$ and } \label{dual1}$$ $$\frac{\partial}{\partial t} h_{\gamma_2 / 2} \left(y ,t \right) = \frac{\partial^{\alpha_2}}{\partial (-y)^{\alpha_2} } h_{\gamma_2 / 2} \left(y ,t \right) \quad\text{for $y>0$.} \label{dual2}$$ Applying the product rule to \[subbrownsol\] and using \[dual1\] and \[dual2\] yields $$\begin{aligned} \frac{\partial}{\partial t} q(x,y,t) =& \frac{1}{4} \frac{\partial h_{\gamma_1 / 2} (x,t)}{\partial t} h_{\gamma_2 / 2} (y,t) + \frac{1}{4} h_{\gamma_1 / 2} (x,t) \frac{\partial h_{\gamma_2 / 2} (y,t)}{\partial t} \\ =& \frac{1}{4} \frac{\partial^{\alpha_1}}{\partial (-x)^{\alpha_1}} h_{\gamma_1 / 2} \left(x ,t \right) h_{\gamma_2 / 2} (y,t) + \frac{1}{4} h_{\gamma_1 / 2} (x,t) \frac{\partial^{\alpha_2}}{\partial (-y)^{\alpha_2} } h_{\gamma_2 / 2} \left(y ,t \right) \\ =& \frac{\partial^{\alpha_1}}{\partial (-x)^{\alpha_1} } q(x,y,t) + \frac{\partial^{\alpha_2}}{\partial (-y)^{\alpha_2} } q(x,y,t) \end{aligned}$$ for $x>0$ and 
$y>0$. Using the argument in Sec. \[waveeqdual\], we see that $h_{\gamma_1/2}(x,t)$ satisfies the positively-skewed equation \[spacefrac2b\] for $x<0$. By the same token, $h_{\gamma_2/2}(y,t)$ satisfies a similar system of space-fractional equations, yielding the two-dimensional governing equation $$\frac{\partial}{\partial t} q(x,y,t) = A^{\alpha_1}_x q(x,y,t) + A^{\alpha_2}_y q(x,y,t) , \label{subbrowngov}$$ where $A_x^{\alpha}$ is defined by \[Aoperator\]. The governing equation \[subbrowngov\] is the two-dimensional generalization of \[ucomp\]. Since $2/3 \leq \gamma_1 , \gamma_2 \leq 1$, it follows that $2 \leq \alpha_1, \alpha_2 \leq 3$. We conclude that the governing equation of $\left( B^{1} \left( E^{1}_t \right), B^{2} \left( E^{2}_t \right) \right)$ is the space-fractional PDE \[subbrowngov\], utilizing both negative (right) and positive (left) RL fractional derivatives with orders greater than or equal to two. This is another example of sub-diffusion modeled with a space-fractional PDE. Generalization of \[subbrowngov\] to $n$-dimensional Brownian motion time-changed by $n$ independent inverse stable subordinators $\left( E^{1}_{\gamma_1} , \cdots, E^{n}_{\gamma_n} \right)$ is straightforward. Figure \[multibrownfig\] displays contour plots of the joint density of $\left( B^1(E_t^1), B^2(E_t^2) \right)$ for Brownian motion ($\gamma_1 = \gamma_2 =1$, top left), sub-diffusion in the $x$ dimension and Brownian motion in the $y$ dimension ($\gamma_1 = 0.5$ and $\gamma_2 = 1$, top right), Brownian motion in the $x$ dimension and sub-diffusion in the $y$ dimension ($\gamma_1 = 1$ and $\gamma_2 = 0.5$, bottom left), and sub-diffusion along both axes ($\gamma_1 = \gamma_2 = 0.5$, bottom right). Except for the top left panel (Brownian motion), these densities do not have radial symmetry, including the bottom right panel, where the inverse stable indices are the same in both directions. In the case of sub-diffusion along both axes (bottom right), the density is not differentiable along the lines $x = 0$ and $y=0$, which follows from [@alrawashdeh2017applications Proposition 6.1]. 
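For $\gamma_1 = \gamma_2 = 1/2$ everything above is explicit: the inverse $1/2$-stable subordinator satisfies $E_t \overset{d}{=} |N(0,2t)|$, and $B(u) \sim N(0,2u)$ when $p$ solves $p_t = p_{xx}$, as assumed here. A seeded Monte Carlo sketch of one coordinate of $(B^1(E_t^1), B^2(E_t^2))$, checking the moment formulas $E[E_t] = t^{\gamma}/\Gamma(1+\gamma)$ and $E[X^2] = 2E[E_t]$ (seed and sample size are arbitrary choices):

```python
import math
import random

random.seed(7)
t, n = 2.0, 200000
sum_e = sum_x2 = 0.0
for _ in range(n):
    e = abs(random.gauss(0.0, math.sqrt(2 * t)))  # E_t for gamma = 1/2
    x = random.gauss(0.0, math.sqrt(2 * e))       # B(E_t), since p_t = p_xx
    sum_e += e
    sum_x2 += x * x
mean_e = sum_e / n
mean_x2 = sum_x2 / n

# E[E_t] = sqrt(t) / Gamma(3/2) = 2*sqrt(t/pi);  E[X^2] = 2*E[E_t]
assert abs(mean_e - 2 * math.sqrt(t / math.pi)) < 0.05
assert abs(mean_x2 - 4 * math.sqrt(t / math.pi)) < 0.15
```

The same second moment, $4\sqrt{t/\pi}$, also follows from the subordinator moment formula applied to the density $\frac{1}{2}h_{\gamma/2}(|x|,t)$ with $\gamma/2 = 1/4$, a useful consistency check on \[eval1\].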
Tempered Duality ================ Tempered fractional time derivatives impose an exponential cutoff on power-law waiting times [@alrawashdeh2017applications; @meerschaert2008tempered], while tempered fractional space derivatives cool power-law jumps in space [@baeumer2010tempered; @cartea2007]. Tempered fractional diffusion equations transition from anomalous to Fickian transport [@meerschaert2008tempered]. This transition is governed by the spatial tempering rate $\lambda>0$ or the temporal tempering rate $\mu > 0$, which is typically small relative to the characteristic spatial or temporal scales, respectively. For tempered space-fractional diffusion, the cross-over time (relaxation time) from anomalous to Fickian transport is proportional to $\lambda^{-\alpha}$, while for tempered time-fractional diffusion, the cross-over time is proportional to $\mu^{-1/\gamma}$ [@cartea2007]. The tempering parameter also increases the effective diffusivity, which is given by Equation (31) in [@cartea2007]. An alternative approach for modeling the transition from anomalous short time behavior to Fickian long term behavior is the persistent random walk [@sadjadi2015], where a self-propulsion mechanism competes with random fluctuations. In this section, we apply space-time duality to connect tempered space-fractional and tempered time-fractional diffusion equations. A negative Riemann-Liouville tempered fractional derivative of order $\alpha$ may be defined via $$\frac{\partial^{\alpha, \lambda}}{\partial (-x)^{\alpha, \lambda}} f(x) = e^{\lambda x} \frac{\partial^{\alpha}}{\partial (-x)^{\alpha}} \left[ e^{-\lambda x} f(x) \right] - \lambda^{\alpha} f(x) \label{negtemprl}$$ where $\partial^{\alpha} / \partial (-x)^{\alpha}$ is the negative RL fractional derivative. 
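On exponentials the operator \[negtemprl\] acts by a simple symbol calculus: for $f(x) = e^{-ax}$ the negative RL derivative gives $a^{\alpha} f$, so the tempered derivative gives $((a+\lambda)^{\alpha} - \lambda^{\alpha}) f$. A Grünwald-based numeric check of this identity (step size and truncation are our choices):

```python
import math

def grunwald_weights(alpha, n):
    """w_j = (-1)**j * binom(alpha, j), via w_j = w_{j-1} * (j-1-alpha) / j."""
    w = [1.0]
    for j in range(1, n + 1):
        w.append(w[-1] * (j - 1 - alpha) / j)
    return w

def neg_rl(f, x, alpha, h=0.002, n=8000):
    """Right-sided Gruenwald approximation of the negative RL derivative."""
    w = grunwald_weights(alpha, n)
    return sum(wj * f(x + j * h) for j, wj in enumerate(w)) / h ** alpha

def tempered_neg_rl(f, x, alpha, lam):
    """Tempered negative RL derivative: exponential damping plus shift."""
    damped = lambda y: math.exp(-lam * y) * f(y)
    return math.exp(lam * x) * neg_rl(damped, x, alpha) - lam ** alpha * f(x)

alpha, lam, a, x = 1.7, 0.4, 1.0, 0.8
f = lambda y: math.exp(-a * y)
exact = ((a + lam) ** alpha - lam ** alpha) * f(x)
assert abs(tempered_neg_rl(f, x, alpha, lam) - exact) / exact < 0.02
```

Note that the tempered symbol vanishes at $a = 0$, which is exactly why the $-\lambda^{\alpha} f$ correction makes the operator mass-conserving.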
The negatively-skewed tempered space-fractional diffusion equation is written using non-dimensionalized units as $$\frac{\partial}{\partial t} u(x,t) = \frac{\partial^{\alpha, \lambda}}{\partial (-x)^{\alpha, \lambda}} u(x,t). \label{tsfrac}$$ The second term in \[negtemprl\] is needed to ensure that solutions to \[tsfrac\] are proportional to a PDF (mass-conserving). For $1 < \alpha \leq 2$, solutions to \[tsfrac\] with an impulse initial condition are given by [@baeumer2010tempered] $$u(x,t) = e^{\lambda x} p(x,t) e^{-t \lambda^{\alpha}} \label{usol11}$$ where $p(x,t)$ is a negatively-skewed $\alpha$-stable density that solves \[hyperdiff\]. By space-time duality, $p(x,t)$ also satisfies $$\partial_t^{\gamma} p(x,t) = -\partial_x p(x,t) \label{dualtemp}$$ for $x>0$, where $\gamma = 1/ \alpha$ and the left hand side is the Caputo derivative of order $\gamma$. Solving \[usol11\] for $p(x,t)$, inserting into \[dualtemp\], and applying the product rule yields $$e^{-t \lambda^{\alpha}} \partial_t^{\gamma} \left[ e^{t \lambda^{\alpha}} u(x,t) \right] - \lambda u(x,t) = -\frac{\partial}{\partial x} u(x,t) .$$ Letting $\mu = \lambda^{\alpha}$, we see that $u(x,t)$ solves the equivalent tempered time-fractional PDE $$\partial_t^{\gamma, \mu} u(x,t) = -\frac{\partial}{\partial x} u(x,t) \label{dualtemp2}$$ where $$\partial_t^{\gamma, \mu} f(t) = e^{-\mu t} \partial_t^{\gamma} \left[ e^{t \mu} f(t) \right] - \mu^{\gamma} f(t) . \label{temperedcaputo}$$ Hence, the tempered time-fractional equation \[dualtemp2\] has the same solution as the tempered space-fractional equation \[tsfrac\], where the tempering rates are related by $\mu = \lambda^{\alpha}$. From a stochastic point of view, \[tsfrac\] governs tempered spectrally negative Lévy motion conditioned to stay positive, with negative jumps, while \[dualtemp2\] governs power-law waiting times with an exponential cutoff. 
By the equivalence between \[tsfrac\] and \[dualtemp2\], backward jumps with power-law index $\alpha$ and tempering parameter $\lambda$ have the same governing equation as waiting times with power-law index $1/\alpha$ and tempering parameter $\lambda^{\alpha}$. Conclusions =========== This paper extends space-time duality to fractional diffusion for orders $1<\alpha \leq 3$. An equivalence with a time-fractional PDE is established using a Fourier-Laplace transform argument. Since the equivalent time-fractional PDE governs the long-term limit of a power-law waiting time process, space-fractional diffusion equations with $2 <\alpha \leq 3$ gain a stochastic interpretation. Using space-time duality, we show that the time-fractional diffusion-wave equation is equivalent to a system of space-fractional diffusion equations. Then we show that multi-dimensional Brownian motion subordinated to an independent inverse stable subordinator in each dimension is governed by a vector space-fractional PDE. Finally, we extend space-time duality to tempered fractional models for transient anomalous diffusion. Kelly was partially supported by ARO MURI grant W911NF-15-1-0562 and USA National Science Foundation grant EAR-1344280. Meerschaert was partially supported by ARO MURI grant W911NF-15-1-0562 and USA National Science Foundation grants DMS-1462156 and EAR-1344280. Insightful discussions with Medhi Samimee (Department of Mechanical Engineering, Michigan State University) and Harish Sankaranarayanan (Department of Statistics and Probability, Michigan State University) are gratefully acknowledged. We thank John Nolan (Department of Mathematics and Statistics, American University, Washington, DC) for graciously providing the Stable toolbox ([www.RobustAnalysis.com]{}). 
Reflecting Boundary Condition ============================= We demonstrate that \[hyperdiff\] restricted to the half-line $x>0$ is equivalent to a boundary-value problem with a *reflecting boundary condition* at $x=0$ [@baeumer2017diffeqs; @baeumer2017boundary]. Observe that since $h_{\gamma} (x,t)$ is a PDF with support on the half-line $x>0$, the total mass $\int_0^{\infty} u(x,t) \, dx$ on the half-line is a constant $\gamma$ for all times $t$. Write \[hyperdiff\] in conservation form $$\frac{\partial}{\partial t} u(x,t) = -\frac{\partial}{\partial x} q(x,t) \label{fde2cons}$$ where $q(x,t)$ is given by the fractional flux constitutive equation $$q(x,t) = C \frac{\partial^{\alpha -1}}{\partial (-x)^{\alpha -1}} u(x,t) , \label{flulxfunc}$$ which has been proposed for super-diffusion ($\alpha <2$) by Paradisi et al. [@paradisi2001fractional] and Schumer et al. [@schumer2001eulerian], and for hyperdiffusion ($\alpha > 2$) by Wei [@wei1999generalized] and Hu et al. [@hu2013high]. Due to the factor of $(-1)^n$ in the definition of the negative RL derivative, the spatial derivative of the order-$(\alpha -1)$ negative RL derivative is $-\frac{\partial^{\alpha}}{\partial (-x)^{\alpha}}$. Assuming that $u(x,t)$ is bounded for $t>0$, we have $$\frac{\partial}{\partial t} \int_0^{\infty} u(x,t) \, dx = - \int_0^{\infty} \frac{\partial}{\partial x} q(x,t) \, dx = q(0,t) , $$ where the flux is assumed to be zero at infinity. Mass conservation on $x>0$ yields the no-flux (or reflecting) boundary condition $$\frac{\partial^{\alpha -1}}{\partial (-x)^{\alpha -1}} u(0,t) = 0 , \label{nofluxbc}$$ which was studied by Baeumer et al. [@baeumer2017diffeqs]. [10]{} Z. Deng, L. Bengtsson, and V. P. Singh, , 451–475 (2006). M. S. Phanikumar, I. Aslam, C. Shen, D. T. Long, and T. C. Voice. , W05406 (2007). R. Haggerty, S. M. Wondzell, and M. A. Johnson, , 18-1–18-4 (2002). D. del Castillo-Negrete, , 082308 (2006). S. Fedotov and V. M[é]{}ndez, , 218102 (2008). J.-H. Jeon, V. Tejedor, S. Burov, E. Barkai, C. Selhuber-Unkel, K. Berg-S[ø]{}rensen, L. Oddershede, and R. 
Metzler, , 048103 (2011). R. Metzler and J. Klafter, , 1-77 (2000). R. Metzler and J. Klafter, , R161 (2004). M. M. Meerschaert, Y. Zhang, and B. Baeumer, , L17403 (2008). B. Baeumer and M. M. Meerschaert, , 2438–2448 (2010). A. Cartea and D. del-Castillo-Negrete, , 041105 (2007). B. Baeumer, M. M. Meerschaert, and E. Nane, , 1100–1115 (2009). J. F. Kelly and M. M. Meerschaert, , 3464–3475 (2017). V. M. Zolotarev, , 735–738 (1961). V. M. Zolotarev, (American Mathematical Soc., Providence, 1986). J. F. Kelly, M. M. Meerschaert, D. Bolster, J. D. Drummond, and A. I. Packman, , [**53**]{}, 1763-1776 (2017). X. Chen, J. Kang, C. Fu, and W. Tan, , 1-9 (2013). W. Tan, C. Fu, C. Fu, W. Xie, and H. Cheng, , 183901 (2007). U. Frisch, S. Kurien, R. Pandit, W. Pauls, S. S. Ray, A. Wirth, and J. Z. Zhu, , [**101**]{}, 144501 (2008). G. W. Wei, , 165-167 (1999). A. M. Tawfik, H. Fichtner, A. Elhanbaly, and R. Schlickeiser, , 178-187 (2018). M. M. Meerschaert and A. Sikorskii, (Walter de Gruyter, Berlin, 2012). A. A. Kilbas, H. M. Srivastava, and J. J. Trujillo. (Elsevier Science Limited, Amsterdam, 2006). F. Mainardi and Y. Luchko and G. Pagnini, , 153–192 (2001). B. Baeumer, M. Kov[á]{}cs, and H. Sankaranarayanan, , 813–834 (2015). P. A. Ullrich, D. R. Reynolds, J. E. Guerra, and M. A. Taylor, (2018). J. Klafter, S. C. Lim, and R. Metzler, (World Scientific, Singapore, 2012). M. Satoh, M. Matsuno, H. Tomita, H. Miura, T. Nasuno, and S. I.  Iga, , 3486–3514 (2008). M. A. Malkov and R. Z. Sagdeev, , 157 (2015). P. Chakraborty, M. M. Meerschaert, and C. Y. Lim. , W10415 (2009). Y. Zhang, M. M. Meerschaert, and R. M. Neupauer, , 2462–2473 (2016). L. Hu, D. Chen, and G. W. Wei, , 1-25 (2013). J. F. Kelly, H. Sankaranarayanan, and M. M. Meerschaert, , 1089-1107 (2019). B. Baeumer, M. Kov[á]{}cs, M. Meerschaert, R. Schilling, and P. Straka, , 227-248 (2016). P. M. Morse and H. Feshbach. (McGraw-Hill, New York, 1953). M. M. Meerschaert and P. Straka. , 1-16 (2013). G. B. 
Whitman, (Wiley, New York, 1974). G. Zaslavsky, , 110–122 (1994). B. Baeumer and M. M. Meerschaert, , 481-500 (2001). F. Mainardi. (Imperial College Press, London, 2010). M. M. Meerschaert and C. Tadjeran, , 80-90 (2006). B. Baeumer, M. Kov[á]{}cs, M. M. Meerschaert and H. Sankaranarayanan, , 408-424 (2018). B. Baeumer and M. M. Meerschaert, , 237-251 (2007). Q. Feng, , 214-220 (2018). F. Mainardi and M. Tomirotti, , 1311-1328 (1997). M. M. Meerschaert, R. L. Schilling, and A. Sikorskii, , 1685–1695 (2015). M. S. Alrawashdeh, J. F. Kelly, M. M. Meerschaert, and H.-P. Scheffler, , 892-905 (2017). S. Carnaffan and R. Kawai, , 245001 (2017). Z. Sadjadi, M. R. Shaebani, H. Rieger, and L. Santen, , 062715 (2015). J. P. Nolan, , 759–774 (1997). Y. Zhang, D. A. Benson, M. M. Meerschaert, E. M. LaBolle and H.-P. Scheffler, , 026706 (2006). K. B. Howell, (Chapman & Hall/CRC, Boca Raton, 2001). M. M. Meerschaert, D. A. Benson, and B. Baeumer, , 5026–5028 (1999). B. Baeumer, M. Kov[á]{}cs, and H. Sankaranarayanan. , 1377–1410 (2018). P. Paradisi, R. Cesari, F. Mainardi, and F. Tampieri, , 130–142 (2001). R. Schumer, D. A. Benson, M. M. Meerschaert, and S. W. Wheatcraft, , 69–88 (2001).
--- abstract: 'We introduce two new metrics of “simplicity” for knight’s tours: the number of turns and the number of crossings. We give a novel algorithm that produces tours with $9.5n+O(1)$ turns and $13n+O(1)$ crossings on an $n\times n$ board. We show lower bounds of $(6-\varepsilon)n$, for any $\varepsilon>0$, and $4n-O(1)$ on the respective problems of minimizing these metrics. Hence, we achieve approximation ratios of $19/12+o(1)$ and $13/4+o(1)$. We generalize our techniques to rectangular boards, high-dimensional boards, symmetric tours, odd boards with a missing corner, and tours for $(1,4)$-leapers. In doing so, we show that these extensions also admit a constant approximation ratio on the minimum number of turns, and on the number of crossings in most cases.' author: - | Juan Jose Besa$^*$ Timothy Johnson$^*$\ Nil Mamano$^*$ Martha C. Osegueda[^1]\ Department of Computer Science\ University of California, Irvine\ California, U.S.A.\ {jjbesavi,tujohnso,nmamano,mosegued}@uci.edu bibliography: - 'biblio.bib' title: | Taming the Knight’s Tour:\ Minimizing Turns and Crossings ---

Introduction
============

The game of chess is a fruitful source of mathematical puzzles. The puzzles often blend an appealing aesthetic with interesting and deep combinatorial properties [@watkins2012across]. An old and well-known problem is the knight’s tour problem. A *knight’s tour* on a generalized $n\times m$ board is a path through all $nm$ cells such that any two consecutive cells are connected by a “knight move” (Figure \[fig:knightmove\]). For a historic treatment of the problem, see [@rouse1974mathematical]. ![A knight moves one unit along one axis and two units along the other.[]{data-label="fig:knightmove"}](figures){width="0.25\linewidth"} A knight’s tour is *closed* if the last cell in the path is one knight move away from the first one. Otherwise, it is *open*. This paper focuses solely on closed tours, so henceforth we omit the distinction.
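The knight-move relation just defined is easy to state in code. The following minimal sketch (the function names are ours, not from the paper) tests whether two cells are one knight move apart and enumerates the knight-move neighbors of a cell on an $n\times m$ board.

```python
def is_knight_move(c1, c2):
    """True if cells c1 and c2 are one knight move apart:
    one unit along one axis and two units along the other."""
    di, dj = abs(c1[0] - c2[0]), abs(c1[1] - c2[1])
    return {di, dj} == {1, 2}

def knight_neighbors(cell, n, m):
    """All cells of an n x m board reachable from `cell` in one knight move."""
    i, j = cell
    deltas = [(1, 2), (2, 1), (-1, 2), (-2, 1),
              (1, -2), (2, -1), (-1, -2), (-2, -1)]
    return [(i + di, j + dj) for di, dj in deltas
            if 0 <= i + di < n and 0 <= j + dj < m]
```

For example, a corner cell of the standard $8\times 8$ board has only two knight-move neighbors, while a central cell has all eight; a closed tour is then a cyclic ordering of all cells in which consecutive cells satisfy `is_knight_move`.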
The knight’s tour problem is a special case of the Hamiltonian cycle problem, the problem of finding a simple cycle in a graph that visits all the nodes. Consider the graph with one node for each cell of the board and where nodes are connected if the corresponding cells are a knight move apart. The knight’s tour problem corresponds to finding a Hamiltonian cycle in this graph. We approach the knight’s tour problem in a novel way. Existing work focuses on the questions of existence, counting, and construction algorithms. In general, the goal of existing algorithms is to find *any* knight’s tour. We propose two new metrics that capture simplicity and structure in a knight’s tour, and set the goal of finding tours optimizing these metrics. We define the following optimization problems. We associate each cell in the board with a point $(i,j)$ in the plane, where $i$ is the row of the cell and $j$ is the column. Given a knight’s tour, a *turn* is *a triplet of consecutive cells with non-collinear coordinates.* \[pro:turns\] Given a rectangular $n\times m$ board such that a knight’s tour exists, find a knight’s tour with the smallest number of turns. Given a knight’s tour, a *crossing* occurs *when two line segments corresponding to moves in the tour intersect*. That is, if $\{c_1,c_2\}$ and $\{c_3,c_4\}$ are two distinct pairs of consecutive cells visited along the tour, a crossing happens if the open line segments $(c_1,c_2)$ and $(c_3,c_4)$ intersect. \[pro:crossings\] Given a rectangular $n\times m$ board such that a knight’s tour exists, find a knight’s tour with the smallest number of crossings. Knight’s tours are typically visualized by connecting consecutive cells by a line segment. Turns and crossings make the sequence harder to follow. Minimizing crossings is a central problem in *graph drawing*, the sub-field of graph theory concerned with the intelligible visualization of graphs (e.g., see the survey in [@herman2000graph]).
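Both metrics are straightforward to compute for a given closed tour. Below is a minimal sketch (our own helper names, not from the paper) that counts turns via a cross-product collinearity test and crossings via a proper-intersection test. Since a knight-move segment has direction $(\pm 1,\pm 2)$, it contains no interior lattice points, so two distinct moves of a tour can only meet at a shared endpoint or in a proper crossing; the proper test therefore suffices for knight's tours.

```python
def count_turns(tour):
    """Turns in a closed tour: cyclic triplets of consecutive cells
    whose coordinates are non-collinear."""
    n, turns = len(tour), 0
    for k in range(n):
        a, b, c = tour[k - 2], tour[k - 1], tour[k]
        # cross product of b-a and c-b; nonzero means the triplet turns at b
        if (b[0] - a[0]) * (c[1] - b[1]) - (b[1] - a[1]) * (c[0] - b[0]) != 0:
            turns += 1
    return turns

def _proper_cross(p1, p2, p3, p4):
    """True if the open segments (p1,p2) and (p3,p4) cross properly."""
    def orient(a, b, c):
        v = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
        return (v > 0) - (v < 0)
    return (orient(p3, p4, p1) * orient(p3, p4, p2) < 0 and
            orient(p1, p2, p3) * orient(p1, p2, p4) < 0)

def count_crossings(tour):
    """Crossings in a closed tour: pairs of moves whose open segments meet."""
    n = len(tour)
    segs = [(tour[k - 1], tour[k]) for k in range(n)]
    return sum(_proper_cross(*segs[i], *segs[j])
               for i in range(n) for j in range(i + 1, n))
```

Both functions work on any closed polygonal cycle, so small hand-made examples (an axis-aligned square: 4 turns, 0 crossings; a bowtie: 4 turns, 1 crossing) can be used to check them before feeding in an actual tour.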
Problem \[pro:crossings\] is the natural adaptation for knight’s tours. Problem \[pro:turns\] asks for the (self-intersecting) polygon with the smallest number of vertices that represents a valid knight’s tour.

Our contributions.
------------------

We propose a novel algorithm for finding knight’s tours with the following features.

- $9.5n+O(1)$ turns and $13n+O(1)$ crossings on an $n\times n$ board.

- A $19/12+o(1)$[^2] approximation factor on the minimum number of turns (Problem \[pro:turns\]).

- A $13/4+o(1)$ approximation factor on the minimum number of crossings (Problem \[pro:crossings\]).

- An $O(nm)$ running time on an $n\times m$ board, i.e., linear in the number of cells, which is optimal.

- The algorithm is fully parallelizable, in that it can be executed in $O(1)$ time with $O(nm)$ processors in the CREW PRAM model. More specifically, the cell at a given index in the tour sequence (or, conversely, the index of a given cell) can be determined in constant time, which implies the above.

- It can be generalized to most typical variations of the problem: high-dimensional boards, boards symmetric under 90 degree rotations, tours in boards with odd width and height that skip a corner cell, and tours for *giraffes*, which move one cell in one dimension and four in the other.

- The algorithm can be simulated by hand with ease. This is of particular interest in the context of recreational mathematics and mathematics outreach.

The paper is organized as follows. In Section \[sec:related\], we give an overview of the literature on the knight’s tour problem and its variants. We describe the algorithm in Section \[sec:methods\]. We prove the approximation ratios in Section \[sec:lower\]. Section \[sec:extensions\] deals with the mentioned extensions. We conclude in Section \[sec:conclusions\]. The tours produced by the algorithm can be generated interactively for different board dimensions at <https://www.ics.uci.edu/~nmamano/knightstour.html>.
Related Work {#sec:related}
------------

Despite being over a thousand years old [@watkins2012across], the knight’s tour problem is still an active area of research. We review the key questions considered in the literature.

#### Existence.

In rectangular boards, a tour exists as long as one dimension is even and the board size is large enough; no knight’s tour exists for dimensions $1\times n$, $2\times n$ or $4\times n$, for any $n\geq 1$ and, additionally, none exist for dimensions $3\times 6$ or $3\times 8$ [@schwenk1991rectangular]. In three dimensions or higher, the situation is similar: a tour exists only if at least one dimension is even and large enough [@demaio2007chessboards; @demaio2011prism; @erde2012closed]. In the case of open knight’s tours, a tour exists in two dimensions if both dimensions are at least $5$ [@cull1978knight; @conrad1994solution].

#### Counting.

The number of closed knight’s tours in an even-sized $n\times n$ board is at least $\Omega(1.35^{n^2})$ and at most $4^{n^2}$ [@kyek1997bounds]. The exact number of knight’s tours in the standard $8\times 8$ board is $26,534,728,821,064$ [@mckay1997knight]. Furthermore, algorithms for enumerating multiple [@shufelt1993generating] and enumerating all [@Alwan1992reentrant] knight’s tours have also been studied.

#### Algorithms.

Historically, greedy algorithms have been popular. The idea is to construct the tour in order, one step at a time, according to some heuristic selection rule. Warnsdorff’s rule and its refinements [@pohl1967method; @Alwan1992reentrant; @squirrel1996warnsdorff] work well in practice for small boards, but do not scale to larger boards [@parberry1996scalability]. The basic idea is to choose the next node with the fewest continuations. Interestingly, this heuristic can be effective in the more general Hamiltonian cycle problem [@pohl1967method]. To our knowledge, every efficient algorithm for arbitrary board sizes before this paper is based on a divide-and-conquer approach.
The tour is solved for a finite set of small, constant-size boards. Then, the board is covered by these smaller tours like a mosaic. The small tours are connected into a single one by swapping a few knight moves. This can be done in a bottom-up [@schwenk1991rectangular; @conrad1994solution; @demaio2007chessboards; @demaio2011prism; @kamcev2014generalised] or a top-down recursive [@parberry1997efficient; @lin2005optimal] fashion. This process is simple and can be done in time linear in the number of cells. Like our algorithm, these algorithms are highly parallelizable [@conrad1994solution; @parberry1997efficient]. This is because the tours are made of predictable repeating patterns. Divide-and-conquer is not suitable for finding tours with a small number of turns or crossings. Since each base solution has constant size, an $n\times n$ board is covered by $\Theta(n^2)$ of them, and each one contains turns and crossings. Thus, the divide-and-conquer approach necessarily results in $\Theta(n^2)$ turns and crossings. In contrast, our algorithm produces only $O(n)$ of each.

#### Extensions.

The above questions have been considered in related settings. Extensions can be classified into three categories, which may overlap:

- **Tours with special properties.** Our work can be seen as searching for tours with special properties. *Magic knight’s tours* are also in this category: tours such that the indices of each cell in the tour form a magic square (see [@beasley2012magic] for a survey). The study of symmetry in knight’s tours dates back at least to 1917 [@bergholt2001memoirs]. Symmetric tours under 90 degree rotations exist in $n\times n$ tours where $n\geq 6$ and $n$ is of the form $4k+2$ for some $k$ [@dejter1983symmetry]. Parberry extended the divide-and-conquer approach to produce tours symmetric under 90 degree rotations [@parberry1997efficient]. Jelliss provided results on which rectangular board sizes can have which kinds of symmetry [@jelliss1999symmetry].
Both of our proposed problems are new, but minimizing crossings is related to the *uncrossed knight’s tour problem*, which asks to find the longest sequence of knight moves without *any* crossings [@yarbrough1969uncrossed]. This strict constraint results in incomplete tours. This problem has been further studied in two [@jelliss1999intersecting; @fischer2006new] and three [@kumar2008noncrossing] dimensions.

- **Board variations.** Besides higher dimensions, knight’s tours have been considered on other boards, such as torus boards, where the top and bottom edges are connected, and the left and right edges are also connected. Any rectangular torus board has a closed tour [@watkins1997torus]. Another option is to consider boards with odd width and height. Since boards with an odd number of cells do not have tours, it is common to search for tours that skip a specific cell, such as a corner cell [@parberry1997efficient].

- **Move variations.** An $(i,j)$*-leaper* is a generalized knight that moves $i$ cells in one dimension and $j$ in the other (the knight is a $(1,2)$-leaper) [@nash1959abelian]. Knuth studied the existence of tours for general $(i,j)$-leapers in rectangular boards [@knuth1994leaper]. Tours for *giraffes* ($(4,1)$-leapers) were provided in [@dejter1983symmetry] using a divide-and-conquer approach. Chia and Ong [@chia2005generalized] study which board sizes admit generalized $(a,b)$-leaper tours. Kamčev [@kamcev2014generalised] showed that any board with sufficiently large and even size admits a $(2,3)$-, a $(2,5)$-, and an $(a,1)$-leaper tour for any even $a$, and generalized this to higher dimensions. Note that $a$ and $b$ are required to be coprime and not both odd, or no tour can exist [@kamcev2014generalised].

The Algorithm {#sec:methods}
=============

Given that one of the dimensions must be even for a tour to exist, we assume, without loss of generality, that the width $w$ of the board is even, while the height $h$ can be odd.
We also assume that $w\geq 16$ and $h\geq 12$. The construction still works for some smaller sizes, but may require tweaks to its most general form described here.

#### Quartet moves.

What makes the knight’s tour problem challenging is that knight jumps leave “gaps”. Our first crucial observation is that a quartet of four knights arranged in a square $2\times 2$ formation can move “like a king”: they can move horizontally, vertically, or diagonally without leaving any gaps (Figure \[fig:quartet\]). ![Quartet of knights moving in unison without leaving any unvisited squares. Note that, in a straight move, the starting and ending position of the quartet overlap because two of the knights remain in place.[]{data-label="fig:quartet"}](figures){width="0.65\linewidth"} By using the “formation moves” depicted in Figure \[fig:quartet\], four knights can easily cover the board moving vertically and horizontally while remaining in formation. Of course, the goal is to traverse the entire board in a single cycle, not four paths. We address this issue with special structures placed in the bottom-left and top-right corners of the board, which we call *junctions*, and which tie the paths together to create a single cycle. Note that using only straight formation moves leads to tours with a large number of turns and crossings. Fortunately, two consecutive diagonal moves in the same direction introduce no turns or crossings, so our main idea is to use as many diagonal moves as possible. This led us to the general pattern shown in Figure \[fig:mainconstr\]. The full algorithm is given in Algorithm \[alg:main\]. The formation starts at a junction at the bottom-left corner and ends at a junction at the top-right corner. To get from one to the other, it zigzags along an odd number of parallel diagonals, alternating between downward-right and upward-left directions.
The junctions in Figure \[fig:junctions\] have a *height*, which influences the number of diagonals traversed by the formation. At the bottom-left corner, we use a junction with height 5. At the top-right corner, we use a junction with height between 5 and 8. Choosing the height as in Algorithm \[alg:main\] guarantees that, for any board dimensions, an odd number of diagonals fit between the two junctions. Sequence $1$ in Figure \[fig:othercorners\], which we call the *heel*, is used to transition between diagonals along the horizontal edges of the board. The two non-junction corners may require special sequences of quartet moves, as depicted in Figure \[fig:othercorners\]. In particular, Sequences $1,2,3,$ and $0$ are used when the last heel ends $0,2,4,$ and $6$ columns away from the vertical edge, respectively. As with the height of the top-right junction, these variations are *predictable* because they cycle as the board dimensions grow[^3], so in Algorithm \[alg:main\] we give expressions for them in terms of $w$ and $h$. 1\. Fill the corners of the board as follows: Bottom-left: : first junction in Figure \[fig:junctions\]. Top-right: : junction of height $5+((w/2+h-1)\mbox{ mod }4)$ in Figure \[fig:junctions\] except the first one. Bottom-right: : Sequence $(w/2+2)\mbox{ mod }4$ in Figure \[fig:othercorners\]. Top-left: : Sequence $(3-h)\mbox{ mod }4$ in Figure \[fig:othercorners\] rotated 180 degrees. 2\. Connect the four corners using formation moves, by moving along diagonals from the bottom-left corner to the top-right corner as in Figure \[fig:mainconstr\]. To transition between diagonals: Vertical edges: : use a double straight up move (Figure \[fig:quartet\]). Horizontal edges: : use Sequence 1 in Figure \[fig:othercorners\]. ![Side by side comparison between the knight’s tour and the underlying quartet moves in a $30\times 30$ board. The arrows illustrate sequences of consecutive and equal formation moves. 
Starting from the bottom-left square of the board, the single knight’s tour follows the colored sections of the tour in the following order: red, green, yellow, purple, blue, orange, black, cyan, and back to red.[]{data-label="fig:mainconstr"}](figures){width="1\linewidth"} ![Junctions used in our construction.[]{data-label="fig:junctions"}](figures){width="0.99\linewidth"} ![The four possible cases for the bottom-right corner.[]{data-label="fig:othercorners"}](figures){width="0.75\linewidth"} Correctness ----------- It is clear that the construction visits every cell, and that every node in the underlying graph of knight moves has degree two. However, it remains to be argued that the construction is actually a single closed cycle. For this, we need to consider the choice of junctions. A junction is a pair of disjoint knight paths whose four endpoints are adjacent as in the quartet formation. Thus, the bottom-left junction connects the knights into two pairs. Denote the four knight positions in the formation by $tl, tr, bl, br$, where the first letter indicates top/bottom and the second left/right. We consider the three possible *positional matchings* with respect to these positions: horizontal matching ${ H}=(tl,tr), (bl,br)$, vertical matching ${ V}=(tl,bl), (tr,br)$, and cross matching ${ X}=(tl,br), (tr,bl)$. Let $\mathcal{M} = \{{ H}, { V}, { X}\}$ denote the set of positional matchings. We are interested in the effect of formation moves on the positional matching. A formation move does not change which knights are matched with which, but a non-diagonal move changes their positions, and thus their labels $tl, tr, bl, br$ also change. For instance, a horizontal matching becomes a cross matching after a straight move to the right. It is easy to see that a straight move upwards or downwards has the same effect on the positional matching. Similarly for left and right straight moves. 
Thus, we classify the formation moves in Figure \[fig:quartet\] (excluding double straight moves, which are a composition of two straight moves) into vertical straight moves ${ \updownarrow}$, horizontal straight moves ${ \leftrightarrow}$, and diagonal moves ${ \begin{turn}{45} \raisebox{-1ex}{$\leftrightarrow$} \end{turn} }$. Let $\mathcal{S}=\{{ \updownarrow},{ \leftrightarrow},{ \begin{turn}{45} \raisebox{-1ex}{$\leftrightarrow$} \end{turn} }\}$ denote the three types of quartet moves. We see each move type $s\in\mathcal{S}$ as a function $s:\mathcal{M}\to\mathcal{M}$ (see Table \[tab:perms\]). Note that the diagonal move ${ \begin{turn}{45} \raisebox{-1ex}{$\leftrightarrow$} \end{turn} }$ is just the identity. Given a sequence of moves $S=(s_1,\ldots,s_k)$, where each $s_i\in\mathcal{S}$, let $S(M)=s_1\circ\cdots \circ s_k(M)$. The move types ${ \updownarrow},{ \leftrightarrow},{ \begin{turn}{45} \raisebox{-1ex}{$\leftrightarrow$} \end{turn} }$ seen as functions are, in fact, permutations (Table \[tab:perms\]). It follows that *any* sequence of formation moves permutes the positional matchings, according to the composed permutation of each move in the sequence. There are six possible permutations of the three positional matchings, three of which correspond to the “atomic” formation moves ${ \begin{turn}{45} \raisebox{-1ex}{$\leftrightarrow$} \end{turn} },{ \updownarrow},$ and ${ \leftrightarrow}$. The other three permutations can be obtained by composing atomic moves, for instance, with the compositions ${ \updownarrow}{ \leftrightarrow}, { \leftrightarrow}{ \updownarrow},$ and ${ \updownarrow}{ \leftrightarrow}{ \updownarrow}$ (Table \[tab:perms\]). 
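This permutation bookkeeping is small enough to check by machine. The sketch below (our own encoding; names such as `VERT` are not from the paper) represents each move type as a permutation of the matchings $\{H, V, X\}$, using the actions stated in the text (the diagonal move is the identity, and a horizontal straight move turns a horizontal matching into a cross matching). Sequences are applied left to right. It verifies that the six listed sequences realize all six permutations of the matchings, i.e., the symmetric group on three elements, and that the heel sequence analyzed later in the section is neutral.

```python
from itertools import permutations

# Action of each move type on the positional matchings {H, V, X}.
DIAG = {'H': 'H', 'V': 'V', 'X': 'X'}    # diagonal move: identity
VERT = {'H': 'H', 'V': 'X', 'X': 'V'}    # vertical straight move
HORIZ = {'H': 'X', 'V': 'V', 'X': 'H'}   # horizontal straight move

def apply_seq(seq, matching):
    """Apply a sequence of moves, left to right, to a matching."""
    for move in seq:
        matching = move[matching]
    return matching

def as_tuple(seq):
    """The permutation induced by `seq`, as the images of (H, V, X)."""
    return tuple(apply_seq(seq, m) for m in 'HVX')

sequences = [[DIAG], [VERT], [HORIZ],
             [VERT, HORIZ], [HORIZ, VERT], [VERT, HORIZ, VERT]]
# The six sequences realize all six permutations of {H, V, X}: the group S_3.
assert {as_tuple(s) for s in sequences} == set(permutations('HVX'))

# The heel's straight moves (vertical/horizontal only) compose to the identity.
heel = [VERT, VERT, HORIZ, HORIZ, VERT, HORIZ, HORIZ, VERT]
assert as_tuple(heel) == ('H', 'V', 'X')
```

The heel check illustrates the cancellation argument used in the text: `VERT` and `HORIZ` are involutions, so consecutive equal moves cancel and the whole sequence collapses to the identity.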
Thus, any sequence of moves permutes the positional matchings in the same way as one of the sequences in the set $\{{ \begin{turn}{45} \raisebox{-1ex}{$\leftrightarrow$} \end{turn} }, { \updownarrow}, { \leftrightarrow}, { \updownarrow}{ \leftrightarrow}, { \leftrightarrow}{ \updownarrow}, { \updownarrow}{ \leftrightarrow}{ \updownarrow}\}$. This is equivalent to saying that this set, under the composition operation, forms a group isomorphic to the symmetric group of degree three. Table \[tab:cayley\] shows the Cayley table of this group.

  -----------------------------------------------------------------------------------------------------------------------------------------------------------
             $\nearrow$   $\updownarrow$   $\leftrightarrow$   $\updownarrow\leftrightarrow$   $\leftrightarrow\updownarrow$   $\updownarrow\leftrightarrow\updownarrow$
  ${ V}$     ${ V}$       ${ X}$           ${ V}$              ${ H}$                          ${ X}$                          ${ H}$
  ${ H}$     ${ H}$       ${ H}$           ${ X}$              ${ X}$                          ${ V}$                          ${ V}$
  ${ X}$     ${ X}$       ${ V}$           ${ H}$              ${ V}$                          ${ H}$                          ${ X}$
  -----------------------------------------------------------------------------------------------------------------------------------------------------------

  : Action of each move sequence on the positional matchings ($\nearrow$ abbreviates the diagonal move).[]{data-label="tab:perms"}
  -----------------------------------------------------------------------------------------------------------------------------------------------------------
  $\circ$                                     $\nearrow$                                  $\updownarrow$                              $\leftrightarrow$                           $\updownarrow\leftrightarrow$               $\leftrightarrow\updownarrow$               $\updownarrow\leftrightarrow\updownarrow$
  $\nearrow$                                  $\nearrow$                                  $\updownarrow$                              $\leftrightarrow$                           $\updownarrow\leftrightarrow$               $\leftrightarrow\updownarrow$               $\updownarrow\leftrightarrow\updownarrow$
  $\updownarrow$                              $\updownarrow$                              $\nearrow$                                  $\updownarrow\leftrightarrow$               $\leftrightarrow$                           $\updownarrow\leftrightarrow\updownarrow$   $\leftrightarrow\updownarrow$
  $\leftrightarrow$                           $\leftrightarrow$                           $\leftrightarrow\updownarrow$               $\nearrow$                                  $\updownarrow\leftrightarrow\updownarrow$   $\updownarrow$                              $\updownarrow\leftrightarrow$
  $\updownarrow\leftrightarrow$               $\updownarrow\leftrightarrow$               $\updownarrow\leftrightarrow\updownarrow$   $\updownarrow$                              $\leftrightarrow\updownarrow$               $\nearrow$                                  $\leftrightarrow$
  $\leftrightarrow\updownarrow$               $\leftrightarrow\updownarrow$               $\leftrightarrow$                           $\updownarrow\leftrightarrow\updownarrow$   $\nearrow$                                  $\updownarrow\leftrightarrow$               $\updownarrow$
  $\updownarrow\leftrightarrow\updownarrow$   $\updownarrow\leftrightarrow\updownarrow$   $\updownarrow\leftrightarrow$               $\leftrightarrow\updownarrow$               $\updownarrow$                              $\leftrightarrow$                           $\nearrow$
  -----------------------------------------------------------------------------------------------------------------------------------------------------------

  : Cayley table for the group of positional matching permutations ($\nearrow$ abbreviates the diagonal move).[]{data-label="tab:cayley"}

Let $T_{w,h}$ be the sequence of formation moves that goes from the bottom-left junction to the top-right one in Algorithm \[alg:main\] in a $w\times h$ board. \[lem:seq\] For any even $w\geq 16$ and any $h\geq 12$, $T_{w,h}({ H})={ H}$. We show that the entire sequence of moves $T_{w,h}$ is either neutral or equivalent to a single vertical move, depending on the board dimensions. According to Table \[tab:perms\], this suffices to prove the lemma. The sequence $T_{w,h}$ consists mostly of diagonal moves, which are neutral. The transitions between diagonals along the vertical edges consist of two vertical moves each, which are also neutral (${ \updownarrow}{ \updownarrow}=\nearrow$).
The heel is also neutral, as it consists of the sequence ${ \updownarrow}{ \updownarrow}{ \leftrightarrow}{ \leftrightarrow}{ \updownarrow}{ \leftrightarrow}{ \leftrightarrow}{ \updownarrow}$ (omitting diagonal moves), which is again equivalent to ${ \begin{turn}{45} \raisebox{-1ex}{$\leftrightarrow$} \end{turn} }$. This is easy to see by noting that any two consecutive vertical or horizontal moves cancel out. It is depicted in detail in Figure \[fig:neutralheel\]. Thus, $T_{w,h}$ reduces to composing the sequences in the bottom-right and top-left corners. As mentioned, Sequence 1 (the heel) is neutral. It is easy to see that the other sequences (counting each part of Sequence 2 separately) are each equivalent to ${ \updownarrow}$. Thus, we get that $T_{w,h}$ is simply the composition of zero to four vertical moves, depending on the width and height of the board. This further simplifies to zero or one vertical move. ![Visualization of how the heel permutes the position of the knights. Note that the sequence of moves flips the columns of the knights (the knight in position $tl$ moves to $tr$ and so on). However, this does not affect their positional matching. For instance, if the knights were paired in a horizontal matching, after flipping the columns, they are still in a horizontal matching. The same holds for vertical and cross matchings. On a separate note, the 32 crossings in the heel are marked with white disks.[]{data-label="fig:neutralheel"}](figures){width="0.85\linewidth"} \[thm:correctness\] Algorithm \[alg:main\] outputs a valid knight’s tour in any board with even width $w\geq 16$ and height $h\geq 12$. Clearly, the construction is a set of disjoint cycles in the underlying knight-move graph. We prove that it is actually one cycle. Given a set of disjoint cycles in a graph, *contracting* a node in one of the cycles is the process of removing it and connecting its two neighbors in the cycle.
Clearly, contracting a node in a cycle of length $\geq 3$ does not change the number of cycles. Thus, consider the remaining graph if we contract all the nodes except the four endpoints of the top-right junction. Note that we use a horizontal matching in the bottom-left junction and a vertical matching in the top-right junction. Contracting the non-endpoint nodes inside the top-right junction leaves the two edges corresponding to the vertical matching. By Lemma \[lem:seq\], contracting the nodes outside the top-right junction leaves the edges corresponding to a horizontal matching. Thus, the resulting graph is a single cycle of four nodes. Note that the choice of matchings at the junctions is important; using a horizontal matching in the top-right junction would not result in a knight’s tour. Lower Bounds and Approximation Ratios {#sec:lower} ===================================== In this section, we analyze the approximation ratio that our algorithm achieves for Problem \[pro:turns\] and Problem \[pro:crossings\]. For simplicity, we restrict the analysis to square boards. First, we briefly discuss the classification of these problems in complexity classes. Computational Complexity ------------------------ Consider the following decision versions of the problems: is there a knight’s tour on an $n\times n$ board with at most $k$ turns (resp. crossings)? We do not know if these problems are in $\mathsf{P}$. Furthermore, it may depend on how the input is encoded. Technically, the input consists of two numbers, $n$ and $k$, which can be encoded in $O(\log n+\log k)$ bits. However, it is more natural to do the analysis as a function of the size of the board (or, equivalently, of the underlying graph on which we are solving the Hamiltonian Cycle problem), that is, $\Theta(n^2)$. It is plausible that the optimal number of turns (resp. crossings) is a simple, arithmetic function of $n$. 
This would be the case if the optimal tour follows a predictable pattern like our construction (note that we can count the number of turns or crossings of our algorithm without constructing it). If that is the case, then the problems are in $\mathsf{P}$, regardless of how the input is encoded. If the input is represented using $\Theta(n^2)$ space, the problems are clearly in $\mathsf{NP}$, as a tour with $k$ turns/crossings acts as a certificate of polynomial length. However, unless $\mathsf{P}=\mathsf{NP}$, the problems are not $\mathsf{NP}$-hard. To see this, consider the language $$\{1^n01^k\mid \mbox{ there is a tour with at most }k\mbox{ turns in an }n\times n\mbox{ board}\},$$ and analogously for crossings. These languages are sparse, meaning that, for any given word length, there is a polynomial number of words of that length in the language. Mahaney’s theorem states that if a sparse language is $\mathsf{NP}$-complete, then $\mathsf{P}=\mathsf{NP}$ [@MAHANEY1982130]. This suggests that the problems are in $\mathsf{P}$, though technically they could also be $\mathsf{NP}$-intermediate. If the input is represented using $O(\log n+\log k)$ bits, then the problems are in $\mathsf{NEXP}$ because the “unary” versions above are in $\mathsf{NP}$. Note that, in this setting, simply listing a tour would require time exponential in the input size. Number of Turns --------------- All the turns in our construction happen near the edges. The four corners account for a constant number of turns. The left and right edges have eight turns for every four rows. As can be seen in Figure \[fig:neutralheel\], the heel has 22 turns, so the top and bottom edges have 22 turns each for every eight columns. Therefore, the number of turns in our construction is bounded by $2\frac{8}{4}n+2\frac{22}{8}n+O(1)=9.5n+O(1)$. #### Lower bound. We now give a lower bound on the number of turns in the optimal tour. First, note that every cell next to an edge *must* contain a turn. 
This accounts for $4n-4$ turns. A simple argument, sketched in Appendix \[app\], improves this to a $4.25n-O(1)$ lower bound. Here we focus on the main result, a lower bound of $(6-\varepsilon)n$ for any $\varepsilon>0$. We start with some intermediate results. We associate each cell in the board with a point $(i,j)$ in the plane, where $i$ is the row of the cell and $j$ is the column. An edge cell has only four moves available. We call the directions of these moves $D_1,D_2,D_3,$ and $D_4$, in clockwise order. For an edge cell $c$, let $r_i(c)$, with $1\leq i\leq 4$, denote the ray starting at $c$ and in direction $D_i$. That is, the ray that passes through the cells reachable from $c$ by moving along $D_i$. Let $a$ and $b$ be two cells along the left edge of the board, with $a$ above $b$. The discussion is symmetric for the other three edges. Given two intersecting rays $r$ and $r'$, one starting from $a$ and one from $b$, let $S(r,r')$ denote the set of cells in the region of the board *bounded* by $r$ and $r'$: the set of cells below or on $r$ and above or on $r'$. We define the *crown* of $a$ and $b$ as the following set of cells (see Figure \[fig:crown\]): $$\mbox{crown}(a,b)=S(r_2(a),r_1(b))\cup S(r_3(a),r_2(b))\cup S(r_4(a),r_3(b)).$$ ![The black leg would collide with all the red legs.[]{data-label="fig:collisions"}](figures){width="0.71\linewidth"} ![The black leg would collide with all the red legs.[]{data-label="fig:collisions"}](figures){width="0.33\linewidth"} We can associate, with each edge cell $c$, the two maximal sequences of moves without turns in the tour that have $c$ as an endpoint. We call them the *legs* of $c$. We say that legs begin at $c$ and end at their other endpoint. We say two legs of different cells *collide* if they end at the same cell. Let $C_{a,b}$ denote the set of edge cells along the left edge between $a$ and $b$ ($a$ and $b$ included). The following is easy to see. 
Any collision between the legs of edge cells in $C_{a,b}$ happens inside $\mbox{crown}(a,b)$. We say that a leg of a cell in $C_{a,b}$ *escapes* the crown of $a$ and $b$ if it ends outside the crown. We say an edge cell in $C_{a,b}$ is *clean*, with respect to $C_{a,b}$, if both of its legs escape. We use the following observation. Let $m=|C_{a,b}|$ and $k$ be the number of clean cells in $C_{a,b}$. The number of turns inside $\mbox{crown}(a,b)$ is *at least* $m+(m-k)/2$. Each edge cell accounts for one turn. Further, each of the $m-k$ non-clean cells has a leg that ends in a turn inside the crown. Two non-clean cells may share such a turn, when their legs collide inside the crown. Thus, there is at least one turn for every two non-clean edge cells. To obtain the lower bound, we show that there is only a constant number of clean cells inside a crown. Let $a,b$ be two cells along the left edge of the board, with $a$ above $b$. There are at most 122 clean cells inside $\mbox{crown}(a,b)$. First we show that there are at most 60 clean cells such that one of their legs goes in direction $D_1$. For the sake of contradiction, assume that there are at least 61. Then, there are two, $c$ and $d$, such that $c$ is $60r$ rows above $d$, for some $r\in\mathbb{N},r\geq 1$. The contradiction follows from the fact that the other leg of $c$, which goes along $D_2,D_3,$ or $D_4$, would collide with the leg of $d$ along $D_1$. This is because, for any $l\geq 1$, the leg of $d$ along $D_1$ collides with (see Figure \[fig:collisions\]): - any leg along $D_2$ starting from a cell $3l$ rows above $d$, - any leg along $D_3$ starting from a cell $5l$ rows above $d$, and - any leg along $D_4$ starting from a cell $4l$ rows above $d$. Since $60r$ is a multiple of $3,4,$ and $5$, no matter what direction the other leg of $c$ goes, it collides with the leg of $d$. As observed, this collision happens inside the crown. Thus, $c$ and $d$ are not clean. 
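The collision pattern invoked above (multiples $3l$, $5l$, and $4l$ for $D_2$, $D_3$, and $D_4$, with $\mathrm{lcm}(3,4,5)=60$ explaining the bound of 60 cells) can be checked by intersecting the rays parametrically. The sketch below is our illustration; in particular, the concrete direction vectors (rows growing downward, $D_1=(-2,+1)$ through $D_4=(+2,+1)$ in clockwise order) are an assumed convention, not taken from the text.

```python
from fractions import Fraction

# Assumed convention: rows grow downward, columns grow rightward; the
# four moves available to a left-edge cell, labeled clockwise.
D1, D2, D3, D4 = (-2, 1), (-1, 2), (1, 2), (2, 1)

def rays_collide(m, d_low, d_up):
    """True if the ray from cell (0, 0) in direction d_low meets the ray
    from the cell (-m, 0), m rows above it, in direction d_up at a
    lattice cell (i.e., at positive integer step counts t and s)."""
    (a1, a2), (b1, b2) = d_low, d_up
    det = -a1 * b2 + a2 * b1
    if det == 0:                   # parallel rays never meet
        return False
    t = Fraction(m * b2, det)      # Cramer's rule on t*d_low - s*d_up = (-m, 0)
    s = Fraction(m * a2, det)
    return t.denominator == 1 and s.denominator == 1 and t > 0 and s > 0

def collision_multiple(d_up):
    """Smallest row distance at which the D1-leg of the lower cell meets
    the d_up-leg of the upper cell."""
    return next(m for m in range(1, 1000) if rays_collide(m, D1, d_up))
```

Two rays meet at a cell exactly when this linear system has a positive integer solution, which is why the multiples $3$, $5$, and $4$ are the determinants of the corresponding direction pairs.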
By a symmetric argument, there are at most 60 clean cells such that one of their legs goes in direction $D_4$. Finally, note that there can be at most two clean cells with legs in directions $D_2$ *and* $D_3$. This is because, by a similar argument, there cannot be two such cells at an even distance from each other; the leg along $D_3$ of the top one would collide with the leg along $D_2$ of the bottom one. \[cor:122\] Suppose that the crown of $a$ and $b$ has $m\geq 122$ edge cells. Then, there are at least $(m-122)/2$ turns inside the crown at non-edge cells. Now, consider the iterative process depicted in Figure \[fig:fractal\], defined over the unit square. The square is divided into four sectors along its main diagonals. Whereas earlier we used the term ‘crown’ to denote a set of cells, here we use it to denote the polygon with the *shape* of a crown. In the first step, a maximum-size crown is placed on each sector. At step $i>1$, we place $2^{i-1}$ more crowns in each sector. They are maximum-size crowns, subject to being disjoint from previous crowns, placed in each gap between previous crowns and in the gaps between the crowns closest to the corners and the main diagonals. ![Each sector of the square shows the process after a different number of iterations: $1,2,3,$ and $4$ iterations on the top, right, bottom, and left sectors, respectively.[]{data-label="fig:fractal"}](figures){width="0.55\linewidth"} \[lem:fractal\] For any $1>\varepsilon>0$, there exists an $i\in\mathbb{N}$ such that at least $(1-\varepsilon)$ of the boundary of the unit square is inside a crown after $i$ iterations of the process. At each iteration, a constant fraction larger than $0.36$ of the length on each side that is not in a crown is added to a new crown (Figure \[fig:geometry\]). 
This gives rise to a sequence $A_i$ for the fraction of the side inside crowns after $i$ iterations: $A_1=1/3$, $A_{i+1}> A_{i}+0.36(1-A_{i})$ for $i\geq 1$; this sequence converges to $1$ (we do not prove that $A_1=1/3$, but this is inconsequential because the value of $A_1$ does not affect the convergence of the sequence). ![Lower bounds on two ratios. **Left:** the ratio between the gap between consecutive crowns and the base of the maximum-size crown that fits in the gap is $>0.4$. **Right:** the ratio between the gap between a crown and a main diagonal and the base of the maximum-size crown that fits in the gap is $>0.36$.[]{data-label="fig:geometry"}](figures){width="0.4\linewidth"} \[thm:lowerbound\] For any constant $\varepsilon>0$, there is a sufficiently large $n$ such that any knight’s tour on an $n\times n$ board requires $(6-\varepsilon)n$ turns. We show a seemingly weaker form of the claim: that there is a sufficiently large $n$ such that any knight’s tour on an $n\times n$ board requires $(6-2\varepsilon)n-C_\varepsilon$ turns, where $C_\varepsilon$ is a constant that depends on $\varepsilon$ but not on $n$. This weaker form is in fact equivalent because, for sufficiently large $n$, $C_\varepsilon<\varepsilon n$, and hence $(6-2\varepsilon)n-C_\varepsilon>(6-3\varepsilon)n$. Thus, the claim is equivalent up to a multiplicative factor in $\varepsilon$, but note that it is a claim about arbitrarily small $\varepsilon$, so it is not affected by a multiplicative factor loss. Let $i$ be the smallest number of iterations of the iterative process in Figure \[fig:fractal\] such that at least $(1-\varepsilon)$ of the boundary of the unit square is inside crown shapes. The number $i$ exists by Lemma \[lem:fractal\]. Fix $S$ to be the corresponding set of crown shapes, and $r=|S|$. Note that $r=4(2^i-1)$ is a constant that depends only on $\varepsilon$. Now, consider a square $n\times n$ board with the crown shapes in $S$ overlaid on top of it. 
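As a numerical aside, the recurrence from Lemma \[lem:fractal\] can be iterated directly to see how fast the covered fraction approaches $1$; the constants $A_1=1/3$ and the rate $0.36$ are taken from the lemma, while the code itself is our illustration.

```python
def iterations_for_coverage(eps, a1=1 / 3.0, rate=0.36):
    """Iterate the pessimistic bound A_{i+1} = A_i + rate*(1 - A_i),
    starting from A_1 = a1, until the covered fraction of each side of
    the unit square is at least 1 - eps.  Returns (iterations, coverage)."""
    a, i = a1, 1
    while a < 1.0 - eps:
        a += rate * (1.0 - a)   # each new crown covers >36% of the gap
        i += 1
    return i, a
```

The uncovered fraction shrinks geometrically by a factor of $0.64$ per iteration, so $i$, and hence the constant $r=4(2^i-1)$, grows only logarithmically in $1/\varepsilon$.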
Let the board size $n$ be such that the smallest crown in $S$ contains more than $122$ edge cells. Then, by Corollary \[cor:122\], adding up the turns at non-edge cells over all the crowns in $S$, we get at least $4n(1-\varepsilon)/2-61r$ turns. Adding the $4n-4$ turns at edge cells, we get that the total number of turns is at least $(6-2\varepsilon)n-61r-4$. To complete the proof, consider $C_\varepsilon=61r+4$. Algorithm \[alg:main\] achieves a $19/12+o(1)$ approximation on the minimum number of turns. In an $n\times n$ board, let $ALG(n)$ denote the number of turns in the tour produced by Algorithm \[alg:main\] and $OPT(n)$ denote the minimum number of turns. Let $\varepsilon>0$ be an arbitrarily small constant. We show that there is an $n_0$ such that for all $n\geq n_0$, $ALG(n)/OPT(n)<19/12+\varepsilon$. As mentioned, for any even $n\geq 16$, $ALG(n)<9.5n+c$ for some small constant $c$. In addition, by Theorem \[thm:lowerbound\], for sufficiently large $n$, $OPT(n)>(6-\varepsilon)n$. Thus, $$\frac{ALG(n)}{OPT(n)}<\frac{9.5n+c}{(6-\varepsilon)n}$$ Furthermore, for large enough $n$, $c/((6-\varepsilon)n)<\varepsilon/2$, so $$\frac{ALG(n)}{OPT(n)}<\frac{9.5}{6-\varepsilon}+\frac{\varepsilon}{2}=\frac{19}{12-2\varepsilon}+\frac{\varepsilon}{2}<\frac{19+6\varepsilon}{12}+\frac{\varepsilon}{2}=\frac{19}{12}+\varepsilon.$$ Number of Crossings ------------------- Similarly to the case of turns, all the crossings in our construction happen near the edges. The four corners account for a constant number of crossings. The left and right edges have 10 crossings for every four rows. The top and bottom edges have 32 crossings for every eight columns (Figure \[fig:neutralheel\]). Therefore, the number of crossings in our construction is bounded by $2\frac{10}{4}n+2\frac{32}{8}n+O(1)=13n+O(1)$. #### Lower bound. {#lower-bound.-1 .unnumbered} We prove the following lower bound on the number of crossings. Any knight’s tour on an $n \times n$ board has at least $4n - O(1)$ crossings. 
Let $T$ be an arbitrary knight’s tour on an $n\times n$ board. We show that $T$ has $n-O(1)$ crossings involving knight moves incident to the cells along the left edge of the board. An analogous argument holds for the three other edges of the board, which combined yield the desired bound. We partition the edge cells along the left-most column into sets of three consecutive cells, which we call *triplets* (if $n$ is not a multiple of three, we ignore any remaining cells, as they only account for a constant number of crossings). Two triplets are *adjacent* if they contain adjacent cells. Each triplet has six associated knight moves in the tour $T$, two for each of its cells. We call the choice of moves the *configuration* of the triplet. Since there are $\binom{4}{2} = 6$ choices of moves for each cell, there are $6^3 = 216$ possible configurations of each triplet. Consider a weighted directed graph $G$ with a node for each of the $216$ possible triplet configurations and an edge from every node to every node, including a loop from each node to itself. The graph has weights on both vertices and edges. Given a node $v$, let $C(v)$ denote its associated configuration. The weight of $v$ is the number of crossings between moves in $C(v)$. The weight of each edge $v\rightarrow u$ is the number of crossings between moves in $C(v)$ and moves in $C(u)$ when $C(v)$ is adjacent and above $C(u)$. Each path in $G$ represents a choice of move configurations for a sequence of consecutive triplets. Note that if two knight moves in $T$ with endpoints in edge cells cross, the edge cells containing the endpoints are either in the same triplet or in adjacent triplets. Thus, the sum of the weights of the vertices and edges in the path equals the total number of crossings among all of these moves. Since $G$ is finite, any sufficiently long path eventually repeats a vertex. Given a cycle $c$, let $w(c)$ be the sum of the weights of nodes and edges in $c$, divided by the length of $c$. 
Let $c^*$ be the cycle in $G$ minimizing $w$. Then, $w(c^*)$ is a lower bound on the number of crossings per triplet along the edge. By examining $G$, we can see that $w(c^*)=3$. Figure \[fig:mincrossings\] shows an optimum cycle, which in fact uses only one triplet configuration. The cycle minimizing $w$ can be found using Karp’s algorithm for finding the minimum mean weight cycle in directed graphs [@karp1978characterization], which runs in $O(|V|\cdot|E|)$ time in a graph with vertex set $V$ and edge set $E$. However, this requires modifying the graph $G$, as Karp’s algorithm is not suitable for graphs that also have node weights. We transform $G$ into a directed graph $G'$ with weights on only the edges and which preserves the optimal solution, as follows. We double each node $v$ in $G$ into two nodes $v_{in}, v_{out}$ in $G'$. We also add an edge $v_{in} \rightarrow v_{out}$ in $G'$ with weight equal to the weight of $v$ in $G$. For each edge $v\rightarrow u$ in $G$, we add an edge $v_{out}\rightarrow u_{in}$ in $G'$. Since we only counted crossings between moves incident to the first column, a question arises of whether the lower bound can be improved by considering configurations spanning more columns (e.g., the two or three leftmost columns). The answer is negative for any constant number of columns. Figure \[fig:mincrossings\] shows that the edges can be extended to paths that cover any fixed number of columns away from the edge without increasing the number of crossings. ![A configuration pattern that produces the minimum number of crossings along the edge of the board. The moves in the triplet configurations are shown in black. The dashed continuations illustrate that the moves in the configuration pattern can be extended to any number of columns without extra crossings.[]{data-label="fig:mincrossings"}](figures){width="0.22\linewidth"} Algorithm \[alg:main\] achieves a $13/4+o(1)$ approximation on the minimum number of crossings. 
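The node-splitting transformation and the minimum mean cycle computation described above can be sketched as follows; the graph encoding and the toy example are our own, not the actual 216-configuration graph.

```python
INF = float("inf")

def split_node_weights(node_w, edges):
    """Turn a digraph with node weights (dict v -> w) and edge weights
    (dict (u, v) -> w) into a purely edge-weighted digraph by doubling
    each node v into (v, 'in') and (v, 'out')."""
    g = {}
    for v, w in node_w.items():
        g[(v, "in"), (v, "out")] = w   # this edge carries the node weight
    for (u, v), w in edges.items():
        g[(u, "out"), (v, "in")] = w   # original edge of G
    return g

def min_mean_cycle(edge_w):
    """Karp's minimum mean cycle algorithm on an edge-weighted digraph
    given as a dict (u, v) -> weight.  Assumes every node is reachable
    from the first node in sorted order."""
    nodes = sorted({x for e in edge_w for x in e})
    n = len(nodes)
    idx = {v: i for i, v in enumerate(nodes)}
    # d[k][v] = minimum weight of a walk with exactly k edges ending at v.
    d = [[INF] * n for _ in range(n + 1)]
    d[0][0] = 0.0
    for k in range(1, n + 1):
        for (u, v), w in edge_w.items():
            if d[k - 1][idx[u]] + w < d[k][idx[v]]:
                d[k][idx[v]] = d[k - 1][idx[u]] + w
    best = INF
    for v in range(n):
        if d[n][v] == INF:
            continue
        worst = max((d[n][v] - d[k][v]) / (n - k)
                    for k in range(n) if d[k][v] < INF)
        best = min(best, worst)
    return best
```

A cycle of length $L$ in $G$ becomes a cycle with $2L$ edges in $G'$, so the crossings-per-triplet value $w(c)$ is twice the mean edge weight that `min_mean_cycle` returns.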
Extensions {#sec:extensions} ========== The idea of using formation moves to cover the board and junctions to close the tour is quite robust to variations of the problem. We show how this can be done in some of the most popular generalizations of the problem. A variant that we do not consider is torus boards (where opposite edges are connected). The problem of finding tours with a small number of turns seems easier on a torus board, because one is not forced to make a turn when reaching an edge. Nonetheless, in a square $n\times n$ torus board, $\Omega(n)$ turns are still required, because making $n$ consecutive moves in the same direction brings the knight back to the starting position, so at least one turn is required for each $n$ visited cells. The tours for torus boards in [@watkins1997torus] match this lower bound up to constant factors, at least for some board dimensions (see the last section in [@watkins1997torus]). Crossings are not straightforward to define in torus boards. High-dimensional boards {#sec:3d} ----------------------- We extend our technique to three and higher dimensions. In $d$ dimensions, a knight moves by $1$ and $2$ cells along any two dimensions, and leaves the remaining coordinates unchanged. A typical technique to extend a knight’s tour algorithm to three dimensions is the “layer-by-layer” approach [@watkins2012across; @demaio2007chessboards; @demaio2011prism]: a 2D tour is reused on each level of the 3D board, adding the minimal required modifications to connect them into a single tour. We also follow this approach. (Watkins and Yulan [@yulan2006boxes] consider a generalization of knight moves where the third coordinate also has a positive offset, but this is not as common.) For illustration purposes, we start with the 3D case, and later extend it to the general case. We require one dimension to be even and $\geq 16$ and another dimension to be $\geq 12$, which we assume w.l.o.g. to be the first two. The rest can be any size. 
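The $d$-dimensional knight move just defined is easy to enumerate; the helper below is our illustration (the paper gives no code). Picking the two active dimensions and the signed offsets yields $8\binom{d}{2}=4d(d-1)$ moves.

```python
from itertools import combinations, product

def knight_moves(d):
    """All d-dimensional knight moves: one coordinate changes by +-1,
    another by +-2, and the rest stay at 0."""
    moves = set()
    for i, j in combinations(range(d), 2):    # the two active dimensions
        for a, b in ((1, 2), (2, 1)):         # which one moves by 2
            for sa, sb in product((1, -1), repeat=2):
                m = [0] * d
                m[i], m[j] = sa * a, sb * b
                moves.add(tuple(m))
    return moves

# Every move changes the coordinate sum by an odd amount, so the move
# graph is bipartite; a closed tour therefore needs an even cell count,
# i.e., at least one even dimension.
```

For $d=2$ this recovers the usual 8 knight moves, and for $d=3$ it gives 24.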
Note that at least one dimension must be even, or no tour exists [@erde2012closed]. The construction works as follows. The 2D construction is reused at each level. However, there are only two actual junctions, one on the first layer, of height 5, and one on the last layer, which may have any of the four heights in Figure \[fig:junctions\]. Every other junction is replaced by a sequence of formation moves. At every layer except the last, the formation ends adjacent to the corner using one of the sequences of moves in Figure \[fig:3dcorners\] (note that we show sequences for 4 different heights, thus guaranteeing that one shape fits for any board dimensions). The layers are connected with a formation move one layer up and two cells to the side, as in Figure \[fig:3djump\]. At every layer except the first, the formation starts with the rightmost sequence of moves in Figure \[fig:3dcorners\]. A full example is illustrated in Figure \[fig:262626tour\]. ![Corners where the knights stay in formation and end at specific positions.[]{data-label="fig:3dcorners"}](figures){width="0.99\linewidth"} ![Formation move across layers. Each color shows the starting and ending position of one of the knights.[]{data-label="fig:3djump"}](figures){width="0.35\linewidth"} Note that, since the sequence of moves between junctions is more involved than in two dimensions, Lemma \[lem:seq\] may not hold. There is, however, an easy fix: if the entire sequence is not a single cycle, replace the first junction with one that has a vertical matching (second junction in Figure \[fig:junctions\], rotated 180 degrees). This then makes a cycle. If the number of dimensions is higher than three, simply observe that the same move used between levels can also be used to jump to the next dimension; instead of changing by $1$ the third coordinate, change the fourth. After the first such move, the formation will be at the “top” of the second 3D board, which can be traversed downwards. 
This can be repeated any number of times, and generalizes to any number of dimensions. Note that in an $n^d$ board, $\Omega(n^{d-1})$ turns are needed, because there are $n^d$ cells and a turn must be made after at most $n/2$ moves. Note that our construction has $O(n^{d-1})$ turns, as it consists of $n^{d-2}$ iterations of the 2D tour. Thus, it achieves a constant approximation ratio on the minimum number of turns. We do not know of any lower bound on the number of crossings in higher dimensions. Odd boards {#sec:odd} ---------- We show how to construct a tour for a 2D board with odd dimensions which visits every cell except a corner cell. This is used in the next section to construct a tour that is symmetric under $90\degree$ rotations. Let the board dimensions be $w\times h$, where $w>16$ and $h>12$ are both odd. First, we use Algorithm \[alg:main\] to construct a $(w-1)\times h$ tour, which leaves the leftmost column uncovered. Then, we extend our tour to cover this column, except the bottom cell, with the variations of our construction depicted in Figure \[fig:oddadaptation\]. In particular, for the top-left corner, recall that we use sequence $(3-h)\mbox{ mod }4$ in Figure \[fig:othercorners\]. Here, the height $h$ is odd, so we only need adaptations for Sequences 2 and 0. ![Adaptations required to add a column to the left of the normal construction, with a missing cell in the junction.[]{data-label="fig:oddadaptation"}](figures){width="0.8\linewidth"} 90 Degree Symmetry {#sec:symmetry} ------------------ In this section, we show how to construct a symmetric tour under 90 degree rotations. We say a tour is symmetric under a given geometric operation if the tour looks the same when the operation is applied to the board. As a side note, our construction is already nearly symmetric under $180\degree$ rotations. For board dimensions such as $30\times 30$ where opposite corners have the same shape, the only asymmetry is in the internal wiring of the junctions. 
However, the construction cannot easily be made fully symmetric. It follows from the argument in the proof of Lemma \[lem:seq\] that if the two non-junction corners are equal, the entire sequence of formation moves from one junction to the other is neutral. Thus, using the same junction in both corners, as required to have symmetry, would result in two disjoint cycles. Symmetric tours under $90\degree$ rotations exist only for square boards where the size $n=4k+2$ is even but not a multiple of four [@dejter1983symmetry]. In [@parberry1997efficient], Parberry gives a construction for knight’s tours missing a corner cell and then shows how to combine four such tours into a single tour symmetric under $90\degree$ rotations. We follow the same approach to obtain a symmetric tour with a number of turns and crossings linear in $n$, and thus constant approximation ratios. In our construction from Section \[sec:odd\], cell $(0,0)$ is missing, and edge $e=\{(0,1),(2,0)\}$ is present. This suffices to construct a symmetric tour. Divide the $n\times n$ board into four equal quadrants, each of which is a square board with odd side length $n/2$. Use the construction for odd boards to fill each quadrant, rotated so that the missing cell is in the center. Finally, connect all four tours as in Figure \[fig:symmetry\]. ![This transformation appears in [@parberry1997efficient]. **Left:** four tours missing a corner square and containing a certain edge. The dashed lines represent the rest of the tour in each quadrant, which cover every square except the dark square. **Right:** single tour that is symmetric under $90\degree$ rotations. The numbers on the right side indicate the order in which each part of the tour is visited, showing that the tour is indeed a single cycle.[]{data-label="fig:symmetry"}](figures){width="0.6\linewidth"} Giraffe’s tour {#sec:giraffe} -------------- A giraffe is a leaper characterized by the move $(1,4)$ instead of $(1,2)$. 
Giraffe’s tours are known to exist on square boards of even size $n\geq 104$ [@kamcev2014generalised] and on square boards of size $2n$ when $n$ is odd and greater than $4$ [@dejter1983symmetry]. Our result extends this to some rectangular sizes. We adapt our techniques for finding giraffe’s tours with $O(w+h)$ turns and crossings, where $w$ and $h$ are the width and height of the board. We use a formation of $4\times 4$ giraffes. Figure \[fig:giraffemoves\] shows the formation moves, Figure \[fig:giraffeheel\] shows the analogue of a heel, used to transition between diagonals, and Figure \[fig:giraffejunctions\] shows the two junctions. Figure \[fig:giraffetour\] shows how these elements are combined to cover the board. We restrict our construction to rectangular boards where $w=32k+20$, for some $k\geq 1$, and $h=8l+14$, for some $l\geq 1$ (extending the results to more boards would require additional heel variations). We start at the bottom-left junction as in the knight’s tour. We transition between diagonals along the bottom edge with a giraffe heel, and along the top edge with a flipped giraffe heel. We transition between diagonals along the left and right edges with four consecutive upward moves. The junction has width 20 and each heel has width 32, so there are $k$ heels along the bottom and top edges. The junction has height 11 and the tip of the heel has height 3, so there are $l$ sequences of four upward moves along each side (see Figure \[fig:giraffetour\]). It is easy to see that the construction visits every cell. As in the case of knight’s tours, for the result to be a valid giraffe’s tour it should be a cycle instead of a set of disjoint cycles. Note that the matchings in the two junctions form a cycle. Thus, if the formation reaches the top-right junction in the same matching as it left the bottom-left junction, the entire construction is a single cycle (by an argument analogous to Theorem \[thm:correctness\]). 
Next, we argue that this is the case. Let $H,F,$ and $U$ denote the sequences of formation moves in the heel, in the flipped heel, and the sequence of four upward moves, respectively. Let $T_{w,h}$ denote the entire sequence of moves from one junction to the other, where $w=32k+20$, for some $k\geq 1$, and $h=8l+14$, for some $l\geq 1$. Note that $T_{w,h}$ is a concatenation, in some order, of $H$ $k$ times, $F$ $k$ times, and $U$ $2l$ times (we can safely ignore diagonal moves, which do not change the coordinates of the giraffes within the formation). Let $M$ be the matching of the bottom-left junction. We want to argue that, after all the moves in $T_{w,h}$, the giraffes are still in matching $M$, that is, $T_{w,h}(M)=M$ using the notation from Section \[sec:methods\]. We show that the giraffes not only arrive at the other junction in the same matching but, in fact, arrive at the same coordinates in the formation as they started. First, note that $U$ has the effect of flipping column $1$ with $2$ and column $3$ with $4$ in the formation. Perhaps surprisingly, $H$ and $F$ have the same effect. This is tedious but can be checked for each giraffe (Figure \[fig:giraffeheel\] shows one in red). Therefore, $T_{w,h}$ is equivalent to $U$ $2(l+k)$ times in a row. Note that after eight consecutive upward moves, or $U$ twice, each giraffe ends where it started. Thus, since $2(l+k)$ is even, the same holds for the entire sequence $T_{w,h}$. ![Formation of 16 giraffes moving together without leaving any unvisited squares.[]{data-label="fig:giraffemoves"}](figures){width="0.7\linewidth"} ![A giraffe heel. The formation moves are shown with black arrows (grouping up to four sequential straight moves together). The intermediate positions of the formation are marked by rounded squares, showing that every cell is covered. Note that the tip of the heel fits tightly under the next heel. 
The red line shows the path of one specific giraffe.[]{data-label="fig:giraffeheel"}](figures){width="0.9\linewidth"} ![Two giraffe junctions, their corresponding matchings, and the union of their matchings. The bottom-left junction consists mostly of formation moves, whereas the top-right one was computed via brute-force search. The cycle through the edges of the union is shown with the index of each node. []{data-label="fig:giraffejunctions"}](figures){width="0.99\linewidth"} ![The formation moves of a giraffe’s tour on a $52\times 30$ board.[]{data-label="fig:giraffetour"}](figures){width="0.7\linewidth"} Conclusions {#sec:conclusions} =========== We have introduced two new metrics of “simplicity” for knight’s tours: the number of turns and the number of crossings. We provided an algorithm which is within a constant factor of the minimum for both metrics. In doing so, we found that, in an $n\times n$ board, the minimum number of turns and crossings is $O(n)$. Prior techniques such as divide-and-conquer necessarily result in $\Theta(n^2)$ turns and crossings, so at the outset of this work it was unclear whether $o(n^2)$ could be achieved at all. The ideas of the algorithm, while simple, seem to be new in the literature, which is interesting considering the history of the problem. Perhaps it was our *a priori* optimization goal that led us in a new direction. The algorithm exhibits a number of positive traits. It is simple, efficient to compute, parallelizable, and amenable to generalizations (see Section \[sec:extensions\]). We conclude with some open questions: - Our tours have $9.5n+O(1)$ turns and $13n+O(1)$ crossings, and we showed respective lower bounds of $(6-\varepsilon)n$ and $4n-O(1)$. The main open question is closing or reducing these gaps, as there may still be room for improvement in both directions. We conjecture that the minimum number of turns is at least $8n$. 
- Are there other properties of knight’s tours, besides turns and crossings, that might be interesting to optimize? - Our method relies heavily on the topology of the knight-move graph. Thus, it is not applicable to general Hamiltonian cycle problems. Are there other graph classes with a similar enough structure that the ideas of formations and junctions can be useful? Omitted Figures {#app} =============== This section contains additional figures that supplement the main text. ![A lower bound of $4.25n-O(1)$ on the number of turns required by any knight’s tour on an $n\times n$ board can be seen as follows. Consider the cells in one of the two central columns (it does not matter which one), and in a row in the range $(n/4,3n/4)$. They are shown in red. These cells have the property that every maximal sequence of knight moves without turns through them reaches opposite facing edges. The maximal sequences of knight moves through the first and last red cells are shown in dashed lines. Because $n$ must be even, one of the two endpoints of each maximal sequence through a red cell is not an edge cell. It follows that each red cell is part of a sequence of knight’s moves that ends in a turn at a non-edge cell. Thus, there is at least one turn at a non-edge cell for each pair of red cells. Since there are $0.5n$ red cells, we get the claimed lower bound.[]{data-label="fig:badlowerbound"}](figures){width="0.3\linewidth"} ![Quartet moves for a $3D$ tour in a $26\times26\times26$ board. The quartet can move from the blue circle at each layer to the orange circle in the next layer by a quartet move.[]{data-label="fig:262626tour"}](figures){width="0.85\linewidth"} [^1]: The authors were supported by NSF Grant CCF-1616248 and NSF Grant 1815073. [^2]: By $o(1)$, we mean a function $f(n)$ such that for any constant $\varepsilon>0$, there is an $n_0$ such that, for all $n\geq n_0$, $f(n)<\varepsilon$. 
[^3]: See how the variations cycle at <https://www.ics.uci.edu/~nmamano/knightstour.html>.
--- abstract: 'We present numerical evidence for an extended order parameter and conjugate field for the dynamic phase transition in a Ginzburg-Landau mean-field model driven by an oscillating field. The order parameter, previously taken to be the time-averaged magnetization, comprises the deviations of the Fourier components of the magnetization from their values at the critical period. The conjugate field, previously taken to be the time-averaged magnetic field, comprises the even Fourier components of the field. The scaling exponents $\beta$ and $\delta$ associated with the extended order parameter and conjugate field are shown numerically to be consistent with their values in the equilibrium mean-field model.' author: - 'Daniel T. Robb' - Aaron Ostrander title: 'Extended Order Parameter and Conjugate Field for the Dynamic Phase Transition in a Ginzburg-Landau Mean-Field Model in an Oscillating Field' --- Introduction ============ Dynamic phase transitions (DPTs) have been identified in a variety of physical systems, and can serve as valuable aids in understanding non-equilibrium systems. A well-studied DPT in magnetic systems occurs when the period of an applied oscillating magnetic field of sufficient amplitude drops below a critical period $P_c$, causing the symmetric hysteresis loop to bifurcate into two asymmetric loops [@Tome90; @Mendes91; @Acharyya99]. Below the critical period, the DPT in magnetic systems has been shown in mean-field models [@Fujisaka01] and kinetic Ising model simulations [@Sides98; @Sides99; @Korniss00] to exhibit critical scaling with the same critical exponent $\beta$ as the corresponding equilibrium transitions, with the period-averaged magnetization serving as a dynamic order parameter [@Fujisaka01; @Sides98; @Sides99]. 
Recent work has shed light on the behavior in the critical region [@Idigoras12; @Gallardo12], examined the dependence on the stochastic dynamics [@Buendia08], and investigated the DPT in novel theoretical [@Baez10; @Deviren10; @Park13; @Kinoshita10] and experimental [@Deviren12] contexts. In numerical simulations of the two-dimensional Ising model in an oscillating field, it was shown that the period-averaged magnetic field serves as a field conjugate to the dynamic order parameter [@Robb07]. Evidence for a DPT in an Ising-like experimental magnetic system, using the period-averaged magnetic field as the conjugate field, was provided in Ref. [@Robb08]. However, this recent work did not establish that the period-averaged magnetic field (called $H_b$ in Refs. [@Robb07] and [@Robb08]) is the *only* component of the conjugate field. For example, the same results would have been found in Ref. [@Robb07] if the full conjugate field $H_c$ were actually $H_c = H_b + H_d$, where $H_d$ is a function of the applied field which happened to be zero in all cases studied in Refs. [@Robb07] and [@Robb08]. Here we study a particular *mean-field model* and demonstrate numerically that, at least in the case of the mean-field model chosen, there are indeed additional components to the conjugate field. We also demonstrate that there are additional components to the dynamic order parameter, at least near the critical period $P_c$. We speculate that similar results will hold for the kinetic Ising model and other driven, spatially-extended models, but do not provide evidence for such models in this paper. Computational Model =================== The mean field model studied here has the Ginzburg-Landau (GL) free energy $F(m) = am^2 + bm^4 - h m$, where the magnetization $m=m(t)$ and magnetic field $h=h(t)$ are time-dependent but spatially uniform.
This produces the dynamical equation $$\frac{\mathrm{d}m}{\mathrm{d}t} = -\frac{\partial F}{\partial m} = -2am - 4bm^3 + h , \label{eqn-motion}$$ which is a more general form of Eq. (3) governing the spatially uniform solutions in Ref. [@Fujisaka01]. It is known and straightforward to show that the equilibrium critical exponents for this mean-field Ginzburg-Landau (MFGL) model are $\beta = 1/2$ and $\delta = 3$. The dynamic critical exponents for this MFGL model at the critical period match the corresponding exponents for the equilibrium transition, as they do for the kinetic Ising model studied in Ref. [@Robb07]. We believe this result for the MFGL model has been demonstrated previously; at least the dynamical exponent $\beta = 1/2$ is established in Eq. (23) of Ref [@Fujisaka01]. In any case, we establish the dynamic critical exponents $\beta = 1/2$ and $\delta = 3$ numerically in Figs. \[DPT\_epsilon\_scaling\] and \[h\_scaling\], respectively, of this paper. In implementing the MFGL, we choose parameters $a=-\frac{3\sqrt{3}}{4}$ and $b=\frac{3\sqrt{3}}{8}$, which for $h=0$ yield minima of the free energy at $m = \pm 1$. In a periodic applied field $h(t) = h(t+P)$, we expect the dynamics to converge to limit cycle(s) of the form $m(t) = m(t+P)$. Setting $\omega = 2\pi/P$, we can expand both $h(t)$ and $m(t)$ as complex Fourier series: $$h(t) = \sum_k h_k e^{ik\omega t} \; \; ; \; \; m(t) = \sum_k m_k e^{ik\omega t} \label{mh-expansion}$$ where here and for the remainder of this paper, a summation index without limits is understood to run from $-\infty$ to $+\infty$. Since $h(t)$ and $m(t)$ are real, it follows that $h_{-k} = h_k^*$ and $m_{-k} = m_k^*$, so that $h_0$ and $m_0$ are real. The dynamic order parameter referred to as $Q$ in previous studies [@Sides98; @Sides99; @Korniss00] is the real Fourier coefficient $m_0$, while the component of the conjugate field identified in Ref. 
[@Robb07] – the period-averaged magnetic field – is the real Fourier coefficient $h_0$. Higher-order bifurcations ========================= In both mean-field [@Tome90; @Fujisaka01] and kinetic Ising [@Sides98; @Sides99; @Korniss00] models, above $P_c$ there is a stable symmetric hysteresis loop with $m_0 = 0$. Below $P_c$ there are two stable asymmetric loops with opposite values $m_0 = \pm m_s$, as well as one unstable symmetric loop with $m_0=0$. This behavior of $m_0$ in the GL model defined by Eq. (\[eqn-motion\]) can be observed in Fig. \[Dynamic\_phase\_diagram\]. In addition, the Fourier components $m_2$ and $m_4$ undergo a similar bifurcation at $P_c$. That is, above $P_c$ there is a stable symmetric hysteresis loop with $m_2 = m_4 = 0$. Below $P_c$ there are two stable asymmetric loops with opposite values $m_2 = \pm m_{s,2}$ and $m_4 = \pm m_{s,4}$, as well as one unstable symmetric loop with $m_0=0$. It was shown in Ref. [@Fujisaka01] that $m_{2j} = 0$ (for $j$ integer) above $P_c$, but the bifurcation of $m_2$ and $m_4$ below $P_c$ has not been reported before to our knowledge. A similar bifurcation occurs for all even Fourier components $m_{2j}$. It is interesting to note, however, that whereas the constant component $m_0$ increases monotonically below $P_c$, the amplitudes $|m_2|$ and $|m_4|$ increase over a limited range below $P_c$, and then decrease, eventually approaching 0 as the period $P$ decreases to 0. ![\[Dynamic\_phase\_diagram\] **(Color online)** Dynamic phase diagram illustrating the bifurcation of Fourier coefficients $m_0$, $m_2$ and $m_4$ below $P_c$. Here $a=-\frac{3\sqrt{3}}{4}$, $b=\frac{3\sqrt{3}}{8}$, and $H_1 = 1.5$, for which it is found that $P_c = 5.319357661995$. Note that the values $+{\vert m_{2j} \vert}$ and $-{\vert m_{2j} \vert}$ are displayed below $P_c$ in Fig. \[Dynamic\_phase\_diagram\] for simplicity; the two stable asymmetric loops actually have opposite complex Fourier components $m_{2j}$ and $-m_{2j}$.
](Dynamic_phase_diagram_v2.jpg){width="4.0in"} To within the numerical accuracy of our simulations, the bifurcation in all three components $m_0, m_2$ and $m_4$ occurred at the same critical value $P_c$. The critical period can be located numerically by applying the stability criterion $$\sum_k {\vert m_{k,c} \vert}^2 = - \frac{a}{6b} \label{critical.period.criterion}$$ along the line of solutions with $m_0 = 0$. Here the notation $m_{k,c}$ refers to the Fourier components of the steady state magnetization $m(t)$ at the critical period $P_c$. To establish Eq. (\[critical.period.criterion\]), we follow Ref. [@Fujisaka01] and perturb Eq. (\[eqn-motion\]) around the stable solution $m(t)$, giving to first order $\frac{\mathrm{d}}{\mathrm{d}t}{\left[ \delta m(t) \right]} = -2a {\left[ \delta m(t) \right]} - 12b{\left[ m(t) \right]}^2 {\left[ \delta m(t) \right]}$. This has solution $\delta m(t) = \delta m(0) \exp {\left[ -\int_0^t {\left( 2a+12b{\left[ m(t') \right]}^2 \right)} \mathrm{d}t' \right]}$. Evaluating at $t=P$, we find that the perturbation will grow, i.e., the solution $m(t)$ is unstable, if $-\int_0^P {\left[ 2a+12b{\left[ m(t') \right]}^2 \right]}\mathrm{d}t' > 0$. Expanding the two factors of $m(t')$ in their Fourier components using Eq. (\[mh-expansion\]), this can be shown to be equivalent to the condition $\sum_{k=-\infty}^{k=+\infty} {\vert m_{k} \vert}^2 < - \frac{a}{6b}$, which establishes Eq. (\[critical.period.criterion\]). For the parameters used here ($a=-\frac{3\sqrt{3}}{4}$, $b=\frac{3\sqrt{3}}{8}$), and with a sinusoidal applied field $h(t) = H_1 \cos(\omega t)$ with $H_1 = 1.5$, the critical period was determined using Eq. (\[critical.period.criterion\]) to be $P_c = 5.319357661995$. The bifurcation in the even Fourier components $m_{2j}$ can be understood using Fourier analysis. We assume that the driving field $h(t)$ contains (arbitrary) odd Fourier components $h_k$, including a non-zero $h_1$. Inserting the expansions in Eq. 
(\[mh-expansion\]) into Eq. (\[eom-fourier\]) yields (for all integer $k$) $$0 = -(i\omega k + 2 a) m_k - 4b\displaystyle\sum_{n_1,n_2} m_{n_1}m_{n_2}m_{k-n_1-n_2} + h_k \label{eom-fourier}$$ For odd $k$, the terms in the sum in Eq. (\[eom-fourier\]) must contain either 0 or 2 even Fourier components $m_k$. (Here ‘even Fourier component’ refers to a Fourier component with even index.) Thus, the equations for odd $k$ are still satisfied if the signs of all the even Fourier components $m_k$ are reversed. For even $k$, each term in the sum in Eq. (\[eom-fourier\]) must contain either 1 or 3 even Fourier components $m_k$. By the above assumption, $h_k = 0$ in this case, so changing the sign of all the even $m_k$ will reverse the sign of all terms in the equation, and the equations for even $k$ also remain satisfied. Thus, stable loops below $P_c$ come in pairs. The two stable loops in the pair share the same values for the odd $m_k$, and values with opposite signs for the even $m_k$. Scaling with respect to period ============================== We investigated numerically the scaling of both odd and even Fourier components $m_k$ below the critical period $P_c$. Because we investigate deviations in various quantities at and near a *numerically determined* critical period, this requires very accurate simulation, achieved using Cash-Karp Runge-Kutta integration in long double precision variables (accurate to twenty decimal places on the computer system used). The steady-state loops for a given field period $P$ near $P_c$ were determined using a shooting method, which located the initial magnetization values $m(0)$ resulting in the same subsequent value $m(t=P) = m(0)$ at the end of the field cycle. The use of the shooting method circumvents the issue of critical slowing down occurring near the critical period, in which the convergence time to the steady-state becomes inconveniently large during normal time evolution.
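The shooting method can be sketched as a fixed-point search for the period map $m(0) \mapsto m(P)$; since the period map of this one-dimensional flow is monotone, simple bisection is enough. The code below is only an illustrative stand-in for the long-double implementation used for the figures (the integrator, tolerances, and step counts are our own choices); it locates the loop at $P = 6 > P_c$ and checks the half-period antisymmetry $m(t+P/2) = -m(t)$ of the symmetric loop.

```python
import math

# Parameters from the paper: F(m) = a m^2 + b m^4 - h m
a = -3 * math.sqrt(3) / 4
b = 3 * math.sqrt(3) / 8
H1 = 1.5

def flow(m, P, t0, t1, n_steps=2000):
    """RK4 integration of dm/dt = -2am - 4bm^3 + H1*cos(2*pi*t/P) from t0 to t1."""
    w = 2 * math.pi / P
    dt = (t1 - t0) / n_steps
    def f(m, t):
        return -2 * a * m - 4 * b * m ** 3 + H1 * math.cos(w * t)
    t = t0
    for _ in range(n_steps):
        k1 = f(m, t)
        k2 = f(m + dt * k1 / 2, t + dt / 2)
        k3 = f(m + dt * k2 / 2, t + dt / 2)
        k4 = f(m + dt * k3, t + dt)
        m += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += dt
    return m

def shoot(P, lo=-2.0, hi=2.0, iters=80):
    """Find m(0) with m(P) = m(0) by bisection on g(m0) = m(P) - m(0);
    g(lo) > 0 and g(hi) < 0 because the dynamics is confined to |m| < 2."""
    g = lambda m0: flow(m0, P, 0.0, P) - m0
    for _ in range(iters):
        mid = (lo + hi) / 2
        if g(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

P = 6.0                      # above P_c, where the unique loop is symmetric
m_star = shoot(P)
half = flow(m_star, P, 0.0, P / 2)
print(m_star, half)          # half is close to -m_star for the symmetric loop
```

Unlike plain time evolution, the bisection cost does not grow as $P$ approaches $P_c$, which is the point of the shooting approach.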
For the scaling variables, we use the scaled period $\epsilon = \frac{P_c - P}{P_c}$ and $$z_k = \sqrt{{\vert m_k \vert}^2 - {\vert m_{k,c} \vert}^2} \label{z_k_definition}$$ Note that the scaling variable $z_k$ reduces to ${\vert m_k \vert}$ for even $k$, since $m_{k,c} = 0$ in this case. ![\[DPT\_epsilon\_scaling\] **(Color online)** Critical scaling of order parameter scaling variables $z_k$ with respect to the scaled period $\epsilon$. The plots for $z_k$, $k=0,...7$ are shown individually in the figure, along with a reference line representing scaling with exponent 1/2.](epsilon-scaling-nokey.pdf){width="4.0in"} As shown in Fig. \[DPT\_epsilon\_scaling\], the quantities $z_k$ scale with respect to the scaled period $\epsilon$ with critical exponent 1/2, for Fourier components $k=0$ through $k=7$. This agrees with the scaling exponent ($\beta = 1/2$) previously determined for $m_0$ in the GL model (see Eq. (23) of Ref. [@Fujisaka01]). We have explicitly confirmed the scaling with exponent $\beta = 1/2$ up to index $k=40$. (We are confident the scaling continues with exponent $\beta = 1/2$ beyond $k=40$. However, since $z_k$ decreases with $k$, as seen in Fig. \[DPT\_epsilon\_scaling\], the values of $z_k$ decrease below the accuracy of our numerical simulation past $k=40$.) Defining the deviation $\delta m_k = m_k - m_{k,c}$, it is straightforward to show that $z_k \sim \epsilon^{1/2}$ implies that $\delta m_k \sim \epsilon$ for $k$ odd, and $\delta m_k \sim \epsilon^{1/2}$ for $k$ even. This scaling of the deviations can be confirmed analytically using a perturbation of the Fourier relation in Eq. (\[eom-fourier\]) for frequency $\omega = \omega_c + \delta \omega$, i.e., just below the critical period. We insert $m_k = m_{k,c} + \delta m_k$ into Eq. (\[eom-fourier\]), expand and group terms, and then subtract Eq. (\[eom-fourier\]) with the critical values $m_{k,c}$.
Noting to first order $\delta \omega = \epsilon \omega_c$, the result is $$\begin{aligned} 0 = - i \epsilon \omega_c k m_{k,c} - (i\omega_c k + 2a) \delta m_k - 12b\displaystyle\sum_{n_1,n_2} m_{n_1,c} m_{n_2,c} \delta m_{k-n_1-n_2} & \nonumber \\* - 12b\displaystyle\sum_{n_1,n_2} m_{k-n_1-n_2,c} \delta m_{n_1} \delta m_{n_2} & - 4b\displaystyle\sum_{n_1,n_2} \delta m_{n_1} \delta m_{n_2} \delta m_{k-n_1-n_2} \label{eom-fourier2}\end{aligned}$$ If we assume scaling relationships of the form $$\delta m_k =\begin{cases} c_k \epsilon^{p}, & \text{for $k$ odd}\\ c_k \epsilon^{q}, & \text{for $k$ even} \end{cases} \label{scaling-form}$$ then the scaling exponents $p=1$ and $q=1/2$ can be determined from Eq. (\[eom-fourier2\]) as follows. Considering Eq. (\[eom-fourier2\]) for odd $k$, for example $k=1$, the first term $-i \epsilon \omega_c m_{1,c}$ is linear in $\epsilon$. The rest of the terms (to lowest order in the deviations for even and odd $k$) must then be linear in $\epsilon$ as well, in order that the equation obtained by inserting the scaling forms in Eq. (\[scaling-form\]) is independent of $\epsilon$. The first sum $\displaystyle\sum_{n_1,n_2} m_{n_1,c} m_{n_2,c} \delta m_{1-n_1-n_2}$ involves only odd $\delta m_k$, since $n_1$ and $n_2$ must be odd in order that the term in the sum be nonzero. This establishes that the scaling exponent $p = 1$ for the odd terms. The second sum $\displaystyle\sum_{n_1,n_2} m_{1-n_1-n_2,c} \delta m_{n_1} \delta m_{n_2}$ has non-zero terms with $n_1$ and $n_2$ either both odd or both even. If $n_1$ and $n_2$ are both odd, the term scales as $\epsilon^2$ and can be neglected. The terms with both $n_1$ and $n_2$ even are the lowest order terms including the even $\delta m_k$, and must scale linearly in $\epsilon$, implying that the scaling exponent $q=1/2$. The third sum is higher order in both even and odd $\delta m_k$ and can be neglected for critical scaling.
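Both the bifurcation of the even Fourier components and the limit cycle itself can be reproduced at moderate accuracy with very little code. The sketch below is plain-Python RK4 rather than the long-double Cash-Karp integration used for the figures, with illustrative transient lengths and step counts; it relaxes onto the limit cycle and accumulates $m_k = \frac{1}{P}\int_0^P m(t)\,e^{-ik\omega t}\,\mathrm{d}t$, so that for even $k$ it directly gives $z_k = |m_k|$.

```python
import math, cmath

# Parameters from the paper: F(m) = a m^2 + b m^4 - h m, minima at m = +/-1 for h = 0
a = -3 * math.sqrt(3) / 4
b = 3 * math.sqrt(3) / 8
H1 = 1.5                     # sinusoidal drive h(t) = H1 * cos(w * t)

def fourier_modes(P, ks, n_transient=150, n_steps=2000):
    """Relax onto the limit cycle from m(0) = 1 with classical RK4, then
    accumulate m_k = (1/P) * integral of m(t) * exp(-i*k*w*t) dt for k in ks."""
    w = 2 * math.pi / P
    dt = P / n_steps
    def f(m, t):
        return -2 * a * m - 4 * b * m ** 3 + H1 * math.cos(w * t)
    m, t = 1.0, 0.0
    coeffs = {k: 0j for k in ks}
    for step in range((n_transient + 1) * n_steps):
        if step >= n_transient * n_steps:        # accumulate over the final period
            for k in ks:
                coeffs[k] += m * cmath.exp(-1j * k * w * t) * dt / P
        k1 = f(m, t)
        k2 = f(m + dt * k1 / 2, t + dt / 2)
        k3 = f(m + dt * k2 / 2, t + dt / 2)
        k4 = f(m + dt * k3, t + dt)
        m += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += dt
    return coeffs

# Even coefficients bifurcate: nonzero below P_c ~ 5.3194, zero above
below = fourier_modes(4.5, [0, 2])
above = fourier_modes(8.0, [0, 2])
print(abs(below[0]), abs(below[2]), abs(above[0]), abs(above[2]))
```

Comparing a period below $P_c$ with one above exhibits the bifurcation of $m_0$ and $m_2$ discussed earlier: the even coefficients come out nonzero below $P_c$ and vanish, to numerical accuracy, above it.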
It is also interesting to note that the system of equations represented by Eq. (\[eom-fourier2\]) has a solution with all even $\delta m_k = 0$. In this case, the set of equations for odd $k$ forms (to lowest order) a linear system whose solution is the *unstable* symmetric loop below $P_c$. Scaling with respect to field components ======================================== We next provide numerical evidence that all $h_j$ (for $j$ even) are components of the conjugate field, each yielding the same scaling exponent as $h_0$. First, though, it is helpful to consider a specific case in which even Fourier components other than $h_0$ can produce a non-zero value of $m_0$, as this may seem counterintuitive. In Fig. \[Effect\_of\_h2\], the magnetization and field are plotted as a function of time for the applied field $h(t) = H_1 \sin{\left( \omega t \right)} + H_2 \sin{\left( 2\omega t \right)}$, with $H_1 = 1.5$ and increasing values of the amplitude $H_2$. With $H_2 = 0$, we find $h(t+P/2)=-h(t)$ and $m(t+P/2)=-m(t)$, which together imply $h_0 = 0$ and $m_0 = 0$. With $H_2 = 0.5$, the maximum of the field, and therefore the maximum of the magnetization, occurs earlier in the cycle. Due to the hysteresis in the model, the system spends a greater percentage of the field cycle with positive magnetization, producing a value $m_0 > 0$. With $H_2 = 1.0$, the maxima of the field and the magnetization occur even earlier in the field cycle, producing an even larger value of $m_0$. ![\[Effect\_of\_h2\] **(Color online)** Plot of $h(t)$ and $m(t)$ for GL model with applied field $h(t) = H_1 \sin{\left( \omega t \right)} + H_2 \sin{\left( 2\omega t \right)}$, with $H_1 = 1.5$ and $\omega = \frac{2\pi}{P_c}$. The thin curves (black online) represent $h(t)$; the thick curves (red online) represent $m(t)$.
The cases $H_2 = 0.0, 0.5$ and 1.0 are represented by solid, dashed and dotted curves, respectively.](Effect_of_h2.jpg){width="4.0in"} ![\[h\_scaling\] **(Color online)** Critical scaling of the variables $z_k$ ($k=0,..7$) with respect to $h_0$, $h_2$ and $h_4$. The scalings with respect to $h_0$, $h_2$ and $h_4$ are represented by solid lines (red online), dashed lines (green online), and dotted lines (blue online), respectively. The black reference line shows exact scaling with exponent 1/3.](h0-h2-h4-scaling-nokey.pdf){width="4.0in"} ![\[h0\_scaling\_crossover\] **(Color online)** Crossover of critical scaling of $z_0=|m_0|$ with respect to $h_0$, at the period value $P=5.3193577 > P_c$. The data for $z_0$ are represented by the thin line (red online). The thick black dashed curve represents scaling with exponent 1/3, while the thick black dash-dotted curve represents scaling with exponent 1.](h0-scaling-crossover.pdf){width="4.0in"} We next investigate whether, as suggested by Fig. \[Effect\_of\_h2\], other even Fourier components of the magnetic field function as parts of the conjugate field. For the scaling variables associated with the conjugate field, we use the Fourier components $h_j$ ($j$ even). For the scaling variables associated with the order parameter, we again use the quantities $z_k$ defined in Eq. (\[z\_k\_definition\]). In Fig. \[h\_scaling\], we observe that all of the variables $z_k$ ($k=0,..7$) exhibit critical scaling with exponent 1/3 with respect to the amplitudes $h_0$, ${\vert h_2 \vert}$ and ${\vert h_4 \vert}$ of the zeroth, second and fourth Fourier coefficients of the applied field. In each case, we have explicitly confirmed the scaling with exponent 1/3 up to index $k=50$ (the limit of our numerical accuracy). In addition, the scaling of $z_k$ ($k=0,..7$) with respect to $h_j$ ($j$ even) with exponent 1/3 has been explicitly verified up to $j=30$ (the limit of our numerical accuracy).
The critical exponent agrees with that found for mean-field models for the scaling of the magnetization with respect to the field at the critical temperature ($1/\delta = 1/3$). We emphasize that Fig. \[h\_scaling\] illustrates the interesting fact that *each* scaling variable $z_0,z_1,z_2...$ (and its associated magnetization component $m_0,m_1,m_2,...$) exhibits scaling independently with respect to *each* even field component $h_0,h_2,h_4...$. We have not examined the effect of changes to more than one field component simultaneously. ![\[h0\_scaling\_terms\] **(Color online)** Illustration of the three bracketed terms ($T_1$, $T_2$, $T_3$) from Eq. (\[eom-fourier3\]) with respect to $h_0$, at the period $P=5.3193577$. The signs of $T_2$ and $T_3$ remain positive throughout. The sign of $T_1$ switches from positive to negative just above $h_0 = 10^{-12}$. Due to the logarithmic scale used, the absolute value $|T_1|$ is therefore plotted. The sum of the three terms, denoted $\Sigma = T_1 + T_2 + T_3$, is also shown. The thick black dashed curve represents scaling with exponent 1, while the thick black dotted curve represents scaling with exponent 3.](h0-scaling-terms.pdf){width="4.0in"} Note that a change in an odd Fourier component of the applied field ($\delta h_j$, $j$ odd) serves only to relocate the critical period, with the relative shift $\epsilon = \frac{{P_{c}'-P_c}}{P_c} \sim \delta h_j$ (the direction of the shift changing with the sign of $\delta h_j$). As a result, introducing a change $\delta h_j$ ($j$ odd) at $P_c$ will (through the shift in the critical period) bring about a change $z_k \sim \epsilon^{1/2} \sim |\delta h_j|^{1/2}$ for $k$ odd. If the critical period is decreased by the change $\delta h_j$, then $z_k$ for $k$ even will remain zero. However, if the critical period is increased by $\delta h_j$, $z_k$ for $k$ even will also scale as $z_k \sim \epsilon^{1/2} \sim |\delta h_j|^{1/2}$.
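The distinction between the near-critical cube-root response and ordinary linear response away from $P_c$ is easy to see directly in simulation. The sketch below (our own illustrative RK4 code, with arbitrary step counts and field values) adds a small constant component $h_0$ to the drive at $P = 6 > P_c$ and checks that the induced $m_0$ is simply proportional to $h_0$ there; the $h_0^{1/3}$ behavior takes over only close to $P_c$.

```python
import math

# Parameters from the paper: F(m) = a m^2 + b m^4 - h m
a = -3 * math.sqrt(3) / 4
b = 3 * math.sqrt(3) / 8
H1 = 1.5

def m0_of_h0(P, h0, n_transient=100, n_steps=2000):
    """Period-averaged magnetization on the limit cycle when a constant
    component h0 is added to the sinusoidal drive."""
    w = 2 * math.pi / P
    dt = P / n_steps
    def f(m, t):
        return -2 * a * m - 4 * b * m ** 3 + h0 + H1 * math.cos(w * t)
    m, t, avg = 1.0, 0.0, 0.0
    for step in range((n_transient + 1) * n_steps):
        if step >= n_transient * n_steps:        # average over the final period
            avg += m * dt / P
        k1 = f(m, t)
        k2 = f(m + dt * k1 / 2, t + dt / 2)
        k3 = f(m + dt * k2 / 2, t + dt / 2)
        k4 = f(m + dt * k3, t + dt)
        m += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += dt
    return avg

P = 6.0                      # well above P_c: ordinary linear response
m1 = m0_of_h0(P, 1e-3)
m2 = m0_of_h0(P, 2e-3)
print(m2 / m1)               # close to 2: m_0 is linear in h_0 in this regime
```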
We can understand several important aspects of these numerical scaling results with respect to the field by considering the analogue of Eq. (\[eom-fourier2\]) for the case in which perturbations $\delta h_k$ to the field’s Fourier components are introduced:\ $$\begin{aligned} 0 = \left[ -(i\omega k + 2a) \delta m_k - 12b\displaystyle\sum_{n_1,n_2} m_{n_1,c} m_{n_2,c} \delta m_{k-n_1-n_2} \right] & \nonumber \\* + \left[ - 12b\displaystyle\sum_{n_1,n_2} m_{k-n_1-n_2,c} \delta m_{n_1} \delta m_{n_2} \right] & + \left[- 4b\displaystyle\sum_{n_1,n_2} \delta m_{n_1} \delta m_{n_2} \delta m_{k-n_1-n_2} \right] + \delta h_k \label{eom-fourier3}\end{aligned}$$ As an example, consider a perturbation with $\delta h_0 = h_0$, and the other $\delta h_k = 0$ (for $k \neq 0$). We examine the scaling behavior of $z_0 = |m_0|$ with respect to $h_0$ at a period $P = 5.3193577$, just above the critical period $P_c = 5.319357661995$. As seen in Fig. \[h0\_scaling\_crossover\], the scaling of $|m_0|$ undergoes a crossover from linear scaling ($\sim h_0$) to cube root scaling ($\sim h_0^{1/3}$) in the range from $h_0 = 10^{-12}$ to $h_0 = 10^{-11}$. Simulations at values of $P$ closer to $P_c$ show that the crossover region moves to progressively lower values of $h_0$ as $P$ approaches $P_c$, so that the scaling at $P_c$ has exponent 1/3, as previously illustrated in Fig. \[h\_scaling\].\ Fig. \[h0\_scaling\_terms\] illustrates the behavior of the three bracketed terms ($T_1,T_2,T_3$) in Eq. (\[eom-fourier3\]), as well as their sum. The interaction of the three terms in creating the crossover from linear to cube root scaling in Fig. \[h0\_scaling\_crossover\] is somewhat involved, but can be understood in general terms as follows. Again taking $k=0$ as a specific example, when $\delta h_0 = h_0$ is very small, the resulting deviations $\delta m_k$ will be very small.
Thus, the term $T_1$ linear in $\delta m_k$ dominates, while the much smaller $T_2$ and $T_3$ scale with a higher power ($\sim (h_0)^{3}$). As $h_0$ grows, the values $\delta m_k$ increase, and the sums in terms $T_2$ and $T_3$ become comparable in size to $T_1$. In addition, the sum within $T_1$ finally dominates the single term $-2a \delta m_k$, so that $T_1$ crosses from positive to negative. Past this point, all three terms $T_1$, $T_2$, and $T_3$ scale linearly with $h_0$, as seen in Fig. \[h0\_scaling\_terms\]. Given that the term $T_3$, comprised of three-term products of the deviations $\delta m_k$, scales linearly with $h_0$, the relationship $\delta m_k \sim (h_0)^{1/3}$, seen for the case $k=0$ in Fig. \[h0\_scaling\_crossover\], is then determined. In addition, note that as $P$ approaches $P_c$, the coefficient of $\delta m_k$ within the term $T_1$, i.e., $$-2a - 12b \sum_{n_1+n_2=0} m_{n_1,c} m_{n_2,c} = -2a - 12b \sum_{k} \vert m_{k,c}\vert^2$$ approaches zero, as may be seen from the condition for $P_c$ in Eq. (\[critical.period.criterion\]). Thus, the switch of $T_1$ from positive to negative, and the accompanying crossover from linear to cube root scaling, occurs at smaller and smaller values of $h_0$ as $P$ approaches $P_c$.\ Conclusion and Future Work ========================== We have verified that analogous scaling results are seen starting with a square-wave field or a triangular wave field, which each consist of a particular set of odd Fourier components $h_j$, rather than the sinusoidal field (only $h_1$) used here. That is, *each* scaling variable $z_0,z_1,z_2...$, consisting of deviations from the values associated with the basic applied field form, exhibits scaling independently with respect to *each* even field component $h_0,h_2,h_4...$ added to the basic applied field form.
Given this, it is reasonable to hypothesize that the set of odd Fourier components $h_j$ determines a dynamic phase transition with critical period and unstable symmetric loops below the critical period; the even Fourier components of the field then serve as components of a conjugate field in this dynamic phase transition. It would be interesting to determine if a single composite conjugate field can be constructed from the even Fourier components $h_j$, at least near the critical period, which would require investigating the effect of introducing several even Fourier components of the field simultaneously. Given that higher order magnetization components $m_2, m_4,...$ do not increase monotonically below $P_c$ (as seen in Fig. \[Dynamic\_phase\_diagram\]), such a single composite conjugate field would likely be limited to the immediate neighborhood of the critical period $P_c$. Finally, while the MFGL model we have used does capture the basic physics of the ferromagnetic phase transition, spatially dependent models such as the kinetic Ising model, as well as more specific models of particular geometries (e.g., superlattices, multilayers or nanostructures), are of more practical interest. We speculate that similar extensions of order parameter and conjugate field will occur in some form in these more realistic systems, but it is important and worthwhile to test this directly, and to discover what practical importance these higher-order components of the dynamic order parameter and conjugate field may have. We would like to acknowledge Mark Novotny and Per Arne Rikvold for introducing us to the fascinating phenomenon of the dynamic phase transition, and would like to thank Andreas Berger for several useful comments on the present manuscript.
--- abstract: | In this paper, we compare two definitions of Rauzy classes. The first one was introduced by Rauzy and was in particular used by Veech to prove the ergodicity of the Teichmüller flow. The second one is more recent and uses a “labeling” of the underlying intervals, and was used in the proof of some recent major results about the Teichmüller flow. The Rauzy diagrams obtained from the second definition are coverings of the initial ones. In this paper, we give a formula for the degree of this covering. This formula is related to moduli spaces of *framed* translation surfaces, which correspond to surfaces where we label horizontal separatrices on the surface. We compute the number of connected components of these natural coverings of the moduli spaces of translation surfaces. Delecroix [@Delecroix2010] proved a recursive formula for the cardinality of the (reduced) Rauzy classes. Therefore, we also obtain a formula for the cardinality of labeled Rauzy classes. address: ' Université Aix-Marseille 3 LATP, case cour A, Faculté de Saint Jérôme Avenue Escadrille Normandie-Niemen, 13397 Marseille cedex 20 ' author: - Corentin Boissy title: Labeled Rauzy classes and framed translation surfaces --- Introduction ============ Rauzy induction was introduced in [@Rauzy] as a tool to study interval exchange maps. It is a renormalization process that associates to an interval exchange map another one, obtained as a first return map on a well chosen subinterval. After the major works of Veech [@Veech82] and Masur [@Masur82], the Rauzy induction became a powerful tool to study the Teichmüller geodesic flow. A slightly different tool was used by Kerckhoff [@Kerckhoff], Bufetov [@Buf], and Marmi-Moussa-Yoccoz [@Marmi:Moussa:Yoccoz]. It is obtained by labeling the intervals and keeping track of them during the renormalization process.
This small change was a significant improvement, and was used in the recent past to prove other important results about the Teichmüller geodesic flow, for instance the simplicity of the Lyapunov exponents (Avila-Viana [@Avila:Viana]), or the exponential decay of correlations (Avila-Gouezel-Yoccoz [@AGY]). An interval exchange map is naturally decomposed into a continuous and a combinatorial datum. A *Rauzy* class is a minimal set of such combinatorial data invariant by the combinatorial Rauzy induction. We will speak of *reduced* or *labeled* Rauzy classes depending on whether we use the definition of Rauzy, or the other one. Let $k_1,\ldots ,k_r$ be some pairwise distinct nonnegative integers and let $n_1,\ldots ,n_r$ be positive integers such that $\sum_{i=1}^{r}n_i k_i=2g-2$. The stratum of the moduli space of Abelian differentials whose corresponding surfaces have precisely $n_i$ singularities of degree $k_i$, for all $i\in \{1,\dots ,r\}$, is usually denoted as $\mathcal{H}(k_1^{n_1},\ldots ,k_r^{n_r})$. In this notation, we implicitly assume that $k_i\neq k_j$ for all $i\neq j$. A singularity of degree zero is by convention a regular marked point on the surface. The Veech construction naturally associates to a Rauzy class a connected component $\mathcal{C}$ of a stratum $\mathcal{H}(k_1^{n_1},\ldots ,k_r^{n_r})$ of the moduli space of Abelian differentials, and an integer $k\in \{k_1,\dots ,k_r\}$ that corresponds to the degree of the singularity attached on the left in the Veech construction. In the next statement, the pair $(\mathcal{C},k)$ will be referred to as the data associated to a Rauzy class by the Veech construction. \[MT\] Let $R_{lab}$ be a labeled Rauzy class and let $R$ be the corresponding reduced one. Let $(\mathcal{C},k)$ be the data associated to $R$ by the Veech construction, where $\mathcal{C}$ is a connected component of a stratum $\mathcal{H}(k_1^{n_1},\ldots ,k_r^{n_r})$ of the moduli space of Abelian differentials, and $k\in \{k_1,\dots ,k_r\}$.
Let $n$ be the number of singularities of degree $k$ for a surface in $\mathcal{H}(k_1^{n_1},\dots ,k_r^{n_r})$. We have: $$\frac{|R_{lab}|}{|R|}=\frac{\Pi_{i=1}^r n_i! (k_i+1)^{n_i}}{n(k+1)} \varepsilon$$ where $\varepsilon$ satisfies: - $\varepsilon=\frac{1}{g}$ if $\mathcal{C}$ consists only of hyperelliptic surfaces with two conical singularities of degree $g-1$ and possibly some regular marked points. - $\varepsilon=\frac{1}{2}$ if $\mathcal{C}$ contains nonhyperelliptic surfaces and if there exists $k'\in \{k_1,\dots ,k_r\}$ which is odd. - $\varepsilon=1$ in all the other cases. Delecroix [@Delecroix2010] has found a recursive formula for the cardinality of the reduced Rauzy classes. The previous theorem completes his result for the case of labeled Rauzy classes. Our result is related to moduli spaces of framed translation surfaces. Informally, we will call a *frame* on a translation surface $X$ a map $F_X$ from a discrete alphabet $\mathcal{A}$ to a set $\mathcal{S}_X$ of discrete combinatorial data on the surface $X$ such that the moduli space of framed translation surfaces is a covering of the corresponding moduli space of translation surfaces. In our case, we are interested in the case where $\mathcal{S}_X$ is the set of *horizontal outgoing separatrices*. Several different kinds of frames will appear in this context. The most important will be the space $\mathcal{C}(\mathcal{F}_{comp})$. It corresponds to the space where we label exactly one horizontal separatrix for each singularity (see Section \[framed\] for a more precise definition), and the alphabet $\mathcal{A}$ is a disjoint union of subalphabets $\mathcal{A}_k$, such that the labels in $\mathcal{A}_k$ correspond to singularities of degree $k$.
Choosing $\alpha$ in some $\mathcal{A}_k$, one gets a natural covering $p_\alpha:\mathcal{C}(\mathcal{F}_{comp})\to \mathcal{C}(\mathcal{F}_{k})$, where $\mathcal{C}(\mathcal{F}_{k})$ is a moduli space of framed translation surfaces that corresponds to translation surfaces with a single marked separatrix adjacent to a singularity of degree $k$. Theorem \[MT\] will be a consequence of the two following results, which give a geometrical interpretation of the formula. \[prop:intro\] Let $R_{lab}$ be a labeled Rauzy class and let $R$ be the corresponding reduced one. The ratio $|R_{lab}|/|R|$ is equal to the degree of the covering $p_\alpha$, when restricted to a connected component of $\mathcal{C}(\mathcal{F}_{comp})$. \[th:cc:Cfr\] Let $\mathcal{C}$ be a connected component of a stratum of the moduli space of Abelian differentials. The space $\mathcal{C}(\mathcal{F}_{comp})$ is not connected in general. More precisely, it has: - $g$ connected components if $\mathcal{C}$ is the hyperelliptic connected component of $\mathcal{H}(g-1,g-1)$, with possibly some regular marked points. - Two connected components if it does not correspond to a hyperelliptic connected component and if there exist conical singularities of odd degree. - One connected component in all the other cases. Acknowledgements {#acknowledgements .unnumbered} ---------------- We thank Vincent Delecroix, Luca Marchese and Erwan Lanneau for remarks and comments on this paper. We also thank [@sage] for computational help. Background ========== Labeled and reduced interval exchange transformations ----------------------------------------------------- We give here the two definitions of interval exchange transformations that are used in the literature. In order to distinguish them, we will add the terms *reduced* and *labeled*. The first one is due to Rauzy [@Rauzy]. Let $I \subset \mathbb R$ be an open interval and let us choose a finite subset $\Sigma= \{x_1,\ldots,x_{d-1}\}$ of $I$.
The complement of $\Sigma$ in $I$ is a disjoint union of $d\geq 2$ open subintervals $\{I_j, \ j=1,\dots,d \}$. A *reduced interval exchange transformation* is a map $T$ from $I\backslash \Sigma$ to $I$ that permutes, by translation, the subintervals $I_j$. It is easy to see that $T(I\backslash \Sigma)=I\backslash \Sigma'$, where $\Sigma'\subset I$ is of the same cardinality as $\Sigma$, and that $T$ is precisely determined by: - A *combinatorial datum*: a permutation $\pi\in \Sigma_d$ which expresses that the interval number $k$, when counted from the left to the right, is sent to the place $\pi(k)$ by the map $T$. - A *continuous datum*: a vector $\lambda\in \mathbb{R}^d$ with positive entries that corresponds to the lengths of the intervals. We will identify an interval exchange transformation with its parameters $(\pi,\lambda)$. The second definition of interval exchange transformation was first introduced by Kerckhoff [@Kerckhoff] and later formalized by Bufetov [@Buf] and Marmi, Moussa & Yoccoz [@Marmi:Moussa:Yoccoz]. As we will see later, it simplifies the description of the Rauzy induction, but we get bigger Rauzy classes. A *labeled interval exchange map* is a reduced interval exchange map together with a pair $(\pi_t,\pi_b)$ of one-to-one maps from a finite alphabet $\mathcal{A}$ to $\{1,\ldots,d\}$. The interval number $k$, when counted from the left to the right, is denoted by $I_{\pi_t^{-1}(k)}$. Once the intervals are exchanged, the interval number $k$ is $I_{\pi_b^{-1}(k)}$. In the previous definition, it is easy to see that the permutation corresponding to the underlying reduced interval exchange map is $\pi=\pi_b \circ \pi_t^{-1}$, and the continuous datum is a vector with positive entries $\lambda\in \mathbb{R}^{\mathcal{A}}$. We will identify a labeled interval exchange map with the pair $(\tilde{\pi},\lambda)$, where $\tilde{\pi}=(\pi_t,\pi_b)$. We will call $\tilde{\pi}$ a *labeled permutation*.
We will usually represent a labeled permutation by a table: $$\begin{aligned} \tilde{\pi}= \left(\begin{array}{ccccc}\pi_t^{-1}(1)&\pi_t^{-1}(2)&\ldots&\pi_t^{-1}(d) \\ \pi_b^{-1}(1)&\pi_b^{-1}(2)&\ldots&\pi_b^{-1}(d) \end{array}\right).\end{aligned}$$ As we can see from this representation, we have “t” for top and “b” for bottom in the notation $\pi_t,\pi_b$. A *renumbering* of a labeled permutation is the composition of $(\pi_t,\pi_b)$ with a one-to-one map $f$ from $\mathcal{A}$ to $\mathcal{A}'$. It just corresponds to changing the labels without changing the underlying permutation. From the previous definitions, it is clear that a reduced interval exchange transformation (*resp.* a permutation) is an equivalence class of labeled interval exchange maps (*resp.* labeled permutations) up to renumbering. We will sometimes identify a permutation with its unique representative $(\pi_t,\pi_b)$ with $\mathcal{A}=\{1,\ldots,d\}$ and $\pi_t=Id$. Rauzy-Veech induction --------------------- Let $T$ be a labeled or reduced interval exchange map. The Rauzy–Veech induction $\mathcal R(T)$ of $T$ is defined as the first return map of $T$ to a certain subinterval $J$ of $I$ (see [@Rauzy; @Marmi:Moussa:Yoccoz] for details). We recall briefly the construction for the labeled case. Following the terminology of [@Marmi:Moussa:Yoccoz] we define the *type* of $T$ by $t$ if $\lambda_{\pi_t^{-1}(d)} > \lambda_{\pi_b^{-1}(d)}$ and $b$ if $\lambda_{\pi_t^{-1}(d)} < \lambda_{\pi_b^{-1}(d)}$. When $T$ is of type $t$ (respectively, $b$) we will say that the label $\pi_t^{-1}(d)$ (respectively, $\pi_b^{-1}(d)$) is the winner and that $\pi_b^{-1}(d)$ (respectively, $\pi_t^{-1}(d)$) is the loser.
We define a subinterval $J$ of $I$ by $$J=\left\{ \begin{array}{ll} I \backslash T(I_{\pi_b^{-1}(d)}) & \textrm{if $T$ is of type t};\\ I \backslash I_{\pi_t^{-1}(d)} & \textrm{if $T$ is of type b.} \end{array} \right.$$ The image of $T$ by the Rauzy-Veech induction $\mathcal R$ is defined as the first return map of $T$ to the subinterval $J$. This is again a labeled interval exchange transformation, defined on $d$ letters. The combinatorial datum of this new interval exchange transformation is very easy to calculate in terms of that of $T$. Indeed, let $\alpha\in \mathcal{A}$ (*resp.* $\beta\in \mathcal{A}$) be the winner (*resp.* loser). Let $\lambda' \in \mathbb{R}^\mathcal{A}$ be such that: $$\begin{aligned} \lambda'_\alpha&=&\lambda_\alpha-\lambda_\beta \\ \lambda'_\nu&=&\lambda_\nu \qquad \textrm{for all } \nu \in \mathcal{A}\backslash\{\alpha\}\end{aligned}$$ Then, $\mathcal{R}(T)=(\mathcal{R}_\varepsilon(\pi),\lambda')$, where $\varepsilon$ is the type of $T$, and $\mathcal{R}_t,\mathcal{R}_b$ are the following combinatorial maps: 1. [$\mathcal{R}_t$]{}: let $k=\pi_b(\pi_t^{-1}(d))$ with $k\leq d-1$. Then, $\mathcal R_t(\pi_t,\pi_b)=(\pi_t',\pi_b')$ where $\pi_t=\pi_t'$ and $$\pi_b'^{-1}(j) = \left\{ \begin{array}{ll} \pi_b^{-1}(j) & \textrm{if $j\leq k$}\\ \pi_b^{-1}(d) & \textrm{if $j = k+1$}\\ \pi_b^{-1}(j-1) & \textrm{otherwise.} \end{array} \right.$$ 2. [$\mathcal{R}_b$]{}: let $k=\pi_t(\pi_b^{-1}(d))$ with $k \leq d-1$. Then, $\mathcal R_b(\pi_t,\pi_b)=(\pi_t',\pi_b')$ where $\pi_b=\pi_b'$ and $$\pi_t'^{-1}(j) = \left\{ \begin{array}{ll} \pi_t^{-1}(j) & \textrm{if $j\leq k$}\\ \pi_t^{-1}(d) & \textrm{if $j = k+1$}\\ \pi_t^{-1}(j-1) & \textrm{otherwise.} \end{array} \right.$$ The maps $\mathcal{R}_t,\mathcal{R}_b$ are called the combinatorial Rauzy moves. It is easy to define similar maps for reduced permutations. We first identify $\pi$ with the corresponding $(Id,\pi_b)$ and perform the Rauzy move.
Then, if needed, we renumber the result so that it corresponds to a reduced permutation. We will still denote the corresponding maps by $\mathcal{R}_t,\mathcal{R}_b$, since it will always be clear, when the distinction is needed, whether the objects are labeled or reduced. We define the Rauzy induction for reduced interval exchange maps by considering labeled interval exchange maps up to renumbering. A *Rauzy class*, usually denoted by $R$, is a minimal set of labeled or reduced permutations invariant under the combinatorial Rauzy moves. A *Rauzy diagram* is a graph whose vertices are the elements of a Rauzy class and whose edges are the combinatorial Rauzy moves. A Rauzy class or Rauzy diagram will be called labeled or reduced depending on the corresponding permutations. \[ex:card:rc\] - Let $\tau_n=\left(\begin{smallmatrix} 1&2&\ldots &n\\ n&n-1&\ldots &1 \end{smallmatrix}\right)$. Rauzy proved that the cardinality of $R(\tau_n)$ is $2^{n-1}-1$ for the reduced case, and one can prove “by hand” that it is the same for the labeled case. - Let $\pi_n= \left( \begin{smallmatrix} 0 & 2 & 3 & \dots & n-1 & 1 & n \\ n & n-1 & \dots & 3 & 2 & 1 & 0 \end{smallmatrix} \right)$. Contrary to the previous case, the labeled and reduced diagrams are not isomorphic anymore. The structure of the labeled and reduced Rauzy diagrams is precisely described in [@BL:pA]. It is in particular shown that the cardinality of the reduced diagram is $2^{n-1} -1 + n$ and the cardinality of the labeled diagram is $(2^{n-1} -1+ n)(n-1)$. - Consider $\pi=\left( \begin{smallmatrix} 1&2&3&4&5&6&7&8&9\\ 9&1&4&3&2&5&8&7&6 \end{smallmatrix} \right)$. The reduced Rauzy class is of size[^1] 1255 and the labeled Rauzy class is of size 30120. The ratio is 24.
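The combinatorial Rauzy moves lend themselves to a direct enumeration of a labeled Rauzy class by breadth-first search. The following minimal Python sketch (the encoding is ours: a labeled permutation is stored as the pair of tuples $(\pi_t^{-1},\pi_b^{-1})$) recovers the cardinality $2^{n-1}-1$ for the class of $\tau_n$:

```python
from collections import deque

def rauzy_t(top, bot):
    # R_t: top row fixed; the last bottom symbol is reinserted right
    # after the position of the last top symbol in the bottom row.
    k = bot.index(top[-1])                    # 0-based position of pi_t^{-1}(d)
    return (top, bot[:k+1] + (bot[-1],) + bot[k+1:-1])

def rauzy_b(top, bot):
    # R_b: bottom row fixed; symmetric move on the top row.
    k = top.index(bot[-1])
    return (top[:k+1] + (top[-1],) + top[k+1:-1], bot)

def labeled_rauzy_class(top, bot):
    # Breadth-first search of the orbit under R_t and R_b.
    start = (tuple(top), tuple(bot))
    seen, queue = {start}, deque([start])
    while queue:
        p = queue.popleft()
        for move in (rauzy_t, rauzy_b):
            q = move(*p)
            if q not in seen:
                seen.add(q)
                queue.append(q)
    return seen

# tau_5: expected cardinality 2^(5-1) - 1 = 15
tau5 = ((1, 2, 3, 4, 5), (5, 4, 3, 2, 1))
print(len(labeled_rauzy_class(*tau5)))  # -> 15
```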
Translation surfaces and moduli space ------------------------------------- ### Translation surfaces A *translation surface* is a (real, compact, connected) genus $g$ surface $X$ with a translation atlas, *i.e.* a triple $(X,\mathcal U,\Sigma)$ such that $\Sigma$ is a finite subset of $X$ (whose elements are called [*singularities*]{}) and $\mathcal U = \{(U_i,z_i)\}$ is an atlas of $X \setminus \Sigma$ whose transition maps are translations. We will require that for each $s\in \Sigma$, there is a neighborhood of $s$ isometric to a Euclidean cone. One can show that the holomorphic structure on $X\setminus \Sigma$ extends to $X$ and that the holomorphic 1-form $\omega=dz_i$ extends to a holomorphic $1$-form on $X$, where $\Sigma$ corresponds to the zeroes of $\omega$ and possibly some marked points. We usually call $\omega$ an *Abelian differential*. For $g \geq 1$, we define the moduli space of Abelian differentials ${\mathcal{H}}_g$ as the moduli space of pairs $(X,\omega)$ where $X$ is a genus $g$ (compact, connected) Riemann surface and $\omega$ is a non-zero holomorphic $1$-form defined on $X$. The term moduli space means that we identify the points $(X,\omega)$ and $(X',\omega')$ if there exists an analytic isomorphism $f:X \rightarrow X'$ such that $f^* \omega'=\omega$. The group $\textrm{SL}_2(\mathbb R)$ naturally acts on the moduli space of translation surfaces by postcomposition on the charts defining the translation structures. One can also view a translation surface as obtained from a polygon (or a finite union of polygons) whose sides come in pairs, where, for each pair, the corresponding segments are parallel and of the same length. These parallel sides are glued together by translation, and we assume that this identification preserves the natural orientation of the polygons.
In this context, two translation surfaces are identified in the moduli space of Abelian differentials if and only if the corresponding polygons can be obtained from each other by cutting and gluing while preserving the identifications. Also, the $SL_2(\mathbb{R})$ action in this representation is just the natural linear action on the polygons. The moduli space of Abelian differentials is stratified by the combinatorics of the zeroes; we will denote by ${\mathcal{H}}(k_1^{n_1},\ldots ,k_r^{n_r})$ the stratum of ${\mathcal{H}}_g$ consisting of (classes of) pairs $(X,\omega)$ such that $\omega$ possesses exactly $n_i$ zeroes on $X$ with multiplicities $k_i$ for all $i\in \{1,\ldots ,r\}$, and no other zeroes. It is a well-known part of Teichmüller theory that these spaces are (Hausdorff) complex analytic, and in fact algebraic, spaces. These strata are non-connected in general, but each stratum has at most three connected components (see [@KoZo] for a complete classification, or see Section \[sec4\]). Given a translation surface $X$, we will call a *separatrix* an oriented half-line (possibly finite) starting from a singularity of $X$. A horizontal separatrix $l$ will be *outgoing* if it goes to the right in a translation chart, and *incoming* otherwise. Suspension data {#sec:suspension} --------------- The next construction provides a link between interval exchange transformations and translation surfaces. A suspension datum for $T=(\pi,\lambda)$ is a collection of vectors $\{\tau_\alpha\}_{\alpha\in \mathcal{A}}$ such that - $\forall 1 \leq k \leq d-1,\ \sum_{\pi_t(\alpha)\leq k} \tau_\alpha>0$, - $\forall 1 \leq k \leq d-1,\ \sum_{\pi_b(\alpha)\leq k} \tau_\alpha<0$. We will often use the notation $\zeta=(\lambda,\tau)$. To each suspension datum $\tau$, we can associate a translation surface $(X,\omega)=X(\pi,\zeta)$ in the following way.
Consider the broken line $L_t$ on $\mathbb{C}=\mathbb R^2$ defined by concatenation of the vectors $\zeta_{\pi_t^{-1}(j)}$ (in this order) for $j=1,\dots,d$ with starting point at the origin. Similarly, we consider the broken line $L_b$ defined by concatenation of the vectors $\zeta_{\pi_b^{-1}(j)}$ (in this order) for $j=1,\dots,d$ with starting point at the origin. If the lines $L_t$ and $L_b$ have no intersections other than the endpoints, we can construct a translation surface $X$ by identifying each side $\zeta_j$ on $L_t$ with the side $\zeta_j$ on $L_b$ by a translation. The resulting surface is a translation surface endowed with the form $\omega=dz$. Note that the lines $L_t$ and $L_b$ might have some other intersection points. But in this case, one can still define a translation surface by using the *zippered rectangle construction*, due to Veech ([@Veech82]). See for instance Figure \[ietsusp\]. ![The zippered rectangle construction, for two examples of suspension data.[]{data-label="ietsusp"}](ietsusp) Let $I \subset X$ be the horizontal interval defined by $I = (0,\sum_{\alpha} \lambda_{\alpha}) \times \{0\}$. The reduced interval exchange transformation $T$ is precisely the one defined by the first return map to $I$ of the vertical flow on $X$. We can extend the Rauzy induction to suspension data in the following way: let $\tau$ be a suspension datum over $(\pi,\lambda)$; we define $\mathcal R(\pi,\lambda,\tau)=(\pi',\lambda',\tau')$ by: - $\mathcal{R}(\pi,\lambda)=(\pi',\lambda')$ - $\tau'_\alpha=\tau_\alpha-\tau_\beta$, where $\alpha$ (resp. $\beta$) is the winner (resp. loser) for $T=(\pi,\lambda)$ This extension is known as the *Rauzy–Veech induction*, and is used as a discretization of the Teichmüller flow. \[rk:isometric\] By construction the two translation surfaces $X(\pi,\zeta)$ and $X(\pi',\zeta')$ define the same element in the moduli space.
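The extension of the induction to suspension data is a one-line modification of the combinatorial step. A hedged Python sketch (our own encoding, with $\lambda$ and $\tau$ stored as dictionaries indexed by the labels):

```python
def rauzy_veech_step(top, bot, lam, tau):
    """One step of Rauzy-Veech induction on (pi, lambda, tau).

    top, bot: tuples of labels (pi_t^{-1} and pi_b^{-1});
    lam, tau: dicts label -> float.
    Assumes lam[top[-1]] != lam[bot[-1]] (the induction is defined).
    """
    if lam[top[-1]] > lam[bot[-1]]:          # type t: the top label wins
        winner, loser = top[-1], bot[-1]
        k = bot.index(winner)
        bot = bot[:k+1] + (bot[-1],) + bot[k+1:-1]
    else:                                    # type b: the bottom label wins
        winner, loser = bot[-1], top[-1]
        k = top.index(winner)
        top = top[:k+1] + (top[-1],) + top[k+1:-1]
    lam, tau = dict(lam), dict(tau)
    lam[winner] -= lam[loser]                # lambda'_alpha = lambda_alpha - lambda_beta
    tau[winner] -= tau[loser]                # tau'_alpha = tau_alpha - tau_beta
    return top, bot, lam, tau
```

Iterating this map is the discretization of the Teichmüller flow mentioned above; every iterate defines the same point of the moduli space.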
Note that $\lambda,\tau$ define natural local parameters for the stratum of the moduli space of Abelian differentials. \[data:separatrix\] The left end of the two lines $L_t,L_b$ in the previous construction is a singularity, and the horizontal half-line starting from this point to the right corresponds to a choice of a horizontal separatrix starting from this singularity; it is easy to see that this combinatorial datum is preserved under the Rauzy–Veech induction. Let us denote by $l(\pi,\zeta)$ this separatrix. Rauzy diagrams and framed translation surfaces ============================================== Moduli space of framed translation surface. {#framed} ------------------------------------------- A *frame* on a translation surface $X$ is a map $F_X$ from a discrete alphabet $\mathcal{A}$ to a set $\mathcal{DC}_X$ of discrete combinatorial data on the surface $X$. Let $\mathcal{C}$ be a connected component of a stratum of the moduli space of translation surfaces, and let $\mathcal{F}$ be a collection of frames for translation surfaces in $\mathcal{C}$, with a fixed alphabet. One can define the corresponding moduli space of framed translation surfaces: two elements $(X,F_X)$ and $(X',F'_{X'})$ are identified if there is a translation mapping $X\to X'$ which is consistent with the frames. Then, we will denote by $\mathcal{C}(\mathcal{F})$ the corresponding moduli space. The sets $\mathcal{DC}_X$ can be many things: incoming or outgoing horizontal separatrices, $H_1(X,\mathbb{Z})$, etc. Here we will not study the precise conditions on the collections of frames so that $\mathcal{C}(\mathcal{F})$ is a “nice” space. We will just introduce three cases, for which $\mathcal{C}(\mathcal{F})$ are coverings of $\mathcal{C}$. We first ask that $\mathcal{DC}_X=\mathcal{S}_X$ is the set of *horizontal outgoing separatrices*; then we consider the three families of frames that satisfy the following conditions respectively: 1.
$\mathcal{F}_k$: the set $\mathcal{A}$ is a singleton and the image of $F_X$ is any separatrix adjacent to a degree $k$ singularity. 2. $\mathcal{F}_{sat}$: $F_X$ is a one-to-one mapping from $\mathcal{A}$ to $\mathcal{S}_X$. 3. $\mathcal{F}_{comp}$: $\mathcal{A}=\sqcup_k \mathcal{A}_k$, the map $F_X$ is injective, and we require that for each $k$ any singularity of $X$ of degree $k$ has a unique separatrix in $F_X(\mathcal{A}_k)$. We will denote respectively by $\mathcal{C}(\mathcal{F}_{k})$, $\mathcal{C}(\mathcal{F}_{sat})$, and $\mathcal{C}(\mathcal{F}_{comp})$, the corresponding moduli spaces, which are finite coverings of $\mathcal{C}$. 1. The space $\mathcal{C}(\mathcal{F}_{k})$ is the moduli space of pairs $(X,l)$, where $X\in \mathcal{C}$ and $l$ is a separatrix adjacent to a singularity of degree $k$ in $X$. 2. The space $\mathcal{C}(\mathcal{F}_{sat})$ corresponds to translation surfaces where we label each horizontal outgoing separatrix by an element in $\mathcal{A}$. It corresponds to a “saturated case”. 3. The space $\mathcal{C}(\mathcal{F}_{comp})$ corresponds to translation surfaces where we label exactly one separatrix for each singularity, according to the degree of the singularity. As we will see, the spaces $\mathcal{C}(\mathcal{F}_{k})$ (resp. $\mathcal{C}(\mathcal{F}_{sat})$) will appear naturally in the study of reduced (resp. labeled) Rauzy classes. We will then reduce the problem to the study of the space $\mathcal{C}(\mathcal{F}_{comp})$. Note that the space $\mathcal{C}(\mathcal{F}_{comp})$ was also introduced recently by Marchese in [@Marchese] (Section 3.2). Moduli space of reduced suspension data. ---------------------------------------- The set of suspension data associated to a labeled or reduced permutation is connected (in fact, convex). Hence, for a Rauzy class $R$, all flat surfaces obtained from the Veech construction are in the same connected component of a stratum of the moduli space of Abelian differentials.
In fact, according to Remark \[data:separatrix\] there is a natural map: $$\begin{aligned} \Phi:\mathcal{H}_{R}=\left\{ \begin{array}{l} (\pi,\zeta), \pi\in R,\\ \zeta \textrm{ susp. dat. for }\pi \end{array} \right \} {/ \mathcal{R}}&\rightarrow& \mathcal{C}(\mathcal{F}_{k}) \\ \left[(\pi,\zeta) \right] \qquad &\mapsto & \big(X(\pi,\zeta),l(\pi,\zeta) \big)\end{aligned}$$ Let us consider the permutations given in Example \[ex:card:rc\]. We have: - For $\tau_n=\left(\begin{smallmatrix} 1&2&\ldots &n\\ n&n-1&\ldots &1 \end{smallmatrix}\right)$, the corresponding connected component is $\mathcal{H}^{hyp}(n-2)$ or $\mathcal{H}^{hyp}(\frac{n-1}{2}-1,\ \frac{n-1}{2}-1)$ depending on the parity of $n$. - For $\pi_n= \left( \begin{smallmatrix} 0 & 2 & 3 & \dots & n-1 & 1 & n \\ n & n-1 & \dots & 3 & 2 & 1 & 0 \end{smallmatrix} \right)$, the corresponding connected component is $\mathcal{H}^{hyp}(0, n-2)$ or $\mathcal{H}^{hyp}(0, \frac{n-1}{2}-1,\frac{n-1}{2}-1)$, the singularity which is marked by the Veech construction being of degree 0. - For $\pi=\left( \begin{smallmatrix} 1&2&3&4&5&6&7&8&9\\ 9&1&4&3&2&5&8&7&6 \end{smallmatrix} \right)$, the corresponding stratum is $\mathcal{H}(1,1,1,1)=\mathcal{H}(1^4)$ and is connected. When $R$ is a Rauzy class of reduced permutations, the following theorem was proven in [@Boissy:rc]. \[th:Boissy:rc\] The map $\Phi$ is a homeomorphism onto its image. The complement of the image of $\Phi$ is contained in a codimension 2 subset of $\mathcal{C}(\mathcal{F}_{k})$, which is connected. Moduli space of labeled suspension data --------------------------------------- The previous section describes what a reduced Rauzy diagram (or more precisely, the corresponding moduli space of suspension data) represents “geometrically”: a suspension datum for a reduced permutation, modulo the Rauzy–Veech induction, corresponds to a translation surface with a marked separatrix, *i.e.* an element of $\mathcal{C}(\mathcal{F}_{k})$.
In this section, we give an analogous description of a labeled Rauzy diagram. In this case, a suspension datum for a labeled permutation, modulo the Rauzy–Veech induction, corresponds to a translation surface where all the separatrices are labeled, *i.e.* an element of $\mathcal{C}(\mathcal{F}_{sat})$. We will then show that we can reduce the problem to studying the moduli space of translation surfaces with a single marked separatrix for each singularity. ![A framing of a surface issued from the Veech construction. Here we have $l_i=F(i)$ for $i=1,2,3,4$ and $l_1=l_2$.[]{data-label="complab"}](complab) Let $\pi=(\pi_t,\pi_b)$ be a labeled permutation and let $\zeta$ be a suspension datum for $\pi$. The zippered rectangle construction naturally defines a framed translation surface (see Figure \[complab\]) in the following way: for each rectangle in the Veech construction, the left vertical side contains a unique singularity. Hence, we can label the corresponding outgoing horizontal separatrix with the letter of the rectangle. Furthermore, two of these rectangles (corresponding to $\pi_t^{-1}(1)$ and $\pi_b^{-1}(1)$) intersect the corresponding singularity at a left corner (the bottom for one, the top for the other), and the corresponding horizontal outgoing separatrix is the same, so it is labeled twice: once by the symbol $\pi_{t}^{-1}(1)$, and once by the symbol $\pi_b^{-1}(1)$. For all the other rectangles, the singularity on the left is in the interior of the left vertical side; hence, each corresponding separatrix is uniquely labeled. Therefore one gets a one-to-one map: $$F:\mathcal{A}' \to \mathcal{S}_X$$ where $\mathcal{A}'$ is the quotient of $\mathcal{A}$ by the equivalence relation $\pi_t^{-1}(1)\sim \pi_b^{-1}(1)$.
The element $F(\{\pi_t^{-1}(1), \pi_b^{-1}(1)\})$ will be referred to as the *doubly labeled* separatrix. \[Rlab:to:Hsat\] Let $(\pi,\zeta)$ be as previously. The framed translation surface constructed as before from $(\pi,\zeta)$ and the one defined by $\mathcal{R}(\pi,\zeta)$ are the same element in $\mathcal{C}(\mathcal{F}_{sat})$. This is an elementary check. The following proposition transforms the initial combinatorial question into a topological one on the moduli space of Abelian differentials. \[rc:to:cc\] There is a natural one-to-one correspondence between labeled Rauzy classes and connected components of the moduli space of framed translation surfaces $\mathcal{C}(\mathcal{F}_{sat})$. The degree of the mapping from a labeled Rauzy diagram to the reduced one is then precisely the degree of the natural mapping from a connected component of $\mathcal{C}(\mathcal{F}_{sat})$ to $\mathcal{C}(\mathcal{F}_{k})$. Let $R_{all}$ be the set of labeled permutations that corresponds to $\mathcal{C}(\mathcal{F}_{k})$, and let: $$\mathcal{H}_{R_{all}}=\{(\pi,\zeta),\pi\in R_{all}, \ \zeta \textrm{ suspension datum for }\pi\} /\mathcal{R}.$$ By Lemma \[Rlab:to:Hsat\] there is a map $\Phi_{sat}: \mathcal{H}_{R_{all}}\to\mathcal{C}(\mathcal{F}_{sat})$ such that the following diagram commutes: $$\begin{array}{ccc} \mathcal{H}_{R_{all}} &\xrightarrow{\Phi_{sat}} & \mathcal{C}(\mathcal{F}_{sat})\\ \downarrow {p_0}& & \downarrow {p_1} \\ \mathcal{H}_{R}& \xrightarrow{\ \Phi\ } &\mathcal{C}(\mathcal{F}_{k}) \end{array}$$ where $p_0$ is the canonical map that replaces a labeled permutation by a reduced one, and $p_1$ is the map that “forgets” all labels except for the doubly labeled separatrix. The maps $\Phi$ and $\Phi_{sat}$ are homeomorphisms onto their images, and surjective up to codimension 2 subsets (see [@Boissy:rc], Section 2, for details).
Hence, $\mathcal{H}_{R_{all}}$ and $\mathcal{C}(\mathcal{F}_{sat})$ have the same number of connected components, and the degrees of the maps $p_1$ and $p_{0}$, when restricted to a connected component, are the same. But the degree of the map $p_{0}$ restricted to a connected component is precisely the degree of the natural map from the labeled Rauzy diagram to the reduced one. Moduli space of translation surfaces with frame. ------------------------------------------------ There are obvious invariants for the connected components of the moduli space $\mathcal{C}(\mathcal{F}_{sat})$. Indeed, two elements of $\mathcal{C}(\mathcal{F}_{sat})$ that are in the same connected component must satisfy the following property: - The labels that correspond to a given singularity on one surface must correspond to the same singularity on the other surface. - The canonical cyclic order on the set of labels, obtained by rotating clockwise around a singularity, must be the same. Hence, a connected component of $\mathcal{C}(\mathcal{F}_{sat})$ is clearly isomorphic to a connected component of $\mathcal{C}(\mathcal{F}_{comp})$. Let $\alpha\in \mathcal{A}_k$ be a label associated to a degree $k$ singularity. There is a natural covering $p_{\alpha}$ from $\mathcal{C}(\mathcal{F}_{comp})$ to $\mathcal{C}(\mathcal{F}_{k})$ obtained by “forgetting” all the markings except the one that corresponds to $\alpha$. The following proposition summarizes the discussion of this section, and is equivalent to Proposition \[prop:intro\]. \[deg:coverings\] Let $R_{lab}$ be a labeled Rauzy class and $R$ be the corresponding reduced one. Let $k$ be the degree of the marked singularity associated to $R$. The ratio $\frac{|R_{lab}|}{|R|}$ equals the degree of the canonical projection $p_\alpha:\mathcal{C}(\mathcal{F}_{comp})\to \mathcal{C}(\mathcal{F}_{k})$, restricted to a connected component of $\mathcal{C}(\mathcal{F}_{comp})$, where $\alpha$ is a label associated to a degree $k$ singularity.
It was proven in [@Boissy:rc] that $\mathcal{C}(\mathcal{F}_{k})$ is connected. A connected component of $\mathcal{C}(\mathcal{F}_{comp})$ is naturally isomorphic to a connected component of $\mathcal{C}(\mathcal{F}_{sat})$. Then, we just apply Proposition \[rc:to:cc\]. Topological invariants for framed translation surfaces {#sec4} ====================================================== From now on, a framed translation surface will be an element in $\mathcal{C}(\mathcal{F}_{comp})$. As seen in Proposition \[deg:coverings\], the formula given in Theorem \[MT\] is related to the number of connected components of $\mathcal{C}(\mathcal{F}_{comp})$. Also, the degree of the covering $\mathcal{C}(\mathcal{F}_{comp})\to \mathcal{C}$, restricted to a connected component of $\mathcal{C}(\mathcal{F}_{comp})$, is clearly $\frac{\Pi_{i=1}^r n_i! (k_i+1)^{n_i}}{c}$, where $c$ is the number of connected components of $\mathcal{C}(\mathcal{F}_{comp})$, since ${\Pi_{i=1}^r n_i! (k_i+1)^{n_i}}$ is the number of possible frames $F\in \mathcal{F}_{comp}$ on a surface. In this section, we give lower bounds on the number of connected components of $\mathcal{C}(\mathcal{F}_{comp})$. There are two cases. - The “hyperelliptic case”: the corresponding surfaces are all hyperelliptic and have two singularities of degree $g-1$, with possibly some added regular marked points. Then, $\mathcal{C}(\mathcal{F}_{comp})$ cannot be connected, due to the extra symmetries of the underlying translation surfaces. - The “odd singularity case”: there are odd degree singularities. Then we can define on $\mathcal{C}(\mathcal{F}_{comp})$ a topological invariant which generalizes the well-known *spin structure invariant* for the moduli space of Abelian differentials, found by Kontsevich and Zorich. Recall that a Riemann surface $S$ is hyperelliptic if there exists an involution $\tau$ such that $S/\tau=\mathbb{CP}^1$.
Since $\mathbb{CP}^1$ does not have any nontrivial Abelian differential, for any translation surface $(S,\omega)$ such that $S$ is hyperelliptic the corresponding involution $\tau$ satisfies $\tau^* \omega=-\omega$. In particular, this means that the translation surface $(S,\omega)$ has an isometric involution which reverses the vertical direction. Kontsevich and Zorich have shown that for each genus $g\geq 2$, there are exactly two strata that contain a connected component which consists only of hyperelliptic translation surfaces. These are the strata $\mathcal{H}(2g-2)$ and $\mathcal{H}(g-1,g-1)$. Of course, for each of these strata, one can also define new ones by adding regular marked points on the surfaces. \[low:bound:hyp\] Assume that $\mathcal{C}$ consists only of hyperelliptic translation surfaces with two singularities of degree $g-1$ and $n_0$ regular marked points. Then, $\mathcal{C}(\mathcal{F}_{comp})$ has at least $g$ connected components. Let $X\in \mathcal{C}(\mathcal{F}_{comp})$. We denote by $l_0$ and $l_1$ the marked separatrices associated to the degree $g-1$ singularities, and we denote by $P_i$ the singularity corresponding to $l_i$. The hyperelliptic involution interchanges $P_0$ and $P_1$. Hence, there is a well-defined (incoming) separatrix $l_1'$ adjacent to $P_1$ which is the image of $l_0$. The angle $\theta$ between $l_1'$ and $l_1$ is an odd multiple of $\pi$ and is constant under continuous deformations of $X$ inside the ambient stratum. Note that the value of $\theta$ does not depend on any choice. Hence the value of $\theta$ is an invariant of the connected component of $\mathcal{C}(\mathcal{F}_{comp})$. Since all the values $\pi,3\pi,\dots ,(2g-1)\pi$ are possible, we see that the number of connected components of $\mathcal{C}(\mathcal{F}_{comp})$ is at least $g$. \[prop:odd:sing\] Assume that $\mathcal{C}$ consists of translation surfaces with at least one odd degree singularity.
Then, $\mathcal{C}(\mathcal{F}_{comp})$ has at least $2$ connected components. We postpone the proof of this proposition to the end of this section. We first define the “spin structure” invariant for $\mathcal{C}(\mathcal{F}_{comp})$. Let $X$ be a completely framed surface with at least one odd degree singularity. Note that the number of odd degree singularities of $X$ is necessarily even, since the sum of the degrees of the singularities must equal $2g-2$ by the Riemann-Roch formula. For each singularity, we have given a name $\alpha\in \mathcal{A}$ to a horizontal outgoing separatrix. Now let us fix a total order on the finite alphabet $\mathcal{A}$, so that the marked separatrices are naturally ordered. This order induces an oriented pairing of the separatrices corresponding to odd degree singularities. ![Building a surface with only even degree singularities.[]{data-label="curvpoly"}](curvpoly) Now let $(l_1,l_2)$ be such a pair. We rotate the first separatrix clockwise by an angle $\pi/2$, and the second one counterclockwise by an angle $\pi/2$. We obtain a pair of vertical separatrices, the first one in the positive direction, the second one in the negative direction. We denote by $(l_1^+,l_2^-)$ this pair of positive/negative vertical separatrices; let us also denote by $k_1,k_2$ the degrees of the corresponding zeroes.
According to Hubbard–Masur [@Hubbard:Masur], there exists a (smooth) path $\nu$ transverse to the horizontal foliation which starts tangent to $l_1^+$ and ends tangent to $l_2^-$. Now we consider the following surgery: we cut the surface $X$ along the path and paste in a “curvilinear parallelogram” with two small horizontal sides and two opposite sides that are isometric to $\nu$ (see Figure \[curvpoly\]). Then, gluing together the horizontal sides of the parallelogram, one obtains a translation surface where the pair of singularities corresponding to $l_1^+,l_2^-$ has become a single singularity of degree $k_1+k_2+2$, which is even. We will refer to this construction as the *parallelogram construction with parameters* $(l_1,l_2)$. Then, we apply this procedure to all the pairs of vertical separatrices that were defined previously. The resulting translation surface only has even degree singularities and is of genus at least 3, since the minimal genus case corresponds to starting from $\mathcal{H}(1,1)$ and ending in $\mathcal{H}(1+1+2)$. Recall that the strata of the moduli space of Abelian differentials corresponding to only even degree singularities are not connected as soon as the genus is greater than or equal to 3, and are distinguished by the parity of the spin structure. We will prove the following lemma: The connected component of the resulting surface in the previous construction doesn’t depend on the chosen paths. Up to a small deformation of the surface $X$, one can assume that it is obtained by the Veech construction starting from data $(\pi,\lambda,\tau)$. Then, for a pair $l_1,l_2$ of separatrices as previously, the surface obtained after the parallelogram construction with parameters $l_1,l_2$ also arises from the Veech construction, where the corresponding permutation is obtained from $\pi$ by adding a new label on the top before the symbol corresponding to $l_2$ and the same label on the bottom before the label corresponding to $l_1$.
For instance, in Figure \[curvpoly\], the labeled permutation $\left(\begin{smallmatrix} a&b&c&d&e \\ e&d&c&b&a \end{smallmatrix}\right)$ becomes $\left(\begin{smallmatrix} a&0&b&c&d&e \\ e&d&0&c&b&a \end{smallmatrix}\right)$, since $l_1$ corresponds to $c$ and $l_2$ corresponds to $b$. In particular, the permutation obtained after removing all the odd degree singularities doesn’t depend on the choice of the paths, but only on the order that we have chosen on $\mathcal{A}$. We just need to show that the spin structure invariant defined before reaches all the possible values. First we recall the Kontsevich–Zorich formula for the parity of the spin structure of a translation surface $X'$ of genus $g'$ with only even degree singularities. Let $a_1,b_1,\dots ,a_{g'},b_{g'}$ be a collection of closed paths that represent a symplectic basis of the homology $H_1(X'; \mathbb{Z})$, and such that each path does not pass through any singularity. We can assume that the $a_i,b_i$ are parametrized by arc length. For each $a_i$ (resp. $b_i$), we define $\textrm{ind}(a_i)$ (resp. $\textrm{ind}(b_i)$) to be the index of the map $\mathbb{S}^1\to \mathbb{S}^1$, $t\mapsto a_i'(t)$ (resp. $t\mapsto b_i'(t)$). Then, the *parity of the spin structure* of $X'$ is defined by the following formula: $$\sum_{i=1}^{g'} (\textrm{ind}(a_i)+1)(\textrm{ind}(b_i)+1) \mod 2$$ The result does not depend on the choice of the symplectic basis and is therefore an invariant of the connected components of the strata of the moduli space of Abelian differentials (see [@KoZo]). In the definition of the invariant for $\mathcal{C}(\mathcal{F}_{comp})$, we successively glue together some pairs of odd degree singularities. We can also glue all pairs except one, and therefore we can assume that there is only one pair $(P_1,P_2)$ of odd degree singularities, of degrees $k_1$ and $k_2$ respectively, on the surface. We present such a surface $X$ as coming from the Veech construction with parameters $(\pi,\lambda,\tau)$.
Let $g$ be the genus of this surface. As in Figure \[curvpoly\], we have a pair $l_1^+,l_2^-$ of positive/negative vertical separatrices, and we choose a path $\gamma$ transverse to the horizontal foliation. There exists a collection of closed paths $a_1,b_1,\dots ,a_g,b_g$ that do not intersect $\gamma$ and that represent a symplectic basis of the homology $H_1(X,\mathbb{Z})$. Let $a_0$ also be a small circle around the singularity $P_1$. When doing the parallelogram construction with parameters $l_1,l_2$ using $\gamma$, the closed paths $a_i,b_i$ persist, and so does the path $a_0$. Considering a path isometric to $\gamma$ inside the parallelogram, one obtains a closed path $b_0$ that intersects $a_0$ exactly once and does not intersect $a_i,b_i$ for all $i\in \{1,\dots ,g\}$. Hence, one gets a symplectic basis of the homology of the newly built surface, which can be used to compute the corresponding parity of the spin structure. Here, as we will see later, the only relevant data are the indices of $a_0$ and $b_0$. We clearly have: - $\textrm{ind}(a_0)=k_1+1 \mod 2=0 \mod 2$. - $\textrm{ind}(b_0)=0$.
![Building two surfaces with different spin structure.[]{data-label="fig:base:spin"}](spin "fig:") Now we start again from the surface $X$ and replace the separatrix $l_2$ by the separatrix $l_3$, obtained by rotating $l_2$ by the angle $2\pi$. Then, we do the parallelogram construction with parameters $(l_1,l_3)$. We consider the following symplectic basis on the resulting flat surface: - The paths $a_i,b_i$, for $i\in \{1\dots g\}$, which persist under this construction. - The path $a_0$, which also persists under this construction. - A path $b_0'$ obtained as in Figure \[fig:base:spin\]. The indices of $a_0,a_1,\dots ,a_g$ and of $b_1,\dots,b_g$ are the same as previously, but $\textrm{ind}(b_0')=1$. Since $\textrm{ind}(a_0)+1=1 \mod 2$, the surface obtained from $(l_1,l_3)$ has a different parity of spin structure from the one obtained from $(l_1,l_2)$. Hence, the two corresponding flat surfaces are in different connected components of the moduli space of Abelian differentials. This proves that $\mathcal{C}(\mathcal{F}_{comp})$ has at least $2$ connected components.
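The parity computation above can be made mechanical. A sketch implementing the Kontsevich–Zorich formula; the common part of the basis carries hypothetical index values, since only the extra pair $(a_0,b_0)$ versus $(a_0,b_0')$ differs between the two surfaces:

```python
def spin_parity(ind_a, ind_b):
    # Kontsevich-Zorich: sum of (ind(a_i)+1)(ind(b_i)+1) over the basis, mod 2
    return sum((x + 1) * (y + 1) for x, y in zip(ind_a, ind_b)) % 2

common_a, common_b = [0, 0], [0, 0]                 # hypothetical indices of a_1,b_1,a_2,b_2
p12 = spin_parity([0] + common_a, [0] + common_b)   # extra pair (a_0, b_0):  indices 0 and 0
p13 = spin_parity([0] + common_a, [1] + common_b)   # extra pair (a_0, b_0'): indices 0 and 1
assert p12 != p13   # the two surfaces have different parities of spin structure
```

Whatever the common indices are, the pair $(0,0)$ contributes $1$ and the pair $(0,1)$ contributes $0$, so the two parities always differ.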
Number of connected components of $\mathcal{C}(\mathcal{F}_{comp})$ {#surg} =================================================================== In the previous section, we have used topological invariants to find lower bounds on the number of connected components of $\mathcal{C}(\mathcal{F}_{comp})$. Here, we show that these lower bounds are the exact values. Three elementary surgeries. --------------------------- Here we describe some elementary closed paths in $\mathcal{C}$ that lift to non-closed paths in $\mathcal{C}(\mathcal{F}_{comp})$. Recall that a saddle connection $\gamma$ joining two distinct singularities is *simple* if there exists no other saddle connection homologous to $\gamma$. In particular, it means that up to a small deformation of the surface in the ambient stratum, there is no other saddle connection in the surface parallel to $\gamma$. Then, deforming the surface suitably with the Teichmüller geodesic flow (see [@B2; @Boissy:rc] for instance), one gets a surface for which the saddle connection corresponding to $\gamma$ is very short compared to the other ones. Then, one can show that such a surface is obtained by the *breaking up a zero* surgery (see [@EMZ]). We give a short description of this surgery. ### Breaking up a singularity. {#breaking-up-a-singularity. .unnumbered} ![Local surgery that breaks a zero of degree $k_1+k_2$ into two zeroes of degree $k_1$ and $k_2$ respectively.[]{data-label="break:zero"}](bzero "fig:") Let $k_1,k_2$ be the degrees of the zeroes that are the endpoints of $\gamma$. We start from a zero $P$ of degree $k_1+k_2$.
The neighborhood $V_\varepsilon=\{x\in X, d(x,P)\leq \varepsilon\}$ of this conical singularity is obtained by considering $2(k_1+k_2)+2$ Euclidean half disks of radius $\varepsilon$ and gluing each half side of them to another one in a cyclic order. We can break the zero into two smaller ones by changing continuously the way they are glued to each other, as in Figure \[break:zero\]. Note that in this surgery, the metric is not modified outside $V_\varepsilon$. In particular, the boundary $\partial V_\varepsilon$ is isometric to (a connected covering of) a Euclidean circle. Note that in this construction, we can “rotate” the two singularities by an angle $\theta$ by cutting the surface along $\partial V_\varepsilon$, rotating $V_\varepsilon$ by an angle $\theta$ and regluing it. \[move1\] Let $X$ be a framed surface and let $P_1,P_2$ be two distinct singularities of degree $k$, joined by a simple saddle connection $\gamma$. We deform the surface slightly so that no saddle connection is parallel to $\gamma$. Then, using the Teichmüller geodesic flow, we contract the saddle connection $\gamma$ until it is very small compared to any other saddle connection. So the new surface $X'$ is obtained by breaking a zero of degree $2k$ into two zeroes $P_1'$ and $P_2'$ of degree $k$. Now we continuously rotate these two zeroes by the angle $\theta=(2k+1)\pi$. The resulting unframed surface is the same as $X'$, but this procedure interchanges $P_1'$ and $P_2'$. Then, we come back to the initial surface $X$, but the labeled zeroes $P_1$ and $P_2$ have been interchanged. The labels on the separatrices adjacent to the other singularities have not changed. The projection of this move in $\mathcal{C}$ is a closed path. This move in $\mathcal{C}(\mathcal{F}_{comp})$ interchanges $P_1$ and $P_2$, and fixes the separatrices adjacent to the other singularities.
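Move \[move1\] realizes a transposition of two consecutive same-degree singularities along a chain of simple saddle connections. Since adjacent transpositions generate the whole symmetric group, iterating the move realizes every permutation of those singularities; a quick mechanical check of this group-theoretic fact (a sketch):

```python
def closure(n):
    # adjacent transpositions (i, i+1), written as permutations in one-line notation
    gens = [tuple(range(i)) + (i + 1, i) + tuple(range(i + 2, n)) for i in range(n - 1)]
    reached, frontier = {tuple(range(n))}, [tuple(range(n))]
    while frontier:
        nxt = []
        for p in frontier:
            for g in gens:
                q = tuple(p[g[j]] for j in range(n))   # compose p with a generator
                if q not in reached:
                    reached.add(q)
                    nxt.append(q)
        frontier = nxt
    return reached

assert len(closure(4)) == 24   # all of the symmetric group S_4
```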
The idea of this previous move is, as we will see, to allow us to perform any degree preserving permutation on the set of simply marked singularities. This explains the terms $n_i!$ in the formula of Theorem \[MT\]. Now the next two moves will fix the labeled singularities and change the marked outgoing separatrices. \[move2\] Let $X$ be a framed surface and let $P_1,P_2$ be two distinct singularities of degrees $k_1$ and $k_2$ respectively, joined by a simple saddle connection $\gamma$. Here we do not assume that $k_1=k_2$. We perform the same surgery as in Move \[move1\], but we turn $P_1'$ and $P_2'$ by $(k_1+k_2+1)2\pi$ instead. This move clearly corresponds to a closed path in $\mathcal{C}$. It also fixes $P_1$ and $P_2$. Let us look at how the marked separatrices change. For this, we can fix once and for all a marked separatrix for each singularity. Then, for a singularity of degree $k$, we can identify the set of corresponding horizontal separatrices with $\mathbb{Z}/(k+1)\mathbb{Z}$ by ordering them counterclockwise. We have the following lemma. Let $l_1\in \mathbb{Z}/(k_1+1)\mathbb{Z}$ and let $l_2\in \mathbb{Z}/(k_2+1)\mathbb{Z}$ be the separatrices associated to $P_1$ and $P_2$. Then, Move \[move2\] acts on the set of separatrices in the following way: - $l_1$ becomes $l_1-k_2 \mod k_1+1$ - $l_2$ becomes $l_2-k_1 \mod k_2+1$ - All the other labeled separatrices remain unchanged Note that it is enough to prove this lemma in the case when the pair $P_1,P_2$ is obtained after breaking up a singularity. The last statement of the lemma is obvious by construction: we do not change the metric outside a small neighborhood of $\gamma$. Now we look at the surgery, keeping track of the labeled separatrices. When turning the set $V_\varepsilon$ continuously by an angle $\theta$, one must simultaneously change the separatrices by an angle $-\theta$, so that they stay horizontal.
So at the end, they have moved by the angle $-(k_1+k_2+1)2\pi$ each, so for $i=1,2$, $l_i$ is replaced by $l_i-(k_1+k_2+1)$ modulo $k_i+1$, which gives the result. Note that this move is especially useful when $k_1=k_2$. Then, the corresponding transformation is $(l_1,l_2)\mapsto (l_1+1,l_2+1)$. Before describing the last move, we first describe a surgery which is analogous to the one presented in [@KoZo]. ### Bubbling $r$ handles: {#sec:bubble:handles .unnumbered} We start from a singularity of degree $p\in \{0,1\}$. Let us consider a small polygonal line $L$ with no self intersection starting from the singularity. Let $r$ be the number of segments $s_1,\dots ,s_r$ of $L$. We consider $r$ parallelograms, each one having a pair of sides parallel to one of the $s_i$. Then, we cut the surface along each $s_i$ and paste in the corresponding parallelogram, and we glue by translation the remaining opposite sides of each parallelogram. We obtain a translation surface of genus $g(X)+r$, and the degree $p$ singularity has been replaced by a degree $p+2r$ singularity. Note that this surgery can be performed without changing the metric outside a small neighborhood of the singularity of degree $p$. Note that we can “rotate” the construction in the following way: the surgery is performed inside an $\varepsilon$-neighborhood $V_\varepsilon$ of the initial singularity of degree $p$. The boundary $\partial V_\varepsilon$ remains a metric covering of a Euclidean circle after bubbling the handles. Now we can cut the surface $X$ along this circle and reglue it after a rotation by $\theta$. Now we can describe the last move. \[move3\] Assume that the translation surface $X$ was obtained after bubbling $r$ handles and let $P$ be the corresponding singularity. We continuously rotate the construction as explained previously, by an angle of $(p+2r+1)2\pi$.
As before, the underlying surface in $\mathcal{C}$ is the same after Move \[move3\], and any separatrix that does not correspond to the singularity $P$ remains unchanged. Let $l\in \mathbb{Z}/(p+2r+1)\mathbb{Z}$ be the separatrix corresponding to $P$. Then, Move \[move3\] changes $l$ in the following way: - If $p=0$, $l$ is replaced by $l-1$. - If $p=1$, $l$ is replaced by $l-2$. It is easy to see that, as in the case of Move \[move2\], a marked separatrix attached to the singularity is changed by the transformation $l\mapsto l-(p+1)\mod p+2r+1$, and the separatrices associated to the other singularities remain unchanged. In particular, if $p=0$, we reach all possible separatrices adjacent to $P$ in this way. If $p=1$, then $p+2r+1$ is even, and we reach only half of the separatrices adjacent to $P$ in this way. Generating the monodromy group. ------------------------------- \[prop:monod:1\] Assume that $\mathcal{C}$ contains nonhyperelliptic surfaces; then the following holds: - The set $\mathcal{C}(\mathcal{F}_{comp})$ is connected if all the singularities have even degree. - The set $\mathcal{C}(\mathcal{F}_{comp})$ has two connected components otherwise. We start with the following lemma. Let $\mathcal{C}$ be a connected component of $\mathcal{H}(k_1^{n_1},\ldots ,k_r^{n_r})$. Choose an ordering on the set with multiplicities $\{k_1,\dots ,k_1,\dots ,k_r,\dots ,k_r\}$. There exists $X\in \mathcal{C}$ and a polygonal line in $X$ that consists of simple saddle connections and that joins all the singularities of $X$ in that order. The proof is the same as the proof of Proposition 3.5 in [@Boissy:rc]. We first assume that there exist odd degree singularities in the underlying stratum. Since we assume that it is not a hyperelliptic stratum, it is connected (see [@KoZo]). We write this stratum as $\mathcal{H}(k_1^{n_1},\dots ,k_{s}^{n_s}, (2k'_1)^{\beta_1},\dots ,(2k'_{s'})^{\beta_{s'}})$, with $s+s'=r$, where $k_1,\dots ,k_{s}$ are odd.
Now we start from a surface in $\mathcal{H}(k_1^{n_1-1},1,\dots ,k_{s}^{n_s-1},1)$ (i.e. we don’t take the even degree singularities and we replace one singularity of each odd degree by a singularity of degree one). From the previous lemma, we can assume that there is a polygonal path of simple saddle connections which has no self intersection and that joins successively all the singularities in the following order: - first the singularities of degree $k_1$, - then, a singularity of degree 1, - then, the singularities of degree $k_2$, - then, a singularity of degree 1, - and so on … Now for each singularity of degree $1$ that ends a group of singularities of degree $k_i$, we bubble $(k_i-1)/2$ handles as in section \[sec:bubble:handles\]. This replaces the singularity of degree $1$ by a singularity of degree $k_i$. Note that the polygonal path of simple saddle connections persists under this surgery. We will denote by $\gamma$ this polygonal line. Now for each $i\in \{1,\dots ,s'\}$, we consider a polygonal line $\gamma_i$ joining $\beta_i$ regular points, such that the paths $\gamma,(\gamma_i)_i$ have no intersection points. Then, for each vertex, we bubble $k'_i$ handles. We obtain $s'$ chains of simple saddle connections that join each collection of singularities of degree $2k'_i$. The resulting surface is therefore in $\mathcal{H}(k_1^{n_1},\dots ,k_{s}^{n_s}, (2k'_1)^{\beta_1},\dots ,(2k'_{s'})^{\beta_{s'}})$, which is the stratum that we study. Now using Move 1, we see that for any polygonal line of simple saddle connections joining singularities with the same degree, we can perform any transposition of two consecutive singularities. Hence we can arbitrarily permute the singularities sharing the same degree. Using Move 3, we see that we can reach any choice of separatrices for the even degree singularities. Now we consider the chain $\gamma$ of simple saddle connections joining all the singularities of odd degree, that was constructed before.
The first $n_1$ vertices of $\gamma$ make a chain of singularities of degree $k_1$. Let us name the singularities $P_{1,1},\dots ,P_{1,n_1}$ according to the order given by the polygonal path $\gamma$. If $n_1>1$, we reach any choice of separatrices for $P_{1,1},\dots ,P_{1,n_1-1}$ by applying successively Move 2 on the pairs $(P_{1,i},P_{1,i+1})$ for $i\in \{1,\dots ,n_1-1\}$, in order to choose arbitrarily a labeled separatrix of $P_{1,i}$. Note that once this is done for some $i$, the next moves don’t change the marked separatrix corresponding to $P_{1,i}$. Then, for the singularity $P_{1,n_1}$, we use Move 3 to rotate the corresponding separatrix by any even number (recall that the set of outgoing separatrices corresponding to a singularity of degree $k$ is naturally identified with $\mathbb{Z}/(k+1)\mathbb{Z}$). If $P_{1,n_1}$ is not the end of $\gamma$, *i.e.* the polygonal line $\gamma$ continues to some other (odd) degree singularity $P_{2,1}$, then Move 2 on the pair $P_{1,n_1},P_{2,1}$ will act on the marked separatrix of $P_{1,n_1}$ as $l\mapsto l-k_2$. Hence, it will be changed by an *odd* number, so in combination with Move 3, we obtain all possible choices. If we iterate this procedure until the last singularity of degree $k_s$, we see that we can reach any choice of separatrices for the singularities of odd degree, except for the last one of the chain, where we obtain only half of the possibilities. This proves the proposition in the case when there exist odd degree singularities. If there does not exist any singularity of odd degree, the procedure described above works (with $\gamma=\emptyset$) as soon as we can find a surface as above in the connected component that we study. But in this case, the corresponding stratum of translation surfaces is not connected. Consider a translation surface obtained from a torus with the “bubbling $r$ handles” construction.
We can easily show that in this case, each singularity contributes zero to the spin structure; see Figure \[spin:bubble1\]. Hence the resulting parity of spin structure is the same as for the flat torus, which is odd. ![The “bubbling $r$ handles” standard construction. We have $\textrm{ind}(a_i)=1$, so the collection $(a_i,b_i)_{i\in \{1\dots \beta\}}$ contributes $0$ to the spin structure.[]{data-label="spin:bubble1"}](spin1 "fig:") If there exists a singularity of degree $k\geq 4$, it is easy to see that one can slightly change the construction to make this singularity contribute 1 to the spin structure, and obtain a surface with even spin structure (see Figure \[spin:bubble2\]). ![A slight change in the $r$-handle construction, for $r\geq 2$, changes the spin structure, since in this case $\textrm{Ind}(a_2)=0 \mod 2$.[]{data-label="spin:bubble2"}](spin2) The last remaining case is when all the singularities are of degree $2$ and the parity of spin structure is even. If there are exactly $2$ singularities, then the connected component is $\mathcal{H}^{even}(2,2)=\mathcal{H}^{hyp}(2,2)$, which is a hyperelliptic case. So assume there are at least $3$ singularities.
We can find a surface with a chain of simple saddle connections joining all the singularities, and such that the last element of the chain is obtained by the “bubbling a handle” construction. Then, combining Move 2 along the chain, and Move 3 at the end of the chain, we obtain that $\mathcal{C}(\mathcal{F}_{comp})$ is connected. This concludes the proof of the proposition. The nonhyperelliptic case is given by Proposition \[prop:monod:1\]. For the hyperelliptic case, the lower bound on the number of connected components is given by Proposition \[low:bound:hyp\], and the upper bound is easy and left to the reader. Recall that we denote by $\mathcal{H}(k_1^{n_1},\dots ,k_r^{n_r})$ the ambient stratum of the moduli space of Abelian differentials. The degree of the covering $\mathcal{C}(\mathcal{F}_{comp})\to \mathcal{C}$, restricted to a connected component of $\mathcal{C}(\mathcal{F}_{comp})$, is clearly $\frac{\Pi_{i=1}^r n_i! (k_i+1)^{\alpha_i}}{c}$, where $c$ is the number of connected components of $\mathcal{C}(\mathcal{F}_{comp})$ and is given by Theorem \[th:cc:Cfr\]. Let $k$ be the degree of the marked singularity associated to the Rauzy class $R$, and let $n$ be the number of singularities of degree $k$. The set $\mathcal{C}_{k}$ is connected and the degree of the projection $\mathcal{C}_{k}\to \mathcal{C}$ is $n (k+1)$. Hence we have: $$\frac{|R_{lab}|}{|R|}=\frac{\Pi_{i=1}^r n_i! (k_i+1)^{\alpha_i}}{c\,(k+1)\,n},$$ which gives Theorem \[MT\]. Rauzy classes for quadratic differentials ========================================= *Half-translation surfaces* are a natural generalization of translation surfaces. They are surfaces with an atlas such that the changes of coordinates are not only translations, but can also be half-turns. They correspond to Riemann surfaces with *quadratic differentials*.
Danthony and Nogueira have generalized interval exchange transformations and Rauzy induction to describe first return maps of nonoriented measured foliations on a transverse segment. One gets *linear involutions*, see [@DaNo]. The relation between quadratic differentials and linear involutions was described by the author and Lanneau in [@BL09]. One can wonder if there is an analogous result for Rauzy classes appearing in this context. There doesn’t seem to be a natural relation between “labeled generalized permutations” and framed half-translation surfaces. In particular, there is no “quadratic” equivalent of Lemma \[Rlab:to:Hsat\]. Numerical experiments on SAGE suggest that the ratio between a labeled Rauzy class and its corresponding reduced one is always $n!$ or $\frac{n!}{2}$, where $n$ is the number of underlying intervals, which is generally much more than in the Abelian case. In particular, using Rauzy induction and labelling the intervals, these numerical experiments suggest that one obtains either any renumbering of the intervals, or any even renumbering of the intervals, depending on the stratum. One can also look at *extended Rauzy classes*, where we add Rauzy moves that correspond to cutting on the left of the interval (see [@KoZo]). In this case, numerical experiments suggest that the ratio is also $n!$ or $n!/2$ depending on the stratum. [EMM2]{} – [Exponential mixing for the Teichmüller flow ]{}, *Publ. Math. IHES* [**104**]{} (2006), pp. 143–211. – [Simplicity of Lyapunov spectra: proof of the Zorich-Kontsevich conjecture ]{}, *Acta Math.* [**198**]{} (2007), no. 1, pp. 1–56. – [Degenerations of quadratic differentials on $\mathbb{CP}^1$ ]{}, *Geometry and Topology* [**12**]{} (2008) pp. 1345-1386 – [Dynamics and geometry of the Rauzy-Veech induction for quadratic differentials ]{}, *Ergodic Theory Dynam. Systems* [**29**]{} (2009), no. 3, pp. 767–816. – [Classification of Rauzy classes in the moduli space of quadratic differentials.
]{}, *preprint* arXiv:0904.3826 (2009). – [Pseudo-Anosov homeomorphisms on translation surfaces in hyperelliptic components have large entropy ]{}, *preprint* arXiv:1005.4148 (2010). – [“Decay of correlations for the Rauzy-Veech-Zorich induction map on the space of interval exchange transformations and the central limit theorem for the Teichmüller flow on the moduli space of abelian differentials”]{}, *J. Amer. Math. Soc.* [**19**]{} (2006), no. 3, pp.579–623 –[Measured foliations on nonorientable surfaces]{}, *Ann. Sci. École Norm. Sup.* (4) [**23**]{} (1990), pp. 469–494. – [Cardinality of Rauzy classes ]{}, *preprint* (2010). – [Moduli spaces of Abelian differentials: the principal boundary, counting problems, and the Siegel–Veech constants ]{}. *Publ. Math. IHES* [**97**]{} (2003), pp. 61–179. – [“Quadratic differentials and foliations”]{}, *Acta Math.*, [**142**]{} (1979), pp. 221–274. – [“Simplicial systems for interval exchange maps and measured foliations”]{}, *Ergodic Theory Dynam. Systems* [**5**]{} (1985), no. 2, pp.257–271. – [Connected components of the moduli spaces of Abelian differentials with prescribed singularities]{}, *Invent. Math.* [**153**]{} (2003), no. 3, pp. 631–678. – [Khinchin type condition for translation surfaces and asymptotic laws for the Teichmüller flow ]{}, *preprint* (2010). – [The cohomological equation for Roth type interval exchange transformations]{}, *Journal of the Amer. Math. Soc.* [**18**]{} (2005), pp. 823–872. – [Interval exchange transformations and measured foliations]{}, *Ann of Math.* [**141**]{} (1982) pp. 169–200. – [*[S]{}age [M]{}athematics [S]{}oftware ([V]{}ersion 4.2.1)* ]{}, The Sage Development Team, 2009, [http://www.sagemath.org]{}. – [Échanges d’intervalles et transformations induites]{}, *Acta Arith.* [**34**]{} (1979), pp. 315–328. – [Gauss measures for transformations on the space of interval exchange maps]{}, *Ann. of Math. (2)* **115** (1982), no. 1, pp. 201–242. 
[^1]: This can be computed for instance using Zorich’s MATHEMATICA software, or using the SAGE package developed by Delecroix.
--- abstract: | We classify all Rota—Baxter operators of nonzero weight on the matrix algebra of order three over an algebraically closed field of characteristic zero which do not arise from decompositions of the entire algebra into a direct vector space sum of two subalgebras. Keywords: Rota—Baxter operator, matrix algebra. --- [Rota—Baxter operators of nonzero weight on the matrix algebra of order three]{} M. Goncharov, V. Gubarev Introduction ============ Given an algebra $A$ and a scalar $\lambda\in F$, where $F$ is a ground field, a linear operator $R\colon A\rightarrow A$ is called a Rota—Baxter operator (RB-operator, for short) on $A$ of weight $\lambda$ if the following identity $$\label{RB} R(x)R(y) = R( R(x)y + xR(y) + \lambda xy )$$ holds for all $x,y\in A$. Then the algebra $A$ is called a Rota—Baxter algebra (RB-algebra). Glen Baxter in 1960 [@Baxter] introduced the notion of a Rota—Baxter operator as a formal generalization of the integration by parts formula. Further, F. Atkinson [@Atkinson], G.-C. Rota [@Rota], P. Cartier [@Cartier] and others studied such operators on commutative algebras. At the beginning of the 1980s, the deep connection between solutions of the classical Yang—Baxter equation (named after Rodney Baxter) from mathematical physics and RB-operators on a semisimple finite-dimensional Lie algebra was found by A.A. Belavin and V.G. Drinfel’d [@BelaDrin82] and M.A. Semenov-Tyan-Shanskii [@Semenov83]. Actually, M.A. Semenov-Tyan-Shanskii had rediscovered the notion of an RB-operator of nonzero weight on a Lie algebra. He called such operators solutions of the modified Yang—Baxter equation. From the 2000s, the active study of Rota—Baxter operators on associative algebras has begun, see the monograph [@GuoMonograph]. To the moment, many applications and connections of Rota—Baxter operators with symmetric polynomials, the shuffle algebra, Loday algebras, etc. have been found [@Atkinson; @Shuffle; @Aguiar00; @BurdeGubarev; @GuoMonograph].
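For concreteness, the identity above can be checked numerically on a small example (a sketch; the operator is our toy choice): on $M_2$ with $\lambda = 1$, let $R$ send a matrix to minus its strictly lower-triangular part, which is the splitting operator attached to the decomposition of $M_2$ into upper triangular plus strictly lower triangular matrices.

```python
from itertools import product

def mul(x, y):
    return [[sum(x[i][k] * y[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def lin(*ms):  # entrywise sum of matrices
    return [[sum(m[i][j] for m in ms) for j in range(2)] for i in range(2)]

def R(x):
    # minus the strictly lower-triangular part; an RB-operator of weight 1
    return [[0, 0], [-x[1][0], 0]]

samples = [[[1, 2], [3, 4]], [[0, 1], [1, 0]], [[2, -1], [5, 7]]]
for x, y in product(samples, samples):
    # R(x)R(y) = R(R(x)y + xR(y) + xy), i.e. the identity with lambda = 1
    assert mul(R(x), R(y)) == R(lin(mul(R(x), y), mul(x, R(y)), mul(x, y)))
```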
One of the interesting directions in the study of Rota—Baxter operators is the problem of classifying RB-operators on a given algebra. RB-operators were classified on $\mathrm{sl}_2(\mathbb{C})$ [@Kolesnikov; @KonovDissert; @sl2; @sl2-0], on $M_2(\mathbb{C})$ [@BGP; @Mat2], on $\mathrm{sl}_3(\mathbb{C})$ [@KonovDissert] and on other algebras [@AnBai; @BGP; @3-Lie]. In 2013, V.V. Sokolov described all skew-symmetric RB-operators of nonzero weight on $M_3(\mathbb{C})$ [@Sokolov]; up to conjugation with automorphisms and transposition, he obtained 8 series. In [@Unital], some general properties of RB-operators were stated. In particular, every RB-operator $R$ of weight zero on $M_n(F)$ over a field $F$ of characteristic zero is nilpotent and $R^{2n-1} = 0$. Given an RB-operator $R$ of nonzero weight on $M_n(\mathbb{C})$, we may assume that $R(1)$ is diagonal (up to conjugation with automorphisms of $M_n(\mathbb{C})$). The last result gives a powerful tool for the study of RB-operators of nonzero weight on the matrix algebra. As a corollary, it was proved in [@Unital] that every RB-operator $R$ of nonzero weight on $M_3(\mathbb{C})$ is diagonal, meaning that the subalgebra of diagonal matrices in $M_3(\mathbb{C})$ is preserved by $R$ (up to conjugation with automorphisms). In the current work we classify all RB-operators on $M_3(\mathbb{C})$ which are not projections of $M_3(\mathbb{C})$ onto a subalgebra parallel to another one. The study of such projections is a part of the research area of decompositions of an algebra into a sum of two subalgebras, see, e.g., [@BurdeGubarev]. Let us remark that for the algebra $M_n(\mathbb{C})$, solutions of the associative Yang—Baxter equation [@Zhelyabin; @Aguiar] are in one-to-one correspondence with RB-operators of weight zero [@Unital]. The same bijection holds [@AYBE-ext] for solutions of the weighted associative Yang—Baxter equation [@FardThesis p. 113] and RB-operators of nonzero weight on $M_n(\mathbb{C})$.
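The weight-zero nilpotency can be illustrated on $M_2$ with a toy operator of our choosing, $R(x) = x_{21}e_{12}$; one checks the weight-zero identity directly and sees that $R^2 = 0$, consistent with the bound $R^{2n-1} = R^3 = 0$:

```python
def mul(x, y):
    return [[sum(x[i][k] * y[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def add(x, y):
    return [[x[i][j] + y[i][j] for j in range(2)] for i in range(2)]

def R(x):
    # toy weight-zero RB-operator on M_2: x -> x_{21} e_{12}
    return [[0, x[1][0]], [0, 0]]

samples = [[[1, 2], [3, 4]], [[0, 1], [5, 0]], [[-1, 3], [2, 2]]]
for x in samples:
    assert R(R(x)) == [[0, 0], [0, 0]]          # R^2 = 0, so certainly R^{2n-1} = 0
    for y in samples:
        # weight-zero identity: R(x)R(y) = R(R(x)y + xR(y))
        assert mul(R(x), R(y)) == R(add(mul(R(x), y), mul(x, R(y))))
```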
Let us give a brief outline of the work. In §2, we give some required preliminaries on Rota—Baxter operators. In §3, we consider some results about RB-operators of nonzero weight on the matrix algebra. In Corollary 1, we clarify the classification of RB-operators on $M_2(\mathbb{C})$ from [@BGP]. As we said above, an RB-operator of nonzero weight on $M_3(\mathbb{C})$ preserves the subalgebra of diagonal matrices. In §4, we apply the descriptions of RB-operators of nonzero weight on $F\oplus F\oplus F$ from [@SumOfFields]. In §5, we prove the main result. In Theorem 3, we classify all RB-operators of nonzero weight on $M_3(\mathbb{C})$; we get 36 cases up to all natural actions. Given an RB-operator $R$ of weight 1 on the subalgebra $D_3$ of diagonal matrices, we may extend it to the entire algebra $M_3(\mathbb{C})$ as follows: we put $R(U_3) = 0$ and $(R+\id)(L_3) = 0$, where $U_3$ and $L_3$ denote the subalgebras of upper and lower triangular matrices respectively. Thus, we get 20 cases in Theorem 3. In the same way we may extend a given RB-operator $R$ on $F\oplus M_2(\mathbb{C})$ to the entire $M_3(\mathbb{C})$; such a construction gives another 12 cases in Theorem 3. Finally, we also have four “exceptional” cases in Theorem 3. In Corollary 2, we check that all obtained RB-operators lie in pairwise distinct orbits. Preliminaries ============= [**Statement 1**]{} [@GuoMonograph]. Given an RB-operator $P$ of weight $\lambda$, a\) the operator $-P-\lambda\id$ is again an RB-operator of weight $\lambda$, b\) the operator $\lambda^{-1}P$ is an RB-operator of weight 1, provided $\lambda\neq0$. Given an algebra $A$, let us define a map $\phi$ on the set of all RB-operators on $A$ as $\phi(P)=P'=-P-\lambda(P)\id$, where $\lambda(P)$ denotes the weight of an RB-operator $P$. Note that $\phi^2$ coincides with the identity map. [**Statement 2**]{} [@BGP].
Given an algebra $A$, an RB-operator $P$ of weight $\lambda$ on $A$, and $\psi\in\Aut(A)$, the operator $P^{(\psi)} = \psi^{-1}P\psi$ is an RB-operator of weight $\lambda$ on $A$. [**Statement 3**]{} [@GuoMonograph]. Let an algebra $A$ split as a vector space into a direct sum of two subalgebras $A_1$ and $A_2$. An operator $P$ defined as $$\label{Split} P(a_1 + a_2) = -\lambda a_2,\quad a_1\in A_1,\ a_2\in A_2,$$ is an RB-operator of weight $\lambda$ on $A$. We call an RB-operator as in Statement 3 a [*splitting*]{} RB-operator with subalgebras $A_1,A_2$. There is a bijection between the set of all splitting RB-operators on an algebra $A$ and the set of all decompositions of $A$ into a direct sum of two subalgebras $A_1,A_2$. Note that if $P$ is a splitting RB-operator on $A$ of weight $\lambda$ with subalgebras $A_1,A_2$, then $\phi(P)$ is the splitting RB-operator of weight $\lambda$ with the same subalgebras $A_1, A_2$ (just the other projection). [**Statement 4**]{} [@BGP]. Let $A$ be a unital algebra, and let $P$ be an RB-operator of nonzero weight $\lambda$ on $A$. a\) If $P(1)\in F$, then $P$ is splitting; b\) If $P(P(x) + \lambda x) = 0$ for all $x\in A$, then $P$ is splitting. Given a unital algebra $A$, we call an RB-operator $R$ on $A$ of nonzero weight [*inner-splitting*]{} if $R(1)\in F$. [**Statement 5**]{} [@Guil]. Let an algebra $A$ be a direct sum of subalgebras $A_-,A_0,A_+$, where $A_\pm$ are $A_0$-modules. If $R_0$ is an RB-operator of weight $\lambda$ on $A_0$, then the operator $P$ defined as $$\label{RB:SubAlg2} P(a_-+a_0+a_+) = R_0(a_0) - \lambda a_+,\quad a_{\pm}\in A_{\pm},\ a_0\in A_0,$$ is an RB-operator of weight $\lambda$ on $A$. Let us call an RB-operator of nonzero weight defined as above a [*triangular-splitting*]{} RB-operator, provided that at least one of $A_-,A_+$ is nonzero. If $A_0 = (0)$, then $P$ is a splitting RB-operator on $A$. If $A_0$ has trivial (zero) product, then any linear map on $A_0$ is suitable as $R_0$.
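As a sanity check of Statement 5 (a sketch with data of our choosing): take $A = M_2(F)$, $A_-$ the strictly upper triangular matrices, $A_0 = D_2$, $A_+$ the strictly lower triangular matrices, $\lambda = 1$, and $R_0$ the splitting RB-operator on $D_2$ sending $\mathrm{diag}(a,b)$ to $\mathrm{diag}(0,-b)$.

```python
from itertools import product

def mul(x, y):
    return [[sum(x[i][k] * y[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def lin(*ms):  # entrywise sum of matrices
    return [[sum(m[i][j] for m in ms) for j in range(2)] for i in range(2)]

def P(x):
    # P(a_- + a_0 + a_+) = R_0(a_0) - a_+ with the data above:
    # R_0(diag(a, b)) = diag(0, -b), and a_+ is the strictly lower part of x
    return [[0, 0], [-x[1][0], -x[1][1]]]

samples = [[[1, 0], [2, 3]], [[0, 5], [1, -1]], [[2, 2], [2, 2]]]
for x, y in product(samples, samples):
    # the RB identity of weight 1
    assert mul(P(x), P(y)) == P(lin(mul(P(x), y), mul(x, P(y)), mul(x, y)))
```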
Note that if $P$ is a triangular-splitting RB-operator on an algebra $A$ with subalgebras $A_\pm,A_0$, then the operator $\phi(P)$ is the triangular-splitting RB-operator with the same subalgebras.

[**Statement 6**]{} [@Unital]. Let an algebra $A$ be equal to a direct sum of two ideals $A_1$ and $A_2$, and let $R$ be an RB-operator of weight $\lambda$ on $A$. Then $\mathrm{Pr}_i R$ is an RB-operator of weight $\lambda$ on $A_i$, $i=1,2$. Here $\mathrm{Pr}_i$ denotes the projection from $A$ onto $A_i$.

RB-operators on $M_n(F)$
========================

Over a field $F$, denote the algebras of all strictly upper and strictly lower triangular matrices of order $n$ by $U_n$ and $L_n$, respectively.

[**Example 1**]{} [@BGP]. Decomposing $M_n(F) = L_n\oplus D_n\oplus U_n$ (as vector spaces) and given an RB-operator defined on $D_n$, we have a triangular-splitting RB-operator defined with $A_- = U_n$, $A_+ = L_n$, $A_0 = D_n$.

[**Example 2**]{}. Let $1\leq k<n$ and define $$\begin{gathered} M_1 = \Span\{e_{ij}\mid 1\leq i,j\leq k\}\oplus \Span\{e_{ij}\mid k+1\leq i,j\leq n\},\\ M_2 = \Span\{e_{ij}\mid 1\leq i\leq k;\,k+1\leq j\leq n\}, \\ M_3 = \Span\{e_{ij}\mid k+1\leq i\leq n;\,1\leq j\leq k\}.\end{gathered}$$ Decomposing $M_n(F) = M_1\oplus M_2\oplus M_3$ (as vector spaces) and given an RB-operator defined on $M_1$, we have a triangular-splitting RB-operator on $M_n(F)$ defined with $A_+ = M_2$, $A_- = M_3$, $A_0 = M_1$.

[**Example 3**]{}. Let an algebra $A$ be equal to a direct vector-space sum $A_- \oplus A_0 \oplus A_+$ of its three subalgebras satisfying the conditions $A_0 A_-,A_-A_0\subset A_+$. Suppose that $P$ is an RB-operator of weight 1 on $A_0$. Then the linear operator $R$ defined on $A$ as $R(a_- + a_0 + a_+) = P(a_0) - a_+$ is an RB-operator of weight 1 if $R(A_0)A_-,A_-R(A_0)\subset A_-$.

[**Lemma 1**]{}.
Let $R$ be an RB-operator of weight 1 on $M_n(F)$ such that $R(1)$ is a diagonal matrix, $$\label{R(1)diagonal} \begin{gathered} R(1) = \sum\limits_{i=1}^s \lambda_i (e_{k_1+\ldots+k_{i-1}+1,k_1+\ldots+k_{i-1}+1}+\ldots+e_{k_1+\ldots+k_i,k_1+\ldots+k_i}),\\ k_0 = 0,\quad k_1,\ldots,k_s\geq 1,\quad \sum\limits_{i=1}^s k_i = n. \end{gathered}$$ Suppose that either the $s$-tuple $(\lambda_1,\ldots,\lambda_s)$ or $(\lambda_s,\ldots,\lambda_1)$ equals $(-f,-f+1,\ldots,0,\ldots,g)$ for some integers $f,g\geq 0$. Then the space $$V_t = \Span\{e_{ij}\mid k_1+\ldots+k_{p-1}<i\leq k_1+\ldots+k_p, \,k_1+\ldots+k_{q-1}<j\leq k_1+\ldots+k_q,\,p-q=t\}$$ is $R$-invariant for all $t$. Moreover, $R$ is splitting on both $V_{f+g}$ and $V_{-f-g}$.

[Proof]{}. Substituting 1 for one of the arguments in the RB identity, we write down the following equalities, $$\begin{gathered} R(1)R(x) = R^2(x) + R(x) + R(R(1)x), \label{R(1)R(x)} \\ R(x)R(1) = R^2(x) + R(x) + R(xR(1)). \label{R(x)R(1)}\end{gathered}$$ From \eqref{R(1)R(x)} and \eqref{R(x)R(1)}, we deduce that $$\label{[R(1),R(x)]} [R(1),R(x)] = R([R(1),x]).$$ Let $x$ be a matrix unit from $V_t$; that is, $x = e_{yz}$ for some $k_1+\ldots+k_{p-1}<y\leq k_1+\ldots+k_p$ and $k_1+\ldots+k_{q-1}<z\leq k_1+\ldots+k_q$ with $p - q = t$. For $i,j$ such that $k_1+\ldots+k_{u-1}<i\leq k_1+\ldots+k_u$ and $k_1+\ldots+k_{v-1}<j\leq k_1+\ldots+k_v$ and for $A = (a_{cd})_{c,d=1}^n = R(x)$, we get the equalities $a_{ij}(\lambda_u-\lambda_v) = a_{ij}(\lambda_p-\lambda_q)$ by \eqref{[R(1),R(x)]}. When $u - v \neq t$, we get $a_{ij} = 0$. Consider $V_{f+g}$; it consists of a single block, namely $\Span\{e_{ij}\mid k_1+\ldots+k_{s-1}<i\leq n,\,1\leq j\leq k_1\}$. Putting a matrix unit $e_{yz}$ from $V_{f+g}$ into \eqref{R(1)R(x)}, we get $R^2(e_{yz}) + R(e_{yz}) = 0$. By Statement 4b), $R$ is splitting on $V_{f+g}$. Analogously, $R$ is splitting on $V_{-f-g}$. $\square$

[**Remark 1**]{}. Actually, in [@Unital] it was proved that $V_0$ is $R$-invariant. In Lemma 1, we have extended this statement to all $t$.

[**Theorem 1**]{} [@BGP; @Unital].
Let $F$ be an algebraically closed field of characteristic zero. Every nontrivial RB-operator of weight 1 on $M_2(F)$, up to conjugation with an automorphism of $M_2(F)$ or transpose and up to the action of $\phi$, equals one of the following:

\(a) $R\begin{pmatrix} x_{11} & x_{12} \\ x_{21} & x_{22} \\ \end{pmatrix} = \begin{pmatrix} 0 & - x_{12} \\ 0 & x_{11} \end{pmatrix}$,

\(b) $R\begin{pmatrix} x_{11} & x_{12} \\ x_{21} & x_{22} \\ \end{pmatrix} = \begin{pmatrix} -x_{11} & - x_{12} \\ 0 & 0 \end{pmatrix}$,

\(c) $R\begin{pmatrix} x_{11} & x_{12} \\ x_{21} & x_{22} \\ \end{pmatrix} = -x_{21}\begin{pmatrix} \alpha & \alpha\gamma \\ 1 & \gamma \end{pmatrix}$, $\alpha,\gamma \in F$,

\(d) $R\begin{pmatrix} x_{11} & x_{12} \\ x_{21} & x_{22} \\ \end{pmatrix} = \begin{pmatrix} \alpha x_{12} & - x_{12}\\ -x_{21} & (1/\alpha)x_{21} \end{pmatrix}$, $\alpha\in F\setminus\{0\}$,

\(e) $R\begin{pmatrix} x_{11} & x_{12} \\ x_{21} & x_{22} \\ \end{pmatrix} = \begin{pmatrix} x_{22}-x_{11} + \beta x_{21} & (-\beta^2/4) x_{21} - x_{12}\\ 0 & 0 \end{pmatrix}$, $\beta\in F$.

Let us refine the classification of RB-operators on $M_2(F)$ from Theorem 1 in the following way.

[**Corollary 1**]{}. Let $F$ be an algebraically closed field of characteristic zero.
Every nontrivial RB-operator on $M_2(F)$ of weight 1, up to conjugation with an automorphism of $M_2(F)$ or transpose and up to the action of $\phi$ from Statement 1, equals one of the following cases:

(M1) $R\begin{pmatrix} x_{11} & x_{12} \\ x_{21} & x_{22} \\ \end{pmatrix} = \begin{pmatrix} 0 & - x_{12} \\ 0 & x_{11} \end{pmatrix}$,

(M2) $R\begin{pmatrix} x_{11} & x_{12} \\ x_{21} & x_{22} \\ \end{pmatrix} = \begin{pmatrix} -x_{11} & - x_{12} \\ 0 & 0 \end{pmatrix}$,

(M3) $R\begin{pmatrix} x_{11} & x_{12} \\ x_{21} & x_{22} \\ \end{pmatrix} = \begin{pmatrix} 0 & 0 \\ -x_{21} & 0 \end{pmatrix}$,

(M4) $R\begin{pmatrix} x_{11} & x_{12} \\ x_{21} & x_{22} \\ \end{pmatrix} = \begin{pmatrix} x_{21} & 0 \\ -x_{21} & 0 \end{pmatrix}$,

(M5) $R\begin{pmatrix} x_{11} & x_{12} \\ x_{21} & x_{22} \\ \end{pmatrix} = \begin{pmatrix} x_{12} & - x_{12}\\ -x_{21} & x_{21} \end{pmatrix}$,

(M6) $R\begin{pmatrix} x_{11} & x_{12} \\ x_{21} & x_{22} \\ \end{pmatrix} = \begin{pmatrix} x_{22}-x_{11} & - x_{12}\\ 0 & 0 \end{pmatrix}$.

Moreover, these 6 cases lie in different orbits of the set of RB-operators of weight 1 on $M_2(F)$ under the action of $\phi$ and conjugation with $\Aut(M_2(F))$ or transpose.

[Proof]{}. We apply the classification from Theorem 1. The RB-operator $R$ from (c) with $\alpha+\gamma=0$ is conjugate to the RB-operator $P$ from (M3) with the help of the automorphism $$\label{AutForM2} \begin{gathered} \psi(e_{11}) = e_{11} - \alpha e_{12},\quad \psi(e_{12}) = e_{12},\\ \psi(e_{22}) = e_{22} + \alpha e_{12},\quad \psi(e_{21}) = \alpha(e_{11} - e_{22}) - \alpha^2 e_{12} + e_{21}, \end{gathered}$$ that is, $\psi^{-1}R\psi = P$.
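This conjugacy can be checked numerically. The sketch below is an editorial illustration (not part of the original proof); it uses the observation that the automorphism $\psi$ above acts as $X\mapsto T^{-1}XT$ for $T=\begin{pmatrix}1&-\alpha\\0&1\end{pmatrix}$, and it verifies both the weight-1 RB identity for the operator of case (c) with $\gamma=-\alpha$ and the conjugacy $\psi^{-1}R\psi = P$, at a sample parameter value.

```python
import numpy as np

ALPHA = 3  # an arbitrary nonzero test value of the parameter alpha

def R(x):
    """Case (c) of Theorem 1 with gamma = -alpha."""
    return -x[1, 0] * np.array([[ALPHA, -ALPHA**2], [1, -ALPHA]])

def P(x):
    """Case (M3)."""
    out = np.zeros_like(x)
    out[1, 0] = -x[1, 0]
    return out

T = np.array([[1, -ALPHA], [0, 1]])
Tinv = np.array([[1, ALPHA], [0, 1]])  # T has determinant 1

def psi(x):      # psi(X) = T^{-1} X T, an inner automorphism of M_2
    return Tinv @ x @ T

def psi_inv(x):  # psi^{-1}(X) = T X T^{-1}
    return T @ x @ Tinv

rng = np.random.default_rng(1)
for _ in range(100):
    x = rng.integers(-5, 5, (2, 2))
    y = rng.integers(-5, 5, (2, 2))
    # weight-1 Rota-Baxter identity for R
    assert (R(x) @ R(y) == R(R(x) @ y + x @ R(y) + x @ y)).all()
    # conjugacy: psi^{-1} R psi = P
    assert (psi_inv(R(psi(x))) == P(x)).all()
print("case (c) with gamma = -alpha is conjugate to (M3) via psi")
```

Since $\det T = 1$, all computations stay in integer matrices and the comparisons are exact.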
When $\alpha+\gamma\neq0$, $R$ is conjugate to (M4) under the action of the automorphism $$\begin{gathered} \chi(e_{11}) = e_{11} + \gamma e_{12},\quad \chi(e_{12}) = -(\alpha+\gamma)e_{12},\quad \chi(e_{22}) = e_{22} - \gamma e_{12},\\ \chi(e_{21}) = \frac{1}{\alpha+\gamma}( \gamma(e_{11} - e_{22}) + \gamma^2 e_{12} - e_{21} ).\end{gathered}$$ Further, the RB-operator $R$ from (d) is conjugate to (M5) with the automorphism $$\xi(e_{11}) = e_{11},\quad \xi(e_{12}) = (1/\alpha) e_{12},\quad \xi(e_{22}) = e_{22},\quad \xi(e_{21}) = \alpha e_{21}.$$ The RB-operator $R$ from (e) is conjugate to (M6) with the automorphism $\psi$ from \eqref{AutForM2} defined with $\alpha = \beta/2$.

Now, let us check that all 6 cases lie in different orbits. We note that given an RB-operator $R$, conjugation with an automorphism or transpose does not change the algebraic properties of $\ker(R)$ and $\ker(R+\id)$. The case (M1) is the unique non-splitting RB-operator in the list. The case (M2) is the unique splitting RB-operator with $R(1)\not\in F$. The cases (M3) and (M4) satisfy the condition $\dim(\ker(R)) = 3$ in contrast to (M5) and (M6). We may distinguish the cases (M3) and (M4) as follows: the algebra $\ker(R+\id)$ is nilpotent in (M3) but not in (M4). Finally, in the case (M5), in contrast to the case (M6), one of the kernels is isomorphic to $F\oplus F$. $\square$

[**Remark 2**]{}. It is easy to show that the RB-operator (M4) is conjugate to the RB-operator

(M4${}^\prime$) $R\begin{pmatrix} x_{11} & x_{12} \\ x_{21} & x_{22} \\ \end{pmatrix} = \begin{pmatrix} 0 & 0 \\ -x_{21} & x_{21} \end{pmatrix}$.

Let us call an RB-operator $R$ on $M_n(F)$ [*diagonal*]{} if $R^{(\psi)}(D_n)\subseteq D_n$ for some $\psi\in\Aut(M_n(F))$. By Theorem 1, all RB-operators of nonzero weight on $M_2(F)$ are diagonal. In [@Unital], the same result was stated for the matrix algebra of order three.

[**Theorem 2**]{} [@Unital]. Let $F$ be an algebraically closed field of characteristic zero.
All RB-operators of nonzero weight on $M_3(F)$ are diagonal.

In what follows, we will apply the automorphism $\Phi_{ab}$ of $M_3(F)$, where $a,b\in\{1,2,3\}$ and $a\neq b$. Let $c$ be such that $\{a,b,c\} = \{1,2,3\}$. The automorphism $\Phi_{ab}$ acts on matrix units as $\Phi_{ab}(e_{ij}) = e_{i'j'}$, where $a' = b$, $b' = a$ and $c' = c$. Moreover, we define an automorphism $\Phi_{abc}$ of $M_3(F)$ for $\{a,b,c\} = \{1,2,3\}$ acting on the indices of matrix units as the cycle $(abc)$.

RB-operators on $F^3$
=====================

In [@SumOfFields], RB-operators of nonzero weight on a sum of fields were studied. In particular, the following was proved.

[**Statement 7**]{} [@SumOfFields]. Let $R$ be a non-inner-splitting RB-operator of weight 1 on the sum of fields $F^3 = Ff_1\oplus Ff_2\oplus Ff_3$; then, up to permutation of coordinates and the action of $\phi$, we have nine cases: $$\begin{aligned} 1&) & R(f_1) &= f_2 + f_3, & R(f_2) &= f_3, & R(f_3) &= 0, \\ 2&) & R(f_1) &= f_2 + f_3, & R(f_2) &= f_3, & R(f_3) &= -f_3, \\ 3&) & R(f_1) &= f_2 + f_3, & R(f_2) &= -(f_2+f_3), & R(f_3) &= -f_3, \\ 4&) & R(f_1) &= f_2 + f_3, & R(f_2) &= -f_2, & R(f_3) &= 0, \\ 5&) & R(f_1) &= f_2 + f_3, & R(f_2) &= 0, & R(f_3) &= 0, \\ 6&) & R(f_1) &= f_2, & R(f_2) &= 0, & R(f_3) &= 0, \\ 7&) & R(f_1) &= f_2, & R(f_2) &= 0, & R(f_3) &= -f_3, \\ 8&) & R(f_1) &= -f_1, & R(f_2) &= 0, & R(f_3) &= 0,\\ 9&) & R(f_1) &= f_2, & R(f_2) &= -f_2, & R(f_3) &= -f_3.\end{aligned}$$ Let us also write down the action of the RB-operator $R' = -(R+\id)$ in all nine cases, $$\begin{aligned} 1&) & R'(f_1) &= -f_1 - f_2 - f_3, & R'(f_2) &= -f_2 - f_3, & R'(f_3) &= -f_3, \\ 2&) & R'(f_1) &= -f_1 - f_2 - f_3, & R'(f_2) &= -f_2 - f_3, & R'(f_3) &= 0, \\ 3&) & R'(f_1) &= -f_1 - f_2 - f_3, & R'(f_2) &= f_3, & R'(f_3) &= 0, \\ 4&) & R'(f_1) &= -f_1 - f_2 - f_3, & R'(f_2) &= 0, & R'(f_3) &= -f_3, \\ 5&) & R'(f_1) &= -f_1 - f_2 - f_3, & R'(f_2) &= -f_2, & R'(f_3) &= -f_3, \\ 6&) & R'(f_1) &= -f_1 - f_2, & R'(f_2) &= -f_2, &
R'(f_3) &= -f_3, \\\end{aligned}$$ $$\begin{aligned} 7&) & R'(f_1) &= -f_1 - f_2, & R'(f_2) &= -f_2, & R'(f_3) &= 0, \\ 8&) & R'(f_1) &= 0, & R'(f_2) &= -f_2, & R'(f_3) &= -f_3,\\ 9&) & R'(f_1) &= -f_1 - f_2, & R'(f_2) &= 0, & R'(f_3) &= 0.\end{aligned}$$

Now, we state some required lemmas about RB-operators of weight 1 on $M_3(F)$. By $E$ we denote the unit of $M_3(F)$.

[**Lemma 2**]{}. If there exists $i$ such that $R(e_{ii})\in \{0,-E\}$, then $$e_{ii}\Imm(R+\id),\ \Imm(R+\id)e_{ii}\subset \ker(R).$$

[Proof]{}. Suppose that $R(e_{ii}) = 0$ or $R(e_{ii}) = -E$. By the RB identity, for all $x$ we have $$\label{ImR'InKerR} R( e_{ii}(R(x)+x) ) = R(e_{ii})R(x) - R(R(e_{ii})x) = 0.$$ We deal analogously with $\Imm(R+\id)e_{ii}$. $\square$

We call a subspace $V$ of $M_3(F)$ [*homogeneous*]{} if for any $a = \sum\limits_{i,j=1}^3 \alpha_{ij}e_{ij}\in V$ we have $\alpha_{ij}e_{ij}\in V$ for all $i,j=1,2,3$.

[**Lemma 3**]{}. If there exist $i\neq j$ such that $R(e_{ii}),R(e_{jj})\in \{0,-E\}$, then $\ker(R)$ is homogeneous.

[Proof]{}. Since $\ker(R)\subset \Imm(R+\id)$, by Lemma 2 we get that $$\label{Homogeneous-Cond} e_{ii}\ker(R),e_{jj}\ker(R),\ker(R)e_{ii},\ker(R)e_{jj}\subset \ker(R).$$ Let $\{i,j,k\} = \{1,2,3\}$; so, $e_{kk}\ker(R) = (E-e_{ii}-e_{jj})\ker(R)\subset \ker(R)$ and $\ker(R)e_{kk}\subset \ker(R)$. If $a\in \ker(R)$, then $e_{ss} a e_{tt}\in \ker(R)$ for all $s,t=1,2,3$. $\square$

[**Lemma 4**]{}. If there exist $i\neq j$ such that $R(e_{ii})\in \{0,-E\}$ and $R(e_{jj}) = e_{ii}$ or $R(e_{jj}) = e_{ii} - E$, then $\ker(R)$ is homogeneous.

[Proof]{}. Suppose that $R(e_{jj}) = e_{ii}$ or $R(e_{jj}) = e_{ii} - E$. Applying \eqref{ImR'InKerR}, we get $$\begin{gathered} R( e_{jj}(R(x)+x) ) = R(e_{jj})R(x) - R(R(e_{jj})x) = e_{ii}R(x) - R(e_{ii} x) \\ = e_{ii}R(x) + R(e_{ii}R(x)),\end{gathered}$$ so $e_{jj}\ker(R),\ker(R)e_{jj}\subset \ker(R)$. By Lemma 2, $e_{ii}\ker(R),\ker(R)e_{ii}\subset \ker(R)$. So, we get \eqref{Homogeneous-Cond} and thus $\ker(R)$ is homogeneous.
$\square$

Given an RB-operator of weight 1 on $M_3(F)$, by Theorem 2 we may assume that $D_3(F)$ is $R$-invariant and the action of $R$ on $D_3(F)$ is one of Cases 1–9 listed in Statement 7. Let us apply Lemma 3 (for Cases 2–5, 8) and Lemma 4 (for Cases 1, 6, 7) to get the following data about RB-operators of weight 1 on $M_3(F)$ with prescribed action on $D_3(F)$ (see the table below).

[**Lemma 5**]{}. Let $R$ be an RB-operator on $M_3(F)$ of weight 1. Suppose that $R'(e_{ii}) = -E$ for some $i\in\{1,2,3\}$. Then $\Imm(R)$ has zero projection on $e_{ii}$.

[Proof]{}. By Lemma 2 applied to $R'$, we have $e_{ii}\Imm(R)e_{ii}\subset \ker(R')$. Since $R'(e_{ii})\neq 0$, the space $\Imm(R)$ has zero projection on $e_{ii}$. $\square$

  Case   $\ker(R)$-homogeneity   $\ker(R')$-homogeneity
  ------ ----------------------- ------------------------
  1      +                       +
  2                              +
  3                              +
  4                              +
  5      +
  6      +
  7      +                       +
  8      +
  9                              +

Main result
===========

We call an RB-operator of weight 1 on $M_3(F)$ defined by Example 1 [*primitive*]{}. Let $R$ be a primitive RB-operator on $M_3(F)$ whose action on $D_3(F)$ corresponds to Case $X$, where $X\in\{1,2,3,4,5,6,7,8,9\}$ (see Statement 7). We denote it as case $Xa$). By case $Xb$) (resp. $Xc$)), we denote a primitive RB-operator whose action on $D_3(F)$ is conjugate with the automorphism $\Phi_{12}$ (resp. $\Phi_{23}$), restricted to $D_3(F)$, to Case $X$.

[**Theorem 3**]{}. Let $F$ be an algebraically closed field of characteristic zero.
Every non-splitting RB-operator of weight 1 on $M_3(F)$, up to conjugation with an automorphism of $M_3(F)$ or transpose and up to the action of $\phi$,

A\) is defined by Example 1 with all cases 1a)–7a), 1b)–7b), 1c)–7c) except 5c);

B\) or is one of the following:

1-I) $R(e_{11}) = e_{22}+e_{33}$, $R(e_{22}) = e_{33}$; $e_{31},e_{32},e_{33}\in \ker(R)$, $e_{12},e_{13}\in\ker(R')$, $R(e_{21})=-e_{32}$, $R(e_{23})=e_{12}-e_{23}$;

6-I) $e_{22},e_{23},e_{32},e_{33}\in\ker(R)$, $R(e_{11}) = e_{22}$, $R(e_{12})=-e_{12}-e_{32}$, $R(e_{21})=-e_{21}+e_{23}$, $R(e_{13})=e_{11}+e_{22}-e_{13}$, $R(e_{31})=-e_{31}+e_{33}$;

6-II) $e_{12},e_{13},e_{22},e_{23},e_{32},e_{33}\in\ker(R)$, $R(e_{11}) = e_{22}$, $R(e_{21})=-e_{21}$, $R(e_{31})=-e_{31}+e_{11}+e_{22}$;

6-III) $e_{12},e_{13},e_{22},e_{23},e_{32},e_{33}\in\ker(R)$, $R(e_{11}) = e_{22}$, $R(e_{21})=-e_{21}-e_{23}$, $R(e_{31})=-e_{31}+e_{11}+e_{22}$;

C\) or is defined by Example 2 for $k=1$, i.e., $e_{12},e_{13}\in\ker(R')$, $e_{21},e_{31}\in\ker(R)$,

2-I) $R(e_{11}) = e_{22}+e_{33}$, $R(e_{22}) = e_{33}$; $e_{23},e_{33}\in \ker(R')$, $R(e_{32}) = e_{33}$;

2-II) $R(e_{11}) = e_{22}+e_{33}$, $R(e_{22}) = e_{33}$; $e_{32},e_{33}\in \ker(R')$, $R(e_{23}) = e_{33}$;

3-I) $e_{11},e_{23}\in \ker(R')$, $-R(e_{22}) = R(e_{33}) = e_{11}+e_{22}$; $R(e_{32}) = e_{11}+e_{22}$;

3-II) $e_{11},e_{32}\in \ker(R')$, $-R(e_{22}) = R(e_{33}) = e_{11}+e_{22}$; $R(e_{23}) = e_{11}+e_{22}$;

4-I) $R(e_{22}) = e_{11}+e_{33}$; $e_{33},e_{32}\in \ker(R')$, $e_{11}\in \ker(R)$, $R(e_{23}) = e_{33}$;

4-II) $R(e_{22}) = e_{11}+e_{33}$; $e_{33},e_{23}\in \ker(R')$, $e_{11}\in \ker(R)$, $R(e_{32}) = e_{33}$;

5-I) $R(e_{11}) = e_{22}+e_{33}$; $e_{22},e_{23},e_{32},e_{33}\in \ker(R)$;

5-II) $R(e_{11}) = e_{22}+e_{33}$; $e_{22},e_{23},e_{33}\in \ker(R)$, $R(e_{32}) = e_{33} - e_{32}$;

6-IV) $e_{11},e_{33},e_{32}\in\ker(R)$, $R(e_{22}) = e_{11}$, $R(e_{23})=-e_{23} + e_{11}+e_{22}$;

6-V) $e_{11},e_{33},e_{23}\in\ker(R)$, $R(e_{22}) = e_{11}$,
$R(e_{32})=-e_{32} + e_{11}+e_{22}$;

6-VI) $e_{22},e_{33},e_{23},e_{32}\in\ker(R)$, $R(e_{11}) = e_{22}$;

8-I) $e_{11}\in \ker(R')$, $e_{22},e_{23},e_{33}\in \ker(R)$, $R(e_{32}) = e_{11} + e_{22} - e_{32}$.

[Proof]{}. Let $R$ be an RB-operator of weight 1 on $M_3(F)$. By Theorem 2, we assume that $R$ is diagonal and we have one of Cases 1)–9) from Statement 7 for the action of $R$ on $D_3(F)$. To prove the theorem, we consider them case by case.

Let us show that all primitive non-splitting RB-operators on $M_3(F)$ are described in A). First, the RB-operators defined by Example 1 for Cases 8) and 9) are splitting. Further, let $R$ be an RB-operator on $M_3(F)$ defined by Example 1 whose action on $D_3(F)$ is conjugate with the automorphism $\Phi_{13}$ (restricted to $D_3(F)$) to Case $X$, $X\in\{1,2,3,4,5,6,7\}$. Up to transpose, $R$ is conjugate with the automorphism $\Phi_{13}$ of $M_3(F)$ to Case $X$. So, the action of $S_3 = \Aut(D_3(F))$ gives only three different cases $Xa$), $Xb$), and $Xc$) for each $X$. Finally, the subcase 5c) coincides with the subcase 5b).

Now we consider, one by one, the possible actions of $R$ on $D_3(F)$ from Statement 7.

[**Case 1)**]{}: $R(e_{11}) = e_{22} + e_{33}$, $R(e_{22}) = e_{33}$, $R(e_{33}) = 0$. In this case $R(1) = e_{22} + 2e_{33}$. Now we want to use Lemma 1 to specify the information about $R$. Let us illustrate the statement of Lemma 1 in this case in detail. We will need equation \eqref{[R(1),R(x)]}. Let $a\in M_3(F)$ and let $R(a) = (\alpha_{ij})$. Then $$[R(1),R(a)]=\left ( \begin{array}{ccc} 0 & -\alpha_{12} & -2\alpha_{13} \\ \alpha_{21} & 0 & -\alpha_{23} \\ 2\alpha_{31} & \alpha_{32} & 0 \end{array} \right).$$ Now we substitute the matrix units $e_{ij}$ for $a$ one by one. First take $a = e_{12}$.
Since $[R(1),e_{12}] = [e_{22}+2e_{33},e_{12}] = -e_{12}$, we obtain $$\left ( \begin{array}{ccc} 0 & -\alpha_{12} & -2\alpha_{13} \\ \alpha_{21} & 0 & -\alpha_{23} \\ 2\alpha_{31} & \alpha_{32} & 0 \end{array} \right)=-\left ( \begin{array}{ccc} \alpha_{11} & \alpha_{12} & \alpha_{13} \\ \alpha_{21} & \alpha_{22} & \alpha_{23} \\ \alpha_{31} & \alpha_{32} & \alpha_{33} \end{array} \right).$$ Comparing the coefficients, we deduce that $R(e_{12}) = \alpha_{12}e_{12} + \alpha_{23}e_{23}$. Similar arguments give us the following equalities: $R(e_{21}) = \alpha_{21}e_{21} + \alpha_{32}e_{32}$, $R(e_{23}) = \beta_{12}e_{12} + \beta_{23}e_{23}$, $R(e_{32}) = \beta_{21}e_{21} + \beta_{32}e_{32}$, $R(e_{13}) = \alpha_{13}e_{13}$, $R(e_{31}) = \beta_{31}e_{31}$. Consider $R(e_{13})$ and $R(e_{31})$. We have that $$0 = R(e_{13})R(e_{33}) = (\alpha_{13}+1)R(e_{13}).$$ Thus, if $R(e_{13})\neq0$, then $R(e_{13}) = -e_{13}$. Similarly, if $R(e_{31})\neq0$, then $R(e_{31}) = -e_{31}$. Since $\ker(R)$ and $\ker(R')$ are subalgebras in $M_3(F)$, we have that $\alpha_{13}\neq\beta_{31}$: if both $e_{13}$ and $e_{31}$ were in $\ker(R)$, then $e_{11}=e_{13}e_{31}\in\ker(R)$, contradicting $R(e_{11})\neq0$; the case of $\ker(R')$ is similar. Note that up to conjugation with transpose it is sufficient to consider only one of these situations. We assume that $$R(e_{13})=-e_{13},\quad R(e_{31})=0.$$ Consider the subspace $M_1 = \Span\{e_{12},e_{23}\}$. As we know, $M_1$ is $R$-invariant. Let $a\in M_1$ and $R(a)=\gamma_{12}e_{12}+\gamma_{23}e_{23}$. We have $$\gamma_{23}e_{23} = R(a)R(e_{22}) = R(\gamma_{12}e_{12}+a(e_{33}+e_{22})) = \gamma_{12}R(e_{12})+R(a).$$ Therefore, $\gamma_{12}(R(e_{12})+e_{12}) = 0$ for all $a\in M_1$, and either $R(a)\in \Span\{e_{23}\}$ for all $a\in M_1$ or $R(e_{12})=-e_{12}$. Similarly, if $M_2 = \Span\{e_{21},e_{32}\}$, then either $R(a)\in \Span\{e_{32}\}$ for all $a\in M_2$ or $R(e_{21})=-e_{21}$. If simultaneously $R(e_{12})=-e_{12}$ and $R(e_{21})=-e_{21}$, then, since $\ker(R')$ is a subalgebra in $M_3(F)$, $R(e_{11})=-e_{11}$, a contradiction.
Suppose that $\alpha_{12}=\alpha_{21}=0$. Then $$\alpha_{23}\alpha_{32}e_{22} = R(e_{12})R(e_{21}) = R(e_{11}) = e_{22}+e_{33},$$ a contradiction. We have only two possibilities left:

Case 1a) $R(e_{21})=-e_{21}$, $R(e_{12})=\alpha_{23}e_{23}$, and $R(e_{23})=\beta_{23}e_{23}$.

Case 1b) $R(e_{12})=-e_{12}$, $R(e_{21})=\alpha_{32}e_{32}$, and $R(e_{32})=\beta_{32}e_{32}$.

Consider case 1a). Since $e_{21}$ and $e_{13}$ are in $\ker(R')$, so is $e_{23}=e_{21}e_{13}$. From $$0=R(e_{12})R(e_{33})=R(\alpha_{23}e_{23})=-\alpha_{23}e_{23},$$ it follows that $R(e_{12})=0$. Finally, since $e_{31}$ and $e_{12}$ are in $\ker(R)$, we have $e_{32}=e_{31}e_{12}\in \ker(R)$. Summing up the obtained equalities, we have $e_{13},e_{21},e_{23}\in\ker(R')$ and $e_{12},e_{31},e_{32}\in \ker(R)$. It is a primitive RB-operator.

Consider case 1b). From $0 = R(e_{33})R(e_{32}) = (\beta_{32}+1)R(e_{32})$, it follows that either $R(e_{32})=0$ or $R(e_{32})=-e_{32}$. Suppose that $R(e_{32})=-e_{32}$. For $R(e_{23})=\beta_{12}e_{12}+\beta_{23}e_{23}$, we have $$-\beta_{23}e_{22} = R(e_{23})R(e_{32}) = R(\beta_{23}e_{22}) = \beta_{23}e_{33}.$$ Thus, $\beta_{23}=0$. Moreover, from $0 = R(e_{22})R(e_{23}) = R(e_{23})$ it follows that $R(e_{23}) = 0$. Since $\ker(R)$ is a subalgebra, $R(e_{21}) = 0$, and we obtain a primitive RB-operator.

It remains to consider the case $R(e_{32}) = 0$. From $$0 = R(e_{23})R(e_{32}) = R(\beta_{23}e_{22}+e_{22}) = (\beta_{23}+1)e_{33},$$ we obtain that $\beta_{23} = -1$. Further, from the equation $$-\alpha_{32}e_{33} = R(e_{21})R(e_{23}) = R(\alpha_{32}e_{33}+\beta_{12}e_{22}) = \beta_{12}e_{33},$$ we have $\alpha_{32} =-\beta_{12}$. So, the RB-operator $R$ satisfies $e_{31},e_{32}\in \ker(R)$, $e_{12},e_{13}\in\ker(R')$, $R(e_{21})= -a e_{32}$, $R(e_{23}) = ae_{12} - e_{23}$, where $a = \beta_{12}$. If $a = 0$, we get a primitive RB-operator.
For $a\neq0$, we apply the conjugation with $\Upsilon_a$, $$\label{Upsilon-a} \begin{gathered} \Upsilon_a(e_{ii}) = e_{ii},\ i=1,2,3,\quad \Upsilon_a(e_{12}) = e_{12},\quad \Upsilon_a(e_{21}) = e_{21},\\ \Upsilon_a(e_{13}) = a e_{13},\quad \Upsilon_a(e_{23}) = a e_{23},\quad \Upsilon_a(e_{31}) = (1/a) e_{31},\quad \Upsilon_a(e_{32}) = (1/a) e_{32}, \end{gathered}$$ and get the RB-operator 1-I).

[**Case 2)**]{}: $R(e_{11}) = e_{22} + e_{33}$, $R(e_{22}) = e_{33}$, $R(e_{33}) = -e_{33}$. By Lemma 1, the subalgebras $M_1 = \Span\{e_{11},e_{22},e_{23},e_{32},e_{33}\}$, $M_2 = \Span\{e_{12},e_{13}\}$, $M_3 = \Span\{e_{21},e_{31}\}$ are invariant under the action of $R$, and $R$ is splitting on each of $M_2$ and $M_3$. Now, we use that $\ker(R')$ is homogeneous. Since $e_{11}\not\in \ker(R)$ and $e_{11}\not\in \ker(R')$, we have that $$\label{dimR=dimR'=2} \dim (\ker(R)\cap M_2) + \dim (\ker(R)\cap M_3) = \dim (\ker(R')\cap M_2) + \dim (\ker(R')\cap M_3) = 2.$$ We have two variants (up to conjugation with transpose).

[**Main variant**]{}: $e_{12},e_{13}\in \ker(R')$ and $e_{21},e_{31}\in \ker(R)$. We will consider it later.

[**Second variant**]{}: $e_{21},e_{13}\in \ker(R')$ and $p = e_{12}+ye_{13},\ q = e_{31}+ae_{21}\in \ker(R)$. Since $\ker(R')$ is a subalgebra, $e_{23}\in\ker(R')$. Multiplying $p$ and $q$, we again get an element of $\ker(R)$, $$(e_{12}+ye_{13})(e_{31}+ae_{21}) = (a+y)e_{11},$$ so $y = -a$. If $a = 0$, then $e_{12},e_{31}\in \ker(R)$ as well as $e_{32}\in \ker (R)$. It is a primitive RB-operator. Suppose that $a\neq0$. Thus, $$\label{r-element} r = (e_{31}+ae_{21})(e_{12}-ae_{13}) = ae_{22}-a^2e_{23}+e_{32}-ae_{33}\in \ker(R)$$ and $R(e_{32}) = -a^2 e_{23} - 2a e_{33}$.
Then the RB-operator $P = \phi(\Upsilon_{1/a}^{-1}\Phi_{12}R\Phi_{12}\Upsilon_{1/a})$ is defined by Example 3 for $A_0 = D_3$, $A_- = U_3$ and $A_+ = \Span\{e_{21}-e_{23},e_{32}+e_{12},e_{11}-e_{33}+e_{31}-e_{13}\}$ with the action $P(e_{22}) = -e_{11} - e_{22} - e_{33}$, $P(e_{11}) = -e_{11} - e_{33}$, $P(e_{33}) = 0$. Define the automorphism $\varrho = \varrho(a,b,c)$ of $M_3(F)$ for nonzero $a,b\in F$ and any $c\in F$ as follows $$\label{2-III-Iso} \begin{gathered} \varrho(e_{11}) = e_{11}, \quad \varrho(e_{12}) = \frac{1}{a}(e_{12} - bc e_{13}), \quad \varrho(e_{13}) = b e_{13}, \\ \varrho(e_{21}) = a e_{21}, \quad \varrho(e_{22}) = e_{22} - bc e_{23}, \quad \varrho(e_{23}) = ab e_{23}, \quad \varrho(e_{31}) = c e_{21} + \frac{1}{b}e_{31},\\ \varrho(e_{32}) = \frac{1}{a}\left( ce_{22}-b c^2 e_{23}+\frac{1}{b}e_{32}-c e_{33}\right),\quad \varrho(e_{33}) = e_{33} + bc e_{23}. \end{gathered}$$ Conjugation with the automorphism $\Phi_{12}\varrho\Phi_{12}$, where $a = b = c = 1$, maps $P$ to a primitive RB-operator.

Let us return to the main variant. By Lemma 5, the projection of $\Imm(R)$ on $e_{11}$ is zero. So, the matrix algebra $N = \Span\{e_{22},e_{23},e_{32},e_{33}\}$ is $R$-invariant. Now, we apply Corollary 1. In the case (M3) we get only primitive RB-operators. In the other cases, we have $R(e_{11}) = e_{22} + e_{33}$ and

2a) $e_{12},e_{13}\in \ker(R')$, $e_{21},e_{31},e_{23},e_{22},e_{33}\in \ker(R)$, $R(e_{32}) = e_{33} - e_{32}$ (by (M4${}^\prime$)),

2b) $e_{12},e_{13}\in \ker(R')$, $e_{21},e_{31},e_{32},e_{22},e_{33}\in \ker(R)$, $R(e_{23}) = e_{22} - e_{23}$ (by (M4${}^T$)),

2c) $e_{12},e_{13}\in \ker(R')$, $e_{21},e_{31},e_{22},e_{33}\in \ker(R)$, $R(e_{23}) = e_{22}- e_{23}$, $R(e_{32}) = e_{33}- e_{32}$ (by (M5)),

2d) $e_{12},e_{13}\in \ker(R')$, $e_{21},e_{31},e_{22},e_{33}\in \ker(R)$, $R(e_{23}) = e_{33}- e_{23}$, $R(e_{32}) = e_{22}- e_{32}$ (by (M5${}^T$)).

The following lemma shows that this reduction is legitimate.

[**Lemma 6**]{}.
Let $R_1$ and $R_2$ be RB-operators on $M_3(F)$ of weight 1 such that $e_{12},e_{13}\in \ker(R_1'),\ker(R_2')$, $e_{21},e_{31}\in \ker(R_1),\ker(R_2)$, and $M = \Span\{e_{11}\}\oplus N$ is $R_1$- and $R_2$-invariant, where $N = \Span\{e_{22},e_{23},e_{32},e_{33}\}$. Suppose that the projections $R_1|_N$ and $R_2|_N$ of $R_1,R_2$ from $M$ on $N$ are conjugate with some automorphism of $N$. Then there exists $\psi\in\Aut(M_3(F))$ such that the map $Q = R_2 - \psi^{-1}R_1\psi$ is zero on $M_2\oplus M_3$ and $\Imm(Q) \subset \Span\{e_{11}\}$.

[Proof]{}. Suppose that $R_1|_N$ and $R_2|_N$ as RB-operators on $N$ are conjugate with an automorphism $\xi$ of $N$, i.e., $R_2|_N = \xi^{-1}R_1|_N\xi$. Since each automorphism of $M_2(F)$ is inner, we have $\xi(A) = T^{-1}AT$, $A\in N$, for some nondegenerate matrix $T\in N$. Let us extend $\xi$ from $N$ to the entire algebra $M_3(F)$ as follows, $$\xi(X) = \xi\left(\begin{pmatrix} s & u \\ v & A \end{pmatrix}\right) = \begin{pmatrix} s & uT \\ T^{-1}v & T^{-1}AT \end{pmatrix},$$ where $s = x_{11}$, $u = (x_{12},x_{13})$, $v = (x_{21},x_{31})^T$. It is an inner automorphism of $M_3(F)$ defined by the matrix $\begin{pmatrix} 1 & 0 \\ 0 & T \end{pmatrix}$. It is easy to check that $Q = R_2 - \xi^{-1}R_1\xi$ can be nonzero only on $N$ and that $\Imm(Q) \subset \Span\{e_{11}\}$. The lemma is proved. $\square$

The RB-operator 2a) coincides with the RB-operator 5-II). The RB-operator 2b) is conjugate to the RB-operator 5-II) with the help of $\Phi_{23}$. The RB-operator 2c) is conjugate to the RB-operator 2-I) with the help of $\Phi_{23}\varrho\Phi_{23}$, where $\varrho$ is defined by \eqref{2-III-Iso} with $a = c = 1/b$. Analogously, the RB-operator 2d) is conjugate to the RB-operator 2-II).

[**Case 3)**]{}: $R(e_{11})= e_{22} + e_{33}$, $R(e_{22}) = -(e_{22}+e_{33})$, $R(e_{33})=-e_{33}$. By Lemma 1, the subalgebras $M_1 = \Span\{e_{11},e_{12},e_{21},e_{22},e_{33}\}$, $M_2 = \Span\{e_{13},e_{23}\}$, $M_3 = \Span\{e_{31},e_{32}\}$ are invariant under the action of $R$.
Let $N = \Span\{e_{11},e_{12},e_{21},e_{22}\}$. Analogously to Case 2), we have two variants (up to conjugation with transpose).

[**Second variant**]{}: $e_{13},e_{32},e_{12}\in \ker(R')$ and $p = e_{23}+ae_{13},\ q = e_{31}-ae_{32}\in \ker(R)$. In both cases $a = 0$ and $a\neq0$ we get only primitive RB-operators; the proof is analogous to the one from Case 2).

[**Main variant**]{}: $e_{13},e_{23}\in \ker(R')$ and $e_{31},e_{32}\in \ker(R)$. Applying Statement 6, Corollary 1, and Lemma 6, we get the following RB-operators defined by (M4${}^\prime$), (M4${}^T$), (M5), and (M5${}^T$) respectively (in (M3) we have only primitive ones):

3a) $R(e_{11}) = \alpha e_{33}$, $R(e_{12}) = 0$, $R(e_{22}) = -\alpha e_{33}$, $R(e_{21}) = e_{22} - e_{21} + \beta e_{33}$;

3b) $R(e_{11}) = \alpha e_{33}$, $R(e_{21}) = 0$, $R(e_{22}) = -\alpha e_{33}$, $R(e_{12}) = e_{11} - e_{12} + \beta e_{33}$;

3c) $R(e_{11}) = \alpha e_{33}$, $R(e_{12}) = e_{11}-e_{12}+\gamma e_{33}$, $R(e_{22}) = -\alpha e_{33}$, $R(e_{21}) = e_{22} {-} e_{21} {+} \beta e_{33}$;

3d) $R(e_{11}) = \alpha e_{33}$, $R(e_{12}) = e_{22}-e_{12}+\gamma e_{33}$, $R(e_{22}) = -\alpha e_{33}$, $R(e_{21}) = e_{11} {-} e_{21} {+} \beta e_{33}$.

In all cases we have used that $R(e_{11}+e_{22}) = 0$ and the following easy fact.

[**Lemma 7**]{}. Let $R$ be an RB-operator of weight 1 on $M_3(F)$ with $R(e_{12}) = \alpha e_{12}+\beta e_{33}$; then $\beta = 0$.

[Proof]{}. We are done by the equality $\beta^2 e_{33} = R(e_{12})R(e_{12}) = 0$. $\square$

Note that in all cases 3a)–3d) the equality $-\alpha^2e_{33} = R(e_{11})R(e_{22}) = 0$ implies $\alpha = 0$. Let us consider the subcase 3a). From $$\label{3a,c} \beta^2 = R(e_{21})R(e_{21})|_{e_{33}} = R(e_{21})|_{e_{33}} = \beta,$$ where $X|_{e_{33}}$ denotes the coefficient of $e_{33}$ in $X$, we have either $\beta = 0$ or $\beta = 1$. If $\beta = 0$, then $R$ is splitting. If $\beta = 1$, then $R$ is conjugate to the RB-operator 8-I) with the help of $\Phi_{13}\circ T$.
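The operator 8-I) just obtained can be tested independently. The sketch below is an editorial sanity check (not part of the original proof): it writes 8-I) out on all nine matrix units, including the kernel conditions of part C of Theorem 3, and verifies the weight-1 RB identity; since both sides of the identity are bilinear, checking all pairs of matrix units suffices.

```python
import numpy as np

def e(i, j):
    """Matrix unit e_ij of M_3 (1-based indices)."""
    m = np.zeros((3, 3), dtype=int)
    m[i - 1, j - 1] = 1
    return m

Z = np.zeros((3, 3), dtype=int)

# The operator 8-I) on all nine matrix units:
# e11, e12, e13 in ker(R'), e21, e22, e23, e31, e33 in ker(R),
# and R(e32) = e11 + e22 - e32.
IMAGES = {
    (1, 1): -e(1, 1), (1, 2): -e(1, 2), (1, 3): -e(1, 3),
    (2, 1): Z,        (2, 2): Z,        (2, 3): Z,
    (3, 1): Z,        (3, 2): e(1, 1) + e(2, 2) - e(3, 2), (3, 3): Z,
}

def R(x):
    # extend the values on matrix units by linearity
    return sum(x[i - 1, j - 1] * IMAGES[(i, j)] for i in (1, 2, 3) for j in (1, 2, 3))

units = [e(i, j) for i in (1, 2, 3) for j in (1, 2, 3)]
for x in units:
    for y in units:
        assert (R(x) @ R(y) == R(R(x) @ y + x @ R(y) + x @ y)).all()
print("8-I) satisfies the weight-1 RB identity on all pairs of matrix units")
```

Note that $R(E) = -e_{11}$, i.e. the diagonal action is that of Case 8) of Statement 7, as expected.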
In the subcase 3b), we have an RB-operator which is conjugate to the RB-operator from 3a) with the help of $\Phi_{12}$. Consider the subcase 3c). By \eqref{3a,c}, we have $\beta\in\{0,1\}$. Further, $$\begin{gathered} \gamma = R(e_{12}-e_{11})|_{e_{33}} = R(e_{12})R(e_{21})|_{e_{33}} = \beta\gamma = R(e_{21})R(e_{12})|_{e_{33}} = R(e_{21}-e_{22})|_{e_{33}} = \beta.\end{gathered}$$ If $\beta = \gamma = 0$, then $R$ is splitting. If $\beta = \gamma = 1$, then $R$ is conjugate to the RB-operator 3-II) with the help of $\Phi_{13}\circ T\circ \varrho$, where $\varrho$ is defined by \eqref{2-III-Iso} with $a=c=1/b$. Analogously, in the subcase 3d) we get either a splitting RB-operator or an RB-operator which is conjugate to the RB-operator 3-I) with the help of $\Phi_{13}\circ T\circ \varrho\circ\Phi_{13}$, where $\varrho$ is taken with $a=c=-1/b$.

[**Case 4)**]{}: $R(e_{11})=e_{22}+e_{33}$, $R(e_{22})=-e_{22}$, $R(e_{33})=0$. As in Case 3), the subalgebras $M_1 = \Span\{e_{11},e_{12},e_{21},e_{22},e_{33}\}$, $M_2 = \Span\{e_{13},e_{23}\}$, $M_3 = \Span\{e_{31},e_{32}\}$ are invariant under the action of $R$. Analogously to Case 2), we have two variants (up to conjugation with transpose).

[**Second variant**]{}: $e_{13},e_{32},e_{12}\in \ker(R')$ and $p = e_{23}+ae_{13},\ q = e_{31}-ae_{32}\in \ker(R)$. As above, we get only primitive RB-operators in both cases $a = 0$ and $a\neq0$.

[**Main variant**]{}: $e_{13},e_{23}\in \ker(R')$ and $e_{31},e_{32}\in \ker(R)$. Let us show that $$\label{Case4ImR} \Imm(R) = \ker (R')\oplus \Span\{e_{33}\}$$ (as vector spaces). Denote $A = \Imm(R)$. By Lemma 2 applied to $R'$, we have $e_{11}A,Ae_{11},e_{22}A,Ae_{22}\subset \ker (R')$. If $a = \sum\limits_{i,j=1}^3\alpha_{ij}e_{ij}\in A$, then $\alpha_{ij}e_{ij}\in \ker(R')$ for $i,j\in\{1,2\}$. Subtracting them, we get that $\alpha_{13}e_{13}$, $\alpha_{31}e_{31}$, $\alpha_{23}e_{23}$, $\alpha_{32}e_{32}\in \ker(R')$. Thus, $A = \ker (R')\oplus \Span\{e_{33}\}$, since otherwise $R$ is splitting. Denote $N = \Span\{e_{11},e_{12},e_{21},e_{22}\}$.
As in Case 3), since $\ker(R')$ is homogeneous, we have the following cases:

a\) $\ker(R')\cap N$ is one-dimensional with basis $e_{22}$;

b\) $\ker(R')\cap N$ is two-dimensional with basis $e_{12},e_{22}$ or $e_{21},e_{22}$.

In the subcase a), we have $R(e_{12}) = \alpha e_{22} + \beta e_{33}$ and $R(e_{21}) = \gamma e_{22} + \delta e_{33}$. From $$\alpha\gamma e_{22} + \beta\delta e_{33} = R(e_{21})R(e_{12}) = R(e_{22}) = -e_{22},$$ we deduce that $\alpha\gamma = -1$ and $\beta\delta = 0$. From the projection of the equality $R(e_{12})R(e_{21}) = R(\gamma e_{12} + \alpha e_{21} + e_{11})$ on the $e_{33}$-coordinate, we get $1 + \alpha\delta + \beta\gamma = \beta\delta$. Thus, we obtain two RB-operators:

4a1) $e_{13},e_{23}\in \ker(R')$, $e_{31},e_{32}\in \ker(R)$, $R(e_{12}) = a e_{22}$, $R(e_{21}) = (-1/a)(e_{22} + e_{33})$;

4a2) $e_{13},e_{23}\in \ker(R')$, $e_{31},e_{32}\in \ker(R)$, $R(e_{21}) = a e_{22}$, $R(e_{12}) = (-1/a)(e_{22} + e_{33})$.

Conjugation with $\Phi_{13}\circ T\circ \Upsilon_{1/a}$ maps the RB-operator 4a1) to the RB-operator $P$ with $e_{12},e_{13},e_{22}\in\ker(P')$, $e_{11},e_{21},e_{31}\in\ker(P)$, $P(e_{23}) = e_{22}$, $P(e_{32}) = -e_{11} - e_{22}$, $P(e_{33}) = e_{11} + e_{22}$. Finally, $\psi^{-1}P\psi$ is the RB-operator 6-V). Here $\psi = \Phi_{23}\varrho\Phi_{23}$, where $\varrho$ is defined by \eqref{2-III-Iso} with the parameters $a = c = -1/b$.

Conjugation with $\Phi_{13}\circ T\circ \Upsilon_a$ maps the RB-operator 4a2) to the RB-operator $P$ with $e_{12},e_{13},e_{22}\in\ker(P')$, $e_{11},e_{21},e_{31}\in\ker(P)$, $P(e_{32}) = e_{22}$, $P(e_{23}) = -e_{11} - e_{22}$, $P(e_{33}) = e_{11} + e_{22}$. Conjugation with $\varrho$ defined by \eqref{2-III-Iso} with $a = -c = -1/b$ maps $P$ to the RB-operator 6-IV).

Consider the subcase b).
By Statement 6 and Corollary 1, we have the following subcases, arising from the cases (M5), (M5${}^T$), (M6), and (M6${}^T$), respectively: 4b1) $R(e_{11}) = \alpha e_{33}$, $R(e_{22}) = (1-\alpha)e_{33}$, $R(e_{12}) = e_{11}-e_{12}+\gamma e_{33}$, $R(e_{21}) = e_{22}-e_{21}+\delta e_{33}$; 4b2) $R(e_{11}) = \alpha e_{33}$, $R(e_{22}) = (1-\alpha)e_{33}$, $R(e_{21}) = e_{11}-e_{21}+\gamma e_{33}$, $R(e_{12}) = e_{22}-e_{12}+\delta e_{33}$; 4b3) $R(e_{11}) = -e_{11}+\alpha e_{33}$, $R(e_{22}) = e_{11}+(1-\alpha)e_{33}$, $R(e_{12}) = -e_{12}+\gamma e_{33}$, $R(e_{21}) = \delta e_{33}$; 4b4) $R(e_{11}) = -e_{11}+\alpha e_{33}$, $R(e_{22}) = e_{11}+(1-\alpha)e_{33}$, $R(e_{21}) = -e_{21}+\gamma e_{33}$, $R(e_{12}) = \delta e_{33}$. We have used here that $R(e_{11}+e_{22}) = e_{33}$ for all variants. Let us consider the case 4b1). From $$\begin{gathered} \gamma e_{33} = R(e_{11}+e_{22})R(e_{12}) = R(e_{12}-e_{12}+e_{11}) = \alpha e_{33}, \nonumber \\ \delta e_{33} = R(e_{11}+e_{22})R(e_{21}) = R(e_{21}-e_{21}+e_{22}) = (1-\alpha)e_{33},\nonumber \\ \alpha(1-\alpha)e_{33} = R(e_{11})R(e_{22}) = 0, \label{4b:Rf1Rf2}\end{gathered}$$ we get the RB-operators $R_1$ (when $\alpha=0$) and $R_2$ (when $\alpha=1$) satisfying $$\begin{gathered} R_1(e_{11}) = 0,\quad R_1(e_{22}) = e_{33},\quad R_1(e_{12}) = e_{11}-e_{12},\quad R_1(e_{21}) = e_{22}-e_{21}+e_{33},\\ R_2(e_{11}) = e_{33},\quad R_2(e_{22}) = 0,\quad R_2(e_{12}) = e_{11}-e_{12}+e_{33},\quad R_2(e_{21}) = e_{22}-e_{21}.\end{gathered}$$ They are conjugate with the help of $\Phi_{12}$. The conjugation of $R_1$ with $\Phi_{13}\circ T\circ \varrho$, where $\varrho$ is defined by  with $a = -c = 1/b$, gives the RB-operator 4-I). In the case 4b2) we analogously get the RB-operator 4-II). Consider the case 4b3). By $$\gamma e_{33} = R(e_{11}+e_{22})R(e_{12}) = R(e_{12}-e_{12}) = 0, \quad \delta^2 e_{33} = R(e_{21})R(e_{21}) = 0,$$ we get only primitive RB-operators. We deal analogously with the subcase 4b4).
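Identities such as those above can be verified mechanically: the weight-one Rota-Baxter identity $R(x)R(y) = R(R(x)y + xR(y) + xy)$ is bilinear in $(x,y)$, so it suffices to check it on pairs of matrix units. A short Python/numpy sketch (the helper names are ours, not from the text) that does this for the splitting RB-operator attached to the decomposition of $M_3(F)$ into the upper-triangular and the strictly lower-triangular subalgebras:

```python
import numpy as np

def unit(i, j, n=3):
    """Matrix unit e_{ij} (1-based indices)."""
    m = np.zeros((n, n))
    m[i - 1, j - 1] = 1.0
    return m

def is_rb(R, n=3, tol=1e-9):
    """Check the weight-one Rota-Baxter identity on all pairs of matrix units."""
    units = [unit(i, j, n) for i in range(1, n + 1) for j in range(1, n + 1)]
    return all(np.allclose(R(x) @ R(y), R(R(x) @ y + x @ R(y) + x @ y), atol=tol)
               for x in units for y in units)

def splitting(x):
    """Splitting RB-operator for M_3(F) = (upper triangular) + (strictly lower
    triangular): zero on the first summand and -id on the second."""
    return -np.tril(x, k=-1)

print(is_rb(splitting))        # the splitting operator satisfies the identity
print(is_rb(lambda x: x))      # the identity map is not an RB-operator of weight one
```

Bilinearity is what makes the basis check sufficient: both sides of the identity are linear in each argument separately.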
$R(e_{11}) = e_{22} + e_{33}$, $R(e_{22}) = R(e_{33}) = 0$. We follow the strategy from Case 2). Since $e_{11}\not\in \ker(R)$ and $e_{11}\not\in\ker(R')$, we get . We have $\ker(R)$-homogeneity. So, we have two variants (up to conjugation with transpose). [**Second variant**]{}. $e_{21},e_{13}\in\ker(R)$ and so, $e_{23}\in \ker(R)$. Also, $p = e_{12}-ae_{13}, q = e_{31}+ae_{21}\in \ker(R')$. As above, we get only primitive RB-operators in both cases $a = 0$ and $a\neq0$. [**Main variant**]{}: $e_{12},e_{13}\in \ker(R')$ and $e_{21},e_{31}\in \ker(R)$. By Lemma 5, the projection of $\Imm(R)$ on $e_{11}$ is zero. So, the matrix algebra $N = \Span\{e_{22},e_{23},e_{32},e_{33}\}$ is $R$-invariant. If $e_{23},e_{32}\in\ker(R)$, then we get the RB-operator 5-I). Otherwise, we have three subcases: 5a) $e_{23}\in\ker(R)$; 5b) $e_{32}\in\ker(R)$; 5c) $e_{23},e_{32}\not \in \ker(R)$. In 5a–5c), we apply Corollary 1 (jointly with Lemma 6) to get the following RB-operators (in (M3) we get only primitive ones): 5-II. $e_{12},e_{13}\in \ker(R')$, $e_{21},e_{31},e_{23}\in \ker(R)$, $R(e_{32}) = e_{33} - e_{32}$ (by (M4${}^\prime$)), 5-III. $e_{12},e_{13}\in \ker(R')$, $e_{21},e_{31},e_{32}\in \ker(R)$, $R(e_{23}) = e_{22} - e_{23}$ (by (M4${}^T$)), 5-IV. $e_{12},e_{13}\in \ker(R')$, $e_{21},e_{31}\in \ker(R)$, $R(e_{23}) = e_{22}- e_{23}$, $R(e_{32}) = e_{33}- e_{32}$ ((M5)), 5-V. $e_{12},e_{13}\in \ker(R')$, $e_{21},e_{31}\in \ker(R)$, $R(e_{23}) = e_{33}- e_{23}$, $R(e_{32}) = e_{22}- e_{32}$ ((M5${}^T$)) respectively. Conjugation of the RB-operator 5-IV) with the automorphism $\Phi_{23}\varrho\Phi_{23}$ gives the RB-operator from the case 2-I). Here $\varrho$ is defined by  with $a = c = 1/b$. Also, conjugation of the RB-operator 2-II) with the same $\varrho$ gives the RB-operator 5-V). The RB-operator 5-III) is conjugate to the RB-operator 5-II) with the help of $\Phi_{23}$. $R(e_{11}) = e_{22}$, $R(e_{22}) = R(e_{33}) = 0$.
In this case $R(1)=e_{22}$ and by Lemma 1 we obtain the following equations: $$\begin{gathered} R(e_{12})=\alpha_{12}e_{12}+\alpha_{32}e_{32},\quad R(e_{21})=\alpha_{21}e_{21}+\alpha_{23}e_{23},\\ R(e_{23})=\beta_{21}e_{21}+\beta_{23}e_{23},\quad R(e_{32})=\beta_{12}e_{12}+\beta_{32}e_{32},\\ R(e_{13})=\alpha_{11}e_{11}+\alpha_{13}e_{13}+\alpha_{22}e_{22}+\alpha_{31}e_{31}+\alpha_{33}e_{33},\\ R(e_{31})=\beta_{11}e_{11}+\beta_{13}e_{13}+\beta_{22}e_{22}+\beta_{31}e_{31}+\beta_{33}e_{33}.\end{gathered}$$ From $$\alpha_{22}e_{22}=R(e_{11})R(e_{13})=R(\alpha_{11} e_{11}+(\alpha_{13}+1)e_{13}),$$ we conclude that $$\label{us1} \text{if}\ \alpha_{13}=-1,\ \text{then}\ \alpha_{11}=\alpha_{22}.$$ Similarly, $$\label{us2} \text{if}\ \beta_{31}=-1,\ \text{then}\ \beta_{11}=\beta_{22}.$$ Consider the subalgebra $M_1 = \Span\{e_{11},e_{22},e_{33},e_{13},e_{31}\}$. We have proved that $M_1$ is $R$-invariant. It is easy to see that $M_1 = \Span\{e_{22}\}\oplus N$ (as vector spaces) for $N = \Span\{e_{11},e_{33},e_{13},e_{31}\}\cong M_2(F)$. Let $M_2 = \Span\{e_{21},e_{23}\}$ and $M_3 = \Span\{e_{12},e_{32}\}$. Then $M_2$ and $M_3$ are $R$-invariant subspaces in $M_3(F)$. From (7) it follows that the restrictions $R_i$, $i=2,3$, of $R$ to $M_i$ satisfy $R_i^2 + R_i=0$. By Lemma 2, $e_{33}(R(e_{32})+e_{32})\in \ker(R)$. Thus, $(\beta_{32}+1)R(e_{32})=0$ and $$\label{us3} \text{if}\ R(e_{32})\neq 0,\ \text{then}\ \beta_{32}=-1.$$ Similarly, $$\label{us4} R(e_{23})=0\ \text{or}\ \beta_{23}=-1.$$ Also, from $$0 = R(e_{21})R(e_{11})=(\alpha_{21}+1)R(e_{21}),\quad 0 = R(e_{11})R(e_{12})=(\alpha_{12}+1)R(e_{12})$$ we obtain that $$\label{us5} R(e_{21})=0\ \text{or}\ \alpha_{21}=-1,\quad R(e_{12})=0\ \text{or}\ \alpha_{12}=-1.$$ Let $Pr_N$ be the projection from $M_1$ onto $N$ with $\ker(Pr_N) = \Span\{e_{22}\}$. By Statement 6, the composition $\bar{R} = Pr_N\circ R$ is a Rota-Baxter operator on $N$. Note that $\bar{R}(e_{11})=\bar{R}(e_{33})=0$.
Now we apply Corollary 1 and obtain that $\bar{R}$ may be conjugate to (M3), (M4) or (M5). As in Lemma 6, we can find an RB-operator $Q$ satisfying the conditions: (i) $Q$ is conjugate to $R$; (ii) $M_1$, $M_2$, and $M_3$ are $Q$-invariant; (iii) the projection $\bar{Q}$ is equal to (M3), (M4) or (M5); (iv) $Q(e_{22})=0$. Note that in all cases (M3), (M4), and (M5), $Pr_N(e_{11})=Pr_N(e_{33})=0$. Therefore, $Q(e_{11})=\delta_1 e_{22}$ and $Q(e_{33})=\delta_2 e_{22}$. Let us show that we may assume that (v) $Q(e_{11}) = e_{22}$ and $Q(e_{33}) = 0$. From $$\delta_1\delta_2 e_{22} = Q(e_{11})Q(e_{33}) = 0,$$ it follows that $\delta_1=0$ or $\delta_2=0$. Since $R$ and $Q$ are conjugate by an automorphism or an antiautomorphism of $M_3(F)$, $\tr(Q(1)) = \tr(R(1)) = 1$. Therefore, $\delta_1=1$ and $\delta_2=0$ or vice versa. It is straightforward to check that in Cases (M3) and (M5), up to conjugation with $T\circ\Phi_{13}$ and $\Phi_{13}$ respectively, we can assume that $\delta_1=1$, $\delta_2=0$, i.e., the condition (v) is fulfilled. Suppose that the restriction $\bar{Q}$ corresponds to (M4) and $\delta_1=0$, $\delta_2=1$. Then we have $$\begin{gathered} Q(e_{11})=0,\quad Q(e_{22})=0, \quad Q(e_{33})=e_{22},\quad Q(e_{13})=0,\quad Q(e_{31})=e_{11}-e_{31}+\gamma e_{22}\end{gathered}$$ for some $\gamma\in F$. Consider the RB-operator $Q_1=(T\circ\Phi_{13})^{-1}Q(T\circ\Phi_{13})$. We have $$\begin{gathered} Q_1(e_{11})=e_{22},\quad Q_1(e_{22})=0, \quad Q_1(e_{33})=0,\quad Q_1(e_{13})=0,\quad Q_1(e_{31})=e_{33}-e_{31}+\gamma e_{22}.\end{gathered}$$ For $Q_1$, we may apply and obtain $\gamma=0$.
Define an automorphism $\psi$ of $M_3(F)$ as $$\begin{gathered} \psi(e_{11})=e_{11}-e_{13},\ \psi(e_{22})=e_{22},\ \psi(e_{33})=e_{33}+e_{13},\ \psi(e_{12})=e_{12},\ \psi(e_{21})=e_{21}-e_{23},\\ \psi(e_{13})=e_{13},\quad \psi(e_{31})=e_{11}-e_{33}+e_{31}-e_{13},\quad \psi(e_{23})=e_{23},\quad \psi(e_{32})=e_{32}+e_{12}.\end{gathered}$$ It is easy to check that the RB-operator $Q_2=\psi^{-1}Q_1\psi$ satisfies all conditions (i)–(v). Consider the following subcases: *Subcase 6a*: $R(e_{21})=0$; *Subcase 6b*: $R(e_{21})\neq 0$ and $R(e_{23})\neq 0$; *Subcase 6c*: $R(e_{21})\neq 0$ and $R(e_{23})= 0$. *Subcase 6a*) Suppose that $R(e_{21})=0$. Since $e_{11},e_{31}\notin \ker(R)$, then $e_{12},e_{32}\notin \ker(R)$. By Lemma 3, $\ker(R)$ is homogeneous, therefore the restriction $R_3$ on $M_3$ is non-degenerate. From $R_3^2+R_3=0$ we derive $R_3=-\id$ in this case. Thus, $R(e_{12})=-e_{12}$ and $R(e_{32})=-e_{32}$. Finally, for any $\beta\in F$ we have $$(-\beta_{21}-\beta \beta_{23})e_{22} = R(e_{23})R(e_{12}+\beta e_{32}) = R((\beta_{21}+\beta \beta_{23})e_{22})=0.$$ Therefore, $\beta_{21}=\beta_{23}=0$ and $R(e_{23})=0$. Suppose that the restriction $\bar{R}$ is equal to (M3) from Corollary 1. In this case, $$R(e_{13})=\gamma_1 e_{22},\ \ R(e_{31})=-e_{31}+\gamma_2 e_{22}.$$ By Lemma 7, $\gamma_1=\gamma_2=0$ and $R$ is equal to the RB-operator 6b). Suppose that the restriction $\bar{R}$ is equal to (M5). Then we have: $$\begin{gathered} R(e_{11})=e_{22},\quad R(e_{22})=0, \quad R(e_{33})=0,\quad R(e_{12})=-e_{12},\quad R(e_{21})=0,\\ R(e_{13})=e_{11}+\gamma_1e_{22}-e_{13},\\ R(e_{31})=-e_{31}+e_{33}+\gamma_2 e_{22},\quad R(e_{23})=0,\quad R(e_{32})=-e_{32}.\end{gathered}$$ We can now use and to get $\gamma_1=1$ and $\gamma_2=0$.
Define an automorphism $\xi$ of $M_3(F)$ as $$\begin{gathered} \xi(e_{11})=e_{22},\ \xi(e_{22})=e_{11}+e_{13},\ \xi(e_{33})=e_{33}-e_{13},\ \xi(e_{12})=e_{21}+e_{23},\ \xi(e_{21})=e_{12},\\ \xi(e_{13})=e_{23},\quad \xi(e_{31})=e_{32}-e_{12},\quad \xi(e_{23})=e_{13},\quad \xi(e_{32})=e_{33}-e_{13}-e_{11}+e_{31}.\end{gathered}$$ The conjugation of $R$ with $T\circ\xi$ gives us the RB-operator 4-I. If the restriction $\bar{R}$ corresponds to (M4), then similar reasons give us that $$\begin{gathered} R(e_{11})=e_{22},\ R(e_{22})=R(e_{33})=0,\ R(e_{12})=-e_{12},\\ R(e_{21})=0,\ R(e_{13})=0,\ R(e_{31})=e_{11}+e_{22}-e_{31},\ R(e_{23})=0,\ R(e_{32})=-e_{32}.\end{gathered}$$ This operator is conjugate to the RB-operator 6-IV with the help of $\Phi_{12}\circ T$. *Subcase 6b*) Now suppose that $R(e_{21})\neq 0$ and $R(e_{23})\neq 0$. Then by Lemma 3 the restriction $R_2$ is non-degenerate. Therefore, $R(e_{21})=-e_{21}$ and $R(e_{23})=-e_{23}$. If $R(e_{12})\neq 0$, then by , $R(e_{12})=-e_{12}+\alpha_{32}e_{32}$. But from $$\label{6b:e21-e_12} e_{22} = R(e_{21})R(e_{12}) = R(-e_{22}) = 0$$ we get a contradiction. Thus, $R(e_{12})=0$. For the same reasons, the inequality $R(e_{32})\neq 0$ contradicts $$e_{22} = R(e_{23})R(e_{32}) = R(-e_{22}) = 0.$$ As in Subcase 6a, we consider the restriction of $R$ on $N$ and obtain RB-operators corresponding to the cases (M3)–(M5). Using arguments similar to those in 6a, we obtain the following: if the restriction $\bar{R}$ corresponds to (M3), we get the operator 6c); if the restriction corresponds to (M5), then $R$ lies in the orbit of 4-II; and if the restriction corresponds to (M4), then $R$ is conjugate to the RB-operator 6-V. *Subcase 6c*) The only remaining question is what happens if $R(e_{21})\neq 0$ and $R(e_{23}) = 0$. Suppose that $\bar{R}$ corresponds to (M3). Then, by Lemma 7, $R(e_{31})=-e_{31}$ and $R(e_{13})=0$. Consider $R(e_{21})$. By , $R(e_{21})=-e_{21}+\alpha_{23}e_{23}$.
From $$-\alpha_{23}e_{21}=R(e_{21})R(e_{31})=R(\alpha_{23}e_{21})$$ we obtain that $\alpha_{23}=0$ and $R(e_{21}) = -e_{21}$. We will consider two subcases: $R(e_{32})=0$ and $R(e_{32})\neq 0$. If $R(e_{32})=0$, then, since $\ker(R)$ is a subalgebra in $M_3(F)$, $R(e_{12})=0$. Conjugation with $T$ gives us the RB-operator 6-VI. If $R(e_{32})\neq 0$, then from we have $R(e_{32})=-e_{32}+\beta_{12}e_{12}$. From $$-\beta_{12}e_{32}=R(e_{31})R(e_{32})=R(\beta_{12}e_{32})$$ we deduce that $R(e_{32})=-e_{32}$. By , we get $R(e_{12}) = 0$ and therefore $R$ is a primitive RB-operator. Suppose that the restriction $\bar{R}$ corresponds to (M5). Then $R$ satisfies $R(e_{13})=-e_{13}+ e_{11}+\beta e_{22}.$ From we deduce that $R(e_{13})=-e_{13}+e_{11}+ e_{22}$. The same reasons give us $R(e_{31})=-e_{31}+e_{33}$. Recall that $R(e_{21})\neq 0$ implies $R(e_{21})=-e_{21}+\alpha_{23}e_{23}$ by . From $$- e_{21}+e_{23}=R(e_{21})R(e_{13})=R(-e_{23}+ e_{21} -e_{23}+e_{23})= R(e_{21}),$$ we obtain that $R(e_{21})=-e_{21}+e_{23}$. Since $R(e_{23})=0$ and $R(e_{13})\neq 0$, we have $R(e_{12})\neq 0$. Thus, by , $R(e_{12})=-e_{12}+\alpha_{32}e_{32}$. From $$(1+\alpha_{32})e_{22} = R(e_{21})R(e_{12})=R(-e_{22}-e_{22}+e_{22}) = 0,$$ we deduce that $\alpha_{32}=-1$ and $R(e_{12})=-e_{12}- e_{32}$. Finally, since $(R^2+R)(e_{12})=0$, we obtain that $R(e_{32})=0$. Thus, we obtain the RB-operator 6-I. Now suppose that the restriction $\bar{R}$ corresponds to (M4). We have $R(e_{13})=0$, $R(e_{31})=-e_{31}+e_{11}+e_{22}$, $R(e_{21})=-e_{21}+\alpha_{23}e_{23}$, and $R(e_{23})=0$. From $$\begin{gathered} - e_{21}-\alpha_{23}e_{21} = R(e_{21})R(e_{31}) \\ = R(\alpha_{23}e_{21}+ e_{21}) = (\alpha_{23}+1)R(e_{21})=(\alpha_{23}+1)(-e_{21}+\alpha_{23}e_{23}),\end{gathered}$$ we obtain that $\alpha_{23}(\alpha_{23}+1) = 0$. Therefore, $\alpha_{23} = -1$ or $\alpha_{23} = 0$. Consider $e_{12}$. If $R(e_{12})\neq 0$, then $R(e_{12})=-e_{12}+\alpha_{32}e_{32}$ by .
From $$\begin{gathered} e_{11}-\alpha_{23}e_{13}-\alpha_{32}e_{31}+\alpha_{23}\alpha_{32}e_{33}=R(e_{12})R(e_{21}) \\ = R(-e_{11}+\alpha_{32}e_{31}-e_{11}+\alpha_{23}e_{13}+e_{11})=-e_{22}+\alpha_{32}R(e_{31}),\end{gathered}$$ we obtain $\alpha_{32}\neq 0$. This may hold only if $R(e_{32}) = 0$: otherwise the restriction $R_3$ is non-degenerate and consequently $R_3 =-\id$, which forces $\alpha_{32}=0$. But if $R(e_{32})=0$, then, since $\ker(R)$ is a subalgebra, $e_{13}e_{32}=e_{12}\in \ker(R)$, a contradiction. Therefore, $R(e_{12}) = 0$. If $R(e_{32}) = 0$, then we obtain the RB-operators 6-II (when $\alpha_{23}=0$) and 6-III (when $\alpha_{23}=-1$). Suppose that $R(e_{32})\neq 0$. Then $R(e_{32}) = -e_{32}+\beta_{12}e_{12}$. Consider $$\begin{gathered} e_{31}-\alpha_{23}e_{33}-\beta_{12}e_{11}+\beta_{12}\alpha_{23}e_{13}=R(e_{32})R(e_{21}) \\ = R(-e_{31}+\beta_{12}e_{11}-e_{31}+\alpha_{23}e_{33}+e_{31})=-R(e_{31})+\beta_{12}e_{22}.\end{gathered}$$ So, $R(e_{31})=\beta_{12}(e_{11}+e_{22})-e_{31}+\alpha_{23}e_{33}-\beta_{12}\alpha_{23}e_{13}$. This holds if and only if $\beta_{12} = 1$ and $\alpha_{23}=0$. Therefore, $e_{21},\ e_{32}- e_{12}\in\ker(R')$ and we obtain $e_{21}(e_{32}- e_{12}) = - e_{22}\in\ker(R')$, a contradiction. $R(e_{11}) = e_{22}$, $R(e_{22}) = 0$, $R(e_{33}) = -e_{33}$. We have $R(1) = e_{22}-e_{33}$ and by Lemma 1 (as in Case 1) we deduce $$\label{Case7Variables} \begin{gathered} R(e_{12}) = \alpha_{12}e_{12}+\alpha_{31}e_{31},\quad R(e_{31}) = \beta_{12}e_{12}+\beta_{31}e_{31}, \quad R(e_{23}) = \gamma_{23}e_{23}, \\ R(e_{21}) = \alpha_{21}e_{21}+\alpha_{13}e_{13}, \quad R(e_{13}) = \beta_{21}e_{21}+\beta_{13}e_{13}, \quad R(e_{32}) = \gamma_{32}e_{32}. \end{gathered}$$ Moreover, by Lemma 1, $\gamma_{23},\gamma_{32}\in\{0,-1\}$. Since $\ker(R)$ and $\ker(R')$ are subalgebras in $M_3(F)$, it follows that $\gamma_{23}\neq\gamma_{32}$.
Up to conjugation with transpose, we may assume that $$R(e_{23})=-e_{23},\quad R(e_{32})=0.$$ From the equation $$0 = R(e_{12})R(e_{22})=(\alpha_{12}+1)R(e_{12}),$$ it follows that either $\alpha_{12}=-1$ or $R(e_{12})=0$. Similarly, either $\alpha_{21}=-1$ or $R(e_{21})=0$. The case $\alpha_{12} = \alpha_{21} = 0$ does not hold, otherwise $e_{11}\in \ker(R)$. The case $\alpha_{12} = \alpha_{21} = -1$ does not hold, otherwise $e_{11}\in \Imm(R)$, a contradiction to . So, we have two subcases: 7a) $\alpha_{12}=-1$, $R(e_{21})=0$. Since $\ker(R)$ is a subalgebra in $M_3(F)$, we get $R(e_{31})=0$. From $$-\alpha_{31}e_{31}=R(e_{33})R(e_{12})=R(\alpha_{31}e_{31})=0,$$ it follows that $R(e_{12})=-e_{12}$ and, since $\ker(R')$ is a subalgebra, $R(e_{13})=-e_{13}$. We obtain that $R$ is a primitive RB-operator. 7b) $\alpha_{21}=-1$, $R(e_{12})=0$. Since $R(e_{21})\neq 0$, the equality $0 = R(e_{22})R(e_{13})=R(\beta_{21}e_{21})$ implies $\beta_{21}=0$. From $0=R(e_{11})R(e_{13})=(\beta_{13}+1)R(e_{13})$, it follows that either $R(e_{13})=-e_{13}$ or $R(e_{13})=0$. From $$\beta_{12}e_{12}=R(e_{31})R(e_{11})=(\beta_{31}+1)R(e_{31}),$$ it follows that either $R(e_{31})=-e_{31}$ or $\beta_{31}=0$. If $\beta_{31} = 0$, then $$-\beta_{12}e_{11} = R(e_{31})R(e_{21}) = R(\beta_{12}e_{11}+\alpha_{13}e_{33}) = \beta_{12}e_{22}-\alpha_{13}e_{33}$$ leads us to $\beta_{12} = \alpha_{13} = 0$. Thus, either $R(e_{31})=-e_{31}$, or $R(e_{31}) = 0$ and $R(e_{21}) = -e_{21}$. If $R(e_{13}) = 0$, then, since $\ker(R)$ is a subalgebra, $R(e_{31})=-e_{31}$. So, $e_{21} = e_{23}e_{31}\in\ker(R')$. Applying the conjugation with $\Phi_{23}$, we get that $R$ is a primitive RB-operator. If $R(e_{13}) = -e_{13}$, then $R(e_{31}) = 0$ and so, $R(e_{21}) = -e_{21}$. Applying the conjugation with $\Phi_{12}$, we get that $R$ is again a primitive RB-operator. $R(e_{11}) = -e_{11}$, $R(e_{22}) = R(e_{33}) = 0$. We follow the strategy from Case 2).
Since $e_{11}\not\in \ker(R)$ and $e_{22},e_{33}\not\in\ker(R')$, we again get . We have $\ker(R)$-homogeneity. So, we have [**Main variant**]{}: $e_{12},e_{13}\in \ker(R')$ and $e_{21},e_{31}\in \ker(R)$. We will consider it later. [**Second variant**]{}. $e_{21},e_{13},e_{23}\in \ker(R)$ and $p = e_{12}-ae_{13}, q = e_{31}+ae_{21}\in \ker(R')$. In both cases $a = 0$ and $a\neq0$, we get primitive RB-operators which are splitting in Case 8) (the proof is analogous to the one from Case 2)). Let us return to the main variant. If $e_{23},e_{32}\in\ker(R)$, then $R$ is splitting. Otherwise, we have three subcases: a) $e_{23}\in \ker(R)$, b) $e_{32}\in\ker(R)$, c) $e_{23},e_{32}\not \in \ker(R)$. Let us consider the subcase a). By Lemma 1, $Fe_{11}\oplus N$ is $R$-invariant, where $N = \Span\{e_{22},e_{23},e_{32},e_{33}\}$. By Statement 6 and Corollary 1, we may assume that $R(e_{32}) = \left(\begin{matrix} \gamma_{11} & 0 & 0 \\ 0 & \gamma_{22} & 0 \\ 0 & -1 & 0 \end{matrix}\right)$, where $\gamma_{22}\in\{0,1\}$. From the equality $R(e_{32})R(e_{32}) = \gamma_{22}R(e_{32})$, we get $\gamma_{11}(\gamma_{11}-\gamma_{22}) = 0$. If $\gamma_{11} = 0$, we get a splitting RB-operator. So, $\gamma_{11} = \gamma_{22} = 1$ gives the RB-operator 8-I. In b), analogously to a), we get the RB-operator $R$ satisfying $e_{12},e_{13}\in \ker(R')$, $e_{21},e_{31},e_{32}\in \ker(R)$, $R(e_{23}) = e_{11}+e_{33} - e_{23}$, which is conjugate to the RB-operator 8-I) with the help of $\Phi_{23}$. In c), denote $R(e_{23}) = (\delta_{ij})$ and $R(e_{32}) = (\gamma_{ij})$. By Statement 6, the projection $R|_N$ of $R$ on the subalgebra $N$ is an RB-operator. Applying Corollary 1, we have either $R(e_{23}) = \delta_{11}e_{11} + e_{22} - e_{23}$ and $R(e_{32}) = \gamma_{11}e_{11} - e_{32} + e_{33}$ or $R(e_{23}) = \delta_{11}e_{11} - e_{23} + e_{33}$ and $R(e_{32}) = \gamma_{11}e_{11} + e_{22} - e_{32}$. We consider the first variant, the second one is similar.
From $$\begin{gathered} \gamma_{11}\delta_{11}e_{11}+e_{22}-e_{23} = R(e_{23})R(e_{32}) = R(e_{23}), \\ \gamma_{11}\delta_{11}e_{11}+e_{33}-e_{32} = R(e_{32})R(e_{23}) = R(e_{32}),\end{gathered}$$ we get that either $\gamma_{11} = \delta_{11} = 0$ and so, $R$ is splitting, or $\gamma_{11} = \delta_{11} = 1$. So, we obtain the RB-operator $R$ satisfying $e_{12},e_{13}\in \ker(R')$, $e_{21},e_{31}\in \ker(R)$, $R(e_{23}) = e_{11} + e_{22} - e_{23}$, $R(e_{32}) = e_{11} + e_{33} - e_{32}$. The RB-operator $R$ is conjugate to the RB-operator 3-II) with the help of $\varrho$ defined by  with $a=c=1/b$. Analogously, for the second variant we get either a splitting RB-operator or the RB-operator $R$ satisfying $e_{12},e_{13}\in \ker(R')$, $e_{21},e_{31}\in \ker(R)$, $R(e_{32}) = e_{11} + e_{22} - e_{32}$, $R(e_{23}) = e_{11} + e_{33} - e_{23}$, which is conjugate to the RB-operator 3-I). $R(e_{11}) = e_{22}$, $R(e_{22}) = -e_{22}$, $R(e_{33}) = -e_{33}$. [**Lemma 8**]{}. In Case 9), there are no non-splitting RB-operators. [Proof]{}. Denote $A = \Imm(R) = R(M_3(F))$. As in Case 4), we get that either $A = \ker(R+\id)$ and then $R$ is splitting or $A = \ker (R+\id)\oplus\Span\{e_{11}\}$ (as vector spaces). Consider the second variant. Suppose that $R(y) = e_{11}$ for some $y\in M_3(F)$. So, $$0 = R(y)R(1) = R(e_{11} + y(-e_{33}) + y) = e_{11} + e_{22} - R(ye_{33}). \label{9:UnitOnpreImagef1}$$ From , we have $R(ye_{33}) = e_{11}+e_{22}$. Let $y = \alpha e_{13}+\beta e_{23} + \gamma e_{33} + y'$ for $y'\in\Span\{e_{ij}\mid i=1,2,3,\,j=1,2\}$. Thus, $$R(\alpha e_{13}+\beta e_{23} + \gamma e_{33} - e_{11}) = e_{11},$$ and we may assume that $y = \alpha e_{13}+\beta e_{23} + \gamma e_{33} - e_{11}$.
Further, $$\begin{gathered} e_{11} = R(y)R(y) = R(e_{11}y + ye_{11} + y^2) \\ = R(-e_{11}+\alpha e_{13} -e_{11} + \alpha\gamma e_{13}+\beta\gamma e_{23}+\gamma^2 e_{33}-\alpha e_{13}+e_{11} ) \\ = R((\gamma-1)e_{11}+\gamma(\alpha e_{13}+\beta e_{23} + \gamma e_{33} - e_{11})) = (\gamma-1)e_{22} + \gamma e_{11},\end{gathered}$$ which leads us to $\gamma = 1$. We may rewrite the formula $R(y) = e_{11}$ as $$1 = R(\alpha e_{13}+\beta e_{23}-e_{11}-e_{22}) = R(\alpha e_{13}+\beta e_{23}).$$ Define $z = \alpha e_{13}+\beta e_{23}$. On the one hand, $$1 = R(z)R(z) = R(R(z)z+zR(z)+z^2) = 2 + R(z^2)$$ and so $R(z^2) = -1$. On the other hand, $z^2 = 0$. We have a contradiction. $\square$ We have considered all cases of the action of $R$ on $D_3(F)$. The theorem is proved. $\square$ [**Corollary 2**]{}. All RB-operators obtained in Theorem 3 lie in different orbits under the action of the operator $\phi$ from Statement 1 and conjugation with automorphisms of $M_3(F)$ and transpose. [Proof]{}. Let us denote by $X*$ the set of cases $Xa,Xb,Xc$ for $X\in\{1,2,3,4,6,7\}$ and $Xa,Xb$ for $X = 5$. The Jordan form of $R$ as a linear map as well as the rank of $R(1)$ are preserved under the action of $\Aut(M_3(F))$ and transpose. So, we may compute the 6-tuple $$\label{6-tuple} (\dim(\ker(R)),\dim(\ker(R^2)), \dim(\ker(R')),\dim(\ker(R'^2)), \rank(R(1)),\rank(R'(1)))$$ for each case and compare them up to the action of $\phi$. Immediately, we get that the cases $1*)$, 1-I), 5-I), $7*)$, 8-I) lie in their own orbits. Indeed, the cases $1*)$ and 1-I) are unique with the property $R^2(R+\id)^2\neq 0$, and they lie in different orbits, since their minimal polynomials $m_1 = x^3(x+1)$ and $m_{1-I} = x^3(x+1)^2$ do not coincide. Only for the cases $7*)$, we have $(\rank (R(1)),\rank (R'(1))) = (2,2)$. The case 8-I) is the unique case with $\dim(\ker R^2) = \dim(\ker R) = 5$. The case 5-I) is the only case satisfying the conditions $\dim(\ker(R)) = 6$ and $(\rank (R(1)),\rank (R'(1))) = (2,3)$.
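For a fully specified operator the 6-tuple above can be evaluated mechanically. The sketch below (Python/numpy; the helper names are ours) assembles the RB-operator 8-I) from Case 8 as a $9\times 9$ matrix in the basis of matrix units, verifies the weight-one Rota-Baxter identity on all pairs of matrix units, and computes the 6-tuple; in particular, $\dim(\ker R) = \dim(\ker R^2) = 5$, as claimed.

```python
import numpy as np

# RB-operator 8-I) from Case 8: e_{12}, e_{13} in ker(R'); e_{21}, e_{31}, e_{23} in ker(R);
# R(e_{11}) = -e_{11}, R(e_{22}) = R(e_{33}) = 0, R(e_{32}) = e_{11} + e_{22} - e_{32}.
def unit(i, j):
    m = np.zeros((3, 3))
    m[i - 1, j - 1] = 1.0
    return m

keys = [(i, j) for i in range(1, 4) for j in range(1, 4)]
E = {k: unit(*k) for k in keys}
Z = np.zeros((3, 3))
action = {(1, 1): -E[1, 1], (2, 2): Z, (3, 3): Z,
          (1, 2): -E[1, 2], (1, 3): -E[1, 3],
          (2, 1): Z, (3, 1): Z, (2, 3): Z,
          (3, 2): E[1, 1] + E[2, 2] - E[3, 2]}

# 9x9 matrix of R in the basis (e_{11}, e_{12}, ..., e_{33}); R' = R + id.
M = np.column_stack([action[k].reshape(9) for k in keys])
Rop = lambda x: (M @ x.reshape(9)).reshape(3, 3)

# Weight-one Rota-Baxter identity on all pairs of matrix units.
ok = all(np.allclose(Rop(x) @ Rop(y), Rop(Rop(x) @ y + x @ Rop(y) + x @ y))
         for x in E.values() for y in E.values())

dim_ker = lambda A: A.shape[1] - np.linalg.matrix_rank(A)
Mp = M + np.eye(9)
R1 = Rop(np.eye(3))  # R(1)
six_tuple = (dim_ker(M), dim_ker(M @ M), dim_ker(Mp), dim_ker(Mp @ Mp),
             np.linalg.matrix_rank(R1), np.linalg.matrix_rank(R1 + np.eye(3)))
print(ok, six_tuple)
```

The same routine applies to any of the listed operators once its action on the nine matrix units is written out.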
Let us check that all primitive RB-operators listed in Theorem 3 lie in different orbits. [**Lemma 9**]{}. The cases $Xa$, $Xb$ and $Xc$ for $X\in\{1,2,3,4,6,7,8,9\}$ and $5a$, $5b$ lie in pairwise different orbits. [Proof]{}. Analyzing 6-tuples , we obtain that primitive RB-operators of different types lie in different orbits. Consider two different RB-operators $O$ and $P$ of the same type $X$. Suppose that they lie in the same orbit, so either $\psi^{-1}O\psi = P$ or $\psi^{-1}O\psi = T\circ P\circ T$ for some $\psi\in\Aut(M_3(F))$. Consider the first case. Note that $\psi$ preserves both kernels and their powers. Moreover, $\psi$ preserves the radicals of the kernels and the one-dimensional annihilators of such radicals, i.e., $$\psi(e_{13}) = se_{13},\quad \psi(e_{31}) = te_{31}$$ for some $s,t\in F$. So, $\psi(e_{11}) = st e_{11}$ and $\psi(e_{33}) = st e_{33}$. Since the image of an idempotent under the action of an automorphism has to be an idempotent, $st = 1$. Thus, $\psi(e_{11}) = e_{11}$, $\psi(e_{33}) = e_{33}$, and $\psi(e_{22}) = e_{22}$, since $\psi$ preserves the identity matrix. So, we get a contradiction. In the second case, we analogously get that $$\psi(e_{13}) = se_{31},\quad \psi(e_{31}) = te_{13}.$$ Again, $st=1$ and $\psi(e_{11}) = e_{33}$, $\psi(e_{33}) = e_{11}$, and $\psi(e_{22}) = e_{22}$. It means that the action of $O$ and $P$ on the subalgebra of diagonal matrices should coincide up to the action of $\Phi_{13}$. We get a contradiction for all the cases $Xa,Xb,Xc$. $\square$ Let us continue on the separation of cases by 6-tuples . Up to $\phi$ we have the following orbits: a) $2*)$, 2-I), 2-II) for the 6-tuple $(4,5,4,4,2,3)$, b) $3*)$, 3-I), 3-II) for the 6-tuple $(4,4,4,5,1,2)$, c) $4*)$, 4-I), 4-II), 6-I) for the 6-tuple $(4,5,4,4,1,3)$, d) 6-II), 6-III), 6-VI) for the 6-tuple $(6,7,2,2,1,3)$, e) $5*)$, 5-II) for the 6-tuple $(5,6,3,3,2,3)$, f) $6*)$, 6-IV), 6-V) for the 6-tuple $(5,6,3,3,1,3)$.
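Both symmetries used in this separation argument can be checked numerically: conjugation $R\mapsto\psi^{-1}R\psi$ by an algebra automorphism preserves the weight-one Rota-Baxter identity, and so does the involution $R\mapsto -\id - R$, the standard weight-one involution (presumably the operator $\phi$ of Statement 1). A self-contained sketch with a brute-force checker (names ours, not from the text):

```python
import numpy as np

def is_rb(R, tol=1e-9):
    """Weight-one Rota-Baxter identity checked on all pairs of matrix units of M_3."""
    units = [np.outer(np.eye(3)[i], np.eye(3)[j]) for i in range(3) for j in range(3)]
    return all(np.allclose(R(x) @ R(y), R(R(x) @ y + x @ R(y) + x @ y), atol=tol)
               for x in units for y in units)

# A splitting weight-one RB-operator: -id on the strictly lower-triangular part.
R = lambda x: -np.tril(x, k=-1)

# Conjugation by the inner automorphism psi(x) = P x P^{-1}.
P = np.array([[1.0, 2.0, 0.0], [0.0, 1.0, 3.0], [0.0, 0.0, 1.0]])
Pinv = np.linalg.inv(P)
Q = lambda x: Pinv @ R(P @ x @ Pinv) @ P      # Q = psi^{-1} R psi

# The weight-one involution R -> -id - R.
S = lambda x: -x - R(x)

print(is_rb(R), is_rb(Q), is_rb(S))
```

Since both symmetries preserve the identity, any invariant computed from $R$, $R'=R+\id$, and $R(1)$, such as the 6-tuple, is constant on orbits.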
Let us show how the analysis of left and right annihilators of the kernels helps to separate cases. If RB-operators $P$ and $Q$ lie in the same orbit, then both their kernels should be isomorphic or anti-isomorphic. In particular, the pairs of dimensions of the left and right annihilators of each of $\ker(R)$ and $\ker(R+\id)$ should be pairwise equal. For example, in the case 6-IV) we have $\ker(R_{6-IV})=\Span\{e_{11},e_{33},e_{32},e_{21},e_{31}\}$ and $\Ann_l(\ker(R_{6-IV})) = \Ann_r(\ker(R_{6-IV})) = (0)$. In the case 6-V) we have $\ker(R_{6-V})=\Span\{e_{11},e_{33},e_{23},e_{21},e_{31}\}$ and $\Ann_l(\ker(R_{6-V})) = (0)$ but $\Ann_r(\ker(R_{6-V}))=\Span\{e_{23},e_{21}\}$. It means that 6-IV and 6-V lie in different orbits. By the same argument applied for $\ker(R+\id)$, we separate the cases 2-I) and 2-II), 4-I) and 4-II), respectively. a) Further, the cases $2*)$ do not lie in the same orbit with either 2-I) or 2-II), since $\ker(R_{2*}^2)$ has a two-dimensional semisimple part, while $\ker(R^2)$ has a three-dimensional semisimple part for $R$ in the cases 2-I) and 2-II). b) The semisimple part of $\ker(R)$ is one-dimensional in the cases $3*)$ and it is two-dimensional in the cases 3-I) and 3-II). We will show below that the RB-operators from the cases 3-I) and 3-II) lie in different orbits. c) The case 6-I) lies in its own orbit since it is the only variant from c) satisfying the condition $\ker(R)\cong M_2(F)$. The cases $4*)$ do not lie in the same orbit with either of 4-I), 4-II). Indeed, $\ker(R_{4*})$ has a one-dimensional semisimple part, while the semisimple part of $\ker(R)$ in the cases 4-I), 4-II) is two-dimensional. d) The subalgebra $\ker(R+\id)$ has trivial product only in the case 6-VI). Further, $\ker(R+\id)^2 = \ker(R+\id)$ only in the case 6-II). e) The cases $5*)$ and 5-II) do not lie in the same orbit.
The algebra $\Imm(R)$ in the case $5*)$ has a one-dimensional semisimple part, while the semisimple part of $\Imm(R)$ in the case 5-II) is two-dimensional. f) The cases $6*)$ do not lie in the same orbit with either 6-IV) or 6-V), since $\ker(R'_{6*})$ is nilpotent, while both $\ker(R'_{6-IV})$ and $\ker(R'_{6-V})$ have a one-dimensional semisimple part. Finally, let us show that the RB-operators $P$ and $O$ taken from the cases 3-I and 3-II respectively lie in different orbits. Assume there exists $\psi\in\Aut(M_3(F))$ such that $\psi^{-1}O\psi = P$ or $\psi^{-1}O\psi = T\circ P\circ T$. Note that in both cases $\psi(P(1)) = O(1)$. So, $\psi(e_{11}) = e_{11}$. Consider the first case. We have $$\begin{gathered} \allowdisplaybreaks \ker(P+\id) = \Span\{e_{11},e_{12},e_{13},e_{23}\},\quad \ker(O+\id) = \Span\{e_{11},e_{12},e_{13},e_{32}\},\\ \allowdisplaybreaks \ker(P+\id)^2 = \ker(P+\id) \oplus \Span\{e_{22}\},\quad \ker(O+\id)^2 = \ker(O+\id) \oplus \Span\{e_{22}\},\\ \ker(P) = \Span\{e_{21},e_{31},e_{22}+e_{33},e_{22}+e_{32}\},\ \ker(O) = \Span\{e_{21},e_{31},e_{22}+e_{33},e_{22}+e_{23}\}.\end{gathered}$$ Further, $\psi(e_{13}) = ae_{12}$ for some nonzero $a\in F$, since $\psi$ has to map the centralizer of $\rad(\ker(P+\id))$ onto the centralizer of $\rad(\ker(O+\id))$. As $\psi$ maps $\rad(\ker P)$ onto $\rad(\ker O)$, we get $$\psi(e_{21}) = be_{21} + ce_{31},\quad \psi(e_{31}) = de_{21} + fe_{31}.$$ Further, $\psi(e_{33}) = \psi(e_{31})\psi(e_{13}) = e_{22} + af e_{32}$, so $\psi(e_{22}) = \psi(e_{21})\psi(e_{12}) = e_{33} - af e_{32}$. The following equalities $$(P+\id)(e_{22}) = - e_{11} \neq 1 = \psi^{-1}(1) = \psi^{-1}(O+\id)(e_{33} - af e_{32}) = \psi^{-1}(O+\id)\psi(e_{22})$$ imply a contradiction. Now consider the second case when $\psi^{-1}O\psi = T\circ P\circ T$.
Then analogously, we get $\psi(e_{31}) = a e_{12}$ for some nonzero $a\in F$, and $$\psi(e_{12}) = be_{21} + ce_{31}, \quad \psi(e_{13}) = de_{21} + fe_{31}.$$ Thus, $\psi(e_{33}) = \psi(e_{31})\psi(e_{13}) = e_{11}$, a contradiction with $\psi(e_{11}) = e_{11}$. $\square$ [**Remark 3**]{}. One can derive from Theorem 3 the classification of all non-splitting RB-operators of nonzero weight on the 5-dimensional semisimple associative algebra $A = Fe\oplus M_2(F)$, where $e^2 = e\neq 0$. Indeed, given an RB-operator $R$ of weight one on $A$, we may extend its action to the entire algebra $M_3(F)$ by Example 2. In more detail, we embed $A$ into $M_3(F)$ as follows: $\psi(e) = e_{11}$, $\psi(e_{ij}) = e_{i+1\, j+1}$ for $e_{ij}\in M_2(F)$. Then we put $e_{12},e_{13}\in\ker(R+\id)$ and $e_{21},e_{31}\in\ker(R)$. If one starts with a non-splitting RB-operator $R$ on $A$, then its extension $R$ to $M_3(F)$ is again a non-splitting RB-operator. So, $R$ up to $\phi$ and up to conjugation with an automorphism of $M_3(F)$ and transpose is one of the RB-operators from Theorem 3. On the other hand, all RB-operators from Theorem 3 except the cases 1-I), 6-I), 6-II), 6-III) are exactly the extensions of RB-operators on $A$ described above. Acknowledgements {#acknowledgements .unnumbered} ================ M. Goncharov was supported by the Russian Science Foundation (project no. 19-11-00039). V. Gubarev was supported by the Program of fundamental scientific researches of the Siberian Branch of Russian Academy of Sciences, I.1.1, project 0314-2019-0001. M. Aguiar, Infinitesimal Hopf algebras, Contemp. Math. [**267**]{} (2000) 1–30. M. Aguiar, Pre-Poisson algebras, Lett. Math. Phys. [**54**]{} (2000) 263–277. H. An, C. Bai, From Rota-Baxter Algebras to Pre-Lie Algebras, J. Phys. A. [**1**]{} (2008), 015201, 19 p. F.V. Atkinson, Some aspects of Baxter’s functional equation, J. Math. Anal. Appl. [**7**]{} (1963) 1–30. G.
Baxter, An analytic problem whose solution follows from a simple algebraic identity, Pacific J. Math. [**10**]{} (1960) 731–742. A.A. Belavin, V.G. Drinfel’d, Solutions of the classical Yang—Baxter equation for simple Lie algebras, Funct. Anal. Appl. (3) [**16**]{} (1982) 159–180. P. Benito, V. Gubarev, A. Pozhidaev, Rota—Baxter operators on quadratic algebras, Mediterr. J. Math. [**15**]{} (2018), 23 p. (N189). S.L. de Bragança, Finite Dimensional Baxter Algebras, Stud. Appl. Math. [**54**]{} (1) (1975) 75–89. D. Burde, V. Gubarev. Decompositions of algebras and post-associative algebra structures, Int. J. Algebr. Comput. (accepted), DOI: 10.1142/S0218196720500071. P. Cartier, On the structure of free Baxter algebras, Adv. Math. [**9**]{} (1972) 253–265. C. Du, C. Bai, L. Guo, 3-Lie bialgebras and 3-Lie classical Yang-Baxter equations in low dimensions, Linear Multilinear A. [**66**]{} (8) (2018) 1633–1658. K. Ebrahimi-Fard, Rota-Baxter Algebras and the Hopf Algebra of Renormalization, Ph.D. Thesis, University of Bonn, 2006. V. Gubarev. Rota—Baxter operators on unital algebras, arXiv:1805.00723v3 \[math.RA\], 43 p. V. Gubarev, Rota—Baxter operators on a sum of fields, J. Algebra Appl. (2020) (accepted), DOI: 10.1142/S0219498820501182, 12 p. F. Guil. Banach-Lie groups and integrable systems, Inverse Probl. [**5**]{} (1989) 559–571. L. Guo, An Introduction to Rota—Baxter Algebra. Surveys of Modern Mathematics. Vol. 4. Somerville, MA: International Press; Beijing: Higher education press, 2012. L. Guo, W. Keigher, Baxter algebras and shuffle products, Adv. Math. [**150**]{} (2000) 117–149. P.S. Kolesnikov, Homogeneous averaging operators on simple finite conformal Lie algebras, J. Math. Phys. [**56**]{} (2015), 071702, 10 p. E.I. Konovalova, Double Lie algebras, Ph.D. Thesis, Samara State University, 2009. 189 p. (in Russian). Yu Pan, Q. Liu, C. Bai, L. Guo, PostLie algebra structures on the Lie algebra $\mathrm{sl}(2,\mathbb{C})$, Electron. J. 
Linear Algebra [**23**]{} (2012) 180–197. J. Pei, C. Bai, and L. Guo, Rota-Baxter operators on $\mathrm{sl}(2,\mathbb{C})$ and solutions of the classical Yang-Baxter equation, J. Math. Phys. [**55**]{} (2014), 021701, 17 p. G.-C. Rota, Baxter algebras and combinatorial identities. I, Bull. Amer. Math. Soc. [**75**]{} (1969) 325–329. M.A. Semenov-Tyan-Shanskii, What is a classical $r$-matrix? Funct. Anal. Appl. [**17**]{} (1983) 259–272. V.V. Sokolov, Classification of constant solutions of the associative Yang—Baxter equation on $\mathrm{Mat}_3$, Theor. Math. Phys. (3) [**176**]{} (2013) 1156–1162. X. Tang, Y. Zhang, and Q. Sun, Rota-Baxter operators on 4-dimensional complex simple associative algebras, Appl. Math. Comp. [**229**]{} (2014) 173–186. Y. Zhang, X. Gao, J. Zheng, Weighted infinitesimal unitary bialgebras on matrix algebras and weighted associative Yang-Baxter equations, arXiv:1811.00842, 24 p. V.N. Zhelyabin, Jordan bialgebras of symmetric elements and Lie bialgebras, Siberian Mat. J. (2) [**39**]{} (1998) 261–276. Maxim Goncharov\ Vsevolod Gubarev\ Novosibirsk State University\ Pirogova str. 2, 630090 Novosibirsk, Russia\ Sobolev Institute of Mathematics\ Acad. Koptyug ave. 4, 630090 Novosibirsk, Russia\ e-mail: gme@math.nsc.ru, wsewolod89@gmail.com
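To complement Remark 3 above, the splitting extension described there is easy to check numerically. The sketch below is not from the paper: it assumes the weight-one Rota–Baxter identity in the convention $R(x)R(y) = R(R(x)y + xR(y) + xy)$ and verifies it for the splitting operator on $M_3(F)$ determined by the decomposition into the strictly lower-triangular and upper-triangular subalgebras, so that $e_{21},e_{31}\in\ker(R)$ and $e_{12},e_{13}\in\ker(R+\id)$ as in the remark:

```python
import numpy as np

# M_3(F) = L + U with L strictly lower-triangular and U upper-triangular,
# both subalgebras.  The splitting operator R(l + u) = -u has ker(R) = L
# (so e_21, e_31 lie in ker R) and R + id vanishing on U (so e_12, e_13
# lie in ker(R + id)), matching the extension of Remark 3.

def R(x):
    return -np.triu(x)   # minus the upper-triangular part (diagonal included)

def rb_defect(x, y):
    # Weight-one Rota-Baxter identity in the (assumed) convention
    # R(x)R(y) = R(R(x)y + xR(y) + xy); the defect should vanish.
    return R(x) @ R(y) - R(R(x) @ y + x @ R(y) + x @ y)

rng = np.random.default_rng(0)
for _ in range(100):
    x = rng.integers(-5, 6, (3, 3)).astype(float)
    y = rng.integers(-5, 6, (3, 3)).astype(float)
    assert np.allclose(rb_defect(x, y), 0.0)
```

The identity holds exactly here because both summands are subalgebras: writing $x = x_1 + x_2$ with $x_1$ strictly lower and $x_2$ upper triangular, the argument of the outer $R$ reduces to $x_1y_1 - x_2y_2$.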
--- abstract: | We consider the ferromagnetic Ising model on a highly inhomogeneous network created by a growth process. We find that the phase transition in this system is characterised by the Berezinskii–Kosterlitz–Thouless singularity, although critical fluctuations are absent and the mean-field description is exact. Below this infinite order transition, the magnetization behaves as $\exp(-\text{const}/\sqrt{T_c-T})$. We show that the critical point separates the phase with the power-law distribution of the linear response to a local field and the phase where this distribution rapidly decreases. We suggest that this phase transition occurs in a wide range of cooperative models with a strong infinite-range inhomogeneity.\ [*Note added.*]{}—After this paper had been published, we learnt that the infinite order phase transition in the effective model we arrived at was discovered by O. Costin, R.D. Costin and C.P. Grünfeld in 1990. This phase transition was considered in the following papers:\ $[$1$]$ O. Costin, R.D. Costin and C.P. Grünfeld, Infinite-order phase transition in a classical spin system, J. Stat. Phys. [**59**]{}, 1531 (1990);\ $[$2$]$ O. Costin and R.D. Costin, Limit probability distributions for an infinite-order phase transition model, J. Stat. Phys. [**64**]{}, 193 (1991);\ $[$3$]$ M. Bundaru and C.P. Grünfeld, On a phase transition in a one-dimensional non-homogeneous model, J. Phys. A [**32**]{}, 875 (1999);\ $[$4$]$ S. Romano, Computer simulation study of one-dimensional lattice spin models with long-range inhomogeneous interactions, Mod. Phys. Lett. B [**9**]{}, 1447 (1995).\ We would like to note that Costin, Costin and Grünfeld treated this model as a one-dimensional inhomogeneous system. We have arrived at the same model as a one-replica ansatz for a random growing network, where we expected to find a phase transition of this sort based on earlier results for random networks (see the text).
We have also obtained the distribution of the linear response to a local field, which characterises correlations in this system. We thank O. Costin and S. Romano for pointing out these publications of the 1990s. author: - 'M. Bauer' - 'S. Coulomb' - 'S. N. Dorogovtsev' title: | Phase Transition with the Berezinskii–Kosterlitz–Thouless Singularity\ in the Ising Model on a Growing Network --- The ferromagnetic Ising model on lattices has an ordinary second-order phase transition [@o44]. Above the upper critical dimension of the model, the critical fluctuations are absent, and the mean-field description of this transition is exact. In particular, this takes place if couplings between all spins are equal—infinite-range interactions. In this Letter we report the finding of a phase transition with the Berezinskii–Kosterlitz–Thouless (BKT) singularity in the Ising model on a growing network, which is infinite-dimensional, as most networks are. This transition is quite unusual both for an infinite-dimensional system and for a cooperative model with an order parameter of discrete symmetry. Recall that in “ordinary” continuous phase transitions, pair correlations of an order parameter show a slow, power-law space decay only at the critical point, $T_c$, and decay exponentially both in the low- and high-temperature phases. This behavior was observed in the Ising model on equilibrium complex networks [@ahs02; @dgm02; @lvv02; @g03; @remark1] (for percolation and for disease spreading on equilibrium networks, see Refs. [@cah02] and [@pv01], respectively). In contrast, the BKT phase transition [@b71; @kt73] separates the phase with rapidly decreasing correlations and the critical phase with correlations decaying by a power law. The contact of these two phases is characterised by specific dependences. For example, the order parameter behaves as $M(T) \sim \exp(-\text{const}/\sqrt{T_c-T})$, and the phase transition is of infinite order.
Normally, the BKT transition is realized in systems with two-component order parameters of continuous symmetry at the lower critical dimension. Also, this anomalous phase transition is present in a few low-dimensional systems (e.g., the Luttinger liquid) which can actually be reduced to the ones indicated above. There is one more interesting situation, where the BKT singularity emerges. It was observed that in some growing networks, near the birth point of the giant connected component, its relative size behaves similarly to the magnetization near the BKT transition [@chk01; @dms01; @kkkr02; @l02; @bb03; @d03; @cb03; @br05; @kd04]. [*The model.*]{}—We find an exact solution of the following cooperative model. A network grows up to a large size, and interacting spins are considered on the resulting net. The interaction between the nearest neighbor spins is described by the ferromagnetic Ising model. We use the following growing network: - The growth starts with a single vertex ($t=0$). - At each time step, we add a new vertex and attach it to one of the “older” vertices. - For simplicity, we assume a specific annealing. For an edge born at time $t$, the end of the edge at vertex $t$ is fixed, and the second end can be found at each of the vertices in the range $0\leq\tau<t$ with equal probability. Characteristic times for the jumps of this end between the vertices $0\leq\tau<t$ are assumed to be not greater than those of the spin relaxation. One can show that the resulting model is equivalent to the ferromagnetic Ising model on the deterministic graph shown in Fig. \[f1\]. In this system, the spin on a vertex, which was born at time $t$, has equal coupling $1/t$ to each of the spins on the older vertices. The Hamiltonian of the model is: $${\cal H} =\, -\!\!\!\!\sum_{0\leq i<j\leq t} \frac{s_i s_j}{j} - \sum_{i=0}^t H_i s_i \, , \label{e1}$$ where spins $s_i=\pm 1$, and $H_i \geq 0$ is an inhomogeneous magnetic field.
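Since the Hamiltonian (\[e1\]) couples the new spin only to the earlier ones, the partition function obeys an exact recursion over the last spin — the mechanism exploited below in the exact derivation of the free energy. A brute-force sketch (not from the paper; direct enumeration is feasible only for a handful of spins) checking this identity numerically:

```python
import itertools, math

def Z(t, H, beta):
    """Partition function of the Hamiltonian (e1) for spins s_0,...,s_t
    in a homogeneous field H, by direct enumeration (small t only)."""
    total = 0.0
    for s in itertools.product((-1, 1), repeat=t + 1):
        energy = -sum(s[i] * s[j] / j
                      for j in range(1, t + 1) for i in range(j)) \
                 - H * sum(s)
        total += math.exp(-beta * energy)
    return total

# Exact recursion over the last spin (the relation behind Eq. (e4)):
#   Z_t(H) = sum_{s = +-1} Z_{t-1}(H + s/t) * exp(beta*H*s)
beta, H, t = 0.5, 0.3, 6
lhs = Z(t, H, beta)
rhs = sum(Z(t - 1, H + s / t, beta) * math.exp(beta * H * s)
          for s in (-1, 1))
assert abs(lhs - rhs) / lhs < 1e-12
```

The recursion is exact at any size: fixing $s_t$ turns its coupling $-s_t\sum_{i<t}s_i/t$ into an extra homogeneous field $s_t/t$ acting on the first $t$ spins.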
Actually, we reduce our problem to a system with a strong deterministic infinite-range disorder. [*Mean-field treatment.*]{}—Let us first use a mean-field ansatz. We will show afterwards that the mean-field solution is exact. For the sake of brevity, we use the following simple mean-field treatment. We assume small fluctuations of spins from their mean-field values $m_i$: $s_i s_j \to m_i m_j + m_i(s_j-m_j) + m_j(s_i-m_i)$. Substituting this relation into Eq. (\[e1\]) gives a linear effective mean-field Hamiltonian. With this Hamiltonian, it is easy to obtain the partition function $Z = \sum_{\{s_i=\pm1\}}e^{-\beta{\cal H}[\{s_i\}]}$ ($\beta \equiv 1/T$) and the free energy $F = - \beta^{-1}\ln Z$: $$\begin{aligned} && F = t\! \int_0^1 \!dx \int_x^1 \frac{dy}{y}\,m(x)m(y) - \frac{t}{\beta}\ln2- \nonumber \\[5pt] && \phantom{|}\!\!\!\!\!\!\!\!\!\!\! \frac{t}{\beta} \!\!\int_0^1 \!\!\!dx\ln \cosh\! \left\{\!\beta\!\left[\frac{1}{x}\int_0^x\!\!\!\!\! dy\,m(y) + \!\int_x^1 \!\!\frac{dy}{y}\,m(y) + H(x)\right]\right\} . \nonumber \\[5pt] && %%\, \label{e2}\end{aligned}$$ Here we assumed that $t$ is large and passed to the continuum limit: $m_i = m(x\!=\!i/t)$. Expression (\[e2\]) together with the relation $m(x) = -(1/t)\,\delta F/\delta H (x)$ allows us to obtain the equation for the mean local magnetization: $$m(x) = \tanh \left\{\!\beta\!\left[\frac{1}{x}\int_0^x\!\!\!\! dy\,m(y) + \!\int_x^1 \!\frac{dy}{y}\,m(y) + H(x)\right]\right\} \, . \label{e3}$$ [*The exact derivation of the free energy.*]{}—The free energy can be found exactly. We compare the free energies of the network at times $t-1$ and $t$. For brevity, here we consider only the homogeneous magnetic field. 
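The self-consistency equation (\[e3\]) above is straightforward to solve numerically. A damped fixed-point sketch (grid size, damping factor, and iteration count are ad hoc choices, not from the paper) exhibiting the nontrivial profile $m(x)$ below $T_c$:

```python
import numpy as np

def magnetization_profile(beta, N=400, iters=3000):
    """Damped fixed-point iteration of Eq. (e3) with H = 0 on a midpoint
    grid (which keeps the integrable endpoint x = 0 off the grid)."""
    x = (np.arange(N) + 0.5) / N
    dx = 1.0 / N
    m = np.ones(N)                                   # start in the ordered state
    for _ in range(iters):
        inner = np.cumsum(m) * dx / x                # (1/x) int_0^x dy m(y)
        outer = np.cumsum((m / x)[::-1])[::-1] * dx  # int_x^1 dy m(y)/y
        m = 0.5 * (m + np.tanh(beta * (inner + outer)))   # damped update
    return x, m

x, m = magnetization_profile(0.5)        # beta = 1/2 > beta_c = 1/4
M = m.mean()                             # full relative magnetization
assert 0.0 < M < 1.0
assert m[0] > 0.9                        # m(x -> 0) -> 1, cf. Eq. (e7)
assert abs(m[-1] - np.tanh(0.5 * M)) < 0.05   # boundary value m(1) = tanh(beta*M)
```

The damping is there only for stability of the iteration; the converged profile reproduces the features discussed below: $m(x\to0)\to1$ and $m(1)=\tanh(\beta M)$.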
The form of the Hamiltonian (\[e1\]) results in the relation $$e^{-\beta F_t(H)} = \sum_{s_{t}=\pm 1} e^{-\beta F_{t-1}(H+s_{t}/t)}e^{\beta Hs_{t}} \, , \label{e4}$$ that is, $$\begin{aligned} && e^{-\beta [F_t(H)-F_{t-1}(H)]} \to e^{-\beta F_t(H)/t} = \nonumber \\[5pt] && \sum_{s=\pm 1} \exp\left[-\beta \frac{\partial F_{t}(H)}{\partial H}\,\frac{s}{t} + \beta Hs\right] \, . \label{e5}\end{aligned}$$ Here we took into account the fact that at large $t$, the ratio $F_t(H)/t$ approaches a $t$-independent limit. Using the relation for the (relative) full magnetization $M(H) = \int_0^1 dx\, m(x,H) = - (1/t)\partial F_{t}(H)/\partial H$, we get the exact form of the free energy: $$F = - t \,\beta^{-1} \ln\{ 2\cosh[\beta(H+M(H))]\} \, %%. \label{e6}$$ at $t \to \infty$. One can check that free energy expressions (\[e2\])—the mean-field one—and (\[e6\])—the exact expression—coincide. Indeed, substituting Eq. (\[e3\]) into the relation (\[e2\]) and integrating by parts, we arrive at the free energy exactly in the form (\[e6\]). In this sense, the mean-field treatment of this problem is exact. [*Analysis of the equation for the magnetization.*]{}—Let us consider Eq. (\[e3\]). From this equation, one can see that the assumption $m(x) \neq 0$ at some $x$ immediately leads to the following behavior of $m(x)$ near $x=0$: $$m(x \sim 0) \cong 1 - A\,x^{2\beta} \, , \label{e7}$$ where $A$ depends on $\beta$ and $H$. If $H=0$, this behavior is realized only in the low-temperature phase, and $m(x)=0$ above $T_c$. If $H> 0$, $m(x=0)=1$ at any temperature. (We will see that the critical point, $T_c$, exists.) On the boundary, Eq. (\[e3\]) readily gives $m(1) = \tanh\{\beta [M+H(1)]\}$. For finding this profile, it is convenient to pass to a differential equation. For brevity, here we assume that $H=0$. We introduce a new variable, $n(z)$: $$%%\widetilde{m}(1/x) \equiv \frac{1}{x}\int_0^x\!\! dy\,m(y) + \int_x^1 \frac{dy}{y}\,m(y) n(-\ln x)\equiv\beta\left[ \frac{1}{x}\int_0^x\!\!
dy\,m(y) + \int_x^1 \frac{dy}{y}\,m(y)\right] \, , \label{e8}$$ so $$%%m(x) = \tanh[\beta\widetilde{m}(1/x)] m(x) = \tanh n(-\ln x) \, . \label{e9}$$ Differentiating Eq. (\[e8\]) and using Eq. (\[e9\]) gives the second order differential equation $$%%\frac{d^2\widetilde{m}(x)}{d x^2} = -\frac{1}{\phantom{.}x^2}\,\tanh[\beta\,\widetilde{m}(x)] \frac{dn(z)}{dz} - \frac{d^2n(z)}{dz^2} = \beta\tanh n(z) \, %%. \label{e10}$$ with the boundary conditions: (i) $(dn/dz)(z\!=\!0) = n(z\!=\!0)$ \[note that $n(z\!=\!0) = \beta M \equiv \beta\int_0^1 dy\,m(y)$\] and (ii) $n(z\to\infty) \cong \beta z + \text{const}$. $z$ is related to $x=i/t$: $z=-\ln x$, so $0 \leq z < \infty$, where $z=0$ corresponds to $x=1$. Boundary conditions (i) and (ii) follow from definition (\[e8\]) and relation (\[e7\]), respectively. At each value of $\beta$, there is a single solution of Eq. (\[e10\]) with these boundary conditions, which allows one to get $M$. Equation (\[e10\]) can be transformed into a first order differential equation. For this, we pass from variables $\{t,n(t)\}$ to $\{n,w(n)\}$, where $w \equiv \beta^{-1}(dn/dt)$. \[$n$ varies from $0$ to $\infty$, while $w(n)$ takes values between $0$ and $1$.\] This gives the equation $$w\frac{dw}{dn} = \beta^{-1}(w - \tanh n) \, %%. \label{e11}$$ for $w(n)$ with the following boundary conditions: (i) $w[n(z\!=\!0)]=\beta^{-1}n(z\!=\!0)$ \[recall that $\beta^{-1}n(z\!=\!0) = M$\] and (ii) $w(n\to\infty)=1$. Here, boundary condition (i) on the line $w=\beta^{-1}n$ corresponds to that at $x=i/t=1$. Asymptotic boundary condition (ii) corresponds to the limit $i/t \to 0$. Knowing $w(n)$ one can easily get $m(x)$. The analysis of Eq. (\[e11\]) is similar to that of an equation of this type in Ref. [@dms01]. At small $n$, one can substitute $\tanh n$ by $n$ on the right-hand side of Eq. (\[e11\]), so we have $w dw/dn = \beta^{-1}(w - n)$. The solutions of this equation can be presented in an analytical form. 
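The boundary-value problem for Eq. (\[e11\]) can also be treated by direct numerical integration: starting on the stable branch $w\approx1$ at large $n$ and integrating backwards until the line $w=\beta^{-1}n$ of boundary condition (i) is reached yields $M$. A rough Euler sketch (step size and starting point are ad hoc choices, not from the paper):

```python
import math

def magnetization_from_ode(beta, n0=15.0, h=1e-4):
    """Integrate Eq. (e11) backwards in n from the stable branch w ~ 1
    at large n until boundary condition (i), w = n/beta, is reached;
    at the crossing w = M (and n = beta*M)."""
    n, w = n0, 1.0
    while n > 0.0 and w > 0.0:
        if w >= n / beta:                 # hit the line w = beta^{-1} n
            return w
        w -= h * (w - math.tanh(n)) / (beta * w)   # backward Euler step
        n -= h
    return 0.0                            # no crossing: paramagnetic phase

M_mid = magnetization_from_ode(0.5)       # well inside the ordered phase
M_near = magnetization_from_ode(0.35)     # closer to beta_c = 1/4
assert 0.0 < M_near < M_mid < 1.0         # M shrinks as beta -> 1/4+
```

Backward integration is the numerically stable direction here, since deviations from the $w\to1$ branch grow in forward $n$.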
A physically reasonable non-zero solution must cross the ordinate axis (and the $w=\beta^{-1}n$ line) at non-negative $w$. This solution exists if $\beta\geq1/4$. There is a critical point, $\beta_c=1/4$, where the solution is $$w_c(n,\beta=1/4) = 2n[1-f(n)] \, %%. \label{e11a}$$ with $f(n)$ satisfying $f(n\to 0)\to 0$ and the relation: $\ln[nf(n)] + 1/f(n) = \ln c$. Here the constant $c=1.554\ldots$ ensures that $w_c(n)$ (\[e11a\]) fits the corresponding solution of Eq. (\[e11\]) which approaches $1$ as $n \to \infty$. The form of the critical solution at small $n$ indicates the presence of the BKT singularity. Near $T_c$, the solution of Eq. (\[e11\]) is close to the critical one. In this range, the asymptotics of the solution at small $n$ satisfies the relation: $$\begin{aligned} && -\frac{1}{\sqrt{4\beta-1}}\arctan \frac{[2\beta w(n)/n]-1}{\sqrt{4\beta-1}} - \nonumber \\[5pt] && \ln\sqrt{n^2-w(n)n+\beta w^2(n)} =\text{const} \, %%, \label{e11b} \end{aligned}$$ ($\beta>1/4$). This asymptotics and the solution at large $n$ can be sewn together (see details in the full version of the present work). For obtaining the dependence of the full (relative) magnetization on $\beta$ near the critical $\beta_c=1/4$, we use the following procedure. (i) We substitute the boundary condition $w(n=\beta M)=M$ into relation (\[e11b\]). After expansion of the arctangent, we obtain the left-hand side of the relation below: $$\begin{aligned} && \!\!\!\!\!\!\!-\frac{\pi/2}{\sqrt{4\beta-1}} + 1 - \ln{\frac{M(\beta)}{4}} = %%\frac{1}{1-w(n)/(2n)} %%-\ln{[n-w(n)/2]} \nonumber \\[5pt] && \!\!\!\!\!\!\!\frac{\pi/2}{\sqrt{4\beta\!-\!1}} - \!\!\left[1\!-\!\frac{w(n)}{2n}\right]^{-1}\!\!\!\!\! - \ln{\!\left[n\!-\!\frac{w(n)}{2}\right]} \to \frac{\pi/2}{\sqrt{4\beta\!-\!1}} -\ln{c} %%\, .\end{aligned}$$ (ii) On the other hand, near $\beta=1/4$, in the region $\beta M \ll n \ll 1$, the main contribution to Eq. (\[e11b\]) gives the right-hand side of the equality above.
We also use the fact that the solution must approach the critical one as $\beta\to 1/4$. So, we obtain the full magnetization near $\beta_c=1/4$: $$M(\beta) \cong 4ce \exp\left( -\,\frac{\pi}{2\sqrt{\beta-1/4}} \right) \, , \label{e12}$$ where $4ce=16.90\ldots$, $e$ is Euler’s number. Note that this BKT behavior is a direct result of the specific singular form of Eq. (\[e11\]) at small $n$ and $w$. The behaviors of the magnetization and other main thermodynamic quantities near the phase transition are shown in Fig. \[f3\]. By using Eq. (\[e11\]), one can also find the coefficient of the term $x^{2\beta}$ in relation (\[e7\]). Near $T_c$, at small $x$, $$m(x) \cong 1 - 2\, e^{\beta[(2\pi/\sqrt{\beta-1/4}) - 13.06]}\, x^{2\beta} \, . \label{e13}$$ Here $H=0$. That is, as the temperature approaches $T_c$, $m(x)$ decreases with $x$ more and more rapidly. [*Specific heat and susceptibility.*]{}—Substituting result (\[e12\]) into formula (\[e6\]) for the free energy readily gives the specific heat, $tC(T) = - T\partial^2 F/\partial T^2$. $C(T>T_c)=0$, as is usual for mean-field theories. If $T<T_c$, $$C(T) = \frac{(\pi ce)^2}{8\,(\beta-1/4)^3} \exp\left( -\,\frac{\pi}{\sqrt{\beta-1/4}} \right) \, , \label{e15}$$ where $(\pi ce)^2/8=22.01\ldots$. Similarly, one can consider the case of a non-zero homogeneous magnetic field. Here we present the resulting expressions for the magnetic susceptibility: $$\begin{aligned} %%&& \chi(\beta>1/4)& = &\beta^{-1} - 1 \, , \nonumber \\[5pt] %%&& \chi(\beta<1/4) &= &(1-\sqrt{1-4\beta})/(1+\sqrt{1-4\beta}) %%\frac{1-\sqrt{1-4\beta}}{1+\sqrt{1-4\beta}} \, . \label{e16}\end{aligned}$$ There is a finite jump of the susceptibility at the phase transition point: $\chi[\beta\!=\!(1/4)^-]\!=\!1$ and $\chi[\beta\!=\!(1/4)^+]\!=\!3$ \[see Fig. \[f3\](c)\]. [*Response to a local magnetic field.*]{}—In networks, instead of correlations in space, one has to consider other options. 
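The numerical constants quoted above ($4ce=16.90\ldots$ in Eq. (\[e12\]), $(\pi ce)^2/8=22.01\ldots$ in Eq. (\[e15\])) and the susceptibility jump of Eq. (\[e16\]) can be reproduced in a few lines (a trivial check, not from the paper):

```python
import math

c, e = 1.554, math.e        # c comes from the critical solution, Eq. (e11a)

print(round(4 * c * e, 2))                   # amplitude in Eq. (e12): 16.9
print(round((math.pi * c * e) ** 2 / 8, 2))  # prefactor in Eq. (e15): 22.01

# Susceptibility on the two sides of the transition, Eq. (e16)
chi_ordered = lambda beta: 1.0 / beta - 1.0                      # beta > 1/4
chi_para = lambda beta: ((1.0 - math.sqrt(1.0 - 4.0 * beta))
                         / (1.0 + math.sqrt(1.0 - 4.0 * beta)))  # beta < 1/4
assert abs(chi_ordered(0.25 + 1e-9) - 3.0) < 1e-6   # jump to 3 below T_c
assert abs(chi_para(0.25 - 1e-9) - 1.0) < 1e-3      # jump from 1 above T_c
```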
In our case, to characterize the decrease of correlations between interacting spins, we use the distribution of linear responses to local magnetic fields. We apply a small field to the neighborhood of some point $y$: $$H(x,y) = h [\theta(x-(y-\Delta/2)) - \theta(x-(y+\Delta/2))] \, %%, \label{e17}$$ \[$\theta(x)$ is the theta-function, $h$ and $\Delta$ are small\] and find the change of the full magnetization induced by this field: $\mu(y) = \int_0^1 dx\,[m(x,y)-m(x)]$. Knowing $\mu(y)$ readily gives the distribution $P(\mu)$ of the response. Calculations are especially simple at $T>T_c$. One can find $m(x,y)$ by iterating Eq. (\[e3\]), which gives $$\mu(y) = \beta h\Delta \frac{2}{1+\sqrt{1-4\beta}}\, y^{-(1-\sqrt{1-4\beta})/2} \, . %%, \label{e18}$$ Note the power-law divergence of the linear response as $y\to0$. Relation (\[e18\]) results in the power-law response distribution: $$P(\mu) \propto \mu^{-[1 + 2/(1-\sqrt{1-4\beta})]} \, . \label{e19}$$ It is important that in contrast to “normal” continuous phase transitions, $P(\mu)$ is a power-law function in the entire phase and not only at $T_c$. As is natural, in the other phase, this distribution is a rapidly decreasing function. At the phase transition point, $P(\mu,\beta=1/4) \propto \mu^{-3}$. [*Discussion and conclusions.*]{}—Several points must be emphasised. \(i) The spins on the oldest vertices are oriented in most situations: $m(x\!=\!0)=1$ even above $T_c$, if any non-zero (positive) magnetic field is applied at least to one spin of the system. (ii) We considered networks with a specific annealing. The problem of a quenched disorder is more complex. However, there is a quenched situation, to which our results are applicable. Let each new vertex have a large number $N$—greater than the final size of the network or of this order—of new connections to randomly chosen vertices. Let each of these edges bear an Ising coupling equal to $1/N$.
Then we arrive at a situation similar to that shown in Fig. \[f1\]. \(iii) The phase transition found in this paper, as well as the structural transition considered in Refs. [@chk01; @dms01; @kkkr02; @l02; @bb03; @d03; @cb03; @br05; @kd04], seriously differs from the usual BKT transition. In our case, the analogue of the power-law correlations takes place in the normal phase. In contrast, in the traditional BKT transition, the power-law decay of correlations is in the phase with a non-zero order parameter. \(iv) We stress that the more traditional-looking transitions of Refs. [@ahs02; @dgm02; @lvv02; @g03; @cah02; @pv01] are realized in equilibrium networks where all vertices are statistically equivalent. In contrast, the networks, where we observed the transition with the BKT singularity, are specifically inhomogeneous. For our general conclusions, the specific $1/j$ form of the inhomogeneity of the interaction in the Hamiltonian (\[e1\]) is important only in the region of relatively small $j$. Deviations from this form at larger $j$ do not change the critical behavior. We studied a growing network, but the problem has been reduced to the Ising model on a compact system with strong long-range inhomogeneity. We believe that our results are applicable to other systems with inhomogeneity of this kind. Furthermore, the Ising model is only a simple example of cooperative models where the observed transition should be present. In conclusion, we have solved the ferromagnetic Ising model on a highly inhomogeneous growing net. In this system we have found an infinite order phase transition with the BKT singularity. This transition separates the phase where the distribution of the response to a local field is a power law from the phase where this distribution is rapidly decreasing. We suggest that this transition also occurs in other cooperative models on compact substrates with strong long-range inhomogeneity. This work was partially supported by projects POCTI.
SND thanks A. Samukhin for useful discussions and Service de Physique Théorique, CEA-Saclay for hospitality. [10]{} O. Costin, R.D. Costin and C.P. Grünfeld, J. Stat. Phys. [**59**]{}, 1531 (1990). O. Costin and R.D. Costin, J. Stat. Phys. [**64**]{}, 193 (1991). M. Bundaru and C.P. Grünfeld, J. Phys. A [**32**]{}, 875 (1999). S. Romano, Mod. Phys. Lett. B [**9**]{}, 1447 (1995). L. Onsager, Phys. Rev. [**65**]{}, 117 (1944). A. Aleksiejuk, J.A. Holyst, and D. Stauffer, Physica A [**310**]{}, 260 (2002). S.N. Dorogovtsev, A.V. Goltsev, and J.F.F. Mendes, Phys. Rev. E [**66**]{}, 016104 (2002). M. Leone, A. Vázquez, A. Vespignani, and R. Zecchina, Eur. Phys. J. B [**28**]{}, 191 (2002). G. Bianconi, Phys. Lett. A [**303**]{}, 166 (2002). The complex architectures of the equilibrium networks may change the values of critical exponents and increase the order of the phase transition. However, they do not change dramatically the type of critical singularity. R. Cohen, K. Erez, D. ben-Avraham, and S. Havlin, Phys. Rev. Lett. [**85**]{}, 4626 (2000); R. Cohen, D. ben-Avraham, and S. Havlin, Phys. Rev. E [**66**]{}, 036113 (2002). R. Pastor-Satorras and A. Vespignani, Phys. Rev. Lett. [**86**]{}, 3200 (2001). V.L. Berezinskii, Sov. Phys. JETP [**32**]{}, 493 (1971). J.M. Kosterlitz and D.J. Thouless, J. Phys. C [**6**]{}, 1181 (1973). D.S. Callaway, J.E. Hopcroft, J.M. Kleinberg, M.E.J. Newman, and S.H. Strogatz, Phys. Rev. E [**64**]{}, 041902 (2001). S.N. Dorogovtsev, J.F.F. Mendes, and A.N. Samukhin, Phys. Rev. E [**64**]{}, 066110 (2001). J. Kim, P.L. Krapivsky, B. Kahng, and S. Redner, Phys. Rev. E [**66**]{}, 055101 (2002). D. Lancaster, J. Phys. A: Math. Gen. [**35**]{}, 1179 (2002). M. Bauer and D. Bernard, J. Stat. Phys. [**111**]{}, 703 (2003). R. Durrett, in [*Discrete Random Walks, DRW’03*]{}, C. Banderier and C. Krattenthaler (eds.), Discrete Mathematics and Theoretical Computer Science Proceedings AC, p. 95. S. Coulomb and M. Bauer, Eur. Phys. J.
B [**35**]{}, 377 (2003). B. Bollobás, S. Janson, and O. Riordan, Random Struct. Algor. [**26**]{}, 1 (2005). P.L. Krapivsky and B. Derrida, Physica A [**340**]{}, 714 (2004).
--- abstract: 'We prove that for every bounded linear operator $T:X\to X$, where $X$ is a non-reflexive quotient of a von Neumann algebra, the point spectrum of $T^*$ is non-empty (i.e. for some $\lambda\in\mathbb C$ the operator $\lambda I-T$ fails to have dense range.) In particular, and as an application, we obtain that such a space cannot support a topologically transitive operator.' address: - | Departamento de Análisis Matemático\ Universidad de La Laguna\ 38271 La Laguna (Tenerife)\ Canary Islands\ Spain. - | Department of Mathematics\ University of Missouri-Columbia\ Columbia, MO 65211\ USA author: - Teresa Bermúdez - 'N. J. Kalton' title: The range of operators on von Neumann algebras --- [^1] Introduction ============ The results in this note are motivated by a question related to hypercyclic operators. In [@Godefroy-Shapiro] G. Godefroy and J. Shapiro suggest an extension of the notion of a hypercyclic operator to Banach spaces which are not necessarily separable, via the notion of topologically transitive operators (see Section \[applications\] below). Every Hilbert space supports a topologically transitive operator (see the example due to J. Shapiro in Section \[applications\].) Recently, it has been shown by S. Ansari [@ansari] and L. Bernal [@bernal] that every separable Banach space supports a hypercyclic operator, so it is natural to ask whether every Banach space supports a topologically transitive operator. It is well-known that if $T$ is hypercyclic then the adjoint operator $T^*$ has empty point spectrum, $\sigma_p(T^*),$ [@herrero] and [@kitai]; this extends to topologically transitive operators (Proposition \[espectro-puntual\]). Thus we are led to the question of whether there exist complex Banach spaces so that for every operator $T$ we have $\sigma_p(T^*)\neq\emptyset.$ Such an example exists in the literature, [@Shelah] and [@Shelah2]. However, we show here that there are much more natural examples.
If $X$ is any von Neumann algebra (or even a non-reflexive quotient of a von Neumann algebra), then any operator $T$ on $X$ has $\sigma_p(T^*)\neq \emptyset.$ In particular this holds if $X=\ell_{\infty}$ or $X=\mathcal L(\ell_2).$ We note that hypercyclicity with respect to the strong-operator topology on $\mathcal L(\ell_2)$ has been considered in [@chan] and [@montes]. Our main result is rather stronger in that we show that if $X$ is a non-reflexive quotient of a von Neumann algebra, then for any operator $T$ we have that the quotient space $X/\overline{{\mathcal R}(\lambda-T)}$ contains a copy of $\ell_{\infty}$ and is in particular non-separable. Let us point out by way of further motivation that any operator $T$ on $\ell_1$ satisfies $\sigma_p(T^{**})\neq\emptyset$, since if $\lambda$ is in the approximate point spectrum of $T$ then it is in the point spectrum of $T^{**}$ by an argument depending on the Schur property of $\ell_1$ (this was shown to us by M. González). This is suggestive of the main result in the case $X=\ell_{\infty}.$ Our arguments depend on two Banach space concepts, which we now introduce. A projection $P$ on a Banach space $X$ is an L-projection if $\|x\|=\|Px\|+\|x-Px\|$ for any $x\in X.$ A Banach space $X$ is said to be [*L-embedded*]{} if there is an L-projection of $X^{**}$ onto $X$, i.e. if there is a projection $\Pi:X^{**}\to X$ so that we have: $$\|x^{**}\|=\|x^{**}-\Pi x^{**}\|+\|\Pi x^{**}\| \qquad \mbox{ for } x^{**}\in X^{**}.$$ For the basic facts on L-embedded spaces we refer to [@HWW] Chapter IV. A Banach space $X$ is called a [*Grothendieck space*]{} if every bounded operator $T:X\to Y$ with separable range is weakly compact. This is equivalent to requiring that if $\{x_n^*\}_{n\in \mathbb N} $ is a weak$^*$-null sequence in $X^*$, then it is also weakly null. Any von Neumann algebra is a Grothendieck space [@P] and its dual is L-embedded [@Takesaki], [@HWW].
We also recall that a Banach space $X$ is called an [*Asplund space*]{} if every separable subspace has separable dual (this is equivalent to the original definition, [@DGZ] Theorem 5.7, p.29). Most of our notation is standard. We will use $B_X$ to denote the closed unit ball of a Banach space $X$. If $F$ is a subset of $X$ then $\langle F\rangle$ denotes its linear span. We would like to thank M. González, J. Shapiro and D. Werner for helpful comments. Main results ============ We use repeatedly the following principle: \[lema\][@Wojtaszczyk II.E.15] Let $X$ be a Banach space and suppose $\{C_k\}_{k=1}^n$ is a finite set of convex sets. Suppose $D_k$ is the weak$^*$-closure of $C_k$ in $X^{**}.$ If $\cap_{k=1}^nD_k\neq \emptyset$ then for any $\epsilon>0$ there exists $x\in C_1$ with $d(x,C_k)<\epsilon$ for $k=2,3\ldots,n.$ We will also need the following well-known variant of the Hahn-Banach Theorem. \[HB\] Let $X$ be a Banach space and suppose $F$ is a finite-dimensional subspace of $X^*$. If $\psi$ is a linear functional on $F$ with $\|\psi\|<1$ then there exists $x\in X$ with $\|x\|<1$ and $x^*(x)=\psi(x^*)$ for $x^*\in F.$ This can be proved directly or from Lemma \[lema\]. Let $C_1=\{x\in X:\ x^*(x)=\psi(x^*)\ \forall x^*\in F\}$ and $C_2=\{x\in X: \ \|x\|\le \|\psi\|\}.$ Then, by the Hahn-Banach Theorem, the weak$^*$-closure $D_1$ of $C_1$ is the set $\{x^{**}\in X^{**}:\ x^{**}(x^*)=\psi(x^*)\ \forall x^*\in F\}$. By an application of the Hahn-Banach Theorem and Goldstine’s Theorem ([@megginson] Theorem 2.6.26, p. 232) $D_1$ meets the weak$^*$-closure of $C_2$ so that we can apply Lemma \[lema\]. \[oneone\] Suppose $T:X\to Y$ is a bounded linear operator. Then the following properties are equivalent:(1) ${\mathcal N}(T^{**})=\{0\}.$(2) If $\{x_n\}_{n\in \mathbb N}\subset X$ is a bounded sequence such that $\displaystyle\lim_{n\to\infty}\|Tx_n\| =0$ then $\displaystyle\lim_{n\to\infty}x_n=0$ weakly. \(1) implies (2). 
Clearly $0$ is the only weak$^*$-cluster point of $\{x_n\}_{n\in \mathbb N}$ in $X^{**}$ and so $\displaystyle\lim_{n\to \infty} x_n=0$ weakly. \(2) implies (1). Assume that for some $x^{**}\neq 0$ with $\| x^{**}\|=1$, we have $T^{**}x^{**}=0.$ Pick $x^*\in X^{*}$ with $x^{**}(x^*)=1.$ Then for each $n$ the sets $C_1=\{x: \|x\|\le 1\}$, $C_2=\{x: x^*(x)\ge 1\}$ and $C_3=\{x:\|Tx\|\le n^{-1}\}$ satisfy the conditions of Lemma \[lema\], so we pick $\{x_n\}_{n\in \mathbb N}\subset X$ with $\|Tx_n\|\le n^{-1}$, $\|x_n\|\le 2$ and $x^*(x_n)\ge \frac12,$ contradicting (2). Now if $T:X\to Y$ is a bounded linear operator we denote by $\hat T$ the induced operator $\hat T:X^{**}/X\to Y^{**}/Y.$ \[lowerbound\] Suppose $T:X\to Y$ is a bounded operator. Then the following are equivalent: (1) There exists a sequence $\{\xi_n\}_{n\in \mathbb N}\subset X^{**}/X$ such that $\|\xi_n\|=1$ and $\displaystyle \lim_{n\to \infty}\|\hat T\xi_n\|=0.$ (2) There exists a bounded sequence $\{x^{**}_n\}_{n\in \mathbb N}\subset X^{**}$ such that $d(x^{**}_n,X)=1$ and $\displaystyle\lim_{n\to \infty} \|T^{**}x_n^{**}\|=0.$ We need only prove that (1) implies (2). Pick $w_n^{**}\in\xi_n$ with $\|w_n^{**}\|\le 2.$ Let $\epsilon_n:=\|\hat T\xi_n\|+\frac1n.$ Then there exists $u_n\in X$ with $\|T^{**}w_n^{**}-u_n\| <\epsilon_n.$ Since $T^{**}w_n^{**}$ is in the weak$^*$-closure of both $u_n+ \epsilon_nB_X$ and $2T(B_X)$, there exists $v_n\in X$ with $\|v_n\|\le 2$ and $\|Tv_n-u_n\|\le 2\epsilon_n.$ Thus $\|T^{**}(w_n^{**}-v_n)\|\le 3\epsilon_n.$ Letting $x_n^{**}:=w_n^{**}-v_n$ we are done. \[lowerbound2\] Suppose $X$ is a subspace of an L-embedded Banach space $V$, and $Y$ is any Banach space.
Suppose $T:X\to Y$ is a bounded linear operator such that ${\mathcal N}(T^{**})\subset X.$ Then there exists $\delta>0$ so that for all $\xi\in X^{**}/X$ we have $\|\hat T\xi\|\ge \delta\|\xi\|.$ We start by proving the Theorem in the special case when ${\mathcal N}(T^{**})=\{0\}.$ Suppose the conclusion is false. Using Proposition \[lowerbound\] we produce a bounded sequence $\{x_n^{**}\}_{n\in \mathbb N}\subset X^{**}$ with $d(x_n^{**},X)=1$ but $\lim_{n\to \infty } \|T^{**}x_n^{**}\|=0.$ We can regard $X^{**}$ as a subspace of $V^{**}$. Now let $\delta_n=d(x_n^{**},V).$ For fixed $n$, if $\rho>\delta_n$, then $x_n^{**}$ is in the weak$^*$-closure of both $X$ and $v+\rho B_V$ for some $v\in V.$ Hence there is $y\in v+\rho B_V$ such that $d(y,X) \le \rho$ by an application of Lemma \[lema\] and so $d(x_n^{**},X)\le 2\rho.$ We conclude that $\delta_n\ge \frac12$ for each $n\in\mathbb N.$ Let us denote by $\Pi$ the $L$-projection of $V^{**}$ onto $V,$ and let $V_s=\text{ker }\Pi.$ Let $v_n:=\Pi x_n^{**}$ and $v_n^{**}:=x^{**}_n-v_n.$ Then $v_n^{**}\in V_s$ and $\|v_n^{**}\|=\delta_n\ge \frac12.$ Let $a:=\sup _{n\in \mathbb N} \|x^{**}_n\|$ and $\eta_n:=\|T^{**}x_n^{**}\|+\frac1n$. We shall define inductively a sequence $\{x_n\}_{n\in \mathbb N}$ in $X,$ and a sequence $\{x_n^*\}_{n\in \mathbb N}$ in $X^*$ such that: $$\label{one} \|x_n\|\le a \qquad n\in\mathbb N$$ $$\label{onea} \|Tx_n\|<\eta_n$$ $$\label{two} \|x^*_n\|<1 \qquad n\in\mathbb N$$ $$\label{three} |x^*_n(x_k)|\ge \frac18 \quad 1\le k\le n .$$ Let us suppose $n\in\mathbb N$ and $\{x_k\}_{k<n},$ and $\{x^*_k\}_{k<n}$ have been determined and satisfy (\[one\]), (\[onea\]), (\[two\]) and (\[three\]); if $n=1$ these sets are empty of course. We shall determine $x_n$ and $x_n^*$. 
Let $ F:=\langle \{x_1,\ldots,x_{n-1},v_n\}\rangle$ and $G:=\langle \{x_1,\ldots,x_{n-1},v_n,v_n^{**}\}\rangle.$ If $n>1$ we define $\psi=\psi_n\in F^*$ by taking $\psi$ to be a norm-preserving extension of $x^*_{n-1}|_{F\cap X};$ if $n=1$ we simply let $\psi=0.$ Then $\|\psi\|<1.$ Let $\psi(v_n)=re^{i\theta}$ where $0\le \theta<2\pi$ and $r\ge 0.$ We next define $\varphi\in G^*$ to be the extension of $\psi$ such that $\varphi(v_n^{**})=\frac14e^{i\theta}.$ We claim that $\|\varphi\|<1.$ In fact, if $u^{**}\in G$ then we can write $u^{**}=u+ \mu v_n^{**}$ where $\mu\in\mathbb C$ and $u\in F$. Then $$\begin{aligned} |\varphi(u^{**})|&\le |\psi(u)|+\frac14|\mu|\\ &\le \|\psi\|\|u\|+\frac12 |\mu|\|v_n^{**}\| \\ &\le \max(\frac12,\|\psi\|)\|u^{**}\|<\| u^{**}\|.\end{aligned}$$ Now by Lemma \[HB\] we can define $v^*\in V^*$ with $\|v^*\|<1$ and $u^{**}(v^*)=\varphi(u^{**})$ for $u^{**}\in G.$ Let $x^*_{n}$ be the restriction of $v^*$ to $X$. Now consider the sets $C_1=\{x:\|x\|\le a\},\ C_2=\{x:\|Tx\|\le \|T^{**}x_n^{**}\|\}$ and $C_{3}=\{x:\ x^*_{n}(x)=x^{**}_n(x^*_{n}) \}.$ Clearly $x^{**}_n$ belongs to the weak$^*$-closure of each set. By Lemma \[lema\] we can find $x_n\in C_1$ with $\|Tx_n\|<\eta_n,$ and so that $$|x^*_{n}(x_n)|> |x^{**}_n(x^*_n)|-\frac18.$$ It is now clear that (\[one\]), (\[onea\]) and (\[two\]) hold. For (\[three\]) note that if $k<n$ we have $x^*_n(x_k)=x^*_{n-1}(x_k)$ while $$|x_n^*(x_n)|\ge |x_n^{**}(x_n^*)|-\frac18 = (\frac14+r)-\frac18\ge \frac18.$$ Now the proof is completed (for the special case ${\mathcal N}(T^{**})=\{0\}$) by observing that if $x^*$ is any weak$^*$-cluster point of the sequence $\{x_n^*\}_{n\in \mathbb N}$ then $|x^*(x_n)|\ge \frac18$ for all $n.$ Since $\lim_{n\to\infty}\|Tx_n\|=0$ while $x_n$ does not converge weakly to $0$, this contradicts Proposition \[oneone\]. To treat the general case suppose $R={\mathcal N}(T^{**})={\mathcal N}(T).$ Then $R$ is reflexive.
Consider the induced map $T_0:X/R\to Y$; clearly ${\mathcal N}(T_0^{**})=\{0\}.$ We next note that $X/R$ embeds into $V/R$ and $V/R$ is L-embedded [@HWW] p.160. Hence $\hat T_0$ satisfies a lower bound on $Z=(X/R)^{**}/(X/R).$ However it is easily seen that $Z$ coincides with $X^{**}/X$ and $\hat T_0=\hat T.$ We next need some facts about Grothendieck spaces: \[Kernels\] Suppose $Y$ is a Grothendieck space and that $T:X\to Y$ is a bounded linear operator such that $T^{*}$ is one-one. Then $T^{***}$ is one-one. Suppose $\{y_n^*\}_{n\in \mathbb N}\subset Y^*$ is a bounded sequence such that $\lim_{n\to \infty }\|T^*y_n^*\|=0.$ Let $y^*$ be any weak$^*$-cluster point of $\{y_n^*\}_{n\in \mathbb N}$. Then $T^*y^*=0$ so that $y^*=0.$ Therefore $\displaystyle \lim_{n\to\infty} y_n^*=0$ weak$^*$. But since $Y$ is a Grothendieck space this implies $\displaystyle \lim_{n\to \infty } y_n^*=0$ weakly and we can apply Proposition \[oneone\]. \[Grothendieck\] Suppose $X$ is a Grothendieck space and $Y$ is a subspace of $X$ so that $X/Y$ is reflexive. Then $Y$ is a Grothendieck space. Suppose $T:Y\to c_0$ is any bounded operator. Then we may find a Banach space $E\supset c_0$ with $E/c_0\cong X/Y$ and an extension $\tilde T: X\to E.$ We claim $E$ is an Asplund space. Indeed if $F$ is a separable subspace of $E$ then let $F'$ be the closure of $c_0+F$ which is also separable. Then $F'/c_0$ is separable and reflexive so that since $c_0^*\cong \ell_1$ is separable, $F'$ has separable dual. Now it follows from a deep result of Hagler and Johnson [@haglerjohnson] (see also [@diestel]) that $B_{E^*}$ is weak$^*$-sequentially compact. Hence if $(e_n^*)$ is any sequence in $B_{E^*}$ there is a subsequence $(f_n^*)$ so that $\tilde T^*f_n^*$ is weak$^*$ and hence weakly convergent in $X^*.$ Thus $\tilde T$ is weakly compact by Gantmacher’s theorem (see [@megginson] Theorem 3.5.13 p. 343) and in particular $T$ is weakly compact. 
\[reflexive\] Suppose $X$ and $Y$ are Banach spaces and $Y$ is a Grothendieck space. Suppose $T:X\to Y$ is a bounded operator such that $Y/\overline{{\mathcal R}(T)}$ is reflexive. Then ${\mathcal N}(T^{***})\subset Y^*.$ Let $Y_0=\overline{{\mathcal R}(T)}.$ Then by Proposition \[Grothendieck\] $Y_0$ is also a Grothendieck space. We write $T=JT_0$ where $J:Y_0\to Y$ is the inclusion map and $T_0:X\to Y_0$. Clearly $(Y/Y_0)^*\cong {\mathcal N}(T^*)$ is reflexive. We observe that $T_0^*$ is one-one and by Proposition \[Kernels\] we obtain that $T_0^{***}$ is also one-one. Now, since $Y/Y_0$ is reflexive, this implies ${\mathcal N}(T^{***})= {\mathcal N}(J^{***})={\mathcal N}(J^*)\subset Y^*$ as required. \[main\] Let $X$ be a non-reflexive complex Banach space which is a Grothendieck space such that $X^*$ is isometric to a subspace of an $L$-embedded space. Suppose $T:X\to X$ is a bounded linear operator. Then there exists $\lambda\in\mathbb C$ so that $X/\overline{{\mathcal R}(\lambda -T)}$ is non-reflexive (and hence non-separable). In particular the point spectrum $\sigma_p(T^*)$ is non-empty. Let $S=T^*$. Then since $X$ is non-reflexive, the operator $\hat S$ has non-empty spectrum and furthermore for any $\lambda$ in the boundary $\partial \sigma(\hat S)$ there is a sequence $\xi_n\in X^{***}/X^*$ with $\|\xi_n\|=1$ so that $\lim_{n\to\infty}\| (\lambda -\hat S)\xi_n\|=0.$ This implies that for $\lambda\in\partial \sigma (\hat S)$ we have ${\mathcal N}((\lambda-S)^{**})$ is not contained in $X^*$ by Theorem \[lowerbound2\]. Then we apply Theorem \[reflexive\] and deduce that $X/\overline{{\mathcal R}(\lambda -T)}$ is non-reflexive. By Proposition \[Kernels\] we have that $\lambda \in \sigma _p(T^*)$. Our main example for Theorem \[main\] is when $X$ is a von Neumann algebra. The fact that von Neumann algebras have the Grothendieck property is a recent result of Pfitzner [@P]. In fact slightly more follows from Pfitzner’s work.
\[Pfitzner\] Let $A$ be a von Neumann algebra and suppose $T:A\to Y$ fails to be weakly compact; then there is a closed subspace $E$ of $A$ such that $T|_E$ is an isomorphism and $E$ is isomorphic to $\ell_{\infty}.$ Suppose $T$ fails to be an isomorphism on any subspace isomorphic to $\ell_{\infty}.$ Let $A_0$ be any maximal Abelian subalgebra of $A.$ Then it follows from classical results of Rosenthal [@R] that $T$ is weakly compact on $A_0$; by Pfitzner’s theorem ([@P], Theorem 1; see also Corollary 10), $T$ is weakly compact. \[main2\] Let $X$ be a non-reflexive quotient of a von Neumann algebra, and let $T:X\to X$ be any bounded linear operator. Then there exists $\lambda\in\mathbb C$ so that $X/\overline{{\mathcal R}(\lambda -T)}$ contains an isomorphic copy of $\ell_{\infty}$ and hence ${\mathcal N}(\lambda -T^*)$ contains an isomorphic copy of $\ell_{\infty}^*.$ In particular the point spectrum $\sigma_p(T^*)$ is non-empty. The dual of any $C^*$-algebra is $L$-embedded ([@Takesaki], [@HWW]) and so it follows from the work of Pfitzner [@P] that $X$ satisfies the hypotheses of Theorem \[main\]. Proposition \[Pfitzner\] implies that if $X/\overline{{\mathcal R}(\lambda -T)}$ is non-reflexive then it contains a complemented isomorphic copy of $\ell_\infty$. Since $\left(X/\overline{{\mathcal R}(\lambda -T)}\right) ^*\cong {\mathcal N}(\lambda -T^*)$, there exists in ${\mathcal N}(\lambda -T^*)$ an isomorphic copy of $(\ell_\infty)^*$. Applications to hypercyclic operators {#applications} ===================================== A bounded linear operator $T$ on a complex Banach space $X$ is called [*hypercyclic*]{} if there is a vector $x\in X$ (called a [*hypercyclic vector for $T$*]{}) such that $\{ T^nx\;\;:\;\; n\in \mathbb N\}$ is dense in $X$. This concept is related to the problem of the existence of proper closed invariant subsets for a bounded linear operator.
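To make the definition concrete, here is a small numerical sketch (our own illustration, not taken from the paper) of the mechanism behind hypercyclicity for twice the backward shift on $\ell_2$, the operator used in an example later in this section: for any finitely supported target $y$, the vector $S^ny$, where $S$ is half the forward shift, has norm $2^{-n}\|y\|$ yet is mapped back exactly onto $y$ by $T^n$.

```python
def T(x):
    """Twice the backward shift: (x1, x2, ...) -> 2*(x2, x3, ...)."""
    return [2 * v for v in x[1:]]

def S(x):
    """Half the forward shift, a right inverse of T."""
    return [0.0] + [v / 2 for v in x]

def iterate(f, x, n):
    for _ in range(n):
        x = f(x)
    return x

y = [1.0, 2.0, 3.0]           # a finitely supported target vector
n = 10
x = iterate(S, y, n)          # a vector of norm 2**-n * ||y|| ...
norm = lambda v: sum(t * t for t in v) ** 0.5
assert norm(x) == 2 ** -n * norm(y)   # exact: all scalars are dyadic
assert iterate(T, x, n) == y  # ... that T maps back exactly onto y
```

This is exactly why orbits of suitable vectors can be made dense: points arbitrarily close to $0$ are carried by iterates of $T$ onto any prescribed finitely supported point.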
It is an open problem whether every bounded linear operator on a Hilbert space has a proper closed invariant subset, or equivalently whether every operator has a non-zero vector which is not hypercyclic. We refer to [@Grosse] for an excellent survey. We note that a non-separable Banach space cannot support a hypercyclic operator. An approach to obtaining something similar to hypercyclicity in non-separable Hilbert and Banach spaces was given by K. Chan [@chan] and A. Montes and C. Romero [@montes], respectively. In fact, they give certain “hypercyclicity” results in ${\mathcal L}(X)$ where $X$ is a separable Banach space, using the strong operator topology in place of the standard uniform norm topology. It is however possible to extend the notion of hypercyclic operators to non-separable Banach spaces in a natural way using a result of [@Godefroy-Shapiro]. Let us say that an operator $T$ on an arbitrary Banach space is [*topologically transitive*]{} if for every pair $U,\;V$ of non-void open subsets of $X$, there exists a positive integer $n$ such that $T^n(U) \cap V\neq \emptyset.$ In Theorem 1.2 of [@Godefroy-Shapiro] it is proved that if $X$ is a separable Banach space, then $T$ is hypercyclic if and only if $T$ is topologically transitive. The following Proposition is immediate. \[Toptrans\] A bounded linear operator $T$ is topologically transitive if and only if every proper closed invariant subset has empty interior. An argument similar to the result due to J. Bés and A. Peris [@Bes-Peris] provides a sufficient condition for topological transitivity. \[hc\] [*(Topological Transitivity Criterion)*]{} Let $T$ be a bounded linear operator on a complex Banach space $X$ (not necessarily separable). Suppose that there exists a strictly increasing sequence of positive integers $\{ n_k\}_{k\in \mathbb N}\subset \mathbb N$ for which there are: 1. A dense subset $X_0\subset X$ such that $ T^{n_k}x\to 0$ for every $x\in X_0$. 2.
A dense subset $Y_0\subset X$ and a sequence of mappings $S_{k}:Y_0 \to X$ such that 1. $ S_{k}y\to 0$ for every $y\in Y_0$. 2. $T^{n_k}S_{k}y\to y$ for every $y\in Y_0$. Then $T$ is topologically transitive. [*Example.*]{} The following example was suggested by J. Shapiro. Let us use Proposition \[hc\] to show that there is a topologically transitive operator on any Hilbert space. If $H$ is separable, then the result is clear by S. Ansari [@ansari] and L. Bernal [@bernal]. If $H$ is a non-separable Hilbert space we write $H=\ell_2(X)$ where $X$ is a Hilbert space of the same density character. Define $T$ as twice the backward shift on $\ell_2(X)$, that is $$T(x_1, x_2, \dots ):=2(x_2, x_3, \dots ).$$ Using Proposition \[hc\], we have that $T$ is topologically transitive taking $n_k=k,$ $$X_0:=\{ \mbox{ finitely non-zero sequences in } \ell_2(X)\}$$ $$Y_0:=\ell_2(X),$$ $$S(x_1, x_2, \dots ):=\frac{1}{2} (0, x_1, x_2, \dots )$$ and $S_{k}:=S^{k}$. Clearly this example can be modified to replace $H$ by any space $\ell_p(I)$ where $1\le p<\infty.$ It has been shown by S. Ansari [@ansari] and L. Bernal [@bernal] that any separable complex Banach space supports a hypercyclic operator. Recently, J. Bonet and A. Peris gave a version for ${\cal F}$-spaces [@bonet-peris]. This suggests the corresponding problem of determining whether every complex Banach space supports a topologically transitive operator. This question has a negative answer. In order to see this, we need to give a spectral property of topologically transitive operators, which is well-known in the case of hypercyclic operators [@kitai] and [@herrero]. \[espectro-puntual\] Let $T$ be a bounded linear operator on a complex Banach space. If $T$ is topologically transitive, then $\sigma_p(T^*)$ is empty. If $\lambda\in\sigma_p(T^*)$ and $x^*$ is a corresponding eigenvector then one of the sets $\{x:|x^*(x)|\ge 1\}$ or $\{x:|x^*(x)|\le 1\}$ is an invariant set with non-empty interior. 
Then use Proposition \[Toptrans\]. As pointed out in the introduction, the examples of [@Shelah] and [@Shelah2] of non-separable spaces such that every bounded operator is a perturbation of a multiple of the identity by an operator with separable range give examples of spaces which support no topologically transitive operators. However, the following theorem shows that $\ell_{\infty}$ and $\mathcal L(\ell_2)$ are more natural examples, where $\mathcal L(\ell_2)$ denotes the algebra of all bounded linear operators on $\ell_2.$ \[appli\] Let $X$ be a non-reflexive quotient of a von Neumann algebra. Then $X$ does not support a topologically transitive operator. In particular $\mathcal L(\ell_2)$ and $\ell_{\infty}$ do not support a topologically transitive operator. Just apply Theorem \[main2\] and Proposition \[espectro-puntual\].   We conclude with a remark on ultrapowers. We recall some concepts about ultrapowers of Banach spaces and operators. See [@Heinrich] for more information. We fix a non-trivial ultrafilter $\mathcal U$ on the set $\mathbb N$ of all positive integers. For every Banach space $X$, we consider the Banach space $\ell_{\infty}(X)$ of all bounded sequences $(x_n)$ in $X$, endowed with the norm $\|(x_n)\|_{\infty}:= \sup\{\|x_n\|: n\in \mathbb N\}$. Let $N_{\mathcal U}(X)$ be the closed subspace of all sequences $(x_n)\in\ell_{\infty}(X)$ which converge to $0$ following $\mathcal U$. The [*ultrapower of $X$ following $\mathcal U$*]{} is defined as the quotient $$X_{\mathcal U}:=\frac{\ell_{\infty}(X)}{N_{\mathcal U}(X)}.$$ The element of $X_{\mathcal U}$ containing the sequence $(x_n)\in \ell_{\infty}(X)$ as a representative is denoted by $[x_n]$. Its norm in $X_{\mathcal U}$ is given by $$\bigl\|[x_n]\bigr\|=\lim_{\mathcal U}{\| x_n\|}.$$ The constant sequences generate a subspace of $X_{\mathcal U}$ isometric to $X$. So we can consider the space $X$ embedded in $X_{\mathcal U}$.
Moreover, every operator $T\in L(X, Y)$ admits an extension $T_{\mathcal U}\in L(X_{\mathcal U},Y_{\mathcal U})$, defined by $$T_{\mathcal U}([x_n]) := [T x_n],\,\,\, [x_n]\in X_{\mathcal U}.$$ An easy argument with ultrapowers shows that the ultrapower of an operator is never topologically transitive. Let $\mathcal U$ be an ultrafilter, $X$ a complex Banach space and $T$ any bounded linear operator on $X$. Then $T_\mathcal U$ is not topologically transitive. We note that any $\lambda\in\partial \sigma(T)$ is in the approximate point spectrum of $T^*$, i.e. there exists a sequence $\{x_n^*\}_{n\in \mathbb N}$ in $X^*$ with $\|x_n^*\|=1$ and $\lim_{n\to\infty}\| \lambda x_n^*-T^*x_n^*\|=0.$ Now let $\xi^*\in X_{\mathcal U}^*$ be defined by $\xi^*([x_n])=\displaystyle \lim_{\mathcal U}x_n^*(x_n).$ Then $\xi^*$ is non-zero (evaluate it at $[x_n]$ for a choice of $x_n\in B_X$ with $x_n^*(x_n)\ge \frac12$) and $T_{\mathcal U}^*\xi^*=\lambda \xi^*,$ so $\lambda\in\sigma_p(T_{\mathcal U}^*)\neq \emptyset$ and we can apply Proposition \[espectro-puntual\].   We conclude with the following open question: [*Is there any characterization of non-separable Banach spaces which support a topologically transitive operator?*]{} [WW]{} S. Ansari, [*Existence of hypercyclic operators on topological vector spaces,*]{} J. Funct. Anal. [**148**]{} (1997), 384–390. L. Bernal-González, [*On hypercyclic operators on Banach spaces,*]{} Proc. Amer. Math. Soc. [**127**]{} (1999), 1003–1010. J. Bés and A. Peris, [*Hereditarily hypercyclic operators,*]{} J. Funct. Anal. [**167**]{} (1999), 94–112. J. Bonet and A. Peris, [*Hypercyclic operators on non-normable Fréchet spaces,*]{} J. Funct. Anal. [**159**]{} (1998), 587–595. K.C. Chan, [*Hypercyclicity of the operator algebra for a separable Hilbert space,*]{} J. Operator Theory [**42**]{} (1999), 231–244. R. Deville, G. Godefroy and V. Zizler, [*Smoothness and renormings in Banach spaces,*]{} Pitman Monographs no. 64, Longman, London 1993. J. Diestel, [*Sequences and series in Banach spaces,*]{} Springer Verlag, Berlin 1984. G.
Godefroy and J.H. Shapiro, [*Operators with dense, invariant, cyclic vector manifolds,*]{} J. Funct. Anal. [**98**]{} (1991), 229–269. K. Grosse-Erdmann, [*Universal families and hypercyclic operators,*]{} Bull. Amer. Math. Soc. [**36**]{} (1999), 345–381. J. Hagler and W.B. Johnson, [*On Banach spaces whose dual balls are not weak$^*$-sequentially compact,*]{} Israel J. Math. [**28**]{} (1977), 325–330. P. Harmand, D. Werner and W. Werner, [*M-ideals in Banach spaces and Banach algebras,*]{} Springer Lecture Notes 1547, Springer, Berlin 1993. S. Heinrich, [*Ultraproducts in Banach space theory,*]{} J. Reine Angew. Math. [**313**]{} (1980), 72–104. D.A. Herrero, [*Hypercyclic operators and chaos,*]{} J. Operator Theory [**28**]{} (1992), 93–103. C. Kitai, [*Invariant Closed Sets for Linear Operators,*]{} Ph.D. thesis, Univ. of Toronto, 1982. R.E. Megginson, [*An introduction to Banach space theory,*]{} Graduate Texts no. 183, Springer Verlag, New York 1998. A. Montes-Rodríguez and C. Romero-Moreno, [*Supercyclicity in the operator algebra,*]{} preprint. H. Pfitzner, [*Weak compactness in the dual of a $C\sp *$-algebra is determined commutatively,*]{} Math. Ann. [**298**]{} (1994), 349–371. H.P. Rosenthal, [*On relatively disjoint families of measures, with some applications to Banach space theory,*]{} Studia Math. [**37**]{} (1970), 13–36. S. Shelah, [*A Banach space with few operators,*]{} Israel J. Math. [**30**]{} (1978), 181–191. S. Shelah and J. Steprans, [*A Banach space on which there are few operators,*]{} Proc. Amer. Math. Soc. [**104**]{} (1988), 101–105. M. Takesaki, [*On the conjugate space of an operator algebra,*]{} Tohoku Math. J. [**10**]{} (1958), 194–203. P. Wojtaszczyk, [*Banach spaces for analysts,*]{} Cambridge University Press, Cambridge 1991. [^1]: The first author was supported by DGICYT Grant PB 97-1489 (Spain); the second was supported by NSF grant DMS-9870027
--- abstract: 'Let $A$ be an infinitely generated free abelian group. We prove that the automorphism group $\aut A$ first-order interprets the full second-order theory of the set $|A|$ with no structure. In particular, this implies that the automorphism groups of two infinitely generated free abelian groups $A_1,A_2$ are elementarily equivalent if and only if the sets $|A_1|,|A_2|$ are second-order equivalent.' address: | Department of Mathematics\ Yeditepe University\ 34755 Kayişdaği\ Istanbul\ Turkey author: - Vladimir Tolstykh title: | What does the automorphism group\ of a free abelian group $A$ know about $A$? --- [^1] Introduction {#introduction .unnumbered} ============ In his paper [@ShCler] of 1976 Shelah proved that the elementary theories of the endomorphism semi-groups of free algebras of ‘large’ infinite ranks had very strong expressive power. More precisely, let $\bold V$ be an [*arbitrary*]{} variety of algebras and $F_\vk(\bold V)$ be a free algebra from $\bold V$ with $\vk {\geqslant}\aleph_0$ free generators. Then the endomorphism semi-group $\End{F_\vk(\bold V)}$ first-order interprets the full second-order theory $\Theo_2(\vk)$ of the cardinal $\vk$ (viewed as a set with no structure), provided that $\vk$ is greater than the cardinality of the language of $\bold V.$ That remarkable result naturally leads to the following problem: what are the varieties of algebras for which the [*automorphism groups*]{} of free algebras are logically strong in a similar sense?
Shelah himself formulated this problem in the cited paper [@ShCler] and then, after more than 20 years, mentioned it again in his survey [@ShMatJap]: Problem 3.14 from [@ShMatJap] suggested classifying the varieties of algebras $\bold V$ such that the automorphism groups $\aut{F_\vk(\bold V)}$ first-order interpret the theory $\Theo_2(\vk)$ for all (or all sufficiently large) infinite cardinals $\vk.$ The results on symmetric groups obtained by Shelah before the publication of the paper [@ShCler] implied that, for instance, the variety of all sets with no structure and the variety of all semi-groups were examples of, say, the ‘negative’ kind. Indeed, according to [@Sh1], the symmetric group of an infinite cardinal $\vk,$ in other words, the automorphism group of the set $\vk$ with no structure, first-order interprets the theory $\Theo_2(\vk)$ only if the cardinal $\vk$ is ‘small’ (namely, at most $2^{\aleph_0}$). The author found in [@ToAPAL], as a byproduct of his study of the elementary types of infinite-dimensional classical groups, that for any variety of vector spaces the automorphism groups of free algebras are as logically strong as the endomorphism semi-groups. A bit informally, one of the results from [@ToAPAL] can be quoted in the following form: if $\vk$ is an infinite cardinal, then the general linear group $\gl{\vk,D}$ over a division ring $D$ first-order interprets $\Theo_2(\vk),$ provided that $\vk > |D|.$ Thus varieties of vector spaces give examples of the ‘positive’ kind as to Shelah’s problem. In the papers [@ToJLM2] and [@ToContMat] the author studied Shelah’s problem for classical group varieties.
It turned out that the variety of all groups and any variety $\frak N_c$ of nilpotent groups of class $c {\geqslant}2$ meet the requirements of Shelah’s problem: if $F$ is an infinitely generated free or free nilpotent group, then the group $\aut F$ first-order interprets the theory $\Theo_2(|F|)$ $(=\Theo_2(\rank F)).$ In the present paper we examine the case of the variety of all abelian groups. The main result of the paper states that the variety in question also meets the requirements of Shelah’s problem. Let $A$ denote an infinitely generated free abelian group; clearly, $A$ can be considered as a free $\Z$-module. One of the standard approaches to understanding the nature of the automorphism groups of modules is to investigate the possibility of generalizing to these groups the methods developed for general linear groups, the automorphism groups of vector spaces. In the first section of the paper we, as in [@ToAPAL], work to reconstruct by means of first-order logic in $\aut A$ some geometry of the $\Z$-module $A.$ Namely, we interpret in $\aut A$ the family $\cD^1(A)$ consisting of all direct summands of $A$ having rank or corank one. For comparison, the first-order interpretation in the general linear group $\gl V$ of an infinite-dimensional vector space $V$ of the family of all lines and hyperplanes of $V$ carried out in [@ToAPAL] is much longer. However, both interpretations have much in common and both originated from well-known works on classical groups. In principle, the reconstruction of $\cD^1(A)$ can be extended to the reconstruction in $\aut A$ of the family $\cD(A)$ of all direct summands of $A$ followed by the first-order interpretation in the structure $\str{\aut A,\cD(A)}$ of the endomorphism semi-group $\End A$ of $A$ (similarly to [@ToAPAL]).
We, however, prefer a shorter way, making in Section 2 an effort to reconstruct in $\aut A$ the general linear group of some vector space of dimension $|A|.$ Namely, using the action of $\aut A$ on $\cD^1(A)$ we prove the $\varnothing$-definability in $\aut A$ of the principal congruence subgroup $\Gamma_2(A)$ of level two. The quotient group $\aut A/\Gamma_2(A)$ is isomorphic to the general linear group of the vector space $A/2A$ over the field $\Z_2.$ Thus the group $\aut A$ first-order interprets the group $\gl{|A|,\Z_2}.$ The latter group, as it has been said above, first-order interprets the theory $\Theo_2(|A|).$ As a consequence, we have that the automorphism groups $\aut{A_1}$ and $\aut{A_2},$ where $A_1,A_2$ are infinitely generated free abelian groups, are elementarily equivalent if and only if the cardinals $|A_1|$ and $|A_2|$ are second-order equivalent as sets. The author is very grateful to Oleg Belegradek for his kind and genuine interest in this research and for his valuable comments on the first draft of this paper. The author would also like to thank Valery Bardakov for helpful discussions. Definable geometric properties of automorphisms =============================================== Let $A$ denote a free abelian group of infinite rank. As it has been said in the Introduction, our aim in this section is a first-order reconstruction in $\aut A$ of the family of direct summands of $A$ of rank or corank one (we say that a direct summand $B$ of $A$ has [*corank $m$*]{} if any direct complement of $B$ in $A$ is of rank $m$). We shall essentially exploit the structure of involutions (the elements of order two) in the group $\aut A$ given by the following theorem. \[HRCanForm\] Let $G$ be a free abelian group.
For every involution $\f \in \aut G$ there is a basis $\cB$ of $G$ such that for any $b \in \cB$ either $\f b=\pm b,$ or $\f b \in \cB.$ The theorem was first established for the groups of finite rank by Hua and Reiner [@HuaRei Lemma 1]; in general, the result is proven in [@ToCamb]. Let us call a basis of $A$ on which $\f$ acts in the way described in the theorem a [*canonical*]{} basis for $\f.$ Let $2A$ denote the group of even elements of $A$: $$2A=\{2a : a \in A\}.$$ The natural homomorphism $A \to A/2A$ induces the homomorphism of the automorphism groups $\aut A \to \aut{A/2A}$ which we will denote by $\widehat{\phantom a}.$ The fact that the group $A/2A$ can be viewed as a vector space over $\Z_2$ will be extensively used in this paper. \[To-Can-Forms\] *Take an involution $\f \in \aut A$ and one of its canonical bases $\cB.$ The (cardinal) number $p(\cB)$ of unordered pairs $\{b,\f b\},$ where $b \in \cB$ and $\f b \ne \pm b,$ is an invariant of $\f.$ Indeed, $p(\cB)$ equals the [*residue*]{} of the induced linear transformation $\widehat{\f}$ of the vector space $A/2A$ over $\Z_2$: $$p(\cB) =\text{res}(\widehat \f) =\dim \text{Res}(\widehat \f).$$ (here $\text{Res}(\widehat \f)$ is the image of the linear transformation $1-\widehat \f,$ see [@O'M]). This implies that if $(\f_1,\cB_1),$ $(\f_2,\cB_2)$ are pairs similar to the pair $(\f,\cB)$ and $\f_1,\f_2$ are conjugate in $\aut A,$ then $p(\cB_1)=p(\cB_2).$* Let $\f$ be an involution in $\aut A$; we let $A^+_\f$ and $A^-_\f$ denote the subgroups $$\{a : \f a =a\} \text{ and } \{a: \f a = -a\}$$ respectively; clearly, $\f$ is diagonalizable if and only if $$A = A^+_\f \oplus A^-_\f.$$ It is helpful to remember that two diagonalizable involutions from $\aut A$ commute if and only if there is a basis of $A$ in which both are diagonal.
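The invariant $p(\cB)$ can be checked directly in small ranks. The sketch below (the matrices are our own illustrative choices, not taken from the paper; the paper's $A$ has infinite rank, but the same blocks can sit inside any basis) computes $\operatorname{res}(\widehat\f)$ as the rank over $\Z_2$ of the matrix of $1-\widehat\f$ and compares it with the number of swapped pairs in a canonical basis:

```python
def rank_mod2(M):
    """Rank of an integer matrix over the field Z_2 (Gaussian elimination)."""
    M = [[x % 2 for x in row] for row in M]
    rank = 0
    for col in range(len(M[0])):
        piv = next((r for r in range(rank, len(M)) if M[r][col]), None)
        if piv is None:
            continue
        M[rank], M[piv] = M[piv], M[rank]
        for r in range(len(M)):
            if r != rank and M[r][col]:
                M[r] = [(a + b) % 2 for a, b in zip(M[r], M[rank])]
        rank += 1
    return rank

def residue(phi):
    """res(phi-hat): rank over Z_2 of the matrix of 1 - phi-hat on A/2A."""
    n = len(phi)
    return rank_mod2([[(i == j) - phi[i][j] for j in range(n)] for i in range(n)])

swap = [[0, 1], [1, 0]]                 # one swapped pair: p(B) = 1
diag = [[1, 0], [0, -1]]                # diagonal canonical basis: p(B) = 0
double_swap = [[0, 1, 0, 0], [1, 0, 0, 0],
               [0, 0, 0, 1], [0, 0, 1, 0]]   # two swapped pairs: p(B) = 2
assert (residue(swap), residue(diag), residue(double_swap)) == (1, 0, 2)
```

In each case the $\Z_2$-rank of $1-\widehat\f$ agrees with the number of swapped pairs, as the remark asserts.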
We shall call a diagonalizable involution $\f$ a $\gamma$-[*involution*]{}, where $\gamma$ is a cardinal, if $$\gamma = \rank A^-_\f < \rank A^+_\f.$$ $1$-involutions, as in linear group theory, will be called [*extremal*]{} involutions. A number of facts on definability of certain families of involutions in the automorphism groups of infinitely generated free abelian groups have been proved implicitly in the author’s paper [@ToCamb]. Because of that we shall give only sketches of proofs for the next two statements, Lemma \[Diags-Are-Def\] and Lemma \[1-2-4-Def\]; the reader is referred to the proof of Proposition 2.4 in [@ToCamb] to find there the omitted details. For an involution $\f$ in the group $\aut A$ we shall denote by $K(\f)$ the conjugacy class of $\f$ in $\aut A.$ The set $K^2(\f)=K(\f)K(\f)$ is the family of all products $\f_1\f_2,$ where $\f_1,\f_2 \in K(\f).$ \[Diags-Are-Def\] The family of all diagonalizable involutions is $\varnothing$-definable in $\aut A.$ We claim that $\f$ is diagonalizable if and only if the set $K^2(\f)$ contains no elements of order three. Using Theorem \[HRCanForm\] one checks that the diagonalizable involutions are exactly the involutions in the kernel of the homomorphism $\widehat{\phantom a} : \aut A \to \aut{A/2A}.$ On the other hand, the images under $\widehat{\phantom a}$ of all elements of order three from $\aut A$ are non-trivial. This implies that if $\f$ is diagonalizable, then there are no elements of order three in $K^2(\f).$ Conversely, for any non-diagonalizable involution $\psi \in \aut A$ we can easily find a conjugate $\psi'$ of $\psi$ such that the automorphism $\psi\psi'$ is of order three.
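The final step of the proof can be illustrated in rank two (the particular matrices below are our own choices, not taken from the paper): $\psi$ is the non-diagonalizable involution swapping the two basis vectors of $\Z^2$, and conjugating it by a suitable $g\in\gl{2,\Z}$ yields $\psi'$ with $\psi\psi'$ of order exactly three.

```python
def mul(A, B):
    """Product of 2x2 integer matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

I2    = [[1, 0], [0, 1]]
psi   = [[0, 1], [1, 0]]    # swaps e1 and e2: a non-diagonalizable involution
g     = [[0, 1], [-1, 1]]   # det g = 1, so g lies in GL(2, Z)
g_inv = [[1, -1], [1, 0]]

assert mul(g, g_inv) == I2
psi2 = mul(mul(g, psi), g_inv)           # a conjugate of psi
assert mul(psi2, psi2) == I2             # psi2 is again an involution
prod = mul(psi, psi2)
assert prod != I2 and mul(prod, prod) != I2
assert mul(mul(prod, prod), prod) == I2  # psi * psi2 has order exactly three
```

The same $2\times 2$ block can be placed inside any basis of an infinitely generated $A$, extending by the identity elsewhere.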
\[1-2-4-Def\] The families of extremal involutions [*(*]{}$1$-involutions[*)*]{}, $2$-involutions and $4$-involutions are all $\varnothing$-definable in $\aut A.$ A diagonalizable involution $\f$ is an extremal involution if and only if all involutions in $K^2(\f)$ are conjugate and $\f$ is not a square in $\aut A.$ Indeed, if $\f$ is an extremal involution, then the only involutions in the set $K^2(\f)$ are $2$-involutions. In particular, all involutions in $K^2(\f)$ are conjugate. Applying Theorem \[HRCanForm\], we can demonstrate that the latter property also holds only for diagonalizable involutions $\rho$ such that $$\rank A^+_\rho =1.$$ But any such involution is a square in $\aut A,$ whereas any $1$-involution is not. The $2$-involutions are the only involutions from $K^2(\f),$ where $\f$ is an arbitrary $1$-involution. Let $\theta$ be a $2$-involution. Then $4$-involutions are those involutions in $K^2(\theta)$ that are not conjugate to $\theta.$ We also need a family of [*non-diagonalizable*]{} involutions $\{\pi\}$ whose elements satisfy the condition $$\rank A^+_\pi=1 \text{ or } \rank A^-_\pi=1. \tag{$\ast$}$$ For any canonical basis $\cB$ for a non-diagonalizable involution $\pi$ satisfying $(\ast)$ we have that (a) $\cB$ contains exactly one pair of distinct elements, say, $b,c$ taken by $\pi$ to one another (Remark \[To-Can-Forms\]); (b) $\pi$ either inverts [*all*]{} elements in $\cB\setminus \{b,c\},$ or fixes [*all*]{} these elements (otherwise, both subgroups $A^+_\pi$ and $A^-_\pi$ would be of rank $> 1$). Thus either $\pi \sim \pi',$ or $\pi \sim -\pi'$ for every pair of non-diagonalizable involutions $\pi,\pi'$ satisfying $(\ast)$, where $\sim$ denotes the conjugacy relation. Keeping in mind (a), we shall call non-diagonalizable involutions satisfying $(\ast)$ $1$-[*permutations.*]{} The following statements are equivalent: - $\pi$ is a $1$-permutation; - $\pi$ is not diagonalizable and the set $K^2(\pi)$ contains no $4$-involutions.
In particular, the family of $1$-permutations is $\varnothing$-definable in $\aut A.$ Let $\pi$ be a non-diagonalizable involution which is not a $1$-permutation, and let $\cB$ be a canonical basis for $\pi.$ One can then readily find $\pi',$ a conjugate of $\pi,$ whose product with $\pi$ is a $4$-involution. Indeed, suppose first that $p(\cB) > 1$ (the notation was introduced in Remark \[To-Can-Forms\]). Then $\cB$ contains distinct elements $b_1,b_2,b_3,b_4$ such that $\pi b_1=b_2$ and $\pi b_3=b_4.$ The second case is the case when $p(\cB)=1.$ Here $\pi b_1=b_2$ for some distinct $b_1,b_2 \in \cB$ and, since $\pi$ is not a $1$-permutation, elements $b_3$ and $b_4$ can be found in $\cB$ such that $$\pi b_3 = b_3 \text{ and } \pi b_4 = -b_4.$$ Then, for both of the cases under consideration, we construct $\pi'$ as follows: $\pi'b_i=-\pi b_i$ for $i=1,\ldots,4$ and $\pi' b=\pi b$ for all $b \in \cB\setminus \{b_1,b_2,b_3,b_4\}.$ Conversely, suppose $\pi$ is a 1-permutation. We may assume that $\rank A_\pi^-=1.$ Let $\pi_1, \pi_2$ be conjugates of $\pi.$ Then $\operatorname{Im}(1-\pi_1)$ and $\operatorname{Im}(1-\pi_2)$ are subgroups of rank 1. Since $$1-\pi_1\pi_2=(1-\pi_2)+(1-\pi_1)\pi_2,$$ we have $$\operatorname{Im}(1-\pi_1\pi_2)\subseteq \operatorname{Im}(1-\pi_1)+\operatorname{Im}(1-\pi_2),$$ and so $\rank \operatorname{Im}(1-\pi_1\pi_2){\leqslant}2.$ Then $\pi_1\pi_2$ is not a 4-involution because for any 4-involution $\psi$ we have $\rank \operatorname{Im}(1-\psi)=4.$ Until the end of this section we fix some $2$-involution $\theta^*.$ In order to mark one special type of commutativity with $\theta^*,$ we say that an extremal involution $\psi$ (resp.
a $1$-permutation $\psi$) commutes with $\theta^*$ [*properly*]{} if $\psi \sim \theta^* \psi.$ We fix also an extremal involution $\f^*$ and a $1$-permutation $\pi^*$ both properly commuting with $\theta^*$ such that $$(\pi^* \f^*)^2 = \theta^*.$$ Let $B$ denote the subgroup $A^-_{\theta^*}.$ Since both $\f^*$ and $\pi^*$ commute with $\theta^*,$ they both preserve $B$: $$\f^* B = \pi^* B=B.$$ Since, further, $\f^*$ and $\pi^*$ commute with $\theta^*$ properly, their restrictions to $B$ are an extremal involution and a $1$-permutation of $\aut B,$ respectively. Let $$f^* = \f^*|_B \text{ and } p^*=\pi^*|_B.$$ We have that $$f^* p^* f^* p^* =-\id_B$$ and then $p^* f^* p^*=-f^*.$ This implies that $p^*$ takes the subgroups $A^+_{f^*}$ and $A^-_{f^*}$ to each other: $$p^* A^+_{f^*} = A^-_{f^*}.$$ If $e_1$ is a basis element of $A^+_{f^*},$ then $e_2=p^* e_1$ is a basis element of $A^-_{f^*}.$ Summing up, we see that in the basis $\{e_1,e_2\}$ of $B$ the automorphisms $f^*$ and $p^*$ have the matrices $$\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \text{ and } \begin{pmatrix} 0 & 1\\ 1 & 0 \end{pmatrix},$$ respectively. We turn next to the proof of the $\varnothing$-definability of certain transvections. Recall that a [*unimodular*]{} element of $A$ (a [*primitive*]{} element in a more general context) is an element of $A$ that can be included in some basis of $A.$ Let $\delta : A \to \Z$ be a non-zero homomorphism of abelian groups; in this case $\ker\delta$ is a direct summand of $A$ of corank 1.
Fix a unimodular element $x$ in $\ker \delta.$ Then the mapping $$\tau a =a +\delta(a) x$$ is an automorphism of $A,$ which is called a [*transvection.*]{} If $\tau$ is a transvection determined by a homomorphism $\delta,$ then one may associate with $\tau$ a well-defined natural number via $$m(\tau) = |\delta(y)|$$ where $y \in A$ satisfies $A =\str y \oplus \ker \delta.$ It is easily seen that two transvections $\tau_1$ and $\tau_2$ are conjugate if and only if $m(\tau_1)=m(\tau_2).$ We shall call a transvection $\tau$ an $m$-[*transvection*]{} if $m(\tau)=m.$ \[4-1-Perms\] [*(i)*]{} Among the conjugates $\rho$ of $\pi^*$ properly commuting with $\theta^*$ there are exactly four, different from $\pi^*,$ that satisfy the equation $$(\pi^* \rho)^3 =\id_A;$$ [*(ii)*]{} The automorphisms $(\f^* \rho)^2,$ where $\rho$ is any of the $1$-permutations described in [*(i)*]{}, are all $2$-transvections. Let $\rho$ be a $1$-permutation satisfying the conditions from (i). First note that, due to the proper commutativity with $\theta^*,$ the restriction of $\rho$ to $A^+_{\theta^*}$ must be equal to that of $\pi^*.$ We denote by $R$ the matrix of the restriction of $\rho$ to $B=A^-_{\theta^*}$ in the basis $\{e_1,e_2\}$ described above. Since the condition $(\pi^* \rho)^3=\id$ can be rewritten as $$\pi^* \rho \pi^* =\rho \pi^* \rho,$$ we have $$\begin{pmatrix} 0 & 1\\ 1 & 0 \end{pmatrix} R \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} = R \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} R.$$ Let now $$R = \begin{pmatrix} a & b\\ c & -a \end{pmatrix},$$ where $a,b,c \in \Z$ (the trace of $R$ must be zero, like the trace of any non-central involution in $\gl{2,\Z},$ Theorem \[HRCanForm\]). It follows from this matrix equation that $$\begin{aligned} -a &= a(b+c),\\ c &=b^2-a^2, \nonumber\\ b &=c^2-a^2.
\nonumber\end{aligned}$$ According to the first of these equations, there are two cases to study: $a=0$ and $b+c+1=0.$ In the first case we have $b=c=1$ and then $\rho=\pi^*,$ which is impossible. In the second case we use the condition $\det R =-1$ ($\rho$ is a conjugate of $\pi^*$). Then $$\det R=-1 =-a^2-bc=-a^2-b(-b-1)$$ or $$a^2=b^2+b+1.$$ The only $b \in \Z$ for which $b^2+b+1$ is a square are $b=0$ and $b=-1.$ Thus, there are indeed at most four possibilities for $R$: $$R= \begin{pmatrix} e & 0 \\ -1 & -e \end{pmatrix}, \begin{pmatrix} e & -1 \\ 0 & -e \end{pmatrix},$$ where $e=\pm 1.$ One easily verifies that for all four $1$-permutations $\rho$ that correspond to these matrices and satisfy $\pi^* c =\rho c$ for all $c \in A^+_{\theta^*},$ the conditions in (i) of the Lemma hold. The statement in (ii) is now a consequence of the following observations: $$\begin{aligned} \left[ \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \begin{pmatrix} e & 0 \\ -1 & -e \end{pmatrix} \right]^2 = \begin{pmatrix} e^2 & 0 \\ 2e & e^2 \end{pmatrix}\\ \left[ \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \begin{pmatrix} e & -1 \\ 0 & -e \end{pmatrix} \right]^2 = \begin{pmatrix} e^2 & -2e\\ 0 & e^2 \end{pmatrix},\end{aligned}$$ where $e=\pm 1.$ \[Def-of-2m-Trans\] The family of all $2m$-transvections [*(*]{}where $m$ runs over $\N$[*)*]{} is $\varnothing$-definable in $\aut A.$ We shall continue to use the parameters chosen above. One more parameter will be needed, however: a $2$-transvection $\tau^*,$ one of the four $2$-transvections described in Lemma \[4-1-Perms\] (ii).
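The matrix computations behind Lemma \[4-1-Perms\], and hence the provenance of $\tau^*,$ can be checked mechanically. The following pure-Python sketch is our own verification, not part of the original argument: it confirms that the four candidate matrices have trace zero and determinant $-1,$ satisfy $(\pi^*\rho)^3=\id,$ square to $2$-transvections after multiplication by the matrix of $f^*,$ and that $b^2+b+1$ is a perfect square only for $b=0,-1$ (checked in a bounded range).

```python
from math import isqrt

def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def det(X):
    return X[0][0] * X[1][1] - X[0][1] * X[1][0]

I = [[1, 0], [0, 1]]
P = [[0, 1], [1, 0]]    # matrix of p* in the basis {e1, e2}
F = [[1, 0], [0, -1]]   # matrix of f*

candidates = []
for e in (1, -1):
    candidates.append([[e, 0], [-1, -e]])
    candidates.append([[e, -1], [0, -e]])

for R in candidates:
    assert R[0][0] + R[1][1] == 0           # trace zero
    assert det(R) == -1                     # conjugate of a 1-permutation
    M = mul(P, R)
    assert mul(M, mul(M, M)) == I           # (p* rho)^3 = id
    S2 = mul(mul(F, R), mul(F, R))
    # (f* rho)^2 is unipotent with a single off-diagonal entry +-2:
    assert S2[0][0] == S2[1][1] == 1
    assert abs(S2[0][1] + S2[1][0]) == 2 and S2[0][1] * S2[1][0] == 0

# b^2 + b + 1 is a perfect square only for b = 0 and b = -1
sols = [b for b in range(-1000, 1001)
        if isqrt(b * b + b + 1) ** 2 == b * b + b + 1]
assert sols == [-1, 0]
```

The bounded search over $b$ is only illustrative; the exhaustive claim follows from $b^2 < b^2+b+1 < (b+1)^2$ for $b \geqslant 1$ and symmetrically for $b \leqslant -2.$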
Let us consider the set $S$ of automorphisms $\{\f^* \rho\},$ where $\rho$ is an extremal involution or a $1$-permutation properly commuting with $\theta^*.$ If the matrix of the restriction of $\tau^*$ to $B$ is, for instance, $$\begin{pmatrix} 1 & 2 \\ 0 & 1 \end{pmatrix}$$ then the elements of $S$ that commute with $\tau^*$ are exactly those whose restrictions to $B$ have matrices $$\begin{pmatrix} e & b \\ 0 & e \end{pmatrix}$$ where $e =\pm 1$ and $b \in \Z.$ The squares of matrices of this form are the matrices $$\begin{pmatrix} 1 & 2b \\ 0 & 1 \end{pmatrix}.$$ Thus the set of squares of elements of $S$ contains, for each natural number $m,$ a $2m$-transvection. This implies that a suitably chosen existential formula defines the $2m$-transvections. \[MutSubgr\] Two distinct extremal involutions $\f_1,\f_2$ have a mutual [*(*]{}eigen[*)*]{} subgroup, that is, either $$A^+_{\f_1}=A^+_{\f_2}, \text{ or } A^-_{\f_1}=A^-_{\f_2},$$ if and only if the product $\f_2\f_1$ is a $2m$-transvection for some non-zero natural $m.$ Assume that the subgroups $A^-_{\f_1}$ and $A^-_{\f_2}$ are generated by unimodular elements $x_1$ and $x_2$ respectively, and write $B_1$ and $B_2$ for $A^+_{\f_1}$ and $A^+_{\f_2}.$ Let also $\tau$ denote the product $\f_2\f_1.$ ($\Leftarrow$). Suppose $B_1 \ne B_2.$ Then the intersection $B_1 \cap B_2$ is of corank $2.$ The fixed-point subgroup $C$ of $\tau$ has corank $1$ and contains $B_1 \cap B_2$ (a direct summand of $A$); hence there is a unimodular element $y \in A$ such that $$C =\str y \oplus (B_1 \cap B_2).$$ We have $\f_2 \f_1 y =y$ and then $$\f_1 y - y =\f_2 y -y.$$ This element is non-zero, since otherwise $y \in B_1 \cap B_2.$ Thus $\str{x_1} \cap \str{x_2} \ne 0,$ and so $\str{x_1}=\str{x_2},$ since both $x_1,x_2$ are unimodular. ($\Rightarrow$).
(i) Suppose that $B_1 \ne B_2,$ but $\str{x_1} =\str{x_2}.$ Since $B_1 \cap B_2$ is a direct summand of $A$ of corank $2,$ for some unimodular $z$ we have $$B_1 = \str z \oplus (B_1 \cap B_2);$$ the element $z$ can be expressed as $m x_2+b_2,$ where $m \in \Z$ and $b_2 \in B_2.$ We then have $$\tau z = \f_2 \f_1 z=\f_2 z=\f_2(m x_2+b_2) = -m x_2+b_2 = m x_2+b_2 -2m x_2=z-2mx_2.$$ Taking into account that $\tau x_2=x_2,$ we see that $\tau$ is a $2m$-transvection. (ii) Suppose that $B=B_1=B_2$ and $\str{x_1} \ne \str{x_2}.$ The element $x_1$ can then be written as $$x_1 = e x_2 + b=e x_2 +m c,$$ where $b=m c$ is an element of $B$ and $c$ is unimodular. Hence $$\tau x_1 =\f_2\f_1 x_1 =\f_2(-e x_2-m c) =e x_2-m c= x_1 -2m c$$ and $\tau$ is a $2m$-transvection. Let $\cD^1(A)$ be the family of all direct summands of $A$ having rank or corank one. Then the action of the group $\aut A$ on the family $\cD^1(A)$ is first-order interpretable in $\aut A$ without parameters. In view of Lemma \[1-2-4-Def\], Lemma \[Def-of-2m-Trans\] and Lemma \[MutSubgr\], all we have to do is explain when two pairs of extremal involutions $(\f_1,\f_2)$ and $(\psi_1,\psi_2),$ both having mutual subgroups, determine the same direct summand of $A$. This is easy: we just say that for all $i,j$ either $\f_i=\psi_j,$ or $\f_i\psi_j$ is a $2m$-transvection. To conclude the section, we present a purely algebraic observation due to Oleg Belegradek, who found it while reading the first draft of this paper. Let $A_1,A_2$ be infinitely generated free abelian groups. The groups $\aut{A_1}$ and $\aut{A_2}$ are isomorphic if and only if the cardinals $\rank A_1$ and $\rank A_2$ are equal. Let $A$ be an infinitely generated free abelian group. It is easy to show that the cardinality of any maximal family of pairwise commuting $1$-involutions in $\aut A$ is equal to the rank of $A.$
Since, by Lemma 1.4, the $1$-involutions are $\varnothing$-definable in $\aut A$ uniformly in $A$, and isomorphisms preserve first-order formulae, the result follows.

Definability of the congruence subgroup of level two
====================================================

Let $m > 1$ be a natural number. Write $\Gamma_m(A)$ for the subgroup of $\aut A$ consisting of the automorphisms of $A$ that act trivially (in the natural way) on the group $A/mA.$ The subgroups $\Gamma_m(A)$ are natural analogues of the principal congruence subgroups of the groups $\spl{n,\Z}.$ We are going to prove $\varnothing$-definability of the subgroup $\Gamma_2(A),$ the principal congruence subgroup of $\aut A$ of level two. As was said in the Introduction, this will make possible a first-order interpretation in $\aut A$ of the general linear group of the vector space $A/2A$ over the field $\Z_2.$ \[Def-of-Gamma2A\] The subgroup $\Gamma_2(A)$ is $\varnothing$-definable in $\aut A.$ We shall use properties of the group $\spl{3,\Z},$ and with this in mind we are going to fix three direct summands of rank one in $A.$ To achieve this we use certain definable parameters. First, we take three pairwise commuting extremal involutions $\f_1^*,\f_2^*,\f^*_3$ in $\aut A$ such that every product $\f_i^* \f_j^*$ with $i \ne j$ is a $2$-involution. There exists a basis $\cB$ of $A$ in which $\f_1^*,\f_2^*,\f_3^*$ are all diagonal. Let $e_i$ denote the element of $\cB$ that $\f_i^*$ ($i=1,2,3$) sends to its negative. Second, we need two $1$-permutations $\pi_1^*$ and $\pi_2^*$ to provide a suitable action on $\{e_1,e_2,e_3\}$; our requirements on $\pi_1^*$ and $\pi_2^*$ are therefore as follows:

- $\pi_1^* \f_1^* \pi_1^* =\f_2^*$ and $\pi_1^*$ commutes with $\f_3^*$;

- $\pi_2^* \f_1^* \pi_2^* =\f_3^*$ and $\pi_2^*$ commutes with $\f_2^*$;

- $\pi_1^*$ and $\pi_2^*$ are conjugate and their product is of order three.
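These requirements can be realized in a minimal finite-rank model of our own making: restrict attention to the span of $e_1,e_2,e_3,$ let $\f_i^*$ negate $e_i,$ let $\pi_1^*$ swap $e_1,e_2$ and let $\pi_2^*$ swap $e_1,e_3.$ The sketch below checks the conjugation relations, the commutation of $\pi_1^*$ with $\f_3^*$ and of $\pi_2^*$ with $\f_2^*,$ and the order-three product.

```python
def mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

I = [[1 if i == j else 0 for j in range(3)] for i in range(3)]
f1 = [[-1, 0, 0], [0, 1, 0], [0, 0, 1]]    # negates e1
f2 = [[1, 0, 0], [0, -1, 0], [0, 0, 1]]    # negates e2
f3 = [[1, 0, 0], [0, 1, 0], [0, 0, -1]]    # negates e3
p1 = [[0, 1, 0], [1, 0, 0], [0, 0, 1]]     # swaps e1, e2
p2 = [[0, 0, 1], [0, 1, 0], [1, 0, 0]]     # swaps e1, e3

assert mul(mul(p1, f1), p1) == f2          # p1 f1 p1 = f2
assert mul(mul(p2, f1), p2) == f3          # p2 f1 p2 = f3
assert mul(p1, f3) == mul(f3, p1)          # p1 commutes with f3
assert mul(p2, f2) == mul(f2, p2)          # p2 commutes with f2

prod = mul(p1, p2)                         # a 3-cycle on {e1, e2, e3}
assert prod != I and mul(prod, prod) != I
assert mul(prod, mul(prod, prod)) == I     # the product has order three
```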
In the following statement we simultaneously introduce and characterize some transvections we are going to deal with. The [*elementary transvections*]{} which act trivially on $\cB \setminus \{e_1,e_2,e_3\}$ and whose matrices in $\{e_1,e_2,e_3\}$ [*(*]{}more precisely, matrices of the corresponding restrictions[*)*]{} are of the form $E+n E_{ij},$ where $1 {\leqslant}i,j {\leqslant}3,$ $i \ne j$ and $E_{ij}$ are the matrix units, are definable with parameters $\f_1^*,\f_2^*,\f_3^*$ and $\pi_1^*,\pi_2^*.$ We choose a $2$-transvection $\tau_1^*,$ one of the four $2$-transvections that satisfy condition (ii) of Lemma \[4-1-Perms\] for the $2$-involution $\theta_1^*=\f_1^* \f_2^*$ and the $1$-permutation $\pi_1^*.$ Without loss of generality we may suppose that the matrix of $\tau_1^*$ in $\{e_1,e_2,e_3\}$ is $$\begin{pmatrix} 1 & 2 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}.$$ It is easy to see that among the automorphisms $\f_1^* \rho,$ where $\rho$ is either an extremal involution or a $1$-permutation properly commuting with $\theta_1^*,$ there are exactly four automorphisms whose square is $\tau_1^*.$ The reason is that there are two solutions to the matrix equation $$X^2 = \begin{pmatrix} 1 & 2 \\ 0 & 1 \end{pmatrix}$$ in $\spl{2,\Z},$ namely, $$X =\pm \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix},$$ and that any automorphism properly commuting with $\theta_1^*$ must act on its fixed-point subgroup, say $C,$ either as $\id_C$ or as $-\id_C.$ Let us denote these four automorphisms by $\s_1,\s_2,\s_3,\s_4$ and further agree that $\s_1$ is the [*only*]{} transvection among the automorphisms $\s_i.$ The matrices of the automorphisms $\s_i$ in the basis $\{e_1,e_2,e_3\}$ are $$\pm \begin{pmatrix} 1 & 1 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \pm \begin{pmatrix} -1 & -1 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1 \end{pmatrix}.$$ (the reader may as well imagine the diagonals of the matrices stretched up to infinity and filled with ones, but there is actually no
need for that, since three coordinates already do the job.) Let $\s$ be one of our automorphisms $\s_i.$ We consider the conjugate $\s'=\pi \s \pi\inv$ of $\s$ by the automorphism $\pi=\pi_2^* \pi_1^*.$ Then the matrix of the commutator $[\sigma,\sigma']= \s \s' \s\inv \s'{}\inv$ is either the matrix $$\begin{pmatrix} 1 & 0 & 1 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ \end{pmatrix}, \text{ or } \begin{pmatrix} 1 & 2 & -3 \\ 0 & 1 & -2 \\ 0 & 0 & 1 \end{pmatrix}.$$ Thus only in the case $\s=\s_1$ is the commutator $[\s,\s']$ [*conjugate*]{} to $\s.$ Indeed, the automorphisms $\s_2,\s_3,\s_4$ all have eigenvalue $-1,$ while none of the commutators $[\s_i,\s_i']$ with $i=2,3,4$ has this eigenvalue. Summing up, we see that $\s_1,$ a $1$-transvection, is definable over the chosen parameters. As in the proof of Lemma \[Def-of-2m-Trans\], we see that the elementary transvections whose matrices in $\{e_1,e_2,e_3\}$ are $$\label{2m-transvs} \begin{pmatrix} 1 & 2m & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$$ are definable with parameters $\f_1^*,\f_2^*,\pi_1^*$ and $\tau_1^*.$ Then the elementary transvections with matrices $$\begin{pmatrix} 1 & n & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$$ are also definable with the parameters $\f_1^*,\f_2^*,\pi_1^* ,\pi_2^*,$ $\tau_1^*,$ since each of them is either a transvection with a matrix of the form (\[2m-transvs\]), or the product of such a transvection and the elementary transvection with the matrix $$\begin{pmatrix} 1 & 1 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$$ which is now known to be definable over $\f_1^*,\f_2^*,\pi_1^*,\pi_2^*,\tau_1^*.$ The other required elementary transvections are conjugates of the transvections just described by suitable automorphisms acting on $\{e_1,e_2,e_3\}$ as permutations, namely definable products of $\pi_1^*$ and $\pi_2^*.$ Claim 1 is proved. Let us note in passing that the definability of $1$-transvections over definable parameters, which we have just proved, immediately implies the following proposition.
Let $A$ be an infinitely generated free abelian group. Then [*(i)*]{} the family of all transvections is $\varnothing$-definable in $\aut A;$ [*(ii)*]{} for every natural number $m {\geqslant}1,$ the family of all $m$-transvections is $\varnothing$-definable in $\aut A.$ We next construct a set which is contained in $\Gamma_2(A)$ and is definable with our parameters. There is a set $D$ definable with parameters $\f_1^*,$ $\f_2^*,$ $\pi_1^*,$ $\pi_2^*,$ $\tau_1^*,$ $\tau_2^*$ such that [*(i)*]{} the automorphisms from $D$ act trivially on $\cB\setminus \{e_1,e_2,e_3\}$ and their matrices in $\{e_1,e_2,e_3\}$ are congruent modulo $2$ to the identity matrix; [*(ii)*]{} $D$ contains all automorphisms with [*(i)*]{} whose matrices in $\{e_1,e_2,e_3\}$ are of the form $$\begin{pmatrix} a & b & 0 \\ c & d & 0 \\ 0 & 0 & 1 \end{pmatrix}$$ where $$a \equiv d \equiv 1\,(\operatorname{mod} 2) \text{ and } b \equiv c \equiv 0\,(\operatorname{mod} 2).$$ The argument is based upon the remarkable observation made in the paper [@CarKel] by Carter and Keller: > each matrix of the form $$\begin{pmatrix} > a & b & 0 \\ > c & d & 0 \\ > 0 & 0 & 1 > \end{pmatrix}$$ from the (matrix) group $\spl{3,\Z}$ is a product of at most 41 elementary transvections. Suppose that $t_1,\ldots, t_{41}$ are elementary transvections, matrices from $\spl{3,\Z}.$ To the product $$t_1 t_2 \ldots t_{41}$$ one associates the sequence $$(\av t_1,\av t_2,\ldots, \av t_{41})$$ where $\av t$ is the image of $t$ in $\spl{3,\Z_2}$ under the natural homomorphism $\spl{3,\Z} \to \spl{3,\Z_2}.$ There are of course finitely many sequences of this form. Some of them determine the identity matrix in $\spl{3,\Z_2},$ some do not; call the former sequences ‘good’.
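The role of mod-$2$ images can be made concrete with a small illustration of our own (three factors instead of $41$): the image in $\spl{3,\Z_2}$ of a product of elementary transvections depends only on the positions $(i,j)$ of the factors and on which of them have an even parameter, i.e. which are squares.

```python
def elem(i, j, n):
    """Elementary transvection E + n*E_ij in SL(3, Z)."""
    M = [[1 if r == c else 0 for c in range(3)] for r in range(3)]
    M[i][j] = n
    return M

def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def mod2(M):
    return [[x % 2 for x in row] for row in M]

I2 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

# An elementary transvection is trivial mod 2 iff its parameter is even,
# i.e. iff it is the square of an elementary transvection:
assert mod2(elem(0, 2, 4)) == I2
assert mul(elem(0, 2, 2), elem(0, 2, 2)) == elem(0, 2, 4)
assert mod2(elem(0, 2, 3)) != I2

# Two products with the same (i, j)-pattern and the same parameter
# parities have the same image in SL(3, Z_2); being 'good' only
# depends on which factors are squares:
seq1 = [elem(0, 1, 3), elem(1, 2, 4), elem(0, 1, 5)]
seq2 = [elem(0, 1, 7), elem(1, 2, 2), elem(0, 1, 1)]
prod1, prod2 = seq1[0], seq2[0]
for t in seq1[1:]:
    prod1 = mul(prod1, t)
for t in seq2[1:]:
    prod2 = mul(prod2, t)
assert mod2(prod1) == mod2(prod2)
```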
Clearly, the image $\av t$ of an [*elementary*]{} transvection $t$ is trivial in $\spl{3,\Z_2}$ if and only if $t$ is the square of an elementary transvection in $\spl{3,\Z}.$ So the fact that a sequence $(\av t_1,\av t_2,\ldots, \av t_{41})$ is ‘good’ can be translated into a disjunction of statements, each of which specifies, for every $i=1,\ldots,41,$ whether the $i$th transvection $t_i$ is a square or not. Having the elementary transvections with respect to the basis $\{e_1,e_2,e_3\}$ (this time automorphisms of $A$) definable in $\aut A$ with the parameters introduced above, we may carry out the above considerations in the group $\aut A.$ This completes the proof of Claim 2. Let now $\chi(\av v)$ be a first-order formula that describes the parameters $\f_1^*,$ $\f_2^*,$ $\pi_1^*,$ $\pi_2^*,$ $\tau_1^*,$ $\tau_2^*.$ Suppose that $\av \f$ is any tuple of elements of $\aut A$ that satisfies $\chi$; we then denote by $D(\av \f)$ the family of automorphisms constructed over $\av\f$ in the same way as $D$ is constructed over our parameters. The following are equivalent: [*(a)*]{} $\sigma \in \aut A$ is an element of $\Gamma_2(A);$ [*(b)*]{} there is a direct summand $B$ of $A$ of rank or corank $1$ such that for every direct summand $C$ isomorphic via some automorphism from $\aut A$ to $B$ there exist a tuple $\av \f$ satisfying $\chi$ and $\rho \in D(\av \f)$ with $$\sigma C =\rho C.$$ Let us consider the implication (b) $\Rightarrow$ (a). Suppose that the direct summand $B$ mentioned in (b) is of rank one, and let $e$ be a unimodular element of $A.$ Then for suitable parameters $\av\f$ there is $\rho \in D(\av \f)$ with $$\s\str e =\rho \str e.$$ By Claim 2 the set $D(\av \f)$ is contained in $\Gamma_2(A)$ and hence $$\s e =\pm \rho e \equiv \pm e \equiv e\, (\operatorname{mod} 2A).$$ It then follows that $\s \in \Gamma_2(A).$ Suppose now that $B$ is of corank 1.
Let $e$ be a unimodular element of $A$ and let $\{e,e_0,e_1,\ldots,e_n,\ldots\}$ be a basis of $A.$ By condition (b), $\sigma$ moves the direct summand $$C_0 = \str{e,e_1,e_2,\ldots,e_n,\ldots}$$ exactly as some $\rho \in \Gamma_2(A)$ does: $$\sigma C_0 =\rho C_0.$$ This implies that $\sigma e$ is congruent modulo $2A$ to some element of $C_0$: $$\sigma e \equiv ke+ k_1 e_1+k_2 e_2+\ldots+k_n e_n +\ldots (\operatorname{mod} 2A).$$ The same argument can be applied to the subgroup $$C_1=\str{e,e_0,e_2,\ldots,e_n,\ldots}$$ of which $e$ is also a member; this leads to $$\sigma e \equiv le+l_0 e_0 +l_2 e_2+\ldots+l_n e_n+\ldots (\operatorname{mod} 2A).$$ One then deduces that $$(k-l)e-l_0 e_0+k_1 e_1+(k_2-l_2) e_2+\ldots+(k_n-l_n) e_n+\ldots \equiv 0 (\operatorname{mod} 2A).$$ The images of $e,e_0,e_1,e_2,\ldots$ under the natural homomorphism $A \to A/2A$ must be linearly independent over $\Z_2$ and therefore $$l_0 \equiv k_1 \equiv 0\, (\operatorname{mod} 2).$$ Continuing in a similar fashion, we see that all (non-zero) coefficients $k_i$ in the first congruence for $\sigma e$ are even; the coefficient $k$ must therefore be odd. Thus $\s$ is in $\Gamma_2(A),$ as required. The implication (a) $\Rightarrow$ (b). Suppose that $\sigma \in \Gamma_2(A)$ and $e$ is a unimodular element of $A.$ Then for a basis $\{e,e_0,e_1,\ldots,e_n,\ldots\}$ of which $e$ forms a part we have $$\sigma e =e +2(ke +\sum_i k_i e_i).$$ Let $s$ be the greatest common divisor of the non-zero coefficients $k_i.$ Then $$\sigma e = (1+2k) e+ 2s(\sum_i k_i' e_i).$$ Clearly, $\gcd(1+2k,2s)=1$ (since $\sigma e$ is unimodular) and the element $g=\sum_i k_i' e_i$ is unimodular. If so, there are $b,d \in \Z$ such that the matrix $$\begin{pmatrix} 1+2k & b & 0 \\ 2s & d & 0 \\ 0 & 0 & 1 \end{pmatrix}$$ lies in $\spl{3,\Z}$ and is congruent to the identity matrix modulo $2.$ This implies that there exist a tuple $\av \f$ satisfying $\chi$ and some $\rho \in D(\av \f)$ such that $$\sigma \str e =\rho \str e.$$ Claim 3 is proved.
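The final step — completing the pair $(1+2k,\,2s)$ to a matrix in $\spl{3,\Z}$ congruent to the identity modulo $2$ — amounts to Bézout's identity plus a parity adjustment: the determinant condition $(1+2k)d-2sb=1$ forces $d$ to be odd, and $b$ can be made even by shifting $(d,b)$ along the kernel. A sketch, with sample values of our own choosing:

```python
from math import gcd

def complete_matrix(k, s):
    """Given gcd(1+2k, 2s) = 1, return (b, d) with (1+2k)*d - 2s*b = 1,
    d odd and b even, so that [[1+2k, b, 0], [2s, d, 0], [0, 0, 1]]
    lies in SL(3, Z) and is congruent to the identity modulo 2."""
    a, c = 1 + 2 * k, 2 * s
    assert gcd(a, c) == 1
    # extended Euclid tracking the coefficient of a: a*d + c*y = 1
    old_r, r = a, c
    old_x, x = 1, 0
    while r:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_x, x = x, old_x - q * x
    d = old_x
    b = (a * d - 1) // c            # then a*d - c*b = 1, with d odd
    if b % 2:                       # shift along the kernel of the
        d, b = d + c, b + a         # determinant form: parity of b flips
    return b, d

for k, s in [(3, 5), (0, 1), (-2, 7), (10, 1)]:
    b, d = complete_matrix(k, s)
    assert (1 + 2 * k) * d - 2 * s * b == 1   # determinant 1
    assert d % 2 == 1 and b % 2 == 0          # identity modulo 2
```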
Since we know how to interpret in $\aut A,$ by means of first-order logic, the direct summands of $A$ of rank or corank $1,$ the conditions in (b) of Claim 3 are easily translated into first-order formulae. The proof of Theorem \[Def-of-Gamma2A\] is now complete. *Very recently Bardakov proved that the principal congruence subgroups of the groups $\spl{n,\Z},$ where $n {\geqslant}3,$ all have finite width with respect to elementary transvections (unpublished; personal communication). Recall that the [*width*]{} of a group $G$ relative to a generating set $S$ with $S\inv=S$ is the minimal natural number $k$ such that every element of $G$ is a product of at most $k$ elements of $S,$ if such a $k$ exists, and $\infty$ otherwise.* Bardakov's result could then be used to simplify the proof of Theorem \[Def-of-Gamma2A\]. \[Int-Set-Theo\] Let $A$ be an infinitely generated free abelian group. Then the group $\aut A$ first-order interprets the second-order theory $\Theo_2(|A|),$ uniformly in $A.$ The proof is based on Theorem \[Def-of-Gamma2A\] and the following important theorem from the paper [@BrMa] by Bryant and Macedonska. Let $F$ be a free group of infinite rank and let $V$ be a characteristic subgroup of $F$ such that $F/V$ is nilpotent. Then every automorphism of $F/V$ is induced by an automorphism of $F.$ Let $A$ stand for the free abelian group $F/[F,F].$ As a corollary of the result by Bryant–Macedonska we have that the natural homomorphism $$\mu : \aut A \to \aut{A/2A}$$ (induced by the natural homomorphism $A \to A/2A$) is surjective. Indeed, according to the Theorem, the natural homomorphisms $$\mu_1 : \aut F \to \aut A \text{ and } \mu_2 : \aut F \to \aut{A/2A}$$ are both surjective. On the other hand, $$\mu_2 = \mu \circ \mu_1,$$ and then $\mu$ must be surjective, too.
Adding this to the fact that $\Gamma_2(A),$ the kernel of $\mu,$ is $\varnothing$-definable in $\aut A,$ we get that the group $\aut A$ first-order interprets the group $\aut{A/2A}$: $$\aut A/\ker \mu =\aut A/\Gamma_2(A) \cong \aut{A/2A}.$$ The group $\aut{A/2A}$ is the general linear group of the vector space $A/2A$ over the field $\Z_2.$ On the other hand, the general linear group $\gl V$ of an infinite-dimensional vector space $V$ over a field $D$ first-order interprets $\Theo_2(\dim_D V),$ see [@ToAPAL Theorem 11.4]. Therefore the elementary theory of the group $\aut{A/2A}$ first-order interprets the second-order theory $$\Theo_2(\dim_{\Z_2} A/2A)=\Theo_2(|A|),$$ and the result follows. Let $A_1,A_2$ be infinitely generated free abelian groups. The groups $\aut{A_1}$ and $\aut{A_2}$ are elementarily equivalent if and only if the cardinals $|A_1|$ and $|A_2|$ [*(*]{}viewed as sets with no structure[*)*]{} are second-order equivalent. The necessity part is a consequence of Theorem \[Int-Set-Theo\]. To prove the converse, one syntactically interprets in the second-order theory $\Theo_2(\vk),$ where $\vk$ is an infinite cardinal, the elementary theory of the automorphism group of a free abelian group with $\vk$ as the domain (this is rather easy; cf. [@ToJLM2 Theorem 4.1], where a similar interpretation is carried out in full detail for the elementary theory of the automorphism group of a free group over $\vk$). [99]{} R. Bryant, O. Macedonska, [*Automorphisms of relatively free nilpotent groups of infinite rank*]{}, J. Algebra [**121**]{} (1989), 388–398. D. Carter, G. Keller, [*Elementary expressions for unimodular matrices*]{}, [Commun. Algebra]{} [**12**]{} (1984), 379–389. L. K. Hua, I. Reiner, [*Automorphisms of the unimodular group*]{}, [Trans. Amer. Math. Soc.]{} [**71**]{} (1951), 331–348. O. T. O’Meara, [*Lectures on linear groups*]{}, Amer. Math. Soc., Providence, RI, 1974. S. Shelah, [*First-order theory of permutation groups*]{}, [Israel J.
Math.]{} [**14**]{} (1973), 149–162. S. Shelah, [*Interpreting set theory in the endomorphism semi-group of a free algebra or in a category*]{}, [Annales Scientifiques de l’Université de Clermont]{} [**13**]{} (1976), 1–29. S. Shelah, [*On what I do not understand [*(*]{}and have something to say[*)*]{}, model theory*]{}, Math. Japon. [**51**]{} (2000), 329–377. V. Tolstykh, [*Elementary equivalence of infinite-dimensional classical groups*]{}, [Ann. Pure Appl. Logic]{} [**105**]{} (2000), 103–156. V. Tolstykh, [*Set theory is interpretable in the automorphism group of an infinitely generated free group*]{}, [J. London Math. Soc.]{} [**62**]{} (2000), 16–26. V. Tolstykh, [*On the logical strength of the automorphism groups of free nilpotent groups*]{}, Contemp. Math., vol. 302, AMS, Providence, 2002, 113–120. V. Tolstykh, [*Free two-step nilpotent groups whose automorphism group is complete*]{}, Math. Proc. Cambridge Philos. Soc. [**131**]{} (2001), 73–90. [^1]: Supported by a NATO PC-B grant via The Scientific and Technical Research Council of Turkey (TÜBİTAK)
--- author: - 'Thomas Gehrmann,' - 'Andreas von Manteuffel,' - Lorenzo Tancredi bibliography: - 'Biblio.bib' title: ' The two-loop helicity amplitudes for $q \bar{q}'' \to V_1 V_2\to 4~\mathrm{leptons}$ ' --- Introduction {#sec:intro} ============ Vector boson pair production is an outstandingly important process at high energy hadron colliders. Its measurement allows precision studies of the electroweak interaction, thereby testing in detail the $SU(2)_L\times U(1)_Y$ gauge structure and the matter content of the Standard Model of particle physics. The various combinations of vector boson pairs ($ZZ$, $W^+W^-$, $\gamma\gamma$, $ZW^\pm$, $Z\gamma$, $W^\pm\gamma$) lead to spectacular final state signatures (leptons, photons, missing energy), that are often equally relevant to searches for new physics or studies of the Higgs boson. The Higgs boson decay into two vector bosons is among the cleanest signatures for Higgs production, and offers a broad spectrum of observables. Precision studies of the electroweak interaction often focus on the pair production of on-shell gauge bosons, while new physics searches and Higgs boson studies precisely veto these on-shell contributions, such that the remaining background processes are dominated by off-shell gauge boson pair production. For both on-shell and off-shell production processes, it is therefore very important to have a precise prediction of the Standard Model contributions, in order to match the anticipated experimental accuracy of measurements at the LHC, which is usually in the per-cent range. At this level of precision, next-to-leading order (NLO) corrections in the electroweak theory and next-to-next-to-leading order (NNLO) corrections in QCD are indispensable. 
For all vector boson pair production processes, NLO QCD corrections [@Ohnemus:1992jn; @Baur:1993ir; @Baur:1997kz; @Dixon:1998py; @Campbell:1999ah; @Dixon:1999di] as well as large parts of the NLO electroweak corrections [@Accomando:2004de; @Accomando:2005xp; @Accomando:2005ra; @Bierweiler:2013dja; @Baglio:2013toa; @Billoni:2013aba; @Gieseke:2014gka; @Denner:2014bna] are available. These calculations are fully differential in all kinematical variables, and usually include the leptonic decays of the vector bosons. The derivation of NNLO QCD corrections to vector boson pair production can build upon calculational techniques [@Anastasiou:2005qj; @Catani:2007vq] that were originally developed for the Drell-Yan process [@Melnikov:2006kv; @Catani:2009sm] or for Higgs boson production in gluon fusion [@Anastasiou:2005qj; @Catani:2007vq], which have the same QCD structure due to their colour-neutral final state. As a new ingredient, each vector boson pair production process at NNLO requires the two-loop corrections to the basic scattering amplitude for quark-antiquark annihilation: $q\bar{q}'\to V_1V_2$. These have been known for a while already for $\gamma \gamma$ [@Bern:2001df; @Anastasiou:2002zn] and $V\gamma$ [@Gehrmann:2011ab; @Gehrmann:2013vga] production, enabling the calculations of these processes [@Catani:2011qz; @Grazzini:2013bna] to NNLO accuracy. Compared to the above, the two-loop matrix elements for the production of a pair of massive vector bosons require a new class of two-loop Feynman integrals: two-loop four-point functions with massless internal propagators and two massive external legs. Recently, very important progress has been made on these. For the case of equal vector boson mass, these integrals were derived in [@Gehrmann:2013cxs; @Gehrmann:2014bfa], and used subsequently to compute the NNLO corrections to the on-shell production of $ZZ$ [@Cascioli:2014yka] and $W^+W^-$ [@Gehrmann:2014fva]. 
The integrals for the most general case of non-equal masses were derived in [@Henn:2014lfa; @Caola:2014lpa; @Papadopoulos:2014hla], which allowed the construction of the full two-loop helicity amplitude for $q\bar{q}'\to V_1V_2$ in [@Caola:2014iua]. A subset of these integrals was derived independently in [@Chavez:2012kn; @Anastasiou:2014nha] and used in the derivation of the fermionic NNLO corrections to $\gamma^*\gamma^*$ production [@Anastasiou:2014nha]. In this paper, we perform an independent rederivation of these integrals and optimise our solutions for numerical performance. They are used subsequently for a validation of the two-loop helicity amplitudes of [@Caola:2014iua], uncovering an error in their original results. We present a public implementation for the numerical evaluation of these amplitudes, which in the future will allow the calculation of NNLO QCD corrections to arbitrary electroweak four-fermion production processes. The paper is structured as follows: in Section \[sec:amp\], we introduce the partonic current for vector boson pair production and describe its decomposition into Lorentz structures. Taking into account the vector boson decays into leptons, we present the helicity amplitudes for the four-particle final state in Section \[sec:hel\]. A detailed description of the calculation and the different contributions to the amplitude is given in Section \[sec:calc\]. The computation of the master integrals and their optimisation is presented in Section \[sec:masters\]. In Section \[sec:helfin\], we describe the subtraction of UV and IR counterterms, and in Section \[sec:checks\] we list the numerous checks we performed on our results. In Section \[sec:numerics\] we present our [C++]{} implementation for the numerical evaluation of the amplitudes and use it to produce numerical results. Finally, we conclude in Section \[sec:conc\].
In Appendix \[sec:equalmass\], we document the interference of the two-loop and tree amplitudes for the production of on-shell vector boson pairs, which was used in the calculation of the NNLO corrections to $pp\to ZZ$ [@Cascioli:2014yka] and $pp\to WW$ [@Gehrmann:2014fva]. Appendix \[sec:schouten\] contains the derivation of Schouten identities for the leptonic amplitudes, and Appendix \[sec:catani\] describes the conversion of our results between different schemes for the subtraction of infrared singularities. We provide computer readable files for our analytical results and our [C++]{} code for the numerical evaluation of the amplitude on our [VVamp]{} project page on [[HepForge]{}]{} at <http://vvamp.hepforge.org>.

Lorentz structure of the partonic current for qq' -> V1 V2 {#sec:amp}
==============================================================

Let us consider the production of two massive electroweak vector bosons in $q \bar{q}'$ annihilation: $$q(p_1) + \bar{q}'(p_2) \longrightarrow V_1(p_3) + V_2(p_4)$$ with $$\begin{aligned} p_1^2 = p_2^2 = 0\,, \qquad p_3^2 \neq 0\,, \qquad p_4^2 \neq 0\,,\end{aligned}$$ where the two vector bosons are off-shell and $V_1V_2$ = $ZZ$, $W^+W^-$, $\gamma\gamma$, $ZW^\pm$, $Z\gamma$, $W^\pm\gamma$.
We define the usual Mandelstam variables $$s=(p_1+p_2)^2\,, \qquad t=(p_1-p_3)^2\,,\qquad u=(p_2-p_3)^2\,,$$ such that $$s+t+u = p_3^2 + p_4^2\,.$$ The physical region of phase space is bounded by $t u = p_3^2 p_4^2$ such that $$s \geq \Big(\sqrt{p_3^2} + \sqrt{p_4^2}\Big)^2,\qquad \frac{1}{2}\big(p_3^2+p_4^2-s -\kappa\big) \leq t \leq \frac{1}{2}\big(p_3^2+p_4^2-s +\kappa\big)$$ where $\kappa$ is the Källén function $$\label{kaellen} \kappa\left(s,p_3^2,p_4^2\right) \equiv \sqrt{s^2 + p_3^4 + p_4^4 - 2 (s\,p_3^2 + p_3^2\,p_4^2 + p_4^2\,s)}\,.$$ Let us consider the partonic amplitude for the production of the two off-shell vector bosons $V_1 V_2$ $$\mathcal{S}(s,t,p_3^2,p_4^2) = S^{\mu \nu}(p_1,p_2,p_3)\, \epsilon_3^{\mu}(p_3)^*\,\epsilon_4^{\nu}(p_4)^*\,,$$ where $\epsilon_3$ and $\epsilon_4$ are the two polarisation vectors of $V_1$ and $V_2$ respectively. In this notation, we keep an overall factor $e^2$ implicit, where $e$ is the positron charge. In order to calculate the partonic current $S^{\mu\nu}(p_1,p_2,p_3)$, we consider its tensorial structure. 
Lorentz invariance restricts it to be a linear combination of $17$ independent structures $$\begin{aligned} S^{\mu\nu}(p_1,p_2,p_3) &= \bar{u}(p_2)\, {p \hspace{-1.0ex} /}_3 u(p_1) \left[\, F_1 \, p_1^\mu p_1^\nu + F_2\, p_1^\mu p_2^\nu + F_3\, p_1^\mu p_3^\nu \, \right] \nonumber \\ &+ \bar{u}(p_2)\, {p \hspace{-1.0ex} /}_3 u(p_1) \left[\, F_4 \, p_2^\mu p_1^\nu + F_5\, p_2^\mu p_2^\nu + F_6\, p_2^\mu p_3^\nu \, \right] \nonumber \\ &+ \bar{u}(p_2)\, {p \hspace{-1.0ex} /}_3 u(p_1) \left[\, F_7 \, p_3^\mu p_1^\nu + F_8\, p_3^\mu p_2^\nu + F_9\, p_3^\mu p_3^\nu \, \right] \nonumber \\ &+ \bar{u}(p_2)\, \gamma^\mu u(p_1) \left[\, F_{10}\, p_1^\nu + F_{11}\, p_2^\nu + F_{12}\, p_3^\nu \, \right] \nonumber \\ &+ \bar{u}(p_2)\, \gamma^\nu u(p_1) \left[\, F_{13}\, p_1^\mu + F_{14}\, p_2^\mu + F_{15}\, p_3^\mu \, \right] \nonumber \\ &+ \bar{u}(p_2)\, \gamma^\mu {p \hspace{-1.0ex} /}_3 \gamma^\nu u(p_1)\, F_{16} \nonumber \\ &+ \bar{u}(p_2)\, \gamma^\nu {p \hspace{-1.0ex} /}_3 \gamma^\mu u(p_1)\, F_{17}\,,\end{aligned}$$ where the form factors $F_1\,,...\,,F_{17}$ are scalar functions of the Mandelstam variables $s,t,p_3^2,p_4^2$ and of the number of space-time dimensions $d$. To further constrain $S^{\mu \nu}$, we choose the Landau gauge for the electroweak vector bosons with the transversality condition $$\epsilon_3\cdot p_3 = \epsilon_4 \cdot p_4 = 0\,, \label{gauge}$$ and the sum over polarisations $$\begin{aligned} &\sum_{pol} (\epsilon_3^\mu)^* \epsilon_3^\nu = - g^{\mu \nu} + \frac{p_3^\mu p_3^\nu}{p_3^2}\,,\nonumber \\ &\sum_{pol} (\epsilon_4^\mu)^* \epsilon_4^\nu = - g^{\mu \nu} + \frac{p_4^\mu p_4^\nu}{p_4^2}\,. 
\label{polsum}\end{aligned}$$ Imposing condition  we can reduce the number of independent tensor structures to ten [@Denner:1988tv; @Diener:1997nx], which can be chosen as $$\begin{aligned} &T_1^{\mu \nu} = \bar{u}(p_2)\, {p \hspace{-1.0ex} /}_3 u(p_1)\, p_1^\mu p_1^\nu\,, \qquad T_2^{\mu \nu} = \bar{u}(p_2)\, {p \hspace{-1.0ex} /}_3 u(p_1)\, p_1^\mu p_2^\nu\,, \nonumber \\ &T_3^{\mu \nu} = \bar{u}(p_2)\, {p \hspace{-1.0ex} /}_3 u(p_1)\, p_2^\mu p_1^\nu\,, \qquad T_4^{\mu \nu} = \bar{u}(p_2)\, {p \hspace{-1.0ex} /}_3 u(p_1)\, p_2^\mu p_2^\nu\,, \nonumber \\ &T_5^{\mu \nu} = \bar{u}(p_2)\, \gamma^\mu u(p_1)\, p_1^\nu \,, \qquad \hspace{0.32cm} T_6^{\mu \nu} = \bar{u}(p_2)\, \gamma^\mu u(p_1)\, p_2^\nu\,,\nonumber \\ &T_7^{\mu \nu} = \bar{u}(p_2)\, \gamma^\nu u(p_1)\, p_1^\mu \,, \qquad \hspace{0.32cm} T_8^{\mu \nu} = \bar{u}(p_2)\, \gamma^\nu u(p_1)\, p_2^\mu\,,\nonumber \\ &T_9^{\mu \nu} = \bar{u}(p_2)\, \gamma^\mu {p \hspace{-1.0ex} /}_3 \gamma^\nu u(p_1)\,, \qquad T_{10}^{\mu \nu} = \bar{u}(p_2)\, \gamma^\nu {p \hspace{-1.0ex} /}_3 \gamma^\mu u(p_1)\,. \label{tensors}\end{aligned}$$ Without any loss of generality we can thus write the partonic current as $$\begin{aligned} S^{\mu\nu}(p_1,p_2,p_3) &= \sum_{j=1}^{10}\, A_{j}(s,t,p_3^2,p_4^2)\, T_j^{\mu \nu} \,, \label{tensorstr}\end{aligned}$$ where we introduced the new physical form factors $A_1,...,A_{10}$, which are again scalar functions of the Mandelstam variables $s,t,p_3^2,p_4^2$ and of the dimension $d$. Note that in deriving  no assumption has been made on the dimensionality $d$, such that this decomposition is valid for any continuous values of $d$. Its structure has been constrained using solely Lorentz and gauge invariance and is therefore true at every order in perturbation theory. On the other hand, the scalar coefficients $A_{j}(s,t,p_3^2,p_4^2)$ contain the explicit dependence on the perturbative order at which they are computed. 
These coefficients can be extracted from the amplitude by applying appropriate projection operators to the latter. The projectors themselves can be expanded in the same tensorial basis: $$\begin{aligned} P_j^{\mu \nu} = \sum_{i=1}^{10} B_{ji}\, ( T_i^{\mu \nu} )^\dagger\,, \qquad j=1,\ldots,10\,, \label{proj}\end{aligned}$$ where the coefficients $B_{ji}$ are functions of the Mandelstam variables $s,t,p_3^2,p_4^2$ and of the dimension $d$. They can be determined by imposing that $$\sum_{pol}\,P_j^{\mu_1 \mu_2}\,\left[ \epsilon_{3\mu_1}\,\epsilon_{4\,\mu_2}\, \epsilon_{3\nu_1}^*\,\epsilon_{4\,\nu_2}^*\, \right] S^{\nu_1 \nu_2} = A_j\,.$$ Note that the contraction is performed in $d$ dimensions and at every stage one should always remember to use the polarisation sum in~\eqref{polsum}. For later convenience we also introduce the following scalar quantities: $$\begin{aligned} \tau_i = \sum_{pol} \, \left(T_{i}^{\mu_1 \mu_2}\right)^\dagger \,\left[ \epsilon_{3\mu_1}\,\epsilon_{4\,\mu_2}\, \epsilon_{3\nu_1}^*\,\epsilon_{4\,\nu_2}^*\, \right] S^{\nu_1 \nu_2}\,, \label{tauj}\end{aligned}$$ which are related to the coefficients $A_j$ according to $$A_j = \sum_{i=1}^{10} B_{ji}\, \tau_i\,, \label{ajintauj}$$ with the same coefficients $B_{ji}$ as in~\eqref{proj}. These quantities (rather than the coefficients $A_j$) are particularly useful in order to build up the contractions of the $n$-loop amplitudes with the tree-level ones (see Appendix \[sec:equalmass\]). We provide explicit expressions for $B_{ji}$ in computer readable format on [[HepForge]{}]{}.
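The determination of the $B_{ji}$ amounts to inverting the Gram matrix $G_{ij}$ of the tensor structures, formed with the same polarisation-summed contraction as the $\tau_i$, so that $A_j = \sum_i (G^{-1})_{ji}\,\tau_i$. The toy sketch below illustrates the linear algebra only; the numerical Gram matrix is a stand-in of our own, not the actual one, whose entries are rational functions of $s,t,p_3^2,p_4^2$ and $d$:

```python
import numpy as np

# Toy stand-in for the Gram matrix G_ij of the tensor structures:
# in the real calculation its entries are rational functions of
# s, t, p3^2, p4^2 and d; here we just use numbers for illustration.
gram = np.array([[4.0, 1.0, 0.0],
                 [1.0, 3.0, 1.0],
                 [0.0, 1.0, 2.0]])

# The projector coefficients B_ji are fixed by requiring that P_j
# contracted with S gives A_j, i.e. B = G^{-1}.
B = np.linalg.inv(gram)

# Pretend these are the true form factors of a toy "amplitude"...
A_true = np.array([1.5, -2.0, 0.5])
# ...then the contractions tau_i = sum_j G_ij A_j are what one computes,
tau = gram @ A_true
# and A_j = sum_i B_ji tau_i recovers the form factors.
A = B @ tau
assert np.allclose(A, A_true)
```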
The partonic current receives contributions from QCD radiative corrections and can be decomposed perturbatively as $$S_{\mu \nu}(p_1,p_2,p_3) = S_{\mu \nu}^{(0)}(p_1,p_2,p_3) + \left( \frac{\alpha_s}{2 \pi} \right)S_{\mu \nu}^{(1)}(p_1,p_2,p_3) + \left( \frac{\alpha_s}{2 \pi} \right)^2S_{\mu \nu}^{(2)}(p_1,p_2,p_3) + \mathcal{O}(\alpha_s^3)\,.$$ The un-renormalised tensor coefficients $A_j$ (or, equivalently, the $\tau_j$) obviously have the same perturbative expansion as the partonic amplitude, $$\begin{aligned} \label{Ajperturb} A_j &= A_j^{(0)} + \left( \frac{\alpha_s}{2 \pi} \right)A_j^{(1)} + \left( \frac{\alpha_s}{2 \pi} \right)^2 A_j^{(2)} + \mathcal{O}(\alpha_s^3)\,, \\ \tau_j &= \tau_j^{(0)} + \left( \frac{\alpha_s}{2 \pi} \right) \tau_j^{(1)} + \left( \frac{\alpha_s}{2 \pi} \right)^2 \tau_j^{(2)} + \mathcal{O}(\alpha_s^3)\,,\end{aligned}$$ where the dependence on the Mandelstam variables is again implicit. Helicity amplitudes for qq' -> V1 V2 -> 4 leptons {#sec:hel} ======================================================== In physical applications we are interested in the process $$q(p_1) + \bar{q}'(p_2) \rightarrow V_1(p_3) + V_2(p_4) \rightarrow l_5(p_5) + \bar{l}_6(p_6) + l_7(p_7) + \bar{l}_8(p_8)$$ where each of the two off-shell electroweak vector bosons can decay to pairs of leptons, such that $p_3= p_5+p_6$ and $p_4=p_7+p_8$. Let us first focus on the general structure of the helicity amplitudes for this process.
Schematically these amplitudes can be written as the product of the partonic current $S^{\mu \nu}$, and two leptonic currents $L_{\mu}\,,L_{\nu}$, mediated by the propagators of the two off-shell vector bosons $P^{V}_{\mu \nu}(q)$ $$\widetilde{{ {\rm M} }} (p_5,p_6,p_7,p_8; p_1,p_2) = S^{\mu \nu}(p_1,p_2,p_3)\, P^{V_1}_{\mu \rho}(p_3) L_{\rho}(p_5,p_6)\, P^{V_2}_{\nu \sigma}(p_4)\, L_\sigma(p_7,p_8)\,,$$ where we stripped off electroweak couplings not relevant here and postpone their discussion to the presentation of the full amplitude in  below. In the $R_\xi$-gauges the propagator of a vector boson $V$ reads $$P_{\mu \nu}^V(q) = \frac{i\, \Delta^V_{\mu \nu}(q,\xi)}{D_V(q)}\,,$$ with $$\Delta^V_{\mu \nu}(q,\xi) = \left( - g_{\mu \nu} + (1 - \xi) \frac{q_\mu q_\nu}{q^2 - \xi m_V^2}\right)\,,$$ $$\begin{aligned} &D_{\gamma^*}(q) = q^2\,,\qquad D_{Z,W}(q) = (q^2 - m_V^2 + i\,\Gamma_V m_V)\,,\end{aligned}$$ where $m_V$ is the mass of the gauge boson and $\Gamma_V$ is its decay width. While the Landau gauge used in the previous Section corresponds to $\xi\to 0$, the term proportional to $(1-\xi)$ effectively vanishes for any $\xi$ since the electroweak vector bosons are directly coupled to massless fermion lines. By fixing the helicities of the incoming partons and of the outgoing leptons one sees that the left- and right-handed partonic production currents can be written as $$\begin{aligned} &S^{\mu \nu}_L(p_1^-,p_2^+,p_3) = \bar{v}_+(p_2) \Gamma^{\mu \nu} u_-(p_1) = \langle 2\, | \Gamma^{\mu \nu} | \,1\, ]\,, \\ &S^{\mu \nu}_R(p_1^+,p_2^-,p_3) = \bar{v}_-(p_2) \Gamma^{\mu \nu} u_+(p_1) = [ 2\, | \Gamma^{\mu \nu} | \,1\, \rangle \,,\end{aligned}$$ where the $\Gamma^{\mu \nu}$ are rank two-tensors and contain an odd number of $\gamma$-matrices. 
We note in passing that, by complex conjugation, one gets $$\begin{aligned} \left[ S^{\mu \nu}_R(p_1^+,p_2^-,p_3) \right]^* = \left( \, [ 2\, | \Gamma^{\mu \nu} | \,1\, \rangle\, \right)^* = \langle 2\, | \Gamma^{\mu \nu} | \,1\, ] = S^{\mu \nu}_L(p_1^-,p_2^+,p_3) \,, \qquad \mbox{for all} \; \Gamma^{\mu \nu}\,.\end{aligned}$$ The left- and right-handed leptonic decay currents, on the other hand, can be written as $$\begin{aligned} &L^\mu_L(p_5^-,p_6^+) = \bar{u}_-(p_5) \,\gamma^{\mu} \, v_+(p_6) = [ 6\, | \gamma^{\mu } | \,5\, \rangle = \langle 5\, | \gamma^{\mu } | \,6\, ] \,,\label{LepCurrL}\\ &L^\mu_R(p_5^+,p_6^-) = \bar{u}_+(p_5) \,\gamma^{\mu}\, v_-(p_6)= [ 5\, | \gamma^{\mu } | \,6\, \rangle = \left( L^\mu_L(p_5^-,p_6^+) \right)^* = L_L^\mu(p_6^-,p_5^+)\,. \label{LepCurrR}\end{aligned}$$ Note in particular that, as far as the lepton currents are concerned, a permutation of the external momenta corresponds to a flip of the helicity. All possible helicity amplitudes can be therefore obtained from the two basic amplitudes $$\begin{aligned} { {\rm M} }_{LLL}(p_1,p_2;p_5,p_6,p_7,p_8) &= S_L^{\mu \nu}(p_1^-,p_2^+,p_3)\,L_{L\mu}(p_5^-,p_6^+) L_{L\nu}(p_7^-,p_8^+)\,,\\ { {\rm M} }_{RLL}(p_1,p_2;p_5,p_6,p_7,p_8) &= S_R^{\mu \nu}(p_1^+,p_2^-,p_3)\,L_{L\mu}(p_5^-,p_6^+) L_{L\nu}(p_7^-,p_8^+)\,,\end{aligned}$$ by simple permutations of the leptonic momenta. 
In particular we find $$\begin{aligned} &{ {\rm M} }_{LLR}(p_1,p_2;p_5,p_6,p_7,p_8) = { {\rm M} }_{LLL}(p_1,p_2;p_5,p_6,p_8,p_7) \,, \nonumber \\ &{ {\rm M} }_{LRL}(p_1,p_2;p_5,p_6,p_7,p_8) = { {\rm M} }_{LLL}(p_1,p_2;p_6,p_5,p_7,p_8) \,, \nonumber \\ &{ {\rm M} }_{LRR}(p_1,p_2;p_5,p_6,p_7,p_8) = { {\rm M} }_{LLL}(p_1,p_2;p_6,p_5,p_8,p_7)\,, \nonumber \\ &{ {\rm M} }_{RLR}(p_1,p_2;p_5,p_6,p_7,p_8) = { {\rm M} }_{RLL}(p_1,p_2;p_5,p_6,p_8,p_7) \,, \nonumber \\ &{ {\rm M} }_{RRL}(p_1,p_2;p_5,p_6,p_7,p_8) = { {\rm M} }_{RLL}(p_1,p_2;p_6,p_5,p_7,p_8) \,, \nonumber \\ &{ {\rm M} }_{RRR}(p_1,p_2;p_5,p_6,p_7,p_8) = { {\rm M} }_{RLL}(p_1,p_2;p_6,p_5,p_8,p_7)\,.\end{aligned}$$ In order to put together the helicity amplitudes in their final form we also need to take into account the electroweak couplings of the gauge bosons to the partonic and leptonic currents, which we have kept implicit so far. We follow [@Denner:1991kt] and parametrise the coupling of a vector boson $V$ to a fermion pair $f_1 f_2$ as $$\mathcal{V}_\mu^{V f_1 f_2} = i\, e\, \Gamma_\mu^{V f_1 f_2}\,, \qquad \mbox{where} \quad e = \sqrt{4\,\pi\,\alpha} \quad \mbox{is the positron charge}\,,$$ such that all fermion charges are expressed in units of $e$ and $$\Gamma_\mu^{V f_1 f_2} = L_{f_1 f_2}^V \, \gamma_\mu \left( \frac{1-\gamma_5}{2}\right) + R_{f_1 f_2}^V \, \gamma_\mu \left( \frac{1+\gamma_5}{2}\right)\,.$$ For a purely vectorial interaction the left- and right-handed couplings are equal.
Depending on the different kinds of gauge bosons we have $$\begin{aligned} {2} L_{f_1f_2}^\gamma &= -e_{f_1} \,\delta_{f_1f_2} & R_{f_1f_2}^\gamma &= -e_{f_1} \,\delta_{f_1f_2}\,, \label{gLBcoupl} \\ L_{f_1f_2}^Z &= \frac{I_3^{f_1} - \sin^2 {\theta_w} e_{f_1}}{\sin{\theta_w} \cos{\theta_w}} \,\delta_{f_1f_2}\,, &\qquad R_{f_1f_2}^Z &= -\frac{\sin{\theta_w} e_{f_1}}{\cos{\theta_w}} \,\delta_{f_1f_2}\,, \label{ZLRcoupl} \\ L_{f_1f_2}^W &= \frac{1 }{\sqrt{2}\, \sin{\theta_w}} \,\epsilon_{f_1f_2} \,,& R_{f_1f_2}^W &= 0\,, \label{WLRcoupl}\end{aligned}$$ where again the charges $e_i$ are measured in terms of the fundamental electric charge $e>0$ and $\epsilon_{f_1f_2}$ is unity for $f_1\neq f_2$, but belonging to the same isospin doublet and respecting charge conservation, and zero otherwise. Putting everything together we find for the two independent helicity amplitudes for $q\bar{q}' \to V_1 V_2 \to l_5 \bar{l}_6 l_7 \bar{l}_8$ $$\begin{aligned} \mathcal{M}_{\lambda LL}^{V_1 V_2}(p_1, p_2;p_5,p_6,p_7,p_8) &= (4 \pi \alpha)^2 \; \frac{ L_{l_5 l_6}^{V_1} L_{l_7 l_8}^{V_2} } {D_{V_1}(p_3)D_{V_2}(p_4)}\; { {\rm M} }_{\lambda LL}(p_1, p_2;p_5,p_6,p_7,p_8)\,, \label{HelAmpl}\end{aligned}$$ where $\lambda = L,R$ and we have bracketed out the tree-level dependence on the electric charge $i (4 \pi \alpha)^2$ and on the couplings with the decay lepton currents. Obviously the corresponding helicity amplitudes for right-handed leptonic currents can be obtained by the simple exchange $L_{f_i f_j}^{V} \leftrightarrow R_{f_i f_j}^V$ together with $p_i \leftrightarrow p_j$. Once the tensor structure  is given, we can perform the contraction with the leptonic decay currents and fix the helicities of the incoming and outgoing fermions. This enables us to cast the two independent helicity amplitudes ${ {\rm M} }_{LLL}$ and ${ {\rm M} }_{RLL}$ in the familiar spinor-helicity notation [@Dixon:1996wi; @Dixon:1998py]. 
In doing so, one assumes that the external states are 4-dimensional and this allows to obtain one further Schouten identity between the 10 tensors structures, such that one ends up with 9 independent form factors. Our derivation is spelled out in detail in Appendix \[sec:schouten\]. As a result, we obtain $$\begin{aligned} { {\rm M} }_{LLL}(p_1,p_2;p_5,p_6,p_7,p_8) &= [ 1\, {p \hspace{-1.0ex} /}_3\, 2 \rangle\, \Big\{ E_1\, \langle 15 \rangle \langle 17 \rangle [16][18] \nonumber \\ & + E_2\, \langle 15 \rangle \langle 27 \rangle [16][28] + E_3\, \langle 25 \rangle \langle 17 \rangle [26][18] \nonumber \\ & + E_4\, \langle 25 \rangle \langle 27 \rangle [26][28] \, + E_5 \langle 5 7 \rangle [ 68 ] \Big\}\nonumber \\ &+ E_6\, \langle 15 \rangle \langle 27 \rangle [16][18] + E_7\, \langle 25 \rangle \langle 27 \rangle [26][18] \nonumber \\ &+ E_8\, \langle 25 \rangle \langle 17 \rangle [16][18] + E_9\, \langle 25 \rangle \langle 27 \rangle [16][28]\,, \label{MLLL}\end{aligned}$$ $$\begin{aligned} { {\rm M} }_{RLL}(p_1,p_2;p_5,p_6,p_7,p_8) &= [ 2\, {p \hspace{-1.0ex} /}_3\, 1 \rangle\, \Big\{ E_1\, \langle 15 \rangle \langle 17 \rangle [16][18] \nonumber \\ & + E_2\, \langle 15 \rangle \langle 27 \rangle [16][28] + E_3\, \langle 25 \rangle \langle 17 \rangle [26][18] \nonumber \\ & + E_4\, \langle 25 \rangle \langle 27 \rangle [26][28] \, + E_5 \langle 5 7 \rangle [ 68 ] \Big\}\nonumber \\ &+ E_6\, \langle 15 \rangle \langle 17 \rangle [16][28] + E_7\, \langle 25 \rangle \langle 17 \rangle [26][28] \nonumber \\ &+ E_8\, \langle 15 \rangle \langle 17 \rangle [26][18] + E_9\, \langle 15 \rangle \langle 27 \rangle [26][28]\,, \label{MRLL}\end{aligned}$$ where $$[ 1\, {p \hspace{-1.0ex} /}_3\, 2 \rangle = [15] \langle 52 \rangle + [16] \langle 62 \rangle \,, \qquad [ 2\, {p \hspace{-1.0ex} /}_3\, 1 \rangle = [25] \langle 51 \rangle + [26] \langle 61 \rangle\,,$$ and the 9 form factors $E_j$ are linear combinations of the form factors $A_j$ $$\begin{aligned} E_1 &= A_1\,, & 
E_6 &= 2 \, A_7 + \frac{2\,(u - p_3^2)}{s}\left( A_9 - A_{10} \right)\,, \nonumber \\ E_2 &= A_2 + \frac{2}{s}\left( A_9 - A_{10} \right)\,, & E_7 &= 2 \, A_8 - \frac{2\,(t - p_3^2)}{s}\left( A_9 - A_{10} \right)\,, \nonumber \\ E_3 &= A_3 - \frac{2}{s} \left( A_9 - A_{10} \right)\,, & E_8 &= 2 \, A_5 - \frac{2}{s} \left[ ( u-s -p_3^2 ) A_9 + (t - p_4^2) A_{10} \right]\,, \nonumber \\ E_4 &= A_4\,, & E_9 &=2 \, A_6 - \frac{2}{s} \left[ (t- s - p_3^2) A_{10} + (u - p_4^2) A_{9} \right]\,. \nonumber \\ E_5 &= 2 \left( A_9 + A_{10} \right)\,. && \label{EjinAj}\end{aligned}$$ In the following, we will consider a perturbative expansion of the form factors $E_j$ defined completely analogously to that of the coefficients $A_j$ in~\eqref{Ajperturb}. We note that the expressions  and  are formally identical to the corresponding formulas derived in [@Caola:2014iua], such that our form factors $E_j$ can be mapped one to one onto the $F_j$ defined in [@Caola:2014iua]. In the next Section we will describe how to compute the form factors $A_1,\ldots,A_{10}$, and therefore also the form factors $E_1,\ldots,E_9$, at tree-level, one-loop and two-loop order, following a straightforward diagrammatic approach. In particular, we will discuss the different electroweak coupling arrangements ${ \mathcal{C} }$ contributing to the functions $A_j$ and $E_j$, $$\begin{aligned} \label{AdecompEW} A_j &= \delta_{i_1 i_2} \sum_{{ \mathcal{C} }} Q^{\lambda,V_1V_2,[{ \mathcal{C} }]}_{q\,q'} A_j^{[{ \mathcal{C} }]}, \qquad j = 1,\ldots,10,\notag\\ E_j &= \delta_{i_1 i_2} \sum_{{ \mathcal{C} }} Q^{\lambda,V_1V_2,[{ \mathcal{C} }]}_{q\,q'} E_j^{[{ \mathcal{C} }]}, \qquad j = 1,\ldots,9,\end{aligned}$$\ where $Q^{\lambda,V_1V_2,[{ \mathcal{C} }]}_{q\,q'}$ denotes a coupling factor, $\lambda$ is the helicity of the incoming quark, and $i_1$, $i_2$ are the colours of the incoming quark and anti-quark, respectively. We want to stress once more an important point.
Reducing the 10 coefficients $A_j$ to the 9 coefficients $E_j$ required the assumption that the external states can be treated as 4-dimensional. In order to avoid any loss of information, we will treat the $A_j$ as the fundamental objects (derived in $d$ dimensions throughout) and refer to formulas~\eqref{EjinAj} in order to reconstruct the $E_j$ explicitly. Organisation of the calculation {#sec:calc} =============================== The calculation of the two-loop helicity amplitudes can be set up in a way that is independent of the nature of the vector bosons considered, by organising the Feynman diagrams contributing to any such process into different classes. We find in particular that, as long as we limit ourselves to QCD corrections only, at any given number of loops, seven different types of diagrams can contribute, depending on the arrangement of the external vector bosons. ![\[fig:diagclass\] Representative Feynman diagrams for classes $A$, $B$, $C$ and $F_V$ relevant for the production of two electroweak vector bosons at the two-loop level. All of these classes receive contributions both from planar and non-planar diagrams. ](diagclass){width="85.00000%"} Class $\mathbf{A}$ : collects all those diagrams where both vector bosons are attached to the external fermion line, such that $V_1$ is adjacent to the quark $q(p_1)$. In the case of a left-handed (right-handed) quark amplitude these diagrams are proportional to $L_{q\,q''}^{V_1}\,L_{q''q'}^{V_2}$ ($R_{q\,q''}^{V_1}\,R_{q''q'}^{V_2}$). Class $\mathbf{B}$ : collects all those diagrams where both vector bosons are attached to the external fermion line, such that $V_1$ is adjacent to the antiquark $\bar{q}'(p_2)$. These diagrams too, in the case of a left-handed (right-handed) quark amplitude, are proportional to $L_{q'q''}^{V_1}\,L_{q''q}^{V_2}$ ($R_{q'q''}^{V_1}\,R_{q''q}^{V_2}$). Class $\mathbf{C}$ : contains instead all diagrams where both vector bosons are attached to a fermion loop.
These diagrams are proportional to the charge-weighted sum of the quark flavours, which we denote as $N_{V_1 V_2}$, depending on the nature of the final-state bosons. In the general case, these diagrams yield two different contributions. In the first one, which is proportional to the sum of the vector-vector and the axial-axial couplings, all dependence on $\gamma_5$ cancels out. The vector-axial contribution, instead, is linear in $\gamma_5$. Nevertheless, this last contribution is expected to vanish identically for massless quarks running in the loops, for any choice of $V_1$ and $V_2$, due to charge parity conservation [@Glover:1988rg; @Glover:1988fe; @Caola:2014iua; @Melnikov:2015laa]. Taking this into account we find $$\begin{aligned} \label{coupl1} N_{\gamma \gamma} &= \frac{1}{2}\sum_{i} \left[ \left( L_{q_i q_i}^\gamma \right)^2 + \left( R_{q_i q_i}^\gamma \right)^2 \right], & N_{Z \gamma} &= \frac{1}{2} \sum_{i} \left( L_{q_i q_i}^Z L_{q_i q_i}^\gamma + R_{q_i q_i}^Z R_{q_i q_i}^\gamma \right), \nonumber\\ N_{ZZ} &= \frac{1}{2} \sum_{i} \left[ \left( L_{q_i q_i}^Z\right)^2 + \left(R_{q_i q_i}^Z \right)^2 \right], & N_{WW} &= \frac{1}{2} \sum_{i,\,j} \left( L_{q_i q_j}^W L_{q_j q_i}^W \right),\end{aligned}$$ where the indices $i,j$ run over the flavours of the quarks in the loop and $L_{q_i q_i}^\gamma = R_{q_i q_i}^\gamma$. Of course, $N_{\gamma\gamma}=\sum_{i} e_{q_i}^2$ and, due to charge conservation, $N_{W \gamma} = N_{W Z} = 0\,.$ Class $\mathbf{D_1}$ : contains all diagrams where $V_1$ is attached to a fermion loop and $V_2$ to the external fermion line. Up to two loops, the diagrams in this class must sum up to zero due to Furry’s theorem. Class $\mathbf{D_2}$ : contains all diagrams where $V_2$ is attached to a fermion loop and $V_1$ to the external fermion line. At two loops the diagrams in this class, as in the previous case, must sum up to zero due to Furry’s theorem.
Class $\mathbf{E}$ : contains all diagrams where $V_1$ and $V_2$ are attached to two different fermion loops. These diagrams contribute only starting from three-loop order and we can ignore them. Classes $\mathbf{F_V}$ : collect the form-factor diagrams where the production of the two vector bosons $V_1,V_2$ is mediated by the exchange of another vector boson $V$. Depending on the type of vector bosons $V_1,V_2$ there can be more than one such class due to different intermediate vector bosons. In the case of a left-handed (right-handed) quark amplitude these diagrams are proportional to $L_{q\,q'}^V\, c_{V\, V_1 V_2}$ ($R_{q\,q'}^V\, c_{V\, V_1 V_2}$), where $c_{V\, V_1 V_2}$ is the electroweak coupling of the triple gauge boson vertex defined for all particles and momenta outgoing as $$\begin{aligned} \mathcal{V}_{V\, V_1 V_2}^{\rho \mu \nu}(a,b,c) = i\,e\,c_{V\,V_1V_2}\, \left[ \, (a-b)^\nu g^{\mu \rho} + (b-c)^\rho g^{\mu \nu} + (c-a)^\mu g^{\nu \rho}\, \right]\end{aligned}$$ with $$\begin{aligned} \label{coupl2} c_{\gamma W^\pm W^\mp} &= c_{W^\mp\gamma W^\pm} = c_{W^\pm W^\mp\gamma} = \pm 1\,, \notag\\ c_{Z W^\pm W^\mp} &= c_{W^\mp Z W^\pm} = c_{W^\pm W^\mp Z} = \mp \cot\theta_w \,.\end{aligned}$$ It is clear that, depending on the nature of the vector bosons $V_1, \,V_2$ and on the loop order, not all of the classes above will give a non-zero contribution. At tree level, for example, only classes $A$, $B$ and $F_V$ can contribute. The same is also true at one loop, provided that we limit ourselves to QCD corrections only. The situation changes at two loops, where diagrams of classes $C,\,D_1$ and $D_2$ also occur. Notice moreover that the form-factor type diagrams in class $F_V$ are relevant only for the production of $W\gamma$, $W Z$ or $WW$ pairs. Up to two loops, we can thus restrict the summation in (\[AdecompEW\]) to ${ \mathcal{C} }=A,B,C,F_V$. We show representative diagrams in Figure \[fig:diagclass\].
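Given the couplings in~\eqref{gLBcoupl} and~\eqref{ZLRcoupl}, the charge-weighted sums in~\eqref{coupl1} are straightforward to evaluate. Below is a short sketch with an illustrative value of $\sin^2\theta_w$ and our own function names, which also checks the relation $N_{\gamma\gamma}=\sum_i e_{q_i}^2$ quoted above:

```python
import math

sw2 = 0.23  # illustrative value of sin^2(theta_w), not a fit result
sw, cw = math.sqrt(sw2), math.sqrt(1 - sw2)

# (electric charge e_q, weak isospin I_3) for the quarks in the loop
quarks = {"u": (2/3, 1/2), "d": (-1/3, -1/2),
          "c": (2/3, 1/2), "s": (-1/3, -1/2)}

def L_gamma(eq, I3): return -eq
def R_gamma(eq, I3): return -eq
def L_Z(eq, I3): return (I3 - sw2 * eq) / (sw * cw)
def R_Z(eq, I3): return -sw * eq / cw

N_gaga = 0.5 * sum(L_gamma(*q)**2 + R_gamma(*q)**2 for q in quarks.values())
N_Zga  = 0.5 * sum(L_Z(*q) * L_gamma(*q) + R_Z(*q) * R_gamma(*q)
                   for q in quarks.values())
N_ZZ   = 0.5 * sum(L_Z(*q)**2 + R_Z(*q)**2 for q in quarks.values())

# consistency check from the text: N_gammagamma = sum_q e_q^2
assert abs(N_gaga - sum(eq**2 for eq, _ in quarks.values())) < 1e-12
```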
For the coupling factors we have $$\begin{aligned} Q^{L,V_1V_2,[A]}_{q\,q'} & = L_{q\,q''}^{V_1}\,L_{q''q'}^{V_2}\;, & Q^{R,V_1V_2,[A]}_{q\,q'} & = R_{q\,q''}^{V_1}\,R_{q''q'}^{V_2}\;, \nonumber \\ Q^{L,V_1V_2,[B]}_{q\,q'} & = L_{q'q''}^{V_1}\,L_{q''q}^{V_2}\;,& Q^{R,V_1V_2,[B]}_{q\,q'} &= R_{q'q''}^{V_1}\,R_{q''q}^{V_2}\;, \nonumber \\ Q^{L,V_1V_2,[C]}_{q\,q'} & = N_{V_1V_2} \delta_{q\,q'}\;, & Q^{R,V_1V_2,[C]}_{q\,q'} &= N_{V_1V_2} \delta_{q\,q'} \;, \nonumber \\ Q^{L,V_1V_2,[F_V]}_{q\,q'} & = \frac{L_{q\,q'}^V c_{VV_1V_2}}{s-m_V^2-i\,\Gamma_V\,m_V} \;, & Q^{R,V_1V_2,[F_V]}_{q\,q'} & = \frac{R_{q\,q'}^V c_{VV_1V_2}}{s-m_V^2-i\,\Gamma_V\,m_V} \;. \label{coupl3}\end{aligned}$$ With these definitions, the value of the coefficients $A_j^{[F_V],(n)}$ do not depend on the nature of the mediating vector boson $V$, such that in particular $$A_j^{[F_\gamma],(n)} = A_j^{[F_Z],(n)}= A_j^{[F_W],(n)} = A_j^{[F],(n)}\,.$$ We have computed the coefficients $A_j$ for the different classes of diagrams contributing at tree level, one loop and two loops, namely $A_j^{[{ \mathcal{C} }],(0)}$, $A_j^{[{ \mathcal{C} }],(1)}$, $A_j^{[{ \mathcal{C} }],(2)}$, with ${ \mathcal{C} }= A,B,C,D_1,D_2,F$. At tree-level order we find $$\begin{aligned} A_7^{[A],(0)} &= -\frac{2}{t}\,,& A_{10}^{[A],(0)} &= +\frac{1}{t}\,, & A_j^{[A],(0)} &= 0\,, \quad j=1,...,6,8,9\,, \nonumber \\ A_8^{[B],(0)} &= +\frac{2}{u}\,, & A_9^{[B],(0)} &= -\frac{1}{u}\,, & A_j^{[B],(0)} &= 0\,, \quad j=1,...,7,10\,, \nonumber \\ A_7^{[F],(0)} &= A_8^{[F],(0)} = +2\,, & A_9^{[F],(0)} &= A_{10}^{[F],(0)} = -1\,, & A_j^{[F],(0)} &= 0\,, \quad j=1,...,6\,. 
\label{HelAmplTree}\end{aligned}$$ We can notice immediately that, as far as the form-factor type diagrams are concerned, any $n$-loop QCD corrections will not modify the structure of , and in particular we have $$\begin{aligned} A_j^{[F],(n)} &= { \mathcal{F} }^{(n)}(s) A_j^{[F],(0)} \label{HelAmplFV}\end{aligned}$$ where ${ \mathcal{F} }^{(n)}(s)$ are the $n$-loop QCD corrections to the quark form-factor. Let us discuss the features of our $E_j$ set of coefficients, which is relevant for the four-dimensional helicity amplitudes for the full $2\to4$ process. We consider crossings of external legs described by the permutations $$\begin{aligned} &\pi_{12} := p_1 \leftrightarrow p_2 \Rightarrow \{\,t \leftrightarrow u \,\} \nonumber \\ &\pi_{34} := p_3 \leftrightarrow p_4 \Rightarrow \{\, t \leftrightarrow u\,, \;\; p_3^2 \leftrightarrow p_4^2 \,\} \,.\end{aligned}$$ and focus on the behaviour of the $E_j^{[{ \mathcal{C} }]}$ for the non-trivial cases ${ \mathcal{C} }= A,B,C$. From the exchange of quark and anti-quark, $\pi_{12}$ we find for the amplitudes $$\begin{aligned} { {\rm M} }^{[A]}_{LLL} = - { {\rm M} }^{[B]}_{RLL}(p_1 \leftrightarrow p_2)\,, \quad { {\rm M} }^{[C]}_{LLL} = - { {\rm M} }^{[C]}_{RLL}(p_1 \leftrightarrow p_2)\,,\end{aligned}$$ from which one can directly obtain $$\begin{aligned} E_1^{[A]}(s,t,p_3^2,p_4^2) &= - E_4^{[B]}(s,u,p_3^2,p_4^2)\,, & E_8^{[A]}(s,t,p_3^2,p_4^2) &= - E_9^{[B]}(s,u,p_3^2,p_4^2)\,, \nonumber \\ E_2^{[A]}(s,t,p_3^2,p_4^2) &= - E_3^{[B]}(s,u,p_3^2,p_4^2)\,, & E_9^{[A]}(s,t,p_3^2,p_4^2) &= - E_8^{[B]}(s,u,p_3^2,p_4^2)\,, \nonumber \\ E_3^{[A]}(s,t,p_3^2,p_4^2) &= - E_2^{[B]}(s,u,p_3^2,p_4^2)\,, & E_1^{[C]}(s,t,p_3^2,p_4^2) &= - E_4^{[C]}(s,u,p_3^2,p_4^2)\,, \nonumber \\ E_4^{[A]}(s,t,p_3^2,p_4^2) &= - E_1^{[B]}(s,u,p_3^2,p_4^2)\,, & E_2^{[C]}(s,t,p_3^2,p_4^2) &= - E_3^{[C]}(s,u,p_3^2,p_4^2)\,, \nonumber \\ E_5^{[A]}(s,t,p_3^2,p_4^2) &= - E_5^{[B]}(s,u,p_3^2,p_4^2)\,, & E_5^{[C]}(s,t,p_3^2,p_4^2) &= - 
E_5^{[C]}(s,u,p_3^2,p_4^2)\,, \nonumber \\ E_6^{[A]}(s,t,p_3^2,p_4^2) &= - E_7^{[B]}(s,u,p_3^2,p_4^2)\,, & E_6^{[C]}(s,t,p_3^2,p_4^2) &= - E_7^{[C]}(s,u,p_3^2,p_4^2)\,, \nonumber \\ E_7^{[A]}(s,t,p_3^2,p_4^2) &= - E_6^{[B]}(s,u,p_3^2,p_4^2)\,, & E_8^{[C]}(s,t,p_3^2,p_4^2) &= - E_9^{[C]}(s,u,p_3^2,p_4^2)\,, \label{Ejx12}\end{aligned}$$ From exchange of the external vector bosons, $\pi_{34}$, we have $$\begin{aligned} { {\rm M} }^{[A]}_{\lambda LL} = { {\rm M} }^{[B]}_{\lambda LL}(p_3 \leftrightarrow p_4)\,, \quad { {\rm M} }^{[C]}_{\lambda LL} = { {\rm M} }^{[C]}_{\lambda LL}(p_3 \leftrightarrow p_4)\,, \quad \mbox{with} \quad \lambda = L,R\,,\end{aligned}$$ which implies $$\begin{aligned} E_1^{[A]}(s,t,p_3^2,p_4^2) &= - E_1^{[B]}(s,u,p_4^2,p_3^2)\,, & E_9^{[A]}(s,t,p_3^2,p_4^2) &= + E_7^{[B]}(s,u,p_4^2,p_3^2)\,, \nonumber\\ E_2^{[A]}(s,t,p_3^2,p_4^2) &= - E_3^{[B]}(s,u,p_4^2,p_3^2)\,, & E_1^{[C]}(s,t,p_3^2,p_4^2) &= - E_1^{[C]}(s,u,p_4^2,p_3^2)\,, \nonumber\\ E_3^{[A]}(s,t,p_3^2,p_4^2) &= - E_2^{[B]}(s,u,p_4^2,p_3^2)\,, & E_2^{[C]}(s,t,p_3^2,p_4^2) &= - E_3^{[C]}(s,u,p_4^2,p_3^2)\,, \nonumber\\ E_4^{[A]}(s,t,p_3^2,p_4^2) &= - E_4^{[B]}(s,u,p_4^2,p_3^2)\,, & E_4^{[C]}(s,t,p_3^2,p_4^2) &= - E_4^{[C]}(s,u,p_4^2,p_3^2)\,, \nonumber\\ E_5^{[A]}(s,t,p_3^2,p_4^2) &= - E_5^{[B]}(s,u,p_4^2,p_3^2)\,, & E_5^{[C]}(s,t,p_3^2,p_4^2) &= - E_5^{[C]}(s,u,p_4^2,p_3^2)\,, \nonumber\\ E_6^{[A]}(s,t,p_3^2,p_4^2) &= + E_8^{[B]}(s,u,p_4^2,p_3^2)\,, & E_6^{[C]}(s,t,p_3^2,p_4^2) &= + E_8^{[C]}(s,u,p_4^2,p_3^2)\,, \nonumber\\ E_7^{[A]}(s,t,p_3^2,p_4^2) &= + E_9^{[B]}(s,u,p_4^2,p_3^2)\,, & E_7^{[C]}(s,t,p_3^2,p_4^2) &= + E_9^{[C]}(s,u,p_4^2,p_3^2)\,, \nonumber\\ E_8^{[A]}(s,t,p_3^2,p_4^2) &= + E_6^{[B]}(s,u,p_4^2,p_3^2)\,, & &\phantom{= + E_6^{[B]}(s,u,p_4^2,p_3^2)} \label{Ejx34}\end{aligned}$$ Similar but slightly more involved relations can be derived for the primary set of coefficients, $A_j$, but we don’t list them here for brevity. 
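The crossing relations in~\eqref{Ejx12} have a simple combinatorial structure: $\pi_{12}$ exchanges classes $A$ and $B$ (and maps $C$ to itself) together with a fixed permutation of the index $j$ and an overall sign. A small sketch (the encoding is our own) makes the self-consistency of the list explicit, since $\pi_{12}$ must be an involution:

```python
# Encode the pi_12 relations E_j^[X](s,t,...) = - E_k^[Y](s,u,...)
# as a map (class, j) -> (class, k); the index map read off the list
# above is the same for the A<->B and C<->C entries.
swap_j = {1: 4, 2: 3, 3: 2, 4: 1, 5: 5, 6: 7, 7: 6, 8: 9, 9: 8}

def pi12(cls, j):
    partner = {"A": "B", "B": "A", "C": "C"}[cls]
    return partner, swap_j[j]

# pi_12 exchanges p1 <-> p2 and hence t <-> u; applying it twice must
# give back the original coefficient (the two signs, both -1, multiply
# to +1, and swapping t <-> u twice is the identity).
for cls in "ABC":
    for j in range(1, 10):
        assert pi12(*pi12(cls, j)) == (cls, j)
```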
We have explicitly verified that relations , for the $E_j$ and the corresponding relations for the $A_j$ hold for our results at tree level, one loop and two loops. While most of the coefficients $A_j$ are zero at tree level, fewer of the $E_j$ have this property. We find for class A $$\begin{aligned} \label{HelAmplTreeEA} E_1^{[A],(0)} &= 0,& E_2^{[A],(0)} &= -\frac{2}{s t},& E_3^{[A],(0)} &= \frac{2}{s t},\nonumber\\ E_4^{[A],(0)} &= 0,& E_5^{[A],(0)} &= \frac{2}{t},& E_6^{[A],(0)} &= -2 \frac{(s - t + p_4^2)}{s t},\nonumber\\ E_7^{[A],(0)} &= 2 \frac{(t - p_3^2)}{s t},& E_8^{[A],(0)} &= -2 \frac{(t - p_4^2)}{s t},& E_9^{[A],(0)} &= 2 \frac{(s - t + p_3^2)}{s t}, \\ \intertext{for class B} \label{HelAmplTreeEB} E_1^{[B],(0)} &= 0,& E_2^{[B],(0)} &= -\frac{2}{s u},& E_3^{[B],(0)} &= \frac{2}{s u},\nonumber\\ E_4^{[B],(0)} &= 0,& E_5^{[B],(0)} &= -\frac{2}{u},& E_6^{[B],(0)} &= -2 \frac{(u - p_3^2)}{s u},\nonumber\\ E_7^{[B],(0)} &= 2 \frac{(s - u + p_4^2)}{s u},& E_8^{[B],(0)} &= -2 \frac{(s - u + p_3^2)}{s u},& E_9^{[B],(0)} &= 2 \frac{(u - p_4^2)}{s u}, \\ \intertext{and for class F} \label{HelAmplTreeEF} E_1^{[F],(0)} &= 0,& E_2^{[F],(0)} &= 0,& E_3^{[F],(0)} &= 0,\nonumber\\ E_4^{[F],(0)} &= 0,& E_5^{[F],(0)} &= -4,& E_6^{[F],(0)} &= +4,\nonumber\\ E_7^{[F],(0)} &= +4,& E_8^{[F],(0)} &= -4,& E_9^{[F],(0)} &= -4.\end{aligned}$$ As discussed above, class C contributions enter only at the two-loop level. The calculation of the coefficients $A_j$ and thus $E_j$ proceeds as follows. The diagrams belonging to class $F_V$ are known [@Gehrmann:2005pd]. They do not have to be recomputed and we will not refer to them anymore here. As far as the other classes are concerned, we produced all the tree-level, one-loop and two-loop Feynman diagrams with [[Qgraf]{}]{} [@Nogueira:1991ex]. The scalar coefficients $A_j$ are evaluated analytically diagram by diagram by applying the projectors defined in  and summing over the polarisations of the external vector bosons as in . 
For the gluons we employ the Feynman-’t Hooft gauge. All these manipulations are consistently performed in $d$ dimensions. Upon doing this we obtain the coefficients in terms of a large number of scalar two-loop Feynman integrals. The latter are classified into three integral families, two planar and one non-planar. We have made use of [[Reduze]{}]{}2 [@vonManteuffel:2012np; @Studerus:2009ye; @Bauer:2000cp; @fermat] in order to map all integrals to these integral families and to perform a full integration-by-parts reduction [@Tkachov:1981wb; @Chetyrkin:1981qh; @Gehrmann:1999as; @Laporta:2001dd] of the latter to a set of master integrals. All intermediate algebraic manipulations on the Feynman diagrams have been performed using [[Form]{}]{} [@Vermaseren:2000nd]. Once the coefficients $A_j$ for the different classes of diagrams are known at the different loop orders, one can calculate the form factors $E_j$ using~\eqref{EjinAj}. Since the expressions for the coefficients $A_j$ (and equivalently those for the $E_j$) at two loops are very lengthy, we decided not to include them explicitly in the text. Analytical expressions for the $A_j$, prior to UV renormalisation and IR subtraction, expressed as linear combinations of master integrals and retaining full dependence on the dimension $d$, are available on our project page at [[HepForge]{}]{}. Master integrals {#sec:masters} ================ Computation via differential equations {#sec:tradpolylogs} -------------------------------------- We computed all two-loop master integrals needed for our process with the method of differential equations [@Kotikov:1990kg; @Remiddi:1997ny; @Caffo:1998du; @Gehrmann:1999as] and optimised the solutions for fast and precise numerical evaluations [@Duhr:2012fh; @Bonciani:2013ywa; @Gehrmann:2014bfa]. The master integrals for the case $p_3^2=p_4^2$ were first calculated in [@Gehrmann:2013cxs; @Gehrmann:2014bfa].
Here, we consider the case $p_3^2\neq p_4^2$, for which the master integrals have been computed for the first time in [@Henn:2014lfa; @Caola:2014lpa]. Our calculation provides an independent check of these results and improves them for numerical applications. In this Section we present our calculation and discuss qualitative aspects of the results. We provide the explicit solutions in computer readable format on [[HepForge]{}]{}. We find that all master integrals are described by the integral families presented in [@Gehrmann:2014bfa] for the case $p_3^2\neq p_4^2$ and crossings thereof. We start by determining a set of linearly independent master integrals for all relevant topologies using [[Reduze]{}]{}2 [@vonManteuffel:2012np]. For convenience, we stick to the normal form definitions for the master integrals given in [@Henn:2014lfa; @Caola:2014lpa]. We supplement these definitions with new normal form definitions for eight factorisable topologies corresponding to products of one-loop integrals. All our definitions are supplied in computer readable form on [[HepForge]{}]{}. We consider the master integrals of all integral families at the same time and eliminate multiple variants of equivalent master integrals using the shift-finder of [[Reduze]{}]{}2. For this purpose we also identify crossed topologies and work out relations between crossed and uncrossed master integrals. Ignoring crossed variants and counting product topologies as two-loop topologies, we find a total number of 84 independent master integrals. To apply the method of differential equations, we also include crossed versions of a couple of integrals, which appear in sub-topologies of non-planar topologies. In this way we assemble a minimal set of 111 master integrals suitable for the construction of a system of differential equations.
We compute the partial derivatives of the master integrals with respect to all independent external invariants $s$, $t$, $p_3^2$, $p_4^2$ in terms of master integrals with the help of Reduze. The coefficients contain rational functions of the invariants and the Källén function $\kappa$ associated with the two-body phase space. To rationalise the root $\kappa$, we employ the parametrisation $$\begin{aligned} \label{n34params} s &= {\bar{m}}^2 (1+{\bar{x}})^2, & p_3^2 &= {\bar{m}}^2 {\bar{x}}^2 (1-{\bar{y}}^2),\nonumber\\ t &= -{\bar{m}}^2 {\bar{x}}((1+{\bar{y}})(1+{\bar{x}}{\bar{y}}) - 2 {\bar{z}}{\bar{y}}(1+{\bar{x}})), & p_4^2 &= {\bar{m}}^2 (1 - {\bar{x}}^2 {\bar{y}}^2)\,,\end{aligned}$$ (see eq. (2.9) of [@Caola:2014lpa]). In this parametrisation, we define the vector of master integrals $\vec{M}=(M_i)$, $i=1,\ldots,111$, using the integral measure $$\label{measure} \left(\frac{C_\epsilon}{16 \pi^2}\right)^{-2}\, ({\bar{m}}^2)^{2 \epsilon } \int \frac{d^d k}{(2 \pi)^d}\frac{d^d l}{(2 \pi)^d}$$ which absorbs the overall mass dimension ${\bar{m}}$. Here, $d = 4-2\epsilon$ and $$C_\epsilon = (4 \pi)^\epsilon \, \frac{\Gamma(1+ \epsilon) \, \Gamma^2(1-\epsilon)}{\Gamma(1-2\epsilon)}\,.$$ In the following, we will work directly in the physical region of phase space.
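As a consistency check (our own addition, not part of the original derivation), one can verify symbolically that this parametrisation indeed rationalises the Källén root, which becomes $\kappa = 2\,{\bar{m}}^2\,{\bar{x}}\,{\bar{y}}\,(1+{\bar{x}})$ on the chosen branch:

```python
import sympy as sp

x, y, m2 = sp.symbols('x y m2', positive=True)

# Invariants in the parametrisation of eq. (2.9) of Caola et al.
s  = m2 * (1 + x)**2
p3 = m2 * x**2 * (1 - y**2)
p4 = m2 * (1 - x**2 * y**2)

# Källén function lambda(a,b,c) = a^2 + b^2 + c^2 - 2ab - 2ac - 2bc
lam = s**2 + p3**2 + p4**2 - 2*s*p3 - 2*s*p4 - 2*p3*p4

# The square root becomes the rational expression kappa = 2 m2 x y (1 + x)
kappa = 2 * m2 * x * y * (1 + x)
assert sp.simplify(lam - kappa**2) == 0
```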
Due to the specific choice of the master integrals [@Henn:2013pwa; @Kotikov:2010gf], the partial differential equations combine into the simple total differential, $${\mathrm{d}}\vec{M}(\epsilon; {\bar{x}}, {\bar{y}}, {\bar{z}}) = \epsilon \sum_{k=1}^{20} A_k {\mathrm{d}}\ln(\bar{l}_k)\, \vec{M}(\epsilon; {\bar{x}}, {\bar{y}}, {\bar{z}})$$ where the matrices $A_k$ contain just rational numbers and the alphabet is $$\begin{aligned} \label{n34alphabet} \{ \bar{l}_1,\ldots,\bar{l}_{20} \} = \{ & 2, {\bar{x}}, 1 + {\bar{x}}, 1 - {\bar{y}}, {\bar{y}}, 1 + {\bar{y}}, 1 - {\bar{x}}{\bar{y}}, 1 + {\bar{x}}{\bar{y}}, 1 - {\bar{z}}, {\bar{z}}, \notag\\ & 1 + {\bar{y}}- 2 {\bar{y}}{\bar{z}}, 1 - {\bar{y}}+ 2 {\bar{y}}{\bar{z}}, 1 + {\bar{x}}{\bar{y}}- 2 {\bar{x}}{\bar{y}}{\bar{z}}, 1 - {\bar{x}}{\bar{y}}+ 2 {\bar{x}}{\bar{y}}{\bar{z}}, \notag\\ & 1 + {\bar{y}}+ {\bar{x}}{\bar{y}}+ {\bar{x}}{\bar{y}}^2 - 2 {\bar{y}}{\bar{z}}- 2 {\bar{x}}{\bar{y}}{\bar{z}}, 1 + {\bar{y}}- {\bar{x}}{\bar{y}}- {\bar{x}}{\bar{y}}^2 - 2 {\bar{y}}{\bar{z}}+ 2 {\bar{x}}{\bar{y}}{\bar{z}}, \notag\\ & 1 - {\bar{y}}- {\bar{x}}{\bar{y}}+ {\bar{x}}{\bar{y}}^2 + 2 {\bar{y}}{\bar{z}}+ 2 {\bar{x}}{\bar{y}}{\bar{z}}, 1 - {\bar{y}}+ {\bar{x}}{\bar{y}}- {\bar{x}}{\bar{y}}^2 + 2 {\bar{y}}{\bar{z}}- 2 {\bar{x}}{\bar{y}}{\bar{z}}, \notag\\ & 1 - 2 {\bar{y}}- {\bar{x}}{\bar{y}}+ {\bar{y}}^2 + 2 {\bar{x}}{\bar{y}}^2 - {\bar{x}}{\bar{y}}^3 + 4 {\bar{y}}{\bar{z}}+ 2 {\bar{x}}{\bar{y}}{\bar{z}}+ 2 {\bar{x}}{\bar{y}}^3 {\bar{z}}, \notag\\ & 1 - {\bar{y}}- 2 {\bar{x}}{\bar{y}}+ 2 {\bar{x}}{\bar{y}}^2 + {\bar{x}}^2 {\bar{y}}^2 - {\bar{x}}^2 {\bar{y}}^3 + 2 {\bar{y}}{\bar{z}}+ 4 {\bar{x}}{\bar{y}}{\bar{z}}+ 2 {\bar{x}}^2 {\bar{y}}^3 {\bar{z}}\}\,.\end{aligned}$$ Anticipating the solution, we included the letter 2 already, which is of course arbitrary at the level of the differential equations. 
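The order-by-order integration of such an $\epsilon$-factorised system can be illustrated with a small toy example (invented $2\times 2$ matrices and a two-letter alphabet, not one of the actual 111-integral systems): truncating $\vec{M} = \sum_k \epsilon^k \vec{M}^{(k)}$ turns the system into a tower of iterated integrals.

```python
import numpy as np
from scipy.integrate import solve_ivp, quad

# Toy epsilon-form system dM/dx = eps * (A1/x + A2/(1+x)) M with invented matrices
A1 = np.array([[0., 1.], [0., 0.]])
A2 = np.array([[0., 0.], [1., 0.]])
eps = 0.05
M0 = np.array([1.0, 1.0])           # boundary value at x = 1

def a(x):
    return A1 / x + A2 / (1 + x)

# Reference: direct numerical integration of the full system from x=1 to x=2
sol = solve_ivp(lambda x, M: eps * a(x) @ M, (1.0, 2.0), M0, rtol=1e-10, atol=1e-12)
exact = sol.y[:, -1]

# Order-by-order solution: M = M0 + eps*M1 + eps^2*M2 + O(eps^3),
# M1(x) = int_1^x a(t) M0 dt (logarithms), M2(x) = int_1^x a(t) M1(t) dt
def M1(x):
    return (A1 * np.log(x) + A2 * np.log((1 + x) / 2)) @ M0

def M2(x):
    return np.array([quad(lambda t: (a(t) @ M1(t))[i], 1.0, x)[0] for i in range(2)])

series = M0 + eps * M1(2.0) + eps**2 * M2(2.0)
assert np.allclose(series, exact, atol=1e-4)   # agreement up to O(eps^3)
```

At weight two and beyond the iterated integrals are exactly the multiple polylogarithms discussed next.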
While we found that it is possible to reduce the number of letters by forming appropriate ratios, a reduction of the alphabet is best performed using a different parametrisation, as we will see below. After expansion in $\epsilon$ it is straightforward to integrate the differential equations in terms of multiple polylogarithms $$\operatorname{G}(w_1,\dots,w_n; z) = \int_0^z \frac{{\mathrm{d}}t}{t - w_1} \operatorname{G}(w_2,\dots,w_n;t),$$ with $\operatorname{G}(0,\dots,0;z) = \frac{1}{n!}\ln^n(z)$ for $n$ zero weights and $\operatorname{G}(;z)=1$. For each order in $\epsilon$, we integrate the partial derivatives in ${\bar{z}}$. This gives the solution up to a function of ${\bar{x}}$ and ${\bar{y}}$. We employ the partial derivatives in ${\bar{x}}$ to determine this function, this time up to a function of ${\bar{y}}$. Subsequent usage of the derivative in ${\bar{y}}$ fixes the boundary terms up to one constant per master integral for the given order in $\epsilon$. Despite the presence of nonlinearities in the parametrisation, the specific order of our integrations ensures that in fact only *linear* denominators occur in the respective integration variable. We integrate the master integrals through to weight 4, which corresponds to $\epsilon^4$ terms in the chosen normalisation. The necessary argument-change transformations for the multiple polylogarithms were derived using an in-house package, which employs fitting of constants using high-precision samples obtained with [@Vollinga:2004sn]. In order to fix the integration constants, we consider the equal-mass limit $p_4^2 \to p_3^2$, which implies ${\bar{x}}\to 1$. This limit is smooth and our master integrals become simple linear combinations of the normal form integrals defined in [@Gehrmann:2014bfa], where the coefficients in this map are just rational numbers. We compute the limit at the level of our solutions and equate them to the real-valued solutions of [@Gehrmann:2014bfa].
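For orientation, the recursive definition of $\operatorname{G}$ above can be evaluated numerically in a naive way (our own sketch, adequate only at low weight and for arguments with no singularity of the integrand inside the integration path):

```python
import math
from scipy.integrate import quad

def G(w, z):
    """Multiple polylogarithm G(w_1,...,w_n; z) by naive recursive quadrature.
    Assumes real arguments and no pole of 1/(t - w_1) inside (0, z)."""
    if not w:
        return 1.0
    if all(wi == 0 for wi in w):
        return math.log(z)**len(w) / math.factorial(len(w))
    val, _ = quad(lambda t: G(w[1:], t) / (t - w[0]), 0.0, z)
    return val

# Weight-1 and weight-2 checks: G(1; z) = ln(1-z) and G(0,1; z) = -Li_2(z)
assert abs(G([1], 0.5) - math.log(0.5)) < 1e-6
li2_half = math.pi**2 / 12 - math.log(2)**2 / 2   # classical value of Li_2(1/2)
assert abs(G([0, 1], 0.5) + li2_half) < 1e-6
```

Production-grade evaluation, as discussed below, instead relies on the GiNaC implementation of [@Vollinga:2004sn].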
Using the coproduct-augmented symbol calculus [@2001math.3059G; @Brown:2008um; @Goncharov:2010jf; @Duhr:2011zq; @Duhr:2012fh; @Duhr:2014woa], we find perfect agreement for all non-constant terms and easily fix the boundary constants of the present integrals. We also compared our results to the solutions of [@Henn:2014lfa; @Caola:2014lpa] and find perfect agreement at the analytical level. The solutions we obtained in this way are not yet ideal for our purposes, since their numerical evaluation is rather slow. Moreover, they contain spurious structures: the individual multiple polylogarithms contribute the letters $\{1 - {\bar{x}}, 1 + {\bar{x}}{\bar{y}}^2, 2 + {\bar{x}}+ {\bar{x}}{\bar{y}}^2, 1 + 2 {\bar{x}}+ {\bar{x}}^2 {\bar{y}}^2\}$, which cancel for the master integrals themselves. In particular, the equal-virtuality limit ${\bar{x}}\to 1$ is completely smooth, as can be seen from the differential equations, but the representation does not allow for an evaluation exactly at the equal-virtuality point.

Optimisation of the functional basis {#sec:optpolylogs}
------------------------------------

We wish to cast our solutions into a new representation which allows for fast and stable numerical evaluations and is free of spurious letters. In order to achieve that goal we select a new basis of multiple polylogarithms, where we no longer force individual variables into the argument of $\operatorname{G}$ functions. As a side effect, this gives us more freedom for a rational parametrisation, since we avoid problems due to non-linear denominators in the integration variable. It is convenient to choose new variables $x$, $y$, $z$ and $m^2$ according to $$\label{n12params} s = m^2 (1+x) (1+x y),\quad t = -m^2 x z,\quad p_3^2 = m^2,\quad p_4^2 = m^2 x^2 y$$ (see eq. (2.7) of [@Caola:2014lpa]), which again rationalises the root $\kappa$.
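As before, one can check symbolically (a consistency check we add here, not part of the original text) that the root is rational in the new variables, now simply $\kappa = m^2\,x\,(1-y)$ on the branch selected below:

```python
import sympy as sp

x, y, m2 = sp.symbols('x y m2', positive=True)

# Invariants in the parametrisation of eq. (2.7) of Caola et al.
s  = m2 * (1 + x) * (1 + x*y)
p3 = m2
p4 = m2 * x**2 * y

# Källén function of (s, p3^2, p4^2)
lam = s**2 + p3**2 + p4**2 - 2*s*p3 - 2*s*p4 - 2*p3*p4
assert sp.simplify(lam - (m2 * x * (1 - y))**2) == 0
```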
We select the branch for which in the physical domain $$x > 0,\quad 0 < y < z < 1,\quad m^2 > 0.$$ This reparametrisation is actually not crucial for what follows, but it decreases the number of irreducible polynomial letters, which will be convenient for our mapping procedure. Under crossings of external legs the parameters transform as $$\begin{aligned} {3} \pi_{12}:\quad z &\to 1 + y - z && && \\ \pi_{34}:\quad z &\to 1 + y - z,&\quad x &\to 1/(x y),&\quad m^2 &\to m^2 x^2 y\,.\end{aligned}$$ In this parametrisation we factor out a normalisation of the form given above, but with ${\bar{m}}$ replaced by $m$. We find the alphabet $$\begin{aligned} \label{n12alphabet} \{ l_1,\ldots,l_{17} \} = \{ & x, 1 + x, y, 1 - y, z, 1 - z, -y + z, 1 + y - z, 1 + x y, 1 + x z, x y + z, \notag\\ & 1 + y + x y - z, 1 + x + x y - x z, 1 + y + 2 x y - z + x^2 y z, \notag\\ & 2 x y + x^2 y + x^2 y^2 + z - x^2 y z, 1 + x + y + x y + x y^2 - z - x z - x y z, \notag\\ & 1 + y + x y + y^2 + x y^2 - z - y z - x y z \}\end{aligned}$$ at the level of the differential equations and also of the solutions through to weight 4. This alphabet is shorter than the previous one and cannot be reduced further by forming ratios. We construct a new functional basis consisting of $\operatorname{Li}_{2,2}$ functions, classical polylogarithms $\operatorname{Li}_n$ $(n=2,3,4)$ and logarithms, similar to the approach taken in [@Bonciani:2013ywa; @Gehrmann:2014bfa]. The $\operatorname{Li}_{2,2}$ function can be written in $\operatorname{G}$-function notation according to $$\operatorname{Li}_{2,2}(x_1,x_2) = \operatorname{G}\left(0, \frac{1}{x_1}, 0, \frac{1}{x_1 x_2}; 1\right) \,.$$ Following the algorithm of [@Duhr:2011zq] we generate functional arguments which are rational functions of $x$, $y$, $z$ and do not lead to new spurious letters. This implies that the arguments factorise into the letters of our alphabet and their inverses.
We can therefore systematically scan for admissible $\operatorname{Li}_n$ arguments by constructing power products of letters, their inverses and $-1$. A candidate argument $x_1$ is admissible exactly if $1-x_1$ factorises into the letters of our alphabet and $-1$, since only in that case is the introduction of new letters avoided. Admissible arguments for $\operatorname{Li}_{2,2}$ functions are determined by forming pairs of admissible $\operatorname{Li}_n$ arguments and requiring for any such pair $(x_1,x_2)$ that the difference $x_1-x_2$ factorises into the original letters and $-1$. For the amplitude we also need to evaluate independent master integrals with crossed kinematics, and we chose to implement these expressions explicitly for evaluation-time optimisation purposes. We therefore directly construct a shared set of basis functions for uncrossed and crossed variants of the master integrals and consequently close our alphabet under $\pi_{12}$ and $\pi_{34}$ by adding the letters $$\label{n12cross} \{ l_{18},l_{19} \} = \{ -x y + z + x z + x y z, -y + z + y z + x y z \}.$$ We require all functions to be single-valued and real over the entire physical region of phase space. As in [@Gehrmann:2014bfa], we further tighten this constraint and select only those $\operatorname{Li}_{2,2}(x_1,x_2)$ functions for which their power series representation $$\begin{aligned} \operatorname{Li}_{2,2}(x_1,x_2) &= \sum_{j_1 = 1}^{\infty}\sum_{j_2 = 1}^{\infty} \frac{x_1^{j_1}}{(j_1+j_2)^2} \frac{(x_1 x_2)^{j_2}}{j_2^2}\end{aligned}$$ is convergent, that is, their arguments fulfil $$|x_1| < 1\,, \qquad |x_1 x_2| < 1\,.$$ We wish to express our master integrals in terms of these new functions and employ the coproduct-augmented symbol calculus for this mapping, see [@Duhr:2011zq; @Duhr:2012fh; @Duhr:2014woa]. This step is computationally demanding due to the large number of possible candidate functions.
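For orientation (our own illustration, not part of the implementation), inside its convergence region the double series can be evaluated by brute-force truncation and checked against the standard stuffle identity $\operatorname{Li}_2(x_1)\operatorname{Li}_2(x_2) = \operatorname{Li}_{2,2}(x_1,x_2) + \operatorname{Li}_{2,2}(x_2,x_1) + \operatorname{Li}_4(x_1 x_2)$:

```python
import mpmath as mp

mp.mp.dps = 30   # working precision well beyond the truncation error

def li22(x1, x2, N=80):
    """Truncated double series for Li_{2,2}; valid for |x1| < 1, |x1 x2| < 1."""
    total = mp.mpf(0)
    for j1 in range(1, N + 1):
        for j2 in range(1, N + 1):
            total += x1**j1 / (j1 + j2)**2 * (x1 * x2)**j2 / j2**2
    return total

# Stuffle check at x1 = x2 = 1/2
x1 = x2 = mp.mpf('0.5')
lhs = mp.polylog(2, x1) * mp.polylog(2, x2)
rhs = li22(x1, x2) + li22(x2, x1) + mp.polylog(4, x1 * x2)
assert abs(lhs - rhs) < 1e-15
```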
Here, we profit from the reduction of the number of letters described above, which leads to a smaller set of candidate integrals for a given maximal total degree of the arguments. Furthermore, we employ a particularly efficient technique for the symbol calculus, where we identify and match individual factors of products directly at the level of the symbol [@vonManteuffel:2013vja]. In particular, this means we never need to construct products of polylogarithms for our candidate functions, which avoids a severe combinatorial blowup for the linear algebra routines. Using the coproduct we were able to express all master integrals in terms of our new set of functions described above. We stress that the success of this matching is not a priori obvious. The explicit solutions for all of the master integrals are provided on HepForge. Concerning our primary motivation for changing the functional basis, we observe that the new representation indeed allows for significantly faster numerical evaluations. For the numerical evaluation of the multiple polylogarithms we employ the implementation [@Vollinga:2004sn] in the GiNaC library [@Bauer:2000cp]. The exact evaluation time and the speed-up due to the new functional basis depend on the chosen point in phase space and on the required precision of the result. We tested some samples and observed speed-up factors between 8 and 85 when evaluating the 111 master integrals in the system of differential equations. For the benchmark point of [@Caola:2014iua], the numerical evaluation with default precision takes 2250 ms for the “traditional” G-functions (Section \[sec:tradpolylogs\]) and 120 ms for the “optimised” functions (this Section) on a single core of a standard desktop computer.

UV renormalisation and IR subtraction {#sec:helfin}
=====================================

Let us go back to the calculation of the helicity amplitude coefficients $A_j$ (or equivalently the $E_j$).
In order to simplify the notation for what follows, we pick one of the form factors, $$\Omega = A_j ~~(\text{or~} E_j)\,, \quad \text{for some~} j =1,\ldots,10~(9)\,,$$ and suppress the index $j$. The following discussion applies to any of the chosen form factors in the same way. We perform renormalisation of the UV divergences in the standard ${\overline{ \rm MS}}$ scheme which, in massless QCD, amounts to simply replacing the bare coupling $\alpha_0$ with the renormalised one $\alpha_s = \alpha_s(\mu^2)$, where $\mu^2$ is the renormalisation scale. Since in our case the tree-level amplitudes do not contain any power of $\alpha_s$, we require only the one-loop relation for the coupling $$\alpha_0\, \mu_0^{2 \epsilon}\, S_\epsilon = \alpha_s\, \mu^{2 \epsilon} \left [ 1 - \frac{\beta_0}{\epsilon} \left( \frac{\alpha_s}{2 \pi} \right)+ \mathcal{O}(\alpha_s^2)\right]$$ where $$S_\epsilon = (4 \pi)^\epsilon \, {\rm e}^{-\epsilon \gamma}\,, \qquad \mbox{with the Euler-Mascheroni constant} \quad \gamma = 0.5772...\,,$$ $\mu_0^2$ is the mass parameter introduced in dimensional regularisation to maintain a dimensionless coupling in the bare QCD Lagrangian density, and finally $\beta_0$ is the first coefficient of the QCD $\beta$-function $$\label{beta0} \beta_0 = \frac{11 \,C_A - 4 \,T_F\,N_f}{6}\,, \quad \mbox{with}\quad C_A = N\,, \quad C_F = \frac{N^2-1}{2\,N}\,, \quad T_F = \frac{1}{2}\,.$$ We perform UV renormalisation at the scale $\mu^2 = s$, the invariant mass of the vector boson pair. Values of the helicity coefficients at different renormalisation scales can be recovered by using the renormalisation group equation.
Since at a given loop order $n$ the form factors are defined with all powers of the strong coupling factored out, the renormalised form factors $\Omega^{(n)}$ are expressed in terms of the un-renormalised ones $\Omega^{(n),{\rm un}}$ according to $$\begin{aligned} &\Omega^{(0)} = \Omega^{(0),{\rm un}}\,,\nonumber \\ &\Omega^{(1)} = S_\epsilon^{-1}\,\Omega^{(1),{\rm un}}\,,\nonumber \\ &\Omega^{(2)} = S_\epsilon^{-2}\,\Omega^{(2),{\rm un}} - \frac{\beta_0}{\epsilon}\, S_\epsilon^{-1}\,\Omega^{(1),{\rm un}}\,. \label{UVren}\end{aligned}$$ After performing UV renormalisation, the amplitude contains residual IR singularities, which will be cancelled analytically by those occurring in radiative processes at the same order. Catani was the first to show how to organise the IR-pole structure up to two loops in QCD [@Catani:1998bh]. In subtracting the poles from the one- and two-loop amplitudes we will follow a slightly modified scheme described in [@Catani:2013tia], which is better suited for the $q_T$-subtraction formalism. The two schemes are of course equivalent, and we provide formulae to convert the results between the two schemes in Appendix \[sec:catani\].
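The relations for the $\Omega^{(n)}$ above follow mechanically from inserting the coupling renormalisation into the bare perturbative series; a short sympy check (our own sketch, with the $\mu$- and $\mu_0$-dependent factors suppressed):

```python
import sympy as sp

a, eps, b0, Se = sp.symbols('a_s epsilon beta_0 S_eps')
W0, W1, W2 = sp.symbols('Omega0 Omega1 Omega2')   # bare coefficients Omega^(n),un

# Bare series Omega = W0 + a0*W1 + a0^2*W2, with a0 = alpha_0 mu_0^{2 eps}/(2 pi);
# the one-loop coupling relation gives a0*S_eps = a*(1 - b0/eps * a) + O(a^3)
a0 = a * (1 - b0 / eps * a) / Se
Omega = sp.expand(W0 + a0 * W1 + a0**2 * W2)

# Order by order in a this reproduces eq. (UVren)
assert sp.simplify(Omega.coeff(a, 0) - W0) == 0
assert sp.simplify(Omega.coeff(a, 1) - W1 / Se) == 0
assert sp.simplify(Omega.coeff(a, 2) - (W2 / Se**2 - b0 / eps * W1 / Se)) == 0
```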
We define the IR finite amplitudes at renormalisation scale $\mu^2$ in terms of the UV renormalised ones as follows $$\begin{aligned} &\Omega^{(1),{ {\rm finite} }} = \Omega^{(1)} - I_1(\epsilon)\, \Omega^{(0)}\,,\nonumber \\ &\Omega^{(2),{ {\rm finite} }} = \Omega^{(2)} - I_1(\epsilon)\, \Omega^{(1)} - I_2(\epsilon)\, \Omega^{(0)}\,, \label{IRqT}\end{aligned}$$ with $$I_1(\epsilon) = I_1^{soft}(\epsilon) + I_1^{coll}(\epsilon) \,,$$ $$\begin{aligned} I_1^{soft}(\epsilon) &= -\frac{{ {\rm e} }^{\epsilon \gamma}}{\Gamma(1-\epsilon)} \left( \frac{\mu^2}{s}\right)^{\epsilon}\, \left( \frac{1}{\epsilon^2} + \frac{i \pi}{\epsilon} + \delta_{q_T}^{(0)} \right)\,C_F \,,\qquad I_1^{coll}(\epsilon) =- \frac{3}{2}C_F \frac{1}{\epsilon} \left( \frac{\mu^2}{s}\right)^{\epsilon}\, ,\end{aligned}$$ $$I_2(\epsilon) = - \frac{1}{2} I_1(\epsilon)^2 + \frac{\beta_0}{\epsilon} \left[ I_1(2 \epsilon) - I_1(\epsilon) \right] + K\,I_1^{soft}(2\epsilon) + H_2(\epsilon)\,,$$ $$H_2(\epsilon) = \frac{1}{4 \epsilon} \left( \frac{\mu^2}{s}\right)^{2\,\epsilon}\left( \frac{\gamma_q^{(1)}}{4} + C_F\, d_1 + \epsilon \, C_F\, \delta_{q_T}^{(1)} \right)\,,$$ and the constants are defined as $$\begin{aligned} &\delta_{q_T}^{(0)} = 0\,, \qquad K = \left( \frac{67}{18} - \frac{\pi^2}{6} \right) C_A - \frac{5}{9} N_F \,,\nonumber \\ &d_1 = \left( \frac{28}{27} - \frac{1}{3} \zeta_2 \right) N_F + \left( -\frac{202}{27} + \frac{11}{6}\,\zeta_2 + 7 \,\zeta_3 \right) C_A\,,\nonumber \\ &\delta_{q_T}^{(1)} = \frac{10}{3} \zeta_3 \,\beta_0 + \left( -\frac{1214}{81} + \frac{67}{18} \zeta_2 \right) C_A + \left( \frac{164}{81} - \frac{5}{9} \zeta_2 \right) N_F\,, \nonumber \\ & \gamma_{q}^{(1)} = \left( -3 + 24\,\zeta_2 - 48 \zeta_3 \right) C_F^2 + \left( -\frac{17}{3} - \frac{88}{3} \zeta_2 + 24\,\zeta_3 \right) \,C_F\,C_A + \left( \frac{2}{3} + \frac{16}{3} \zeta_2 \right) \, C_F\, N_F\,. \end{aligned}$$ Note that in these equations all imaginary parts are already explicit prior to expansion in $\epsilon$. 
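As a small sanity check of these definitions (our own addition), the Laurent expansion of $I_1(\epsilon)$ at $\mu^2=s$ can be generated symbolically and compared with the universal quark form-factor poles:

```python
import sympy as sp

eps, CF = sp.symbols('epsilon C_F')
g = sp.EulerGamma

# I_1(eps) at mu^2 = s, using delta_qT^(0) = 0
I1soft = -sp.exp(eps * g) / sp.gamma(1 - eps) * (1/eps**2 + sp.I * sp.pi / eps) * CF
I1coll = -sp.Rational(3, 2) * CF / eps
I1 = sp.series(I1soft + I1coll, eps, 0, 1).removeO()

# Expected poles: -CF/eps^2 - CF*(3/2 + i*pi)/eps
assert sp.simplify(I1.coeff(eps, -2) + CF) == 0
assert sp.simplify(I1.coeff(eps, -1) + CF * (sp.Rational(3, 2) + sp.I * sp.pi)) == 0
```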
Setting $\mu^2=s$, we calculated the finite remainder of the $A_j$ for $\epsilon \to 0$ in the $q_T$-subtraction scheme. We provide the explicit analytical results on our project page at HepForge. It is straightforward to convert our finite results obtained in the $q_T$-scheme to Catani’s original scheme, see Appendix \[sec:catani\].

Checks on the amplitudes {#sec:checks}
========================

We performed a number of checks on our amplitudes, which we enumerate here.

1. First of all, we started off by computing the 10 form factors $A_j$ for the different classes of diagrams ${ \mathcal{C} }=A,B,C,D_1,D_2$, and we explicitly verified that, in accordance with Furry’s theorem, the diagrams in classes $D_1$ and $D_2$ independently sum up to zero.

2. From the $A_j$ we computed the 9 form factors $E_j$, and we verified that, both prior to and after subtraction of UV and IR poles, all symmetry relations described above, and the corresponding ones for the $A_j$, are identically satisfied.

3. We verified that the poles of the one-loop and two-loop amplitudes are correctly reproduced by Catani’s formula [@Catani:1998bh], which provides a strong check on the calculation.

4. For the NNLO computation of on-shell $ZZ$ and $W^+W^-$ production [@Cascioli:2014yka; @Gehrmann:2014fva] we performed a dedicated calculation, directly for the squared amplitude, employing our equal-mass master integrals [@Gehrmann:2013cxs]. The tree and one-loop contributions have been found to agree with the analytical results of [@Mele:1990bq; @Frixione:1993yp] and with numerical samples obtained with OpenLoops [@Cascioli:2011va]. Starting from our general results for the amplitude in the off-shell case, we re-derived the squared amplitudes for on-shell $ZZ$ and $WW$ production as described in Appendix \[sec:equalmass\] and found full agreement through to two loops.

5.
We performed a thorough comparison of our results with an earlier calculation of the two-loop amplitudes for on-shell $W^+\,W^-$ production in the small-mass limit [@Chachamis:2008yb]. Starting from our results for the squared amplitude for $W^+\,W^-$ production (see Appendix \[sec:equalmass\]), we take the small-mass limit, namely $m_W^2/s \to 0$ for fixed $(t-m_W^2)/s$. Adjusting for overall conventions, we found agreement with the results obtained in [@Chachamis:2008yb] in all contributions, except for $F_i^{[C],(2)}(s,t)$ arising from the interference of two-loop diagrams in class $C$ with the tree-level diagram in class $A$. From the discussion in [@Chachamis:2008yb], we could trace this discrepancy back to a different treatment of the vector-axial contributions in the fermionic loop in class $C$, resulting in a non-vanishing remainder even for zero-mass quarks. Since this appears to be inconsistent with charge parity conservation, we have good reasons to believe that the prescription used here as well as in [@Caola:2014iua] is the correct one (see our discussion in Section \[sec:calc\]).

6. Finally, we have numerically compared results for the individual form factors $E_j$ and for the full amplitudes ${ {\rm M} }_{LLL}$ and ${ {\rm M} }_{RLL}$ at tree-level, one-loop and two-loop order with reference [@Caola:2014iua]. For the numerical evaluations of the helicity amplitudes we employed the package S@M [@Maitre:2007jq]. We find full agreement with the results reported in [@Caola:2014iua], after a mistake in the calculation of one of the form factors was corrected in that reference.

Numerical code and results {#sec:numerics}
==========================

![\[fig:ejclassa\] Real parts of the two-loop form factors $E_j^{[A],{(2)}}$ for the process $q\bar{q}'\to V_1 V_2$ as functions of the relativistic velocity, $\beta_3$, and the cosine of the scattering angle, $\cos\theta_3$, of the vector boson $V_1$.
The virtualities of the vector bosons are set to $p_4^2 = 2 p_3^2$. ](plot3deja){width="\textwidth"}

![\[fig:ejclassc\] Real parts of the two-loop form factors $E_j^{[C],{(2)}}$ for the process $q\bar{q}'\to V_1 V_2$ as functions of the relativistic velocity, $\beta_3$, and the cosine of the scattering angle, $\cos\theta_3$, of the vector boson $V_1$. The virtualities of the vector bosons are set to $p_4^2 = 2 p_3^2$. ](plot3dejc){width="\textwidth"}

We provide a C++ code for the numerical evaluation of the 9 finite form factors $E_j$ for classes $A$, $B$ and $C$. The implementation supports both evaluation in the $q_T$-scheme and in Catani’s original scheme. It also provides the (alternative) 10 form factors $A_j$. The code is set up in the form of a C++ library, which is supplemented by a simple command line interface. The code was optimised for speed and stability of the numerical evaluations, in particular by employing an appropriate functional basis for the multiple polylogarithms, see Section \[sec:optpolylogs\]. We employ C++ templates to support evaluations with three different data types: double precision, quad precision and arbitrary precision using the CLN library [@cln]. The multiple polylogarithms are evaluated via their implementation [@Vollinga:2004sn] in the GiNaC library [@Bauer:2000cp], which also employs the CLN arbitrary precision capabilities. For the benchmark point of [@Caola:2014iua] no severe cancellations due to asymptotic kinematics take place. In this case our double precision implementation is accurate and gives at least 10 significant digits for each of the $E_j$ at the two-loop level. The evaluation of all $E_j$, including crossed variants as needed for the physical amplitude, takes 150 ms on a single core of a standard desktop computer. Close to the phase space boundaries or in the high energy region, numerical cancellations lead to a significant loss of precision.
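The precision-escalation strategy used to handle such cancellations can be sketched generically as follows (an illustrative Python/mpmath sketch of the idea, not the actual C++/CLN implementation):

```python
import mpmath as mp

def adaptive_eval(f, x, target_digits=10, start_prec=16, max_prec=256):
    """Evaluate f(x) at increasing working precision until two consecutive
    evaluations agree to the requested number of significant digits."""
    prec = start_prec
    mp.mp.dps = prec
    prev = f(mp.mpf(x))
    while prec < max_prec:
        prec *= 2
        mp.mp.dps = prec
        cur = f(mp.mpf(x))
        if mp.fabs(cur - prev) <= mp.fabs(cur) * mp.mpf(10)**(-target_digits):
            return cur
        prev = cur
    raise RuntimeError("requested precision not reached")

# Toy cancellation-prone combination, severe for small x (mimics a collinear region):
# (e^x - 1 - x)/x^2 -> 1/2 + x/6 + ...
f = lambda x: (mp.exp(x) - 1 - x) / x**2
val = adaptive_eval(f, '1e-6', target_digits=10)
assert abs(val - 0.5) < 1e-6
```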
In order to detect and cure possible instabilities, we compare the results obtained from evaluations with different precision settings and adaptively increase the precision until the target precision is met. We find the method to converge even in highly collinear configurations, although one then needs to allow for a significant increase in the evaluation time. Of course, the precision check also requires additional run time for unproblematic points in the bulk of the phase space, where the double precision results are already accurate enough. For the aforementioned benchmark point we find an increase in the evaluation time to approximately 0.8 s on a single core. In Figures \[fig:ejclassa\] and \[fig:ejclassc\] we show numerical results for the class $A$ and class $C$ contributions to our 9 form factors $E_j$ at the two-loop level. Note that these results were obtained with our C++ code and thus demonstrate the high numerical reliability of our implementation. We vary the relativistic velocity, $\beta_3 = \kappa/(s + p_3^2 - p_4^2)$, and the cosine of the scattering angle, $\cos\theta_3 = (2 t + s - p_3^2 - p_4^2)/\kappa$, of the vector boson $V_1$. For the virtualities of the vector bosons we have set $p_4^2 = 2 p_3^2$. All results are for $N_f=5$ and given in the $q_T$-scheme. The class $A$ contributions in Figure \[fig:ejclassa\] show pronounced structures in the collinear regions (cf. the corresponding tree-level coefficients). In contrast, the class $C$ contributions in Figure \[fig:ejclassc\] show no such features and are rather smooth functions over the full $\beta_3$-$\cos\theta_3$ plane.

Conclusions {#sec:conc}
===========

In this paper, we presented the derivation of the two-loop massless QCD corrections to the helicity amplitudes for massive vector boson pair production in quark-antiquark annihilation. The combination with leptonic decay currents allows one to construct the two-loop QCD matrix elements relevant to four-lepton production.
In the course of this work, we computed all master integrals and optimised their representation for numerical performance. Our results obtained for the amplitudes provide a fully independent validation of a recent calculation [@Caola:2014iua]. We implemented our amplitudes in a C++ code for their fast and stable numerical evaluation, which we provide together with our analytical results for public access at <http://vvamp.hepforge.org>. This opens up the path towards precision phenomenology in gauge boson pair production and improvements of the background predictions for Higgs boson studies and searches for physics beyond the Standard Model.

Acknowledgements {#acknowledgements .unnumbered}
================

We are grateful to K. Melnikov and F. Caola for their help in checking our results against [@Caola:2014iua]. LT wishes to thank K. Melnikov for a clarifying discussion on the use of 4-dimensional Schouten identities for simplifications of spinor structures. This research was supported in part by the Swiss National Science Foundation (SNF) under contract 200020-149517, by the Research Executive Agency (REA) of the European Union under the Grant Agreement PITN–GA–2012–316704 (*HiggsTools*), and by the ERC Advanced Grant *MC@NNLO* (340983). We thank the HepForge team for providing web space for our project. The Feynman graphs in this article have been drawn with JaxoDraw [@Binosi:2003yf; @Vermaseren:1994je].

Squared amplitudes for the on-shell production of vector-boson pairs {#sec:equalmass}
====================================================================

In this Section we show how the general results described in this article can be used to obtain the squared amplitude for the process $q \bar{q}' \to V_1 V_2$ summed over spins and colours.
For the calculations of the NNLO QCD corrections to on-shell $ZZ$ [@Cascioli:2014yka] and $W^+ W^-$ [@Gehrmann:2014fva] production, we directly computed the squared amplitudes using a dedicated setup based on our solutions for the equal-mass master integrals [@Gehrmann:2013cxs; @Gehrmann:2014bfa]. We compared the results obtained in the two approaches and find full agreement. We denote the squared amplitude as $$\langle \mathcal{M} | \mathcal{M} \rangle =\mathcal{T}(s,t,p_3^2,p_4^2) = \sum_{pol, colour} \left| S_{\mu \nu}(p_1,p_2,p_3) \epsilon_3^\mu(p_3) \epsilon_4^\nu(p_4)\right|^2\,,$$ which of course can be perturbatively expanded in powers of $\alpha_s$ as $$\begin{aligned} \mathcal{T}(s,t,p_3^2,p_4^2) &=(4 \pi \alpha)^2 \, \Bigg[ \mathcal{T}^{(0)}(s,t,p_3^2,p_4^2) + \left( \frac{\alpha_s}{2 \pi} \right)\mathcal{T}^{(1)}(s,t,p_3^2,p_4^2)\nonumber \\ &+ \left( \frac{\alpha_s}{2 \pi} \right)^2 \mathcal{T}^{(2)}(s,t,p_3^2,p_4^2) + \mathcal{O}(\alpha_s^3) \Bigg]\,,\end{aligned}$$ where we have $$\begin{aligned} & \mathcal{T}^{(0)}(s,t,p_3^2,p_4^2) = \langle \mathcal{M}^{(0)} | \mathcal{M}^{(0)} \rangle\,, \\ & \mathcal{T}^{(1)}(s,t,p_3^2,p_4^2) = 2 \Re \left( \langle \mathcal{M}^{(0)} | \mathcal{M}^{(1)} \rangle \right)\,, \\ & \mathcal{T}^{(2)}(s,t,p_3^2,p_4^2) = 2 \Re \left( \langle \mathcal{M}^{(0)} | \mathcal{M}^{(2)} \rangle \right) + \langle \mathcal{M}^{(1)} | \mathcal{M}^{(1)} \rangle\,. \end{aligned}$$ It is easy to write a general expression for $\langle \mathcal{M}^{(n)} | \mathcal{M}^{(m)} \rangle$ in terms of the coefficients $A_j^{(n)}$ and $A_j^{(m)}$ or, equivalently, in terms of the $\tau_j^{(n)}$ and $\tau_j^{(m)}$, simply by contracting the general decomposition with itself and summing over colours and external polarisations. The result is quite involved and not particularly illuminating, and we decided not to include it here explicitly.
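The pattern of interference terms in $\mathcal{T}^{(1)}$ and $\mathcal{T}^{(2)}$ follows directly from squaring the expanded amplitude, as a short sympy check (our own sketch, with $\langle A | B \rangle$ modelled by $\bar{A}B$ for single terms) confirms:

```python
import sympy as sp

a = sp.symbols('a_s', positive=True)            # stands for alpha_s/(2 pi)
M0, M1, M2 = sp.symbols('M0 M1 M2', complex=True)

M = M0 + a * M1 + a**2 * M2
T = sp.expand(M * sp.conjugate(M))

# Order a^1: 2 Re(<M0|M1>) written as z + conj(z)
z01 = sp.conjugate(M0) * M1
assert sp.expand(T.coeff(a, 1) - (z01 + sp.conjugate(z01))) == 0

# Order a^2: 2 Re(<M0|M2>) + <M1|M1>
z02 = sp.conjugate(M0) * M2
assert sp.expand(T.coeff(a, 2) - (z02 + sp.conjugate(z02) + M1 * sp.conjugate(M1))) == 0
```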
This general formula is in fact needed explicitly only in order to derive the $1$-loop$\times$$1$-loop corrections $\langle \mathcal{M}^{(1)} | \mathcal{M}^{(1)} \rangle$, which can, however, also easily be extracted from automated codes, and therefore we will not consider them here. On the other hand, if we limit ourselves to the contraction of the generic $n$-loop amplitude with the tree-level one, i.e. $m=0$, the results are much more compact. In the following two sections we will discuss the two explicit cases of on-shell $ZZ$ and $WW$ production, which were used for the calculations in [@Cascioli:2014yka; @Gehrmann:2014fva].

The two-loop corrections to ZZ production
-----------------------------------------

In the case of $q \bar{q} \to ZZ$ the tree-level amplitude is given by the two diagrams belonging to classes ${ \mathcal{C} }=A,B$. As far as two-loop corrections are concerned, the classes of diagrams that can contribute to $ZZ$ production are ${ \mathcal{C} }=A,B,C$, see Section \[sec:calc\]. By contracting the tree-level diagrams with the general amplitude decomposition one easily finds $$\langle \mathcal{M}^{(0)} | \mathcal{M}^{(n)} \rangle_{ZZ} = \frac{N}{2}\left[ (L^Z_{qq})^4 + (R^Z_{qq})^4\right]\,\left( \frac{ 2\, \tau_8^{(ZZ,(n))} - \tau_{9}^{(ZZ,(n))} }{u} - \frac{ 2\, \tau_7^{(ZZ,(n))} - \tau_{10}^{(ZZ,(n))} }{t} \right)\,,$$ where $N$ is the number of colours, while $L_{qq}^Z$ and $R_{qq}^Z$ are the left- and right-handed couplings of the quarks to the $Z$ boson. Each of the $\tau_j^{(ZZ,(n))}$ can be obtained by summing over the relevant classes of diagrams, re-weighted by appropriate coupling factors, $$\tau_j^{(ZZ,(n))} = \tau_j^{[A],(n)} + \tau_j^{[B],(n)} + \widetilde{N}_{ZZ}\,\tau_j^{[C],(2)}\,,$$ where the $\tau_j^{[{ \mathcal{C} }]}$ components of the $\tau_j$ are defined by a decomposition completely analogous to that for the $A_j$, $$\widetilde{N}_{ZZ} = \frac{\left[ (L^Z_{qq})^2 + (R^Z_{qq})^2\right]}{\left[ (L^Z_{qq})^4 + (R^Z_{qq})^4\right]}\, N_{ZZ}$$ and $N_{ZZ}$ is the corresponding fermion-loop normalisation factor.
We have verified explicitly that, as far as the tree-level and one-loop corrections are concerned, we have full agreement with the results in [@Mele:1990bq]. Similar but much lengthier formulas can be derived for $\langle \mathcal{M}^{(1)} | \mathcal{M}^{(1)} \rangle_{ZZ}$, and we do not report them here for brevity.

The two-loop corrections to W+W- production
-------------------------------------------

Let us now consider the case of $q_i \bar{q}_i \to W^+ W^-$, where the index $i$ labels the flavour of the initial-state quarks, $q_i=(u,d)$. At tree level, this process receives contributions from three diagrams, one in class $A$ and the other two in class $F_V$, with $V = Z, \gamma$. Let us start from the tree-level and one-loop corrections, where only diagrams in classes ${ \mathcal{C} }=A,F_V$ can contribute. Following the notation of [@Frixione:1993yp], we separate the contributions to the squared amplitude into three different form factors $$\langle \mathcal{M}^{(0)} | \mathcal{M}^{(0)} \rangle_{i,WW} = N\left[ c_i^{tt}\, F_i^{(0)}(s,t) - c_i^{ts}\,J_i^{(0)}(s,t) + c_i^{ss} K_i^{(0)}(s,t)\right]\,, \label{decFrix0}$$ $$2 \Re\left( \langle \mathcal{M}^{(0)} | \mathcal{M}^{(1)} \rangle_{i,WW} \right) = N\left[ c_i^{tt}\, F_i^{(1)}(s,t) - c_i^{ts}\,J_i^{(1)}(s,t) + c_i^{ss} K_i^{(1)}(s,t)\right] \,. \label{decFrix1}$$ $F_i^{(n)}$ contains the squared contribution of diagrams in class ${ \mathcal{C} }=A$ (i.e. diagrams where the production of the $W^+ W^-$ pair is not mediated by a $\gamma$ or a $Z$ boson). $J_i^{(n)}$ encapsulates instead the interference of the $F_V$-type diagrams (i.e. those where the $W^+ W^-$ pair is produced via a $\gamma$ or a $Z$ virtual boson) with diagrams in class ${ \mathcal{C} }=A$. Finally, $K_i^{(n)}$ is given by the interference of the $F_V$-type diagrams with themselves.
Again, following closely [@Frixione:1993yp], we then define $$\begin{aligned} &c_i^{tt} = \frac{1}{16\, \sin^4{\theta_w}}\,,\nonumber \\ &c_i^{ts} = \frac{1}{4\,s\, \sin^2{\theta_w}} \left( e_{q_i} - c_{ZW^+W^-}\,L^Z_{q_i q_i} \frac{s}{s-m_Z^2} \right)\,,\nonumber\\ &c_i^{ss} = \frac{1}{s^2}\, \left[ \left( e_{q_i} - \frac{c_{ZW^+W^-} (L^Z_{q_i q_i} + R^Z_{q_i q_i}) }{2} \frac{s}{s-m_Z^2} \right)^2 + \left( \frac{c_{ZW^+W^-}(L^Z_{q_i q_i} - R^Z_{q_i q_i})}{2} \frac{s}{s-m_Z^2} \right)^2 \right] \label{couplFrix}\end{aligned}$$ where, as always, $e_{q_i}$ is the quark charge in units of $e$, with $e>0$, and the electroweak couplings $L_{q_i q_i}^Z$, $R_{q_i q_i}^Z$ and $c_{ZW^+W^-}$ are defined in  and . At two loops the decomposition  must be enlarged, since diagrams belonging to class ${ \mathcal{C} }= C$ also start contributing to the amplitude. We therefore write the two-loop contribution as follows $$\begin{aligned} 2 \Re\left( \langle \mathcal{M}^{(0)} | \mathcal{M}^{(2)} \rangle_{i,WW} \right) = N&\Big[c_i^{tt}\, F_i^{(2)}(s,t) + c_i^{[C],tt}\, F_i^{[C],(2)}(s,t) \nonumber \\ &- c_i^{ts}\,J_i^{(2)}(s,t) - c_i^{[C],ts}\,J_i^{[C],(2)}(s,t) + c_i^{ss} K_i^{(2)}(s,t)\Big]\,, \label{decFrix2} \end{aligned}$$ where we introduced the new couplings $$\begin{aligned} &c_i^{[C],tt} = \frac{1}{32\, \sin^4{\theta_w}} N_g\,,\nonumber \\ &c_i^{[C],ts} = \frac{1}{4\,s\, \sin^2{\theta_w}} \left( e_{q_i} - \frac{c_{ZW^+W^-}\,\left( L^Z_{q_i q_i} + R^Z_{q_i q_i} \right)}{2} \frac{s}{s-m_Z^2} \right)N_g\,. \label{couplFrix2}\end{aligned}$$ Here, the new form factors $F_i^{[C],(2)}(s,t)$ and $J_i^{[C],(2)}(s,t)$ contain the contribution from the two-loop diagrams in class ${ \mathcal{C} }=C$. In deriving  we used the fact that for a fermion loop with an attached $W$-pair we have $$N_{WW} = \frac{1}{2} \sum_{q\,q'} \left( L_{q q'}^W L_{q' q}^W \right) = \frac{1}{4 \sin^2{\theta_w}} N_g\,,$$ where $N_g=N_f/2$ is the number of generations of massless quarks running in the loop. 
Note that because of the flavour-change induced by the $W^\pm$ bosons, we limit ourselves to considering at most $N_f =4$ massless quarks $(u,d,c,s)$, i.e. two generations $N_g=2$. Finally, the form factor $K_i^{(2)}(s,t)$ receives contributions only from one class of diagrams, ${ \mathcal{C} }= F_V$. At tree level we find that the different form factors can be obtained from $$\begin{aligned} F_i^{(0)}(s,t) &= \left( \frac{2 \,\tau_{10}^{[A],(0)} -4\,\tau_{7}^{[A],(0)}}{t} \right)\,, \\ J_i^{(0)}(s,t) &= 4 \left( \tau_7^{[A],(0)} + \,\tau_8^{[A],(0)} \right) - 2\left( \,\tau_{9}^{[A],(0)} + \,\tau_{10}^{[A],(0)} \right) \,, \\ K_i^{(0)}(s,t) &= 2 \left( \tau_7^{[F],(0)} + \,\tau_8^{[F],(0)} \right) - \left( \,\tau_{9}^{[F],(0)} + \,\tau_{10}^{[F],(0)} \right) \,.\end{aligned}$$ At one loop and two loops we find instead $$\begin{aligned} F_i^{(n)}(s,t) &= 2 \Re \left( \frac{2 \,\tau_{10}^{[A],(n)} -4\,\tau_{7}^{[A],(n)}}{t} \right)\,, \nonumber \\ J_i^{(n)}(s,t) &= 2 \Re \left[ 2 \left( \tau_7^{[A],(n)} + \,\tau_8^{[A],(n)} \right) - \left( \,\tau_{9}^{[A],(n)} + \,\tau_{10}^{[A],(n)} \right) + \,\frac{1}{2}J_i^{(0)}(s,t) { \mathcal{F} }^{(n)}(s) \right]\,, \nonumber \\ K_i^{(n)}(s,t) &= 2 \Re \left( K_i^{(0)}(s,t) \, { \mathcal{F} }^{(n)}(s) \right) \,,\end{aligned}$$ and the two new form factors read $$\begin{aligned} &F_i^{[C],(2)}(s,t) = 2 \Re \left( \frac{2\,\tau_{10}^{[C],(2)} - 4\,\tau_7^{[C],(2)} }{t} \right)\,, \\ &J_i^{[C],(2)}(s,t) = 2 \Re \left[ 2 \left( \tau_7^{[C],(2)} + \,\tau_8^{[C],(2)} \right) - \left( \,\tau_{9}^{[C],(2)} + \,\tau_{10}^{[C],(2)} \right) \right] \,,\end{aligned}$$ where ${ \mathcal{F} }^{(n)}(s)$ are the $n$-loop QCD corrections to the quark form factor. We have verified that the tree-level and one-loop corrections, in the limit of equal virtualities of the massive vector bosons, agree with [@Frixione:1993yp]. 
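As a quick numerical consistency check of the fermion-loop normalisation $N_{WW}$ above, the coupling sum can be evaluated directly for two massless generations with a unit CKM matrix. The sketch below is ours, and it assumes the convention $L^W_{qq'} = V_{qq'}/(2\sin\theta_w)$ for the left-handed $W$ coupling, which is not spelled out in this appendix (the couplings are defined elsewhere in the paper):

```python
# Assumed convention (not fixed by this appendix): L^W_{qq'} = V_{qq'} / (2 sin(theta_w)),
# with a unit CKM matrix for the N_g = 2 massless generations (u,d) and (c,s).
sin2_tw = 0.23  # sin^2(theta_w); any positive value works for this identity
quarks = ["u", "d", "c", "s"]
# Nonzero couplings in units of 1/(2 sin(theta_w)):
L_W = {("u", "d"): 1.0, ("d", "u"): 1.0,
       ("c", "s"): 1.0, ("s", "c"): 1.0}

# N_WW = (1/2) * sum_{q,q'} L^W_{qq'} L^W_{q'q}; each product carries 1/(4 sin^2)
N_WW = 0.5 * sum(L_W.get((q, qp), 0.0) * L_W.get((qp, q), 0.0)
                 for q in quarks for qp in quarks) / (4.0 * sin2_tw)

N_g = 2
print(N_WW, N_g / (4.0 * sin2_tw))  # the two values coincide
```

Under this assumed normalisation the four CKM-diagonal flavour pairings reproduce $N_{WW} = N_g/(4\sin^2\theta_w)$ exactly.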
Schouten identities for the amplitude {#sec:schouten} ===================================== In this Appendix, we show how to reduce the number of independent form factors entering our helicity amplitudes by exploiting the 4-dimensionality of external states via Schouten identities. We document here a general way to derive such Schouten identities for the ${ {\rm M} }_{RLL}$ case. The $LLL$ case proceeds in exactly the same way. We start off by fixing the helicities for a right-handed incoming quark current in the spinor helicity notation and we get $$\begin{aligned} S^{\mu \nu}_R(p_1^-, p_2^+,p_3) &= [ 2\, {p \hspace{-1.0ex} /}_3\,1 \rangle \, \left( A_1\,p_1^\mu\, p_1^\nu + A_2\, p_1^\mu p_2^\nu + A_3 \, p_1^\nu p_2^\mu + A_4\,\,p_2^\mu\, p_2^\nu \right) \nonumber \\ &+ [2\, \gamma^\mu \, 1 \rangle \left( A_5\, p_1^\nu + A_6 p_2^\nu \right) + [2\, \gamma^\nu \, 1 \rangle \left( A_7\, p_1^\mu + A_8 p_2^\mu \right)\nonumber \\ &+ A_9\, [2\, \gamma^\nu {p \hspace{-1.0ex} /}_3 \gamma^\mu \, 1 \rangle + A_{10} \, [2\, \gamma^\mu {p \hspace{-1.0ex} /}_3 \gamma^\nu \, 1 \rangle \,.\end{aligned}$$ As a first step we notice that we can collect $[ 2\, {p \hspace{-1.0ex} /}_3\,1 \rangle$ as an overall factor: $$\begin{aligned} [2\, {p \hspace{-1.0ex} /}_3\,1 \rangle [ 1\, {p \hspace{-1.0ex} /}_3\,2 \rangle = {\rm Tr}\left[ {p \hspace{-1.0ex} /}_2\, {p \hspace{-1.0ex} /}_3\, {p \hspace{-1.0ex} /}_1\, {p \hspace{-1.0ex} /}_3\, \frac{1+\gamma_5}{2} \right] = t\,u - p_3^2\, p_4^2\,.\end{aligned}$$ Multiplying and dividing by this allows us to write the partonic amplitude as $$\begin{aligned} S^{\mu \nu}_R(p_1^-, p_2^+,p_3) &= [ 2\, {p \hspace{-1.0ex} /}_3\,1 \rangle \, \Big\{ \left( A_1\,p_1^\mu\, p_1^\nu + A_2\, p_1^\mu p_2^\nu + A_3 \, p_1^\nu p_2^\mu + A_4\,\,p_2^\mu\, p_2^\nu \right) \nonumber \\ &+ \frac{[1\, {p \hspace{-1.0ex} /}_3\,2 \rangle\, [2\, \gamma^\mu \, 1 \rangle }{t\,u-p_3^2 p_4^2} \left( A_5\, p_1^\nu + A_6 p_2^\nu \right) + \frac{[1\, {p \hspace{-1.0ex} /}_3\,2 
\rangle\, [2\, \gamma^\nu \, 1 \rangle }{t\,u-p_3^2 p_4^2} \left( A_7\, p_1^\mu + A_8 p_2^\mu \right)\nonumber \\ &+ \frac{ A_9}{t\,u-p_3^2 p_4^2}\, [1\, {p \hspace{-1.0ex} /}_3\,2 \rangle\, [2\, \gamma^\nu {p \hspace{-1.0ex} /}_3 \gamma^\mu \, 1 \rangle + \frac{A_{10}}{t\,u-p_3^2 p_4^2} \, [1\, {p \hspace{-1.0ex} /}_3\,2 \rangle \, [2\, \gamma^\mu {p \hspace{-1.0ex} /}_3 \gamma^\nu \, 1 \rangle\, \Big\}, \label{Shelfix}\end{aligned}$$ such that every spinor structure is a trace. We can then perform the traces recalling that the transversality of the leptonic decay currents allows us to discard contributions proportional to $p_3^\mu$ or $p_4^\nu$. In this way we get $$\begin{aligned} [1\, {p \hspace{-1.0ex} /}_3\,2 \rangle\, [2\, \gamma^\mu \, 1 \rangle\, &= 2\,\epsilon^{p_1,p_3,p_2,\mu}\, - (u-p_3^2) p_1^\mu - (t-p_3^2) p_2^\mu \label{trace1}\end{aligned}$$ and $$\begin{aligned} [1\, {p \hspace{-1.0ex} /}_3\,2 \rangle \, [2\, \gamma^\mu {p \hspace{-1.0ex} /}_3 \gamma^\nu \, 1 \rangle &= 2\,(u-p_3^2)\,\epsilon^{p_1,p_3,\mu,\nu} + 2\, p_3^2\, \epsilon^{p_1,p_2,\mu,\nu} - (t\,u -p_3^2 p_4^2) g^{\mu \nu} \nonumber \\ & - 2\,u\,p_1^\mu p_2^\nu + 2\,p_3^2\, p_1^\nu p_2^\mu - 2\,(u-p_3^2) \,p_1^\mu p_1^\nu \label{trace2}\end{aligned}$$ $$\begin{aligned} [1\, {p \hspace{-1.0ex} /}_3\,2 \rangle \, [2\, \gamma^\nu {p \hspace{-1.0ex} /}_3 \gamma^\mu \, 1 \rangle &= - 2\,(u-p_3^2)\,\epsilon^{p_1,p_3,\mu,\nu} - 2\, p_3^2\, \epsilon^{p_1,p_2,\mu,\nu} + 4\,\epsilon^{p_1,p_3,p_2,\mu} \left( p_1^\nu + p_2^\nu \right) \nonumber \\ &- (t\,u -p_3^2 p_4^2) g^{\mu \nu} - 2\,t\,p_1^\nu p_2^\mu + 2\,p_3^2\, p_1^\mu p_2^\nu - 2\,(t-p_3^2) \,p_2^\mu p_2^\nu\,, \label{trace3}\end{aligned}$$ where we introduced the Levi-Civita $\epsilon$ tensor, with the following notation $$\epsilon^{p,q,r,s} = \epsilon^{\mu,\nu,\rho,\sigma} p_\mu q_\nu r_\rho s_\sigma\,.$$ Moreover, note that the asymmetry between  and  is due to the transversality condition which effectively replaces $p_3^\mu \to 0$ and 
$p_3^\nu \to p_1^\nu + p_2^\nu$. Using , and  we see that all $10$ spinor structures can be written in terms of the following 11 structures: $$g^{\mu \nu}\,, \qquad p_1^\mu p_1^\nu\,, \quad p_1^\mu p_2^\nu\,, \quad p_2^\mu p_1^\nu\,, \quad p_2^\mu p_2^\nu\,,$$ $$\epsilon^{p_1,p_3,p_2,\mu}\,p_1^\nu\,, \quad \epsilon^{p_1,p_3,p_2,\mu}\,p_2^\nu \,, \quad \epsilon^{p_1,p_3,p_2,\nu}\,p_1^\mu\,, \quad \epsilon^{p_1,p_3,p_2,\nu}\,p_2^\mu$$ $$\epsilon^{p_1,p_3,\mu,\nu}\,, \quad \epsilon^{p_1,p_2,\mu,\nu}\,.$$ This does not appear to be any improvement with respect to the 10 structures we had before. It is nevertheless very easy to show that $2$ out of these $11$ structures can indeed be expressed as linear combinations of the remaining $9$ by means of an anti-symmetrisation of the $\epsilon^{\mu \nu \rho \sigma}$ tensors. In order to see how this works in practice, we start off by considering $\epsilon^{p_1,p_3,\mu,\nu}\, p_2 \cdot p_1$. By anti-symmetrising $\epsilon^{\mu,\nu,\rho,\sigma}p_2^\tau$ in $4$ dimensions one easily finds $$\begin{aligned} \epsilon^{p_1,p_3,\mu,\nu}\, p_2 \cdot p_1 &= - \epsilon^{p_3,\mu,\nu,p_2}\, p_1 \cdot p_1 - \epsilon^{\mu,\nu,p_2,p_1}\, p_3 \cdot p_1 - \epsilon^{\nu,p_2,p_1,p_3}\, p_1^\mu - \epsilon^{p_2,p_1,p_3,\mu}\, p_1^\nu \end{aligned}$$ which implies that $\epsilon^{p_1,p_3,\mu,\nu}$ can be eliminated by $$\begin{aligned} \epsilon^{p_1,p_3,\mu,\nu} &= \frac{2}{s}\, \left( \frac{p_3^2-t}{2}\epsilon^{p_1,p_2,\mu,\nu}\, + \epsilon^{p_1,p_3,p_2,\nu}\, p_1^\mu - \epsilon^{p_1,p_3,p_2,\mu}\, p_1^\nu \right)\,, \label{epsrel1}\end{aligned}$$ leaving us again with $10$ structures. One more anti-symmetrisation can be used, namely consider $\epsilon^{p_1,p_2,\mu,\nu}\, p_3 \cdot r $, where the momentum $r^\mu$ is defined as $$r^\mu = \left(\frac{u-p_3^2}{s}\right) p_1^\mu + \left(\frac{t-p_3^2}{s}\right) p_2^\mu + p_3^\mu\,,$$ such that $r \cdot p_1 = 0\,,\; r \cdot p_2 = 0$. 
Proceeding as before we find $$\begin{aligned} \epsilon^{p_1,p_2,\mu,\nu} &= \frac{ \epsilon^{p_1,p_3,p_2,\mu} }{t\,u - p_3^2 p_4^2} \left[ (u - p_4^2) \,p_2^\nu +(t - p_4^2) \,p_1^\nu \right] + \frac{ \epsilon^{p_1,p_3,p_2,\nu} }{t\,u - p_3^2 p_4^2} \left[ (u - p_3^2)\,p_1^\mu + (t - p_3^2)\,p_2^\mu \right] \label{epsrel2}\,.\end{aligned}$$ It becomes clear that using these two relations we can eliminate completely $\epsilon^{p_1,p_2,\mu,\nu}$ and $\epsilon^{p_1,p_3,\mu,\nu}$ in favour of the remaining $9$ structures. In particular these relations can be rephrased in terms of the original spinors in  giving two Schouten identities for the spinor lines: $$\begin{aligned} [1\, {p \hspace{-1.0ex} /}_3\,2 \rangle \, [2\, \gamma^\mu {p \hspace{-1.0ex} /}_3 \gamma^\nu \, 1 \rangle &= (t \, u - p_3^2 p_4^2) \left[ \frac{2}{s} \left( p_1^\mu p_2^\nu - p_1^\nu p_2^\mu \right) - g^{\mu \nu} \right] \nonumber \\ & +\frac{1}{s} \left[ (u-p_3^2) \, p_1^\mu - (t-p_3^2)\,p_2^\mu \right] [1\, {p \hspace{-1.0ex} /}_3\,2 \rangle \, [2\, \gamma^\nu \, 1 \rangle \nonumber \\ & - \frac{1}{s} \left[ (u-s-p_3^2)\, p_1^\nu + (u-p_4^2)\,p_2^\nu \right] \, [1\, {p \hspace{-1.0ex} /}_3\,2 \rangle \, [2\, \gamma^\mu \, 1 \rangle\,, \label{Schouten1}\end{aligned}$$ $$\begin{aligned} [1\, {p \hspace{-1.0ex} /}_3\,2 \rangle \, [2\, \gamma^\nu {p \hspace{-1.0ex} /}_3 \gamma^\mu \, 1 \rangle &= (t \, u - p_3^2 p_4^2) \left[ \frac{2}{s} \left( p_1^\nu p_2^\mu - p_1^\mu p_2^\nu \right) - g^{\mu \nu} \right] \nonumber \\ & +\frac{1}{s} \left[ (t-p_3^2) \, p_2^\mu - (u-p_3^2)\,p_1^\mu \right] [1\, {p \hspace{-1.0ex} /}_3\,2 \rangle \, [2\, \gamma^\nu \, 1 \rangle \nonumber \\ & - \frac{1}{s} \left[ (t-p_4^2)\,p_1^\nu + (t-s-p_3^2)\, p_2^\nu \right] \, [1\, {p \hspace{-1.0ex} /}_3\,2 \rangle \, [2\, \gamma^\mu \, 1 \rangle\,. \label{Schouten2}\end{aligned}$$ The corresponding relations for the spinors of the left-handed partonic currents can be found by simply permuting $p_1 \leftrightarrow p_2$. 
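The mechanism behind all of the manipulations above is that the totally antisymmetrised five-index combination $\epsilon^{[\mu\nu\rho\sigma}p^{\tau]}$ vanishes identically in $4$ dimensions. This is easy to check numerically; the following sketch (our illustration, not part of the paper) builds $T^{\mu\nu\rho\sigma\tau}=\epsilon^{\mu\nu\rho\sigma}p^\tau$ for a random 4-vector $p$ and verifies that its total antisymmetrisation is zero:

```python
import itertools
import numpy as np

def perm_sign(perm):
    """Sign of a permutation, computed by counting inversions."""
    sign = 1
    for i in range(len(perm)):
        for j in range(i + 1, len(perm)):
            if perm[i] > perm[j]:
                sign = -sign
    return sign

# Levi-Civita tensor in 4 dimensions
eps = np.zeros((4, 4, 4, 4))
for perm in itertools.permutations(range(4)):
    eps[perm] = perm_sign(perm)

rng = np.random.default_rng(0)
p = rng.normal(size=4)  # an arbitrary 4-vector, playing the role of p_2

# T^{mu nu rho sigma tau} = eps^{mu nu rho sigma} p^tau
T = np.einsum("abcd,e->abcde", eps, p)

# Total antisymmetrisation over the five indices: any totally antisymmetric
# rank-5 tensor vanishes in 4 dimensions (the Schouten identity).
A = sum(perm_sign(perm) * np.transpose(T, perm)
        for perm in itertools.permutations(range(5)))
print(np.abs(A).max())  # zero up to floating-point rounding
```

Contracting the vanishing antisymmetrised tensor with external momenta and polarisation indices is what produces relations such as (\[epsrel1\]) and (\[epsrel2\]).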
Using ,, and the corresponding ones for the left-handed partonic current, we eliminate $2$ of the structures in  in favour of $g^{\mu \nu}$, plus the remaining $8$ structures, and then proceed by contracting with the left-handed leptonic decay currents . As a result one easily arrives at formulae  and . Conversion to Catani’s original IR subtraction scheme {#sec:catani} ===================================================== In Section \[sec:helfin\] we derived the finite remainder of the one- and two-loop helicity amplitude coefficients $\Omega$ in a subtraction scheme which is particularly well-suited for $q_T$ subtraction [@Catani:2013tia]. In this Appendix we show how these results can be converted to Catani’s original scheme [@Catani:1998bh]. Starting from the UV-renormalised coefficients defined in  at renormalisation scale $\mu^2$, we write the finite remainders in Catani’s scheme as $$\begin{aligned} \Omega^{(1),{ {\rm finite} }}_{\text{Catani}} &= \Omega^{(1)} - I_1^{\text{C}}(\epsilon) \,\Omega^{(0)}\,,\nonumber \\ \Omega^{(2),{ {\rm finite} }}_{\text{Catani}} &= \Omega^{(2)} - I_1^{\text{C}}(\epsilon) \,\Omega^{(1)} - I_2^{\text{C}}(\epsilon) \,\Omega^{(0)}\,, \label{IRCatani}\end{aligned}$$ where Catani’s subtraction operators are defined as $$\begin{aligned} I_1^{\text{C}}(\epsilon) &= -C_F \frac{e^{\epsilon \gamma}}{\Gamma(1-\epsilon)} \left(\frac{1}{\epsilon^2} + \frac{3}{2\epsilon} \right) \left(-\frac{\mu^2}{s}\right)^{\epsilon}\nonumber \\ I_2^{\text{C}}(\epsilon) &= -\frac{1}{2} I_1^C(\epsilon) \left(I_1^C(\epsilon) +\frac{2\beta_0}{\epsilon} \right) +\frac{e^{-\epsilon \gamma} \Gamma(1-2\epsilon)} {\Gamma(1-\epsilon)} \left(\frac{\beta_0}{\epsilon}+ K \right) I_1^C(2 \epsilon) + H^{(2)}(\epsilon) \end{aligned}$$ with $$\begin{aligned} K&=\left(\frac{67}{18} -\frac{\pi^2}{6}\right) C_A -\frac{10}{9} T_F N_f\,,\end{aligned}$$ and since a $q \bar q$ pair is the only coloured state we have $$\begin{aligned} H^{(2)}(\epsilon) &= \frac{e^{\epsilon 
\gamma}}{4 \epsilon \Gamma(1-\epsilon)} \left(-\frac{\mu^2}{s}\right)^{2\epsilon} \notag\\ &\times 2 C_F \left[ \left(\frac{\pi^2}{2}-6\,\zeta_3 -\frac{3}{8}\right) C_F + \left(\frac{13}{2}\zeta_3 + \frac{245}{216}-\frac{23}{48} \pi^2 \right) C_A + \left(\frac{\pi^2}{12} -\frac{25}{54}\right) T_F N_f \right]\, .\end{aligned}$$ In this article, we present our results for $\mu^2=s$. Note that upon expansion in $\epsilon$ both $I_1^C(\epsilon)$ and $I_2^C(\epsilon)$ generate imaginary parts whose sign is fixed by the prescription $s \to s + i \,0^+$. By comparing  with  one can show that the $\epsilon^0$ parts of the finite, complex form factors in Catani’s original scheme [@Catani:1998bh], can be obtained from those in the $q_T$-scheme [@Catani:2013tia] according to $$\begin{aligned} \label{qt2catani} \Omega^{(1),{ {\rm finite} }}_{\text{Catani}} &= \Omega^{(1),{ {\rm finite} }}_{q_T} + \Delta I_1 \,\Omega^{(0),{ {\rm finite} }}_{q_T},\nonumber\\ \Omega^{(2),{ {\rm finite} }}_{\text{Catani}} &= \Omega^{(2),{ {\rm finite} }}_{q_T} + \Delta I_1 \, \Omega^{(1),{ {\rm finite} }}_{q_T} + \Delta I_2 \,\Omega^{(0),{ {\rm finite} }}_{q_T},\end{aligned}$$ with the finite scheme conversion coefficients given by $$\begin{aligned} \label{qt2catanicoeff} \Delta I_1 &= C_F \left( -\frac{1}{2} \pi^2 + i \pi \frac{3}{2} \right),\\ \Delta I_2 &= C_A C_F \left( - \frac{607}{162}-\frac{1181}{432} \pi^2 +\frac{187}{72} \zeta_3+\frac{7}{96} \pi^4 +i \pi \left(\frac{961}{216}+\frac{11}{72} \pi^2-\frac{13}{2} \zeta_3\right) \right) \nonumber\\ &\quad +C_F^2 \left( -\frac{9}{8} \pi^2 + \frac{1}{8} \pi^4 + i \pi \left(\frac{3}{8}-\frac{5}{4} \pi^2 + 6 \zeta_3\right) \right) \nonumber\\ &\quad + N_f C_F \left( \frac{41}{81}+\frac{97}{216} \pi^2-\frac{17}{36} \zeta_3 + i \pi \left(- \frac{65}{108} - \frac{1}{36} \pi^2 \right)\right),\end{aligned}$$ where we have set $\mu^2 = s$ to match the convention for our final results. 
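As a minimal numeric sketch of the scheme conversion (our illustration, using placeholder form-factor values rather than the actual amplitude coefficients), the finite conversion coefficients $\Delta I_1$, $\Delta I_2$ of (\[qt2catanicoeff\]) can be implemented directly and applied as in (\[qt2catani\]):

```python
import math

CF, CA, Nf = 4.0 / 3.0, 3.0, 4.0          # SU(3) colour factors; N_f = 4 as in the text
pi, z3 = math.pi, 1.2020569031595943      # zeta(3)

# Finite scheme-conversion coefficients of eq. (qt2catanicoeff), with mu^2 = s
dI1 = CF * (-pi**2 / 2 + 1.5j * pi)
dI2 = (CA * CF * (-607/162 - 1181/432 * pi**2 + 187/72 * z3 + 7/96 * pi**4
                  + 1j * pi * (961/216 + 11/72 * pi**2 - 13/2 * z3))
       + CF**2 * (-9/8 * pi**2 + pi**4 / 8
                  + 1j * pi * (3/8 - 5/4 * pi**2 + 6 * z3))
       + Nf * CF * (41/81 + 97/216 * pi**2 - 17/36 * z3
                    + 1j * pi * (-65/108 - pi**2 / 36)))

def to_catani(omega0, omega1, omega2):
    """Convert finite remainders from the q_T scheme to Catani's scheme, eq. (qt2catani)."""
    return (omega1 + dI1 * omega0,
            omega2 + dI1 * omega1 + dI2 * omega0)

# Placeholder (hypothetical) q_T-scheme values, just to exercise the conversion
o1_cat, o2_cat = to_catani(1.0, 0.5 + 0.1j, -0.2 + 0.3j)
print(o1_cat, o2_cat)
```

The conversion is purely multiplicative in the tree-level and one-loop coefficients, which makes explicit the statement below that no $\mathcal{O}(\epsilon)$ pieces of the one-loop amplitude are needed.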
Notice that, in order to obtain the finite remainders of the two-loop amplitudes in the two different schemes, only the finite pieces of the latter are required, and in particular the $\mathcal{O}(\epsilon)$ terms of the one-loop amplitudes are not needed, as expected. Note, moreover, that the conversion coefficients are complex, due to the fact that the original formulation of IR subtraction [@Catani:1998bh] factored out a phase for time-like pairs of partons from both the collinear and soft contributions, while in the $q_T$-scheme [@Catani:2013tia] this phase factor is associated only with the soft contributions, in line with the structure of IR factorisation [@Gardi:2009qi; @Becher:2009qa] at higher loop order.
--- abstract: 'We consider the problem of approximating sums of high-dimensional stationary time series by Gaussian vectors, using the framework of functional dependence measure. The validity of the Gaussian approximation depends on the sample size $n$, the dimension $p$, the moment condition and the dependence of the underlying processes. We also consider an estimator for long-run covariance matrices and study its convergence properties. Our results allow constructing simultaneous confidence intervals for mean vectors of high-dimensional time series with asymptotically correct coverage probabilities. A Gaussian multiplier bootstrap method is proposed. A simulation study indicates the quality of Gaussian approximation with different $n$, $p$ under different moment and dependence conditions.' bibliography: - 'Reference.bib' --- [By Danna Zhang and Wei Biao Wu]{} [Department of Statistics, University of Chicago]{} Introduction {#sec:introduction} ============ During the past decade, there has been a significant development on high-dimensional data analysis with applications in many fields. In this paper we shall consider simultaneous inference for mean vectors of high-dimensional stationary processes, so that one can perform family-wise multiple testing or construct simultaneous confidence intervals, an important problem in the analysis of spatial-temporal processes. To fix the idea, let $X_i$ be a stationary process in $\mathbb R^p$ with mean $\mu = (\mu_1, \ldots, \mu_p)^\top$ and finite second moment in the sense that ${\mathbb{E}}(X_i^\top X_i) < \infty$. 
In the scalar case in which $p = 1$ or when $p$ is fixed, under suitable weak dependence conditions, we can have the central limit theorem (CLT) $$\begin{aligned} \label{eq:15140307} {1\over \sqrt n} \sum_{i=1}^n (X_i-\mu) \Rightarrow N(0, \Sigma), \mbox{ where } \Sigma = \sum_{k=-\infty}^\infty {\mathbb{E}}( (X_0-\mu) (X_k-\mu)^\top).\end{aligned}$$ See, for example, @rosenblatt1956central, @ibragimov1971independent, @Wu2005, @dedecker2007weak and @bradley2007introduction among others. In the high dimension case in which $p$ can also diverge to infinity, @portnoy1986central showed that the central limit theorem can fail for i.i.d. random vectors if $\sqrt n = o(p)$. In this paper we shall consider an alternative form: Gaussian approximation for the largest entry of the sample mean vector $\bar X_n = n^{-1} \sum_{i=1}^n X_i$. For a vector $v = (v_1, \ldots, v_p)^\top$, let $|v|_\infty = \max_{j\le p} |v_j|$. Specifically, our primary goal is to establish the Gaussian Approximation (GA) in $\mathbb R^p$ $$\begin{aligned} \label{eq:1507172206} \sup_{u \ge 0} |{\mathbb{P}}( \sqrt n |\bar X_n-\mu|_\infty \ge u) - {\mathbb{P}}(|Z|_\infty \ge u)| \to 0,\end{aligned}$$ where both $n, p \to \infty$. Here the Gaussian vector $Z = (Z_1, \ldots, Z_p)^\top \sim N(0, \Sigma)$. @chernozhukov2013 studied the Gaussian approximation for independent random vectors. There has been limited research on high-dimensional inference under dependence. The associated statistical inference becomes considerably more challenging since the autocovariances with all lags should be considered. @zhang2014 extended the Gaussian approximation in @chernozhukov2013 to very weakly dependent random vectors which satisfy a uniform geometric moment contraction condition. The latter condition is also adopted in @csw2015 for self-normalized sums. @chernozhukov2013testing did a similar extension to strong mixing random vectors. 
Here we shall establish (\[eq:J021209\]) for a wide class of high-dimensional stationary processes under suitable conditions on the magnitudes of $p$ and $n$, and mild dependence conditions on the process $(X_i)$. In Section \[sec:high-dimentional time series\] we shall introduce the framework of high-dimensional time series and some concepts about functional and predictive dependence measures that are useful for establishing an asymptotic theory. The main result for the Gaussian approximation of the normalized mean vector and the choice of the normalization matrix is established in Section \[sec:Gaussian approximation\]. Depending on the moment and the dependence conditions, both the high-dimensional and the ultra-high-dimensional cases are discussed. To perform statistical inference based on (\[eq:J021209\]), one needs to estimate the long-run covariance matrix $\Sigma$. The latter problem has been extensively studied in the scalar case; see @politis1999subsampling, @buhlmann2002, @lahiri2003resampling, @Alexopoulos2004, among others. In Section \[sec:estimation of long-run variance\] we study the batched-mean estimate of long-run covariance matrices and derive a large deviation result about quadratic forms of stationary processes. The latter tail probability inequalities allow for dependent and/or non-sub-Gaussian processes under mild conditions, which is expected to be useful in other high-dimensional inference problems for dependent vectors. The consistency of the batched-mean estimate ensures the validity of the normalized Gaussian multiplier bootstrap method. We provide in Section \[sec:probability inequalities\] some sharp inequalities for tail probabilities of dependent processes in both the polynomial-tail and the exponential-tail cases. Parts of the proofs are relegated to Section \[sec:proof\]. We now introduce some notation. 
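As a concrete (and deliberately simplified) illustration of the kind of inference described above, the following sketch implements a block-multiplier bootstrap for the maximum statistic $\sqrt n\,|\bar X_n - \mu|_\infty$ on a simulated vector AR(1) process. The block size $m$ and all numerical parameters here are our own placeholder choices, not the tuning studied later in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate a p-dimensional stationary process X_i = a X_{i-1} + e_i (componentwise AR(1))
n, p, a = 400, 50, 0.4
eps = rng.normal(size=(n + 100, p))
X = np.zeros((n + 100, p))
for i in range(1, n + 100):
    X[i] = a * X[i - 1] + eps[i]
X = X[100:]  # drop a burn-in so the process is (approximately) stationary

# Block-multiplier bootstrap for T = sqrt(n) * |Xbar - mu|_inf.
# m is a placeholder block size; the paper studies its choice via the
# batched-mean estimate of the long-run covariance matrix.
m = 10
blocks = X.reshape(n // m, m, p).sum(axis=1)         # block sums B_l
Xbar = X.mean(axis=0)
B = 1000
Tboot = np.empty(B)
for b in range(B):
    e = rng.normal(size=n // m)                       # i.i.d. N(0,1) multipliers
    Tboot[b] = np.abs(e @ (blocks - m * Xbar)).max() / np.sqrt(n)
crit = np.quantile(Tboot, 0.95)                       # simultaneous 95% critical value

# Simultaneous CIs: mu_j in [Xbar_j - crit/sqrt(n), Xbar_j + crit/sqrt(n)] for all j
T = np.sqrt(n) * np.abs(Xbar).max()                   # the true mean is 0 here
print(T, crit)
```

The bootstrap quantile `crit` calibrates the maximum deviation simultaneously over all $p$ coordinates, which is exactly what is needed for family-wise multiple testing.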
For a random variable $X$ and $q > 0$, we write $X \in \mathcal{L}^q$ if $\|X\|_{q} := ({\mathbb{E}}|X|^q)^{1/q} < \infty$, and for a vector $v = (v_1, \ldots, v_p)^\top$, let the norm-$s$ length $|v|_s = ( \sum_{j=1}^p |v_j|^s)^{1/s}$, $s \ge 1$. Write the $p \times p$ identity matrix as $\text{Id}_p$. For two real numbers, set $x \vee y =\max(x, y)$ and $x \wedge y =\min(x, y)$. For two sequences of positive numbers $(a_n)$ and $(b_n)$, we write $a_n \asymp b_n$ (resp. $a_n \lesssim b_n$ or $a_n \ll b_n$) if there exists some constant $C > 0$ such that $C^{-1} \leq a_n/b_n \leq C$ (resp. $a_n/b_n \leq C$ or $a_n/b_n \to 0$) for all large $n$. We use $C, C_1, C_2, \cdots$ to denote positive constants whose values may differ from place to place. A constant with a symbolic subscript is used to emphasize the dependence of the value on the subscript. Throughout the paper, we assume $p=p_n \rightarrow \infty$ as $n\rightarrow \infty$. High-dimensional Time Series {#sec:high-dimentional time series} ============================ Let ${\varepsilon}_i, i \in \mathbb{Z}$, be i.i.d. random variables and ${\mathcal{F}}^{i}=(\ldots, {\varepsilon}_{i-1}, {\varepsilon}_i)$; let $({X}_i)$ be a stationary process taking values in $\mathbb{R}^p$ that assumes the form $$\label{highdimensionrepresentation} X_i = (X_{i1}, X_{i2}, \ldots, X_{ip})^\top=G({\mathcal{F}}^{i}),$$ where $G(\cdot)=(g_1(\cdot), \ldots, g_p(\cdot))^\top$ is an $\mathbb{R}^p$-valued measurable function such that $X_i$ is well-defined. In the scalar case with $p = 1$, (\[highdimensionrepresentation\]) allows a very general class of stationary processes (cf. @wiener1958nonlinear, @rosenblatt1971markov, @priestley1988non, @tong1990non, @Wu2005, @tsay2005analysis, @wu2011asymptotic). It includes linear processes as well as a large class of nonlinear time series models. 
Within this framework, $({\varepsilon}_i)$ can be viewed as independent inputs of a physical system and all the dependences among the outputs $(X_i)$ result from the underlying data-generating mechanism $G(\cdot)$. The function $g_j(\cdot)$, $1 \leq j \leq p$, is the $j$-th coordinate projection of $G(\cdot)$. Unless otherwise specified, assume throughout the paper that ${\mathbb{E}}X_i=0$ and $\max_{j \le p} \| X_{ij} \|_q < \infty $ for some $q \geq 2$. Let $\Gamma(k) = (\gamma_{ij}(k))_{i, j=1}^p = {\mathbb{E}}(X_i X_{i+k}^\top)$ be the autocovariance matrix and recall the long-run covariance matrix $$\begin{aligned} \label{eq:J13426p} \Sigma=(\sigma_{i j})_{i, j=1}^p=\sum_{k=-\infty}^\infty \Gamma(k)\end{aligned}$$ if it exists. Note that $\sigma_{jj}=\sum_{k=-\infty}^\infty \gamma_{jj}(k)$, $1 \leq j \leq p$, is the long-run variance of the component process $X_{\cdot j} = (X_{i j})_{i \in \mathbb{Z}}$. For the latter process, following @Wu2005 we define respectively the functional dependence and the predictive dependence measure $$\begin{aligned} \label{eq:1507140845} \delta_{i, q, j} &=& \| X_{ij}-X_{ij, \{0\}} \|_q=\| X_{ij}-g_j({\mathcal{F}}^{i, \{0\}}) \|_q, \cr \theta_{i, q, j}&=&\| {\mathbb{E}}(X_{ij}|{\mathcal{F}}^{0})-{\mathbb{E}}(X_{ij}|{\mathcal{F}}^{-1})\|_q = \| \mathcal{P}^0 X_{ij} \|_q,\cr \theta'_{i, q, j}&=&\|{\mathbb{E}}(X_{ij}|{\mathcal{F}}_{0}^i)-{\mathbb{E}}(X_{ij}|{\mathcal{F}}_{1}^i)\|_q=\|\mathcal{P}_0 X_{ij}\|_q,\end{aligned}$$ where $\mathcal{F}^{i, \{j\}}=(\ldots, \varepsilon_{j-1}, \varepsilon_{j}', \varepsilon_{j+1}, \ldots, \varepsilon_{i})$ is a coupled version of $\mathcal{F}^{i}$ with $\varepsilon_{j}$ in $\mathcal{F}^i$ replaced by $\varepsilon_{j}'$, and $\varepsilon_k, \varepsilon_l'$, $k, l \in \mathbb{Z}$, are i.i.d. random variables, ${\mathcal{F}}_i^j=({\varepsilon}_i, {\varepsilon}_{i+1}, \ldots, {\varepsilon}_j)$ and ${\mathcal{F}}_{i}=({\varepsilon}_i, {\varepsilon}_{i+1}, \ldots)$. 
Note that $\mathcal{F}^{i, \{j\}}= \mathcal{F}^{i}$ if $j > i$. To account for the dependence in the process $X_{\cdot j}$, we define the dependence adjusted norm $$\begin{aligned} \label{eq:2015081932} \|X_{\cdot j} \|_{q, \alpha} = \sup_{m \geq 0} (m+1)^\alpha \Delta_{m, q, j},\, \alpha \geq 0, \mbox{ where } \Delta_{m, q, j}= \sum_{i=m}^\infty \delta_{i, q, j}, \, m \geq 0.\end{aligned}$$ Due to the dependence, it may happen that $\| X_{i j} \|_q < \infty$ while $\|X_{\cdot j} \|_{q, \alpha} = \infty$. Elementary calculations show that, if $X_{i j}, i \in \mathbb{Z}$, are i.i.d., then $\| X_{i j} \|_q \le \|X_{\cdot j} \|_{q, 0} \le 2 \| X_{i j} \|_q$, suggesting that the dependence adjusted norm is equivalent to the classical $L^q$ norm. To account for high-dimensionality, we define $$\begin{aligned} \Psi_{q, \alpha}= \max_{1 \leq j \leq p}\|X_{\cdot j}\|_{q, \alpha} \mbox{ and } \Upsilon_{q, \alpha}=\left(\sum_{j=1}^p \|X_{\cdot j}\|_{q, \alpha}^q \right)^{1/q},\end{aligned}$$ which can be interpreted as the uniform and the overall dependence adjusted norms of $(X_i)_{i \in \mathbb{Z}}$, respectively. The form (\[highdimensionrepresentation\]) and its associated dependence measures provide a convenient framework for studying high-dimensional time series. @chen2013 and @zhang2014 considered some special cases: the former paper requires that $\max_{1 \leq j \leq p}\|X_{\cdot j}\|_{q, \alpha} \leq C$ while the latter imposes the stronger geometric moment contraction condition $\max_{1 \leq j \leq p}\Delta_{m, q, j} \le C \rho^m$ with $\rho \in (0, 1)$, and in both cases the constant $C$ does not depend on $p$. Those assumptions can be fairly restrictive. In this paper $\Psi_{q, \alpha}$ can be unbounded in $p$. 
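For a concrete example (ours, not from the text), consider the scalar AR(1) process $X_i = a X_{i-1} + \varepsilon_i$ with i.i.d. standard normal innovations, so that $X_i = \sum_{j \ge 0} a^j \varepsilon_{i-j}$. Coupling the input at time $0$ gives $X_i - X_{i,\{0\}} = a^i(\varepsilon_0 - \varepsilon_0')$, hence $\delta_{i,2} = |a|^i\sqrt 2$, which is easily confirmed by simulation:

```python
import numpy as np

rng = np.random.default_rng(2)
a, i, N, K = 0.5, 3, 200_000, 60

# MA(infinity) representation X_i = sum_{j>=0} a^j eps_{i-j}, truncated at lag K
eps = rng.normal(size=(N, K + 1))          # eps[:, j] plays the role of eps_{i-j}
eps_coupled = eps.copy()
eps_coupled[:, i] = rng.normal(size=N)     # replace eps_0 (lag j = i) by an i.i.d. copy
w = a ** np.arange(K + 1)

X_i = eps @ w                              # X_i
X_i_coupled = eps_coupled @ w              # the coupled version X_{i,{0}}

delta_hat = np.sqrt(np.mean((X_i - X_i_coupled) ** 2))
delta_theory = abs(a) ** i * np.sqrt(2.0)  # |a|^i * ||eps_0 - eps_0'||_2
print(delta_hat, delta_theory)             # the two values agree closely
```

In this example $\Delta_{m,2} = \sqrt 2\,|a|^m/(1-|a|)$, so $\|X_\cdot\|_{2,\alpha} < \infty$ for every $\alpha \ge 0$, illustrating the geometric-decay case discussed above.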
Additionally, we define the $\mathcal{L}^{\infty}$ functional dependence measure and its corresponding dependence adjusted norm for the $p$-dimensional stationary process $(X_i)$ $$\begin{aligned} &&\omega_{i, q} = \| |X_i - X_{i, \{0\}}|_{\infty} \|_q;\\ && \| |X_{\cdot}|_{\infty} \|_{q, \alpha}=\sup_{m \geq 0} (m+1)^\alpha \Omega_{m,q}, \alpha\geq 0, \text{ where } \Omega_{m,q}=\sum_{i=m}^\infty \omega_{i,q}, m \geq 0.\end{aligned}$$ Clearly, we have $\Psi_{q, \alpha} \leq \||X_{\cdot}|_\infty\|_{q, \alpha} \leq \Upsilon_{q, \alpha}$. Gaussian Approximations {#sec:Gaussian approximation} ======================= In this section we shall present main results on Gaussian approximations. Theorem \[th:1507140259\] concerns the finite polynomial moment case with both weaker and stronger temporal dependence. Consequently the dimension $p$ allowed can be at most a power of $n$. If the underlying process has finite dependence-adjusted sub-exponential norms, Theorem \[th:1507140325\] asserts that an ultra-high dimension $p$ can be allowed. Theorem \[maintheorem\] in Section \[sec:1507151006\] provides a convergence rate of the Gaussian approximation. Recall (\[eq:J13426p\]) for the long-run covariance matrix $\Sigma$. Let $\Sigma_0 = {\rm diag}(\Sigma)$ be the diagonal matrix of $\Sigma$, and $D_0 = {\rm diag}(\sigma_{11}^{1/2}, \ldots, \sigma_{pp}^{1/2})$. Assume $\mu = 0$. We consider the following normalized version of (\[eq:1507172206\]): $$\begin{aligned} \label{eq:J021209} \sup_{u \ge 0} |{\mathbb{P}}( \sqrt n |D_0^{-1} \bar X_n|_\infty \ge u) - {\mathbb{P}}(|D_0^{-1} Z|_\infty \ge u)| \to 0,\end{aligned}$$ \[assumption:1507172202\] There exists a constant $c > 0$ such that $\min_{1 \leq j \leq p} \sigma_{jj} \ge c$. 
To state Theorem \[th:1507140259\], we need to define the following quantities: $\Theta_{q, \alpha}=\Upsilon_{q, \alpha} \wedge (\||X_{\cdot}|_\infty\|_{q, \alpha} \log p)$, $L_1 = (n^{1/q-1/2} (\log p)^{1/2} \Theta_{q, \alpha})^{1/(\alpha - 1/2 + 1/q)}$, $L_2 = (\Psi_{2, \alpha} \Psi_{2, 0} (\log p)^2 )^{1/\alpha}$, $W_1 = (\Psi_{3, 0}^6 + \Psi_{4, 0}^4) (\log (p n) )^7$, $W_2 = \Psi_{2, \alpha}^2 (\log (p n) )^4$, $W_3 = (n^{-\alpha} (\log (pn))^{3/2} \Theta_{q, \alpha})^{1/(1/2-\alpha - 1/q)}$, $N_1 = (n / \log p)^{q/2} / \Theta_{q, \alpha}^q$, $N_2 = n (\log p)^{-2} \Psi_{2, \alpha}^{-2}$, $N_3 = (n^{1/2} (\log p)^{-1/2} \Theta_{q, \alpha}^{-1})^{1/(1/2-\alpha)}$. \[th:1507140259\] Let Assumption \[assumption:1507172202\] be satisfied. (i) Assume that $\Theta_{q, \alpha} < \infty$ holds with some $q \geq 4$ and $\alpha > 1/2 - 1/q$ (the weaker dependence case), $$\begin{aligned} \label{eq:1507142040} \Theta_{q, \alpha} n^{1/q-1/2} (\log (p n) )^{3/2} \to 0\end{aligned}$$ and $$\begin{aligned} \label{eq:1507142041} \max(L_1, L_2) \max(W_1, W_2) = o(1) \min(N_1, N_2).\end{aligned}$$ Then the Gaussian Approximation (\[eq:J021209\]) holds. (ii) Assume $0 < \alpha < 1/2 - 1/q$ (the stronger dependence case). Then (\[eq:J021209\]) holds if $\Theta_{q, \alpha} (\log p)^{1/2} = o(n^{\alpha})$ and $$\begin{aligned} \label{eq:1508010523} L_2 \max(W_1, W_2, W_3) = o(1) \min(N_2, N_3).\end{aligned}$$ (Optimality of our result on the allowed dimension $p$) Assume $\alpha > 1/2 - 1/q$. In the special case with $\Psi_{q, \alpha} \asymp 1$ and $\Theta_{q, \alpha} \asymp p^{1/q}$, (\[eq:1507142040\]) becomes $$\begin{aligned} \label{eq:1507142042} p (\log (p n) )^{3q/2} = o(n^{q/2-1}),\end{aligned}$$ which by elementary manipulations implies (\[eq:1507142041\]), and hence the GA (\[eq:J021209\]). It turns out that condition (\[eq:1507142042\]), or equivalently $p (\log p)^{3q/2} = o(n^{q/2-1})$, is optimal up to a multiplicative logarithmic term. 
Consider the special case in which $X_{i j}$, $i, j \in \mathbb{Z},$ are i.i.d. symmetric random variables with ${\mathbb{E}}(X_{i j}^2) = 1$ and the tail probability ${\mathbb{P}}( X_{i j} \ge u) = u^{-q} \ell(u)$, $u \ge u_0$, where $\ell(u) = (\log u)^{-2}$. By @nagaev1979large, we have the expansion: for $y \ge \sqrt n$, $$\begin{aligned} \label{eq:J011029} {\mathbb{P}}(X_{1 1} + \ldots + X_{n 1} \ge y) \sim n y^{-q} \ell(y) + 1 - \Phi(y/\sqrt n).\end{aligned}$$ Let $M_n = X_{1 1} + \ldots + X_{n 1}$, $Z = (Z_1, \ldots, Z_p)^\top \sim N(0, \text{Id}_p)$ and assume $$\begin{aligned} \label{eq:1507142204} n^{q/2-1} = o(p (\log n)^{-2} (\log p)^{-q/2}).\end{aligned}$$ Then the Gaussian approximation (\[eq:J021209\]) [*does not hold*]{}. To see this, let $u = (2 \log p)^{1/2}$. Then $p {\mathbb{P}}(|Z_1| \ge u) \to 0$, and, by (\[eq:J011029\]) and (\[eq:1507142204\]), $p {\mathbb{P}}( M_n \ge \sqrt n u) \to \infty$. Hence ${\mathbb{P}}^p( |M_n| \le \sqrt n u) \to 0$ and ${\mathbb{P}}^p(|Z_1| \le u) \to 1$, implying that $$\begin{aligned} |{\mathbb{P}}( \sqrt n |\bar X_n|_\infty \le u) - {\mathbb{P}}(|Z|_\infty \le u)| &=& |{\mathbb{P}}^p( |M_n| \le \sqrt n u) - {\mathbb{P}}^p(|Z_1| \le u)| \cr &=& | [1-2 {\mathbb{P}}( M_n \ge \sqrt n u)]^p - {\mathbb{P}}^p(|Z_1| \le u)| \to 1.\end{aligned}$$ Note that (\[eq:1507142204\]) is equivalent to $n^{q/2-1} = o(p (\log p)^{-2-q/2})$, suggesting that (\[eq:1507142042\]) is optimal up to a logarithmic term. Now suppose there exist $0 \le \kappa_1 \le \kappa_2$ such that $\Psi_{q, \alpha} \asymp p^{\kappa_1}$ and $\Theta_{q, \alpha} \asymp p^{\kappa_2}$, and $p^\tau \asymp n$. 
Elementary but tedious calculations show that, in the weaker dependence case $\alpha > 1/2 - 1/q$, if $$\begin{aligned} \label{eq:1507150901} \tau > \max\left\{\frac{\kappa_2}{1/2-1/q}, \frac{2\kappa_1}{\alpha}+8\kappa_1, \frac{2}{q}\left(\frac{2\kappa_1}{\alpha}+8\kappa_1\right)+2\kappa_2\right\},\end{aligned}$$ then the conditions in (i) of Theorem \[th:1507140259\] are satisfied, while for the stronger dependence case with $0 < \alpha < 1/2 - 1/q$, a larger sample size $n$ is required: $$\begin{aligned} \label{eq:1507150902} \tau > \max\left\{\frac{\kappa_2}{\alpha}, \frac{2\kappa_1}{\alpha}+8\kappa_1,(1-2\alpha)\left(\frac{2\kappa_1}{\alpha}+8\kappa_1\right)+2\kappa_2 \right\}.\end{aligned}$$ The lower bounds in (\[eq:1507150901\]) and (\[eq:1507150902\]) are both non-decreasing in $\kappa_1, \kappa_2$ and non-increasing in $q, \alpha$. Under (\[eq:1507142042\]), the allowed dimension $p$ can be at most polynomial in $n$. To ensure the validity of the GA in the ultra-high dimensional case with $\log p = o(n^c)$ for some $c > 0$, we need to consider the sub-exponential case in which $X_{i j}$ has finite moments of all orders. For $\nu \ge 0$ and $\alpha \ge 0$, define the dependence-adjusted sub-exponential norm $$\begin{aligned} \| X_{\cdot j} \|_{\psi_\nu, \alpha} = \sup_{q\ge 2} { {\| X_{\cdot j} \|_{q, \alpha} } \over q^\nu} \mbox{ and } \Phi_{\psi_\nu, \alpha} = \max_{j \le p} \| X_{\cdot j} \|_{\psi_\nu, \alpha}.\end{aligned}$$ Let $L_3 = ( (\log p)^{1/\beta + 1/2} \Phi_{\psi_\nu, \alpha} )^{1/\alpha}$, $N_4 = n (\log p)^{-1-2/\beta} \Phi_{\psi_\nu, 0}^{-2}$ and $W_4 = (\log (p n))^{3+2/\beta} \Phi_{\psi_\nu, 0}^2 + (\log (p n))^{4}$. Here $\beta=2/(1+2\nu)$. \[th:1507140325\] Let Assumption \[assumption:1507172202\] be satisfied.
Assume that $\Phi_{\psi_\nu, \alpha}<\infty$ for some $\nu \geq 0$, $\alpha>0$ and $$\begin{aligned} \max(L_2, L_3) \max(W_1, W_4) = o(N_4),\quad L_2^\alpha\max(W_1, W_4)=o(n).\end{aligned}$$ Then the Gaussian Approximation (\[eq:J021209\]) holds. The proof is similar to that of Theorem \[th:1507140259\], and is thus omitted. If $\Phi_{\psi_\nu, \alpha} \asymp 1$, then the ultra-high dimensional case with $\log p = o(n^c)$ for some $c>0$ is allowed, where specifically we can let $$c=\left\{ \begin{array}{ll} 1/(8+2/\alpha+2/\beta), & 2/3 \leq \beta \leq 2\\ 1/[7+(1/\beta+1/2)(1/\alpha+2)], & 1/2 \leq \beta <2/3 \\ 1/[3+2/\beta+(1/\beta+1/2)(1/\alpha+2)], & 0 <\beta <1/2 \end{array} \right..$$ Simultaneous Inference of Covariances ------------------------------------- Let $X_1, \ldots, X_n$ be i.i.d. $p$-dimensional vectors with mean $0$ and covariance matrix $\Gamma = \Gamma_0 = (\gamma_{j k})_{j, k=1}^p = {\mathbb{E}}(X_i X_i^\top)$. We can estimate $\Gamma$ by the sample covariance matrix $\hat \Gamma = (\hat \gamma_{jk})_{j, k=1}^p = n^{-1} \sum_{i=1}^n X_i X_i^\top$. To perform simultaneous inference on $\gamma_{j k}, 1\le j,k \le p$, one needs to derive the asymptotic distribution of the maximum deviation $\max_{j, k \le p} | \hat \gamma_{jk} - \gamma_{jk}|$ or the normalized version $\max_{j, k \le p} | \hat \gamma_{jk} - \gamma_{jk}|/\tau_{j k}$; cf. Equation (2) in @Xiao20132899. @jiang2004 established the Gumbel convergence of the maximum deviation assuming that all entries of $X_i$ are also independent. See @Li2006 and @liu2008 for some refined results. @Xiao20132899 considered the extension which allows dependence among the entries of $X_i$. However, the latter paper requires that the vectors $X_1, \ldots, X_n$ be i.i.d. The problem of further extension to temporally dependent $X_i$ is open.
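As a concrete baseline for the i.i.d. setting just described, the following sketch (illustrative only; Gaussian data with $\Gamma = \mathrm{Id}_p$) computes the sample covariance matrix and its maximum entrywise deviation.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 2000, 10
X = rng.standard_normal((n, p))      # i.i.d. rows with true covariance Gamma = Id_p

# Sample covariance (the mean is known to be 0 here) and the maximum deviation.
Gamma_hat = (X.T @ X) / n
max_dev = np.max(np.abs(Gamma_hat - np.eye(p)))

assert Gamma_hat.shape == (p, p)
assert max_dev < 0.5                 # entrywise error is of order sqrt(log p / n)
```

Simultaneous inference requires the distribution of `max_dev`, which is what the Gaussian approximation theory above supplies.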
In analyzing electrocorticogram data in the format of multivariate time series, @PhysRevE.79.061916 proposed to use the maximum cross correlation between time series to identify edges that connect the corresponding nodes in a network, suggesting that an asymptotic theory for maximum deviations of sample covariances is needed. Our Theorems \[th:1507140259\] and \[th:1507140325\] can be applied to the above problem of further extension to a temporally dependent process $(X_i)$. Let $(X_i)$ be a mean zero $p$-dimensional stationary process of the form (\[highdimensionrepresentation\]). To apply Theorems \[th:1507140259\] and \[th:1507140325\], one needs to deal with the key issue of computing the functional dependence measure of the $p^2$-dimensional vector $\mathcal{X}_i = \mbox{vec}(X_i X_i^\top-{\mathbb{E}}(X_i X_i^\top))$. Interestingly, our framework allows a natural and elegant treatment. Let $a = (j, k)$, $j, k \le p$, and $\mathcal{X}_{i a} = X_{ij}X_{ik} - \gamma_a$, where $ \gamma_a = \gamma_{j k} = {\mathbb{E}}(X_{ij}X_{ik})$.
By Hölder’s inequality, the functional dependence measure of the component process $(\mathcal{X}_{i a})_i$ satisfies $$\begin{aligned} \label{eq:1508220838} \varphi_{i, q/2, a} &:= &\|X_{ij}X_{ik}-{\mathbb{E}}(X_{ij}X_{ik})-X_{ij, \{0\}}X_{ik, \{0\}} +{\mathbb{E}}(X_{ij, \{0\}}X_{ik, \{0\}})\|_{q/2} \cr &\leq&2\|X_{ij}X_{ik}-X_{ij, \{0\}}X_{ik, \{0\}}\|_{q/2}\cr &\leq&2\|X_{ij}(X_{ik}-X_{ik, \{0\}})\|_{q/2} + 2\| (X_{ij} -X_{ij, \{0\}} ) X_{ik, \{0\}}\|_{q/2}\cr &\leq &2\|X_{ij}\|_q\delta_{i,q,k} + 2\|X_{ik}\|_q\delta_{i,q,j}.\end{aligned}$$ Hence we obtain an upper bound on the dependence adjusted norm of $(\mathcal{X}_{i a})$: $$\begin{aligned} \label{eq:1508220839} \|\mathcal{X}_{\cdot a}\|_{q/2,\alpha} &:=& \sup_{m \geq 0} (m+1)^\alpha \sum_{i=m}^\infty \varphi_{i,q/2,a}\cr &\leq &2\|X_{\cdot j}\|_{q,0}\|X_{\cdot k}\|_{q,\alpha}+2\|X_{\cdot k}\|_{q,0}\|X_{\cdot j}\|_{q,\alpha}.\end{aligned}$$ Consequently, the uniform and the overall dependence adjusted norms of $\mathcal{X}_i$ satisfy $$\begin{aligned} \label{eq:1508220840} && \max_a \|\mathcal{X}_{\cdot a}\|_{q/2, \alpha} \leq 4 \Psi_{q,0}\Psi_{q,\alpha},\cr && \left(\sum_a \|\mathcal{X}_{\cdot a}\|_{q/2, \alpha}^{q/2}\right)^{2/q} \leq 4 \left(\sum_{j=1}^p\|X_{\cdot j}\|_{q,0}^{q/2}\right)^{2/q} \left(\sum_{j=1}^p\|X_{\cdot j}\|_{q,\alpha}^{q/2}\right)^{2/q}.\end{aligned}$$ Similarly, the $\mathcal{L}^\infty$ dependence adjusted norm for the process $(\mathcal{X}_i)$ can be bounded by $$\label{eq:1508220841} \||\mathcal{X}_{\cdot}|_\infty \|_{q/2, \alpha} \leq 4 \||X_{\cdot}|_\infty\|_{q, 0} \||X_{\cdot}|_\infty\|_{q, \alpha}.$$ With (\[eq:1508220838\])-(\[eq:1508220841\]), the conditions in Theorems \[th:1507140259\] and \[th:1507140325\] can be formulated accordingly, and under those conditions we have the following Gaussian approximation $$\begin{aligned} \label{eq:1508220832} \sup_{u \ge 0} |{\mathbb{P}}( \sqrt n \max_a | \hat \gamma_a - \gamma_a| / \tau_a \ge u) - {\mathbb{P}}(\max_a |Z_a / \tau_a| \ge u)| \to
0,\end{aligned}$$ where $Z = (Z_a)_a \sim N(0, \Sigma_\mathcal{X})$, $\Sigma_\mathcal{X}$ is the $p^2 \times p^2$ long-run covariance matrix of $(\mathcal{X}_i)_i$ and $(\tau_a^2)_a$ is the diagonal of $\Sigma_\mathcal{X}$. Estimation of long-run covariance matrix {#sec:estimation of long-run variance} ======================================== Given the realization $X_1, \ldots, X_n$, to apply the Gaussian approximation (\[eq:J021209\]), we need to estimate the long-run covariance matrix $\Sigma$. Note that $\Sigma/(2\pi)$ is the value of the spectral density matrix of $(X_i)$ at zero frequency. In the one-dimensional case, there is a large literature concerning spectral density estimation; see for example @anderson1971, @priestley1981, @rosenblatt1985, @brockwell1991, @liu2010asymptotics among others. In the high-dimensional setting, @chen2013 studied the regularized estimation of $\Gamma(0) = {\mathbb{E}}(X_0 X_0^\top)$. Assume ${\mathbb{E}}X_i = 0$. We then consider the batched mean estimate $$\label{estimate LRCM} \hat{\Sigma}=\frac{1}{M w}\sum_{b=1}^w Y_bY_b^\top=\frac{1}{M w}\sum_{b=1}^w (\sum_{i \in L_b}X_i)(\sum_{i \in L_b}X_i)^\top,$$ where the window $L_b = \{1+(b-1)M, \ldots, bM\}$, $b=1, \ldots, w$, the window size $|L_b| = M \to \infty$ and the number of blocks $w = \lfloor n/M \rfloor$. Theorems \[theoremforquadraticformunderpolynomial\] and \[theoremforquadraticformunderexponential\] concern the convergence of the above estimate for processes with finite polynomial and finite sub-exponential dependence adjusted norms, respectively. The convergence rate depends in a subtle way on the temporal dependence characterized by $\alpha$ (cf. (\[eq:2015081932\])), the uniform and the overall dependence adjusted norms $\Psi_{q, \alpha}$ and $\Upsilon_{q, \alpha}$, respectively, the sample size $n$ and the dimension $p$. For a random variable $X$, we define the operator ${\mathbb{E}}_0$ as ${\mathbb{E}}_0 (X) := X-{\mathbb{E}}X$.
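The batched mean estimate (\[estimate LRCM\]) is straightforward to implement; a minimal sketch (the function name is ours), checked here on i.i.d. noise where $\Sigma = \mathrm{Id}$:

```python
import numpy as np

def batched_mean_lrcov(X, M):
    """Batched mean estimate of the long-run covariance matrix:
    (1/(M w)) * sum of Y_b Y_b^T over w = floor(n / M) blocks of length M."""
    n, p = X.shape
    w = n // M
    Y = X[: w * M].reshape(w, M, p).sum(axis=1)    # block sums Y_b
    return np.einsum('bi,bj->ij', Y, Y) / (M * w)

rng = np.random.default_rng(2)
X = rng.standard_normal((4096, 3))                 # i.i.d., so Sigma = Id_3
S = batched_mean_lrcov(X, M=16)

assert S.shape == (3, 3)
assert np.max(np.abs(S - np.eye(3))) < 0.6
```

For serially dependent data, the block length $M$ trades off bias (too small) against variance (too large), as quantified by the theorems below.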
\[theoremforquadraticformunderpolynomial\] Assume $\Psi_{q, \alpha} < \infty$ with $q>4$ and $\alpha>0$, and $M=O(n^{\varsigma})$ for some $0< \varsigma <1$. Let $F_\alpha = w M$ (resp. $w M^{q/2-\alpha q/2}$ or $w^{q/4-\alpha q/2} M^{q/2-\alpha q/2}$) for $\alpha>1-2/q$ (resp. $1/2-2/q< \alpha <1-2/q$ or $\alpha < 1/2-2/q$). Then for $x \geq \sqrt{w}M \Psi_{q, \alpha}^2$, we have $$\begin{aligned} \label{eq:201508181936} &&{\mathbb{P}}(n|{\text{diag}}(\hat{\Sigma})-{\mathbb{E}}{\text{diag}}(\hat{\Sigma})|_\infty\geq x) \lesssim {{ F_\alpha \Upsilon_{q, \alpha}^q } \over {x^{q/2}}}+ p \exp\left(-\frac{C_{q, \alpha} x^2}{wM^2\Psi_{4, \alpha}^4}\right), \cr && {\mathbb{P}}(n|\hat{\Sigma}-{\mathbb{E}}\hat{\Sigma}|_\infty \geq x) \lesssim {{p F_\alpha \Upsilon_{q, \alpha}^q } \over {x^{q/2}}} + p^2 \exp\left(-\frac{C_{q, \alpha} x^2}{wM^2\Psi_{4, \alpha}^4}\right)\end{aligned}$$ for all large $n$, where the constants in $\lesssim$ only depend on $\varsigma$, $\alpha$ and $q$. Fix $1 \leq j, k \leq p$; let $T=\sum_{b=1}^w Y_{bj} Y_{bk}$, where $Y_{b j} = \sum_{i \in L_b}X_{i j}$. For $\tau \geq 0$, define $X_{ij, \tau}={\mathbb{E}}(X_{ij}|{\varepsilon}_{i-\tau}, \ldots, {\varepsilon}_{i})$, $Y_{bj, \tau}=\sum_{i \in L_b}X_{ij, \tau}$ and $T_{\tau}=\sum_{b=1}^w Y_{bj,\tau}Y_{bk, \tau}$. We first prove that for any $x>0$, $${\mathbb{P}}(|{\mathbb{E}}_0(T-T_{M})| \geq x) \lesssim \left\{ \begin{array}{ll} \label{differenceforquadraticform} x^{-q/2}wM^{q/2-\alpha q/2}\xi_{q, \alpha}^{q/2}+E_{q, \alpha}(x), & \alpha >1/2-2/q \\ x^{-q/2}w^{q/4-\alpha q/2}M^{q/2-\alpha q/2}\xi_{q, \alpha}^{q/2}+E_{q, \alpha}(x), & \alpha <1/2-2/q \\ \end{array} \right.
,$$ where the constants in $\lesssim$ only depend on $\varsigma$, $\alpha$ and $q$, and $$\begin{aligned} && \xi_{q, \alpha}=\|X_{\cdot j}\|_{q, 0}\|X_{\cdot k}\|_{q, \alpha}+\|X_{\cdot k}\|_{q, 0}\|X_{\cdot j}\|_{q, \alpha},\\ && E_{q, \alpha}(x)=\exp\{-C_{q, \alpha}(wM^{2-2\alpha}\xi_{4, \alpha}^2)^{-1}x^{2}\}.\end{aligned}$$ Following the argument in the proof of Lemma \[lemma for m-approximation, polynomial tail\], let $L=\lfloor (\log w)/ (\log 2)\rfloor$, $\varpi_l=2^l$, $1 \leq l <L$, $\varpi_L=w$ and $\tau_l= M \varpi_l$ for $1 \leq l \leq L$. Let $\varpi_0 = 1$ and $\tau_0 = M$. Write $$\label{T-T_{M}} T-T_{M} = T-T_{Mw} + \sum_{l=1}^L V_{w, l}, \mbox{ where } V_{w, l}=T_{\tau_l}-T_{\tau_{l-1}}.$$ By the argument in Lemma 9 of @xiao2012covariance, we have $$\begin{aligned} \| {\mathbb{E}}_0(T-T_{Mw})\|_{q/2} &\leq & C_q M\sqrt{w}(\Delta_{0, q, j}\Delta_{Mw+1, q, k}+\Delta_{Mw+1, q, j}\Delta_{0, q, k})\nonumber \\ \label{appximationmoment} &\leq & C_q M\sqrt{w}(Mw)^{-\alpha}\xi_{q, \alpha}\end{aligned}$$ for some constant $C_q > 0$. By Markov’s inequality, for $x > 0$, $$\label{T-T_{Mw}} {\mathbb{P}}(|{\mathbb{E}}_0(T-T_{Mw})|\geq x)\leq \frac{C_q M^{q/2-\alpha q/2}w^{q/4-\alpha q/2}\xi_{q, \alpha}^{q/2}}{x^{q/2}}.$$ By the same argument for proving (\[appximationmoment\]), we have $$\|{\mathbb{E}}_0(V_{w, l})\|_{q/2} \leq C_q M\sqrt{w} \tau_l^{-\alpha}\xi_{q, \alpha}.$$ Let $c=q/4-1-\alpha q/2$, $\lambda_l= 3 l^{-2} \pi^{-2}$ if $1 \leq l \leq L/2$ and $\lambda_l= 3 (L+1-l)^{-2} \pi^{-2}$ if $L/2 < l \leq L$. Then $\sum_{l=1}^L \lambda_l < 1$. 
By the Nagaev (1979) inequality, it follows that $$\begin{aligned} {\mathbb{P}}(|\sum_{l=1}^L {\mathbb{E}}_0(V_{w, l})| \geq x) &\leq& \sum_{l=1}^L {\mathbb{P}}(|{\mathbb{E}}_0(V_{w, l})| \geq \lambda_lx) \nonumber \\ & \leq & \sum_{l=1}^L \frac{C_1 w \varpi_l^{-1}(M{\varpi_l}^{1/2} \tau_l^{-\alpha})^{q/2}\xi_{q, \alpha}^{q/2}}{(\lambda_l x)^{q/2}}+4 \sum_{l=1}^L \exp \left(-\frac{C_2 (\lambda_l x)^2\tau_l^{2 \alpha}}{wM^2\xi_{4, \alpha}^{2}} \right)\nonumber \\ &\leq & \frac{C_3wM^{q/2-\alpha q/2}\xi_{q, \alpha}^{q/2}}{x^{q/2}}\sum_{l=1}^L \frac{\varpi_l^{c}}{\lambda_l^{q/2}}+C_4 \sum_{l=1}^L E_{q, \alpha}(\lambda_l \varpi_l^\alpha x). \label{V_{w,l}}\end{aligned}$$ Elementary calculations show that $$\label{poly} \sum_{l=1}^L \frac{\varpi_l^{c}}{\lambda_l^{q/2}} \leq C_5 \text{ for } c<0 \text{ and } \sum_{l=1}^L \frac{\varpi_l^{c}}{\lambda_l^{q/2}} \leq C_6 \varpi_L^c=C_6 w^c \text{ for } c>0.$$ Furthermore, we can use (\[exponentialterm\]) to obtain $$\label{exp} \sum_{l=1}^L E_{q, \alpha}(\lambda_l \varpi_l^\alpha x) \lesssim E_{q, \alpha} (x).$$ Putting (\[T-T\_[M]{}\]), (\[T-T\_[Mw]{}\]), (\[V\_[w,l]{}\]), (\[poly\]) and (\[exp\]) together, we then have (\[differenceforquadraticform\]). Now it suffices to consider ${\mathbb{P}}(|{\mathbb{E}}_0(T_M)|\geq x)$. Observe that $(Y_{bj, M}Y_{bk, M})_{b \text{ is odd}}$ are independent and so are $(Y_{bj, M}Y_{bk, M})_{b \text{ is even}}$. By Corollary 1.7 of @nagaev1979large, for any $J > 1$, $$\begin{aligned} {\mathbb{P}}(|{\mathbb{E}}_0(T_M)|\geq x) &\leq & \sum_{b=1}^w {\mathbb{P}}(|{\mathbb{E}}_0(Y_{bj, M}Y_{bk, M} )|\geq x/(2J))+2\left(\frac{\sum_{b=1}^w \|{\mathbb{E}}_0(Y_{bj, M}Y_{bk, M})\|_{q/2}^{q/2}}{J x^{q/2}}\right)^{J}\\ &&+4\exp\left\{-\frac{C_q x^2}{\sum_{b=1}^w \|{\mathbb{E}}_0(Y_{bj, M}Y_{bk, M})\|_2^2}\right\}.\end{aligned}$$ Note that $\|Y_{bj, M}\|_{q} \leq C_q \sqrt M \|X_{\cdot j}\|_{q, 0}$. 
Hence for $1 \leq b \leq w$, $1 \leq j, k\leq p$ and $q \geq 4$, $$\|{\mathbb{E}}_0(Y_{bj, M}Y_{bk, M})\|_{q/2} \leq 2\|Y_{bj, M}Y_{bk, M}\|_{q/2} \leq 2\|Y_{bj, M}\|_{q}\|Y_{bk, M}\|_{q}\leq C_q M \|X_{\cdot j}\|_{q, 0}\|X_{\cdot k}\|_{q, 0}.$$ Since $${\mathbb{E}}|Y_{bj, M}Y_{bk, M}| \leq \|Y_{bj, M}\|_2 \|Y_{bk, M}\|_2 \leq M \|X_{\cdot j}\|_{2, 0}\|X_{\cdot k}\|_{2, 0} \le {x \over \sqrt w},$$ we have $$\begin{aligned} {\mathbb{P}}(|{\mathbb{E}}_0(T_M)|\geq x)& \leq& \sum_{b=1}^w {\mathbb{P}}(|Y_{bj, M}Y_{bk, M}|\geq x/(4J)) \\ &&+ 2\left(\frac{wM^{q/2}\|X_{\cdot j}\|_{q,0}^{q/2}\|X_{\cdot k}\|_{q,0}^{q/2}}{J x^{q/2}}\right)^{J} +4\exp\left(-\frac{C_q x^2}{wM^2\Psi_{4,0}^4}\right).\end{aligned}$$ Recall that $M=O(n^\varsigma)$ with $0 < \varsigma < 1$. Let $J = 1+ (2q-2) (q-4)^{-1} (1- \varsigma)^{-1}$. Since $x \geq \sqrt{w}M \|X_{\cdot j}\|_{q,0}\|X_{\cdot k}\|_{q,0}$, elementary calculations show that for sufficiently large $n$ the second term in the above expression is no greater than $C_J wM\|X_{\cdot j}\|_{q,0}^{q/2}\|X_{\cdot k}\|_{q,0}^{q/2}/x^{q/2}$. As for the first term, we have $${\mathbb{P}}(|Y_{bj, M}Y_{bk, M}| \geq x/(4J)) \leq {\mathbb{P}}(|Y_{bj, M}| \geq \sqrt{x/(4J)})+{\mathbb{P}}(|Y_{bk, M}| \geq \sqrt{x/(4J)}).$$ By Lemma \[theorem for the sum polynomial\], for $\alpha>1/2-1/q$ and $\alpha <1/2-1/q$, respectively, we have $${\mathbb{P}}(|Y_{bj, M}| \geq \sqrt{x}) \leq \left\{ \begin{array}{ll} C_{q, \alpha}x^{-q/2}M\|X_{\cdot j}\|_{q, \alpha}^q+ C_{q, \alpha} \exp\left(-\frac{C_{q, \alpha}x}{M \|X_{\cdot j}\|_{2, \alpha}^2} \right), \\ C_{q, \alpha}x^{-q/2}M^{q/2-\alpha q}\|X_{\cdot j}\|_{q, \alpha}^q+ C_{q, \alpha} \exp\left(-\frac{C_{q, \alpha}x}{M \|X_{\cdot j}\|_{2, \alpha}^2} \right). \\ \end{array} \right.$$ A similar inequality holds for ${\mathbb{P}}(|Y_{bk, M}| \geq \sqrt{x})$. Let $\phi_{q, \alpha}=\|X_{\cdot j}\|_{q, \alpha}^q+\|X_{\cdot k}\|_{q, \alpha}^q$. 
Hence, it follows that for $\alpha>1/2-1/q$ and $\alpha <1/2-1/q$ respectively, $${\mathbb{P}}(|{\mathbb{E}}_0(T_M)|\geq x) \leq \left\{ \begin{array}{ll} \label{maintermforquadraticform} C_{q, \alpha}x^{-q/2} wM \phi_{q, \alpha} + C_{q, \alpha} \exp\left(-\frac{C_{q, \alpha}x^2}{wM^2 \Psi_{4, \alpha}^4} \right), \\ C_{q, \alpha}x^{-q/2}w M^{q/2-\alpha q}\phi_{q, \alpha}+ C_{q, \alpha} \exp\left(-\frac{C_{q, \alpha}x^2}{wM^2 \Psi_{4, \alpha}^4} \right). \\ \end{array} \right.$$ Combining (\[differenceforquadraticform\]) and (\[maintermforquadraticform\]), and noticing that $\xi_{q, \alpha}^{q/2} \leq C_q\phi_{q, \alpha}$, it follows that $$\begin{aligned} {\mathbb{P}}(|{\mathbb{E}}_0(T)|\geq x) \le C_{q, \alpha} x^{-q/2} F_\alpha \phi_{q, \alpha} +C_{q, \alpha} \exp\left(-\frac{C_{q, \alpha} x^2}{wM^2\Psi_{4, \alpha}^4}\right),\end{aligned}$$ which implies (\[eq:201508181936\]) by summing over $j$ and $k$ and applying the Bonferroni inequality. Under stronger moment conditions, we can have an exponential inequality. \[theoremforquadraticformunderexponential\] Assume $\Phi_{\psi_\nu, 0} < \infty$ for some $\nu \geq 0$. Then for all $x >0$, we have $$\begin{aligned} && {\mathbb{P}}(n|{\text{diag}}(\hat{\Sigma})-{\mathbb{E}}{\text{diag}}(\hat{\Sigma})|_\infty \geq x) \lesssim p \exp\left(-\frac{x^\gamma}{4e\gamma (\sqrt{w}M\Phi_{\psi_\nu, 0}^2)^\gamma }\right), \label{exponentialforquadraticform1}\\ &&{\mathbb{P}}(n|\hat{\Sigma}-{\mathbb{E}}\hat{\Sigma}|_\infty \geq x) \lesssim p^2 \exp\left(-\frac{x^\gamma}{4e\gamma (\sqrt{w}M\Phi_{\psi_\nu, 0}^2)^\gamma }\right), \label{exponentialforquadraticform2}\end{aligned}$$ where $\gamma=1/(1+2 \nu)$ and the constants in $\lesssim$ only depend on $\nu$. Let $T=\sum_{b=1}^w Y_{bj} Y_{bk}$.
By the Burkholder inequality, we have $$\begin{aligned} \label{eq:1508182136} \| {\mathbb{E}}_0 T \|_{q/2}^2 \le (q/2-1) \sum_{l=-\infty}^{w M} \|{\cal P}^l T \|_{q/2}^2 \le (q/2-1) \sum_{l=-\infty}^{w M} \left( \sum_{b=1}^w \|{\cal P}^l Y_{bj} Y_{bk} \|_{q/2} \right)^2.\end{aligned}$$ By Theorem 3 in @wu2011asymptotic, $\| Y_{bj} \|_q \le (q-1)^{1/2} \sqrt M \|X_{\cdot j}\|_{q, 0}$. Since $\| Y_{bk} - Y_{bk, \{l\}} \|_{q} \le \sum_{h=1+(b-1)M}^{b M} \delta_{h-l, q, k}$, we have $$\begin{aligned} \sum_{b=1}^w \|{\cal P}^l Y_{bj} Y_{bk} \|_{q/2} &\le& \sum_{b=1}^w \| Y_{bj} Y_{bk} - Y_{bj, \{l\} } Y_{bk, \{l\}} \|_{q/2} \cr &\le& \sum_{b=1}^w ( \| Y_{bj} \|_q \| Y_{bk} - Y_{bk, \{l\}} \|_{q} + \| Y_{bj} - Y_{bj, \{l\} } \|_q \|Y_{bk, \{l\}}\|_q ) \cr &\le& (q-1)^{1/2} \sqrt M \left(\|X_{\cdot j}\|_{q, 0} \sum_{h=1}^{w M} \delta_{h-l, q, k} + \|X_{\cdot k}\|_{q, 0} \sum_{h=1}^{w M} \delta_{h-l, q, j} \right),\end{aligned}$$ which by (\[eq:1508182136\]) implies that $$\begin{aligned} \label{eq:1508182144} \| {\mathbb{E}}_0 T \|_{q/2}^2 \le (q/2-1) \sum_{l=-\infty}^{w M} \|{\cal P}^l T \|_{q/2}^2 \le (q-2)(q-1) w M^2 \|X_{\cdot j}\|_{q, 0}^2 \|X_{\cdot k}\|_{q,0}^2. \end{aligned}$$ Let $R_{j k} = {\mathbb{E}}_0 T /(\sqrt w M)$. By an argument similar to that used for Lemma \[m-approximationtheorem2\], if $\gamma h \geq 2$, it follows that $\|R_{jk}\|_{\gamma h} \leq (2\gamma h-1)(2\gamma h)^{2 \nu}\|X_{\cdot j}\|_{\psi_\nu, 0}\|X_{\cdot k}\|_{\psi_\nu, 0}$. Let $\tau_0=(2e\gamma \|X_{\cdot j}\|_{\psi_\nu, 0}^\gamma\|X_{\cdot k}\|_{\psi_\nu, 0}^\gamma)^{-1}$. Notice that $-2\nu=1-1/\gamma$.
Then $$\begin{aligned} \frac{t^h\|R_{jk}^\gamma\|^h_h}{h!} &\leq& \frac{t^h(2\gamma h-1)^{\gamma h}(2\gamma h)^{2 \nu \gamma h}\|X_{\cdot j}\|_{\psi_\nu, 0}^{\gamma h}\|X_{\cdot k}\|_{\psi_\nu, 0}^{\gamma h}}{C_1(h/e)^h a_h^{-1}}\\ &\leq & \frac{a_h t^h (2\gamma h -1)^{\gamma h}}{C_1 \tau_0^h(2\gamma h)^{\gamma h}} \leq \frac{a_h t^h}{C_1 \sqrt{e} \tau_0^h}.\end{aligned}$$ If $\gamma h <2$, then $\|R_{jk}\|_{\gamma h} \leq \|R_{jk}\|_2 \leq \sqrt{6}\cdot 4^{2 \nu} \|X_{\cdot j}\|_{\psi_\nu, 0}\|X_{\cdot k}\|_{\psi_\nu, 0}$. So we have $$\begin{aligned} {\mathbb{E}}[\exp(t R_{jk}^\gamma)]&\leq & 1+ \sum_{1 \leq h <2/\gamma}\frac{t^h(\sqrt{6}\cdot 4^{2 \nu} \|X_{\cdot j}\|_{\psi_\nu, 0}\|X_{\cdot k}\|_{\psi_\nu, 0})^{\gamma h}}{h!}+\sum_{h \geq 2/\gamma}\frac{a_h t^h}{C_1 \sqrt{e} \tau_0^h}\\ &\leq & 1+C_\gamma \sum_{h=1}^\infty a_h\frac{t^h}{\tau_0^h} \leq 1+ C_\gamma \frac{t/\tau_0}{(1-t/\tau_0)^{1/2}}.\end{aligned}$$ Choosing $t=\tau_0/2$ and applying the Markov and Bonferroni inequalities, we obtain (\[exponentialforquadraticform1\]) and (\[exponentialforquadraticform2\]). An alternative estimate of $\Sigma$, which also works with unknown mean ${\mathbb{E}}X_i$, is $$\label{long run covariance estimate minus mean} \tilde{\Sigma}=\frac{1}{w M}\sum_{b=1}^w (\sum_{i \in L_b}X_i-M \bar{X})(\sum_{i \in L_b}X_i-M\bar{X})^\top,$$ where $\bar{X} = (w M)^{-1}\sum_{i=1}^{w M} X_i$, $w = \lfloor n/M \rfloor$. Then $|\tilde{\Sigma} - \hat{\Sigma}|_\infty = M |\bar X|_\infty^2$. Applying Lemma \[theorem for the sum polynomial\] to $\sum_{i=1}^{w M} X_{i j}$, one can conclude that Theorems \[theoremforquadraticformunderpolynomial\] and \[theoremforquadraticformunderexponential\] still hold for $\tilde{\Sigma}$ with ${\mathbb{E}}\hat{\Sigma}$ therein replaced by $\Sigma_M:=\sum_{i=-M}^M (1-|i|/M) \Gamma_i$ (which equals ${\mathbb{E}}\hat{\Sigma}$ if ${\mathbb{E}}X_i = 0$).
\[corollary for Sigma\] (i) Under the conditions in Theorem \[theoremforquadraticformunderpolynomial\], we have $| \tilde{\Sigma} - \Sigma|_\infty = O_{\mathbb{P}}(r_n)$, where $$r_n = n^{-1} \max\{p^{2/q} F^{2/q}_\alpha \Upsilon_{q, \alpha}^2, \, \sqrt w M \Psi_{4, \alpha}^2 \sqrt{\log p}, \, \sqrt w M \Psi_{q, \alpha}^2 \} + \Psi_{2,0}\Psi_{2, \alpha} v(M),$$ where $v(M)=1/M$ if $\alpha>1$, $v(M)=\log M/M$ if $\alpha=1$ and $v(M)=1/M^\alpha$ if $0<\alpha <1$. (ii) Under the conditions in Theorem \[theoremforquadraticformunderexponential\], we have $| \tilde{\Sigma} - \Sigma|_\infty = O_{\mathbb{P}}(r_n)$ with $r_n = n^{-1} \sqrt{w}M \Phi_{\psi_\nu,0}^2 (\log p)^{1/\gamma} + \Psi_{2,0}\Psi_{2, \alpha} v(M)$. The above corollary follows easily from Theorems \[theoremforquadraticformunderpolynomial\] and \[theoremforquadraticformunderexponential\] since the bias $|\Sigma_M-\Sigma|_{\infty} \lesssim \Psi_{2,0}\Psi_{2, \alpha} v(M)$; see the proof of Lemma \[lemmafortwogaussian\]. For the estimate $\tilde{\Sigma}$ in (\[long run covariance estimate minus mean\]), let $\tilde{D}_0=[{\text{diag}}(\tilde{\Sigma})]^{1/2}$. Let $\tilde{Z} = \tilde{\Sigma}^{1/2} \eta$, where $\eta \sim N(0, \text{Id}_p)$ is independent of $(X_i)_i$. Then, conditioning on $(X_i)_i$, $\tilde{Z} \sim N(0, \tilde{\Sigma})$. Let $0 < \theta < 1$; let $\tilde \chi_\theta$ be the conditional $\theta$-quantile of $|\tilde{D}_0^{-1}\tilde{Z}|_\infty$ given $(X_i)_{i=1}^n$. We can use $\tilde \chi_\theta$ to estimate the $\theta$-quantile of $|D_0^{-1}(\bar{X}_n-\mu)|_\infty$, thus constructing simultaneous confidence intervals for $\mu = (\mu_1, \ldots, \mu_p)^\top$ as $\hat \mu_j \pm \tilde \chi_\theta \tilde \sigma_{j j}^{1/2}$, $1 \le j \le p$. Assume that $r_n = o(1/\log^2 p)$. Then $\pi(|\tilde{\Sigma}-\Sigma|_\infty)=o(1)$, and by Lemma 3.1 in @chernozhukov2013, these simultaneous confidence intervals have the asymptotically correct coverage probability $\theta$.
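The conditional quantile $\tilde \chi_\theta$ has no closed form but is easy to estimate by Monte Carlo simulation of $\tilde{Z} = \tilde{\Sigma}^{1/2} \eta$; a sketch (the function name is ours; the matrix square root is taken via an eigendecomposition):

```python
import numpy as np

def conditional_quantile(Sigma_tilde, theta=0.95, reps=5000, seed=0):
    """Monte Carlo estimate of the theta-quantile of |D0^{-1} Z|_inf,
    where Z ~ N(0, Sigma_tilde) and D0 = diag(Sigma_tilde)^{1/2}."""
    rng = np.random.default_rng(seed)
    p = Sigma_tilde.shape[0]
    vals, vecs = np.linalg.eigh(Sigma_tilde)
    root = vecs @ np.diag(np.sqrt(np.clip(vals, 0.0, None))) @ vecs.T
    Z = rng.standard_normal((reps, p)) @ root      # rows ~ N(0, Sigma_tilde)
    scaled = np.abs(Z) / np.sqrt(np.diag(Sigma_tilde))
    return np.quantile(scaled.max(axis=1), theta)

chi = conditional_quantile(np.eye(5), theta=0.95)
# For Sigma = Id_5 this is near the 0.95-quantile of the max of 5 |N(0,1)|'s.
assert 2.3 < chi < 2.9
```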
Note that $\tilde \chi_\theta$ can be obtained by sample quantile estimates from extensive simulations of $\tilde{Z} = \tilde{\Sigma}^{1/2} \eta$. Tail probability inequalities under dependence {#sec:probability inequalities} ============================================== Tail probability inequalities play an important role in simultaneous inference. Here we shall provide some Nagaev-type tail probability inequalities. They are of independent interest. Let ${\varepsilon}_i, {\varepsilon}'_j$, $i, j \in \mathbb{Z}$, be i.i.d. random variables. We start with the one-dimensional stationary process $(e_i)_{i=-\infty}^\infty$ of the form $$\label{representation} e_i=g(\ldots, {\varepsilon}_{i-1}, {\varepsilon}_i),$$ where $g$ is a measurable function such that $e_i$ is well-defined. Recall ${\mathcal{F}}_i^j=({\varepsilon}_i, {\varepsilon}_{i+1}, \ldots, {\varepsilon}_j)$, ${\mathcal{F}}^j=(\ldots, {\varepsilon}_{j-1}, {\varepsilon}_j)$ and ${\mathcal{F}}_{i}=({\varepsilon}_i, {\varepsilon}_{i+1}, \ldots)$. Define the projection operators $\mathcal{P}^0 \cdot = {\mathbb{E}}( \cdot |{\mathcal{F}}^{0}) - {\mathbb{E}}( \cdot | {\mathcal{F}}^{-1})$ and $\mathcal{P}_0 \cdot = {\mathbb{E}}( \cdot |{\mathcal{F}}_{0}^i) - {\mathbb{E}}( \cdot |{\mathcal{F}}_{1}^i)$. As in (\[eq:1507140845\]), define respectively the functional and the predictive dependence measures $$\begin{aligned} \delta_{i, q}=\| e_i-g({\mathcal{F}}^{i, \{0\}}) \|_q, \,\, \theta_{i, q} =\|\mathcal{P}^0 e_i\|_q, \,\, \mbox{ and } \theta_{i, q}' =\|\mathcal{P}_0 e_i\|_{q},\end{aligned}$$ where ${\mathcal{F}}^{i, \{0\}} = ( \ldots, {\varepsilon}_{-1}, {\varepsilon}_0', {\varepsilon}_{1}, \ldots, {\varepsilon}_i)$.
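For a causal linear process $e_i=\sum_{j\ge 0}a_j\varepsilon_{i-j}$ with mean-zero i.i.d. innovations, these measures have closed forms ($\delta_{i,q}=|a_i|\,\|\varepsilon_0-\varepsilon_0'\|_q$, and $\mathcal{P}^0 e_i=a_i\varepsilon_0$ so $\theta_{i,q}=|a_i|\,\|\varepsilon_0\|_q$); the sketch below (coefficients chosen by us for illustration) exploits this at $q=2$ with standard normal innovations.

```python
import math

# Linear process e_i = sum_{j>=0} a_j eps_{i-j}, eps ~ N(0,1), a_j = rho^j.
rho = 0.7
a = [rho**j for j in range(50)]

# Replacing eps_0 changes e_i by a_i (eps_0 - eps_0'), so delta_{i,2} = |a_i|*sqrt(2);
# P^0 e_i = a_i eps_0, so theta_{i,2} = |a_i|.
delta = [abs(x) * math.sqrt(2.0) for x in a]
theta = [abs(x) for x in a]

# Jensen's inequality: theta_{i,q} <= delta_{i,q}.
assert all(t <= d for t, d in zip(theta, delta))

# Tail sums Delta_{m,2} = sum_{i>=m} delta_{i,2} are non-increasing in m.
Delta = [sum(delta[m:]) for m in range(len(delta))]
assert all(Delta[m + 1] <= Delta[m] for m in range(len(Delta) - 1))
```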
Let $\delta_{i, q}= 0$ if $i < 0$; let $\Delta_{m, q} = \sum_{i=m}^\infty \delta_{i, q}$, $m \ge 0$, be the tail dependence measures, and define the dependence adjusted norm $$\label{dependenceadjustednorm} \| e_\cdot \|_{q, \alpha}:= \sup_{m \geq 0} (m+1)^{\alpha} \Delta_{m, q}, \text{ for }\alpha \geq 0.$$ Here $\delta_{i, q}$ measures the dependence of $e_i$ on ${\varepsilon}_0$ and $\Delta_{m, q}$ measures the cumulative impact of ${\varepsilon}_0$ on $(e_i)_{i \geq m}$. The projections $(\mathcal{P}_{-i}\cdot)_{i \in \mathbb{Z}}$ and $(\mathcal{P}^{i}\cdot)_{i \in \mathbb{Z}}$ induce martingale differences with respect to $({\mathcal{F}}_{-i})$ and $({\mathcal{F}}^i)$, respectively. Both predictive dependence measures quantify the effect on the prediction of $e_i$ when part of the past inputs is concealed, and they satisfy $\theta_{i, q} \leq \delta_{i, q}$ and $\theta'_{i, q} \leq \delta_{i, q}$ in view of Jensen’s inequality. Inequalities with Finite Polynomial Moments ------------------------------------------- For $m\geq 0$, the $m$-dependence approximation of $e_i$ is denoted by $e_{i, m}$, where $$e_{i, m}={\mathbb{E}}(e_i|{\varepsilon}_{i-m},{\varepsilon}_{i-m+1}, \ldots, {\varepsilon}_i).$$ Let $S_n=\sum_{i=1}^n e_i$ and $S_{n, m}=\sum_{i=1}^n e_{i, m}$. With the dependence adjusted norm (\[dependenceadjustednorm\]), we are able to provide tail probability inequalities for the error incurred when approximating $(e_i)$ by the $m$-dependent process $(e_{i, m})$. In the lemmas below the constant $C_{q, \alpha}$ depends only on $q$ and $\alpha$, and its value may change from line to line. \[m-approximationtheorem1\] Assume $\|e_\cdot\|_{q, \alpha} < \infty$, where $q >2$ and $\alpha>0$.
(i) If $\alpha>1/2-1/q$, then $$\label{alpha>} {\mathbb{P}}(|S_n-S_{n, m}| \geq x) \leq \frac{C_{q, \alpha} n m^{q/2-1-\alpha q}\|e_\cdot\|_{q, \alpha}^q}{x^q}+ C_{q,\alpha} \exp{\left(}- \frac{C_{q, \alpha} x^2 m^{2 \alpha}}{n\|e_\cdot\|_{2, \alpha}^2}{\right)}$$ holds for all $x>0$ and $1 \leq m \leq n$. (ii) If $0 < \alpha < 1/2-1/q$, we have $$\label{alpha<} {\mathbb{P}}(|S_n-S_{n, m}| \geq x) \leq \frac{C_{q,\alpha} n^{q/2-\alpha q} \|e_\cdot\|_{q, \alpha}^q}{x^q}+ C_{q, \alpha} \exp{\left(}- \frac{C_{q, \alpha} x^2 m^{2 \alpha}}{n\|e_\cdot\|_{2, \alpha}^2}{\right)}.$$ It is the special case $p=1$ of Lemma \[lemma for m-approximation, polynomial tail\]. \[theorem for the sum polynomial\] Assume that $\|e_\cdot\|_{q, \alpha} < \infty$, where $q >2$ and $\alpha>0$. (i) If $\alpha>1/2-1/q$, then there exists some constant $C_{q, \alpha}$ depending on $q$ and $\alpha$ only such that, for $x>0$, $$\label{eq:1508220935} {\mathbb{P}}(|S_n| \geq x) \leq \frac{C_{q, \alpha} n \|e_\cdot\|_{q, \alpha}^q}{x^q}+ C_{q,\alpha} \exp{\left(}- \frac{C_{q, \alpha} x^2}{n\|e_\cdot\|_{2, \alpha}^2}{\right)}.$$ (ii) If $0 < \alpha < 1/2-1/q$, we have the following inequality: $$\label{eq:1508220936} {\mathbb{P}}(|S_n| \geq x) \leq \frac{C_{q,\alpha} n^{q/2-\alpha q} \|e_\cdot\|_{q, \alpha}^q}{x^q}+ C_{q, \alpha} \exp{\left(}- \frac{C_{q, \alpha} x^2}{n\|e_\cdot\|_{2, \alpha}^2}{\right)}.$$ By Markov’s inequality and Lemma 1 of @liu2010asymptotics, one obtains $${\mathbb{P}}(|S_n-S_{n, m}|\geq x) \leq \frac{\|S_n-S_{n,m}\|_q^q}{x^q} \le C_q \frac{n^{q/2}m^{-\alpha q}\|e_\cdot\|_{q, \alpha}^q}{x^q}.$$ In comparison, the polynomial tail bounds in (\[eq:1508220935\]) and (\[eq:1508220936\]) are sharper. Inequalities with Finite Exponential Moments -------------------------------------------- If $e_i$ satisfies a stronger moment condition than finiteness of the $q$-th moment, we can obtain an exponential inequality.
We shall assume $\|e_\cdot\|_{q, \alpha} < \infty$ for all $q >0$ and some $\alpha\geq 0$, and we further assume that for some $\nu \geq 0$, the dependence adjusted sub-exponential norm $$\label{assumption} \|e_\cdot\|_{\psi_\nu, \alpha}:=\sup\limits_{q \geq 2} q ^{-\nu} \|e_\cdot\|_{q, \alpha} < \infty.$$ By this definition, if the $e_i$ are i.i.d., $\|e_\cdot\|_{\psi_\nu, \alpha}$ reduces to the sub-Gaussian norm ($\nu=1/2$) or sub-exponential norm ($\nu=1$) of the random variable by the equivalence of $\|e_\cdot\|_{q, \alpha}$ and $\|e_i\|_q$. The parameter $\nu$ measures how fast $\|e_\cdot\|_{q, \alpha}$ increases with $q$. \[m-approximationtheorem2\] Assume (\[assumption\]). Let $J_n=(S_n-S_{n, m})/\sqrt{n}$ and $\beta=2/(1+2\nu)$. Then $$h(t):= \sup_{n \in \mathbb{N}} {\mathbb{E}}[\exp(tJ_n^{\beta})] \leq 1+C_{\beta}(1-t/t_0)^{-1/2} t/t_0$$ holds for $0 \leq t < t_0$ with $t_0=m^{\alpha\beta}/(e\beta\|e_\cdot\|_{\psi_\nu, \alpha}^\beta)$. Consequently, letting $t=t_0/2$, for $x>0$, $$\label{conclusion1} {\mathbb{P}}(|J_n| \geq x) \leq \exp(-t x^\beta)h(t) \leq C_\beta\exp\left(-\frac{x^\beta m^{\alpha \beta}}{2e\beta \|e_\cdot\|^\beta_{\psi_\nu, \alpha}}\right).$$ \[theorem for the sum exponential\] Assume (\[assumption\]) holds for $\alpha=0$. Let $\beta=2/(1+2\nu)$. Then for $x>0$, $${\mathbb{P}}(|S_n/\sqrt{n}| \geq x) \leq C_\beta \exp\left(-\frac{x^\beta}{2e\beta \|e_\cdot\|^\beta_{\psi_\nu, 0}}\right).$$ Let $Q_{n, l}=\sum_{i=1}^n\mathcal{P}_{i-l}e_i$, $l \geq 0$. Then $Q_{n, l}$ is a backward martingale. By Burkholder’s inequality, we have $$\|Q_{n, l}\|_q^2 \leq (q-1) \sum_{i=1}^n \|\mathcal{P}_{i-l}e_i\|_q^2 =(q-1)n (\theta_{l, q}')^2.$$ By $\theta_{l, q}' \leq \delta_{l, q}$, we have $\|J_n\|_q \leq (q-1)^{1/2}\Delta_{m+1, q}$ in view of $\sqrt{n}J_n= \sum_{l=m+1}^\infty Q_{n, l}$. Write the negative binomial expansion $(1-s)^{-1/2}=1+\sum_{k=1}^\infty a_k s^k$ with $a_k=(2k)!/(2^{2k}(k!)^2)$ for $|s|<1$.
By Stirling’s formula, we have $a_k \sim (k\pi)^{-1/2}$ as $k \rightarrow \infty$. Hence, there exist absolute constants $C_1, C_2>0$ such that for all $k \geq 1$, $$C_1 (k/e)^ka_k^{-1} \leq k! \leq C_2 (k/e)^k a_k^{-1}.$$ Under condition (\[assumption\]), if $k\beta \geq 2$, then $\|e_\cdot\|_{\beta k, \alpha} \leq \|e_\cdot\|_{\psi_\nu, \alpha}(\beta k)^\nu$ and hence $$\begin{aligned} \frac{t^k\|J_n^\beta\|^k_k}{k!} \leq \frac{t^k(\beta k-1)^{\beta k/2}\Delta_{m+1,\beta k}^{\beta k}}{C_1(k/e)^k a_k^{-1}} \leq \frac{a_k t^k (\beta k -1)^{\beta k/2}}{C_1 t_0^k(\beta k)^{\beta k/2}} \leq \frac{a_k t^k}{C_1 \sqrt{e} t_0^k}.\end{aligned}$$ If $k\beta <2$, then $\|J_n\|_{\beta k} \leq \|J_n\|_2 \leq 2^\nu m^{-\alpha}\|e_\cdot\|_{\psi_\nu, \alpha}$. Setting $y = t J_n^\beta$ in $e^y= \sum_{k=0}^\infty y^k/k!$ and taking expectations, we obtain $$\begin{aligned} h(t) &\leq & 1+ \sum_{1 \leq k <2/\beta}\frac{t^k(2^\nu m^{-\alpha} \|e_\cdot\|_{\psi_\nu, \alpha})^{\beta k}}{k!}+\sum_{k \geq 2/\beta}\frac{a_k t^k}{C_1 \sqrt{e} t_0^k}\\ &\leq & 1+C_\beta \sum_{k=1}^\infty a_k\frac{t^k}{t_0^k} \leq 1+ C_\beta \frac{t/t_0}{(1-t/t_0)^{1/2}},\end{aligned}$$ where $C_\beta>0$ only depends on $\beta$. So (\[conclusion1\]) follows by Markov’s inequality. Inequalities for High-dimensional Time Series with Finite Polynomial Moments ----------------------------------------------------------------------------- In this section we shall derive powerful tail probability inequalities for high-dimensional stationary vectors; cf. Lemmas \[lemma for m-approximation, polynomial tail\] and \[lemma for the sum, polynomial tail\]. The proofs require Theorem 4.1 of @pinelis1994optimum, a deep Rosenthal-Burkholder type bound on moments of Banach-space-valued martingales. Lemma \[lemma for high-dim Bulkholder\] follows from Theorem 4.1 of @pinelis1994optimum. Lemma \[lemma for independent high-dim Nagaev\] is a Fuk–Nagaev type inequality for the sum of independent random vectors.
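The expansion coefficients $a_k$ and their Stirling asymptotics used in the proof above are easy to verify numerically:

```python
import math

def a_k(k):
    """Coefficients of the negative binomial expansion (1-s)^{-1/2}: C(2k, k) / 4^k."""
    return math.comb(2 * k, k) / 4**k

# a_k ~ (k * pi)^{-1/2}: the normalized ratio approaches 1 as k grows.
ratios = [a_k(k) * math.sqrt(k * math.pi) for k in (10, 100, 400)]
assert all(abs(r - 1.0) < 0.05 for r in ratios)
assert abs(ratios[-1] - 1.0) < abs(ratios[0] - 1.0)
```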
For a $p$-dimensional vector $v = (v_1, \ldots, v_p)$ recall the $s$-length $|v|_s = (\sum_{j=1}^p |v_j|^s)^{1/s}$, $s \ge 1$. \[lemma for high-dim Bulkholder\] Let $D_i$, $1 \leq i \leq n$, be $p$-dimensional martingale difference vectors with respect to the $\sigma$-field $\mathcal{G}_i$. Let $s>1$ and $q\geq 2$. Then $$\||D_1+\ldots+D_n|_{s}\|_{q} \le c \left\{ q \| \sup_{i} |D_i|_s \|_q +\sqrt{q(s-1)}\left\|\left[\sum_{i=1}^n {\mathbb{E}}(|D_i|_s^2|\mathcal{G}_{i-1})\right]^{1/2}\right\|_q \right\},$$ where $c$ is an absolute constant. \[lemma for independent high-dim Nagaev\] Assume $s>1$. Let $X_1, \ldots, X_n$ be $p$-dimensional independent random vectors with mean zero such that for some $q>2$, $\||X_i|_s\|_q < \infty$, $1 \leq i \leq n$. Let $T_n=\sum_{i=1}^n X_i$ and $\sigma_i=(\|X_{i1}\|_2, \ldots, \|X_{ip}\|_2)^\top$. Then for any $y>0$, $${\mathbb{P}}\left(|T_n|_s\geq 2 {\mathbb{E}}|T_n|_s+y\right) \leq C_q y^{-q}\sum_{i=1}^n {\mathbb{E}}|X_i|_s^q+\exp\left(-\frac{y^2}{3\sum_{i=1}^n |\sigma_i|_s^2}\right),$$ where $C_q$ is a positive constant only depending on $q$. For $s>1$, we apply Theorem 3.1 of @einmahl2008characterization with the Banach space $(\mathbb{R}^p, |\cdot|_s)$ and $\eta=\delta=1$. The unit ball of the dual of $(\mathbb{R}^p, |\cdot|_s)$ is the set of linear functions $\{u=(u_1, \ldots, u_p)^\top \mapsto \lambda^\top u: \lambda \in \mathbb{R}^p, |\lambda|_a \leq 1\}$ where $1/a+1/s=1$. By Minkowski’s and Hölder’s inequalities, we have $$\|\lambda^\top X_i\|_2 \leq \sum_{j=1}^p |\lambda_j| \cdot \|X_{ij}\|_2 \leq |\lambda|_a |\sigma_i|_s.$$ Hence, the $\Lambda_n$ therein is bounded by $\sum_{i=1}^n |\sigma_i|_s^2$. Let $X_i$ be a mean zero $p$-dimensional stationary process, and $T_n = \sum_{i=1}^n X_i$, $T_{n,m}=\sum_{i=1}^n X_{i,m}$ where $X_{i,m}= {\mathbb{E}}(X_i| {\varepsilon}_{i-m}, \ldots, {\varepsilon}_{i})$. 
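To make the $m$-dependence approximation $X_{i,m}$ concrete, the following standalone simulation (our own illustrative example, not taken from the proofs) uses a scalar linear process $X_i=\sum_{k\geq 0}\rho^k{\varepsilon}_{i-k}$, for which the conditional expectation simply truncates the series at lag $m$ and the approximation error can be computed in closed form:

```python
import numpy as np

rng = np.random.default_rng(0)
rho, K = 0.7, 200                 # geometric coefficients a_k = rho^k, truncated at lag K
a = rho ** np.arange(K + 1)

n, m = 10000, 8
eps = rng.standard_normal(n + K)
X = np.convolve(eps, a, mode="valid")                                   # X_i = sum_k a_k eps_{i-k}
Xm = np.convolve(eps, np.where(np.arange(K + 1) <= m, a, 0.0), mode="valid")

# For linear processes, X_{i,m} = E(X_i | eps_{i-m}, ..., eps_i) = sum_{k<=m} a_k eps_{i-k},
# so E(X_i - X_{i,m})^2 = sum_{k>m} a_k^2 = rho^{2(m+1)} / (1 - rho^2), up to truncation at K.
theory = rho ** (2 * (m + 1)) / (1 - rho ** 2)
empirical = np.mean((X - Xm) ** 2)
print(theory, empirical)
```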
We are interested in bounding the tail probabilities ${\mathbb{P}}(|T_n-T_{n,m}|_\infty \geq x)$ and ${\mathbb{P}}(|T_n|_\infty \geq x)$ for large $x$. Write $\ell = \ell(p) = 1\vee \log p$. \[lemma for m-approximation, polynomial tail\] Assume $\| |X_\cdot|_\infty \|_{q, \alpha} < \infty$, where $q>2$ and $\alpha \geq 0$. Also assume $\Psi_{2, \alpha} < \infty$. (i) If $\alpha>1/2-1/q$, then for $x \gtrsim [\sqrt{n \ell}\Psi_{2, \alpha}+n^{1/q} \ell \| |X_{\cdot}|_\infty \|_{q, \alpha}]m^{-\alpha}$, $$\label{alpha>} {\mathbb{P}}(|T_n-T_{n, m}|_\infty \geq x) \leq \frac{C_{q, \alpha} n m^{q/2-1-\alpha q} \ell^{q/2}\||X_\cdot|_\infty\|_{q, \alpha}^q}{x^q}+ C_{q,\alpha} \exp{\left(}- \frac{C_{q, \alpha} x^2 m^{2 \alpha}}{n \Psi_{2, \alpha}^2}{\right)}$$ holds for all $1 \leq m \leq n$. (ii) If $0 < \alpha < 1/2-1/q$, the inequality is $$\label{alpha<} {\mathbb{P}}(|T_n-T_{n, m}|_\infty \geq x) \leq \frac{C_{q,\alpha} n^{q/2-\alpha q} \ell^{q/2} \| |X_\cdot|_\infty \|_{q, \alpha}^q}{x^q}+ C_{q, \alpha} \exp{\left(}- \frac{C_{q, \alpha} x^2 m^{2 \alpha}}{n \Psi_{2, \alpha}^2}{\right)}.$$ Let $s = \ell = 1\vee \log p$. Then ${\mathbb{P}}(|T_n-T_{n,m}|_\infty \geq x)$ is comparable to ${\mathbb{P}}(|T_n-T_{n,m}|_s \geq x)$, since for any vector $v=(v_1, \ldots, v_p)^\top$, $|v|_\infty \leq |v|_s \leq p^{1/s} |v|_\infty$. Let $L=\lfloor (\log n - \log m)/ (\log 2)\rfloor$, $\varpi_l=2^l$ if $1 \leq l <L$, $\varpi_L=\lfloor n/m \rfloor$ and $\tau_l=m \cdot \varpi_l$ for $1 \leq l < L$, $\tau_0=m$, $\tau_L=n$. Define $M_{n, l}=T_{n, \tau_l}-T_{n, \tau_{l-1}}$ for $1 \leq l \leq L$ and write $$\label{sum} T_n-T_{n, m}= T_n-T_{n, n}+\sum\limits_{l=1}^L M_{n, l}.$$ Notice that $T_n-T_{n, n}=\sum\limits_{j=n}^\infty (T_{n, j+1}-T_{n, j})$. 
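In passing, the norm comparison $|v|_\infty \leq |v|_s \leq p^{1/s}|v|_\infty$ used at the start of this proof is easy to check numerically; with the choice $s=\log p$ the inflation factor $p^{1/s}$ is exactly $e$, which is why $|\cdot|_s$ and $|\cdot|_\infty$ are interchangeable up to constants. A small standalone check:

```python
import numpy as np

rng = np.random.default_rng(2)
p = 1000
s = np.log(p)                      # the choice s = log p made in the proof
v = rng.standard_normal(p)

l_s = np.sum(np.abs(v) ** s) ** (1 / s)
l_inf = np.max(np.abs(v))
factor = p ** (1 / s)              # equals e up to floating point
print(l_inf, l_s, factor)
```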
By Lemma \[lemma for high-dim Bulkholder\], $$\| |T_n-T_{n, n}|_s \|_q \leq \sum\limits_{j=n}^\infty \| |T_{n, j+1}-T_{n, j}|_s \|_q \leq \sum\limits_{j=n}^\infty C_q (ns)^{1/2}\omega_{j+1, q}=C_q (ns)^{1/2}\Omega_{n+1, q},$$ where $C_q$ is a constant only depending on $q$. By Markov’s inequality, we have $$\label{secondterm} {\mathbb{P}}(|T_n-T_{n, n}|_s \geq x) \leq \frac{\| |T_n-T_{n, n}|_s \|_q ^q}{x^q} \leq \frac{C_q (ns)^{q/2}\Omega_{n+1, q}^q}{x^q}.$$ For each $1 \leq l \leq L$, define $$\begin{aligned} &&Y_{i,l}=\sum_{k=(i-1)\tau_l+1}^{(i\tau_l)\wedge n}{\left(}X_{k,\tau_l}-X_{k,\tau_{l-1}}{\right)}, \quad \text{for } 1 \leq i \leq \lfloor n/\tau_l\rfloor;\\ &&R_{n, l}^e= \sum_{i \text{ is even}} Y_{i,l} \text{ and } R_{n, l}^o= \sum_{i \text{ is odd}} Y_{i,l}.\end{aligned}$$ Let $c=q/2-1-\alpha q$; let $\lambda_1, \lambda_2, \cdots, \lambda_L$ be a positive sequence such that $\sum_{l=1}^L \lambda_l \leq 1$, specifically, $\lambda_l=l^{-2}/(\pi^2/3)$ if $1 \leq l \leq L/2$ and $\lambda_l=(L+1-l)^{-2}/(\pi^2/3)$ if $L/2 < l \leq L$. Since $Y_{i,l}$ and $Y_{i',l}$ are independent for $|i-i'|>1$, by Lemma \[lemma for independent high-dim Nagaev\], for any $x>0$, $$\begin{aligned} {\mathbb{P}}( |R_{n,l}^e|_s-2 {\mathbb{E}}| R_{n,l}^e|_s\geq \lambda_lx) \leq \frac{C_q \sum\limits_{i \text{ is even}}{\mathbb{E}}| Y_{i,l}|_s^{q}}{{\left(}\lambda_l x {\right)}^{q}}+ \exp{\left(}-\frac{{\left(}\lambda_l x {\right)}^2}{3 \sum\limits_{i \text{ is even}} |\sigma_{Y_i, l}|_s^2} {\right)},\end{aligned}$$ where $\sigma_{Y_i, l}=(\|Y_{i1, l}\|_2, \ldots, \|Y_{ip,l}\|_2)^\top$. By Lemma \[lemma for high-dim Bulkholder\], $\| |Y_{i, l}|_s \|_{q} \leq C_q (\tau_l s)^{1/2} \tilde{\omega}_{l, q}$ where $\tilde{\omega}_{l, q} =\sum_{k=\tau_{l-1}+1}^{\tau_l} \omega_{k, q} \leq \tau_{l-1}^{-\alpha} \||X_\cdot|_\infty \|_{q, \alpha}$. 
For $1 \leq j \leq p$, by Burkholder’s inequality, $\|Y_{ij,l}\|_2 \leq \sqrt{\tau_l} \tilde{\delta}_{l,2,j}$ where $\tilde{\delta}_{l,2,j}= \sum_{k=\tau _{l-1}+1}^{\tau_l} \delta_{k, 2, j} \leq \tau_{l-1}^{-\alpha} \|X_{\cdot j}\|_{2, \alpha}$, which implies $|\sigma_{Y_i, l}|_s \lesssim \tau_l^{1/2}\tau_{l-1}^{-\alpha} \Psi_{2, \alpha}$. So we obtain $${\mathbb{P}}( | R_{n,l}^e|_s -2{\mathbb{E}}| R_{n,l}^e|_s \geq \lambda_l x) \leq \frac{C_1 n s^{q/2}}{x^{q}}\cdot \frac{\tau_{l}^{q/2-1}\tilde{\omega}_{l, q}^{q}}{\lambda_l^{q}}+ \exp{\left(}-\frac{C_2{\left(}\lambda_l x {\right)}^2\tau_{l-1}^{2\alpha}}{n \Psi_{2, \alpha}^2} {\right)}. \label{R}$$ By Lemma 8 of @chernozhukov2014comparison, for $s = \log p \vee 1$, $${\mathbb{E}}| R_{n,l}^e|_s \lesssim \sqrt{ns} \tau_{l-1}^{-\alpha} \Psi_{2, \alpha} + n^{1/q}s \tilde{\omega}_{l,q} \lesssim [\sqrt{ns}\Psi_{2, \alpha}+n^{1/q} s \| |X_{\cdot}|_\infty\|_{q, \alpha}]m^{-\alpha} \varpi_l^{-\alpha}.$$ Notice that $\min_{l \geq 1} \lambda_l \varpi_l^{\alpha} >0$. Hence, ${\mathbb{E}}| R_{n,l}^e|_s \lesssim \lambda_l x$ and (\[R\]) implies $${\mathbb{P}}( | R_{n,l}^e|_s \geq \lambda_l x) \leq \frac{C_1 n s^{q/2}}{x^{q}}\cdot \frac{\tau_{l}^{q/2-1}\tilde{\omega}_{l, q}^{q}}{\lambda_l^{q}}+ \exp{\left(}-\frac{C_2{\left(}\lambda_l x {\right)}^2\tau_{l-1}^{2\alpha}}{n \Psi_{2, \alpha}^2} {\right)}.$$ A similar inequality holds for $R_{n, l}^o$. 
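The weight sequence $\lambda_l$ and the dyadic scales $\varpi_l$ introduced above enter the argument only through two elementary facts: $\sum_{l=1}^L\lambda_l\leq 1$, and $\min_l \lambda_l^2\varpi_l^{2\alpha}$ is bounded away from zero uniformly in $L$. Both are easy to confirm numerically (a standalone check; the value $\alpha=0.6$ is an arbitrary choice of ours):

```python
import math

def lam(L):
    # lambda_l = l^{-2}/(pi^2/3) for l <= L/2, and (L+1-l)^{-2}/(pi^2/3) otherwise
    return [(l if l <= L / 2 else L + 1 - l) ** (-2) / (math.pi ** 2 / 3)
            for l in range(1, L + 1)]

alpha = 0.6
sums, mus = [], []
for L in (5, 10, 40):
    w = lam(L)
    varpi = [2 ** l for l in range(1, L + 1)]   # ignoring the floor(n/m) tweak at l = L
    sums.append(sum(w))
    mus.append(min(x ** 2 * v ** (2 * alpha) for x, v in zip(w, varpi)))
print(sums)   # each sum is at most 1
print(mus)    # bounded away from 0, uniformly in L
```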
Therefore, $$\begin{aligned} {\mathbb{P}}(|\sum\limits_{l=1}^L M_{n, l}|_s \geq x ) \nonumber &\leq& \sum_{l=1}^L {\mathbb{P}}{\left(}| M_{n,l}|_s \geq \lambda_l x {\right)}\\ \nonumber &\leq& \sum_{l=1}^L {\mathbb{P}}{\left(}{\left|}R_{n,l}^e {\right|}_s \geq \lambda_l x/2 {\right)}+\sum_{l=1}^L {\mathbb{P}}{\left(}{\left|}R_{n,l}^o {\right|}_s \geq \lambda_l x/2 {\right)}\\ \nonumber &\leq& \sum_{l=1}^L \frac{C_1 n s^{q/2}}{x^{q}}\cdot \frac{\tau_{l}^{q/2-1}\tilde{\omega}_{l, q}^{q}}{\lambda_l^{q}}+ 2\sum_{l=1}^L\exp{\left(}-\frac{C_2{\left(}\lambda_l x {\right)}^2\tau_{l-1}^{2\alpha}}{n \Psi_{2, \alpha}^2} {\right)}\\ \label{thirdterm} &\leq & \frac{C_3 n m^{c}s^{q/2}\| |X_\cdot|_\infty \|_{q, \alpha}^q}{x^q} \sum_{l=1}^L \frac{\varpi_l^c}{\lambda_l^q}+C_4 \sum_{l=1}^L \exp\left(-\frac{C_5 x^2 m^{2 \alpha} \lambda_l^2 \varpi_l^{2\alpha}}{n \Psi_{2, \alpha}^2}\right).\end{aligned}$$ By the definition of $\varpi_l$ and $\lambda_l$ and by some elementary calculation, there exists some constant $C_6>1$ such that for all $t \geq 1$, $$\label{exponentialterm} \sum_{l=1}^L \exp(-C_5 t \lambda_l^2 \varpi_l^{2\alpha}) \leq C_6 \exp(-C_5 t \mu),$$ where $\mu=\min_{l \geq 1} \lambda_l^2 \varpi_l^{2 \alpha} >0$. If $c>0$, it can be obtained that $\sum_{l=1}^L \varpi_l^c/ \lambda_l^q \leq C_7 \varpi_L^c \leq C_7 n^c/m^c$. If $c <0$, then $\sum_{l=1}^L \varpi_l^c/ \lambda_l^q \leq C_8$. Hence, combining (\[sum\]), (\[secondterm\]), (\[thirdterm\]), (\[exponentialterm\]), Lemma \[lemma for m-approximation, polynomial tail\] follows. \[lemma for the sum, polynomial tail\] Assume $\| |X_\cdot|_\infty \|_{q, \alpha} < \infty$, where $q>2$ and $\alpha \geq 0$. Also assume $\Psi_{2, \alpha} < \infty$. 
(i) If $\alpha>1/2-1/q$, then for $x \gtrsim \sqrt{n \ell}\Psi_{2, \alpha}+n^{1/q} \ell \| |X_{\cdot}|_\infty \|_{q, \alpha}$, $$\label{alpha> for the sum} {\mathbb{P}}(|T_n|_\infty \geq x) \leq \frac{C_{q, \alpha} n \ell^{q/2} \| |X_\cdot|_\infty \|_{q, \alpha}^q}{x^q}+ C_{q,\alpha} \exp{\left(}- \frac{C_{q, \alpha} x^2}{n \Psi_{2, \alpha}^2}{\right)}.$$ (ii) If $0 < \alpha < 1/2-1/q$, we have the following inequality, $$\label{alpha< for the sum} {\mathbb{P}}(|T_n|_\infty \geq x) \leq \frac{C_{q,\alpha} n^{q/2-\alpha q} \ell^{q/2} \| |X_\cdot|_\infty \|_{q, \alpha}^q}{x^q}+ C_{q, \alpha} \exp{\left(}- \frac{C_{q, \alpha} x^2}{n \Psi_{2, \alpha}^2}{\right)}.$$ The proof is similar to that of Lemma \[lemma for m-approximation, polynomial tail\], and thus is omitted. Proofs {#sec:proof} ====== Proof of Theorems \[th:1507140259\] and \[th:1507140325\] {#sec:1507151006} --------------------------------------------------------- We shall apply the $m$-dependence approximation approach. For $m \ge 0$, define $$X_{i, m}=(X_{i1,m}, \ldots, X_{ip, m})^\top={\mathbb{E}}(X_i|{\varepsilon}_{i-m},{\varepsilon}_{i-m+1}, \ldots, {\varepsilon}_i).$$ Write $T_{X}=\sum_{i=1}^n X_i$ and $T_{X, m}=\sum_{i=1}^n X_{i, m}$. For simplicity, suppose $n=(M+m)w$, where $M \gg m$ and $M, m, w \rightarrow \infty$ (to be determined) as $n \to \infty$. We apply the block technique and split the interval $[1, n]$ into alternating large blocks $L_b= [ (b-1)(M+m)+1, bM+(b-1)m ]$ and small blocks $S_b =[ bM+(b-1)m +1, b(M+m) ]$, $1 \leq b \leq w$. Let $$\begin{aligned} Y_b = \sum_{i \in L_b}X_i, \,\, Y_{b, m}=\sum_{i \in L_b} X_{i, m}, \,\, T_{Y}=\sum_{b=1}^w Y_{b}, \,\, T_{Y, m}=\sum_{b=1}^w Y_{b, m}.\end{aligned}$$ Let $Z_{b}$, $1 \leq b \leq w$, be i.i.d. $N(0, M B)$ and $Z_{b, m}$ be i.i.d. 
$N(0, M \tilde B)$, where the covariance matrices $B$ and $\tilde B$ are respectively given by $$\begin{aligned} \label{eq:1507172300} B = (b_{ij})_{i,j=1}^p = \mbox{Cov}(Y_b/\sqrt{M}) \mbox{ and } \tilde{B}=(\tilde{b}_{ij})_{i,j=1}^p = \mbox{Cov}(Y_{b, m}/\sqrt{M}).\end{aligned}$$ Write $T_{Z, m}=\sum_{b=1}^w Z_{b, m}$ and let $Z \sim N(0,\Sigma)$. \[lemmafordifference\] (i) Assume $\Theta_{q, \alpha}< \infty$ for some $q>2$ and $\alpha>0$. Then there exists some constant $C_{q, \alpha}$ such that for $y > 0$ $$\label{eq:1508122226} {\mathbb{P}}(|T_X-T_{Y, m}|_\infty \geq y) \lesssim f^*_1(y)+f^*_2(y)=: f^*(y)$$ where the constant in $ \lesssim$ only depends on $q$ and $\alpha$, $$f^*_1(y)=\left\{ \begin{array}{ll} y^{-q}n m^{q/2-1-\alpha q}\Theta_{q, \alpha}^q+ p\exp{\left(}- \frac{C_{q, \alpha}y^2 m^{2 \alpha}}{n\Psi_{2, \alpha}^2}{\right)}, & \alpha>1/2-1/q \\ y^{-q}n^{q/2-\alpha q} \Theta_{q, \alpha}^q+ p \exp{\left(}- \frac{C_{q, \alpha}y^2 m^{2 \alpha}}{n\Psi_{2, \alpha}^2}{\right)}, & \alpha<1/2-1/q \end{array} \right.$$ and $$f^*_2(y)=\left\{ \begin{array}{ll} y^{-q}wm \Theta_{q, \alpha}^q+ p\exp{\left(}- \frac{C_{q, \alpha}y^2 }{mw\Psi_{2, \alpha}^2}{\right)}, & \alpha>1/2-1/q \\ y^{-q}(wm)^{q/2-\alpha q} \Theta_{q, \alpha}^q+ p\exp{\left(}- \frac{C_{q, \alpha} y^2 }{wm\Psi_{2, \alpha}^2}{\right)}, & \alpha<1/2-1/q \end{array} \right. .$$ (ii) Assume $\Phi_{\psi_\nu, \alpha}<\infty$ for some $\nu\geq 0$ and $\alpha>0$. Let $\beta=2/(1+2\nu)$. 
Then there exists a constant $C_\beta >0$ such that for $y>0$, $$\label{eq:1508122228} {\mathbb{P}}(|T_X-T_{Y, m}|_\infty \geq y) \lesssim f^\diamond_1(y)+ f^\diamond_2(y)=: f^\diamond(y),$$ where the constant in $ \lesssim$ only depends on $\beta$ and $\alpha$, $$f^\diamond_1(y)= p\exp\left\{-C_\beta\left(\frac{ y m^{\alpha}}{\sqrt{n}\Phi_{\psi_\nu, \alpha}}\right)^\beta\right\} \,\, \text{and} \,\, f^\diamond_2(y)= p\exp\left\{-C_\beta\left(\frac{ y }{\sqrt{mw} \Phi_{\psi_\nu, 0}}\right)^\beta\right\}.$$ Let $P_1 = {\mathbb{P}}(|T_X-T_{X, m}|_\infty \geq y/2)$ and $P_2 = {\mathbb{P}}(|T_{X, m}-T_{Y, m}|_\infty \geq y/2)$. Lemmas \[m-approximationtheorem1\] and \[lemma for m-approximation, polynomial tail\] imply that $P_1 \leq f^*_1(y)$. Write $T_{X, m}-T_{Y, m} = \sum_{b=1}^w \sum_{i \in S_b} X_{i, m}$. By Lemmas \[theorem for the sum polynomial\] and \[lemma for the sum, polynomial tail\], we also have $P_2 \leq f^*_2(y)$. Hence both cases with $\alpha>1/2-1/q$ and $\alpha<1/2-1/q$ of Lemma \[lemmafordifference\](i) follow in view of ${\mathbb{P}}(|T_X-T_{Y, m}|_\infty \geq y) \leq P_1+P_2$. The exponential moment case (ii) similarly follows from $P_1 \leq f^\diamond_1(y)$ and $P_2 \leq f^\diamond_2(y)$. \[mainlemma\] Let $D=(d_{i j})_{i, j=1}^p$ be a diagonal matrix. Assume that there exist constants $c>0, c_2>c_1>0$ such that $c<\min_{1 \leq j \leq p} d_{jj}$ and $c_1 \leq \tilde{b}_{jj}/d_{jj} \leq c_2$ for all $1 \leq j \leq p$. Assume $\Psi_{q, 0}< \infty$ for some $q \geq 4$. 
Then for all $\lambda \in (0,1)$, $$\begin{aligned} &&\sup\limits_{t \in \mathbb{R}} \left| {\mathbb{P}}(|D^{-1/2}T_{Y, m}/\sqrt{n}|_\infty \leq t)-{\mathbb{P}}(|D^{-1/2}T_{Z, m}/\sqrt{n}|_\infty \leq t) \right| \\ &\lesssim & w^{-1/8}(\Psi_{3, 0}^{3/4}\vee\Psi_{4, 0}^{1/2})(\log(pw/\lambda))^{7/8} +w^{-1/2}(\log(pw/\lambda))^{3/2} u_m(\lambda)+\lambda\\ &=:& h(\lambda, u_m(\lambda)),\end{aligned}$$ where the constant in $\lesssim$ depends on $c$, $c_1$ and $c_2$, on $q$ and $\alpha$ in case (i), and on $\beta$ in case (ii) below; moreover, $u_m(\lambda) \le u^*_m(\lambda)$ in (i) and $u_m(\lambda) \le u^\diamond_m(\lambda)$ in (ii). \(i) Assume $\Theta_{q, \alpha}< \infty$ for some $q \geq 4$ and $\alpha>0$. Then $$u^*_m(\lambda) = \left\{ \begin{array}{ll} \max\{\Theta_{q, \alpha}(\lambda^{-1}w)^{1/q}M^{1/q-1/2}, \Psi_{2, \alpha}\sqrt{\log(pw/\lambda)}\}, & \alpha>1/2-1/q \\ \max\{\Theta_{q, \alpha}(\lambda^{-1}w)^{1/q}M^{-\alpha}, \Psi_{2, \alpha}\sqrt{\log(pw/\lambda)}\}, & \alpha<1/2-1/q. \end{array} \right.$$ \(ii) Assume $\Phi_{\psi_\nu, 0}<\infty$ for some $\nu\geq 0$. Then $$u^\diamond_m(\lambda) = \max\{\Phi_{\psi_\nu, 0}(\log (pw/\lambda))^{1/\beta}, \sqrt{\log (pw/\lambda)}\}.$$ For $1<l\leq q$, define $R_l=\max_{1 \leq j \leq p} \| M^{-1/2} Y_{bj, m}\|_l$. Since $X_{ij, m}=\sum_{k=0}^m \mathcal{P}_{i-k}X_{ij}$, by Burkholder’s inequality (@burkholder1973), $$\|\sum_{i=1}^M \mathcal{P}_{i-k}X_{ij}\|_l^2 \leq C_l \sum_{i=1}^M \|\mathcal{P}_{i-k}X_{ij}\|_l^2 \leq C_l M (\theta'_{k, l, j})^2,$$ and hence $$\label{moment for the sum} \|\sum_{i=1}^M X_{ij, m}\|_l \leq C_l \sum_{k=0}^m \|\sum_{i=1}^M \mathcal{P}_{i-k}X_{ij}\|_l \leq C_l M^{1/2} \Delta_{0, l, j},$$ which implies $R_l \leq C_l \Psi_{l, 0}$. 
For $0<\lambda<1$ and the diagonal matrix $D=(d_{ij})_{i,j=1}^p$, define $u_{Y, m}(\lambda)$ as the infimum over all numbers $u>0$ such that $${\mathbb{P}}(|M^{-1/2} d_{jj}^{-1/2}Y_{bj, m}| \leq u, 1 \leq b \leq w, 1\leq j \leq p) \geq 1-\lambda.$$ Also define $u_{Z, m}(\lambda)$ as the corresponding quantity for the analogous Gaussian case, namely with $Y_{b, m}$ replaced by $Z_{b, m}$ in the above definition. Let $u_{m}(\lambda):= u_{Y, m}(\lambda) \vee u_{Z, m}(\lambda)$. By Theorem 2.2 of @chernozhukov2013, for all $\lambda \in (0,1)$, $$\begin{aligned} &&\sup\limits_{t \in \mathbb{R}} \left| {\mathbb{P}}(|D^{-1/2}T_{Y, m}/\sqrt{n}|_\infty \leq t)-{\mathbb{P}}(|D^{-1/2}T_{Z, m}/\sqrt{n}|_\infty \leq t) \right| \\ &\lesssim & w^{-1/8}(R_3^{3/4}\vee R_4^{1/2})(\log(pw/\lambda))^{7/8}+w^{-1/2}(\log(pw/\lambda))^{3/2} u_m(\lambda)+\lambda.\end{aligned}$$ Now we shall find a bound on the function $u_{m}(\lambda)$. (i) By Lemmas \[theorem for the sum polynomial\] and \[lemma for the sum, polynomial tail\], we have $$\begin{aligned} &&{\mathbb{P}}(|M^{-1/2} d_{jj}^{-1/2} Y_{bj, m}| > u \text{ for some } b, j) \leq {\mathbb{P}}(|M^{-1/2}Y_{b, m}|_\infty > c^{1/2}u) \\ &&\leq \left\{ \begin{array}{ll} C_{q, \alpha} u^{-q}wM^{1-q/2} \Theta_{q, \alpha}^q+ C_{q,\alpha} pw \exp{\left(}-\frac{C_{q, \alpha}u^2}{\Psi_{2,\alpha}^2}{\right)}, & \alpha>1/2-1/q \\ C_{q,\alpha} u^{-q}w M^{-\alpha q}\Theta_{q, \alpha}^q + C_{q, \alpha} pw \exp{\left(}- \frac{C_{q, \alpha} u^2 }{\Psi_{2,\alpha}^2}{\right)}, & \alpha<1/2-1/q \end{array} \right. .\end{aligned}$$ This implies $u_{Y, m}(\lambda) \leq C_{q,\alpha} \max\{\Theta_{q, \alpha}(\lambda^{-1}w)^{1/q}M^{1/q-1/2}, \Psi_{2, \alpha}\sqrt{\log(pw/\lambda)}\}$ if $\alpha>1/2-1/q$ and $u_{Y, m}(\lambda) \leq C_{q, \alpha} \max\{\Theta_{q, \alpha}(\lambda^{-1}w)^{1/q}M^{-\alpha}, \Psi_{2, \alpha}\sqrt{\log(pw/\lambda)}\}$ if $\alpha<1/2-1/q$. 
For $u_{Z, m}(\lambda)$, since $M^{-1/2} Z_{bj, m} \sim N(0, \tilde{b}_{jj})$, we have ${\mathbb{E}}(\exp\{M^{-1}Z^2_{bj, m}/(4\tilde{b}_{jj})\})\leq C$. Hence $$\begin{aligned} {\mathbb{P}}(|M^{-1/2}d_{jj}^{-1/2} Z_{bj, m}| > u \text{ for some } b, j) &\leq& \sum_{b=1}^w \sum_{j=1}^p{\mathbb{P}}(|M^{-1/2}Z_{bj, m}| > d_{jj}^{1/2}u) \nonumber \\ \label{gaussianboundforu} &\leq & Cpw \exp(-d_{jj}u^2/(4\tilde{b}_{jj})).\end{aligned}$$ With the assumption $c_1 \leq \tilde{b}_{jj}/d_{jj} \leq c_2$, $u_{Z, m}(\lambda) \leq C \sqrt{\log(pw/\lambda)}$.\ (ii) By the Bonferroni inequality and Lemma \[theorem for the sum exponential\], $$\label{exponentialboundforu} {\mathbb{P}}(|M^{-1/2}d_{jj}^{-1/2} Y_{bj, m}| > u \text{ for some } b, j) \leq C_\beta pw \exp\left\{-C_\beta\frac{u^\beta}{\Phi^\beta_{\psi_\nu, 0}}\right\},$$ where $\beta=2/(1+2\nu)$ and $C_\beta$ is a constant that depends on $\beta$ only. Combining (\[gaussianboundforu\]) and (\[exponentialboundforu\]), it follows that $u_{m}(\lambda)\leq C_{\beta}\max\{\Phi_{\psi_\nu, 0}(\log (pw/\lambda))^{1/\beta}, \sqrt{\log (pw/\lambda)}\}$. Now we consider the comparison between $Z$ and $T_{Z, m}$. Let $\pi(x)= x^{1/3}(1 \vee \log(p/x))^{2/3}$ for $x>0$. \[lemmafortwogaussian\] Assume $\Psi_{2, \alpha}<\infty$ for some $\alpha>0$. Let $D=(d_{ij})_{i, j=1}^p$ be a diagonal matrix for which there exist constants $0< C_1< C_2$ such that $C_1 \leq \sigma_{jj}/d_{jj} \leq C_2$ for all $1 \leq j \leq p$. Then we have $$\begin{aligned} \label{Tz,m-Tz} &&\sup\limits_{t \in \mathbb{R}} \left| {\mathbb{P}}(|D^{-1/2}T_{Z, m}/\sqrt{n}|_\infty \leq t)-{\mathbb{P}}(|D^{-1/2}Z|_\infty \leq t) \right| \\ &\lesssim & \pi(\max_{1\leq j \leq p}d^{-1}_{jj}\Psi_{2, \alpha} \Psi_{2, 0} (m^{-\alpha}+v(M))+wm/n),\end{aligned}$$ where $v(M)$ is the same as defined in Corollary \[corollary for Sigma\]. 
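The covariance comparison quantified by this lemma can be illustrated exactly on a toy example of our own: for a scalar AR(1) process with unit innovation variance, the block covariance $b=M^{-1}\mathrm{Var}(S_M)$ converges to the long-run variance $\sigma=\sum_l\gamma(l)$ at rate $1/M$:

```python
# Scalar AR(1): X_i = phi * X_{i-1} + eps_i with Var(eps_i) = 1, so that
# gamma(l) = phi^{|l|} / (1 - phi^2) and sigma = sum_l gamma(l) = 1 / (1 - phi)^2.
phi = 0.5
gamma = lambda l: phi ** abs(l) / (1 - phi ** 2)
sigma = 1.0 / (1 - phi) ** 2

gaps = []
for M in (10, 100, 1000):
    b = sum((M - abs(l)) * gamma(l) for l in range(-M, M + 1)) / M
    gaps.append(sigma - b)
print(gaps)   # positive and decaying roughly like 1/M
```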
By the definition of $T_{Z, m}$ and $Z$ and (\[eq:1507172300\]), $$\begin{aligned} &&\Sigma^{Z, m}:={\text{Cov}}(D^{-1/2}T_{Z, m}/\sqrt{n})= \frac{Mw}{n}D^{-1/2}\tilde{B}D^{-1/2}, \\ &&\Sigma^{Z}:={\text{Cov}}(D^{-1/2}Z)= D^{-1/2}\Sigma D^{-1/2}.\end{aligned}$$ Let $S_{M j} = \sum_{i=1}^M X_{ij}$ and $S_{M j, m} = \sum_{i=1}^M X_{ij, m}$. By the moment inequality in @Wu2005, $ \| S_{M j} \|_2 \leq M^{1/2} \Delta_{0, 2, j}$, $\| S_{M j, m}\|_2 \leq M^{1/2} \Delta_{0, 2, j}$ and $\|S_{M j} - S_{M j, m} \|_2 \leq M^{1/2} \Delta_{m+1, 2, j}$. Note that $b_{jk}=M^{-1}{\mathbb{E}}(S_{M j} S_{M k})$ and $\tilde{b}_{jk}=M^{-1}{\mathbb{E}}(S_{M j, m} S_{M k, m})$. Then $$\begin{aligned} |b_{jk}-\tilde{b}_{jk}|&=& \frac{1}{M} |{\mathbb{E}}(S_{M j} S_{M k} - S_{M j, m} S_{M k, m})| \\ &\leq & \frac{1}{M}\left(\|S_{M j} \|_2\cdot\|S_{M k} - S_{M k, m}\|_2+\|S_{M k, m} \|_2\cdot\| S_{M j} - S_{M j, m} \|_2\right) \cr &\leq & 2\Psi_{2, \alpha}\Psi_{2, 0} m^{-\alpha}.\end{aligned}$$ Recall that $\sigma_{jk}=\sum_{l=-\infty}^\infty \gamma_{jk}(l)$ and $$b_{jk}=M^{-1}{\mathbb{E}}(S_{Mj} S_{Mk})=M^{-1}\sum_{l=-M}^M (M-|l|)\gamma_{jk}(l).$$ It follows that $$\sigma_{jk}-b_{jk}=\sum_{|l|> M} \gamma_{jk}(l) +M^{-1}\sum_{l=-M}^M |l| \gamma_{jk}(l).$$ By $X_{ij}=\sum_{h=0}^\infty \mathcal{P}_{i-h}X_{ij}$, we have $$|\gamma_{jk}(l)|=|\sum_{h=0}^\infty {\mathbb{E}}[(\mathcal{P}_{-h}X_{0j})(\mathcal{P}_{-h}X_{lk})]| \leq \sum_{h=0}^\infty |{\mathbb{E}}[(\mathcal{P}_{-h}X_{0j})(\mathcal{P}_{-h}X_{lk})]| \leq \sum_{h=0}^\infty \delta_{h, 2, j}\delta_{h+l, 2, k}.$$ Hence, $$\left|\sum_{|l|>M} \gamma_{jk}(l)\right| \leq 2 \sum_{l=M+1}^\infty |\gamma_{jk}(l)| \leq 2\sum_{l=M+1}^\infty \sum_{h=0}^\infty \delta_{h, 2, j}\delta_{h+l, 2, k} \leq 2 \Delta_{0, 2, j}\Delta_{M+1, 2, k},$$ and $$\left|\frac{1}{M}\sum_{l=-M}^M |l| \gamma_{jk}(l)\right| \leq \frac{2}{M} \sum_{l=1}^M \sum_{\iota=l}^M \sum_{h=0}^\infty \delta_{h, 2, j}\delta_{h+\iota, 2, k} \leq \frac{2}{M} \Delta_{0, 
2, j} \sum_{l=1}^M \Delta_{l, 2, k}.$$ Since $\Delta_{0, 2, j} \leq \Psi_{2,0}$ and $\Delta_{m, 2, j} \leq \Psi_{2, \alpha}m^{-\alpha}$, $\max_{1 \leq j,k \leq p}|b_{jk}-\sigma_{jk}|\leq \Psi_{2,\alpha}\Psi_{2,0}v(M)$. Hence, $$\begin{aligned} |\Sigma^{Z, m}-\Sigma^{Z}|_\infty &\leq & \max_{1\leq j \leq p}d^{-1}_{jj}(|\tilde{B}-B|_\infty+ |B-\Sigma|_{\infty})+(1-Mw/n)|D^{-1/2}\Sigma D^{-1/2}|_\infty\\ &\leq &\max_{1\leq j \leq p}d^{-1}_{jj}\Psi_{2, \alpha} \Psi_{2, 0} (m^{-\alpha}+v(M))+ C_2 wm/n.\end{aligned}$$ By Theorem 2 of @chernozhukov2014comparison, the result follows. \[maintheorem\] Let $\Sigma_0$ be the diagonal matrix of the long run covariance matrix $\Sigma$ and $D_0=\Sigma_0^{1/2}$. Let Assumption \[assumption:1507172202\] be satisfied. (i) Assume that $\Theta_{q, \alpha} < \infty$ holds with some $q \geq 4$ and $\alpha>0$. Then for every $\lambda \in (0, 1)$ and $\eta>0$, $$\begin{aligned} \rho_n&:=&\sup\limits_{t \in \mathbb{R}} \left| {\mathbb{P}}(|D_0^{-1}T_{X}/\sqrt{n}|_\infty \leq t)-{\mathbb{P}}(|D_0^{-1}Z|_\infty \leq t) \right| \nonumber \\ \label{maintheorem1} &\lesssim & f^*(\sqrt{n}\eta)+ \eta \sqrt{\log p}+ h(\lambda, u^*_m(\lambda))+\pi(\Psi_{2, \alpha}\Psi_{2, 0} (m^{-\alpha}+v(M))+wm/n).\end{aligned}$$ (ii) Assume $\Phi_{\psi_\nu, \alpha} < \infty$ for some $\nu \geq 0$ and $\alpha>0$. 
Then for every $\lambda \in (0, 1)$ and $\eta>0$, $$\begin{aligned} \rho_n&:=&\sup\limits_{t \in \mathbb{R}} \left| {\mathbb{P}}(|D_0^{-1}T_{X}/\sqrt{n}|_\infty \leq t)-{\mathbb{P}}(|D_0^{-1}Z|_\infty \leq t) \right| \nonumber \\ \label{maintheorem2} &\lesssim & f^\diamond(\sqrt{n}\eta)+ \eta \sqrt{\log p}+ h(\lambda, u^\diamond_m(\lambda))+\pi(\Psi_{2, \alpha}\Psi_{2, 0} (m^{-\alpha}+v(M))+wm/n).\end{aligned}$$ \(i) By Lemma \[mainlemma\] (i) and Lemma \[lemmafortwogaussian\], we have for every $\lambda \in (0, 1)$, $$\begin{aligned} &&\sup\limits_{t \in \mathbb{R}} \left| {\mathbb{P}}(|D_0^{-1}T_{Y, m}/\sqrt{n}|_\infty \leq t)-{\mathbb{P}}(|D_0^{-1}Z|_\infty \leq t) \right| \nonumber \\ &\lesssim & h(\lambda, u^*_m(\lambda))+\pi(\Psi_{2, \alpha}\Psi_{2, 0} (m^{-\alpha}+v(M))+wm/n). \label{sup_part1}\end{aligned}$$ Observe that each component of the Gaussian vector $D_0^{-1}Z$ has variance 1. By Theorem 3 of @chernozhukov2014comparison, for every $\eta>0$, $$\label{anticoncentrationforSZ} \sup\limits_{t \in \mathbb{R}} {\mathbb{P}}(\left||D_0^{-1}Z|_\infty-t \right|\leq \eta) \lesssim \eta \sqrt{\log p}.$$ By the triangle inequality, for every $\eta>0$, we have $$\begin{aligned} &&\sup\limits_{t \in \mathbb{R}} \left| {\mathbb{P}}(|D_0^{-1}T_{X}/\sqrt{n}|_\infty > t)-{\mathbb{P}}(|D_0^{-1}T_{Y, m}/\sqrt{n}|_\infty > t) \right|\\ &\leq& {\mathbb{P}}(|D_0^{-1}(T_{X}-T_{Y, m})/\sqrt{n}|_{\infty}>\eta)+\sup\limits_{t \in \mathbb{R}} {\mathbb{P}}(\left||D_0^{-1}T_{Y, m}/\sqrt{n}|_\infty-t \right|\leq \eta),\end{aligned}$$ which implies Theorem \[maintheorem\] (i) in view of Lemma \[lemmafordifference\] (i), (\[sup\_part1\]) and (\[anticoncentrationforSZ\]). \(ii) Inequality (\[maintheorem2\]) can be obtained by replacing $f^*$ and $u^*_m$ with $f^\diamond$ and $u^\diamond_m$ in the above proof. Proof of Theorem \[th:1507140259\] ---------------------------------- Recall (\[eq:1508122226\]) for $f^*(\cdot)$. 
By Theorem \[maintheorem\], for $\alpha > 1/2-1/q$, to have (\[eq:J021209\]), we need $$\label{b} \pi(\Psi_{2, \alpha}\Psi_{2, 0} (m^{-\alpha}+v(M))+wm/n) \rightarrow 0$$ and for some $\eta>0$ and $\lambda \in (0,1)$, $$\begin{aligned} \label{c} && f^*(\sqrt{n}\eta)+\eta \sqrt{\log p}\rightarrow 0,\\ \label{d} && h(\lambda, u^*_m(\lambda))\rightarrow 0.\end{aligned}$$ Firstly, (\[b\]) requires $m \gg L_2$, $wm \ll n(\log p)^{-2}$, $w \ll n(\log p)^{-2} (\Psi_{2, \alpha}\Psi_{2,0})^{-1}$ if $\alpha>1$ and $w \ll n/L_2$ if $0<\alpha <1$. Moreover, (\[c\]) requires $m \gg \max(L_1, (\Psi_{2, \alpha} \log p)^{1/\alpha})$ and $w m \ll \min (N_1, N_2)$, and (\[d\]) needs (\[eq:1507142040\]) and $w \gg \max(W_1, W_2)$. We also need $M\asymp n/w \gg m$. Notice that $(\Psi_{2, \alpha} \log p)^{1/\alpha} \lesssim L_2$, $N_2 \lesssim n(\log p)^{-2}$ and $N_2 \leq n(\log p)^{-2}(\Psi_{2, \alpha}\Psi_{2,0})^{-1}$. If $$\label{e} \max(L_1, L_2) \max(W_1, W_2) = o(1) \min(n, N_1, N_2),$$ then we can always choose $m$ and $w$ such that (\[eq:J021209\]) holds. Since $N_2 \lesssim n$, (\[e\]) reduces to (\[eq:1507142041\]). For $0< \alpha < 1/2-1/q$, the function $f^*$ in (\[c\]) is replaced by $f^\diamond$ (cf. (\[eq:1508122228\])), which implies $\Theta_{q, \alpha} (\log p)^{1/2} = o(n^{\alpha})$, $m \gg (\Psi_{2, \alpha} \log p)^{1/\alpha} $ and $wm \ll \min(N_2, N_3)$. Also, $u^*_m$ in (\[d\]) is replaced by $u_m^\diamond$, implying $w \gg \max(W_1, W_2, W_3)$. By a similar argument, if (\[eq:1508010523\]) is further assumed, then (\[eq:J021209\]) also holds for the case $0<\alpha < 1/2-1/q$. In the proof of Theorem \[th:1507140259\], we exclude the case $\alpha=1$ when $\alpha>1/2-1/q$. If $\alpha=1$, we need to impose the additional assumption $$\label{add} \max(W_1, W_2)=o(n/(L_2\log n))$$ to ensure (\[b\]). The above condition is very mild since (\[eq:1507142041\]) implies $\max(W_1, W_2) = o(n/L_2)$. 
If $\log n \lesssim (\log p)^2 \Psi_{2, \alpha}^2$, which trivially holds in the high-dimensional case $p \asymp n^\kappa$ with some $\kappa > 0$, we have $N_2= O(n/\log n)$ and hence (\[eq:1507142041\]) implies (\[add\]). Similarly, if $\alpha=1$, the additional condition $\max(W_1, W_4)=o(n/(L_2\log n))$ is imposed in Theorem \[th:1507140325\].
--- abstract: 'For any operator $T$ whose bilinear form can be dominated by a sparse bilinear form, we prove that $T$ is bounded as a map from $L^1(\widetilde{M}w)$ into weak–$L^1(w)$. Our main innovation is that $\widetilde{M}$ is a maximal function defined by directly using the local $A_\infty$ characteristic of the weight (rather than Orlicz norms). Prior results are due to Coifman&Fefferman, Pérez, Hytönen&Pérez, and Domingo-Salazar&Lacey&Rey. As we discuss, but do not prove, the maximal functions we use seem to be on the order of $M_{L\log\log L \log\log\log L (\log\log\log\log L)^{1+{\varepsilon}}}$.' address: 'Texas A&M Mathematics' author: - Rob Rahm title: 'Borderline Weak–Type Estimates for Sparse Bilinear Forms Involving $A_\infty$ Maximal Functions' --- Introduction ============ We study weighted endpoint estimates for those operators whose bilinear form has a sparse domination. Our estimates are in the spirit of Fefferman–Stein [@FefSte1971]; in particular, for an operator $T$, a function $f$ and a non–negative weight $w$ we will prove: $$\begin{aligned} \label{E:intMain} \lambda w(\{Tf>\lambda\}) \lesssim \int_{{\mathbb{R}}^d}{\ensuremath{\left\vertf(y)\right\vert}}\widetilde{M} w(y)dy, \end{aligned}$$ where $\widetilde{M}$ is a certain maximal function that is pointwise larger than the Hardy–Littlewood maximal function. We take an “entropy bump” point of view – which is our main innovation – and define $\widetilde{M}$ in terms of these entropy bumps. We now prepare to state our main results. Recall that if $\mathcal{D}$ is a dyadic lattice, a sparse subset $\mathcal{S}$ of $\mathcal{D}$ is defined by the property that for every $Q\in\mathcal{S}$ there is a subset $E_Q\subset Q$ such that ${\ensuremath{\left\vertE_Q\right\vert}}>\frac{1}{2}{\ensuremath{\left\vertQ\right\vert}}$ and the sets $\{E_Q:Q\in\mathcal{S}\}$ are pairwise disjoint. 
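As a concrete illustration of the sparseness definition (a toy example of our own): the nested dyadic intervals $Q_k=[0,2^{-k})$ form a sparse family in $[0,1)$, taking $E_{Q_k}$ to be the right half of $Q_k$. Here $|E_{Q_k}|=\frac12|Q_k|$, the borderline constant; shrinking each $Q_k$ slightly, or relaxing the constant, gives the strict inequality required above.

```python
from fractions import Fraction

K = 10
Q = [(Fraction(0), Fraction(1, 2 ** k)) for k in range(K)]                 # Q_k = [0, 2^{-k})
E = [(Fraction(1, 2 ** (k + 1)), Fraction(1, 2 ** k)) for k in range(K)]   # right halves

length = lambda iv: iv[1] - iv[0]
disjoint = lambda a, b: a[1] <= b[0] or b[1] <= a[0]

half_ok = all(2 * length(E[k]) == length(Q[k]) for k in range(K))
disj_ok = all(disjoint(E[j], E[k]) for j in range(K) for k in range(j + 1, K))
# A packing property this family also satisfies: the intervals strictly inside
# Q_0 have total measure at most 2|Q_0|.
packing = sum(length(q) for q in Q[1:]) <= 2 * length(Q[0])
print(half_ok, disj_ok, packing)
```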
When we say that $T$ has a bilinear sparse domination we mean that for all bounded and compactly supported functions $f_1,f_2$ there are $3^d$ sparse sets such that there holds: $$\begin{aligned} \label{D:sparseDom} {\ensuremath{\left\vert{\ensuremath{\left\langleTf_1,f_2\right\rangle}}\right\vert}} \lesssim \sum_{k=1}^{3^d}\sum_{Q\in\mathcal{S}_k} {\ensuremath{\left\vertQ\right\vert}}{\langle f_1 \rangle}_{Q}{\langle f_2 \rangle}_{Q},\end{aligned}$$ where ${\langle f \rangle}_{Q}:=\frac{1}{{\ensuremath{\left\vertQ\right\vert}}}\int_{Q}{\ensuremath{\left\vertf\right\vert}}$ (note the presence of the absolute value inside the integral). Let ${\varepsilon}:[1,\infty]\to[1,\infty]$ be an increasing function with ${K_{{\varepsilon}}}:=\sum_{k=0}^{\infty}{\varepsilon}(2^{2^k})^{-1}<\infty$. (The example you should keep in mind is essentially ${\varepsilon}(t) = (\log\log t)(\log \log \log t)^{1+{\varepsilon}}$.) For a cube $Q$ and a weight $w(x)\geq 0$ define: $$\begin{aligned} \rho_w(Q):=\frac{1}{w(Q)}\int_{Q}M({1\!\!1}_{Q}w)(x)dx,\end{aligned}$$ where $M$ is the usual Hardy–Littlewood maximal function and $w(Q):=\int_{Q}w(y)dy$. For a collection $\mathcal{S}$ of cubes (e.g. a dyadic lattice or a sparse subset of a dyadic lattice) define the following maximal function: $$\begin{aligned} M_{\varepsilon}w :=\sup_{Q\in\mathcal{S}}{1\!\!1}_{Q}{\langle w \rangle}_{Q}\log{(\rho_w(Q))} {\varepsilon}(\rho_w(Q)).\end{aligned}$$ These are our main theorems: \[T:main\] Let $T$ be an operator that has a sparse bilinear domination as in (\[D:sparseDom\]) and let ${\varepsilon}$ be a function as above. Then for any weight $w(x)\geq 0$ we have: $$\begin{aligned} {\ensuremath{\left\|T:L^1(M_{\varepsilon}w)\to L^{1,\infty}(w)\right\|}}\lesssim{K_{{\varepsilon}}}.\end{aligned}$$ As a corollary we obtain the following result of Hytönen–Pérez: \[T:cor\] Let $T$ be an operator that has a sparse bilinear domination as in (\[D:sparseDom\]) and let $w$ be an $A_1$ weight. 
Then we have the following quantitative estimate: $$\begin{aligned} {\ensuremath{\left\|T:L^1(w)\to L^{1,\infty}(w)\right\|}}\lesssim [w]_{A_1}\log{(e+[w]_{A_\infty})}.\end{aligned}$$ The paper is organized as follows. In the next section, we discuss the main result. Following that, in Section \[S:bgap\] we give some background and preliminary information, and then in Sections \[S:pomt\] and \[S:poc\] we prove Theorems \[T:main\] and \[T:cor\]. Discussion of Main Results and Previous Results {#S:domr} =============================================== For the remainder of the paper, the function $\log t$ is the function that satisfies $2^{\log t} = 2+t$. That is, the $\log$ we’re using here is really $\log t = \log_{2}(2+t)$. One would like to replace $\widetilde{M}$ in (\[E:intMain\]) with the smaller Hardy–Littlewood maximal function $M$. However, this is not possible; see for example [@Reg2011; @RegThi2012; @HoaMoen2016]. It is of interest then to determine the smallest maximal function for which (\[E:intMain\]) holds. Observe that one way to write ${\langle w \rangle}_{Q}$ is ${\ensuremath{\left\|w\right\|}}_{L^1(\frac{dx}{{\ensuremath{\left\vertQ\right\vert}}})}$. Thus to make $M$ slightly larger, we can choose a norm that is slightly larger than the normalized $L^1$ norm. A common approach has been to use Orlicz norms. That is, given a positive non–decreasing function $\Phi$ define: $$\begin{aligned} {\ensuremath{\left\|w\right\|}}_{Q,\Phi} :=\inf\{\lambda>0:\frac{1}{{\ensuremath{\left\vertQ\right\vert}}} \int_{Q}\Phi(\frac{w(x)}{\lambda})dx\leq 1\}.\end{aligned}$$ When $\Phi(t)=t^{r}$, then ${\ensuremath{\left\|w\right\|}}_{Q,\Phi}={\ensuremath{\left\|w\right\|}}_{L^r(\frac{dx}{{\ensuremath{\left\vertQ\right\vert}}})}$, which is bigger than the normalized $L^1$ norm for $r>1$. Maximal functions created from these “power bumps” were studied in [@CoiFef1974]. 
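A minimal numerical sketch of the Orlicz norm just defined (assuming a weight discretized to sample values on $Q$; the bisection routine is ours, not from the paper). For the power bump $\Phi(t)=t^r$ it recovers the normalized $L^r$ norm, as claimed:

```python
import numpy as np

def orlicz_norm(w, Phi, lo=1e-8, hi=1e8, iters=200):
    # ||w||_{Q,Phi} = inf{ lam > 0 : (1/|Q|) int_Q Phi(w/lam) dx <= 1 }, with the
    # normalized integral replaced by the mean over the sample values of w on Q.
    for _ in range(iters):
        lam = (lo * hi) ** 0.5        # bisect on a logarithmic scale
        if np.mean(Phi(w / lam)) <= 1:
            hi = lam
        else:
            lo = lam
    return hi

rng = np.random.default_rng(1)
w = rng.uniform(0.1, 5.0, size=4096)

r = 2.0
lr_norm = np.mean(w ** r) ** (1 / r)          # normalized L^r norm
phi_norm = orlicz_norm(w, lambda t: t ** r)   # Orlicz norm with Phi(t) = t^r
print(lr_norm, phi_norm)
```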
In 1994, Pérez showed that for singular integral operators (in fact for maximal truncations), such an estimate holds when $\widetilde{M}$ is the maximal function based on $\Phi(t)=t(\log{t})^{1+{\varepsilon}}$, and this result was recently quantified (in terms of ${\varepsilon}$) by Hytönen–Pérez [@Per1994; @HytPer2015]. The best known result so far is due to Domingo-Salazar, Lacey, and Rey [@DomLacRey2016] where $\Phi(t)=(\log{\log{t}})(\log{\log{\log{t}}})^{1+{\varepsilon}}$. See the papers listed in the references for more detailed information about these maximal functions and Orlicz norms. In this paper, we take a slightly different approach and use the so–called “entropy bumps” introduced by Treil–Volberg [@TreVol2016]. More precisely we consider an increasing function ${\varepsilon}:[1,\infty]\to[1,\infty]$ that is just barely summable in the sense that ${K_{{\varepsilon}}}:=\sum_{k\geq 0}{\varepsilon}(2^{2^k})^{-1} <\infty$. For a cube $Q$ we define: $$\begin{aligned} {\ensuremath{\left\|w\right\|}}_{Q,\rho{\varepsilon}(\rho)} :={\langle w \rangle}_{Q}\rho_w(Q){\varepsilon}(\rho_{w}(Q)).\end{aligned}$$ In [@TreVol2016] it is shown that if $\Phi$ is such that $t\mapsto \Phi(t)/(t\log{t})$ is increasing and $\int^{\infty}\frac{dt}{\Phi(t)}<\infty$, then there is a function ${\varepsilon}$ as above with ${\ensuremath{\left\|w\right\|}}_{Q,\rho{\varepsilon}(\rho)} \leq{\ensuremath{\left\|w\right\|}}_{Q,\Phi}$. We will use entropy norms that are smaller than the entropy norms defined above. In particular we will use the following: $$\begin{aligned} {\ensuremath{\left\|w\right\|}}_{Q,(\log{\rho}){\varepsilon}(\rho)} ={\langle w \rangle}_{Q}\log{(\rho_w(Q))}{\varepsilon}(\rho_w(Q)).\end{aligned}$$ Inspecting the norm ${\ensuremath{\left\|w\right\|}}_{Q,\rho{\varepsilon}}$, intuitively ${\langle w \rangle}_{Q}$ is the “$L$”, $\rho_w(Q)$ is the $\log{L}$ and ${\varepsilon}(\rho_w(Q))$ is the ${\varepsilon}(\log L)$.
Thus, it appears as though the norm ${\ensuremath{\left\|\cdot\right\|}}_{Q,\log{\rho}{\varepsilon}}$ should be similar to $L(\log\log{L})(\log \log\log L)(\log\log\log\log{L})^{1+{\varepsilon}}$ norms, but we haven’t been able to show this explicitly. The proof(s) in this paper are modifications of the proofs in [@DomLacRey2016; @CulDipOu2016] to the present setting. Background Information and Preliminaries {#S:bgap} ======================================== In the proof of the theorems, we will need the collections to satisfy the following stronger condition: for every $Q\in\mathcal{S}$ there holds: $$\begin{aligned} \label{E:tradSp} {\ensuremath{\left\vert\cup_{Q'\in\mathcal{S}:Q'\subsetneq Q}Q'\right\vert}} \leq \frac{1}{4}{\ensuremath{\left\vertQ\right\vert}}.\end{aligned}$$ The following lemma says that every sparse collection is a union of eight sparse collections that satisfy this stronger condition. \[L:stronger\] Let $\mathcal{S}$ be sparse. Then $\mathcal{S}=\cup_{i=1}^{8}\mathcal{S}^i$ where each $\mathcal{S}^i$ satisfies the stronger condition \[E:tradSp\]. The sparse condition implies the “Carleson” condition: for every $Q\in\mathcal{S}$ we have: $$\begin{aligned} \sum_{Q'\in\mathcal{S}:Q'\subsetneq Q}{\ensuremath{\left\vertQ'\right\vert}} \leq 2\sum_{Q'\in\mathcal{S}:Q'\subsetneq Q}{\ensuremath{\left\vertE_{Q'}\right\vert}} \leq 2{\ensuremath{\left\vertQ\right\vert}},\end{aligned}$$ using that the sets $E_{Q'}$ are pairwise disjoint subsets of $Q$. Now, fix a $Q\in\mathcal{S}$ and let $\mathcal{S}_{k}(Q)$ be the cubes that are $k$ generations down from $Q$ in $\mathcal{S}$. We claim that ${\ensuremath{\left\vert\cup_{Q'\in\mathcal{S}_{8}(Q)}Q'\right\vert}}\leq\frac{1}{4}{\ensuremath{\left\vertQ\right\vert}}$. Indeed, suppose not; since every cube in $\mathcal{S}_{8}(Q)$ is contained in a cube of $\mathcal{S}_{k}(Q)$ for each $1\leq k\leq 8$, we would then have: $$\begin{aligned} \sum_{Q'\in\mathcal{S}:Q'\subsetneq Q}{\ensuremath{\left\vertQ'\right\vert}} \geq\sum_{k=1}^{8}\sum_{Q'\in\mathcal{S}_{k}(Q)}{\ensuremath{\left\vertQ'\right\vert}} \geq\sum_{k=1}^{8}{\ensuremath{\left\vert\cup_{Q'\in\mathcal{S}_{8}(Q)}Q'\right\vert}} >\sum_{k=1}^{8}\frac{1}{4}{\ensuremath{\left\vertQ\right\vert}} = 2{\ensuremath{\left\vertQ\right\vert}},\end{aligned}$$ which violates the Carleson condition.
It is now easy to see how to separate $\mathcal{S}$ into eight sub–collections: let $Q_0$ be the top cube in $\mathcal{S}$. For $k=0,\ldots, 7$ let $$\begin{aligned} \mathcal{S}_{k}= \cup_{n\geq 0}\mathcal{S}_{8n+k}(Q_0). \end{aligned}$$ Thus, for each $Q\in\mathcal{S}_k$, the cubes one generation down in $\mathcal{S}_k$ are eight generations down in $\mathcal{S}$ and so we have the stronger sparse condition: $$\begin{aligned} {\ensuremath{\left\vert\cup_{Q'\in\mathcal{S}_k:Q'\subsetneq Q}Q'\right\vert}} \leq \frac{1}{4}{\ensuremath{\left\vertQ\right\vert}},\end{aligned}$$ as desired. We now have a variant of the classic Fefferman–Stein Inequality (see also [@CruMarPer2011]). Let $\mathcal{S}$ be a subset of some dyadic lattice $\mathcal{D}$ and let $\alpha=\{\alpha_{Q}:Q\in\mathcal{D}\}$ be a sequence of non–negative coefficients. Let $M^\mathcal{S}_{\alpha}f:=\sup_{Q\in\mathcal{S}}{1\!\!1}_{Q}\alpha_{Q}{\langle f \rangle}_{Q}$. \[L:fefst\] For every $f$ and $\lambda > 0$ we have: $$\begin{aligned} \lambda w(\{M^\mathcal{S}_{\alpha}f > \lambda\}) \leq \int_{{\mathbb{R}}^d}{\ensuremath{\left\vertf(y)\right\vert}}M^\mathcal{S}_{\alpha}w(y)dy.\end{aligned}$$ For $\lambda >0$, let $\Omega_\lambda$ be the collection of maximal cubes in $\mathcal{S}$ with $\alpha_{Q}{\langle f \rangle}_{Q}>\lambda$. Then using the fact that the cubes in $\Omega_\lambda$ are pairwise disjoint, there holds $$\begin{aligned} \lambda w(\{M^\mathcal{S}_{\alpha}f>\lambda\}) &\leq \sum_{Q\in\Omega_\lambda}\alpha_{Q}{\langle f \rangle}_{Q}w(Q) \\&= \sum_{Q\in\Omega_\lambda} \int_{Q}{\ensuremath{\left\vertf(y)\right\vert}}dy\alpha_{Q}\frac{w(Q)}{{\ensuremath{\left\vertQ\right\vert}}} \\&\leq \int_{{\mathbb{R}}^d}{\ensuremath{\left\vertf(y)\right\vert}}M^\mathcal{S}_{\alpha}w(y)dy,\end{aligned}$$ as desired. There are many ways to define the $[w]_{A_\infty}$ characteristic of a weight.
The one we use, and the one that seems most useful and popular, is the one of Wilson [@Wil1987]; see also [@HytPer2015] for more information. For a dyadic lattice $\mathcal{D}$, $Q\in\mathcal{D}$ and subset $\mathcal{S}\subset\mathcal{D}$ define: $$\begin{aligned} \rho_w(Q) :=\frac{1}{w(Q)}\int_{Q}M^\mathcal{D}(w{1\!\!1}_{Q})(x)dx.\end{aligned}$$ The following is [@HytPer2013 Lemma 6.6]: \[L:ainfest\] For a cube $Q$ and a subset $E\subset Q$ we have $$\begin{aligned} w(E) \lesssim w(Q)\frac{\rho_w(Q)}{\log{\frac{{\ensuremath{\left\vertQ\right\vert}}}{{\ensuremath{\left\vertE\right\vert}}}}}.\end{aligned}$$ Proof of Theorem \[T:main\] {#S:pomt} =========================== In this section, we prove the main theorem. Recall that ${K_{{\varepsilon}}}:=\sum_{k\geq 0}{\varepsilon}(2^{2^k})^{-1}$. It suffices to prove the following inequality: $$\begin{aligned} \sup_{{\ensuremath{\left\|f\right\|}}_{L^1(M_{\varepsilon}w)}=1} \hspace{.01in} \sup_{\substack{G\subset{\mathbb{R}}^d\\ 0<w(G)<\infty}} \hspace{.01in} \inf_{\substack{G'\subset G\\ w(G)\leq2w(G')}}{\ensuremath{\left\vert{\ensuremath{\left\langleTf,w{1\!\!1}_{G'}\right\rangle}}\right\vert}} \lesssim {K_{{\varepsilon}}},\end{aligned}$$ where the first supremum is over functions that are bounded and compactly supported. Thus, fix a set $G$ with $0<w(G)<\infty$ and a compactly supported function $f$ with ${\ensuremath{\left\|f\right\|}}_{L^1(M_{\varepsilon}w)}=1$. Since $f$ is bounded and compactly supported, we may assume that $f$ is supported on a cube $Q_0$ and, taking $Q_0$ large enough, that ${\langle {f} \rangle}_{Q_0} \leq 3^d 4w(G)^{-1}$. For each dyadic lattice $\mathcal{D}_k$, $k=1,\ldots,3^d$, let $\mathcal{H}_k$ be the maximal cubes in $\mathcal{D}_k$ contained in $Q_0$ with ${\langle {f} \rangle}_{Q} > 3^d 4w(G)^{-1}$ and set $H=\cup_{k=1}^{3^d}\cup_{Q\in\mathcal{H}_k}Q$. Using the Fefferman–Stein Inequality (Lemma \[L:fefst\]) we have $w(H)\leq \frac{1}{4}w(G)$.
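In slightly more detail (a sketch, up to harmless constants): apply Lemma \[L:fefst\] with $\alpha_Q\equiv 1$ on each lattice $\mathcal{D}_k$, together with the pointwise bound $M^{\mathcal{D}_k}w\lesssim M_{\varepsilon}w$, which holds because $\log(\rho_w(Q)){\varepsilon}(\rho_w(Q))\geq 1$ with the conventions above. Then

```latex
w(H) \le \sum_{k=1}^{3^d} w\left(\left\{M^{\mathcal{D}_k}f > \tfrac{4\cdot 3^d}{w(G)}\right\}\right)
     \le \sum_{k=1}^{3^d} \frac{w(G)}{4\cdot 3^d}\int_{\mathbb{R}^d}\left|f(y)\right| M^{\mathcal{D}_k}w(y)\,dy
     \lesssim \frac{w(G)}{4}\,\left\|f\right\|_{L^1(M_{\varepsilon}w)} = \frac{w(G)}{4}.
```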
Set $G' = G\cap H^{c}$ and note that: $$\begin{aligned} w(G) = w(G\cap H) + w(G\cap H^{c}) \leq w(H) + w(G') \leq \frac{1}{4}w(G) + w(G'),\end{aligned}$$ and so $w(G) \leq 2w(G')$. Using the bilinear domination with $f_1 = f$ and $f_2 = w{1\!\!1}_{G'}$, we have to show: $$\begin{aligned} \label{E:toShow} {\ensuremath{\left\vert{\ensuremath{\left\langleTf,w{1\!\!1}_{G'}\right\rangle}}\right\vert}} \lesssim \sum_{j=1}^{3^d} \sum_{Q\in\mathcal{S}_j}{\ensuremath{\left\vertQ\right\vert}}{\langle f \rangle}_{Q}{\langle w{1\!\!1}_{G'} \rangle}_{Q} \lesssim {K_{{\varepsilon}}}.\end{aligned}$$ We fix a $1\leq j \leq 3^d$ and set $\mathcal{S}=\mathcal{S}_j$, and we bound $\sum_{Q\in\mathcal{S}}{\ensuremath{\left\vertQ\right\vert}}{\langle f \rangle}_{Q}{\langle w{1\!\!1}_{G'} \rangle}_{Q}$. For $r\in\mathbb{N}$, let $\mathcal{S}_r$ be those cubes in $\mathcal{S}$ such that $2^{2^{r}}<\rho_w(Q)\leq 2^{2^{r+1}}$. Observe that for cubes in this collection, we have: $$\begin{aligned} \label{E:eq1} 2^{r} \simeq \log 2^{2^{r}} < \log \rho_w(Q) \leq \log 2^{2^{r+1}} \simeq 2\cdot 2^r. \end{aligned}$$ That is, $\log \rho_w(Q) \simeq 2^{r}$ for cubes in $\mathcal{S}_{r}$. Similarly if $Q\in\mathcal{S}_r$ $$\begin{aligned} \label{E:eq2} {\varepsilon}(\rho_w(Q))\simeq {\varepsilon}(2^{2^{r}}).\end{aligned}$$ It is enough to show: $$\begin{aligned} \label{E:toShow2} \sum_{Q\in\mathcal{S}_r}{\ensuremath{\left\vertQ\right\vert}}{\langle f \rangle}_{Q}{\langle w{1\!\!1}_{G'} \rangle}_{Q} \lesssim {\varepsilon}(2^{2^{r}})^{-1} + 2^{-r},\end{aligned}$$ since the right hand side sums over $r\in\mathbb{N}$ to at most ${K_{{\varepsilon}}}+1$. Thus, we will fix $r\in\mathbb{N}$ and prove \[E:toShow2\]; we will drop the “$r$” from the notation (i.e. we will write $\mathcal{S}$ for $\mathcal{S}_r$). We may also assume that the cubes satisfy ${\langle f \rangle}_{Q}\leq 3^d4w(G)^{-1}$.
If not then $Q$ is either a cube in $\mathcal{H}_j$ (for appropriate $j$) or is contained in a cube in $\mathcal{H}_j$; either way, $Q$ is contained in $H$. But ${1\!\!1}_{G'}$ is zero on $H$ and so ${\langle {1\!\!1}_{G'} \rangle}_{Q}=0$. Thus, we assume that ${\langle f \rangle}_{Q}\leq 3^d4w(G)^{-1}$. For $k\geq -1$, let $\mathcal{S}_k:=\{Q\in\mathcal{S}:{\langle f \rangle}_{Q}\simeq 4^{-k}w(G)^{-1}\}$. For fixed $k$, let $\mathcal{S}_{k}^{0}$ be the maximal cubes in $\mathcal{S}_k$ and for $j\geq 1$ let $\mathcal{S}_{k}^{j}$ be the maximal cubes in $\mathcal{S}_k\setminus\cup_{l=0}^{j-1}\mathcal{S}_{k}^{l}$. For $Q\in\mathcal{S}_{k}^{j}$ let $E_Q=Q\setminus \cup_{Q'\in\mathcal{S}_{k}^{j+1}:Q'\subset Q}Q'$. Observe that the sets $\{E_Q:Q\in\mathcal{S}_k\}$ are pairwise disjoint. The sparsity condition implies that $\int_{Q}{\ensuremath{\left\vertf(y)\right\vert}}dy\simeq\int_{E_Q}{\ensuremath{\left\vertf(y)\right\vert}}dy$. Using the pairwise disjointness of the sets $\{E_Q\}$, for fixed $k$ we have: $$\begin{aligned} \sum_{Q\in\mathcal{S}_k}{\langle f \rangle}_{Q}w(G'\cap Q) \lesssim \sum_{Q\in\mathcal{S}_k}\int_{E_Q}{\ensuremath{\left\vertf(y)\right\vert}}dy {\langle w \rangle}_{Q} \leq \int_{{\mathbb{R}}^d}{\ensuremath{\left\vertf(y)\right\vert}}M_{\mathcal{S}}w(y)dy.\end{aligned}$$ Summing from $k=-1$ to $k=10\cdot2^{r}$, we make the following coarse estimate: $$\begin{aligned} \sum_{k=-1}^{10\cdot 2^r} \sum_{Q\in\mathcal{S}_k}{\langle f \rangle}_{Q}w(G'\cap Q) &\lesssim \frac{2^r{\varepsilon}(2^{2^r})}{{\varepsilon}(2^{2^r})}\int_{{\mathbb{R}}^d}{\ensuremath{\left\vertf(y)\right\vert}}M_{\mathcal{S}}w(y)dy \\&\simeq \frac{(\log\rho_w(Q)){\varepsilon}(\rho_w(Q))}{{\varepsilon}(2^{2^r})}\int_{{\mathbb{R}}^d}{\ensuremath{\left\vertf(y)\right\vert}}M_{\mathcal{S}}w(y)dy \\&\simeq \frac{1}{{\varepsilon}(2^{2^r})}\int_{{\mathbb{R}}^d}{\ensuremath{\left\vertf(y)\right\vert}}M_{\varepsilon}w(y)dy \leq \frac{1}{{\varepsilon}(2^{2^r})}.\end{aligned}$$ The “$\simeq$” follows from \[E:eq1\] and \[E:eq2\]
and the last inequality uses the assumption that $\int_{{\mathbb{R}}^d}{\ensuremath{\left\vertf(y)\right\vert}}M_{\varepsilon}w(y)dy=1$. Now we must handle the sum from $k=10\cdot2^r$ to $\infty$. For a cube $Q$ in $\mathcal{S}_k^{j}$, let $Q_{t}:=\cup_{Q'\in\mathcal{S}_{k}^{j+t}:Q'\subset Q}Q'$, where $t=2^k$. The sparse condition implies that ${\ensuremath{\left\vertQ_t\right\vert}}\leq 4^{-t}{\ensuremath{\left\vertQ\right\vert}}$ and Lemma \[L:ainfest\] implies that $w(Q_t)\lesssim 2^{2^{r+1}}2^{-k}w(Q)$, since $\rho_w(Q)\leq 2^{2^{r+1}}$ and $\log({\ensuremath{\left\vertQ\right\vert}}/{\ensuremath{\left\vertQ_t\right\vert}})\gtrsim t=2^k$. Note that we may write: $$\begin{aligned} Q = Q_{t} \cup (\cup_{l=0}^{t-1}\cup_{Q'\in\mathcal{S}_{k}^{j+l}:Q'\subset Q}E_{Q'}).\end{aligned}$$ Concerning the $Q_t$ portion, for $Q$ in $\mathcal{S}_k$ we have: $$\begin{aligned} {\langle f \rangle}_{Q}w(G'\cap Q_t) \lesssim \frac{2^{2^{r+1}}}{2^k}{\langle f \rangle}_{Q}w(Q) =\frac{2^{2^{r+1}}}{2^k} \int_{Q}{\ensuremath{\left\vertf(y)\right\vert}}dy {\langle w \rangle}_{Q} \simeq \frac{2^{2^{r+1}}}{2^k} \int_{E_Q}{\ensuremath{\left\vertf(y)\right\vert}}dy {\langle w \rangle}_{Q}.\end{aligned}$$ Thus for fixed $k$ we have (using the pairwise disjointness of the sets $\{E_Q:Q\in\mathcal{S}_k\}$): $$\begin{aligned} \sum_{Q\in\mathcal{S}_k}{\langle {f} \rangle}_{Q}w(G'\cap Q_t) \lesssim \frac{2^{2^{r+1}}}{2^k}\int_{{\mathbb{R}}^d}{\ensuremath{\left\vertf(y)\right\vert}}Mw(y)dy \lesssim \frac{2^{2^{r+1}}}{2^k}.\end{aligned}$$ This can be summed in $k\geq 10\cdot2^r$ to the desired estimate, since $\sum_{k\geq 10\cdot 2^r}2^{2^{r+1}}2^{-k}\lesssim 2^{2^{r+1}-10\cdot 2^{r}}\leq 2^{-r}$. We must now handle the portion involving $Q\setminus Q_t$. Note that for fixed $l$ and $k$, the sets $\{E_{Q'}: Q'\in\mathcal{S}_{k}^{j+l},\ Q'\subset Q,\ Q\in\mathcal{S}_{k}^{j},\ j\geq 0\}$ are pairwise disjoint.
Thus for fixed $k$ we have: $$\begin{aligned} \sum_{j\geq 0}\sum_{Q\in\mathcal{S}_{k}^j}\sum_{l=0}^{t-1} \sum_{\substack{Q'\in\mathcal{S}_{k}^{j+l}:\\Q'\subset Q}} {\langle f \rangle}_{Q}w(G'\cap E_{Q'}) \lesssim 4^{-k}w(G)^{-1}\sum_{l=0}^{t-1}\sum_{j\geq 0}\sum_{Q\in\mathcal{S}_{k}^{j}} \sum_{\substack{Q'\in\mathcal{S}_{k}^{j+l}:\\Q'\subset Q}} w(G'\cap E_{Q'}),\end{aligned}$$ where, for each fixed $l$, the sets $E_{Q'}$ appearing in the inner sums are pairwise disjoint according to the observation above. Therefore this term is bounded by $4^{-k}tw(G)^{-1}w(G')\leq 2^{-k}$, since $t=2^k$ and $w(G')\leq w(G)$. This can be summed in $k\geq 10\cdot2^r$ to the desired estimate. Proof of Theorem \[T:cor\] {#S:poc} ========================== Theorem \[T:cor\] is not a corollary of Theorem \[T:main\] but is a corollary of the proof of Theorem \[T:main\]. Indeed, the only trouble is the entropy bump function ${\varepsilon}$. Let $r$ be the unique positive integer with $[w]_{A_\infty}\simeq2^r$. Thus for all $Q$ we have $\rho_w(Q)\lesssim 2^r$. With the notation as above, we need to show that $$\begin{aligned} \sum_{k=-1}^{\infty}\sum_{Q\in\mathcal{S}_k}{\ensuremath{\left\vertQ\right\vert}} {\langle f \rangle}_{Q}{\langle w{1\!\!1}_{G'} \rangle}_{Q} \lesssim [w]_{A_1}\log{(e+[w]_{A_\infty})},\end{aligned}$$ where $f$ is a function with ${\ensuremath{\left\|f\right\|}}_{L^1(w)}=1$. The proof of Theorem \[T:main\] shows that we have: $$\begin{aligned} \sum_{k\geq 10r}\sum_{Q\in\mathcal{S}_k}{\ensuremath{\left\vertQ\right\vert}} {\langle f \rangle}_{Q}{\langle w{1\!\!1}_{G'} \rangle}_{Q} \lesssim \int_{{\mathbb{R}}^d}{\ensuremath{\left\vertf(y)\right\vert}}Mw(y)dy\end{aligned}$$ and $$\begin{aligned} \sum_{k=-1}^{10r}\sum_{Q\in\mathcal{S}_k}{\ensuremath{\left\vertQ\right\vert}}{\langle f \rangle}_{Q}{\langle w{1\!\!1}_{G'} \rangle}_{Q} \lesssim r\int_{{\mathbb{R}}^d}{\ensuremath{\left\vertf(y)\right\vert}}Mw(y)dy.\end{aligned}$$ Noting that $Mw(y)\leq [w]_{A_1}w(y)$ and $r\simeq\log{(e+[w]_{A_\infty})}$, the proof of Theorem \[T:cor\] is complete.
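As an aside (not part of the argument above), the summability condition on the entropy bump is easy to check numerically for the model example from the introduction. The sketch below is our own, using the paper’s convention $\log t = \log_2(2+t)$; all function names are ours.

```python
import math

def log_(t):
    # the paper's convention: log t = log_2(2 + t), so log_(t) >= 1 for t >= 0
    return math.log2(2 + t)

def eps(t, delta=1.0):
    # model bump eps(t) = (log log t)(log log log t)^(1+delta), in that convention
    return log_(log_(t)) * log_(log_(log_(t))) ** (1 + delta)

def K_eps(delta=1.0, kmax=20):
    # partial sums of K_eps = sum_{k >= 0} eps(2^(2^k))^(-1);
    # 2 ** (2 ** k) is kept as an exact Python int, which math.log2 accepts
    return sum(1.0 / eps(2 ** (2 ** k), delta) for k in range(kmax + 1))
```

The partial sums stabilize quickly, consistent with ${K_{{\varepsilon}}}<\infty$ for this choice of ${\varepsilon}$.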
--- abstract: | Motivated by the recent work of Hassainia and Hmidi \[Z. Hassainia, T. Hmidi - On the V-states for the generalized quasi-geostrophic equations,*arXiv preprint arXiv:1405.0858*\], we close the question of the existence of convex global rotating solutions for the generalized surface quasi-geostrophic equation for $\alpha \in [1,2)$. We also show $C^{\infty}$ regularity of their boundary for all ${\alpha}\in (0,2)$.\ *Keywords: bifurcation theory, Crandall-Rabinowitz, V-states, patches, surface quasi-geostrophic* author: - 'Angel Castro, Diego Córdoba and Javier Gómez-Serrano' bibliography: - 'references.bib' title: 'Existence and regularity of rotating global solutions for the generalized surface quasi-geostrophic equations' --- Introduction ============ In this paper, we consider the generalized surface quasi-geostrophic equation (gSQG): $$\begin{aligned} \left\{ \begin{array}{ll} \partial_{t}\theta+u\cdot\nabla\theta=0,\quad(t,x)\in\mathbb{R}_+\times\mathbb{R}^2, &\\ u=-\nabla^\perp(-\Delta)^{-1+\frac{\alpha}{2}}\theta,\\ \theta_{|t=0}=\theta_0, \end{array} \right.\end{aligned}$$ where $\alpha \in (0,2)$. The case $\alpha = 1$ corresponds to the surface quasi-geostrophic (SQG) equation and the limiting case $\alpha = 0$ refers to the 2D incompressible Euler equation. The case $\alpha = 2$ produces stationary solutions. Our goal in this article is to show the existence of global rotating solutions (also known as V-states) of the gSQG equation. These solutions are also known to exist for the vortex patch problem (${\alpha}= 0$), see the paper [@Hmidi-Mateu-Verdera:rotating-vortex-patch] by Hmidi, Mateu and Verdera, and recently their existence has been shown for the case $0 < {\alpha}< 1$ [@Hassainia-Hmidi:v-states-generalized-sqg] by Hassainia and Hmidi. Motivated by the articles of Constantin et al. [@Constantin-Majda-Tabak:formation-fronts-qg] and Held et al.
[@Held-Pierrehumbert-Garner-Swanson:sqg-dynamics], a lot of effort has been devoted to understanding these equations for the SQG $({\alpha}= 1)$ case. More generally, the problem of whether the gSQG system presents global solutions or not is not completely understood. The existence of global weak solutions in $L^2$ for the case ${\alpha}= 1$ (SQG) was proven by Resnick in [@Resnick:phd-thesis-sqg-chicago] using an extra cancellation due to the oddness of the Riesz transform, and it was extended to the gSQG case in [@Chae-Constantin-Cordoba-Gancedo-Wu:gsqg-singular-velocities] by Chae et al., even though the question of non-uniqueness is still open (see [@Isett-Vicol:holder-continuous-active-scalar] and references therein). A one-dimensional model of the gSQG equations was studied by Dong and Li in [@Dong-Li:one-dimensional-alpha-patch]. A particular kind of weak solution for an active scalar is given by the so-called $\alpha-$*patches*, i.e., solutions for which $\theta$ is a step function: $$\begin{aligned} \theta(x,t) = \left\{ \begin{array}{ll} \theta_1, \text{ if } \ \ x \in \Omega(t) \\ \theta_2, \text{ if } \ \ x \in \Omega(t) ^c, \\ \end{array} \right.\end{aligned}$$ where $ \Omega(0)$ is given by the initial distribution of $\theta$, $\theta_1$ and $\theta_2$ are constants, and $\Omega(t)$ is the evolution of $\Omega(0)$ under the velocity field $u$ given by $u(x,t) = \nabla^\perp \Lambda^{-(2-\alpha)} \theta (x,t) $ for $\Lambda = (-\Delta)^{1/2}$. The evolution of such a distribution is completely determined by the evolution of the boundary, allowing the problem to be treated as a non-local one-dimensional equation for the contour of $\Omega(t)$. In this setting, local existence of smooth solutions was first obtained for $C^{\infty}$ curves by Rodrigo in [@Rodrigo:evolution-sharp-fronts-qg] for $0 < {\alpha}\leq 1$.
The question of local existence of simply connected $\alpha-$*patches* with Sobolev regularity of their boundaries was addressed by Gancedo in [@Gancedo:existence-alpha-patch-sobolev] for $0 < {\alpha}\leq 1$ and for $1 < {\alpha}< 2$ by Chae et al. in [@Chae-Constantin-Cordoba-Gancedo-Wu:gsqg-singular-velocities]. To get a better understanding of the behaviour of solutions of these interface problems, several numerical experiments have been performed. In [@Cordoba-Fontelos-Mancho-Rodrigo:evidence-singularities-contour-dynamics], Córdoba et al. studied the problem of the evolution of two patches for a range of $\alpha$. Their simulations suggest an asymptotically self-similar singular scenario in which the distance between both patches goes to zero in finite time while simultaneously the curvature of the boundaries blows up. Scott and Dritschel [@Scott-Dritschel:self-similar-sqg], based on numerical simulations, suggest that an elliptical patch with a big aspect ratio between its axes may develop a self-similar singularity with an explosive growth of the curvature in the case ${\alpha}= 1$. Recently ([@Castro-Cordoba-GomezSerrano-MartinZamora:remarks-geometric-properties-sqg]) it has been shown that elliptical patches are not rotating solutions for ${\alpha}> 0$, as opposed to the limiting case ${\alpha}\rightarrow 0$, for which they are, and by means of a rigorous computer-assisted proof the existence of convex solutions that lose their convexity in finite time has been established. In a paper by Scott [@Scott:scenario-singularity-quasigeostrophic], it was already pointed out that small perturbations of thin strips may lead to a self-similar cascade of instabilities, leading to a possible arc-chord blow-up. Gancedo and Strain [@Gancedo-Strain:absence-splash-muskat-SQG] proved that, in fact, no splash singularity can be formed, i.e., two interfaces cannot collapse at a point, if the interfaces remain smooth.
The evolution equation for the interface of an $\alpha-$patch, which we parametrize as a $2 \pi$-periodic curve $z(x)$, can be written as $$\begin{aligned} \label{Ecuacion-alpha-patch} \partial_t z(x,t) = -(\theta_2 - \theta_1)C({\alpha}) \int_{0} ^{ 2 \pi} \frac{ \partial_x z (x,t) - \partial_x z(x-y,t) }{ \vert z(x,t) - z(x-y, t) \vert^{\alpha}} dy, \end{aligned}$$ since we can add terms in the tangential direction without changing the evolution of the patch. The normalizing constant $C({\alpha})$ is given by: $$\begin{aligned} C({\alpha}) = \frac{1}{2\pi} \frac{\Gamma\left(\frac{{\alpha}}{2}\right)}{2^{1-{\alpha}}\Gamma\left(\frac{2-{\alpha}}{2}\right)}.\end{aligned}$$ The analogous problem for the vorticity formulation of 2D Euler $(\alpha \to 0)$ is better understood. The global existence and uniqueness of weak solutions of the 2D Euler in vorticity formulation is due to Yudovich [@Yudovich:Nonstationary-ideal-incompressible]. Regularity preservation for $\mathcal{C}^{1, \gamma}$ patches was obtained by Chemin using techniques from paradifferential calculus in [@Chemin:persistance-structures-fluides-incompressibles]. Another proof of that result, which highlights the extra cancellation on semi-spheres of even kernels, can be found in [@Bertozzi-Constantin:global-regularity-vortex-patches] by Bertozzi and Constantin. Serfati, in [@Serfati:preuve-directe-existence-globale-patches], provided another one, giving a fuller characterization of the velocity gradient’s regularity. In recent years, Denisov has studied the process of merging for the vortex patch problem. This is the scenario shown by the numerics of [@Cordoba-Fontelos-Mancho-Rodrigo:evidence-singularities-contour-dynamics] for the $\alpha$-patch. However, for the vortex patch problem, the collapse at a point cannot happen in finite time: the distance between the two patches can decay at most as fast as a double exponential.
Denisov proves in [@Denisov:sharp-corner-euler-patches] that this bound is sharp if one is allowed to slightly modify the velocity by superimposing a smooth background incompressible flow. However, there is a family of solutions that evolve by rotating with constant angular velocity around their center of mass. These solutions are known as V-states. Deem and Zabusky were the first to compute them numerically [@Deem-Zabusky:vortex-waves-stationary], and later other authors have improved the methods and numerically computed a bigger class (see [@Wu-Overman-Zabusky:steady-state-Euler-2d; @Elcrat-Fornberg-Miller:stability-vortices-cylinder; @LuzzattoFegiz-Williamson:efficient-numerical-method-steady-uniform-vortices; @Saffman-Szeto:equilibrium-shapes-equal-uniform-vortices] for a small sample of them). In the case $\alpha = 0$, Burbea [@Burbea:motions-vortex-patches] outlined a proof of the existence of V-states by means of a conformal mapping and bifurcation theory. A fully rigorous proof was given by Hmidi, Mateu and Verdera in [@Hmidi-Mateu-Verdera:rotating-vortex-patch]. They also showed that the family of V-states has $C^{\infty}$ boundary regularity and is convex. In another paper [@Hmidi-Mateu-Verdera:rotating-doubly-connected-vortices], they studied the V-state existence for the case of doubly connected domains. In a very recent preprint, Hassainia and Hmidi have worked on extending the ideas of the aforementioned papers to the case $0 < \alpha \leq 1$ [@Hassainia-Hmidi:v-states-generalized-sqg]. They are able to prove the existence of convex V-states with $C^{k}$ boundary regularity for the case $0 < \alpha < 1$. The possibility of $C^{\infty}$ regularity and the existence for the case $1 \leq \alpha < 2$ are left open. Motivated by their work, we have attempted to fill the gap. Precisely, in this paper we are able to prove existence and $C^{\infty}$ regularity of convex global rotating solutions for the remaining open cases of $\alpha$.
For the existence part, the key ingredient in our proof is a careful choice of the spaces in which we apply the Crandall-Rabinowitz theorem, in a similar spirit to the previous papers [@Burbea:motions-vortex-patches; @Hassainia-Hmidi:v-states-generalized-sqg; @Hmidi-Mateu-Verdera:rotating-vortex-patch; @Hmidi-Mateu-Verdera:rotating-doubly-connected-vortices]. Concerning the regularity, one has to invert the most singular operator onto the less singular one to be able to bootstrap. From now on, we will assume that $\theta_2 - \theta_1 = 1$. Contour equations for the rotating solutions -------------------------------------------- Let $z(x,t) = (z_1(x,t),z_2(x,t))$ be the interface of the patch. Since our results will be concerned with patches that are close to the disk, we can assume that the patch is star-shaped and therefore it can be parametrized as $(R(x,t)\cos(x),R(x,t)\sin(x))$. In order to obtain the equations for $R(x,t)$, we will start by writing the equation for $z(x,t)$, and then substitute the star-shaped ansatz. Let us assume that $z(x,t)$ rotates with frequency $\Omega$ counterclockwise. Thus $$\begin{aligned} z_t(x,t) = \Omega z^{\perp}(x,t),\end{aligned}$$ where for every $v = (v_1, v_2)$, $v^{\perp}$ is defined as $(-v_2,v_1)$. The equations a V-state satisfies are $$\begin{aligned} z_t(x,t) \cdot n & = u(z(x,t),t) \cdot n \\ \Omega \langle z^{\perp}(x,t), z_{x}^{\perp}(x,t) \rangle = \Omega \langle z(x,t), z_{x}(x,t) \rangle & = \langle u(z(x,t),t), z_{x}^{\perp}(x,t) \rangle.\end{aligned}$$ Here, $n$ is the unit normal vector, and the tangential component of the velocity does not change the shape of the curve.
The question of finding a rotating global solution patch of the generalized quasi-geostrophic equation is reduced to finding a zero of $F(\Omega,R)$, where $$\begin{aligned} \label{funcionalvstates} F(\Omega,R) = \Omega R'(x) - \sum_{i=1}^{3} F_{i}(R),\end{aligned}$$ and the $F_{i}$ are $$\begin{aligned} F_1(R)=&\frac{1}{R(x)}C({\alpha})\int\frac{\sin(x-y)}{\left(\left(R(x)-R(y)\right)^2+4R(x)R(y)\sin^2\left(\frac{x-y}{2}\right)\right)^\frac{{\alpha}}{2}} \left(R(x)R(y)+R'(x)R'(y)\right)dy,\\ F_2(R)=&C({\alpha})\int\frac{\cos(x-y)}{\left(\left(R(x)-R(y)\right)^2+4R(x)R(y)\sin^2\left(\frac{x-y}{2}\right)\right)^\frac{{\alpha}}{2}} \left(R'(y)-R'(x)\right)dy,\\ F_3(R)=&\frac{R'(x)}{R(x)}C({\alpha})\int\frac{\cos(x-y)}{\left(\left(R(x)-R(y)\right)^2+4R(x)R(y)\sin^2\left(\frac{x-y}{2}\right)\right)^\frac{{\alpha}}{2}} \left(R(x)-R(y)\right)dy,\end{aligned}$$ and the above integrals are performed on the torus. For simplicity, from now on we will omit writing the domain of integration, which is always the torus.
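As a quick consistency check (this computation reappears as Step 2 of the existence scheme): for the unit disk $R\equiv 1$ one has $R'\equiv 0$, so $F_2(1)=F_3(1)=0$, while

```latex
F_1(1) = C({\alpha})\int \frac{\sin(x-y)}{\left(4\sin^2\left(\frac{x-y}{2}\right)\right)^{\frac{{\alpha}}{2}}}\,dy
       = \frac{C({\alpha})}{2^{{\alpha}}}\int \frac{\sin(z)}{\left|\sin\left(\frac{z}{2}\right)\right|^{{\alpha}}}\,dz = 0,
```

since the integrand is odd in $z$ and integrable for ${\alpha}<2$ (it is of size $|z|^{1-{\alpha}}$ near $z=0$). Hence $F(\Omega,1)=0$ for every $\Omega$: the disk rotates trivially with any angular velocity.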
Functional spaces ----------------- In our proofs, we will use the following spaces: $$\begin{aligned} X^{k} = \left\{f \in H^{k}, f(x) = \sum_{j=1}^{\infty}a_{j} \cos(jx)\right\}, \quad X^{k}_{m} = \left\{f \in H^{k}, f(x) = \sum_{j=1}^{\infty}a_{jm} \cos(jmx)\right\} \\ Y^{k} = \left\{f \in H^{k}, f(x) = \sum_{j=1}^{\infty}a_{j} \sin(jx)\right\}, \quad Y^{k}_{m} = \left\{f \in H^{k}, f(x) = \sum_{j=1}^{\infty}a_{jm} \sin(jmx)\right\} \\ X^{k+\log} = \left\{f \in H^{k}, f(x) = \sum_{j=1}^{\infty}a_{j} \cos(jx), \left\|\int_{\mathbb{T}} \frac{{\partial}^{k}f(x-y)-{\partial}^{k}f(x)}{\left|\sin\left(\frac{y}{2}\right)\right|}dy\right\|_{L^2(x)} < \infty\right\}, \quad k \in \mathbb{Z} \\ X^{k+\log}_{m} = \left\{f \in H^{k}, f(x) = \sum_{j=1}^{\infty}a_{jm} \cos(jmx), \left\|\int_{\mathbb{T}} \frac{{\partial}^{k}f(x-y)-{\partial}^{k}f(x)}{\left|\sin\left(\frac{y}{2}\right)\right|}dy\right\|_{L^2(x)} < \infty\right\}, \quad k \in \mathbb{Z}.\end{aligned}$$ The norm is given in the last two cases by the sum of the $H^{k}$-norm and the additional finite integral in the definition, and in the other four by the $H^{k}$ norm. We give an alternative characterization of the $X^{k+\log}$ spaces. This will be useful for the spectral study. \[alternativexlog\] $$\begin{aligned} f \in X^{k+\log} \Leftrightarrow f \in X^{k}, |a_1|^{2} + \sum_{j=2}^{\infty}|a_j|^{2}|j|^{2k}(1+\log(j))^{2} < \infty,\end{aligned}$$ where $$\begin{aligned} f = \sum_{j=1}^{\infty} a_{j} \cos(jx)\end{aligned}$$ $\Rightarrow:$ By virtue of Lemmas \[LemmaICIS\] and \[lemmagrowthomega\]: $$\begin{aligned} |a_1|^{2} + \sum_{j=2}^{\infty}|a_j|^{2}|j|^{2k}(1+\log(j))^{2} \leq C\left(|a_1|^{2} + \sum_{j=2}^{\infty}|a_j|^{2}|j|^{2k}(\log(j))^{2}\right) \leq C\left\|\int \frac{{\partial}^{k} f(x-y)- {\partial}^{k} f(x)}{\left|\sin\left(\frac{y}{2}\right)\right|}dy\right\|_{L^{2}}^{2} < \infty\end{aligned}$$ $\Leftarrow:$ Since $f \in X^{k}$, it can be written as a Fourier series. 
Let those coefficients be $a_j$. By Lemmas \[LemmaICIS\] and \[lemmagrowthomega\]: $$\begin{aligned} \left\|\int \frac{{\partial}^{k} f(x-y)- {\partial}^{k} f(x)}{\left|\sin\left(\frac{y}{2}\right)\right|}dy\right\|_{L^{2}}^{2} \leq C \left(|a_1|^{2} + \sum_{j=2}^{\infty}|a_j|^{2}|j|^{2k} (\log(j))^{2}\right) \leq C \left(|a_1|^{2} + \sum_{j=2}^{\infty}|a_j|^{2}|j|^{2k} (1+\log(j))^{2}\right) < \infty.\end{aligned}$$ There is a substantial difference between the spaces $X^{k+\log}$ and the spaces $B^{s}$ and $B^{s-1}_{Log}$ that were proposed as candidates in [@Hassainia-Hmidi:v-states-generalized-sqg]. Even though the Fourier multiplier scaling is correct, the $l^{1}$-summability condition and the definition via Fourier series do not allow one to estimate the nonlinear terms in an easy way. Finding an alternative characterization in physical space removes this obstacle. Moreover, the choice of $X^{k+\log}$ as a space that gains a logarithm of a derivative instead of finding a space that loses a logarithm of a derivative (as suggested in [@Hassainia-Hmidi:v-states-generalized-sqg]) alleviates the heavy computations. We do not claim that the proposed spaces $B^{s},B^{s-1}_{Log}$ cannot work. Theorems and outline of the proofs ---------------------------------- The paper is organized as follows: In Section \[sectionexistencealpha1\], we prove the following theorem: \[teoremaexistenciaalpha1\] Let $k\geq 3, m \in \mathbb{N}, m \geq 2$ and let $$\begin{aligned} \Omega_m = -\frac{2}{\pi}\sum_{k=2}^{m}\frac{1}{2k-1}.\end{aligned}$$ Then, there exists a family of $m$-fold solutions $(\Omega,R), R(x)-1 \in X^{k+\log}_{m}$ of the equation \[funcionalvstates\] with ${\alpha}= 1$ that bifurcate from the disk at $\Omega = \Omega_m$.
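As an informal numerical sanity check (ours, not part of the proof): the frequencies $\Omega_m$ above should arise as the ${\alpha}\to 1^{+}$ limit of the frequencies for $1<{\alpha}<2$ stated in the next theorem, since $\Gamma(1-{\alpha})$ blows up while the bracket vanishes at the same linear rate. A short sketch:

```python
import math

def omega_sqg(m):
    # bifurcation frequencies for alpha = 1 from the theorem above
    return -(2.0 / math.pi) * sum(1.0 / (2 * k - 1) for k in range(2, m + 1))

def omega_general(alpha, m):
    # frequencies for 1 < alpha < 2 from the next theorem; math.gamma is
    # defined at the negative non-integer argument 1 - alpha
    g = math.gamma
    prefactor = -(2.0 ** (alpha - 1)) * g(1 - alpha) / g(1 - alpha / 2) ** 2
    bracket = (g(1 + alpha / 2) / g(2 - alpha / 2)
               - g(m + alpha / 2) / g(1 + m - alpha / 2))
    return prefactor * bracket
```

Evaluating `omega_general` at ${\alpha}$ slightly above $1$ reproduces `omega_sqg` for small $m$.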
Section \[Sectionexistencealphamayor1\] is devoted to proving \[teoremaexistenciaalphamayor1\] Let $k\geq 3, m \in \mathbb{N}, m \geq 2, 1 < \alpha < 2$ and let $$\begin{aligned} \Omega_m = -2^{{\alpha}-1} \frac{\Gamma\left(1-{\alpha}\right)}{\left(\Gamma\left(1-\frac{{\alpha}}{2}\right)\right)^{2}}\left(\frac{\Gamma\left(1+\frac{{\alpha}}{2}\right)}{\Gamma\left(2-\frac{{\alpha}}{2}\right)} - \frac{\Gamma\left(m+\frac{{\alpha}}{2}\right)}{\Gamma\left(1+m-\frac{{\alpha}}{2}\right)}\right).\end{aligned}$$ Then, there exists a family of $m$-fold solutions $(\Omega,R), R(x)-1 \in X^{k}_{m}$ of the equation \[funcionalvstates\] with $1 < {\alpha}< 2$ that bifurcate from the disk at $\Omega = \Omega_m$. Both proofs are carried out by means of a combination of Crandall-Rabinowitz’s theorem and a priori estimates. We remark that there is a lot of room for improvement of the regularity, and the choice $k \geq 3$ is far from being optimal. Here we are only interested in finding one such $k$ that shows the theorem, and not the sharpest one in terms of regularity. In the final section we deal with the regularity of the boundary and its convexity. We are able to show the following theorem: \[teoremaregularidadconvexidad\] Let $0 < \alpha < 2$. Let $R(x)$ be an $m$-fold solution of \[funcionalvstates\] which is close to the disk. Then, $R(x)$ belongs to $C^{\infty}$ and it parametrizes a convex patch. To do so, we will invert a singular integral operator to gain regularity and bootstrap. It is not necessary to derive an explicit formula as in [@Hmidi-Mateu-Verdera:rotating-vortex-patch] using the structure of the kernel. This agrees with the discussion in [@Hassainia-Hmidi:v-states-generalized-sqg Remark 3]. Existence in the case $\alpha=1$ {#sectionexistencealpha1} ================================ This section is devoted to showing Theorem \[teoremaexistenciaalpha1\]. The proof will be divided into 6 steps.
These steps correspond to checking the hypotheses of the Crandall-Rabinowitz theorem [@Crandall-Rabinowitz:bifurcation-simple-eigenvalues] for $$F(\Omega,R)=\Omega R'-\sum_{i=1}^3F_i(R),$$ where $$\begin{aligned} F_1(R)=&\frac{1}{2\pi R(x)}\int\frac{\sin(x-y)}{\left(\left(R(x)-R(y)\right)^2+4R(x)R(y)\sin^2\left(\frac{x-y}{2}\right)\right)^\frac{1}{2}} \left(R(x)R(y)+R'(x)R'(y)\right)dy,\\ F_2(R)=&\frac{1}{2\pi }\int\frac{\cos(x-y)}{\left(\left(R(x)-R(y)\right)^2+4R(x)R(y)\sin^2\left(\frac{x-y}{2}\right)\right)^\frac{1}{2}} \left(R'(y)-R'(x)\right)dy,\\ F_3(R)=&\frac{R'(x)}{2\pi R(x) }\int\frac{\cos(x-y)}{\left(\left(R(x)-R(y)\right)^2+4R(x)R(y)\sin^2\left(\frac{x-y}{2}\right)\right)^\frac{1}{2}} \left(R(x)-R(y)\right)dy,\end{aligned}$$ and they are the following: 1. The functional $F$ satisfies $$F(\Omega,R)\,:\, {\Bbb{R}}\times \{1+V^r\}\mapsto Y^{k-1},$$ where $V^r$ is the open neighborhood of 0 $$V^r=\{ f\in X^{k+\log}\,:\, ||f||_{X^{k+\log}}<r\},$$ for all $0<r<1$ and $k\geq 3$. 2. $F(\Omega,1) = 0$ for every $\Omega$. 3. The partial derivatives $F_{\Omega}$, $F_{R}$ and $F_{R\Omega}$ exist and are continuous. 4. Ker($\mathcal{F}$) and $Y^{k-1}$/Range($\mathcal{F}$) are one-dimensional, where $\mathcal{F}$ is the linearized operator around the disk $R = 1$ at $\Omega = \Omega_m$. 5. $F_{\Omega R}(\Omega_m,1)(h_0) \not \in$ Range($\mathcal{F}$), where Ker$(\mathcal{F}) = \langle h_0 \rangle$. 6. Step 1 can be applied to the spaces $X^{k+\log}_{m}$ and $Y^{k-1}_{m}$ instead of $X^{k+\log}$ and $Y^{k-1}$. Step 1 ------ In order to prove that $$F(\Omega,R)\,:\, {\Bbb{R}}\times \{1+V^r\}\mapsto Y^{k-1},$$ where $V^r$ is the open neighborhood of 0 $$V^r=\{ f\in X^{k+\log}\,:\, ||f||_{X^{k+\log}}<r\},$$ for all $0<r<1$ and $k\geq 3$, we will deal with the most singular terms. For example, we will not give details about the bound on the $L^2$-norm of $F(R)$, and we focus on the following proposition: \[prop1\] Let $0 < r < 1$, $k \geq 3$.
Then $$F(\Omega,R): \mathbb{R} \times \{1+V^r\} \mapsto Y^{k-1}$$ In order to show this proposition we will use the following decomposition $$\begin{aligned} \frac{1}{\left(\left(R(x)-R(y)\right)^2+4R(x)R(y)\sin^2\left(\frac{x-y}{2}\right)\right)^\frac{1}{2}} = K_{S}(x,y)+\frac{1}{\left(R(x)^2+R'(x)^2\right)^\frac{1}{2}}\frac{1}{2\left|\sin\left(\frac{x-y}{2}\right)\right|},\end{aligned}$$ where the kernel $$\begin{aligned} K_S(x,y)\equiv \frac{1}{\left(\left(R(x)-R(y)\right)^2+4R(x)R(y)\sin^2\left(\frac{x-y}{2}\right)\right)^\frac{1}{2}}- \frac{1}{\left(R(x)^2+R'(x)^2\right)^\frac{1}{2}}\frac{1}{2\left|\sin\left(\frac{x-y}{2}\right)\right|},\end{aligned}$$ is not singular at $x=y$. Since we can write $$\begin{aligned} \cos(x)=&1-2\sin^2\left(\frac{x}{2}\right),\end{aligned}$$ we have that $$\begin{aligned} &\frac{\cos(x-y)}{\left(\left(R(x)-R(y)\right)^2+4R(x)R(y)\sin^2\left(\frac{x-y}{2}\right)\right)^\frac{1}{2}}\\&= \cos(x-y)K_S(x,y)+ \frac{1}{\left(R(x)^2+R'(x)^2\right)^\frac{1}{2}}\frac{1}{2\left|\sin\left(\frac{x-y}{2}\right)\right|} -\frac{\left|\sin\left(\frac{x-y}{2}\right)\right|}{\left(R(x)^2+R'(x)^2\right)^\frac{1}{2}}\end{aligned}$$ Let’s bound $F_1$. We will split $F_1$ into two terms, $$\begin{aligned} F_1=&\frac{1}{2\pi R(x)}\int\frac{\sin(x-y)}{\left(\left(R(x)-R(y)\right)^2+4R(x)R(y)\sin^2\left(\frac{x-y}{2}\right)\right)^\frac{1}{2}} R(x)R(y)dy\\ &+\frac{1}{2\pi R(x)}\int\frac{\sin(x-y)}{\left(\left(R(x)-R(y)\right)^2+4R(x)R(y)\sin^2\left(\frac{x-y}{2}\right)\right)^\frac{1}{2}} R'(x)R'(y)dy\\ \equiv & F_{11}+F_{12}\end{aligned}$$ and we will focus on $F_{12}$ since it is the most singular one. 
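The assertion that $K_S$ is not singular at $x=y$ can be observed numerically; the sample radius $R(x)=1+\frac{1}{10}\cos(2x)$ below is an arbitrary smooth choice for illustration, not taken from the text:

```python
import math

def R(x):  return 1.0 + 0.1 * math.cos(2 * x)   # sample profile (assumption)
def Rp(x): return -0.2 * math.sin(2 * x)        # its derivative

def K_S(x, y):
    # difference between the full kernel and its leading singular part
    d = math.sqrt((R(x) - R(y)) ** 2
                  + 4 * R(x) * R(y) * math.sin((x - y) / 2) ** 2)
    sing = 2 * abs(math.sin((x - y) / 2)) * math.sqrt(R(x) ** 2 + Rp(x) ** 2)
    return 1.0 / d - 1.0 / sing

x = 0.7
vals = [K_S(x, x + eps) for eps in (1e-2, 1e-4, 1e-6)]
print(vals)
```

The printed values stay bounded and converge as $y \to x$, while each of the two kernel pieces separately blows up like $|x-y|^{-1}$.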
Making the change of variable $x-y \mapsto y$, taking ${\partial}^{k-1}$ derivatives with respect to $x$ and changing back again to $y\mapsto x-y$ yields $$\begin{aligned} {\partial}^{k-1}F_{12}=&\frac{(-1) {\partial}^{k-1}R(x)}{2\pi R(x)^{2}}\int\frac{\sin(x-y)}{\left(\left(R(x)-R(y)\right)^2+4R(x)R(y)\sin^2\left(\frac{x-y}{2}\right)\right)^\frac{1}{2}} R'(x)R'(y)dy\\ +&\frac{1}{2\pi R(x)}\int\frac{\sin(x-y)}{\left(\left(R(x)-R(y)\right)^2+4R(x)R(y)\sin^2\left(\frac{x-y}{2}\right)\right)^\frac{1}{2}} \left(R(x){\partial}^kR(y)+{\partial}^kR(x)R(y)\right)dy\\ &-\frac{1}{2\pi R(x)}\int\frac{\sin(x-y)R'(x)R'(y)}{\left(\left(R(x)-R(y)\right)^2+4R(x)R(y)\sin^2\left(\frac{x-y}{2}\right)\right)^\frac{3}{2}} \\ & \times \left((R(x)-R(y))\left({\partial}^{k-1}R(x)-{\partial}^{k-1}R(y)\right)+2\left({\partial}^{k-1}R(x)R(y)+R(x){\partial}^{k-1}R(y)\right)\sin^2\left(\frac{x-y}{2}\right)\right)dy \\&+ \text{l.o.t.},\end{aligned}$$ where l.o.t. stands for lower order terms. Let $\beta \in \mathbb{R}$. We denote by $\mathcal{H}_{\beta}$ the set of functions $f(x,y)$ that satisfy the following estimates: $$\begin{aligned} \sup_{x\in {\mathbb{T}}} \left\|\frac{f(x,\cdot)}{\sin(\cdot)^{\beta}}\right\|_{L^1({\mathbb{T}})}\leq C, \quad \sup_{y \in {\mathbb{T}}}\left\|\frac{f(\cdot,y)}{\sin(y)^{\beta}}\right\|_{L^1({\mathbb{T}})}\leq C,\end{aligned}$$ where $C$ is a constant whose dependence will be clear from the context. Since the kernel $$a(x,y)\equiv \frac{\sin(x-y)}{\left(\left(R(x)-R(y)\right)^2+4R(x)R(y)\sin^2\left(\frac{x-y}{2}\right)\right)^\frac{1}{2}}$$ belongs to $\mathcal{H}_0$ we can apply Young’s inequality to prove that $$||{\partial}^{k-1} F_{1}||_{L^2({\mathbb{T}})} \leq C\left(||R||_{X^{k+\log}},r\right).$$ Next we bound ${\partial}^{k-1} F_2$.
Making the change of variable $x-y \mapsto y$, taking ${\partial}^{k-1}$ derivatives with respect to $x$ and changing again to $y\mapsto x-y$ yields $$\begin{aligned} {\partial}^{k-1} F_{2}& =\frac{1}{2\pi }\int\frac{\cos(x-y)}{\left(\left(R(x)-R(y)\right)^2+4R(x)R(y)\sin^2\left(\frac{x-y}{2}\right)\right)^\frac{1}{2}} \left({\partial}^k R(y)-{\partial}^k R(x)\right)dy\\ & -\frac{1}{2\pi }\int\frac{\cos(x-y)}{\left(\left(R(x)-R(y)\right)^2+4R(x)R(y)\sin^2\left(\frac{x-y}{2}\right)\right)^\frac{3}{2}}\left({\partial}^{k-1} R(y)-{\partial}^{k-1} R(x)\right) \\ & \times \left((R(x)-R(y))(R'(x)-R'(y)) + 2(R(x)R'(y) + R'(x)R(y))\sin^{2}\left(\frac{x-y}{2}\right)\right)dy\\ &+ \text{l.o.t.}\end{aligned}$$ We will split the first term as follows $$\begin{aligned} &\frac{1}{2\pi }\int \cos(x-y)K_{S}(x,y)\left({\partial}^k R(y)-{\partial}^kR(x)\right)dy\\ &-\frac{1}{2\pi}\int \frac{\left|\sin\left(\frac{x-y}{2}\right)\right|}{\left(R(x)^2+R'(x)^2\right)^\frac{1}{2}}\left({\partial}^k R(y)-{\partial}^k R(x)\right)dy\\ &+\frac{1}{4\pi \left(R(x)^2+R'(x)^2\right)^\frac{1}{2}} \int \frac{({\partial}^k R(y)-{\partial}^k R(x))}{\left|\sin\left(\frac{x-y}{2}\right)\right|}dy.\end{aligned}$$ Therefore, we have that $$\begin{aligned} ||{\partial}^{k-1} F_2||_{L^2({\mathbb{T}})}\leq C\left(||R||_{X^{k+\log}},r\right),\end{aligned}$$ since the first integral is bounded via Young’s inequality (using that $K_S \in \mathcal{H}_0$), the second one is of lower order, and the third one is precisely the quantity controlled by the definition of the $X^{k+\log}$ spaces.
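The Young-type inequality invoked for kernels in $\mathcal{H}_0$ is the standard Schur test; in the form we use it (our own summary of a classical fact): if $\sup_x\|K(x,\cdot)\|_{L^1}\leq C$ and $\sup_y\|K(\cdot,y)\|_{L^1}\leq C$, then the operator $Tg(x)=\int K(x,y)g(y)\,dy$ satisfies

```latex
\|Tg\|_{L^{2}({\mathbb{T}})}^{2}
  \leq \int_{{\mathbb{T}}} \left(\int_{{\mathbb{T}}} |K(x,y)|\,dy\right)
       \left(\int_{{\mathbb{T}}} |K(x,y)|\,|g(y)|^{2}\,dy\right) dx
  \leq C \,\sup_{y\in{\mathbb{T}}} \int_{{\mathbb{T}}} |K(x,y)|\,dx \;\|g\|_{L^{2}({\mathbb{T}})}^{2}
  \leq C^{2}\,\|g\|_{L^{2}({\mathbb{T}})}^{2},
```

by Cauchy–Schwarz in $y$ and Fubini; this is how the terms with kernels in $\mathcal{H}_0$ acting on ${\partial}^{k}R$ are bounded in $L^2$.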
To estimate the second integral in ${\partial}^{k-1}F_2$, we subtract and add the following term: $$\begin{aligned} \frac{1}{2\pi }\int\frac{\cos(x-y)}{2\left|\sin\left(\frac{x-y}{2}\right)\right|}\frac{R'(x)R''(x)+R'(x)R(x)}{((R(x))^2+(R'(x))^2)^{3/2}}\left({\partial}^{k-1} R(y)-{\partial}^{k-1} R(x)\right) dy,\end{aligned}$$ which is clearly in $L^{2}$, so that the resulting kernel coming from the difference, which acts on $({\partial}^{k-1} R(y)-{\partial}^{k-1}R(x))$, belongs to $\mathcal{H}_{0}$. By using Young’s inequality again, we get the desired bound. Finally, the bound for $F_3$ is easier to obtain than the one for $F_2$. Step 2 ------ If we substitute $R = 1$ in the $F_{i}$, the only term which is not immediately 0 is the first part of $F_{1}$. Therefore, we are left to show that $$\begin{aligned} \int_{-\pi}^{\pi} \frac{\sin(y)}{\left(4\sin^{2}\left(\frac{y}{2}\right)\right)^{1/2}} dy = 0,\\\end{aligned}$$ but this is automatically true since the integrand is odd. Thus, $F(\Omega,1)=0$ for all $\Omega\in {\Bbb{R}}$. Step 3 ------ We need to prove the existence and the continuity of the Gateaux derivatives ${\partial}_\Omega F(\Omega,R)$, ${\partial}_R F(\Omega,R)$ and ${\partial}_{\Omega,R}F(\Omega,R)$. The most difficult part is to show the existence and continuity of ${\partial}_R F_i(R)$ for $i=1,2,3$, since the dependence on $\Omega$ is linear and the rest follows easily.
\[gatoderivada\] For all $R-1\in V^r$ and for all $h\in X^{k+\log}$ such that $||h||_{X^{k+\log}}=1$ we have that $$\lim_{t \to 0}\frac{F_i(R+th)-F_i(R)}{t}= D_i[R]h\quad \text{in $Y^{k-1}$},$$ where $$\begin{aligned} D_1[R] h =& -\frac{h(x)}{2\pi R(x)^2}\int \frac{\sin(x-y)}{\left((R(x)-R(y))^2+4R(x)R(y)\sin^2\left(\frac{x-y}{2}\right)\right)^\frac{1}{2}} \left(R(x)R(y)+R'(x)R'(y)\right)dy\\ & +\frac{1}{2\pi R(x)}\int \frac{\sin(x-y)}{\left((R(x)-R(y))^2+4R(x)R(y)\sin^2\left(\frac{x-y}{2}\right)\right)^\frac{1}{2}}\\&\times(h(x)R(y)+h(y)R(x)+(h'(x)R'(y)+h'(y)R'(x)))dy\\ &-\frac{1}{2\pi R(x)}\int \frac{\sin(x-y)(R(x)R(y)+R'(x)R'(y))}{\left((R(x)-R(y))^2+4R(x)R(y)\sin^2\left(\frac{x-y}{2}\right)\right)^\frac{3}{2}}\\ &\times\left((R(x)-R(y))(h(x)-h(y))+2(h(x)R(y)+h(y)R(x))\sin^2\left(\frac{x-y}{2}\right)\right)dy\\ D_2[R] h = & \frac{1}{2\pi}\int\frac{\cos(x-y)}{\left((R(x)-R(y))^2+4R(x)R(y)\sin^2\left(\frac{x-y}{2}\right)\right)^\frac{1}{2}}\left(h'(y)-h'(x)\right)dy\\ &-\frac{1}{2\pi}\int\frac{\cos(x-y)(R'(y)-R'(x))}{\left((R(x)-R(y))^2+ 4R(x)R(y)\sin^2\left(\frac{x-y}{2}\right)\right)^\frac{3}{2}}\\ & \times \left((R(x)-R(y))(h(x)-h(y))+2(h(x)R(y)+h(y)R(x))\sin^2\left(\frac{x-y}{2}\right)\right)dy\\ D_3[R]h= & \frac{h'(x)}{2\pi R(x)}\int\frac{\cos(x-y)}{\left(\left(R(x)-R(y)\right)^2+4R(x)R(y)\sin^2\left(\frac{x-y}{2}\right)\right)^\frac{1}{2}}(R(x)-R(y))dy\\ &-\frac{R'(x) h(x)}{2\pi R(x)^2}\int\frac{\cos(x-y)}{\left(\left(R(x)-R(y)\right)^2+4R(x)R(y)\sin^2\left(\frac{x-y}{2}\right)\right)^\frac{1}{2}} (R(x)-R(y)) dy \\ &+\frac{R'(x)}{2\pi R(x)}\int\frac{\cos(x-y)}{\left(\left(R(x)-R(y)\right)^2+4R(x)R(y)\sin^2\left(\frac{x-y}{2}\right)\right)^\frac{1}{2}}(h(x)-h(y))dy\\ &-\frac{R'(x)}{2\pi R(x)}\int \frac{\cos(x-y)(R(x)-R(y))}{\left((R(x)-R(y))^2+ 4R(x)R(y)\sin^2\left(\frac{x-y}{2}\right)\right)^\frac{3}{2}}\\ & \times \left((R(x)-R(y))(h(x)-h(y))+2(h(x)R(y)+h(y)R(x))\sin^2\left(\frac{x-y}{2}\right)\right)dy.\end{aligned}$$ Moreover, $D_i[R] h$ are continuous in $R$.
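Before turning to the proof, the formula for $D_2$ can be sanity-checked against a finite difference of $F_2$ at the disk $R\equiv 1$, where only the first integral of $D_2[1]h$ survives (the test direction $h=\cos(2x)$, the step $t$, and the quadrature grid are assumptions of this check, not data from the text):

```python
import math

def f2(Rf, Rpf, x, N=4096):
    """F_2 at radius Rf (with derivative Rpf) and angle x, midpoint rule;
    the offset grid never hits y = x, where the integrand has a kink."""
    h = 2 * math.pi / N
    s = 0.0
    for i in range(N):
        y = x + (i + 0.5) * h
        d = math.sqrt((Rf(x) - Rf(y)) ** 2
                      + 4 * Rf(x) * Rf(y) * math.sin((x - y) / 2) ** 2)
        s += math.cos(x - y) * (Rpf(y) - Rpf(x)) / d
    return s * h / (2 * math.pi)

def hf(x):  return math.cos(2 * x)       # test direction (assumption)
def hpf(x): return -2 * math.sin(2 * x)

def d2_disk(x, N=4096):
    """D_2[1]h: at R = 1 the second integral vanishes since R' = 0."""
    h = 2 * math.pi / N
    s = 0.0
    for i in range(N):
        y = x + (i + 0.5) * h
        s += math.cos(x - y) * (hpf(y) - hpf(x)) / (2 * abs(math.sin((x - y) / 2)))
    return s * h / (2 * math.pi)

t, x = 1e-5, 0.9
fd = f2(lambda z: 1 + t * hf(z), lambda z: t * hpf(z), x) / t
print(fd, d2_disk(x))  # agree up to O(t) plus quadrature error
```

Note also that $F_2(1) = 0$, so the finite difference reduces to $F_2(1+th)/t$.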
We will focus on the term $F_2(R)$ since it is the one that requires a special treatment with respect to the case ${\alpha}< 1$ discussed in [@Hassainia-Hmidi:v-states-generalized-sqg]. We need to prove that $$\lim_{t\to 0} \left|\left| \frac{F_2(R+th)-F_2(R)}{t}-D_2[R]h\right|\right|_{H^{k-1}}=0.$$ In order to do it we will use the following notation. For a general function $f(x)$ we will write $$\begin{aligned} \Delta f = & f(x)-f(y),\\ f=&f(x), \\ \overline{f}=& f(y),\end{aligned}$$ and also we will use the following abbreviations $$\begin{aligned} DR= & {\Delta R}^2+4 R{\overline{R}}\sin^2\left(\frac{x-y}{2}\right),\\ DRh=& ({\Delta R}+t{\Delta h})^2+4 (R+th)({\overline{R}}+t{\overline{h}}) \sin^2\left(\frac{x-y}{2}\right).\end{aligned}$$ We decompose the term inside the $H^{k-1}$-norm in the following way $$\begin{aligned} &\frac{F_2(R+th)-F_2(R)}{t}-D_2[R]h= \\ & \frac{1}{2\pi t}\int \cos(x-y)(R'(x)-R'(y))\\ &\times \left(\frac{1}{\left(\left({\Delta R}+t{\Delta h}\right)^2+4(R+th)({\overline{R}}+t{\overline{h}})\sin^2\left(\frac{x-y}{2}\right)\right)^\frac{1}{2}}- \frac{1}{\left(\left({\Delta R}\right)^2+4R{\overline{R}}\sin^2\left(\frac{x-y}{2}\right)\right)^\frac{1}{2}}\right.\\ &\left. + t \frac{{\Delta R}{\Delta h}+2(h{\overline{R}}+{\overline{h}}R)\sin^2\left(\frac{x-y}{2}\right)}{\left(\left({\Delta R}\right)^2+4R{\overline{R}}\sin^2\left(\frac{x-y}{2}\right)\right)^\frac{3}{2}}\right)dy\\ &+\frac{1}{2\pi}\int \cos(x-y)(h'(x)-h'(y))\\ &\times\left(\frac{1}{\left(\left({\Delta R}+t{\Delta h}\right)^2+4(R+th)({\overline{R}}+t{\overline{h}})\sin^2\left(\frac{x-y}{2}\right)\right)^\frac{1}{2}} -\frac{1}{\left(\left({\Delta R}\right)^2+4R{\overline{R}}\sin^2\left(\frac{x-y}{2}\right)\right)^\frac{1}{2}} \right)dy. 
&\equiv I_1+I_2\end{aligned}$$ We will deal with $I_2$ first, $$\begin{aligned} I_2=&\frac{1}{2\pi}\int \cos(x-y)(h'(x)-h'(y))\\ &\times\left(\frac{1}{\left(\left({\Delta R}+t{\Delta h}\right)^2+4(R+th)({\overline{R}}+t{\overline{h}})\sin^2\left(\frac{x-y}{2}\right)\right)^\frac{1}{2}} -\frac{1}{\left(\left({\Delta R}\right)^2+4R{\overline{R}}\sin^2\left(\frac{x-y}{2}\right)\right)^\frac{1}{2}} \right)dy\\ =&\frac{1}{2\pi}\int \cos(x-y)(h'(x)-h'(y))\left(\frac{DR-DRh}{DR^\frac{1}{2}DRh^\frac{1}{2}\left(DR^\frac{1}{2}+DRh^\frac{1}{2}\right)}\right)dy\\ =&\frac{1}{2\pi}\int \cos(x-y)(h'(x)-h'(y)) \left(\frac{-2t{\Delta R}{\Delta h}-t^2{\Delta h}^2-4(t(h{\overline{R}}+{\overline{h}}R)+t^2h{\overline{h}}) \sin^2\left(\frac{x-y}{2}\right)}{DR^\frac{1}{2}DRh^\frac{1}{2}\left(DR^\frac{1}{2}+DRh^\frac{1}{2}\right)}\right)dy\\ &\equiv I_{21}+I_{22}+I_{23}+I_{24}.\end{aligned}$$ These four terms can be treated in a similar way. We just give some details about $I_{21}$, $$\begin{aligned} I_{21}=-\frac{t}{\pi}\int \cos(x-y)(h'(x)-h'(y)) \left(\frac{{\Delta R}{\Delta h}}{DR^\frac{1}{2}DRh^\frac{1}{2}\left(DR^\frac{1}{2}+DRh^\frac{1}{2}\right)}\right)dy.\end{aligned}$$ Now we will take $k-1$ derivatives with respect to $x$. 
As before, we first make the change $x-y\mapsto y$, then we apply ${\partial}_x^{k-1}$ and finally we make the change $y\mapsto x-y$, thus $$\begin{aligned} {\partial}^{k-1} I_{21}=& -\frac{t}{\pi}\int \cos(x-y)({\partial}^kh(x)-{\partial}^k h(y))\left(\frac{{\Delta R}{\Delta h}}{DR^\frac{1}{2}DRh^\frac{1}{2}\left(DR^\frac{1}{2}+DRh^\frac{1}{2}\right)}\right)dy\\ &+ \text{ l.o.t}\end{aligned}$$ Now we can decompose the kernel in the previous integral in the following way $$\begin{aligned} &\frac{{\Delta R}{\Delta h}}{DR^\frac{1}{2}DRh^\frac{1}{2}\left(DR^\frac{1}{2}+DRh^\frac{1}{2}\right)}=\tilde{K}_{S}\\& +\frac{R'h'}{(R'^2+R^2)^\frac{1}{2}((R'+th')^2+(R+th)^2)^\frac{1}{2}((R'^2+R^2)^\frac{1}{2}+((R'+th')^2+(R+th)^2)^\frac{1}{2})} \frac{1}{2\left|\sin\left(\frac{x-y}{2}\right)\right|},\end{aligned}$$ where we can check that, for $t$ small enough, $\tilde{K}_{S} \in \mathcal{H}_0$. From this decomposition and for $t$ small enough we have that $$\begin{aligned} ||{\partial}^{k-1}I_{21}||_{L^2}\leq t C(||R||_{X^{k+\log}},r).\end{aligned}$$ Now let’s deal with $I_1$. 
We proceed again with the same strategy as before in order to take $k-1$ derivatives, in such a way that $$\begin{aligned} {\partial}^{k-1}I_1=&\frac{1}{2\pi t}\int \cos(x-y)({\partial}^k R(x)-{\partial}^k R(y))\left(\frac{1}{DRh^\frac{1}{2}}-\frac{1}{DR^\frac{1}{2}}+t\frac{{\Delta R}{\Delta h}+2(h{\overline{R}}+{\overline{h}}R)\sin^2\left(\frac{x-y}{2}\right)}{DR^\frac{3}{2}}\right)dy\\ &+ \text{ l.o.t}.\end{aligned}$$ Neglecting terms of order $t^{2}$, which are absorbed into the kernel $K[R,h,t]$ below, one can write $$\begin{aligned} &\frac{1}{DRh^\frac{1}{2}}-\frac{1}{DR^\frac{1}{2}}+t\frac{{\Delta R}{\Delta h}+2(h{\overline{R}}+{\overline{h}}R)\sin^2\left(\frac{x-y}{2}\right)}{DR^\frac{3}{2}}\\ & = t\frac{-{\Delta R}{\Delta h}-2(h{\overline{R}}+{\overline{h}}R)\sin^2\left(\frac{x-y}{2}\right)}{DR^\frac{1}{2} DRh^\frac{1}{2}\frac{DRh^\frac{1}{2}+DR^\frac{1}{2}}{2} } +t\frac{{\Delta R}{\Delta h}+2(h{\overline{R}}+{\overline{h}}R)\sin^2\left(\frac{x-y}{2}\right)}{DR^\frac{3}{2}}\\ & = t \left({\Delta R}{\Delta h}+2(h{\overline{R}}+{\overline{h}}R)\sin^2\left(\frac{x-y}{2}\right)\right)\left(\frac{1}{DR^\frac{3}{2}}-\frac{1}{DR^\frac{1}{2} DRh^\frac{1}{2}\frac{DRh^\frac{1}{2}+DR^\frac{1}{2}}{2}}\right),\end{aligned}$$ and also $$\begin{aligned} \frac{1}{DR^\frac{3}{2}}-\frac{1}{DR^\frac{1}{2} DRh^\frac{1}{2}\frac{DRh^\frac{1}{2}+DR^\frac{1}{2}}{2}}= \frac{1}{DR^\frac{1}{2}}\left(\frac{DRh^\frac{1}{2}\frac{DRh^\frac{1}{2}+DR^\frac{1}{2}}{2}-DR}{DR DRh^\frac{1}{2}\frac{DRh^\frac{1}{2}+DR^\frac{1}{2}}{2}}\right),\end{aligned}$$ where $$\begin{aligned} DRh^\frac{1}{2}\frac{DRh^\frac{1}{2}+DR^\frac{1}{2}}{2}-DR& = \frac{1}{2}DRh-\frac{1}{2}DR+\frac{1}{2}(DRh^\frac{1}{2}-DR^\frac{1}{2})DR^\frac{1}{2}\\ &= \frac{1}{2}(DRh-DR)+\frac{DR^\frac{1}{2}}{2(DRh^\frac{1}{2}+DR^\frac{1}{2})}(DRh-DR).\end{aligned}$$ From these formulas it is easy to see that $$\begin{aligned} &\frac{1}{DRh^\frac{1}{2}}-\frac{1}{DR^\frac{1}{2}}+t\frac{{\Delta R}{\Delta h}+2(h{\overline{R}}+{\overline{h}}R)\sin^2\left(\frac{x-y}{2}\right)}{DR^\frac{3}{2}}=t^2K[R,h,t]\end{aligned}$$
The $L^2$-bound for the term coming from the kernel $K[R,h,t]$ can be performed in a similar way as we did before. To prove the continuity of $D_2[R] h$ we notice that, for $R-1$, $r-1$ in $V^r$ we have that $$\begin{aligned} \frac{1}{DR^\frac{1}{2}}-\frac{1}{Dr^\frac{1}{2}}&=\frac{\Delta r^2 +4r\overline{r}\sin^2\left(\frac{x-y}{2}\right)-{\Delta R}^2-4R{\overline{R}}\sin^2\left(\frac{x-y}{2}\right)} {DR^\frac{1}{2}Dr^\frac{1}{2}\left(DR^\frac{1}{2}+Dr^\frac{1}{2}\right)}\\ &=\frac{(\Delta r+{\Delta R})\Delta(r-R)+ 4\left((r-R)\overline{r}+(\overline{r}-\overline{R})R\right)\sin^2\left(\frac{x-y}{2}\right)} {DR^\frac{1}{2}Dr^\frac{1}{2}\left(DR^\frac{1}{2}+Dr^\frac{1}{2}\right)}.\end{aligned}$$ From this formula we obtain the estimate $$\begin{aligned} ||D_2[R] h -D_2[r] h ||_{H^{k-1}}\leq C\left(|| R||_{X^{k+\log}},|| r||_{X^{k+\log}},r\right)|| R-r||_{X^{k+\log}},\end{aligned}$$ which proves the continuity of $D_2[R]h$. The rest of the estimates can be performed in a similar fashion. Step 4 ------ The calculations carried out in this subsection will be more general and include the full range $1 \leq {\alpha}< 2$ instead of ${\alpha}= 1$. They will be used in the next section. Before proceeding with Step 4, we compute the linearization of $F$ around the disk ($R(x) \equiv 1$) in the direction $h(x)$.
By taking $R=1$ in Lemma \[gatoderivada\] one sees that this linearization is equal to $$\begin{aligned} & \Omega h'(x) - C({\alpha}) \int \frac{\sin(y)(h(x-y)+h(x))}{\left(4\sin^2\left(\frac{y}{2}\right)\right)^{{\alpha}/2}}dy + \frac{{\alpha}}{2} C({\alpha}) \int \frac{\sin(y)(h(x-y)+h(x))}{\left(4\sin^2\left(\frac{y}{2}\right)\right)^{{\alpha}/2}}dy + C({\alpha}) \int \frac{\cos(y)(h'(x)-h'(x-y))}{\left(4\sin^2\left(\frac{y}{2}\right)\right)^{{\alpha}/2}}dy \\ & = \Omega h'(x) + \left(\frac{{\alpha}}{2}-1\right)C({\alpha}) \int \frac{\sin(y)h(x-y)}{\left(4\sin^2\left(\frac{y}{2}\right)\right)^{{\alpha}/2}}dy + C({\alpha}) \int \frac{\cos(y)(h'(x)-h'(x-y))}{\left(4\sin^2\left(\frac{y}{2}\right)\right)^{{\alpha}/2}}dy, \\\end{aligned}$$ where we have used that $$\begin{aligned} \int \frac{\sin(y)}{(4\sin^{2}\left(\frac{y}{2}\right))^{{\alpha}/2+1}} dy = 0. \\\end{aligned}$$ \[representationOperator\] Let ${\alpha}\geq 1$, and let $\displaystyle h = \sum_{k=1}^{\infty}a_k \cos(kx)$. Then, $$\begin{aligned} {\partial}_{R}F(\Omega,1)(h) = -\sum_{k=1}^{\infty} a_k[k(\Omega-\Omega_k)] \sin(kx),\end{aligned}$$ where $\Omega_k$ is the dispersion set given by $$\begin{aligned} \left\{ \begin{array}{cc} \displaystyle -2^{{\alpha}-1} \frac{\Gamma\left(1-{\alpha}\right)}{\left(\Gamma\left(1-\frac{{\alpha}}{2}\right)\right)^{2}}\left(\frac{\Gamma\left(1+\frac{{\alpha}}{2}\right)}{\Gamma\left(2-\frac{{\alpha}}{2}\right)} - \frac{\Gamma\left(k+\frac{{\alpha}}{2}\right)}{\Gamma\left(1+k-\frac{{\alpha}}{2}\right)}\right) & \alpha \neq 1 \\ \displaystyle -\frac{2}{\pi} \sum_{j=2}^{k} \frac{1}{2j-1}& \alpha = 1 \end{array} \right.\end{aligned}$$ The case ${\alpha}< 1$ was already covered by [@Hassainia-Hmidi:v-states-generalized-sqg]. Our proof goes the same way regardless of the value of $0 < {\alpha}< 2$. However, the expression of $\Omega_m$ is also valid in the range ${\alpha}> 1$.
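The two branches of the dispersion set are consistent with each other: as ${\alpha}\to 1$ the Gamma-quotient expression converges to the finite sum. A short numerical check (the value ${\alpha}=1+10^{-6}$ used to approach the limit is an assumption of the check):

```python
import math

def omega_gamma(k, a):
    """Dispersion value for alpha = a != 1, via the Gamma-quotient formula."""
    g = math.gamma
    pref = -2 ** (a - 1) * g(1 - a) / g(1 - a / 2) ** 2
    return pref * (g(1 + a / 2) / g(2 - a / 2) - g(k + a / 2) / g(1 + k - a / 2))

def omega_sum(k):
    """Dispersion value for alpha = 1, via the finite sum."""
    return -(2 / math.pi) * sum(1.0 / (2 * j - 1) for j in range(2, k + 1))

# the alpha != 1 expression tends to the alpha = 1 sum as alpha -> 1
for k in (2, 3, 5):
    print(k, omega_gamma(k, 1 + 1e-6), omega_sum(k))
# a value in the range 1 < alpha < 2, as in the existence theorem
print(omega_gamma(2, 1.5))
```

The same routine evaluates the bifurcation values $\Omega_m$ of the existence theorems, since they are the $k=m$ entries of the dispersion set.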
There is a slight discrepancy in the sign, caused by the different choices of $\theta_2 - \theta_1$. In order to calculate the critical rotating velocities, we shall look at each of the contributions to the $k$-th modes. Let $h \in X^{k+\log}_{m}$ or $X^{k}_{m}$ (depending on the value of $\alpha$) be given by $$\begin{aligned} h(x) = \sum_{j=1}^{\infty} a_j \cos(jx). \end{aligned}$$ Then, the contribution of the derivative term $\Omega h'(x)$ to the $k$-th (sine) mode is given by $-k a_{k} \Omega$. Let’s look at the other terms’ contribution. The first one will be: $$\begin{aligned} & \left(\frac{{\alpha}}{2}-1\right)C({\alpha}) \int \frac{\sin(y)h(x-y)}{\left(4\sin^2\left(\frac{y}{2}\right)\right)^{{\alpha}/2}}dy \\ & = 2^{-{\alpha}+1}\left(\frac{{\alpha}}{2}-1\right)C({\alpha}) \int_{0}^{2\pi} \sum_{k=1}^{\infty}a_{k}(\cos(kx-ky))\cos\left(\frac{y}{2}\right)\left(\sin\left(\frac{y}{2}\right)\right)^{-{\alpha}+1} dy \\\end{aligned}$$ Using that $$\begin{aligned} \cos\left(\frac{y}{2}\right) \left(\sin\left(\frac{y}{2}\right)\right)^{1-{\alpha}} = \frac{2}{2-{\alpha}} {\partial}_y\left(\sin\left(\frac{y}{2}\right)^{2-{\alpha}}\right)\end{aligned}$$ we can integrate by parts to obtain $$\begin{aligned} & 2^{-{\alpha}+1}\left(\frac{{\alpha}}{2}-1\right)C({\alpha}) \int_{0}^{2\pi} \sum_{k=1}^{\infty}\frac{-2k}{2-{\alpha}} a_{k}(\sin(kx-ky))\left(\sin\left(\frac{y}{2}\right)\right)^{2-{\alpha}} dy \\ & = 2^{-{\alpha}+1}\left(\frac{{\alpha}}{2}-1\right)C({\alpha}) \sum_{k=1}^{\infty}\frac{-2k}{2-{\alpha}} a_{k}\sin(kx)\int_{0}^{2\pi} \cos(ky)\left(\sin\left(\frac{y}{2}\right)\right)^{2-{\alpha}} dy \\ & = 2^{-{\alpha}+1}\left(\frac{{\alpha}}{2}-1\right)C({\alpha}) \sum_{k=1}^{\infty}\frac{-2k}{2-{\alpha}} a_{k}\sin(kx) \frac{2\pi \cos(k \pi) \Gamma(3-{\alpha})}{2^{2-{\alpha}}\Gamma\left(2+k-\frac{{\alpha}}{2}\right)\Gamma\left(2-k-\frac{{\alpha}}{2}\right)}, \\\end{aligned}$$ where we have used the following identity (see
[@Magnus-Oberhettinger:formeln-satze-speziellen-funktionen]): $$\begin{aligned} \label{integralgammas} \int_{0}^{\pi} (\sin(\eta))^{x}e^{iy\eta}d\eta = \frac{\pi e^{i\frac{\pi y}{2}}\Gamma(x+1)}{2^{x}\Gamma\left(1+\frac{x+y}{2}\right)\Gamma\left(1+\frac{x-y}{2}\right)}, \quad \forall x >-1, \quad \forall y \in \mathbb{R}.\end{aligned}$$ Extracting the $k$-th (sine) mode contribution, we obtain: $$\begin{aligned} 2^{-{\alpha}+1}\left(\frac{{\alpha}}{2}-1\right)C({\alpha}) \frac{-2k}{2-{\alpha}} a_{k}\frac{2 \pi \cos(k \pi) \Gamma(3-{\alpha})}{2^{2-{\alpha}}\Gamma\left(2+k-\frac{{\alpha}}{2}\right)\Gamma\left(2-k-\frac{{\alpha}}{2}\right)}.\end{aligned}$$ In the particular case ${\alpha}= 1$, by using the identity $$\begin{aligned} \Gamma(z)\Gamma(1-z) = \frac{\pi}{\sin(\pi z)},\end{aligned}$$ the contribution amounts to: $$\begin{aligned} a_k \frac{k}{2} \frac{1}{2\pi} \frac{2 \pi (-1)^{k}}{\Gamma\left(\frac32+k\right)\Gamma\left(\frac32-k\right)} = -\frac{a_k}{\pi} \frac{2k}{4k^2 - 1}.\end{aligned}$$ We move now to the second term. 
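As a numerical aside, both the integral identity \eqref{integralgammas} and the ${\alpha}=1$ reduction via the reflection formula can be checked with a few lines (the quadrature size and the sample parameters $x=0.5$, $y=1.3$ are our choices):

```python
import cmath
import math

def sine_power_integral(x, y, N=200000):
    """Midpoint rule for the integral of sin(eta)^x * e^{i y eta} over [0, pi]."""
    h = math.pi / N
    return h * sum(math.sin((i + 0.5) * h) ** x
                   * cmath.exp(1j * y * (i + 0.5) * h) for i in range(N))

def closed_form(x, y):
    g = math.gamma
    return (math.pi * cmath.exp(1j * math.pi * y / 2) * g(x + 1)
            / (2 ** x * g(1 + (x + y) / 2) * g(1 + (x - y) / 2)))

print(sine_power_integral(0.5, 1.3), closed_form(0.5, 1.3))

# the alpha = 1 mode contribution, simplified via Gamma(z)Gamma(1-z) = pi/sin(pi z)
for k in range(1, 6):
    lhs = (k / 2) * (1 / (2 * math.pi)) * 2 * math.pi * (-1) ** k \
          / (math.gamma(1.5 + k) * math.gamma(1.5 - k))
    rhs = -(1 / math.pi) * 2 * k / (4 * k ** 2 - 1)
    print(k, lhs, rhs)
```

Note that `math.gamma` accepts the negative non-integer arguments $3/2-k$ appearing here; its only poles are at the non-positive integers.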
We have $$\begin{aligned} \label{Omegasecondterm} & C({\alpha}) \int \frac{\cos(y)(h'(x)-h'(x-y))}{\left(4\sin^2\left(\frac{y}{2}\right)\right)^{{\alpha}/2}}dy \nonumber \\ & = 2^{-{\alpha}} C({\alpha}) \int \sum_{k=1}^{\infty} (-k a_k )\frac{\cos(y)(\sin(kx) - \sin(kx-ky))}{\left(\sin^2\left(\frac{y}{2}\right)\right)^{{\alpha}/2}}dy \nonumber \\ & = 2^{-{\alpha}} C({\alpha}) \int \sum_{k=1}^{\infty} (-k a_k )(\sin(kx) - \sin(kx-ky))\left(\sin\left(\frac{y}{2}\right)\right)^{-{\alpha}}dy \nonumber \\ & + 2^{-{\alpha}+1} C({\alpha}) \int \sum_{k=1}^{\infty} k a_k(\sin(kx) - \sin(kx-ky))\left(\sin\left(\frac{y}{2}\right)\right)^{2-{\alpha}}dy \nonumber \\\end{aligned}$$ In order to compute the last two integrals we will use the following lemma: \[LemmaICIS\] Let ${\alpha}\in (0,2), k \in \mathbb{N}$ and let $IS_{k}(x)$ and $IC_{k}(x)$ be defined as $$\begin{aligned} IS_k(x) = \int_{0}^{2\pi} \frac{\sin(kx)-\sin(kx-ky)}{\sin\left(\frac{y}{2}\right)^{{\alpha}}} dy, \quad IC_k(x) = \int_{0}^{2\pi} \frac{\cos(kx)-\cos(kx-ky)}{\sin\left(\frac{y}{2}\right)^{{\alpha}}} dy \\\end{aligned}$$ Then, if ${\alpha}\neq 1$: $$\begin{aligned} IS_{k} = \sin(kx)2^{{\alpha}} \frac{2\pi \Gamma\left(1-{\alpha}\right)}{\Gamma\left(\frac{{\alpha}}{2}\right)\Gamma\left(1-\frac{{\alpha}}{2}\right)}\left(\frac{\Gamma\left(\frac{{\alpha}}{2}\right)}{\Gamma\left(1-\frac{{\alpha}}{2}\right)} - \frac{\Gamma\left(k+\frac{{\alpha}}{2}\right)}{\Gamma\left(1+k-\frac{{\alpha}}{2}\right)}\right), \\ IC_{k} = \cos(kx)2^{{\alpha}} \frac{2\pi \Gamma\left(1-{\alpha}\right)}{\Gamma\left(\frac{{\alpha}}{2}\right)\Gamma\left(1-\frac{{\alpha}}{2}\right)}\left(\frac{\Gamma\left(\frac{{\alpha}}{2}\right)}{\Gamma\left(1-\frac{{\alpha}}{2}\right)} - \frac{\Gamma\left(k+\frac{{\alpha}}{2}\right)}{\Gamma\left(1+k-\frac{{\alpha}}{2}\right)}\right),\end{aligned}$$ and if ${\alpha}= 1$: $$\begin{aligned} IS_{k} = \sin(kx) \sum_{m=1}^{k} \frac{8}{2m-1}, \quad IC_{k} = \cos(kx) \sum_{m=1}^{k} 
\frac{8}{2m-1}.\end{aligned}$$ The proof is done by induction. We will find a recurrence for $IS_{k}, IC_{k}$ in terms of $IS_{k-1}, IC_{k-1}$ and then apply the induction hypothesis. We start with $IS_{k}(x)$. Using the addition formulas for the sine and cosine: $$\begin{aligned} IS_{k}(x) & = \cos(x) IS_{k-1}(x) + \sin(x)IC_{k-1}(x) \\ & + \int_{0}^{2\pi} \frac{\sin((k-1)x-(k-1)y)(\cos(x) - \cos(x-y))}{\sin\left(\frac{y}{2}\right)^{{\alpha}}} dy \\ & + \int_{0}^{2\pi} \frac{\cos((k-1)x-(k-1)y)(\sin(x) - \sin(x-y))}{\sin\left(\frac{y}{2}\right)^{{\alpha}}} dy \\ & = \cos(x) IS_{k-1}(x) + \sin(x)IC_{k-1}(x) + J_{1}(x) + J_{2}(x)\end{aligned}$$ where $$\begin{aligned} J_{1}(x) & = 2\cos(x)\int_{0}^{2\pi} \sin((k-1)x-(k-1)y)\sin\left(\frac{y}{2}\right)^{2-{\alpha}} dy \\ & -2\sin(x)\int_{0}^{2\pi} \sin((k-1)x-(k-1)y)\cos\left(\frac{y}{2}\right)\sin\left(\frac{y}{2}\right)^{1-{\alpha}} dy\\ & = 2\cos(x)\int_{0}^{2\pi} \sin((k-1)x)\cos((k-1)y)\sin\left(\frac{y}{2}\right)^{2-{\alpha}} dy \\ & +2\sin(x)\int_{0}^{2\pi} \cos((k-1)x)\sin((k-1)y)\cos\left(\frac{y}{2}\right)\sin\left(\frac{y}{2}\right)^{1-{\alpha}} dy \\ & = J_{1,1}(x) + J_{1,2}(x)\end{aligned}$$ Writing the cosine as a sum of exponentials and applying formula \eqref{integralgammas}, we get $$\begin{aligned} J_{1,1}(x) & = 4 (-1)^{k-1} \frac{\pi}{2^{2-{\alpha}}} \frac{\Gamma(3-{\alpha})}{\Gamma\left(1+k-\frac{{\alpha}}{2}\right)\Gamma\left(3-k-\frac{{\alpha}}{2}\right)}\sin((k-1)x)\cos(x)\end{aligned}$$ Concerning $J_{1,2}$, we first integrate by parts: $$\begin{aligned} J_{1,2} & = -\frac{4(k-1)}{2-{\alpha}}\int_{0}^{2\pi} \sin(x)\cos((k-1)x)\cos((k-1)y)\sin\left(\frac{y}{2}\right)^{2-{\alpha}} dy \\ & = -\frac{8(k-1)}{2-{\alpha}}\frac{(-1)^{k-1}\pi}{2^{2-{\alpha}}} \frac{\Gamma(3-{\alpha})}{\Gamma\left(1+k-\frac{{\alpha}}{2}\right)\Gamma\left(3-k-\frac{{\alpha}}{2}\right)}\cos((k-1)x)\sin(x)\end{aligned}$$ We move on to $J_2$: $$\begin{aligned} J_2(x) & = 2\sin(x)\int_{0}^{2\pi}
\cos((k-1)x-(k-1)y)\sin\left(\frac{y}{2}\right)^{2-{\alpha}} dy\\ & + 2\cos(x)\int_{0}^{2\pi} \cos((k-1)x-(k-1)y)\cos\left(\frac{y}{2}\right)\sin\left(\frac{y}{2}\right)^{1-{\alpha}} dy \\ & = 2\sin(x)\cos((k-1)x)\int_{0}^{2\pi} \cos((k-1)y)\sin\left(\frac{y}{2}\right)^{2-{\alpha}} dy\\ & + 2\cos(x)\sin((k-1)x)\int_{0}^{2\pi} \sin((k-1)y)\cos\left(\frac{y}{2}\right)\sin\left(\frac{y}{2}\right)^{1-{\alpha}} dy \\ & = 4 (-1)^{k-1} \frac{\pi}{2^{2-{\alpha}}} \frac{\Gamma(3-{\alpha})}{\Gamma\left(1+k-\frac{{\alpha}}{2}\right)\Gamma\left(3-k-\frac{{\alpha}}{2}\right)}\cos((k-1)x)\sin(x)\\ & -\frac{8(k-1)}{2-{\alpha}}\frac{(-1)^{k-1}\pi}{2^{2-{\alpha}}} \frac{\Gamma(3-{\alpha})}{\Gamma\left(1+k-\frac{{\alpha}}{2}\right)\Gamma\left(3-k-\frac{{\alpha}}{2}\right)}\sin((k-1)x)\cos(x)\end{aligned}$$ Adding up $J_1(x) + J_2(x)$: $$\begin{aligned} J_1(x) + J_2(x) & = \sin(kx)\frac{(-1)^{k-1}\pi}{2^{2-{\alpha}}} \frac{\Gamma(3-{\alpha})}{\Gamma\left(1+k-\frac{{\alpha}}{2}\right)\Gamma\left(3-k-\frac{{\alpha}}{2}\right)}\left(4 -\frac{8(k-1)}{2-{\alpha}} \right).\end{aligned}$$ We calculate the recurrence relation for $IC_k(x)$.
Using the same expansion as before, we obtain $$\begin{aligned} IC_{k}(x) & = \cos(x) IC_{k-1}(x) - \sin(x)IS_{k-1}(x) \\ & + \int_{0}^{2\pi} \frac{\cos((k-1)x-(k-1)y)(\cos(x) - \cos(x-y))}{\sin\left(\frac{y}{2}\right)^{{\alpha}}} dy \\ & - \int_{0}^{2\pi} \frac{\sin((k-1)x-(k-1)y)(\sin(x) - \sin(x-y))}{\sin\left(\frac{y}{2}\right)^{{\alpha}}} dy \\ & = \cos(x) IC_{k-1}(x) - \sin(x)IS_{k-1}(x) + K_1(x) + K_2(x)\end{aligned}$$ We continue with $K_1(x)$: $$\begin{aligned} K_1(x) & = 2\cos(x)\int_{0}^{2\pi} \cos((k-1)x-(k-1)y)\sin\left(\frac{y}{2}\right)^{2-{\alpha}} dy\\ & - 2\sin(x)\int_{0}^{2\pi} \cos((k-1)x-(k-1)y)\cos\left(\frac{y}{2}\right)\sin\left(\frac{y}{2}\right)^{1-{\alpha}} dy \\ & = 2\cos(x)\cos((k-1)x)\int_{0}^{2\pi} \cos((k-1)y)\sin\left(\frac{y}{2}\right)^{2-{\alpha}} dy\\ & - 2\sin(x)\sin((k-1)x)\int_{0}^{2\pi} \sin((k-1)y)\cos\left(\frac{y}{2}\right)\sin\left(\frac{y}{2}\right)^{1-{\alpha}} dy \\ & = 4 (-1)^{k-1} \frac{\pi}{2^{2-{\alpha}}} \frac{\Gamma(3-{\alpha})}{\Gamma\left(1+k-\frac{{\alpha}}{2}\right)\Gamma\left(3-k-\frac{{\alpha}}{2}\right)}\cos((k-1)x)\cos(x)\\ & +\frac{8(k-1)}{2-{\alpha}}\frac{(-1)^{k-1}\pi}{2^{2-{\alpha}}} \frac{\Gamma(3-{\alpha})}{\Gamma\left(1+k-\frac{{\alpha}}{2}\right)\Gamma\left(3-k-\frac{{\alpha}}{2}\right)}\sin((k-1)x)\sin(x)\end{aligned}$$ In a similar way, $K_2(x)$ is equal to: $$\begin{aligned} K_2(x) & = -2\sin(x)\int_{0}^{2\pi} \sin((k-1)x-(k-1)y)\sin\left(\frac{y}{2}\right)^{2-{\alpha}} dy\\ & - 2\cos(x)\int_{0}^{2\pi} \sin((k-1)x-(k-1)y)\cos\left(\frac{y}{2}\right)\sin\left(\frac{y}{2}\right)^{1-{\alpha}} dy \\ & = -2\sin(x)\sin((k-1)x)\int_{0}^{2\pi} \cos((k-1)y)\sin\left(\frac{y}{2}\right)^{2-{\alpha}} dy\\ & + 2\cos(x)\cos((k-1)x)\int_{0}^{2\pi} \sin((k-1)y)\cos\left(\frac{y}{2}\right)\sin\left(\frac{y}{2}\right)^{1-{\alpha}} dy \\ & = -4 (-1)^{k-1} \frac{\pi}{2^{2-{\alpha}}}
\frac{\Gamma(3-{\alpha})}{\Gamma\left(1+k-\frac{{\alpha}}{2}\right)\Gamma\left(3-k-\frac{{\alpha}}{2}\right)}\sin((k-1)x)\sin(x)\\ & -\frac{8(k-1)}{2-{\alpha}}\frac{(-1)^{k-1}\pi}{2^{2-{\alpha}}} \frac{\Gamma(3-{\alpha})}{\Gamma\left(1+k-\frac{{\alpha}}{2}\right)\Gamma\left(3-k-\frac{{\alpha}}{2}\right)}\cos((k-1)x)\cos(x)\end{aligned}$$ Adding up $K_1(x)$ and $K_2(x)$: $$\begin{aligned} K_1(x) + K_2(x) & = 4 (-1)^{k-1} \frac{\pi}{2^{2-{\alpha}}} \frac{\Gamma(3-{\alpha})}{\Gamma\left(1+k-\frac{{\alpha}}{2}\right)\Gamma\left(3-k-\frac{{\alpha}}{2}\right)}\cos(kx)\\ & -\frac{8(k-1)}{2-{\alpha}}\frac{(-1)^{k-1}\pi}{2^{2-{\alpha}}} \frac{\Gamma(3-{\alpha})}{\Gamma\left(1+k-\frac{{\alpha}}{2}\right)\Gamma\left(3-k-\frac{{\alpha}}{2}\right)}\cos(kx) \\ & = \cos(kx)\frac{(-1)^{k-1}\pi}{2^{2-{\alpha}}} \frac{\Gamma(3-{\alpha})}{\Gamma\left(1+k-\frac{{\alpha}}{2}\right)\Gamma\left(3-k-\frac{{\alpha}}{2}\right)}\left(4 -\frac{8(k-1)}{2-{\alpha}} \right).\end{aligned}$$ We distinguish two cases. In the case ${\alpha}= 1$, we can simplify our formulas by using that $$\begin{aligned} \left.\frac{(-1)^{k-1}\pi}{2^{2-{\alpha}}} \frac{\Gamma(3-{\alpha})}{\Gamma\left(1+k-\frac{{\alpha}}{2}\right)\Gamma\left(3-k-\frac{{\alpha}}{2}\right)}\left(4 -\frac{8(k-1)}{2-{\alpha}} \right)\right|_{{\alpha}=1} = \left.(-1)^{k-1}\pi 2^{1+{\alpha}} \frac{\Gamma(2-{\alpha})}{\Gamma\left(1+k-\frac{{\alpha}}{2}\right)\Gamma\left(2-k-\frac{{\alpha}}{2}\right)}\right|_{{\alpha}=1} \\ = (-1)^{k-1}\frac{4\pi}{\Gamma\left(\frac{1}{2}+k\right)\Gamma\left(\frac{3}{2}-k\right)} = (-1)^{k-1}\frac{4\pi}{\Gamma\left(\frac{1}{2}+k\right)\Gamma\left(\frac{1}{2}-k\right)\left(\frac{1}{2}-k\right)} = \frac{8}{2k-1},\end{aligned}$$ where in the last equality we have used the identity $$\begin{aligned} \Gamma(1-z)\Gamma(z) = \frac{\pi}{\sin(\pi z)}.\end{aligned}$$ Adding in $k$, we obtain the desired formulas for $IS, IC$: $$\begin{aligned} IS_{k} = \sin(kx) \sum_{m=1}^{k} \frac{8}{2m-1}, \quad IC_{k} = \cos(kx) 
\sum_{m=1}^{k} \frac{8}{2m-1}.\end{aligned}$$ For the other values of $\alpha$, we use induction. We start by checking the base case ($k = 1$): $$\begin{aligned} \frac{2^{{\alpha}} \pi \Gamma(3-{\alpha})}{\left(\Gamma\left(2-\frac{{\alpha}}{2}\right)\right)^2} & = \frac{2^{{\alpha}} 2\pi \Gamma(1-{\alpha})}{\Gamma\left(1-\frac{{\alpha}}{2}\right)}\left(\frac{1-{\alpha}}{\left(1-\frac{{\alpha}}{2}\right)\Gamma\left(1-\frac{{\alpha}}{2}\right)}\right) = \frac{2^{{\alpha}} 2\pi \Gamma(1-{\alpha})}{\Gamma\left(1-\frac{{\alpha}}{2}\right)}\left(\frac{1}{\Gamma\left(1-\frac{{\alpha}}{2}\right)}\left(\frac{\Gamma\left(\frac{{\alpha}}{2}\right)}{\Gamma\left(\frac{{\alpha}}{2}\right)}-\frac{\Gamma\left(1+\frac{{\alpha}}{2}\right)}{\left(1-\frac{{\alpha}}{2}\right)\Gamma\left(\frac{{\alpha}}{2}\right)}\right)\right) \\ & = \frac{2^{{\alpha}} 2\pi \Gamma(1-{\alpha})}{\Gamma\left(1-\frac{{\alpha}}{2}\right)\Gamma\left(\frac{{\alpha}}{2}\right)}\left(\frac{\Gamma\left(\frac{{\alpha}}{2}\right)}{\Gamma\left(1-\frac{{\alpha}}{2}\right)}-\frac{\Gamma\left(1+\frac{{\alpha}}{2}\right)}{\Gamma\left(2-\frac{{\alpha}}{2}\right)}\right)\end{aligned}$$ Finally, we do the induction step. We assume that the formula is true for $k-1$ ($k\geq 2$) and we show it for $k$. 
It is enough to check that: $$\begin{aligned} & & \frac{2^{{\alpha}} 2\pi \Gamma(1-{\alpha})}{\Gamma\left(1-\frac{{\alpha}}{2}\right)\Gamma\left(\frac{{\alpha}}{2}\right)}\left(\frac{\Gamma\left(\frac{{\alpha}}{2}\right)}{\Gamma\left(1-\frac{{\alpha}}{2}\right)}-\frac{\Gamma\left(k-1+\frac{{\alpha}}{2}\right)}{\Gamma\left(k-\frac{{\alpha}}{2}\right)}\right) & \\ & & + \frac{2^{{\alpha}} \pi \Gamma(3-{\alpha})(-1)^{k-1} \left(1 - \frac{2(k-1)}{2-{\alpha}}\right)}{\Gamma\left(k+1-\frac{{\alpha}}{2}\right)\Gamma\left(3-k-\frac{{\alpha}}{2}\right)}& = \frac{2^{{\alpha}} 2\pi \Gamma(1-{\alpha})}{\Gamma\left(1-\frac{{\alpha}}{2}\right)\Gamma\left(\frac{{\alpha}}{2}\right)}\left(\frac{\Gamma\left(\frac{{\alpha}}{2}\right)}{\Gamma\left(1-\frac{{\alpha}}{2}\right)}-\frac{\Gamma\left(k+\frac{{\alpha}}{2}\right)}{\Gamma\left(k+1-\frac{{\alpha}}{2}\right)}\right) \\ \Leftrightarrow & & \frac{2^{{\alpha}} 2\pi \Gamma(1-{\alpha})}{\Gamma\left(1-\frac{{\alpha}}{2}\right)\Gamma\left(\frac{{\alpha}}{2}\right)}\left(\frac{\Gamma\left(k+\frac{{\alpha}}{2}\right)}{\Gamma\left(k+1-\frac{{\alpha}}{2}\right)}-\frac{\Gamma\left(k-1+\frac{{\alpha}}{2}\right)}{\Gamma\left(k-\frac{{\alpha}}{2}\right)}\right) & = \frac{2^{{\alpha}} \pi (2-{\alpha})(1-{\alpha})\Gamma(1-{\alpha})(-1)^{k} \left(1 - \frac{2(k-1)}{2-{\alpha}}\right)}{\Gamma\left(k+1-\frac{{\alpha}}{2}\right)\Gamma\left(3-k-\frac{{\alpha}}{2}\right)} \\ \Leftrightarrow & & \frac{ 2}{\Gamma\left(1-\frac{{\alpha}}{2}\right)\Gamma\left(\frac{{\alpha}}{2}\right)}\frac{\Gamma\left(k+\frac{{\alpha}}{2}\right)}{\Gamma\left(k+1-\frac{{\alpha}}{2}\right)}\left(1-\frac{k-\frac{{\alpha}}{2}}{k-1+\frac{{\alpha}}{2}}\right) & = \frac{(2-{\alpha})(1-{\alpha})(-1)^{k} \left(1 - \frac{2(k-1)}{2-{\alpha}}\right)}{\Gamma\left(k+1-\frac{{\alpha}}{2}\right)\Gamma\left(3-k-\frac{{\alpha}}{2}\right)} \\ \Leftrightarrow & & \frac{2\Gamma\left(k-1+\frac{{\alpha}}{2}\right)}{\Gamma\left(1-\frac{{\alpha}}{2}\right)\Gamma\left(\frac{{\alpha}}{2}\right)} & = 
\frac{(-1)^{k+1}(4-{\alpha}-2k)}{\Gamma\left(3-k-\frac{{\alpha}}{2}\right)} \\ \Leftrightarrow & & \frac{\Gamma\left(k-1+\frac{{\alpha}}{2}\right)}{\Gamma\left(\frac{{\alpha}}{2}\right)} & = (-1)^{k+1}\left(2-k-\frac{{\alpha}}{2}\right)\frac{\Gamma\left(1-\frac{{\alpha}}{2}\right)}{\Gamma\left(3-k-\frac{{\alpha}}{2}\right)} \\ \Leftrightarrow & & \left(k-2+\frac{{\alpha}}{2}\right)\left(k-3+\frac{{\alpha}}{2}\right) \cdots \left(\frac{{\alpha}}{2}\right) & = (-1)^{k+1}\left(2-k-\frac{{\alpha}}{2}\right)\left(-\frac{{\alpha}}{2}\right)\left(-\frac{{\alpha}}{2}-1\right) \cdots \left(-\frac{{\alpha}}{2}-(k-3)\right),\end{aligned}$$ which is true. This finishes the proof. We insert the previous result in and extract the $k$-th mode contribution. In the case ${\alpha}= 1$: $$\begin{aligned} & a_k \frac{1}{4\pi} (-k) \sum_{m=1}^{k}\frac{8}{2m-1} + k a_k \frac{1}{2\pi} \frac{1}{2} \frac{2\pi \Gamma(2)}{\Gamma\left(-\frac12\right)\Gamma\left(\frac32\right)}\left(\frac{\Gamma\left(-\frac12\right)}{\Gamma\left(\frac32\right)} - \frac{\Gamma\left(k-\frac12\right)}{\Gamma\left(k+\frac32\right)}\right) \\ & = -k a_k \frac{2}{\pi}\sum_{m=1}^{k}\frac{1}{2m-1} + k a_k \frac{1}{2\pi}\left(4 + \frac{1}{k^2 - \frac{1}{4}}\right). \\\end{aligned}$$ Combining the sum of every contribution, they amount to $$\begin{aligned} -k a_k \Omega -\frac{a_k}{\pi} \frac{2k}{4k^2 - 1} -k a_k \frac{2}{\pi}\sum_{m=1}^{k}\frac{1}{2m-1} + k a_k \frac{1}{2\pi}\left(4 + \frac{1}{k^2 - \frac{1}{4}}\right) = -ka_k [\Omega - \Omega_k]. \\\end{aligned}$$ This proves the case ${\alpha}= 1$. 
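The ${\alpha}=1$ computation hinges on the reflection formula evaluated at half-integers; a short numerical check (Python stdlib; the range of $k$ tested is arbitrary):

```python
# Check that (-1)^(k-1) * 4*pi / (G(1/2 + k) * G(3/2 - k)) = 8 / (2k - 1),
# a consequence of the reflection formula G(z) G(1-z) = pi / sin(pi z).
from math import gamma, pi

for k in range(1, 8):
    # gamma(1.5 - k) is evaluated at negative half-integers, never at a pole
    val = (-1) ** (k - 1) * 4 * pi / (gamma(0.5 + k) * gamma(1.5 - k))
    assert abs(val - 8 / (2 * k - 1)) < 1e-9
```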
In the case ${\alpha}\neq 1$, the coefficient in front of the $k$-th mode is: $$\begin{aligned} & -k a_k \Omega + 2^{-{\alpha}+1}\left(\frac{{\alpha}}{2}-1\right)C({\alpha}) \frac{-2k}{2-{\alpha}} a_{k}\frac{2 \pi \cos(k \pi) \Gamma(3-{\alpha})}{2^{2-{\alpha}}\Gamma\left(2+k-\frac{{\alpha}}{2}\right)\Gamma\left(2-k-\frac{{\alpha}}{2}\right)} \\ & a_k 2^{-{\alpha}} C({\alpha}) (-k) 2^{{\alpha}} \frac{2\pi \Gamma(1-{\alpha})}{\Gamma\left(\frac{{\alpha}}{2}\right)\Gamma\left(1-\frac{{\alpha}}{2}\right)}\left(\frac{\Gamma\left(\frac{{\alpha}}{2}\right)}{\Gamma\left(1-\frac{{\alpha}}{2}\right)} - \frac{\Gamma\left(k+\frac{{\alpha}}{2}\right)}{\Gamma\left(1+k-\frac{{\alpha}}{2}\right)}\right)\\ & + a_k 2^{-{\alpha}+1} C({\alpha}) k 2^{{\alpha}-2} \frac{2\pi \Gamma(3-{\alpha})}{\Gamma\left(\frac{{\alpha}}{2}-1\right)\Gamma\left(2-\frac{{\alpha}}{2}\right)}\left(\frac{\Gamma\left(\frac{{\alpha}}{2}-1\right)}{\Gamma\left(2-\frac{{\alpha}}{2}\right)} - \frac{\Gamma\left(k-1+\frac{{\alpha}}{2}\right)}{\Gamma\left(2+k-\frac{{\alpha}}{2}\right)}\right).\end{aligned}$$ We can group the third and the fifth factor into $$\begin{aligned} & a_k 2^{-{\alpha}} C({\alpha}) (-k) 2^{{\alpha}} \frac{2\pi \Gamma(1-{\alpha})}{\Gamma\left(\frac{{\alpha}}{2}\right)\Gamma\left(1-\frac{{\alpha}}{2}\right)}\left(\frac{\Gamma\left(\frac{{\alpha}}{2}\right)}{\Gamma\left(1-\frac{{\alpha}}{2}\right)}\right) + a_k 2^{-{\alpha}+1} C({\alpha}) k 2^{{\alpha}-2} \frac{2\pi \Gamma(3-{\alpha})}{\Gamma\left(\frac{{\alpha}}{2}-1\right)\Gamma\left(2-\frac{{\alpha}}{2}\right)}\left(\frac{\Gamma\left(\frac{{\alpha}}{2}-1\right)}{\Gamma\left(2-\frac{{\alpha}}{2}\right)}\right) \\ = & k a_k 2^{{\alpha}- 1} \frac{\Gamma(1-{\alpha})}{\left(\Gamma\left(1-\frac{{\alpha}}{2}\right)\right)^{2}} \left(-\frac{\Gamma\left(\frac{{\alpha}}{2}\right)}{\Gamma\left(1-\frac{{\alpha}}{2}\right)} + \frac12 \frac{(2-{\alpha})(1-{\alpha}) 
\Gamma\left(\frac{{\alpha}}{2}\right)}{\left(1-\frac{{\alpha}}{2}\right)\Gamma\left(2-\frac{{\alpha}}{2}\right)}\right) \\ = & k a_k 2^{{\alpha}- 1} \frac{\Gamma(1-{\alpha})}{\left(\Gamma\left(1-\frac{{\alpha}}{2}\right)\right)^{2}} \frac{\Gamma\left(1+\frac{{\alpha}}{2}\right)}{\Gamma\left(2-\frac{{\alpha}}{2}\right)} \left(-\frac{1-\frac{{\alpha}}{2}}{\frac{{\alpha}}{2}} + \frac{1}{2}\frac{(2-{\alpha})(1-{\alpha})}{\left(1-\frac{{\alpha}}{2}\right)\left(\frac{{\alpha}}{2}\right)}\right) \\ = & -k a_k 2^{{\alpha}- 1} \frac{\Gamma(1-{\alpha})}{\left(\Gamma\left(1-\frac{{\alpha}}{2}\right)\right)^{2}} \frac{\Gamma\left(1+\frac{{\alpha}}{2}\right)}{\Gamma\left(2-\frac{{\alpha}}{2}\right)}, \\\end{aligned}$$ and the second, fourth and sixth as $$\begin{aligned} & k a_k 2^{{\alpha}-1} \frac{\Gamma(1-{\alpha})}{\left(\Gamma\left(1-\frac{{\alpha}}{2}\right)\right)^{2}} \frac{\Gamma\left(k+\frac{{\alpha}}{2}\right)}{\Gamma\left(1+k-\frac{{\alpha}}{2}\right)} \\ & \times \left(1 - \frac12 \frac{(2-{\alpha})(1-{\alpha})}{\left(1-\frac{{\alpha}}{2}\right)\left(k-1+\frac{{\alpha}}{2}\right)\left(1+k-\frac{{\alpha}}{2}\right)} \frac{\Gamma\left(\frac{{\alpha}}{2}\right)}{\Gamma\left(\frac{{\alpha}}{2}-1\right)}+ \frac12 \frac{(2-{\alpha})(1-{\alpha})\Gamma\left(1-\frac{{\alpha}}{2}\right)\Gamma\left(\frac{{\alpha}}{2}\right)(-1)^{k}}{\left(1+k-\frac{{\alpha}}{2}\right)\Gamma\left(k+\frac{{\alpha}}{2}\right)\Gamma\left(2-k-\frac{{\alpha}}{2}\right)}\right) \\ & = k a_k 2^{{\alpha}-1} \frac{\Gamma(1-{\alpha})}{\left(\Gamma\left(1-\frac{{\alpha}}{2}\right)\right)^{2}} \frac{\Gamma\left(k+\frac{{\alpha}}{2}\right)}{\Gamma\left(1+k-\frac{{\alpha}}{2}\right)} \left(1 - \frac12 \frac{(1-{\alpha})({\alpha}-2)}{\left(k-1+\frac{{\alpha}}{2}\right)\left(k+1-\frac{{\alpha}}{2}\right)} + \frac12 \frac{(1-{\alpha})({\alpha}-2)}{\left(k-1+\frac{{\alpha}}{2}\right)\left(k+1-\frac{{\alpha}}{2}\right)}\right) \\ & = k a_k 2^{{\alpha}-1} 
\frac{\Gamma(1-{\alpha})}{\left(\Gamma\left(1-\frac{{\alpha}}{2}\right)\right)^{2}} \frac{\Gamma\left(k+\frac{{\alpha}}{2}\right)}{\Gamma\left(1+k-\frac{{\alpha}}{2}\right)}\end{aligned}$$ In total, we get that the $k$-th coefficient is precisely $$\begin{aligned} -k a_k(\Omega - \Omega_k),\end{aligned}$$ as claimed. \[Omegammonotonic\] Let ${\alpha}\in (0,2)$. The values of $\Omega_k$ are monotonic in $k$. The case $\alpha = 1$ is trivial and was already covered in [@Hassainia-Hmidi:v-states-generalized-sqg]. For the rest of the values of $\alpha$, it is enough to show that $\frac{\Gamma\left(k+\frac{{\alpha}}{2}\right)}{\Gamma\left(1+k-\frac{{\alpha}}{2}\right)}$ is monotonic in $k$. But we have $$\begin{aligned} \frac{\Gamma\left(1+k+\frac{{\alpha}}{2}\right)}{\Gamma\left(2+k-\frac{{\alpha}}{2}\right)} = \frac{\Gamma\left(k+\frac{{\alpha}}{2}\right)}{\Gamma\left(1+k-\frac{{\alpha}}{2}\right)} \frac{k+\frac{{\alpha}}{2}}{1+k-\frac{{\alpha}}{2}} = \frac{\Gamma\left(k+\frac{{\alpha}}{2}\right)}{\Gamma\left(1+k-\frac{{\alpha}}{2}\right)} \left(1 + \frac{{\alpha}-1}{1+k-\frac{{\alpha}}{2}}\right).\end{aligned}$$ In the case $\alpha > 1$, the bracket is strictly greater than 1, and in the case $\alpha < 1$, the bracket is strictly smaller than 1, independently of $k$. Monotonicity in all cases follows. From Proposition \[representationOperator\] and Proposition \[Omegammonotonic\] it is immediate that if $\Omega = \Omega_{m}$, then the kernel is nontrivial: it has dimension 1 and is generated by $\cos(mx)$. We continue by computing the range of $D_{R} F(\Omega_m,1)$.
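The range computation below relies on $\Omega_{km}\neq \Omega_m$ for $k>1$, which follows from the strict monotonicity just proved; a quick numerical check of the Gamma-ratio monotonicity (Python stdlib; sample parameters arbitrary):

```python
# The ratio r(k) = G(k + a/2) / G(1 + k - a/2) drives the monotonicity of Omega_k:
# r(k+1)/r(k) = 1 + (a - 1)/(1 + k - a/2), so r increases for a > 1, decreases for a < 1.
from math import gamma

def r(k, a):
    return gamma(k + a / 2) / gamma(1 + k - a / 2)

ks = range(1, 20)
assert all(r(k + 1, 1.5) > r(k, 1.5) for k in ks)  # a > 1: strictly increasing
assert all(r(k + 1, 0.5) < r(k, 0.5) for k in ks)  # a < 1: strictly decreasing
```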
To do so, we will prove that the range is indeed the set $$\begin{aligned} \left\{ \begin{array}{cc} \displaystyle Z_{m} = \left\{f \in Y_{m}^{k-1}, f = \sum_{k>1}^{\infty}a_k \sin(kmx)\right\}, \text{ if } {\alpha}= 1 \\ \displaystyle Z_{m} = \left\{f \in Y_{m}^{k-{\alpha}}, f = \sum_{k>1}^{\infty}a_k \sin(kmx)\right\}, \text{ if } {\alpha}> 1 \\ \end{array} \right.\end{aligned}$$ If we are able to do so, then we are done since $Z_{m}$ is closed and has codimension 1 in $Y^{k-1}_{m}$ or $Y^{k-{\alpha}}_{m}$, depending on whether we are in the case ${\alpha}= 1$ or ${\alpha}> 1$. We note that by Proposition \[representationOperator\], the inclusion Range$(D_{R}F(\Omega_m,1)) \subset Z_{m} $ follows trivially. We now show the reverse inclusion. Let $g \in Z_{m}, g = \sum_{k>1}^{\infty}g_k\sin(kmx)$. We need to show that there exists an $h$ such that $D_{R}F(\Omega_m,1)(h) = g$. However, by the representation given by Proposition \[representationOperator\], such an $h$ exists and is given by $$\begin{aligned} h(x) = \sum_{k>1}^{\infty}h_k\cos(kmx), \quad h_k = \frac{g_k}{km(\Omega_{km} - \Omega_m)}.\end{aligned}$$ We have to check that $h$ has the right regularity: for the case ${\alpha}= 1$, this will mean that $h \in X^{k+\log}$, whereas for the case ${\alpha}> 1$, $h$ will have to belong to $H^{k}$. In order to establish that condition, the following lemma will be useful. \[lemmagrowthomega\] In the case ${\alpha}= 1$, $\Omega_{m} \sim \log(m)$, and for ${\alpha}> 1$, $\Omega_{m} \sim m^{{\alpha}-1}$. The case ${\alpha}= 1$ was proved in [@Hassainia-Hmidi:v-states-generalized-sqg] and the case ${\alpha}> 1$ follows directly from the asymptotic expansion of the Gamma function given by [@Abramowitz-Stegun:handbook-mathematical-functions Formula 6.1.46, p. 257].
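For ${\alpha}>1$ the Gamma-ratio asymptotics behind Lemma \[lemmagrowthomega\] can be checked numerically (a sketch using Python's stdlib `lgamma` to avoid overflow; the tolerance and sample points are arbitrary):

```python
# For a > 1, Omega_m grows like m^(a-1) up to a constant, since
#   G(m + a/2) / G(1 + m - a/2) ~ m^(a-1)  as m -> infinity (Stirling).
from math import lgamma, exp

def ratio(m, a):
    # Gamma ratio computed via log-Gamma so large m does not overflow
    return exp(lgamma(m + a / 2) - lgamma(1 + m - a / 2))

a = 1.5
for m in (50, 100, 400):
    assert abs(ratio(m, a) / m ** (a - 1) - 1) < 1e-3
assert ratio(400, a) > ratio(100, a)  # grows with m
```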
In the case ${\alpha}= 1$, we use the alternative characterization of the $X^{k+\log}$ spaces in Proposition \[alternativexlog\], together with the asymptotic growth of $\Omega_m$ from Lemma \[lemmagrowthomega\], to bound the following quantity: $$\begin{aligned} \sum_{p>1}|h_p|^{2}(pm)^{2k}(1+\log(pm))^{2} & = \sum_{p>1}\left|\frac{g_p}{pm(\Omega_{pm} - \Omega_m)}\right|^{2}(pm)^{2k}(1+\log(pm))^{2} \\ & \leq C \sum_{p>1} |g_p|^{2} (pm)^{2k-2} \left(\frac{1+\log(pm)}{\log(pm)}\right)^{2} \\ & \leq C \sum_{p>1} |g_p|^{2} (pm)^{2k-2} = C\|g\|_{H^{k-1}}^{2} < \infty.\end{aligned}$$ In the case ${\alpha}> 1$, we compute the $H^{k}$ norm squared of $h$ and obtain $$\begin{aligned} \sum_{p>1}|h_p|^{2}(pm)^{2k} = \sum_{p>1}\left|\frac{g_p}{pm(\Omega_{pm} - \Omega_m)}\right|^{2}(pm)^{2k} \leq C \sum_{p>1}\left|\frac{g_p}{pm(pm)^{{\alpha}-1}}\right|^{2}(pm)^{2k} \leq C \|g\|_{H^{k-{\alpha}}}^{2} < \infty.\end{aligned}$$ This shows step 4. Step 5 ------ We will show step 5 using the previous characterization. First of all, we recall that $$\begin{aligned} F_{\Omega R}(\Omega_m,1) (h) = h'(x).\end{aligned}$$ Therefore, $$\begin{aligned} F_{\Omega R}(\Omega_m,1) (\cos(mx)) = -m\sin(mx),\end{aligned}$$ which does not belong to Range($\mathcal{F}$), as we wanted to prove. Step 6 ------ Since in Step 1 we showed the regularity, the only thing that is left is to show that $F(\Omega,R)$ has $m$-fold symmetry and can be written as a Fourier sine series. To do so, we will use the following lemmas: \[lemmaoddnessalpha1\] Let $\alpha \geq 1$. If $R(x)$ is even, then $F(\Omega,R)$ is odd. The first term $\Omega R'(x)$ is clearly odd. To see the oddness of the other ones, it is enough to compute $F_i(R)(x)$ and $-F_i(R)(-x)$. One is obtained from the other by changing $y \mapsto -y$ and using the fact that $R(x)$ is even and $R'(x)$ is odd. The Fourier series of $F(\Omega,R)$ consists only of sine terms. \[lemmasymmetryalpha1\] Let $\alpha \geq 1$.
If $R(x)$ is expressed as an $m$-fold series of cosines, then $F(\Omega,R)(x+\frac{2\pi}{m}) = F(\Omega,R)(x)$. The first term, $\Omega R'(x)$, satisfies $$\begin{aligned} \Omega R'\left(x+\frac{2\pi}{m}\right) = \Omega R'(x).\end{aligned}$$ To check the property for the other terms, it is enough to compute $F_i(R)(x)$ and $F_i(R)\left(x+\frac{2\pi}{m}\right)$. One is obtained from the other by changing $y \mapsto y+\frac{2\pi}{m}$ and using the fact that $R(x) = R\left(x+\frac{2\pi}{m}\right)$ and $R'(x) = R'\left(x+\frac{2\pi}{m}\right)$. Existence for $1<\alpha <2$ {#Sectionexistencealphamayor1} ============================ This section is devoted to showing Theorem \[teoremaexistenciaalphamayor1\]. The proof of this theorem follows the same steps as the proof of Theorem \[teoremaexistenciaalpha1\]. It will be divided into 6 steps. These steps correspond to checking the hypotheses of the Crandall-Rabinowitz theorem [@Crandall-Rabinowitz:bifurcation-simple-eigenvalues] for $$F(\Omega,R)=\Omega R'-\sum_{i=1}^3F_i(R),$$ where $$\begin{aligned} F_1(R)=&\frac{C(\alpha)}{R(x)}\int\frac{\sin(x-y)}{\left(\left(R(x)-R(y)\right)^2+4R(x)R(y)\sin^2\left(\frac{x-y}{2}\right)\right)^\frac{\alpha}{2}} \left(R(x)R(y)+R'(x)R'(y)\right)dy,\\ F_2(R)=&C(\alpha)\int\frac{\cos(x-y)}{\left(\left(R(x)-R(y)\right)^2+4R(x)R(y)\sin^2\left(\frac{x-y}{2}\right)\right)^\frac{\alpha}{2}} \left(R'(y)-R'(x)\right)dy,\\ F_3(R)=&C(\alpha)\frac{R'(x)}{ R(x) }\int\frac{\cos(x-y)}{\left(\left(R(x)-R(y)\right)^2+4R(x)R(y)\sin^2\left(\frac{x-y}{2}\right)\right)^\frac{\alpha}{2}} \left(R(x)-R(y)\right)dy,\end{aligned}$$ and they are the following: 1. The functional $F$ satisfies $$F(\Omega,R)\,:\, {\Bbb{R}}\times \{1+V^r\}\mapsto Y^{k-1},$$ where $V^r$ is the open neighborhood of 0 $$V^r=\{ f\in X^{k+\alpha-1}\,:\, ||f||_{H^{k+\alpha-1}}<r\},$$ for $0<r<1$ and $k\geq 3$. 2. $F(\Omega,1) = 0$ for every $\Omega$. 3. The partial derivatives $F_{\Omega}$, $F_{R}$ and $F_{R\Omega}$ exist and are continuous. 4.
Ker($\mathcal{F}$) and $Y^{k-1}$/Range($\mathcal{F}$) are one-dimensional, where $\mathcal{F}$ is the linearized operator around the disk $R = 1$ at $\Omega = \Omega_m$. 5. $F_{\Omega R}(\Omega_m,1)(h_0) \not \in$ Range($\mathcal{F}$), where Ker$(\mathcal{F}) = \langle h_0 \rangle$. 6. Step 1 can be applied to the spaces $X^{k+{\alpha}-1}_{m}$ and $Y^{k-1}_{m}$ instead of $X^{k+{\alpha}-1}$ and $Y^{k-1}$. Step 1 ------ \[prop2\] Let $0 < r < 1$, $k \geq 3$. Then $$F(\Omega,R): \mathbb{R} \times \{1+V^r\} \mapsto Y^{k-1}.$$ We recall that the norm $||f||_{H^{k+\alpha-1}}$ can be defined as $$\begin{aligned} ||f||_{H^{k+\alpha-1}}=||f||_{H^k}+\left|\left|\int \frac{{\partial}^kf(\cdot)-{\partial}^k f(y)}{|\sin\left(\frac{\cdot-y}{2}\right)|^\alpha} dy\right|\right|_{L^2}.\end{aligned}$$ Indeed, we saw in Lemma \[LemmaICIS\] that this definition is equivalent to $$||f||_{H^{k+\alpha-1}}=||f||_{H^k} + || |m|^{\alpha-1}\hat{f}_m||_{l^2}.$$ Using this definition makes the proofs of Theorems \[teoremaexistenciaalpha1\] and \[teoremaexistenciaalphamayor1\] quite similar. We will use the following decomposition: $$\begin{aligned} \frac{1}{\left(\left(R(x)-R(y)\right)^2+4R(x)R(y)\sin^2\left(\frac{x-y}{2}\right)\right)^\frac{\alpha}{2}} = K_{S}(x,y)+\frac{1}{\left(R(x)^2+R'(x)^2\right)^\frac{\alpha}{2}}\frac{1}{2^\alpha\left|\sin\left(\frac{x-y}{2}\right)\right|^\alpha},\end{aligned}$$ where the kernel $$\begin{aligned} K_S(x,y)\equiv \frac{1}{\left(\left(R(x)-R(y)\right)^2+4R(x)R(y)\sin^2\left(\frac{x-y}{2}\right)\right)^\frac{\alpha}{2}}- \frac{1}{\left(R(x)^2+R'(x)^2\right)^\frac{\alpha}{2}}\frac{1}{2^\alpha\left|\sin\left(\frac{x-y}{2}\right)\right|^\alpha}\end{aligned}$$ belongs to $\mathcal{H}_0$. Again, the most singular term is ${\partial}^{k-1}F_2$.
Making the change of variable $x-y \mapsto y$, taking ${\partial}^{k-1}$ derivatives with respect to $x$ and changing back to $y\mapsto x-y$ yields $$\begin{aligned} {\partial}^{k-1} F_{2}(R)=& C(\alpha)\int\frac{\cos(x-y)}{\left(\left(R(y)-R(x)\right)^2+4R(x)R(y)\sin^2\left(\frac{x-y}{2}\right)\right)^\frac{\alpha}{2}} \left({\partial}^k R(y)-{\partial}^k R(x)\right)dy\\ &+ \text{l.o.t.}\end{aligned}$$ We will split the first term as follows: $$\begin{aligned} {\partial}^{k-1} F_{2}(R)=&C(\alpha)\int \cos(x-y)K_{S}(x,y)\left({\partial}^k R(y)-{\partial}^kR(x)\right)dy\\ &-C(\alpha)\int \frac{\left|\sin\left(\frac{x-y}{2}\right)\right|}{\left(R(x)^2+R'(x)^2\right)^\frac{\alpha}{2}}\left({\partial}^k R(y)-{\partial}^k R(x)\right)dy\\ &+C(\alpha)\frac{1}{2^\alpha \left(R(x)^2+R'(x)^2\right)^\frac{{\alpha}}{2}} \int \frac{({\partial}^k R(y)-{\partial}^k R(x))}{\left|\sin\left(\frac{x-y}{2}\right)\right|^{\alpha}}dy.\end{aligned}$$ Therefore, because of the definition of the space $X^{k+\alpha-1}$, we have that $$\begin{aligned} ||{\partial}^{k-1} F_2(R)||_{L^2({\mathbb{T}})}\leq C\left(||R||_{X^{k+\alpha-1}},r\right).\end{aligned}$$ Step 2 ------ Again, it is trivial to prove that $F(\Omega,1)=0$. Step 3 ------ We need to prove the existence and the continuity of the Gateaux derivatives ${\partial}_\Omega F(\Omega,R)$, ${\partial}_R F(\Omega,R)$ and ${\partial}_{\Omega,R}F(\Omega,R)$. In order to do this, the most difficult part is to show the existence and continuity of ${\partial}_R F_i(R)$ for $i=1,2,3$.
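All three derivative formulas in the next lemma share the bracket $(R(x)-R(y))(h(x)-h(y))+2(h(x)R(y)+h(y)R(x))\sin^2\left(\frac{x-y}{2}\right)$ multiplied by $-\alpha$, which is just the chain rule applied to the denominator. A pointwise finite-difference sketch (all sample values below are arbitrary):

```python
# Differentiate t -> D(t)^(-a/2) at t = 0, where
#   D(t) = (R(x)+t h(x) - R(y) - t h(y))^2 + 4 (R(x)+t h(x)) (R(y)+t h(y)) sin^2((x-y)/2),
# and compare with the closed form
#   -a * ((Rx-Ry)(hx-hy) + 2 (hx Ry + hy Rx) s) * D(0)^(-a/2 - 1),  s = sin^2((x-y)/2).
from math import sin

a = 1.3
Rx, Ry, hx, hy = 1.1, 0.9, 0.4, -0.2   # arbitrary sample values
s = sin(0.7 / 2) ** 2                   # arbitrary angle x - y = 0.7

def D(t):
    return (Rx + t * hx - Ry - t * hy) ** 2 + 4 * (Rx + t * hx) * (Ry + t * hy) * s

closed = -a * ((Rx - Ry) * (hx - hy) + 2 * (hx * Ry + hy * Rx) * s) * D(0) ** (-a / 2 - 1)
eps = 1e-6
fd = (D(eps) ** (-a / 2) - D(-eps) ** (-a / 2)) / (2 * eps)  # central difference
assert abs(fd - closed) < 1e-6
```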
\[gatoderivadamayor1\] For all $R\in V^r$ and for all $h\in X^{k+{\alpha}-1}$ such that $||h||_{X^{k+{\alpha}-1}}=1$ we have that $$\lim_{t \to 0}\frac{F_i(R+th)-F_i(R)}{t}= D_i[R]h\quad \text{in $Y^{k-1}$},$$ where $$\begin{aligned} D_1[R] h =& -C(\alpha)\frac{h(x)}{ R(x)^2}\int \frac{\sin(x-y)}{\left((R(x)-R(y))^2+4R(x)R(y)\sin^2\left(\frac{x-y}{2}\right)\right)^\frac{\alpha}{2}} \left(R(x)R(y)+R'(x)R'(y)\right)dy\\ & +C(\alpha)\frac{1}{ R(x)}\int \frac{\sin(x-y)}{\left((R(x)-R(y))^2+4R(x)R(y)\sin^2\left(\frac{x-y}{2}\right)\right)^\frac{\alpha}{2}}\\&\times(h(x)R(y)+h(y)R(x)+(h'(x)R'(y)+h'(y)R'(x)))dy\\ &-\alpha C(\alpha)\frac{1}{ R(x)}\int \frac{\sin(x-y)(R(x)R(y)+R'(x)R'(y))}{\left((R(x)-R(y))^2+4R(x)R(y)\sin^2\left(\frac{x-y}{2}\right)\right)^{\frac{\alpha}{2}+1}}\\ &\times\left((R(x)-R(y))(h(x)-h(y))+2(h(x)R(y)+h(y)R(x))\sin^2\left(\frac{x-y}{2}\right)\right)dy\\ D_2[R] h = & C(\alpha)\int\frac{\cos(x-y)}{\left((R(x)-R(y))^2+4R(x)R(y)\sin^2\left(\frac{x-y}{2}\right)\right)^\frac{\alpha}{2}}\left(h'(y)-h'(x)\right)dy\\ &-\alpha C(\alpha)\int\frac{\cos(x-y)(R'(y)-R'(x))}{\left((R(x)-R(y))^2+ 4R(x)R(y)\sin\left(\frac{x-y}{2}\right)\right)^{\frac{\alpha}{2}+1}}\\ & \times \left((R(x)-R(y))(h(x)-h(y))+2(h(x)R(y)+h(y)R(x))\sin^2\left(\frac{x-y}{2}\right)\right)dy\\ D_3[R]h= & C(\alpha)\frac{h'(x)}{ R(x)}\int\frac{\cos(x-y)}{\left(\left(R(x)-R(y)\right)^2+4R(x)R(y)\sin^2\left(\frac{x-y}{2}\right)\right)^\frac{\alpha}{2}}(R(x)-R(y))dy\\ &-C(\alpha)\frac{R'(x) h(x)}{ R(x)^2}\int\frac{\cos(x-y)}{\left(\left(R(x)-R(y)\right)^2+4R(x)R(y)\sin^2\left(\frac{x-y}{2}\right)\right)^\frac{\alpha}{2}} (R(x)-R(y)) dy \\ &+C(\alpha)\frac{R'(x)}{ R(x)}\int\frac{\cos(x-y)}{\left(\left(R(x)-R(y)\right)^2+4R(x)R(y)\sin^2\left(\frac{x-y}{2}\right)\right)^\frac{\alpha}{2}}(h(x)-h(y))dy\\ &-\alpha C(\alpha)\frac{R'(x)}{ R(x)}\int \frac{\cos(x-y)(R(x)-R(y))}{\left((R(x)-R(y))^2+ 4R(x)R(y)\sin\left(\frac{x-y}{2}\right)\right)^{\frac{\alpha}{2}+1}}\\ & \times 
\left((R(x)-R(y))(h(x)-h(y))+2(h(x)R(y)+h(y)R(x))\sin^2\left(\frac{x-y}{2}\right)\right)dy.\end{aligned}$$ Moreover, $D_i[R] h$ are continuous in $R$. The proof of this lemma follows the same steps as the proof of Lemma \[gatoderivada\], with modifications analogous to those made in the proof of Proposition \[prop2\] with respect to the proof of Proposition \[prop1\]. Steps 4, 5 and 6 ---------------- This is already done in the previous section. Regularity of solutions ======================= In this section, we will show that the solutions that we found, which had limited regularity ($X \in \{X^{k}, X^{k+\alpha-1}, X^{k+\log}\}$, depending on whether ${\alpha}$ is smaller than, greater than, or equal to 1, respectively), are indeed $C^{\infty}$. We will work with solutions that are contained in $B_r(1)$, the ball of radius $r$ around 1, which for simplicity we will denote by $B_r$. It will be clear from the context what norm to use in the different cases. To show the regularity, we will use the following common strategy in the three different cases ${\alpha}< 1$, ${\alpha}> 1$, and ${\alpha}= 1$: 1. Take $k-1$ derivatives and put the equation into the form $$\begin{aligned} (LI+S)({\partial}^{k-1} R)(x) = g(R)(x), \end{aligned}$$ where $LI$ is linear and invertible and $S$ satisfies $\|S(R_r)(x)\| \leq C_r\|R_r\|$, where $C_r \rightarrow 0$ when $r \to 0$ for every $R_r \in B_r$. It is crucial that $C_r$ is bounded independently of $k$, since otherwise $B_r$ would shrink to 0 whenever we let $k$ go to infinity. $LI+S$ will map functions from $H^{2-{\alpha}}$ into $H^{1-{\alpha}}$, $H^{2{\alpha}-1}$ into $H^{{\alpha}-1}$ or $X^{2+\log}$ into $H^{1}$, depending on ${\alpha}$ being smaller than, greater than, or equal to 1 respectively, and will be invertible since $C_r$ and $S$ can be as small as desired by taking $r$ small enough.
We remark here that we are not inverting $R$ but ${\partial}^{k-1} R$ and we are regarding both $LI$ and $S$ as linear operators (that depend on $R$ and its lower order derivatives) acting on ${\partial}^{k-1} R$. 2. Show that if $R(x) \in X$, then $g(R)(x)$ is in $H^{\beta}$, where $\beta > 0$. This will allow us to bootstrap. For simplicity, we will show how to bootstrap when $k$ is an integer. However, the proof can be adapted for the case $k \not \in \mathbb{Z}$. The case $0 < {\alpha}< 1$ -------------------------- In this subsection, we will use $H^{k}$ spaces. There is no obstruction to the use of $C^{k}$ spaces, which is the space in which the existence of solutions was proved in [@Hassainia-Hmidi:v-states-generalized-sqg]. We will choose the following $LI$, $S$ and $g(R)$. It is immediate to check that they satisfy equation after taking $k-1$ derivatives: $$\begin{aligned} LI({\partial}^{k-1} R)(x) & = \Omega ({\partial}^{k-1} R)'(x) + ({\partial}^{k-1} R)'(x) C({\alpha}) \int \frac{\cos(y)}{\left(4\sin^2\left(\frac{y}{2}\right)\right)^{{\alpha}/2}}dy \\ S({\partial}^{k-1} R)(x) & = -C({\alpha}) \frac{({\partial}^{k-1} R)'(x)}{R(x)} \int \frac{\cos(y)(R(x-y)-R(x))+\sin(y)R'(x-y)}{\left((R(x)-R(x-y))^2+4R(x)R(x-y)\sin^{2}\left(\frac{y}{2}\right)\right)^{{\alpha}/2}}dy \\ & + ({\partial}^{k-1} R)'(x) C({\alpha}) \int \frac{\cos(y)}{\left(4\sin^2\left(\frac{y}{2}\right)\right)^{{\alpha}/2}}\left[\frac{1}{\left(\left(\frac{R(x)-R(x-y)}{2\sin\left(\frac{y}{2}\right)}\right)^{2} + R(x)R(x-y)\right)^{{\alpha}/2}}-1\right]dy \\ & = S_1({\partial}^{k-1} R) + S_2({\partial}^{k-1} R) \\ g(R) & = {\partial}^{k-1} \left(C({\alpha})\int \frac{\sin(y)R(x-y) + \cos(y)R'(x-y)}{\left((R(x)-R(x-y))^2+4R(x)R(x-y)\sin^{2}\left(\frac{y}{2}\right)\right)^{{\alpha}/2}}dy\right) \\ & + {\partial}^{k-1}\left(C({\alpha}) \frac{R'(x)}{R(x)} \int \frac{\cos(y)(R(x-y)-R(x))+\sin(y)R'(x-y)}{\left((R(x)-R(x-y))^2+4R(x)R(x-y)\sin^{2}\left(\frac{y}{2}\right)\right)^{{\alpha}/2}}dy\right) 
\\ & -C({\alpha}) \frac{({\partial}^{k-1} R)'(x)}{R(x)} \int \frac{\cos(y)(R(x-y)-R(x))+\sin(y)R'(x-y)}{\left((R(x)-R(x-y))^2+4R(x)R(x-y)\sin^{2}\left(\frac{y}{2}\right)\right)^{{\alpha}/2}}dy \\ & - \left({\partial}^{k-1}\left(R'(x) C({\alpha}) \int \frac{\cos(y)}{\left(4\sin^2\left(\frac{y}{2}\right)\right)^{{\alpha}/2}}\left[\frac{1}{\left(\left(\frac{R(x)-R(x-y)}{2\sin\left(\frac{y}{2}\right)}\right)^{2} + R(x)R(x-y)\right)^{{\alpha}/2}}-1\right]dy \right)\right. \\ & \left.- ({\partial}^{k-1} R)'(x) C({\alpha}) \int \frac{\cos(y)}{\left(4\sin^2\left(\frac{y}{2}\right)\right)^{{\alpha}/2}}\left[\frac{1}{\left(\left(\frac{R(x)-R(x-y)}{2\sin\left(\frac{y}{2}\right)}\right)^{2} + R(x)R(x-y)\right)^{{\alpha}/2}}-1\right]dy\right) \\ & = G_1 + G_2 + G_3\end{aligned}$$ \[lemmaISalphamenor1\] $LI$ and $S$ satisfy the following properties: 1. $LI$ is linear and invertible, and maps $H^{2-{\alpha}}$ into $H^{1-{\alpha}}$. 2. $\|S({\partial}^{k-1} R_r)(x)\|_{H^{1-{\alpha}}} \leq C_r\|{\partial}^{k-1} R_r\|_{H^{2-{\alpha}}}$, where $C_r \rightarrow 0$ when $r \to 0$ for every $R_r \in B_r$ and $C_r$ is independent of $k$. <!-- --> 1. The linearity of $LI$ is trivial. 
To check that $LI$ is invertible, we first compute the following integral using formula : $$\begin{aligned} \int \frac{\cos(y)}{(4\sin^2\left(\frac{y}{2}\right))^{{\alpha}/2}}dy = -2\pi \frac{\Gamma(1-{\alpha})}{\Gamma\left(2-\frac{{\alpha}}{2}\right)\Gamma\left(-\frac{{\alpha}}{2}\right)}.\end{aligned}$$ Therefore $$\begin{aligned} C({\alpha})\int \frac{\cos(y)}{(4\sin^2\left(\frac{y}{2}\right))^{{\alpha}/2}}dy & = - 2^{{\alpha}-1}\frac{\Gamma(1-{\alpha})\Gamma\left(\frac{{\alpha}}{2}\right)}{\Gamma\left(2-\frac{{\alpha}}{2}\right)\Gamma\left(-\frac{{\alpha}}{2}\right)\Gamma\left(1-\frac{{\alpha}}{2}\right)} \\ & = -2^{{\alpha}-1}\frac{\Gamma(1-{\alpha})}{\left(\Gamma\left(1-\frac{{\alpha}}{2}\right)\right)^{2}}\left(\frac{\Gamma\left(\frac{{\alpha}}{2}\right)\left(-\frac{{\alpha}}{2}\right)}{\Gamma\left(1-\frac{{\alpha}}{2}\right)\left(1-\frac{{\alpha}}{2}\right)}\right) = -2^{{\alpha}-1}\frac{\Gamma(1-{\alpha})}{\left(\Gamma\left(1-\frac{{\alpha}}{2}\right)\right)^{2}}\left(-\frac{\Gamma\left(1+\frac{{\alpha}}{2}\right)}{\Gamma\left(2-\frac{{\alpha}}{2}\right)}\right),\end{aligned}$$ which implies $$\begin{aligned} \Omega_m + C({\alpha})\int \frac{\cos(y)}{(4\sin^2\left(\frac{y}{2}\right))^{{\alpha}/2}}dy & =2^{{\alpha}-1}\frac{\Gamma(1-{\alpha})}{\left(\Gamma\left(1-\frac{{\alpha}}{2}\right)\right)^{2}}\left(\frac{\Gamma\left(m+\frac{{\alpha}}{2}\right)}{\Gamma\left(1+m-\frac{{\alpha}}{2}\right)}\right),\end{aligned}$$ which is different than zero for any real $m$. We remark that the possible values that $\Omega$ can take are neighborhoods of $\Omega_m$, i.e. neighbourhoods of integers $m$ in the previous formula, but the multiplier is non-zero for any value of $m$, which is a stronger statement. We can conclude the invertibility of $LI$ from this. 2. We will use the following estimate: \[lemmafolland\] Let $s>0$, $\sigma > \frac12$ and let $\phi \in H^{s+\sigma} \cap L^{\infty}$, $f \in H^{s}$. 
Then: $$\begin{aligned} \|\phi f\|_{H^{s}} \leq C(\|\phi\|_{L^{\infty}}\|f\|_{H^{s}} + \|\phi\|_{H^{s+\sigma}}\|f\|_{L^{2}}).\end{aligned}$$ If we take $H^{1-{\alpha}}$ norms of $S_1({\partial}^{k-1} R)$, $$\begin{aligned} \|S_1\|_{H^{1-{\alpha}}} \leq C \|{\partial}^{k-1} R\|_{H^{2-{\alpha}}} \|R'\|_{L^{\infty}} + C_{r} \|{\partial}^{k-1} R\|_{H^{1}} = C_r \|{\partial}^{k-1} R\|_{H^{2-{\alpha}}},\end{aligned}$$ where we have used Lemma \[lemmafolland\] and $$\begin{aligned} \left\|C({\alpha}) \frac{1}{R(x)} \int \frac{\cos(y)(R(x-y)-R(x))+\sin(y)R'(x-y)}{\left((R(x)-R(x-y))^2+4R(x)R(x-y)\sin^{2}\left(\frac{y}{2}\right)\right)^{{\alpha}/2}}dy\right\|_{H^{1-{\alpha}+\sigma}} \leq C_r,\end{aligned}$$ where $\sigma > \frac12$. This shows the boundedness of $S_1$. To bound $S_2$, we will use that for any $p,q > 0$: $$\begin{aligned} \left|\frac{1}{p^{{\alpha}}} - \frac{1}{q^{{\alpha}}}\right| \leq C_{{\alpha}}|q-p|\frac{1}{p^{{\alpha}}q^{{\alpha}}(p^{1-{\alpha}} + q^{1-{\alpha}})}.\end{aligned}$$ If we take $$\begin{aligned} p =\left(\left(\frac{R(x)-R(x-y)}{2\sin\left(\frac{y}{2}\right)}\right)^{2} + R(x)R(x-y)\right) , \quad q = 1,\end{aligned}$$ then $$\begin{aligned} |p-q| \leq \left(\frac{R(x)-R(x-y)}{2\sin\left(\frac{y}{2}\right)}\right)^{2} + |(R(x)-1)R(x-y)| + |R(x-y)-1|,\end{aligned}$$ and the rest of the factors have upper and lower bounds.
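The elementary inequality above can be probed numerically; for ${\alpha}=\frac12$ it is in fact an identity with $C_{1/2}=1$, since $(\sqrt q-\sqrt p)(\sqrt q+\sqrt p)=q-p$ (a sketch; the sample grid and the constant $4$ are arbitrary choices):

```python
# Check |1/p^a - 1/q^a| <= C_a |q - p| / (p^a q^a (p^(1-a) + q^(1-a))) for 0 < a < 1.
# For a = 1/2 it holds with C = 1 as an identity; for other a we just exhibit a finite C.
pts = [0.1, 0.3, 1.0, 2.5, 7.0]

def lhs(p, q, a):
    return abs(1 / p**a - 1 / q**a)

def rhs(p, q, a):
    return abs(q - p) / (p**a * q**a * (p ** (1 - a) + q ** (1 - a)))

# a = 1/2: exact identity on the sample grid.
for p in pts:
    for q in pts:
        if p != q:
            assert abs(lhs(p, q, 0.5) - rhs(p, q, 0.5)) < 1e-9 * rhs(p, q, 0.5)

# a = 0.8: the ratio lhs/rhs stays bounded (here by 4) on the sample grid.
assert all(lhs(p, q, 0.8) <= 4 * rhs(p, q, 0.8) for p in pts for q in pts if p != q)
```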
Thus, using again Lemma \[lemmafolland\] with $\sigma > \frac12$: $$\begin{aligned} \|S_2\|_{H^{1-{\alpha}}} & \leq C \|{\partial}^{k-1} R\|_{H^{2-{\alpha}}} \left\|\frac{1}{\left(\left(\frac{R(x)-R(x-y)}{2\sin\left(\frac{y}{2}\right)}\right)^{2} + R(x)R(x-y)\right)^{{\alpha}/2}}-1\right\|_{L^{\infty}(x,y)} \\ & + C \|{\partial}^{k-1} R\|_{H^{1}} \left\|\int \frac{\cos(y)}{\left(4\sin^{2}\left(\frac{y}{2}\right)\right)^{{\alpha}/2}}\left(\frac{1}{\left(\left(\frac{R(x)-R(x-y)}{2\sin\left(\frac{y}{2}\right)}\right)^{2} + R(x)R(x-y)\right)^{{\alpha}/2}}-1\right)\right\|_{H^{1-{\alpha}+\sigma}} \\ & \leq C \|{\partial}^{k-1} R\|_{H^{2-{\alpha}}} (\|R'\|_{L^{\infty}}^{2} + (\|R\|_{L^{\infty}}+1)\|R-1\|_{L^{\infty}}) + C_{r} \|{\partial}^{k-1} R\|_{H^{1}} \\ & = C_r \|{\partial}^{k-1} R\|_{H^{2-{\alpha}}}.\end{aligned}$$ This finishes the $H^{1-{\alpha}}$-boundedness of $S$. \[lemmaGalphamenor1\] Let $g(R)$ be defined as before, and let $R \in H^{k}$. Then $g(R) \in H^{1-{\alpha}}$. 1. To bound $G_1$ in $H^{1-{\alpha}}$, we first notice that the part of $G_1$ that contains the factor $\sin(y)R(x-y)$ is of lower order than the one with $\cos(y)R'(x-y)$. Hence, we will focus on the latter. We first apply the $k-1$ derivatives and look at the most singular terms.
One of them is the following: $$\begin{aligned} \left(C({\alpha})\int \frac{\cos(y){\partial}^{k} R(x-y)}{\left((R(x)-R(x-y))^2+4R(x)R(x-y)\sin^{2}\left(\frac{y}{2}\right)\right)^{{\alpha}/2}}dy\right).\end{aligned}$$ We now integrate by parts to obtain $$\begin{aligned} & {\alpha}C({\alpha})\int \frac{\cos(y)({\partial}^{k-1}R(x) - {\partial}^{k-1} R(x-y))}{\left((R(x)-R(x-y))^2+4R(x)R(x-y)\sin^{2}\left(\frac{y}{2}\right)\right)^{{\alpha}/2+1}} \\ & \times \left((R(x)-R(x-y))R'(x-y) + 2R(x)R(x-y)\sin\left(\frac{y}{2}\right)\cos\left(\frac{y}{2}\right)\right)dy + \text{l.o.t}.\end{aligned}$$ We can now split the kernel $$\begin{aligned} & \frac{\cos(y)\left((R(x)-R(x-y))R'(x-y) + 2R(x)R(x-y)\sin\left(\frac{y}{2}\right)\cos\left(\frac{y}{2}\right)\right)}{\left((R(x)-R(x-y))^2+4R(x)R(x-y)\sin^{2}\left(\frac{y}{2}\right)\right)^{{\alpha}/2+1}} \\ & = \frac{1}{\left((R(x))^2 + (R'(x))^2\right)^{{\alpha}/2}} \frac{1}{\left|\sin\left(\frac{y}{2}\right)\right|^{{\alpha}+1}} + \text{Rem}(x,y),\end{aligned}$$ where Rem$(x,y) \in \mathcal{H}_{0}$. 
Plugging in the decomposition of the kernel, we need to bound the $H^{1-{\alpha}}$ norm of $$\begin{aligned} & {\alpha}C({\alpha})\frac{1}{\left((R(x))^2 + (R'(x))^2\right)^{{\alpha}/2}}\int \frac{({\partial}^{k-1}R(x) - {\partial}^{k-1} R(x-y))}{\left|\sin\left(\frac{y}{2}\right)\right|^{{\alpha}+1}}dy \\ & + {\alpha}C({\alpha})\int ({\partial}^{k-1}R(x) - {\partial}^{k-1} R(x-y))\text{Rem}(x,y)dy \\ & = G_{11} + G_{12}.\end{aligned}$$ Taking $\Lambda^{1-{\alpha}}$, modulo low order commutators, the most singular terms are $$\begin{aligned} & {\alpha}C({\alpha})\frac{1}{\left((R(x))^2 + (R'(x))^2\right)^{{\alpha}/2}}\int \frac{(\Lambda^{1-{\alpha}}{\partial}^{k-1}R(x) - \Lambda^{1-{\alpha}}{\partial}^{k-1} R(x-y))}{\left|\sin\left(\frac{y}{2}\right)\right|^{{\alpha}+1}}dy \\ & + {\alpha}C({\alpha})\int (\Lambda^{1-{\alpha}}{\partial}^{k-1}R(x) - \Lambda^{1-{\alpha}}{\partial}^{k-1} R(x-y))\text{Rem}(x,y)dy, \\\end{aligned}$$ which can be bounded by $C\|R\|_{H^{k-1+1-{\alpha}+{\alpha}+1-1}} = C\|R\|_{H^{k}}$ and $C\|R\|_{H^{k-{\alpha}}}$ (low order) respectively. Finally, the term where one of the $k-1$ derivatives hits the denominator and the other $k-2$ hit the factor $R'(x)-R'(x-y)$ is treated the same way. This concludes the boundedness of $G_1$. 2. We start by calculating the most singular terms of $G_2$. First, if all the derivatives hit the same factor on the numerator: $$\begin{aligned} & C({\alpha}) \frac{R'(x)}{R(x)} \int \frac{\cos(y)({\partial}^{k-1} R(x-y)-{\partial}^{k-1} R(x))+\sin(y){\partial}^{k} R(x-y)}{\left((R(x)-R(x-y))^2+4R(x)R(x-y)\sin^{2}\left(\frac{y}{2}\right)\right)^{{\alpha}/2}}dy \\ & = G_{21} + G_{22}.\end{aligned}$$ $G_{21}$ is less singular than $G_{11}$ and it is estimated in the same way. To deal with $G_{22}$, we integrate by parts and get a kernel that is of the same order as $G_{21}$, so we repeat the same procedure as with $G_{11}$.
The final term is the one we get if we hit the denominator with one derivative while the $k-2$ remaining ones hit the factor $R'(x) - R'(x-y)$, but again this is treated as $G_{21}$. 3. We compute the most singular terms of $G_{3}$, obtaining $$\begin{aligned} &\left(\frac{{\alpha}}{2}\right) R'(x) C({\alpha}) \int \frac{\cos(y)}{\left(4\sin^2\left(\frac{y}{2}\right)\right)^{{\alpha}/2}}\frac{1}{\left(\left(\frac{R(x)-R(x-y)}{2\sin\left(\frac{y}{2}\right)}\right)^{2} + R(x)R(x-y)\right)^{{\alpha}/2+1}} \\ &\times \left(\frac{(R(x)-R(x-y))({\partial}^{k-1}R(x) - {\partial}^{k-1}R(x-y))}{4\sin^2\left(\frac{y}{2}\right)}\right)dy, \\\end{aligned}$$ which is easily bounded like $G_{1}$. Combining the two previous lemmas, we obtain the following corollary: Let $R \in H^{k}$. Then $R \in H^{k+(1-{\alpha})}$. \[bootstrapalphamenor1\] By Lemma \[lemmaGalphamenor1\], $g(R) \in H^{1-{\alpha}}$, and by Lemma \[lemmaISalphamenor1\], $LI+S$ is invertible. $(LI+S)^{-1}$ maps $H^{1-{\alpha}}$ into $H^{2-{\alpha}}$. Thus $$\begin{aligned} {\partial}^{k-1} R = \underbrace{(LI+S)^{-1}\underbrace{g(R)}_{\in H^{1-{\alpha}}}}_{\in H^{2-{\alpha}}} \in H^{2-{\alpha}},\end{aligned}$$ which implies $R \in H^{k+1-{\alpha}}$. The case $1 < {\alpha}< 2$ -------------------------- We will choose the following $LI$, $S$ and $g(R)$. 
It is immediate to check that they satisfy equation after taking $k-1$ derivatives: $$\begin{aligned} LI({\partial}^{k-1} R)(x) & = -C({\alpha}) \int \frac{({\partial}^{k-1} R)'(x-y)-({\partial}^{k-1} R)'(x)}{\left(4\sin^2\left(\frac{y}{2}\right)\right)^{{\alpha}/2}}dy \\ S({\partial}^{k-1} R)(x) & = -C({\alpha}) \int \frac{(({\partial}^{k-1} R)'(x-y)-({\partial}^{k-1} R)'(x))}{\left(4\sin^2\left(\frac{y}{2}\right)\right)^{{\alpha}/2}}\left[\frac{1}{\left(\left(R'(x)\right)^{2} + (R(x))^2\right)^{{\alpha}/2}}-1\right]dy \\ g(R)(x) & = -\Omega {\partial}^{k} R(x) \\ & + {\partial}^{k-1} \left(C({\alpha})\frac{R'(x)}{R(x)} \int \frac{\cos(y)(R(x-y)-R(x))+\sin(y)R'(x-y)}{\left((R(x)-R(x-y))^2+4R(x)R(x-y)\sin^{2}\left(\frac{y}{2}\right)\right)^{{\alpha}/2}}dy\right) \\ & + {\partial}^{k-1} \left(C({\alpha})\int \frac{\sin(y)R(x-y)}{\left((R(x)-R(x-y))^2+4R(x)R(x-y)\sin^{2}\left(\frac{y}{2}\right)\right)^{{\alpha}/2}}dy\right) \\ & + \left({\partial}^{k-1} \left(C({\alpha}) \int \frac{\cos(y)(R'(x-y)-R'(x))}{\left(4\sin^2\left(\frac{y}{2}\right)\right)^{{\alpha}/2}}\left[\frac{1}{\left(\left(\frac{R(x)-R(x-y)}{2\sin\left(\frac{y}{2}\right)}\right)^{2} + R(x)R(x-y)\right)^{{\alpha}/2}}\right]dy\right)\right. \\ & \left.- C({\alpha}) \int \frac{\cos(y)(({\partial}^{k-1} R)'(x-y)-({\partial}^{k-1} R)'(x))}{\left(4\sin^2\left(\frac{y}{2}\right)\right)^{{\alpha}/2}}\left[\frac{1}{\left(\left(\frac{R(x)-R(x-y)}{2\sin\left(\frac{y}{2}\right)}\right)^{2} + R(x)R(x-y)\right)^{{\alpha}/2}}\right]dy\right. \\ & \left. + C({\alpha}) \int \frac{(({\partial}^{k-1} R)'(x-y)-({\partial}^{k-1} R)'(x))}{\left(4\sin^2\left(\frac{y}{2}\right)\right)^{{\alpha}/2}}\right. 
\\ & \left.\times \left[\frac{\cos(y)}{\left(\left(\frac{R(x)-R(x-y)}{2\sin\left(\frac{y}{2}\right)}\right)^{2} + R(x)R(x-y)\right)^{{\alpha}/2}}-\frac{1}{((R'(x))^2 + (R(x))^2)^{{\alpha}/2}}\right]dy\right)\\ & = G_1(R) + G_2(R) + G_3(R) + G_4(R) + G_5(R) \end{aligned}$$ \[lemmaISalphamayor1\] $LI$ and $S$ satisfy the following properties: 1. $LI$ is linear and invertible, and maps $H^{2{\alpha}-1}$ into $H^{{\alpha}-1}$. 2. $\|S({\partial}^{k-1} R_r)(x)\|_{H^{{\alpha}-1}} \leq C_r\|{\partial}^{k-1} R_r\|_{H^{2{\alpha}-1}}$, where $C_r \rightarrow 0$ when $r \to 0$ for every $R_r \in B_r$ and $C_r$ is independent of $k$. <!-- --> 1. The linearity of $LI$ is trivial. We saw in section \[sectionexistencealpha1\] that $LI$ is invertible since the multiplier in Fourier space does not vanish and moreover it maps $H^{2{\alpha}-1}$ into $H^{{\alpha}-1}$ by definition of the $H^{s}$-norm. 2. We note that we can bound, for $\sigma > \frac12$: $$\begin{aligned} \left\|\frac{1}{\left(\left(R'(x)\right)^{2} + (R(x))^{2}\right)^{{\alpha}/2}}-1\right\|_{L^{\infty}} & \leq C\left\|(R'(x))^{2} + (R(x))^{2} - 1\right\|_{L^{\infty}} \leq C \left\|1-R\right\|_{L^{\infty}} + C\|R'\|_{L^{\infty}} = C_r. \\ \left\|\frac{1}{\left(\left(R'(x)\right)^{2} + (R(x))^{2}\right)^{{\alpha}/2}}-1\right\|_{H^{{\alpha}-1+\sigma}} & \leq C_{r}. 
\\\end{aligned}$$ Then, $S$ can be bounded in the $H^{{\alpha}-1}$-norm (for $\sigma > \frac12$) by means of Lemma \[lemmafolland\] by the sum of the following two terms: $$\begin{aligned} C \left\|\int \frac{({\partial}^{k} R(x-y)- {\partial}^{k} R(x))}{\left(4\sin^2\left(\frac{y}{2}\right)\right)^{{\alpha}/2}}dy\right\|_{H^{{\alpha}-1}}\left\|\frac{1}{\left(\left(R'(x)\right)^{2} + (R(x))^{2}\right)^{{\alpha}/2}}-1\right\|_{L^{\infty}} \leq C_r \|{\partial}^{k-1} R\|_{H^{2{\alpha}-1}} \\\end{aligned}$$ and $$\begin{aligned} C \left\|\int \frac{({\partial}^{k} R(x-y)- {\partial}^{k} R(x))}{\left(4\sin^2\left(\frac{y}{2}\right)\right)^{{\alpha}/2}}dy\right\|_{L^{2}}\left\|\frac{1}{\left(\left(R'(x)\right)^{2} + (R(x))^{2}\right)^{{\alpha}/2}}\right\|_{H^{{\alpha}-1+\sigma}} \leq C_r \|{\partial}^{k-1} R\|_{H^{{\alpha}}},\end{aligned}$$ therefore getting the bound $$\begin{aligned} \|S\|_{H^{{\alpha}-1}} \leq C_{r} \|{\partial}^{k-1} R\|_{H^{2{\alpha}-1}}\end{aligned}$$ \[lemmaGalphamayor1\] Let $g(R)$ be defined as before, and let $R \in H^{k+{\alpha}-1}$. Then $g(R) \in H^{{\alpha}-1}$. We will go term by term over the $G_i$. 1. $G_1$ is trivial and the $H^{{\alpha}-1}$-norm is clearly bounded by a constant times the $H^{k+{\alpha}-1}$-norm of $R$. 2. We apply the $k-1$ derivatives and look for the most singular terms. 
These are: $$\begin{aligned} & C({\alpha})\frac{{\partial}^{k} R(x)}{R(x)} \int \frac{\cos(y)(R(x-y)-R(x))+\sin(y)R'(x-y)}{\left((R(x)-R(x-y))^2+4R(x)R(x-y)\sin^{2}\left(\frac{y}{2}\right)\right)^{{\alpha}/2}}dy \\ & + C({\alpha})\frac{R'(x)}{R(x)} \int \frac{\cos(y)({\partial}^{k-1} R(x-y)-{\partial}^{k-1} R(x))+\sin(y){\partial}^{k} R(x-y)}{\left((R(x)-R(x-y))^2+4R(x)R(x-y)\sin^{2}\left(\frac{y}{2}\right)\right)^{{\alpha}/2}}dy \\ & + C({\alpha})\left(-\frac{{\alpha}}{2}\right)\frac{R'(x)}{R(x)} \int \frac{\cos(y)(R(x-y)-R(x))+\sin(y)R'(x-y)}{\left((R(x)-R(x-y))^2+4R(x)R(x-y)\sin^{2}\left(\frac{y}{2}\right)\right)^{{\alpha}/2+1}} \\ & \times \left(2(R(x)-R(x-y))({\partial}^{k-1}R(x) - {\partial}^{k-1}R(x-y))+4({\partial}^{k-1}R(x)R(x-y) + R(x){\partial}^{k-1}R(x-y))\sin^{2}\left(\frac{y}{2}\right)\right) dy \\ & = G_{21} + G_{22} + G_{23}\end{aligned}$$ The $H^{{\alpha}-1}$ norm of $G_{21}$ can be bounded by $$\begin{aligned} \|G_{21}\|_{H^{{\alpha}-1}} \leq C\|R\|_{H^{k+{\alpha}-1}} + \text{l.o.t},\end{aligned}$$ We split $G_{22}$ into two: $$\begin{aligned} & C({\alpha})\frac{R'(x)}{R(x)} \int \frac{\cos(y)({\partial}^{k-1} R(x-y)-{\partial}^{k-1} R(x))}{\left((R(x)-R(x-y))^2+4R(x)R(x-y)\sin^{2}\left(\frac{y}{2}\right)\right)^{{\alpha}/2}}dy \\ & + C({\alpha})\frac{R'(x)}{R(x)} \int \frac{\sin(y){\partial}^{k} R(x-y)}{\left((R(x)-R(x-y))^2+4R(x)R(x-y)\sin^{2}\left(\frac{y}{2}\right)\right)^{{\alpha}/2}}dy \\ & = G_{221} + G_{222}\end{aligned}$$ We deal first with $G_{222}$. We take first $\Lambda^{{\alpha}-1}$ and estimate its $L^{2}$ norm. 
Modulo low order commutators, the most singular term will be $$\begin{aligned} C({\alpha})\frac{R'(x)}{R(x)} \int \frac{\sin(y)\Lambda^{{\alpha}-1} {\partial}^{k} R(x-y)}{\left((R(x)-R(x-y))^2+4R(x)R(x-y)\sin^{2}\left(\frac{y}{2}\right)\right)^{{\alpha}/2}}dy,\end{aligned}$$ which can be bounded in $L^{2}$ by a multiple of $\|R\|_{H^{k+{\alpha}-1}}$ since the kernel $$\begin{aligned} K(x,y) = \frac{\sin(y)}{\left((R(x)-R(x-y))^2+4R(x)R(x-y)\sin^{2}\left(\frac{y}{2}\right)\right)^{{\alpha}/2}} \in \mathcal{H}_{0}.\end{aligned}$$ We move on to $G_{221}$. We first decompose the kernel into $$\begin{aligned} \frac{\cos(y)}{\left((R(x)-R(x-y))^2+4R(x)R(x-y)\sin^{2}\left(\frac{y}{2}\right)\right)^{{\alpha}/2}} = \frac{1}{\left(\left(R'(x)\right)^2 + (R(x))^2\right)^{{\alpha}/2}} \frac{1}{\left(4 \sin^{2}\left(\frac{y}{2}\right)\right)^{{\alpha}/2}} + \text{Rem}(x,y),\end{aligned}$$ where Rem$(x,y) \in \mathcal{H}_{1-{\alpha}}$. Again, modulo commutators and taking ${\alpha}-1$ derivatives, we can write the most singular terms as $$\begin{aligned} & C({\alpha})\frac{R'(x)}{R(x)}\frac{1}{\left(\left(R'(x)\right)^2 + (R(x))^2\right)^{{\alpha}/2}}\int \frac{(\Lambda^{{\alpha}-1}{\partial}^{k-1} R(x-y)-\Lambda^{{\alpha}-1}{\partial}^{k-1} R(x))}{\left(4 \sin^{2}\left(\frac{y}{2}\right)\right)^{{\alpha}/2}}dy \\ & + C({\alpha})\frac{R'(x)}{R(x)} \int (\Lambda^{{\alpha}-1}{\partial}^{k-1} R(x-y)-\Lambda^{{\alpha}-1}{\partial}^{k-1} R(x))\text{Rem}(x,y)dy \\\end{aligned}$$ Thus $$\begin{aligned} \|G_{221}\|_{H^{{\alpha}-1}} \leq C\|R\|_{H^{k-3+2{\alpha}}} + \text{l.o.t} \leq C\|R\|_{H^{k+{\alpha}-1}} + \text{l.o.t}\end{aligned}$$ The terms that contain a factor of $4({\partial}^{k-1}R(x)R(x-y) + R(x){\partial}^{k-1}R(x-y))\sin^{2}\left(\frac{y}{2}\right)$ in $G_{23}$ are low order and can be estimated as the previous ones. 
From what is left, again, modulo low order commutators, the most singular term is $$\begin{aligned} & C({\alpha})\left(-\frac{{\alpha}}{2}\right)\frac{R'(x)}{R(x)} \int \frac{\cos(y)(R(x-y)-R(x))+\sin(y)R'(x-y)}{\left((R(x)-R(x-y))^2+4R(x)R(x-y)\sin^{2}\left(\frac{y}{2}\right)\right)^{{\alpha}/2+1}} \\ & \times(2(R(x)-R(x-y))(\Lambda^{{\alpha}-1} {\partial}^{k-1}R(x) - \Lambda^{{\alpha}-1}{\partial}^{k-1}R(x-y))) dy, \\\end{aligned}$$ but this term can be bounded as $G_{221}$. This finishes $G_{2}$. 3. $G_3$ is less singular than $G_2$ and thus can be bounded in the same way. 4. The most singular terms in $G_4$ are those in which one derivative hits the denominator and the remaining $k-2$ hit the factor $(R'(x-y)-R'(x))$. These terms can be estimated as $G_{221}$. 5. $G_5$ can also be estimated as $G_{221}$ using that the kernel $$\begin{aligned} \left[\frac{\cos(y)}{\left(\left(\frac{R(x)-R(x-y)}{2\sin\left(\frac{y}{2}\right)}\right)^{2} + R(x)R(x-y)\right)^{{\alpha}/2}}-\frac{1}{((R'(x))^2 + (R(x))^2)^{{\alpha}/2}}\right]\end{aligned}$$ belongs to $\mathcal{H}_{1}$. Combining the two previous lemmas, we obtain the following corollary: \[bootstrapalphamayor1\] Let $R \in H^{k+{\alpha}-1}$. Then $R \in H^{k+2{\alpha}-2}$. By Lemma \[lemmaGalphamayor1\], $g(R) \in H^{{\alpha}-1}$, and by Lemma \[lemmaISalphamayor1\], $LI+S$ is invertible. $(LI+S)^{-1}$ maps $H^{{\alpha}-1}$ into $H^{2{\alpha}-1}$. Thus $$\begin{aligned} {\partial}^{k-1} R = \underbrace{(LI+S)^{-1}\underbrace{g(R)}_{\in H^{{\alpha}-1}}}_{\in H^{2{\alpha}-1}} \in H^{2{\alpha}-1},\end{aligned}$$ which implies $R \in H^{k+2{\alpha}-2}$. The case ${\alpha}= 1$ ---------------------- We will choose the following $LI$, $S$ and $g(R)$. 
It is immediate to check that they satisfy the equation after taking $k-1$ derivatives: $$\begin{aligned} LI({\partial}^{k-1} R)(x) & = \Omega ({\partial}^{k-1} R)'(x) - \frac{1}{2\pi} \int \frac{\cos(y)(({\partial}^{k-1} R)'(x-y)-({\partial}^{k-1} R)'(x))}{\left(4\sin^2\left(\frac{y}{2}\right)\right)^{1/2}}dy \\ S({\partial}^{k-1} R)(x) & = -\frac{1}{2\pi}\frac{({\partial}^{k-1} R)'(x)}{R(x)} \int \frac{\cos(y)(R(x)-R(x-y)) + \sin(y)R'(x-y)}{\left((R(x)-R(x-y))^2+4R(x)R(x-y)\sin^{2}\left(\frac{y}{2}\right)\right)^{1/2}}dy \\ & -\frac{1}{2\pi} \int \frac{\cos(y)(({\partial}^{k-1} R)'(x-y)-({\partial}^{k-1} R)'(x))}{\left(4\sin^2\left(\frac{y}{2}\right)\right)^{1/2}}\left[\frac{1}{\left(\left(\frac{R(x)-R(x-y)}{2\sin\left(\frac{y}{2}\right)}\right)^{2} + R(x)R(x-y)\right)^{1/2}}-1\right]dy \\ & = S_1 + S_2 \\ g(R)(x) & = {\partial}^{k-1}\left( \frac{1}{2\pi}\int \frac{\sin(y)R(x-y)}{\left((R(x)-R(x-y))^2+4R(x)R(x-y)\sin^{2}\left(\frac{y}{2}\right)\right)^{1/2}}dy\right) \\ & + \left({\partial}^{k-1}\left(\frac{1}{2\pi}\frac{R'(x)}{R(x)} \int \frac{\cos(y)(R(x)-R(x-y)) + \sin(y)R'(x-y)}{\left((R(x)-R(x-y))^2+4R(x)R(x-y)\sin^{2}\left(\frac{y}{2}\right)\right)^{1/2}}dy\right)\right. \\ & \left. - \frac{1}{2\pi}\frac{({\partial}^{k-1} R)'(x)}{R(x)} \int \frac{\cos(y)(R(x)-R(x-y)) + \sin(y)R'(x-y)}{\left((R(x)-R(x-y))^2+4R(x)R(x-y)\sin^{2}\left(\frac{y}{2}\right)\right)^{1/2}}dy \right) \\ & + \left({\partial}^{k-1} \left(\frac{1}{2\pi} \int \frac{\cos(y)(R'(x-y)-R'(x))}{\left(4\sin^2\left(\frac{y}{2}\right)\right)^{1/2}}\left[\frac{1}{\left(\left(\frac{R(x)-R(x-y)}{2\sin\left(\frac{y}{2}\right)}\right)^{2} + R(x)R(x-y)\right)^{1/2}}\right]dy\right)\right. \\ & \left.- \frac{1}{2\pi} \int \frac{\cos(y)(({\partial}^{k-1} R)'(x-y)-({\partial}^{k-1} R)'(x))}{\left(4\sin^2\left(\frac{y}{2}\right)\right)^{1/2}}\left[\frac{1}{\left(\left(\frac{R(x)-R(x-y)}{2\sin\left(\frac{y}{2}\right)}\right)^{2} + R(x)R(x-y)\right)^{1/2}}\right]dy\right) 
\\ & = G_1 + G_2 + G_3\end{aligned}$$ \[lemmaISalpha1\] $LI$ and $S$ satisfy the following properties: 1. For every $\Omega$, $LI$ is linear, and maps $X^{2+\log}$ into $H^{1}$. $LI(\Omega_{m})$ is not invertible, and has a one-dimensional kernel spanned by $\cos(mx)$. 2. $\|S({\partial}^{k-1} R_r)(x)\|_{H^{1}} \leq C_r\|{\partial}^{k-1} R_r\|_{X^{2+\log}}$, where $C_r \rightarrow 0$ when $r \to 0$ for every $R_r \in B_r$ and $C_r$ is independent of $k$. <!-- --> 1. The linearity of $LI$ is trivial. We saw in section \[sectionexistencealpha1\] that $LI(\Omega_{m})$ is not invertible, its kernel is spanned by $\cos(mx)$ and $LI$ maps $X^{2+\log}$ into $H^{1}$. This creates a technical problem, since the inverse of $LI$ is not uniformly bounded in $\Omega$. We deal with this problem in Corollary \[bootstrapalpha1\]. 2. We start with $S_2$ and decompose the kernel $$\begin{aligned} \frac{1}{\left(\left(\frac{R(x)-R(x-y)}{2\sin\left(\frac{y}{2}\right)}\right)^{2} + R(x)R(x-y)\right)^{1/2}} = \frac{1}{\left((R(x))^{2} + (R'(x))^{2}\right)^{1/2}} + \text{Rem}(x,y),\end{aligned}$$ where Rem$(x,y) \in \mathcal{H}_{1}$ and $$\begin{aligned} \sup_{x\in {\mathbb{T}}} \left\|\frac{\text{Rem}(x,\cdot)}{\sin(\cdot)}\right\|_{L^1({\mathbb{T}})}\leq C_r, \quad \sup_{y \in {\mathbb{T}}}\left\|\frac{\text{Rem}(\cdot,y)}{\sin(y)}\right\|_{L^1({\mathbb{T}})}\leq C_r \\ \sup_{x\in {\mathbb{T}}} \left\|\frac{{\partial}_{x} \text{Rem}(x,\cdot)}{\sin(\cdot)}\right\|_{L^1({\mathbb{T}})}\leq C_r, \quad \sup_{y \in {\mathbb{T}}}\left\|\frac{{\partial}_{x} \text{Rem}(\cdot,y)}{\sin(y)}\right\|_{L^1({\mathbb{T}})}\leq C_r \\\end{aligned}$$ This means that $S_2$ can be written as $$\begin{aligned} S_2 & = -\frac{1}{2\pi} \int \frac{(({\partial}^{k-1} R)'(x-y)-({\partial}^{k-1} R)'(x))}{\left(4\sin^2\left(\frac{y}{2}\right)\right)^{1/2}}\left[\frac{1}{\left(\left(R'(x)\right)^{2} + (R(x))^2\right)^{1/2}}-1\right]dy \\ & -\frac{1}{2\pi} \int \frac{(({\partial}^{k-1} R)'(x-y)-({\partial}^{k-1} 
R)'(x))}{\left(4\sin^{2}\left(\frac{y}{2}\right)\right)^{1/2}}\text{Rem}(x,y)dy.\end{aligned}$$ We note that we can bound $$\begin{aligned} \left\|\frac{1}{\left(\left(R'(x)\right)^{2} + (R(x))^{2}\right)^{1/2}}-1\right\|_{L^{\infty}} \leq C\left\|(R'(x))^{2} + (R(x))^{2} - 1\right\|_{L^{\infty}} \leq C \left\|1-R\right\|_{L^{\infty}} + C\|R'\|_{L^{\infty}} = C_r. \\ \left\|{\partial}_{x}\left(\frac{1}{\left(\left(R'(x)\right)^{2} + (R(x))^{2}\right)^{1/2}}-1\right)\right\|_{L^{\infty}} \leq C_r. \end{aligned}$$ Then, $S_2$ can be bounded in the $H^{1}$-norm by $$\begin{aligned} & C \left\|{\partial}^{k-1} R\right\|_{H^{2+\log}}\left\|\frac{1}{\left(\left(R'(x)\right)^{2} + (R(x))^{2}\right)^{1/2}}-1\right\|_{L^{\infty}} + C_r\|{\partial}^{k-1}R\|_{H^{1+\log}} + C_r\|{\partial}^{k-1} R\|_{H^{2+\log}} = C_r \|{\partial}^{k-1} R\|_{H^{2+\log}}. \end{aligned}$$ In order to bound the $H^{1}$ norm of $S_1$, we estimate its most singular term, obtaining $$\begin{aligned} \|S_1\|_{H^{1}} \leq C \|{\partial}^{k-1} R\|_{H^{2}} \|R'\|_{L^{\infty}} + C \|{\partial}^{k-1} R\|_{H^{1}} \|R''\|_{L^{\infty}} \leq C_r \|{\partial}^{k-1} R\|_{H^{2+\log}}.\end{aligned}$$ \[lemmaGalpha1\] Let $g(R)$ be defined as before, and let $R \in X^{k+\log}$. Then $g(R) \in H^{1}$. 1. We start with $G_{2}$. We take $k$ derivatives (the outer $k-1$ plus one more) and compute the most singular terms. The objective is to bound the terms in $L^{2}$ norm. 
The terms are $$\begin{aligned} G_{21} & = \frac{1}{2\pi}\frac{R'(x)}{R(x)} \int \frac{\cos(y)({\partial}^{k} R(x)-{\partial}^{k} R(x-y))}{\left((R(x)-R(x-y))^2+4R(x)R(x-y)\sin^{2}\left(\frac{y}{2}\right)\right)^{1/2}}dy \\ G_{22} & = \frac{1}{2\pi}\frac{R'(x)}{R(x)} \int \frac{\sin(y){\partial}^{k+1} R(x-y)}{\left((R(x)-R(x-y))^2+4R(x)R(x-y)\sin^{2}\left(\frac{y}{2}\right)\right)^{1/2}}dy \\ G_{23} & = \frac{1}{4\pi}\frac{R'(x)}{R(x)} \int \frac{\cos(y)(R(x)-R(x-y)) + \sin(y)R'(x-y)}{\left((R(x)-R(x-y))^2+4R(x)R(x-y)\sin^{2}\left(\frac{y}{2}\right)\right)^{3/2}} \\ & \times \left(2(R(x)-R(x-y))({\partial}^{k} R(x) - {\partial}^{k} R(x-y))+4({\partial}^{k} R(x) R(x-y) + R(x){\partial}^{k}R(x-y))\sin^{2}\left(\frac{y}{2}\right)\right)dy.\end{aligned}$$ We start with $G_{21}$. We split the kernel in the following way: $$\begin{aligned} \frac{\cos(y)}{\left((R(x)-R(x-y))^2+4R(x)R(x-y)\sin^{2}\left(\frac{y}{2}\right)\right)^{1/2}} = \frac{1}{\left((R'(x))^2 + (R(x))^2\right)^{1/2}}\frac{1}{\left(4\sin^{2}\left(\frac{y}{2}\right)\right)^{1/2}} + \text{Rem}(x,y),\end{aligned}$$ where Rem$(x,y) \in \mathcal{H}_{0}$. 
$G_{21}$ becomes $$\begin{aligned} \frac{1}{2\pi}\frac{R'(x)}{R(x)} \int \frac{{\partial}^{k} R(x)-{\partial}^{k} R(x-y)}{\left((R'(x))^2 + (R(x))^2\right)^{1/2}}\frac{1}{\left(4\sin^{2}\left(\frac{y}{2}\right)\right)^{1/2}}dy + \frac{1}{2\pi}\frac{R'(x)}{R(x)} \int({\partial}^{k} R(x)-{\partial}^{k} R(x-y))\text{Rem}(x,y)dy,\end{aligned}$$ which can be bounded in $L^{2}$ by $$\begin{aligned} C\|R\|_{X^{k+\log}} + C\|R\|_{H^{k}} < \infty.\end{aligned}$$ We integrate by parts in $G_{22}$ to get $$\begin{aligned} G_{22} & = -\frac{1}{2\pi}\frac{R'(x)}{R(x)} \int \frac{{\partial}^{k} R(x) - {\partial}^{k} R(x-y)}{\left((R(x)-R(x-y))^2+4R(x)R(x-y)\sin^{2}\left(\frac{y}{2}\right)\right)^{1/2}} \\ & \times \left(\cos(y) - \frac{\sin(y)}{2}\frac{2(R(x)-R(x-y))R'(x-y) - 4R(x)R'(x-y)\sin^{2}\left(\frac{y}{2}\right) +4R(x)R(x-y)\sin\left(\frac{y}{2}\right)\cos\left(\frac{y}{2}\right)}{\left((R(x)-R(x-y))^2+4R(x)R(x-y)\sin^{2}\left(\frac{y}{2}\right)\right)}\right)dy\end{aligned}$$ Again, the kernel can be decomposed into $$\begin{aligned} & \frac{1}{\left((R(x)-R(x-y))^2+4R(x)R(x-y)\sin^{2}\left(\frac{y}{2}\right)\right)^{1/2}} \\ & \times \left(\cos(y) - \frac{\sin(y)}{2}\frac{2(R(x)-R(x-y))R'(x-y) - 4R(x)R'(x-y)\sin^{2}\left(\frac{y}{2}\right) +4R(x)R(x-y)\sin\left(\frac{y}{2}\right)\cos\left(\frac{y}{2}\right)}{\left((R(x)-R(x-y))^2+4R(x)R(x-y)\sin^{2}\left(\frac{y}{2}\right)\right)}\right) \\ & = \frac{1}{\left(4\sin^{2}\left(\frac{y}{2}\right)\right)^{1/2}}\frac{R(x)R'(x) + R'(x)R''(x)}{\left((R(x))^2 + (R'(x))^2\right)^{2}} + \text{Rem}(x,y),\end{aligned}$$ where Rem$(x,y) \in \mathcal{H}_{0}$. This produces the following bound: $$\begin{aligned} \|G_{22}\|_{L^{2}} \leq C\|R\|_{X^{k+\log}} + \text{l.o.t}\end{aligned}$$ Finally, $G_{23}$ is estimated as $G_{21}$. 2. $G_1$ is less singular than $G_{22}$ and is estimated the same way. 3. 
The most singular terms in $G_{3}$ are those in which one derivative hits the denominator and the remaining $k-1$ hit one of the $R'(x) - R'(x-y)$ factors. But they are estimated in the same way as $G_{21}$. Let us now decompose $R$ into $R^{high}$, the part corresponding to the frequencies greater than $m$, and $R^{low}$, the part corresponding to the frequencies smaller than or equal to $m$. Since we can regard $LI$ and $S$ as linear operators acting on ${\partial}^{k-1} R$, we can write $$\begin{aligned} \label{ecuaciontildeISG} (LI + S)({\partial}^{k-1} R^{high}) + (LI + S)({\partial}^{k-1} R^{low}) = g(R) \\ \Rightarrow (LI + S)({\partial}^{k-1} R^{high}) = g(R) - (LI + S)({\partial}^{k-1} R^{low}) \equiv \tilde{g}(R)\end{aligned}$$ We do this splitting because we want to make the norm of $S$ small with respect to $\frac{1}{\|LI^{-1}\|}$ to be able to invert $LI+S$. This may not be possible since both quantities ($\|S\|$ and $\frac{1}{\|LI^{-1}\|}$) go to zero as $\Omega \rightarrow \Omega_{m}$ and it is not clear that one is smaller than the other. We will prevent this situation from happening by inverting $LI$ only on high frequencies. Let $\tilde{g}(R)$ be defined as above, and let $R \in X^{k+\log}$. Then $\tilde{g}(R) \in H^{1}$. The regularity of $g(R)$ was proved in Lemma \[lemmaGalpha1\]. We only need to show the regularity of $(LI+S)({\partial}^{k-1} R^{low})$, but this follows easily from the fact that $R^{low}$ is analytic and therefore $LI$ and $S$ can be bounded by low order norms of $R$. \[bootstrapalpha1\] Let $R \in X^{k+\log}$. Then $R \in X^{k+1+\log}$. Let $F_m$ be the space that consists of functions that have modes $> m$. It is immediate that $LI$ maps $F_m$ into $F_m$, but it is not obvious that $S$ does the same. 
Instead, to ensure this, we apply a projection operator $P_{>m}$ onto the modes greater than $m$ to equation \[ecuaciontildeISG\] to obtain $$\begin{aligned} (P_{>m}LI + P_{>m}S)({\partial}^{k-1}R^{high}) = P_{>m}\tilde{g}(R).\end{aligned}$$ Now, both $P_{>m}LI$ and $P_{>m}S$ map $F_m$ into $F_m$ and $ (P_{>m}LI + P_{>m}S)$ is invertible. Therefore $$\begin{aligned} {\partial}^{k-1} R^{high} = \underbrace{(P_{>m}LI+P_{>m}S)^{-1}\underbrace{P_{>m}\tilde{g}(R)}_{\in H^{1}}}_{\in X^{2+\log}} \in X^{2+\log},\end{aligned}$$ which implies $R^{high} \in X^{k+1+\log}$. Finally $$\begin{aligned} \|R\|^{2}_{X^{k+1+\log}} = \underbrace{\|R^{high}\|^{2}_{X^{k+1+\log}}}_{\text{already shown } < \infty} + \underbrace{\|R^{low}\|^{2}_{X^{k+1+\log}}}_{\text{finite sum of finite coefficients since }R \in X^{k+\log}} < \infty,\end{aligned}$$ which implies that $R \in X^{k+1+\log}$. We conclude this section with a proposition concerning the convexity of the patches. \[convexitypatches\] Let $r$ be small enough. Then the set of solutions constructed in the previous section parametrizes convex patches. We compute the signed curvature at a point $x$: $$\begin{aligned} \displaystyle \kappa (x) & = \frac{(R(x))^2 + 2(R'(x))^2 - R(x) R''(x)}{((R(x))^2 + (R'(x))^2)^{3/2}} > \frac{R(x)(R(x)-R''(x))}{((R(x))^2 + (R'(x))^2)^{3/2}} \\ &\displaystyle > \frac{\min_{x}R(x)(\min_{x}R(x) - \max_{x} R''(x))}{((R(x))^2 + (R'(x))^2)^{3/2}} > 0\end{aligned}$$ if $r$ is small enough. This shows the convexity. Combining Corollaries \[bootstrapalphamenor1\], \[bootstrapalphamayor1\] and \[bootstrapalpha1\], and Proposition \[convexitypatches\], we derive Theorem \[teoremaregularidadconvexidad\]. Acknowledgements {#acknowledgements .unnumbered} ================ AC, DC and JGS were partially supported by the grant MTM2011-26696 (Spain) and ICMAT Severo Ochoa project SEV-2011-0087. AC was partially supported by the ERC grant 307179-GFTIPFD. 
We are very grateful to Charlie Fefferman for his suggestions about the regularity proof for ${\alpha}= 1$. [l]{} **Angel Castro**\ [Instituto de Ciencias Matemáticas]{}\ [Universidad Autónoma de Madrid]{}\ [C/ Nicolas Cabrera, 13-15, 28049 Madrid, Spain]{}\ [Email: angelcastro@icmat.es]{}\ \ **Diego Córdoba**\ [Instituto de Ciencias Matemáticas]{}\ [Consejo Superior de Investigaciones Científicas]{}\ [C/ Nicolas Cabrera, 13-15, 28049 Madrid, Spain]{}\ [Email: dcg@icmat.es]{}\ \ [Department of Mathematics]{}\ [Princeton University]{}\ [804 Fine Hall, Washington Rd,]{}\ [Princeton, NJ 08544, USA]{}\ [Email: dcg@math.princeton.edu]{}\ \ **Javier Gómez-Serrano**\ [Department of Mathematics]{}\ [Princeton University]{}\ [610 Fine Hall, Washington Rd,]{}\ [Princeton, NJ 08544, USA]{}\ [Email: jg27@math.princeton.edu]{}\
--- abstract: | We study, in a manifestly supersymmetric way, the possibility of having a reliable two-derivative ${\cal N}=1$ Supergravity effective theory in situations where the fields that are mapped out have masses comparable to the supersymmetry breaking scale and to the masses of the remaining fields.\ We find that in models with two chiral sectors, $H$ and $L$, described by a Kähler invariant function with schematic dependencies of the form $G=G_H(H,\bar H)+G_L(L,\bar L)$, the superfield equation of motion $\partial_H G=0$ leads to a reliable two-derivative Supersymmetric description for the $L$ sector upon requiring slowly varying solutions in the $H$ one. The $H$ fields can be charged only under a hidden gauge sector, and the dependency of the visible gauge kinetic function on them should be suppressed; the same holds for the dependency of the hidden gauge kinetic function on the $L$ fields. In this case the superfield vector equation $\partial_VG=0$ for the hidden sector is also necessary to get rid of the vector superfields, some of them now massive by gauge symmetry breaking, while mapping out unfixed chiral fields related to the would-be Goldstone directions.\ Our results coincide with the naive expectation of promoting the $F$-flatness condition to superspace: despite not being a chiral superfield equation, for this kind of factorizable model it has solutions consistent with chiral superfields. address: | Escuela de Física, Universidad Pedagógica y Tecnológica de Colombia (UPTC),\ Avenida Central del Norte, Tunja, Colombia. 
author: - Diego Gallego title: Light field integration in SUGRA theories --- [*Keywords*]{}: Effective Supersymmetric theories, Supergravity Models, Supersymmetry breaking, Superstring Vacua \[intro\]Introduction ===================== Supersymmetry (SUSY) and, in general, Supergravity (SUGRA) not only continue to be the preferred playground for models beyond the Standard Model, but also provide an ideal framework for dealing with situations where otherwise many calculations would be either impossible or unreliable. However, any constructed model has in mind only a small subset of the full set of fields present in explicit realizations, and these are regarded as encoding all the important dynamics under study. Physically what one has in mind is that the rest of the fields are either decoupled or that their dynamics are negligible. Formally the neglected fields are supposed to be integrated out in such a way that the resulting theory is, at least approximately, SUSY.\ Integrating out fields, in a SUSY fashion, in ${\cal N}=1$ SUGRA theories led recently to some discussion [@deAlwis:2005tg; @deAlwis:2005tf; @Abe:2006xi; @Achucarro:2008sy; @Achucarro:2008fk; @Choi:2008hn; @Choi:2009jn; @conMarcoI; @Lawrence:2008ar; @conMarcoII], finally settled by the work of Brizi, Gomez-Reino and Scrucca [@Brizi:2009nn] where, by requiring an effective two-derivative SUSY description, approximate superfield equations of motion (e.o.m.) were derived for the fields to be integrated out, together with an estimation of the deviations from the exact higher order effective theory. These can be understood in the light of a low-energy effective theory where higher order terms appear suppressed by the mass of the fields being integrated out and therefore turn out to be subleading. A general result of the work by Brizi et al. is that the gravitational effects on the e.o.m. 
are automatically negligible once the masses of the integrated fields lie far above the characteristic energies of the effective theory, which now include the SUSY breaking scale, and therefore the leading superfield e.o.m. coincide with the ones of rigid SUSY.\ There are, however, scenarios where one might like to get rid of some fields even though no hierarchy is realized. Already in ordinary field theories it is clear that in such a case higher order derivative terms are no longer suppressed, as the kinetic energies in the effective theory are comparable to the masses of the integrated fields. An obvious situation that circumvents this problem is the case where both sectors, the one to be integrated out and the one to be kept, denoted hereafter by $\{H\}$ and $\{L\}$ respectively, are completely decoupled. For rigid ${\cal N}=1$ SUSY, without vector fields, this is obtained for a Kähler potential and superpotential factorized schematically as follows $$\label{factoRiidSUSY} K=K_H(H)+K_L(L)\,,~~W=W_H(H)+W_L(L)\,.$$ In SUGRA, instead, the theory is described by the generalized Kähler invariant function, $G=K+\ln|W|^2$, so the factorization is present neither in $G$ nor in the theory. Moreover, gravitational interactions imply that even if $G$ turns out to have a factorized form, i.e., $G=G_H(H)+G_L(L)$, the Lagrangian does not have a fully decoupled structure, as can be seen already in the scalar potential, $$V=e^G\left(G^{I\bar J}G_IG_{\bar J}-3 \right)\,,$$ with $G^{I\bar J}\equiv \big(G_{I\bar J}\big)^{-1}$ the inverse scalar manifold metric, the subindex $I$ denoting derivatives with respect to the superfield $\phi^I$, and everything evaluated in the lowest component of the superfields. 
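To see explicitly how much decoupling survives, note that for a factorized $G$ the mixed second derivatives vanish, so the metric and its inverse are block diagonal. A sketch of the resulting potential (the index convention, $a,\bar b$ for the $H$ sector and $i,\bar\jmath$ for the $L$ one, is introduced only for this illustration):

```latex
% With G = G_H(H,\bar H) + G_L(L,\bar L) the mixed components vanish,
%   G_{a\bar\jmath} = \partial_a \partial_{\bar\jmath} G = 0 ,
% so G_{I\bar J} and its inverse are block diagonal and the potential splits as
V \;=\; e^{G_H+G_L}\Big( G_H^{a\bar b}\,\partial_a G_H\,\partial_{\bar b} G_H
      \;+\; G_L^{i\bar\jmath}\,\partial_i G_L\,\partial_{\bar\jmath} G_L \;-\; 3 \Big)\,.
% The two sectors still communicate, but only through the overall factor
% e^{G_H+G_L} and the gravitational constant term -3.
```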
In fact, in this context the ideal scenario would be that of the sequestered models [@Randall:1998uk; @Dimopoulos:2001ui; @Cheung:2010mc] where only gravitational interactions enter in the interplay between the sectors.\ A factorizable $G$ function still leads to some decoupling, as was first discussed in [@Binetruy:2004hh] and later on studied in [@Achucarro:2008sy; @Achucarro:2008fk; @conMarcoI]. A more detailed study was done by the author in [@mioLVS], where vector fields were also included, but still following a component approach where SUSY is not manifest in the prescription and, therefore, it is not completely clear how to understand the results in a fully SUSY framework. The present letter follows closely the analysis in [@Brizi:2009nn], now for the factorizable models, looking for the conditions for a reliable two-derivative SUSY description in the effective theory. We conclude that the naive guess for the chiral superfield e.o.m., namely promoting the $F$-flatness condition to a field identity at the superspace level, not only is correct upon regarding slowly varying solutions in the $H$ sector but also is consistent, at leading order, with the chiral nature of the fields being integrated out.\ The superfield approach allows a neat analysis in the presence of gauge symmetries, where, in order for the $H$ sector not to be sourced back, the fields to be integrated out can be charged only under some hidden gauge group whose gauge kinetic function dependency on the $L$ sector should be suppressed. Similarly, the gauge kinetic function of the visible sector should depend on the $H$ fields only in a mild way. Then, again upon having slowly varying solutions, the superfield e.o.m.'s are the superspace promotion of the $F$-flatness conditions plus the $D$-flatness ones. 
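The link between the $F$-flatness conditions and the superfield equation $\partial_H G=0$ can be made explicit at the bosonic level. A sketch, using the standard SUGRA expression for the chiral auxiliary fields and assuming the block-diagonal metric of the factorizable models (indices $a,\bar b$ label the $H$ sector):

```latex
% On-shell, the chiral auxiliary fields of N=1 SUGRA are
%   F^I = -\, e^{G/2}\, G^{I\bar J}\, G_{\bar J}\,.
% For a factorizable G the metric is block diagonal, so in the H sector
F^a \;=\; -\, e^{G/2}\, G_H^{a\bar b}\, \partial_{\bar b} G_H \,,
% and, since the block G_H^{a\bar b} is invertible,
F^a = 0 \quad\Longleftrightarrow\quad \partial_{\bar b} G_H = 0\,.
% Promoting this F-flatness condition to an identity at the superspace level
% is precisely the superfield e.o.m. \partial_H G = 0 discussed in the text.
```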
This last situation is studied for the first time here.\ Our results support and generalize the findings of [@mioLVS], which worked only with the scalar Lagrangian, and are particularly relevant in the context of SUSY breaking scenarios, and the related issue of moduli stabilization in Superstring/M-theory, where most of the fields are regarded as SUSY preserving and a detailed description of the SUSY breaking and moduli stabilization is performed only on a tiny subset of fields. In particular the seminal work of Kachru, Kallosh, Linde and Trivedi [@KKLT] falls into the kind of scenarios where a hierarchy, dictated by the ratio between the flux and non-perturbative dynamics, is present and therefore the results of Brizi et al. apply. On the other hand, for a natural Vacuum Expectation Value (VEV) of the superpotential, i.e., $\langle W\rangle\sim 1$ in Planck units, all fields get important gravity contributions to the masses, all of the same order, and therefore no hierarchy is realized. Low-energy SUSY is still possible if the VEV of the $G$ function is negatively large, thanks to the universal factor $e^G$ in the potential. This is precisely what happens in the so-called Large volume scenarios (LVS) for type-IIB Superstring compactifications [@Balasubramanian:2005zx; @Cicoli:2008va] where, moreover, the coupling between the Kähler moduli, $T$, and the SUSY preserving dilaton and complex structure moduli, denoted by $U$, is described by $$G_{mix}\sim\frac{\xi(U,\bar U)}{{\cal V}(T,\overline T)}+\frac{W_{np}(T,U)}{W_{flux}(U)}+h.c.\,,$$ with $\xi$ a function of the dilaton resulting from $\alpha^\prime$ corrections [@Becker:2002nn] and $W_{flux}\gg W_{np}$ the flux induced and non-perturbative parts of the superpotential. 
Thus, for large values of the compact manifold volume, ${\cal V}$, the $G$ function realizes an approximate factorizable form and our results apply (for details and numerical examples see [@mioLVS]).\ We should mention that, although of broad applicability in moduli stabilization models, our analysis should be recast for scenarios where higher order operators are relevant and the two-derivative level leads to poor descriptions, like the case of cosmological models of inflation where the background dynamics should be taken into account [@EffInfl; @Tolley:2009fg]. Then, it is necessary to keep full track of the higher order operators to get insight into the effective SUGRA theory [@Baumann:2011nm]. Nevertheless, some analyses are valid at this level [@Postma; @Achucarro:facto; @Achucarro:2010jv; @Hardeman; @Achucarro:2011yc; @Cespedes:2012hu; @Achucarro:2012sm].\ The letter is organized as follows: section two is dedicated to reviewing the arguments and results in [@Brizi:2009nn] for a two-derivative SUSY low-energy effective action with only chiral superfields, and shows how the factorizable models can also have a reliable SUSY effective description. The third section is dedicated to the case where gauge symmetries enter the game, and a fourth one discusses the gravitational terms and the gauge fixing of the superconformal symmetry, an issue not regarded in previous studies. We close with a summary and discussion of the results. Two-derivative SUSY effective theories ====================================== The discussion in [@Brizi:2009nn] starts by noticing that the usual two-derivative truncation for an effective description of a field theory is not enough when SUSY is required, as higher order terms in the spinor bilinears and auxiliary fields are mapped, by SUSY transformations, to higher order derivative terms.
Therefore, a further truncation in spinor bilinears and auxiliary fields should be imposed, which will be reliable only if the missing terms are negligible. At the superfield level this translates into neglecting SUSY covariant derivatives in the Kähler potential and superpotential in the effective description. In other words, the solutions to the superfield e.o.m., for the fields that are being mapped out, should correspond either to field configurations where all the SUSY covariant derivatives are negligible, or to ones that are independent of any non-negligible derivative.\ We work directly with the Kähler invariant function, $G= K+\ln|W|^2$, as this Kähler gauge usually gives cleaner results and is therefore convenient for cases where the superpotential is non-vanishing or, as in our case, does not introduce any important scaling by, say, a tiny VEV. It is also convenient to use the superconformal formalism and compensator technique to write down the action [@Kaku:1978nz; @VanNieuwenhuizen:1981ae; @Kugo:1982cu]. In this setup the off-shell minimal SUGRA supermultiplet is split, and one of the two auxiliary fields is now contained in a compensator chiral supermultiplet $\Phi$, required by Weyl symmetry, which later on is gauge fixed in order to recover the actual symmetries of SUGRA. Under this formalism the tensor calculi are almost the same as in rigid SUSY, allowing one to write down the Lagrangian as an integral over rigid supercoordinates. In our Kähler gauge, for the moment without gauge interactions, the Lagrangian reads [@Kugo:1982cu; @Ferrara:1983dh]: $$\label{SUGRAL} {\cal L}=-3\int d^2\theta d^2\bar\theta e^{-G/3}\Phi \bar \Phi+\int d^2\theta\Phi^3+h.c.+\cdots\,,$$ the ellipses containing terms involving the graviton, the gravitino and the remaining auxiliary field from the SUGRA multiplet, also including couplings with the matter multiplets.
For the moment we neglect them in our analysis and comment on the consistency of the procedure at the end.\ We regard models with two sectors of chiral fields $\{H^i\}$ and $\{L^\alpha\}$ (notice the distinction in the indices), the $H$ being the ones to be integrated out. The exact superfield e.o.m. for the $H^i$ then reads: $$\label{fullEOM1} -\frac14\Phi\overline{\cal D}^2\left( G_i e^{-G/3}\bar \Phi\right)=0\,,$$ where we have used the identity $\int d^2\bar \theta=-\frac14 \overline {\cal D}^2$, with ${\cal D}$ the SUSY covariant derivative, so as to get a superfield equation written in terms of covariant derivatives. Regarding $\Phi\neq 0$ and expanding the previous expression we have, $$\label{fullEOM} \fl e^{-G/3}\bar \Phi\left( G_{i\bar I\bar J}\overline {\cal D}\bar\phi^{\bar I}\overline {\cal D}\bar\phi^{\bar J}+G_{i\bar I}\overline {\cal D}^2\bar\phi^{\bar I} \right)+G_i\overline {\cal D}^2\left( e^{-G/3}\bar \Phi\right)+2 G_{i\bar I}\overline{\cal D}\bar\phi^{\bar I} \overline {\cal D}\left( e^{-G/3}\bar \Phi\right)=0\,,$$ where for simplicity of notation we omit the spinor index in the SUSY covariant derivatives, and the $I$, $J$ indices run over all superfields $H^i$ and $L^\alpha$. From the previous arguments, the SUSY two-derivative description is reliable if, around the solution to the e.o.m., we can neglect the covariant derivatives.\ A possibility, studied by Brizi et al. in [@Brizi:2009nn] (see also [@conMarcoI]), is one where, around the solution, the energy scale associated with the second non-mixed holomorphic derivatives of the superpotential, i.e., $W_{ij}$, dominates over all others, say the ones associated with the superpotential itself, with the pure $L$-sector and mixed derivatives, e.g., $W_{\alpha\beta}$ and $W_{i\alpha}$, with the space-time derivatives of the fields and with the auxiliary field VEV’s. Then, with a regular behavior of the Kähler potential, the dominating term in \[fullEOM\] is the one proportional to $G_i$, whose leading part is $W_i/W$, so the approximate e.o.m.
reads: $$\label{heavycase} W_i=0\,,$$ which leads to a two-derivative SUSY description for the $L$ fields, as no SUSY covariant derivative is present. In particular, the solutions to this e.o.m. have vanishing $H$ auxiliary fields, implying no contribution to the SUSY breaking from the $H$ sector at leading order.\ Physically, the fact that the holomorphic derivatives $W_{ij}$ dominate means that the masses of the $H$ fields, $M_H\sim W_{ij}$, are larger than the remaining energy scales, namely the masses and kinetic energy of the $L$-sector fields and the SUSY breaking scale. Then, the theory obtained from this leading e.o.m. coincides with the full higher order operator effective theory at first order in an expansion in derivatives, spinor bilinears and auxiliary fields, with the missing terms suppressed by $M_H$, precisely as in any standard low-energy effective description. At the superfield level, any missing operator in the effective $G$ function obtained by the approximate e.o.m. comes suppressed by $m_L/M_H$, with $m_L$ a characteristic energy scale in the $L$ sector, e.g., the masses.\ A second possibility is one where the dynamics ruling both sectors are of the same order and therefore no significant hierarchies appear. Still, it might be possible to get rid of one sector if it turns out to be decoupled, although we can no longer speak of a low-energy description. A decoupling that leads to a SUSY effective theory, proposed first in [@Binetruy:2004hh] and studied at the level of the scalar potential in [@Achucarro:2008sy; @conMarcoI; @mioLVS], can be summarized in a Kähler invariant function with the following structure $$\label{factG} G=G_H(H,\bar H)+G_L(L,\bar L)+\epsilon\, G_{mix}(H,\bar H,L,\bar L)\,,$$ with $G_H$ and $G_L$ of the same order of magnitude and $\epsilon$ small, parameterizing the coupling between the two sectors.
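The perturbative consequences of a structure like \[factG\] can be checked symbolically on a toy model. The sketch below uses our own illustrative choice of functions ($G_H=(h-1)^2$, $G_L=\ell^2$, $G_{mix}=h\ell$, all hypothetical, not taken from any specific compactification): solving the leading e.o.m. $\partial_h G=0$ exactly shows that the solution is the $\epsilon$-independent stationary point of $G_H$ plus an ${\cal O}(\epsilon)$ shift, and that plugging it back into $G$ reproduces the factorized form, with $G_{mix}$ evaluated at the unperturbed point, up to ${\cal O}(\epsilon^2)$.

```python
import sympy as sp

# Lowest components of the two sectors and the small coupling parameter
h, l, eps = sp.symbols('h ell epsilon')

# Toy factorizable Kahler-invariant function (illustrative choice)
G_H = (h - 1)**2          # hidden sector, SUSY-preserving stationary point at h = 1
G_L = l**2                # light sector
G_mix = h*l               # cross coupling, suppressed by eps
G = G_H + G_L + eps*G_mix

# Solve the leading e.o.m. dG/dh = 0 exactly: h = 1 - eps*l/2
sol = sp.solve(sp.diff(G, h), h)[0]
H_o = sol.subs(eps, 0)    # leading piece solves dG_H/dh = 0 alone

# Plug the solution back: the first-order shift drops out of G because
# H_o is a stationary point of G_H, so only G_mix(H_o, l) survives at O(eps)
G_eff = sp.expand(G.subs(h, sol))
G_eff_series = sp.series(G_eff, eps, 0, 2).removeO()
```

In this example `G_eff_series` equals $G_H(H_o)+G_L+\epsilon\,G_{mix}(H_o,\ell)$, i.e. the effective function keeps a valid factorized form at next-to-leading order in $\epsilon$, independently of the $L$ configuration.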
We emphasize, however, that the smallness of the mixing is not necessarily due to a small coupling, but rather to the fact that around the solutions to the e.o.m. all mixed terms turn out to be small. This form for the $G$ function in the superfield e.o.m. implies that all mixed derivatives, $G_{i\alpha}$ or higher order, are suppressed, so that the leading terms are proportional either to covariant derivatives of the $H$ fields or to the $G_i$. Schematically this is: $$\label{leadEOMwithep} G_i\overline {\cal D}^2\left( e^{-G/3}\bar \Phi\right)+{\cal O}\Big(\overline {{\cal D} H},\big(\overline {{\cal D} H}\big)^2,\overline{\cal D}^2\bar H\Big)={\cal O}(\epsilon)\,.$$ For solutions with an approximate two-derivative SUSY description in the effective theory the second term should be negligible and, thus, the first one must cancel independently. Since in general the covariant SUSY derivatives of the $L$ fields are large, this term vanishes only if we require $G_i=0$. Due to the factorization at leading order, the $G_i$ composite superfield depends only on the $H$ components: its lowest component is the evaluation of $\partial_i G_H$ at the $H$ lowest components, while the higher components are combinations of the spinors, spinor derivatives and auxiliary fields. Therefore, a vanishing $G_i$ superfield implies vanishing of the $H$ spinor and auxiliary components, as well as of the derivatives of the spinor ones, a result consistent with the requirement that the $H$ sector preserve SUSY. The SUSY covariant derivative of the $H$ superfields also depends exclusively on these fields, but furthermore on the space-time derivatives of the lowest component, so it is not completely negligible around field configurations solving $G_i=0$.
Therefore, our procedure of neglecting the covariant derivatives is only consistent for slowly varying $H$ field configurations, namely the ones with small associated kinetic energy compared with the remaining energies, e.g., the masses and the SUSY breaking scale. This, contrary to the usual case where the integrated fields are heavy, is not a trivial condition, as the $H$ field excitations have masses and kinetic energies naturally at the same scale as the ones involved in the $L$ sector.\ Here we can exploit the fact that the solutions for vanishing $G_i$ at leading order do not depend on the $L$ fields and, therefore, any consideration on the leading solution for the $H$ sector can be made disregarding the $L$ field configurations. In other words, even though the fields are light, the decoupling forbids, at leading order, the excitation of the $H$ sector fluctuations by the $L$ ones. Thus, we follow the usual consideration of moduli stabilization scenarios, namely to regard slowly varying solutions neglecting the space-time derivatives of the $H$ fields, i.e., solving only the extrema of the potential, without worrying that the dynamics of the fluctuations in the $L$ sector would affect this situation. Under these circumstances the whole SUSY covariant derivative of the $H$ fields is negligible and the solutions to the e.o.m. coincide, at leading order in $\epsilon$, with the ones of the superfield equation $$\label{leadingSol} \partial_i G=0\,.$$ In particular, for fields heavy compared to the remaining energy scales, this e.o.m. reduces to the one found before, \[heavycase\], where the restriction on the kinetic energy of the $H$ sector is loose as well.\ For us the important fact is that the solutions to \[leadingSol\] lead to a two-derivative description which coincides with the full effective theory at leading order in $\epsilon$, as long as the space-time derivatives of the $H$ superfield lowest components are small.
On the other hand, \[leadingSol\] coincides at leading order with $\partial_iG_H=0$ and therefore the solutions are independent of the $L$ fields, as was pointed out before in [@conMarcoI; @mioLVS], so the dynamics of the $L$ sector can be described by a theory where the $H$ sector is regarded as completely frozen, as usually done in moduli stabilization scenarios.\ Being more explicit, if the space-time derivatives of the $H$ fields are also of order $\epsilon$, the exact solution for the $H$ superfields has the following parametric form: $$\label{exactsolution} H=H_o+\epsilon \tilde H(L,\bar L,\bar \Phi,\overline{\cal D}\bar L,\overline{\cal D}\bar \Phi)\,,$$ where $H_o$ is the solution to $\partial_iG_H=0$ and the remainder encodes the non-constant and non-holomorphic part, which, were it not suppressed, would spoil the two-derivative SUSY description. Plugging the solution back into the Kähler invariant function, we find that the effective theory is described by $$\label{Geff} G_{eff}=G_{H,o}+G_L(L,\bar L)+\epsilon G_{mix,o}(L,\bar L)+{\cal O}(\epsilon^2)\,,$$ with the “o” label indicating evaluation at $H=H_o$. Here it is clear that the theory is described, up to next-to-leading order in $\epsilon$, by a valid $G$ function with no SUSY covariant derivatives and therefore has a reliable two-derivative description.\ As a remark, notice that this result relies only on the assumption that the $H$ superfields are stabilized, at leading order, at SUSY preserving points. By stabilization points we understand those where the Hessian of the scalar potential has neither negative nor zero eigenvalues in these directions, namely the fields indeed acquire positive masses squared, so that small fluctuations are not dramatic for the stability of the solution.\ Let us close this section by drawing attention to a potential issue with \[leadingSol\], namely the fact that, $G$ being a real composite superfield, the e.o.m. is not a chiral equation.
This prevents us, in general, from using it for the integration of chiral fields [@Brizi:2009nn]. In the case of factorizable models, however, this is avoided, as the antiholomorphic components of the equations are trivially consistent with the holomorphic ones, in the sense that both lead to vanishing spinor, spinor derivative and auxiliary components, which at the same time are consistent with the lowest component of the equation, which is nothing but the $F$-flatness condition. Therefore, the leading part of the solution, \[exactsolution\], is given by the chiral set $H_o=\{h_o,0,0\}$. Gauge interactions ================== The presence of gauge interactions modifies the analysis, first by the inclusion of the vector superfields $V^A$ in $G=G(\phi^I,\bar \phi^{\bar I}, V^A)$, the index $A$ running over the gauge group generators, and then by their kinetic term, $$\label{gaugkinL} {\cal L}_{gau-kin}=\frac{1}{4}\int d^2\theta f_{AB}(\phi^I){\cal W}^A\cdot{\cal W}^B+h.c.$$ with superfield strengths ${\cal W}_\alpha=-\frac14\overline{\cal D}^2\left(e^{-V}{\cal D}_\alpha e^V\right)$, $\alpha$ a spinor index, and the chiral superfields entering only through the holomorphic gauge kinetic function $f_{AB}$. We do not consider Fayet-Iliopoulos terms, as they seem to be inconsistent with SUGRA [@Komargodski:2009pc].\ Then, for a generic form of $f_{AB}$, the e.o.m. for the $H^i$ superfield is corrected by $$\begin{aligned} \label{fullEOM2Extragauge} \fl\partial_i {\cal L}\supset -\frac14\Phi\left[e^{-G/3}\bar \Phi\left(G_{i\bar IA}\overline {\cal D}\bar\phi^{\bar I}+G_{iAB}\overline {\cal D}V^B+ G_{iA}\overline {\cal D}\right)+2 G_{iA}\overline {\cal D}\left( e^{-G/3}\bar \Phi\right)\right]\overline {\cal D} V^A\cr +\frac14f_{AB,i}{\cal W}^A\cdot{\cal W}^B\,.\end{aligned}$$ These corrections are automatically subleading in case the $H$ fields develop large masses.
Indeed, among others, these terms are related to the SUSY breaking scale through a $D$-term breaking.\ For factorizable models the SUSY covariant derivatives acting on the $L$ fields do not appear but, even requiring suppressed space-time derivatives of the $H$ fields, \[leadingSol\] is no longer a solution, due to the presence of the SUSY covariant derivatives of the vector superfields, and therefore the two-derivative description is not valid either. Notice that all terms inside the brackets in \[fullEOM2Extragauge\] are null for neutral, i.e., gauge invariant, $H$ fields, as all mixed derivatives of $G$ with the vector fields vanish. This is actually a compulsory requirement if we do not want the fields $H$ to be sourced back through the coupling with the gauge fields [@Achucarro:2008sy; @conMarcoII]. In fact, since the energy scale we are interested in is of the order of the $H$ field masses, there is no kinematic constraint forbidding such a process. Moreover, the solutions we are looking for quite probably imply non-vanishing VEV’s for the lowest components of the $H$ superfields, inducing, if these are charged, spontaneous gauge symmetry breaking. Therefore, by gauge invariance there would be as many flat directions, related to the would-be Goldstone fields, as broken symmetries, meaning that the dynamics from $G_H$ alone cannot fix the whole $H$ sector, as the set of equations $\{G_i=0\}$ is no longer linearly independent. So for a SUSY integration to proceed a $D$-flatness condition is also needed, which, however, will depend, in general, on the non-negligible SUSY covariant derivatives of the remaining fields. Requiring a neutral $H$ sector is still not enough for a SUSY integration, as the coupling from the sigma model in the gauge kinetic function follows an analysis identical to that of the gauge coupling above.
Thus, in order to have a two-derivative SUSY description, at least at leading order, we should deal with neutral $H$ fields which enter only in a suppressed way in the gauge kinetic function. In such a case we have again that the leading e.o.m. is given by \[leadingSol\], implying an integration of the $H$ sector with a reliable two-derivative SUSY effective description.\ There might be particular situations where the $D$-term SUSY breaking turns out to be suppressed, for instance in the LVS studied in [@mioLVS], and therefore the back-reaction on the $H$ sector is mild enough to be subleading. Then, as far as the scalar potential is concerned, up to the mass level, no suppression in the gauge kinetic function is needed for a leading-order SUSY freezing of the $H$ sector [@mioLVS]. However, this does not imply negligible contributions to the $H$ superfield solutions coming from other components of ${\cal W}^{\alpha,A}$, e.g., the field strength and gauginos in the Wess-Zumino gauge, which would induce, in particular, non-suppressed higher order derivative terms for the vector fields and higher order fermion bilinears for the gauginos. So, contrary to the case studied by Brizi et al. (see also [@conMarcoII]), where these higher order terms are suppressed by the mass of the $H$ fields and therefore negligible, an approximate two-derivative SUSY effective theory is only realized for suppressed dependencies of the gauge kinetic function on the $H$ fields.\ We can allow the $H$ fields to be charged under a hidden gauge sector ${\cal G}_H$, such that the whole gauge group is given by ${\cal G}={\cal G}_H\otimes {\cal G}_L$ and the vector superfields are split as $V^a\in{\cal G}_L$ and $V^r\in{\cal G}_H$, labeled by lower case letters from the beginning and middle of the alphabet respectively. In this case the e.o.m. for the hidden vector superfields should also be implemented, a situation that can be easily studied using the superspace approach.
In order to be more explicit, and with no essential loss of generality, we show the Abelian case, for which the e.o.m. reads: $$\label{vectorEOM} G_re^{-G/3}\bar \Phi \Phi+\frac{1}{8}\left[ {\cal D}^{\alpha}\left(f_{rp} \overline {\cal D}^2 {\cal D}_\alpha V^p\right)+\overline{\cal D}_{\dot \alpha}\left(\bar f_{rp} {\cal D}^2 \overline{\cal D}^{\dot \alpha} V^p\right)\right]=0\,,$$ where $\alpha$ and $\dot \alpha$ stand for the spinor indices and we have assumed no kinetic mixing in the gauge sector, i.e., $f_{ar}=0$.\ Then, requiring the $H$ field dependencies of the gauge kinetic function for ${\cal G}_L$ to be suppressed, only the SUSY covariant derivatives of the $H$ and ${\cal G}_H$ sectors appear in the e.o.m. for the $H$ fields. The same should be imposed on the dependency of the ${\cal G}_H$ gauge kinetic function on the $L$ fields, otherwise their covariant derivatives would appear in the e.o.m. \[vectorEOM\]. The implementation of the vector superfield integration corresponding to broken symmetries requires a gauge fixing, the unitary gauge being the one with the clearest physical interpretation. However, in practice it is useful to work in a gauge where a chiral superfield, with non-vanishing component in the would-be Goldstone direction, is simply fixed to its VEV. Then, as long as the SUSY covariant derivatives of the $H$ and $V^r$ superfields are negligible, there is a reliable two-derivative SUSY description after the integration of the fields through the set of e.o.m. $$G_{\tilde i}=0\,,~~~G_r=0\,,$$ where $\tilde i$ runs over the chiral fields not affected by the gauge fixing. This set of equations is solved, at next-to-leading order in $\epsilon$, by vanishing spinor and auxiliary components in both the chiral and vector superfields and, for the latter, also the $\theta^2$ and $\bar \theta^2$ components and the vector field have null solutions.
As before, this is not enough to guarantee null covariant derivatives, and small space-time derivatives should be imposed by hand in order to have a reliable two-derivative SUSY effective description. Then, if the suppressed dependencies mentioned above are of order $\epsilon$, the effective theory is described by a $G$ function with the same form as \[Geff\], it remaining to include the $V^a$ in $G_L$ and $G_{mix}$ and the $V^r$, solutions to $G_r=0$, in $G_H$ and $G_{mix}$. Gravitational sector and gauge fixing ===================================== In the previous analysis we disregarded the gravitational sector contribution to the action, encoded in the ellipses in \[SUGRAL\]. On the other hand, we have shown that the effective theory at leading order in $\epsilon$ is described by a theory with superconformal symmetry, namely the one obtained from the $G$ function with the $H$ superfields frozen out. Therefore, since the gravitational terms are uniquely dictated by the covariance of the symmetries, these terms are also well described by the truncated theory.\ However, the dilatation, axial and S symmetries, not being actual symmetries of SUGRA, should be gauge fixed by requiring a canonical normalization of the gravity sector action, eliminating for example kinetic mixings with the matter sector. This proceeds by fixing the compensator in terms of the chiral superfields. However, notice that the form of $G$ implies decoupling only between the $H$ and $L$ sectors but not with the compensator; thus, it is not automatically clear that the gauge fixing is the same in both descriptions or, in other words, that the SUGRA theories obtained upon the gauge fixing coincide at leading order.
Writing the compensator components as $\Phi=\phi\{1,\chi_\phi,U\}$, the fixing reads [@Kugo:1982mr]: $$\label{kugogauge} \phi\equiv e^{G/6}\,,~~\chi_\phi\equiv \frac13 G_I\chi^I\,,$$ where the $G$ function and its derivatives are evaluated at the lowest components of the superfields and $\chi^I$ are the spinor components of the chiral multiplets. Since around the solution to the e.o.m. for the $H$ fields the terms not appearing in the truncated theory, namely $G_i\chi^i$, are of order $\epsilon$, and the functions $G$ and $G_\alpha$ coincide in both theories at next-to-leading order, the gauge fixing is the same modulo subleading terms.\ One of the main targets of the present letter is to clarify the integration of the fields at the superfield level; however, the gauge fixing in \[kugogauge\] cannot be promoted to superspace, as the compensator is a chiral field and therefore cannot depend on the fields in the antiholomorphic sector contained in $G$. A variant of the fixing which can be performed directly in superspace is the one proposed by Cheung et al. in [@Cheung:2011jp], which in our Kähler gauge reads: $$\Phi\equiv e^{Z/3}(1+\theta^2 U)\,,$$ with $Z$ a chiral superfield given by $$Z=\langle G\rangle+\langle G_I\rangle\phi^I\,,$$ where $\langle\,\rangle$ denotes the VEV. Again, since the VEV’s in both descriptions coincide at leading order and the terms not appearing in the truncated description are suppressed, the $Z$ superfields, and therefore the full and truncated theories, match at leading order. Discussion ========== In this letter we have studied the possibility of having a SUSY two-derivative description for effective theories resulting from the integration of light fields in ${\cal N}=1$ SUGRA.
The consistency of a derivative expansion with SUSY transformations requires a parallel expansion in spinor bilinears and auxiliary terms, which at the superfield level is seen as an expansion in SUSY covariant derivatives; a reliable two-derivative effective description is one where these can be neglected.\ Whenever we speak about an effective description we have in mind a region in field configuration space around particular solutions of the e.o.m. for the fields that have been integrated out, and in our case such solutions should preserve SUSY, at least approximately, even though the remaining fields stand at points where SUSY is spontaneously broken. One possibility is that the SUSY preserving sector is heavy enough to present a hierarchy with the SUSY breaking scale, such that the back-reaction from the breaking is suppressed [@conMarcoI; @Brizi:2009nn; @conMarcoII]. For Kähler potentials with no singular behavior such a hierarchy is realized if, in particular, the gravitational effects, e.g., the contributions to the masses, are suppressed, and therefore the leading superfield e.o.m. coincides with the one obtained in rigid SUSY.\ On the other hand, no hierarchy is necessary if in the Lagrangian the two sectors are decoupled, in which case the best scenario in SUGRA would be sequestered sectors. Still, one can allow further interactions, besides the gravitational ones, and achieve some SUSY decoupling if the theory is described by a Kähler invariant $G$ function of the form \[factG\]. Although this was previously realized at the level of the scalar Lagrangian [@Achucarro:2008sy; @conMarcoI; @mioLVS], our analysis shows that the situation can be understood in a fully SUSY framework by working directly in superspace, an approach that also allows the study of more involved situations not regarded before.
We find that the decoupling leads to subleading contributions from the SUSY covariant derivatives of the $L$ sector, despite the fact that these can be large; then slowly varying field configurations, i.e., long wavelengths with associated kinetic energy negligible compared with the masses and the SUSY breaking scale, solving \[leadingSol\] coincide at leading order, in a derivative and $\epsilon$ expansion, with slowly varying solutions of the exact e.o.m., implying negligible SUSY covariant derivatives from the $H$ sector and, therefore, a reliable two-derivative SUSY description. In other words, the implementation of the expansion in spinors, spinor derivatives and auxiliary fields is contained in the e.o.m., with solutions $F^i\sim \chi^i\sim\partial_\mu \chi^i\sim \epsilon$, but the one for the space-time derivative of the lowest component should be imposed by hand, with the confidence that, thanks to the decoupling, this would not be spoiled by the $L$ sector dynamics. This condition is natural, first from the SUSY perspective, as the derivatives are mapped to spinor and auxiliary components which by the e.o.m. are vanishing, and second from the fact that we are not dealing with a genuine low-energy description where the kinetic energy can be neglected compared with the masses. Here we should remark that this freedom in the masses of the $H$ sector occurs only if the $H$ sector is stabilized at SUSY preserving points, as we required. Indeed, in case the $H$ sector leads the breaking of SUSY, the decoupling from the $L$ sector is lost, as can be seen in \[leadEOMwithep\], and the e.o.m. for $H$ starts to be $L$-dependent. In this case, although nice decoupling features are preserved [@Achucarro:facto; @Achucarro:2010jv], some constraints on the mass of the integrated fields apply [@Hardeman; @Postma].\ Let us emphasize that the restriction to slowly varying solutions is only required on the $H$ sector.
This is understood from the fact that the scalar manifold is factorized at leading order and therefore no kinetic mixing is realized, implying that the e.o.m. for the $H$ fields are independent of the $L$ space-time derivatives, whether or not these are large. Therefore, when the solutions are plugged back into the Lagrangian, where the only relevant part for the $H$ fields is the potential, no extra terms carrying derivatives of the $L$ sector appear. In other words, the space-time derivative expansion for the $L$ sector is not affected at leading order by mapping out the $H$ one and, as a result, we can trust the two-derivative truncation for the $L$ fields, allowing rapidly varying field configurations, as much as we do in the original theory.\ The fact that ours is not a low-energy description alerts us to the fact that, even in the case where the fields to be integrated out are neutral, these can be sourced back by the vector fields through the coupling in the sigma model ruled by the gauge kinetic function. Thus, even if the $D$-term SUSY breaking is mild, other terms in the covariant derivative of the gauge vector are not suppressed and therefore no reliable SUSY two-derivative description is available. One should, then, require a suppressed dependency of the gauge kinetic function on the $H$ fields; in this case, all the covariant derivative contributions to the e.o.m. are negligible at leading order, such that the solutions are determined by \[leadingSol\]. We conclude, therefore, that the integration of neutral chiral fields that enter the Kähler invariant function in a factorizable way and enter the gauge kinetic function in a suppressed way leads to a reliable SUSY two-derivative effective description, obtained through the e.o.m. in \[leadingSol\], upon requiring slowly varying solutions in the $H$ sector.\ We can allow charged $H$ fields, but only under some hidden gauge group, in which case the $L$ fields should appear in a suppressed way in the corresponding gauge kinetic function.
Possible flat directions resulting from the symmetry breaking are handled by integrating out the vector supermultiplets, after a gauge fixing for the broken directions. Therefore, the full set of e.o.m. includes the $F$-flatness and $D$-flatness conditions, as superfield identities, all together. I benefited from conversations with Leonardo Brizi, Francesco D’Eramo, Jorge Zanelli and Marco Serone. I especially thank Claudio Scrucca and Marco Serone for useful comments on a preliminary version of the paper. I would like to thank The Abdus Salam International Centre for Theoretical Physics for the hospitality during the completion of this work.\ This work was supported in part by DIN-UPTC grant “proyecto capital semilla” SGI889. References {#references .unnumbered} ==========
de Alwis S P 2005 [*Phys. Lett.*]{} [**B628**]{} 183–187 (*Preprint* )
de Alwis S P 2005 [*Phys. Lett.*]{} [**B626**]{} 223–229 (*Preprint* )
Abe H, Higaki T and Kobayashi T 2006 [*Phys. Rev.*]{} [**D74**]{} 045012 (*Preprint* )
Achucarro A, Hardeman S and Sousa K 2008 [*Phys. Rev.*]{} [**D78**]{} 101901 (*Preprint* )
Achucarro A, Hardeman S and Sousa K 2008 [*JHEP*]{} [**11**]{} 003 (*Preprint* )
Choi K, Jeong K S and Okumura K I 2008 [*JHEP*]{} [**0807**]{} 047 (*Preprint* )
Choi K, Jeong K S, Nakamura S, Okumura K I and Yamaguchi M 2009 [*JHEP*]{} [**0904**]{} 107 (*Preprint* )
Gallego D and Serone M 2009 [*JHEP*]{} [**01**]{} 056 (*Preprint* )
Lawrence A 2009 [*Phys. Rev.*]{} [**D79**]{} 101701 (*Preprint* )
Gallego D and Serone M 2009 [*JHEP*]{} [**06**]{} 057 (*Preprint* )
Brizi L, Gomez-Reino M and Scrucca C A 2009 [*Nucl. Phys.*]{} [**B820**]{} 193–212 (*Preprint* )
Randall L and Sundrum R 1999 [*Nucl. Phys.*]{} [**B557**]{} 79–118 (*Preprint* )
Dimopoulos S, Kachru S, Kaloper N, Lawrence A E and Silverstein E 2001 [*Phys. Rev.*]{} [**D64**]{} 121702 (*Preprint* )
Cheung C, Nomura Y and Thaler J 2010 [*JHEP*]{} [**1003**]{} 073 (*Preprint* )
Binetruy P, Dvali G, Kallosh R and Van Proeyen A 2004 [*Class. Quant. Grav.*]{} [**21**]{} 3137–3170 (*Preprint* )
Gallego D 2011 [*JHEP*]{} [**1106**]{} 087 (*Preprint* )
Kachru S, Kallosh R, Linde A and Trivedi S P 2003 [*Phys. Rev.*]{} [**D68**]{} 046005 (*Preprint* )
Balasubramanian V, Berglund P, Conlon J P and Quevedo F 2005 [*JHEP*]{} [**03**]{} 007 (*Preprint* )
Cicoli M, Conlon J P and Quevedo F 2008 [*JHEP*]{} [**10**]{} 105 (*Preprint* )
Becker K, Becker M, Haack M and Louis J 2002 [*JHEP*]{} [**06**]{} 060 (*Preprint* )
Cheung C, Creminelli P, Fitzpatrick A L, Kaplan J and Senatore L 2008 [*JHEP*]{} [**0803**]{} 014 (*Preprint* )
Tolley A J and Wyman M 2010 [*Phys. Rev.*]{} [**D81**]{} 043502 (*Preprint* )
Baumann D and Green D 2012 [*JHEP*]{} [**1203**]{} 001 (*Preprint* )
Davis S C and Postma M 2008 [*JCAP*]{} [**0804**]{} 022 (*Preprint* )
Achucarro A, Gong J O, Hardeman S, Palma G A and Patil S P 2011 [*JCAP*]{} [**1101**]{} 030 (*Preprint* )
Achucarro A, Gong J O, Hardeman S, Palma G A and Patil S P 2011 [*Phys. Rev.*]{} [**D84**]{} 043502 (*Preprint* )
Hardeman S, Oberreuter J M, Palma G A, Schalm K and van der Aalst T 2011 [*JHEP*]{} [**1104**]{} 009 (*Preprint* )
Achucarro A, Hardeman S, Oberreuter J M, Schalm K and van der Aalst T 2011 (*Preprint* )
Cespedes S, Atal V and Palma G A 2012 [*JCAP*]{} [**1205**]{} 008 (*Preprint* )
Achucarro A, Gong J O, Hardeman S, Palma G A and Patil S P 2012 [*JHEP*]{} [**1205**]{} 066 (*Preprint* )
Kaku M, Townsend P K and van Nieuwenhuizen P 1978 [*Phys. Rev.*]{} [**D17**]{} 3179
Van Nieuwenhuizen P 1981 [*Phys. Rept.*]{} [**68**]{} 189–398
Kugo T and Uehara S 1983 [*Nucl. Phys.*]{} [**B226**]{} 49
Ferrara S, Girardello L, Kugo T and Van Proeyen A 1983 [*Nucl. Phys.*]{} [**B223**]{} 191
Komargodski Z and Seiberg N 2009 [*JHEP*]{} [**06**]{} 007 (*Preprint* )
Kugo T and Uehara S 1983 [*Nucl. Phys.*]{} [**B222**]{} 125
Cheung C, D’Eramo F and Thaler J 2011 [*Phys. Rev.*]{} [**D84**]{} 085012 (*Preprint* )
--- abstract: | We introduce and study the higher tetrahedral algebras, an exotic family of finite-dimensional tame symmetric algebras over an algebraically closed field. The Gabriel quiver of such an algebra is the triangulation quiver associated to the coherent orientation of the tetrahedron. Surprisingly, these algebras occurred in the classification of all algebras of generalized quaternion type, but are not weighted surface algebras. We prove that a higher tetrahedral algebra is periodic if and only if it is non-singular. *Keywords:* Syzygy, Periodic algebra, Symmetric algebra, Tame algebra *2010 MSC:* 16D50, 16G20, 16G60, 16S80 address: - 'Mathematical Institute, Oxford University, ROQ, Oxford OX2 6GG, United Kingdom' - 'Faculty of Mathematics and Computer Science, Nicolaus Copernicus University, Chopina 12/18, 87-100 Toruń, Poland' author: - Karin Erdmann - Andrzej Skowroński title: Higher tetrahedral algebras --- [ [^1] ]{} Introduction and the main results {#sec:intro} ================================= Throughout this paper, $K$ will denote a fixed algebraically closed field. By an algebra we mean an associative finite-dimensional $K$-algebra with an identity. For an algebra $A$, we denote by ${\operatorname{mod}}A$ the category of finite-dimensional right $A$-modules and by $D$ the standard duality ${\operatorname{Hom}}_K(-,K)$ on ${\operatorname{mod}}A$. An algebra $A$ is called *self-injective* if $A_A$ is injective in ${\operatorname{mod}}A$, or equivalently, the projective modules in ${\operatorname{mod}}A$ are injective. A prominent class of self-injective algebras is formed by the *symmetric algebras* $A$ for which there exists an associative, non-degenerate symmetric $K$-bilinear form $(-,-): A \times A \to K$. Classical examples of symmetric algebras are provided by the blocks of group algebras of finite groups and the Hecke algebras of finite Coxeter groups. 
In fact, any algebra $A$ is the quotient algebra of its trivial extension algebra ${\operatorname{T}}(A) = A \ltimes D(A)$, which is a symmetric algebra. From the remarkable Tame and Wild Theorem of Drozd (see [@CB1; @Dr]) the class of algebras over $K$ may be divided into two disjoint classes. The first class consists of the *tame algebras* for which the indecomposable modules occur in each dimension $d$ in a finite number of discrete and a finite number of one-parameter families. The second class is formed by the *wild algebras* whose representation theory comprises the representation theories of all algebras over $K$. Accordingly, we may realistically hope to classify the indecomposable finite-dimensional modules only for the tame algebras. Among the tame algebras we may distinguish the *algebras of polynomial growth* for which the number of one-parameter families of indecomposable modules in each dimension $d$ is bounded by $d^m$ for some positive integer $m$ (depending only on the algebra) whose representation theory is usually well understood (see [@BES4; @Sk1; @Sk2] for some general results). On the other hand, the representation theory of tame algebras of non-polynomial growth is still only emerging. Let $A$ be an algebra. Given a module $M$ in ${\operatorname{mod}}A$, its *syzygy* is defined to be the kernel $\Omega_A(M)$ of a minimal projective cover of $M$ in ${\operatorname{mod}}A$. The syzygy operator $\Omega_A$ is a very important tool to construct modules in ${\operatorname{mod}}A$ and relate them. For $A$ self-injective, it induces an equivalence of the stable module category ${\operatorname{\underline{mod}}}A$, and its inverse is the shift of a triangulated structure on ${\operatorname{\underline{mod}}}A$ [@Ha1]. A module $M$ in ${\operatorname{mod}}A$ is said to be *periodic* if $\Omega_A^n(M) \cong M$ for some $n \geq 1$, and if so the minimal such $n$ is called the *period* of $M$. 
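By way of illustration (a standard toy example, not taken from this paper): over $A = K[x]/(x^2)$, the smallest local symmetric algebra, the unique simple module $S = A/\operatorname{rad} A$ satisfies

```latex
% A = K[x]/(x^2): the regular module A is the minimal projective cover
% of S, and the kernel of A ->> S is rad A.
\begin{aligned}
0 \longrightarrow \operatorname{rad} A \longrightarrow A
  \longrightarrow S \longrightarrow 0, \qquad
\Omega_A(S) \;=\; \operatorname{rad} A \;=\; (x)/(x^2) \;\cong\; S,
\end{aligned}
```

so $S$ is periodic of period $1$.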
The action of $\Omega_A$ on ${\operatorname{mod}}A$ can affect the algebra structure of $A$. For example, if all simple modules in ${\operatorname{mod}}A$ are periodic, then $A$ is a self-injective algebra. An algebra $A$ is defined to be *periodic* if it is periodic viewed as a module over the enveloping algebra $A^e = A^{{\operatorname{op}}} \otimes_K A$, or equivalently, as an $A$-$A$-bimodule. It is known that if $A$ is a periodic algebra of period $n$ then for any indecomposable non-projective module $M$ in ${\operatorname{mod}}A$ the syzygy $\Omega_A^n(M)$ is isomorphic to $M$. Finding or possibly classifying periodic algebras is an important problem. It is very interesting because of connections with group theory, topology, singularity theory and cluster algebras. Periodicity of an algebra, and its period, are invariant under derived equivalences [@Ric2] (see also [@ESk3]). Therefore, to study periodic algebras we may assume that the algebras are basic and indecomposable. We are concerned with the classification of all periodic tame symmetric algebras. In [@Du1] Dugas proved that every representation-finite self-injective algebra, without simple blocks, is a periodic algebra. We note that, by general theory (see [@Sk2 Section 3]), a basic, indecomposable, non-simple, symmetric algebra $A$ is representation-finite if and only if $A$ is socle equivalent to an algebra ${\operatorname{T}}(B)^G$ of invariants of the trivial extension algebra ${\operatorname{T}}(B)$ of a tilted algebra $B$ of Dynkin type with respect to a free action of a finite cyclic group $G$. The representation-infinite, indecomposable, periodic algebras of polynomial growth were classified by Białkowski, Erdmann and Skowroński in [@BES4] (see also [@Sk1; @Sk2]).
In particular, it follows from [@BES4] that every basic, indecomposable, representation-infinite symmetric tame algebra of polynomial growth is socle equivalent to an algebra ${\operatorname{T}}(B)^G$ of invariants of the trivial extension algebra ${\operatorname{T}}(B)$ of a tubular algebra $B$ of tubular type $(2,2,2,2)$, $(3,3,3)$, $(2,4,4)$, $(2,3,6)$ (introduced by Ringel [@R]) with respect to free action of a finite cyclic group $G$. Recently we introduced in [@ESk-WSA] the weighted surface algebras of triangulated surfaces with arbitrary oriented triangles and proved that all these algebras, except the singular tetrahedral algebras, are periodic tame symmetric algebras of period $4$. Here, we investigate the periodicity of higher tetrahedral algebras, being “higher analogues” of the tetrahedral algebras studied in [@ESk-WSA]. Consider the tetrahedron $$\begin{tikzpicture} [scale=1] \node (A) at (-2,0) {$\bullet$}; \node (B) at (2,0) {$\bullet$}; \node (C) at (0,.85) {$\bullet$}; \node (D) at (0,2.8) {$\bullet$}; \coordinate (A) at (-2,0) ; \coordinate (B) at (2,0) ; \coordinate (C) at (0,.85) ; \coordinate (D) at (0,2.8) ; \draw[thick] (A) edge node [left] {3} (D) (D) edge node [right] {6} (B) (D) edge node [below right] {2} (C) (A) edge node [above] {5} (C) (C) edge node [above] {4} (B) (A) edge node [below] {1} (B) ; \end{tikzpicture}$$ with the coherent orientation of triangles: $(1\ 5\ 4)$, $(2\ 5\ 3)$, $(2\ 6\ 4)$, $(1\ 6\ 3)$. 
Then, following [@ESk-WSA], we have the associated triangulation quiver $(Q,f)$ of the form $$\begin{tikzpicture} [scale=.85] \node (1) at (0,1.72) {$1$}; \node (2) at (0,-1.72) {$2$}; \node (3) at (2,-1.72) {$3$}; \node (4) at (-1,0) {$4$}; \node (5) at (1,0) {$5$}; \node (6) at (-2,-1.72) {$6$}; \coordinate (1) at (0,1.72); \coordinate (2) at (0,-1.72); \coordinate (3) at (2,-1.72); \coordinate (4) at (-1,0); \coordinate (5) at (1,0); \coordinate (6) at (-2,-1.72); \fill[fill=gray!20] (0,2.22cm) arc [start angle=90, delta angle=-360, x radius=4cm, y radius=2.8cm] -- (0,1.72cm) arc [start angle=90, delta angle=360, radius=2.3cm] -- cycle; \fill[fill=gray!20] (1) -- (4) -- (5) -- cycle; \fill[fill=gray!20] (2) -- (4) -- (6) -- cycle; \fill[fill=gray!20] (2) -- (3) -- (5) -- cycle; \node (1) at (0,1.72) {$1$}; \node (2) at (0,-1.72) {$2$}; \node (3) at (2,-1.72) {$3$}; \node (4) at (-1,0) {$4$}; \node (5) at (1,0) {$5$}; \node (6) at (-2,-1.72) {$6$}; \draw[->,thick] (-.23,1.7) arc [start angle=96, delta angle=108, radius=2.3cm] node[midway,right] {$\nu$}; \draw[->,thick] (-1.87,-1.93) arc [start angle=-144, delta angle=108, radius=2.3cm] node[midway,above] {$\mu$}; \draw[->,thick] (2.11,-1.52) arc [start angle=-24, delta angle=108, radius=2.3cm] node[midway,left] {$\alpha$}; \draw[->,thick] (1) edge node [right] {$\delta$} (5) (2) edge node [left] {$\varepsilon$} (5) (2) edge node [below] {$\varrho$} (6) (3) edge node [below] {$\sigma$} (2) (4) edge node [left] {$\gamma$} (1) (4) edge node [right] {$\beta$} (2) (5) edge node [right] {$\xi$} (3) (5) edge node [below] {$\eta$} (4) (6) edge node [left] {$\omega$} (4) ; \end{tikzpicture}$$ where $f$ is the permutation of arrows of order $3$ described by the shaded subquivers. We denote by $g$ the permutation on the set of arrows of $Q$ whose $g$-orbits are the four white $3$-cycles. Let $m \geq 2$ be a natural number and $\lambda \in K$. 
We denote by $\Lambda(m,\lambda)$ the algebra given by the above quiver and the relations: $$\begin{aligned} \gamma\delta &= \beta\varepsilon + \lambda (\beta\varrho\omega)^{m-1} \beta\varepsilon, & \delta\eta &= \nu\omega, & \eta\gamma &= \xi\alpha, & \nu \mu &= \delta\xi , \\ \varrho\omega &= \varepsilon\eta + \lambda (\varepsilon\xi\sigma)^{m-1} \varepsilon\eta, & \omega\beta &= \mu\sigma, & \beta\varrho &= \gamma\nu , & \mu \alpha &= \omega \gamma , \\ \xi\sigma &= \eta\beta + \lambda (\eta\gamma\delta)^{m-1} \eta\beta, & \sigma\varepsilon &= \alpha\delta, & \varepsilon\xi &= \varrho\mu, & \alpha\nu &= \sigma\varrho, \\ \omit\rlap{\qquad\quad$\big(\theta f(\theta) f^2(\theta)\big)^{m-1} \theta f(\theta) g\big(f(\theta)\big) = 0$ for any arrow $\theta$ in $Q$.}\end{aligned}$$ We call $\Lambda(m,\lambda)$ a *higher tetrahedral algebra*. Moreover, an algebra $\Lambda(m,\lambda)$ with $\lambda \in K^* = K \setminus \{0\}$ is said to be a *non-singular higher tetrahedral algebra*. The following two theorems describe some properties of higher tetrahedral algebras. \[th:main1\] Let $\Lambda = \Lambda(m,\lambda)$ be a higher tetrahedral algebra. Then $\Lambda$ is a finite-dimensional symmetric algebra with $\dim_K \Lambda = 36 m$. \[th:main2\] Let $\Lambda = \Lambda(m,\lambda)$ be a higher tetrahedral algebra. Then $\Lambda$ is a tame algebra of non-polynomial growth. The following theorem is the main result of the paper. \[th:main3\] Let $\Lambda = \Lambda(m,\lambda)$ be a higher tetrahedral algebra. Then the following statements are equivalent: (i) ${\operatorname{mod}}\Lambda$ admits a periodic simple module. (ii) All simple modules in ${\operatorname{mod}}\Lambda$ are periodic of period $4$. (iii) $\Lambda$ is a periodic algebra of period $4$. (iv) $\Lambda$ is non-singular. 
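The last family of relations is indexed by the arrows through the permutations $f$ and $g$ attached to the triangulation quiver. As a cross-check, the following sketch (the arrow encoding is ours; sources and targets are read off the quiver pictured above) recomputes $g(\theta) = \overline{f(\theta)}$ from $f$ and the involution $\bar{\phantom{\theta}}$, and verifies that $f$ and $g$ each have four $3$-orbits — the shaded triangles and the white $3$-cycles, respectively.

```python
# Arrow data of the triangulation quiver (name: (source, target)).
arrows = {
    "delta": (1, 5), "eta": (5, 4), "gamma": (4, 1),   # shaded triangle (1 5 4)
    "epsilon": (2, 5), "xi": (5, 3), "sigma": (3, 2),  # shaded triangle (2 5 3)
    "rho": (2, 6), "omega": (6, 4), "beta": (4, 2),    # shaded triangle (2 6 4)
    "nu": (1, 6), "mu": (6, 3), "alpha": (3, 1),       # outer shaded region
}

# f rotates each shaded 3-cycle.
f = {"delta": "eta", "eta": "gamma", "gamma": "delta",
     "epsilon": "xi", "xi": "sigma", "sigma": "epsilon",
     "rho": "omega", "omega": "beta", "beta": "rho",
     "nu": "mu", "mu": "alpha", "alpha": "nu"}

def bar(theta):
    """The involution: the other arrow with the same source as theta."""
    s = arrows[theta][0]
    return next(a for a in arrows if a != theta and arrows[a][0] == s)

g = {theta: bar(f[theta]) for theta in arrows}  # g(theta) = bar(f(theta))

def orbits(perm):
    seen, result = set(), []
    for start in perm:
        if start not in seen:
            orb, x = [], start
            while x not in seen:
                seen.add(x)
                orb.append(x)
                x = perm[x]
            result.append(orb)
    return result

# Both f and g follow arrows head-to-tail, and each has four 3-orbits.
assert all(arrows[t][1] == arrows[f[t]][0] for t in arrows)
assert all(arrows[t][1] == arrows[g[t]][0] for t in arrows)
assert sorted(len(o) for o in orbits(f)) == [3, 3, 3, 3]
assert sorted(len(o) for o in orbits(g)) == [3, 3, 3, 3]
```

For instance, the $g$-orbit of $\delta$ comes out as $(\delta\ \xi\ \alpha)$, the white cycle $1 \to 5 \to 3 \to 1$.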
Following [@ESk-AGQT], an algebra $A$ is called an algebra of generalized quaternion type if $A$ is representation-infinite tame symmetric and every simple module in ${\operatorname{mod}}A$ is periodic of period $4$. We prove in [@ESk-AGQT] that an algebra $A$ is of generalized quaternion type with $2$-regular Gabriel quiver if and only if $A$ is a socle deformation of a weighted surface algebra, different from the singular tetrahedral algebra, or is a non-singular higher tetrahedral algebra. This paper is organized as follows. In Section \[sec:pre\] we recall background on special biserial algebras and degenerations of algebras. In Section \[sec:bimodule\] we describe our general approach and results for constructing a minimal projective bimodule resolution of an algebra with periodic simple modules. Section \[sec:proof1\] is devoted to basic properties of the higher tetrahedral algebras and the proof of Theorem \[th:main1\]. Sections \[sec:proof2\] and \[sec:proof3\] contain the proofs of Theorems \[th:main2\] and \[th:main3\], respectively. For general background on the relevant representation theory we refer to the books [@ASS; @SS; @SY]. Preliminary results {#sec:pre} =================== A *quiver* is a quadruple $Q = (Q_0, Q_1, s, t)$ consisting of a finite set $Q_0$ of vertices, a finite set $Q_1$ of arrows, and two maps $s,t : Q_1 \to Q_0$ which associate to each arrow $\alpha \in Q_1$ its source $s(\alpha) \in Q_0$ and its target $t(\alpha) \in Q_0$. We denote by $K Q$ the path algebra of $Q$ over $K$ whose underlying $K$-vector space has as its basis the set of all paths in $Q$ of length $\geq 0$, and by $R_Q$ the arrow ideal of $K Q$ generated by all paths in $Q$ of length $\geq 1$. An ideal $I$ in $K Q$ is said to be admissible if there exists $m \geq 2$ such that $R_Q^m \subseteq I \subseteq R_Q^2$. If $I$ is an admissible ideal in $K Q$, then the quotient algebra $K Q/I$ is called a bound quiver algebra, and is a finite-dimensional basic $K$-algebra. 
Moreover, $K Q/I$ is indecomposable if and only if $Q$ is connected. Every basic, indecomposable, finite-dimensional $K$-algebra $A$ has a bound quiver presentation $A \cong K Q/I$, where $Q = Q_A$ is the *Gabriel quiver* of $A$ and $I$ is an admissible ideal in $K Q$. For a bound quiver algebra $A = KQ/I$, we denote by $e_i$, $i \in Q_0$, the associated complete set of pairwise orthogonal primitive idempotents of $A$, and by $S_i = e_i A/e_i {\operatorname{rad}}A$ (respectively, $P_i = e_i A$), $i \in Q_0$, the associated complete family of pairwise non-isomorphic simple modules (respectively, indecomposable projective modules) in ${\operatorname{mod}}A$. Following [@SW], an algebra $A$ is said to be *special biserial* if $A$ is isomorphic to a bound quiver algebra $K Q/I$, where the bound quiver $(Q,I)$ satisfies the following conditions: (a) each vertex of $Q$ is a source and target of at most two arrows, (b) for any arrow $\alpha$ in $Q$ there are at most one arrow $\beta$ and at most one arrow $\gamma$ with $\alpha \beta \notin I$ and $\gamma \alpha \notin I$. Moreover, if in addition $I$ is generated by paths of $Q$, then $A = K Q/I$ is said to be a *string algebra* [@BR]. It was proved in [@PS] that the class of special biserial algebras coincides with the class of biserial algebras (indecomposable projective modules have biserial structure) which admit simply connected Galois coverings. Furthermore, by [@WW Theorem 1.4] we know that every special biserial algebra is a quotient algebra of a symmetric special biserial algebra. We also mention that, if $A$ is a self-injective special biserial algebra, then $A/{\operatorname{soc}}(A)$ is a string algebra. The following has been proved by Wald and Waschbüsch in [@WW] (see also [@BR; @DS] for alternative proofs). \[prop:2.1\] Every special biserial algebra is tame.
For a positive integer $d$, we denote by ${\operatorname{alg}}_d(K)$ the affine variety of associative $K$-algebra structures with identity on the affine space $K^d$. Then the general linear group ${\operatorname{GL}}_d(K)$ acts on ${\operatorname{alg}}_d(K)$ by transport of the structures, and the ${\operatorname{GL}}_d(K)$-orbits in ${\operatorname{alg}}_d(K)$ correspond to the isomorphism classes of $d$-dimensional algebras (see [@Kr] for details). We identify a $d$-dimensional algebra $A$ with the point of ${\operatorname{alg}}_d(K)$ corresponding to it. For two $d$-dimensional algebras $A$ and $B$, we say that $B$ is a *degeneration* of $A$ ($A$ is a *deformation* of $B$) if $B$ belongs to the closure of the ${\operatorname{GL}}_d(K)$-orbit of $A$ in the Zariski topology of ${\operatorname{alg}}_d(K)$. Geiss’ Theorem [@Ge] shows that if $A$ and $B$ are two $d$-dimensional algebras, $A$ degenerates to $B$ and $B$ is a tame algebra, then $A$ is also a tame algebra (see also [@CB2]). We will apply this theorem in the following special situation. \[prop:2.2\] Let $d$ be a positive integer, and $A(t)$, $t \in K$, be an algebraic family in ${\operatorname{alg}}_d(K)$ such that $A(t) \cong A(1)$ for all $t \in K \setminus \{0\}$. Then $A(1)$ degenerates to $A(0)$. In particular, if $A(0)$ is tame, then $A(1)$ is tame. A family of algebras $A(t)$, $t \in K$, in ${\operatorname{alg}}_d(K)$ is said to be *algebraic* if the induced map $A(-) : K \to {\operatorname{alg}}_d(K)$ is a regular map of affine varieties. Bimodule resolutions of self-injective algebras {#sec:bimodule} =============================================== In this section we describe a general approach for proving that an algebra $A$ with periodic simple modules is a periodic algebra. Let $A = K Q/I$ be a bound quiver algebra, and $e_i$, $i \in Q_0$, be the primitive idempotents of $A$ associated to the vertices of $Q$. 
Then $e_i \otimes e_j$, $i,j \in Q_0$, form a set of pairwise orthogonal primitive idempotents of the enveloping algebra $A^e = A^{{\operatorname{op}}} \otimes_K A$ whose sum is the identity of $A^e$. Hence, $P(i,j) = (e_i \otimes e_j) A^e = A e_i \otimes e_j A$, for $i,j \in Q_0$, form a complete set of pairwise non-isomorphic indecomposable projective modules in ${\operatorname{mod}}A^e$ (see [@SY Proposition IV.11.3]). The following result by Happel [@Ha2 Lemma 1.5] describes the terms of a minimal projective resolution of $A$ in ${\operatorname{mod}}A^e$. \[prop:3.1\] Let $A = K Q/I$ be a bound quiver algebra. Then there is in ${\operatorname{mod}}A^e$ a minimal projective resolution of $A$ of the form $$\cdots \rightarrow {\mathbb{P}}_n \xrightarrow{d_n} {\mathbb{P}}_{n-1} \xrightarrow{ } \cdots \rightarrow {\mathbb{P}}_1 \xrightarrow{d_1} {\mathbb{P}}_0 \xrightarrow{d_0} A \rightarrow 0,$$ where $${\mathbb{P}}_n = \bigoplus_{i,j \in Q_0} P(i,j)^{\dim_K {\operatorname{Ext}}_A^n(S_i,S_j)}$$ for any $n \in {\mathbb{N}}$. The syzygy modules have an important property; a proof of the next lemma may be found in [@SY Lemma IV.11.16]. \[lem:3.2\] Let $A$ be an algebra. For any positive integer $n$, the module $\Omega_{A^e}^n(A)$ is projective as a left $A$-module and also as a right $A$-module. There is no general recipe for the differentials $d_n$ in Proposition \[prop:3.1\], except for the first three which we will now describe. We have $${\mathbb{P}}_0 = \bigoplus_{i \in Q_0} P(i,i) = \bigoplus_{i \in Q_0} A e_i \otimes e_i A .$$ The homomorphism $d_0 : {\mathbb{P}}_0 \to A$ in ${\operatorname{mod}}A^e$ defined by $d_0 (e_i \otimes e_i) = e_i$ for all $i \in Q_0$ is a minimal projective cover of $A$ in ${\operatorname{mod}}A^e$. Recall that, for two vertices $i$ and $j$ in $Q$, the number of arrows from $i$ to $j$ in $Q$ is equal to $\dim_K {\operatorname{Ext}}_A^1(S_i,S_j)$ (see [@ASS Lemma III.2.12]).
Hence we have $${\mathbb{P}}_1 = \bigoplus_{\alpha \in Q_1} P\big(s(\alpha),t(\alpha)\big) = \bigoplus_{\alpha \in Q_1} A e_{s(\alpha)} \otimes e_{t(\alpha)} A .$$ Then we have the following known fact (see [@BES4 Lemma 3.3] for a proof). \[lem:3.3\] Let $A = K Q/I$ be a bound quiver algebra, and $d_1 : {\mathbb{P}}_1 \to {\mathbb{P}}_0$ the homomorphism in ${\operatorname{mod}}A^e$ defined by $$d_1(e_{s(\alpha)} \otimes e_{t(\alpha)}) = \alpha \otimes e_{t(\alpha)} - e_{s(\alpha)} \otimes \alpha$$ for any arrow $\alpha$ in $Q$. Then $d_1$ induces a minimal projective cover $d_1 : {\mathbb{P}}_1 \to \Omega_{A^e}^1(A)$ of $\Omega_{A^e}^1(A) = {\operatorname{Ker}}d_0$ in ${\operatorname{mod}}A^e$. In particular, we have $\Omega_{A^e}^2(A) \cong {\operatorname{Ker}}d_1$ in ${\operatorname{mod}}A^e$. We will denote the homomorphism $d_1 : {\mathbb{P}}_1 \to {\mathbb{P}}_0$ by $d$. For the algebras $A$ we will consider, the kernel $\Omega_{A^e}^2(A)$ of $d$ will be generated, as an $A$-$A$-bimodule, by some elements of ${\mathbb{P}}_1$ associated to a set of relations generating the admissible ideal $I$. Recall that a relation in the path algebra $KQ$ is an element of the form $$\mu = \sum_{r=1}^n c_r \mu_r ,$$ where $c_1, \dots, c_n$ are non-zero elements of $K$ and $\mu_r = \alpha_1^{(r)} \alpha_2^{(r)} \dots \alpha_{m_r}^{(r)}$ are paths in $Q$ of length $m_r \geq 2$, $r \in \{1,\dots,n\}$, having a common source and a common target. The admissible ideal $I$ can be generated by a finite set of relations in $K Q$ (see [@ASS Corollary II.2.9]). In particular, the bound quiver algebra $A = K Q/I$ is given by the path algebra $K Q$ and a finite number of identities $\sum_{r=1}^n c_r \mu_r = 0$ given by a finite set of generators of the ideal $I$.
Consider the $K$-linear homomorphism $\pi : K Q \to {\mathbb{P}}_1$ which assigns to a path $\alpha_1 \alpha_2 \dots \alpha_m$ in $Q$ the element $$\pi(\alpha_1 \alpha_2 \dots \alpha_m) = \sum_{k=1}^m \alpha_1 \alpha_2 \dots \alpha_{k-1} \otimes \alpha_{k+1} \dots \alpha_m$$ in ${\mathbb{P}}_1$, where $\alpha_0 = e_{s(\alpha_1)}$ and $\alpha_{m+1} = e_{t(\alpha_m)}$. Observe that $\pi(\alpha_1 \alpha_2 \dots \alpha_m) \in e_{s(\alpha_1)} {\mathbb{P}}_1 e_{t(\alpha_m)}$. Then, for a relation $\mu = \sum_{r=1}^n c_r \mu_r$ in $K Q$ lying in $I$, we have an element $$\pi(\mu) = \sum_{r=1}^n c_r \pi(\mu_r) \in e_i {\mathbb{P}}_1 e_j ,$$ where $i$ is the common source and $j$ is the common target of the paths $\mu_1,\dots,\mu_n$. The following lemma shows that relations always produce elements in the kernel of $d_1$; the proof is straightforward. \[lem:3.4\] Let $A = K Q/I$ be a bound quiver algebra and $d_1 : {\mathbb{P}}_1 \to {\mathbb{P}}_0$ the homomorphism in ${\operatorname{mod}}A^e$ defined in Lemma \[lem:3.3\]. Then for any relation $\mu$ in $K Q$ lying in $I$, we have $d_1(\pi(\mu)) = 0$. For an algebra $A = K Q/I$ in our context, we will see that there exists a family of relations $\mu^{(1)},\dots,\mu^{(q)}$ generating the ideal $I$ such that the associated elements $\pi(\mu^{(1)}), \dots, \pi(\mu^{(q)})$ generate the $A$-$A$-bimodule $\Omega_{A^e}^2(A) = {\operatorname{Ker}}d_1$. In fact, using Lemma \[lem:3.2\], we will be able to show that $${\mathbb{P}}_2 = \bigoplus_{j = 1}^q P\big(s(\mu^{(j)}),t(\mu^{(j)})\big) = \bigoplus_{j = 1}^q A e_{s(\mu^{(j)})} \otimes e_{t(\mu^{(j)})} A ,$$ and the homomorphism $d_2 : {\mathbb{P}}_2 \to {\mathbb{P}}_1$ in ${\operatorname{mod}}A^e$ such that $$d_2 \big(e_{s(\mu^{(j)})} \otimes e_{t(\mu^{(j)})}\big) = \pi(\mu^{(j)}) ,$$ for $j \in \{1,\dots,q\}$, defines a projective cover of $\Omega_{A^e}^2(A)$ in ${\operatorname{mod}}A^e$.
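The proof of Lemma \[lem:3.4\] is a telescoping cancellation, which the following symbolic sketch makes explicit (a hypothetical encoding: paths are tuples of arrow names, and both the ${\mathbb{P}}_0$-components and the quotient by $I$ are suppressed, since consecutive terms share their end vertices).

```python
def pi(path):
    """pi(a_1...a_m) = sum_k a_1...a_{k-1} (x) a_{k+1}...a_m in the a_k-component."""
    return {(path[:k], path[k], path[k + 1:]): 1 for k in range(len(path))}

def d1(elem):
    """d1 sends e (x) e in the alpha-component to alpha (x) e - e (x) alpha."""
    out = {}
    for (pre, a, suf), c in elem.items():
        for key, dc in (((pre + (a,), suf), c), ((pre, (a,) + suf), -c)):
            out[key] = out.get(key, 0) + dc
    return {k: v for k, v in out.items() if v != 0}

# All middle terms cancel in pairs, leaving  path (x) 1  -  1 (x) path:
p = ("a1", "a2", "a3", "a4")
assert d1(pi(p)) == {(p, ()): 1, ((), p): -1}
# For a relation mu = sum_r c_r mu_r lying in I, the same cancellation gives
# d1(pi(mu)) = mu (x) 1 - 1 (x) mu, which vanishes in A = KQ/I since mu = 0
# there -- precisely the statement of Lemma [lem:3.4].
```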
In particular, we have $\Omega_{A^e}^3(A) \cong {\operatorname{Ker}}d_2$ in ${\operatorname{mod}}A^e$. We will denote this homomorphism $d_2$ by $R$. For the next map $d_3 : {\mathbb{P}}_3 \to {\mathbb{P}}_2$, which we will later denote by $S := d_3$, we do not have a general recipe. To define it, we need a set of minimal generators for $\Omega_{A^e}^3(A)$, and Proposition \[prop:3.1\] tells us where we should look for them. Proof of Theorem \[th:main1\] {#sec:proof1} ============================= Let $\Lambda = \Lambda(m,\lambda)$ for some $m \geq 2$ and $\lambda \in K$. In this section we will study algebra properties of $\Lambda$, and in particular prove Theorem \[th:main1\]. The first results will be used to reduce calculations, and should also be of independent interest. In order to construct a basis of $\Lambda$ with good properties, we analyze the images of paths in $\Lambda$; they have very unusual properties. We introduce some notation. It follows from the relations defining $\Lambda$ that we may define the elements $$\begin{aligned} {X}_1 &= \delta\eta\gamma = \nu\mu\alpha, & {X}_2 &= \varrho\omega\beta = \varepsilon\xi\sigma, & {X}_3 &= \alpha\nu\mu = \sigma\varepsilon\xi, \\ {X}_4 &= \gamma\delta\eta = \beta\varrho\omega, & {X}_5 &= \eta\gamma\delta = \xi\sigma\varepsilon, & {X}_6 &= \omega\beta\varrho = \mu\alpha\nu, \end{aligned}$$ given by products of the arrows around the shaded triangles. Moreover, we define the elements $$\begin{aligned} \tilde{X}_2 &= \varepsilon\eta\beta, & \tilde{X}_4 &= \beta\varepsilon\eta, & \tilde{X}_5 &= \eta\beta\varepsilon. \end{aligned}$$ The quiver $Q$ of $\Lambda$ has an automorphism $\varphi$ of order $3$, defined as follows. Its action on vertices is given by the cycles $$(5 \ 4 \ 2)(1 \ 6 \ 3)$$ and the action on arrows is $$(\delta \ \omega \ \sigma) (\eta \ \beta \ \varepsilon) (\gamma \ \rho \ \xi) (\nu \ \mu \ \alpha).$$ \[lem:4.1\] The action of $\varphi$ extends to an algebra automorphism of $\Lambda$.
We extend $\varphi$ to an algebra map of $K Q$. Then we must check that $\varphi$ preserves the relations, which is a direct calculation. For example, $$\varphi(\gamma\delta) = \varphi(\gamma)\varphi(\delta) = \rho\omega \ \ \mbox{ and } \ \varphi(\beta\varepsilon) = \varphi(\beta)\varphi(\varepsilon) = \varepsilon\eta,$$ and $$\varphi(X_4) = \varphi(\gamma)\varphi(\delta)\varphi(\eta) = \rho\omega\beta = X_2.$$ Hence, $\varphi$ takes the relation for $\gamma\delta$ to the relation for $\rho\omega$. \[lem:4.2\] For each vertex $i$ of $Q$, the element $X_i^m$ belongs to the right socle of $\Lambda$. It follows from the relations that, for each arrow $\theta$ in $Q$, we have $X_{s(\theta)}^m \theta = 0$. For example, we have $$X_1^m\delta = (\nu\mu\alpha)^m\delta = \nu (\mu\alpha\nu)^{m-1}\mu\alpha\delta = \nu (X_{s(\mu)})^{m-1}\mu f(\mu)g\big(f(\mu)\big) = 0.$$ \[lem:4.3\] We have the following equalities in $\Lambda$. (i) $X_1 = \nu\omega\gamma = \delta\xi\alpha$, $X_3 = \sigma\varrho\mu = \alpha\delta\xi$, $X_6 = \mu\sigma\varrho = \omega\gamma\nu$. (ii) $X_2 = \varrho\mu\sigma$, $X_4 = \gamma\nu\omega$, $X_5 = \xi\alpha\delta$. (iii) $X_2 = \tilde{X}_2 + \lambda X_2^m$, $X_4 = \tilde{X}_4 + \lambda X_4^m$, $X_5 = \tilde{X}_5 + \lambda X_5^m$. (iv) $X_2^m = (\tilde{X}_2)^m$, $X_4^m = (\tilde{X}_4)^m$, $X_5^m = (\tilde{X}_5)^m$. The equalities in (i) and (ii) follow directly from the relations defining $\Lambda$. For (iii), observe that the vertices $2$, $4$, $5$ are in one orbit of the automorphism $\varphi$. Hence, it is enough to show that $X_2 = \tilde{X}_2 + \lambda X_2^m$.
We have $$X_2 = \varepsilon\xi\sigma = \rho\mu\sigma = \rho\omega\beta .$$ Moreover $$\rho\omega\beta = (\varepsilon\eta + \lambda X_2^{m-1}\varepsilon\eta)\beta = \tilde{X}_2 + \lambda X_2^{m-1}\varepsilon\eta\beta$$ and $$X_2^{m-1}\varepsilon\eta\beta = X_2^{m-1}\varepsilon(\xi\sigma - \lambda X_5^{m-1}\eta\beta) = X_2^{m-1}\varepsilon\xi\sigma = X_2^m.$$ The equalities in (iv) follow from the equalities in (iii) and the fact that $X_2^m$, $X_4^m$, $X_5^m$ are in the socle of $\Lambda$. \[lem:4.4\] For vertices $i \neq j$ in $Q$, any two paths of length $3$ from $i$ to $j$ are equal and non-zero in $\Lambda$. Consider paths of length three between different vertices $i$ and $j$ in $Q$. Such paths only exist if the vertices are “opposite”, and because of the automorphism $\varphi$, we may assume that $\{ i, j\} = \{ 1, 2\}$. Concerning paths from $1$ to $2$ we have $$\delta\eta\beta = \delta\xi\sigma - \lambda\delta X_5^{m-1}\eta\beta.$$ Now, $\delta X_5 = \delta\eta\gamma\delta = X_1\delta$ and therefore $$\delta X_5^{m-1}\eta\beta = X_1^{m-1}\delta\eta\beta = X_{s(\delta)}^{m-1}\delta f(\delta)g\big(f(\delta)\big) = 0.$$ With this, we have $$\delta\eta\beta = \delta\xi\sigma = \nu\mu\sigma = \nu\omega\beta ,$$ as required. A similar calculation shows that all paths from 2 to 1 of length three are equal in $\Lambda$. \[lem:4.5\] The following statements hold: (i) For $4 \leq k \leq 3m - 1$, any two paths of length $k$ between two vertices in $Q$ are equal and non-zero in $\Lambda$. (ii) For $k = 3m$, any path of length $k$ between two different vertices is zero in $\Lambda$. (iii) For $k = 3m$, any cycle of length $k$ around a vertex $i$ is equal to $X_i^m$. (iv) For $k > 3m$, any path of length $k$ is zero in $\Lambda$. For the following, we write $X_{ij}$ for a path of length three between vertices $i\neq j$. We first show that any two paths of length four between two fixed vertices are equal. 
For this, it suffices to consider paths starting at $1$ and paths starting at $2$. (i1) Paths from $1$ of length four must end at vertex $5$ or vertex $6$. Consider paths ending at $5$. Such a path either ends with arrow $\delta$ or it ends with arrow $\varepsilon$. If it ends with $\delta$ then it is the product of a cyclic path of length three from $1$ to $1$ with $\delta$, hence by Lemma \[lem:4.3\], is equal to $X_1\delta$. Similarly, any path of length four from $1$ ending with $\varepsilon$ is the product of a path of length three from $1$ to $2$ with $\varepsilon$, hence is equal in $\Lambda$ to $X_{12}\varepsilon$. We must show that $X_1\delta = X_{12}\varepsilon$. We have $$X_1\delta = \nu\mu\alpha\delta = \nu\mu\sigma\varepsilon = X_{12}\varepsilon.$$ Similarly, any path of length four from 1 to 6 ends with arrow $\rho$ or with arrow $\nu$, and one shows as above that all are equal in $\Lambda$. (i2) Consider paths of length four starting at vertex $2$; any such path ends at vertex $6$ or vertex $5$. Consider paths ending at vertex $6$; the last arrow in such a path is $\nu$ or $\rho$. If it ends with $\nu$ then the path is of the form $X_{12}\nu$, and if it ends with $\rho$ then it is either $X_2\rho$, or it is $\tilde{X}_2\rho$. We have $$\tilde{X}_2\rho = (X_2 - \lambda X_2^m)\rho = X_2\rho$$ (noting that $X_2^m$ is in the right socle of $\Lambda$). Moreover, $$X_2\rho = \rho\omega\beta\rho = \rho\omega\gamma\nu = X_{12}\nu.$$ For paths ending at vertex $5$ the proof is similar. We finish the proof of (i) by induction on $k$, using arguments as for the case $k=4$. Note that all paths of length $\leq 3m-1$ in $Q$ are non-zero in $\Lambda$ since all zero relations of $\Lambda$ have length $3m$ (and since the relations as listed are minimal). We prove now the statements (ii) and (iii). It suffices again to consider paths starting at 1 and paths starting at $2$.
A cyclic path starting at $1$ of length $3m$ is of the form $Y\gamma$ or $Y'\alpha$, where $Y$ ends at vertex $4$ and $Y'$ ends at vertex $3$. By part (i) we can take $Y=X_1^{m-1}\delta\eta$ and then $Y\gamma = X_1^m$. Likewise, we can take $Y'=X_1^{m-1}\nu\mu$ and get $Y'\alpha = X_1^m$. Similarly, any path of length $3m$ from $2$ to $2$ is equal to $X_2^m$. Now consider a path from vertex $1$ of length $3m$ which does not end at vertex $1$; then it must end at vertex $2$. It is of the form $Y\beta$ with $Y$ from $1$ to $4$, or of the form $Y'\sigma$ with $Y'$ from $1$ to $3$. By part (i) we can take $Y=X_1^{m-1}\delta\eta$ and then $$Y\beta = X_{s(\delta)}^{m-1}\delta f(\delta)g\big(f(\delta)\big) = 0.$$ We can also take $Y' = X_1^{m-1}\nu\mu$ and then again, by the defining relations, $Y'\sigma = 0$. Finally, consider a path from vertex $2$ of length $3m$ which does not end at vertex $2$; then it must end at vertex $1$. Such a path is either of the form $Y\gamma$, or of the form $Y'\alpha$, where $Y$ and $Y'$ are paths of length $3m-1$. We can take $Y=X_2^{m-1}\rho\omega$ and then $Y\gamma=0$, by the defining relations. Similarly, we can take $Y'=X_2^{m-1}\varepsilon\xi$ and then $Y'\alpha=0$, by the defining relations. The statement (iv) follows because $X_i^m$ is in the right socle of $\Lambda$, for any vertex $i$ of $Q$. We present now a basis of $\Lambda$ with good properties. We fix a vertex $i$, and define a basis ${\mathcal{B}}_i$ of $e_i\Lambda$ as follows. Choose a version of $X_i$; suppose $X_i$ starts with $\tau$, and let $\bar{\tau}$ be the other arrow starting at $i$. Now let ${\mathcal{B}}_i:= $ the set of all initial subwords of $ X_i^m$ together with the set $$\Big\{ X_i^k\bar{\tau}, X_i^k\bar{\tau}f(\bar{\tau}) : 0\leq k\leq m-1\Big\} \cup \Big\{ X_i^k\tau f(\tau)g\big(f(\tau)\big): 0\leq k < m-1 \Big\}.$$ Then ${\mathcal{B}}_i$ is a basis for $e_i\Lambda$, and we take ${\mathcal{B}}:= \cup_{i\in Q_0} {\mathcal{B}}_i$.
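A quick count of ${\mathcal{B}}_i$ (a sketch, assuming the initial subwords of $X_i^m$ include the trivial path $e_i$, so one subword in each length $0,1,\dots,3m$) already yields the dimension asserted in Theorem \[th:main1\]:

```python
def basis_size(m):
    """Size of the basis B_i of e_i * Lambda for the parameter m >= 2."""
    initial_subwords = 3 * m + 1  # one initial subword of X_i^m per length 0..3m
    bar_tau_part = 2 * m          # X_i^k bar, X_i^k bar f(bar) for 0 <= k <= m-1
    tau_part = m - 1              # X_i^k tau f(tau) g(f(tau)) for 0 <= k < m-1
    return initial_subwords + bar_tau_part + tau_part

# Each e_i Lambda has dimension 6m, so over the six vertices
# dim_K Lambda = 6 * 6m = 36m, as stated in Theorem [th:main1].
assert all(basis_size(m) == 6 * m for m in range(2, 100))
```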
For each vertex $i$, let $\omega_i:= X_i^m$; this spans the socle of $e_i\Lambda$, by Lemma \[lem:4.5\], and it lies in ${\mathcal{B}}$. The basis ${\mathcal{B}}$ has the following properties: (a) For each $k$ with $1\leq k\leq 3m-1$ the set ${\mathcal{B}}_i$ contains precisely two elements of length $k$. The end vertices are determined by the congruence of $k$ modulo $3$. (b) Any path of length $k$ for $4\leq k \leq 3m-1$ is equal to precisely one basis element, as is any path of length three, except for the cyclic paths between vertices $2, 4, 5$. (c) The product of two elements $b, b'$ from ${\mathcal{B}}$ is either zero, or is again an element in ${\mathcal{B}}$. It is non-zero if and only if $t(b)= s(b')$ and $bb'$ has length $\leq 3m$, and if the length is $3m$ then $s(b) = t(b')$. (For this, note that the cyclic paths of length three through the vertices $2, 5, 4$ are not products of basis elements.) (d) For each $b\in {\mathcal{B}}_i$ there is a unique $\hat{b}\in {\mathcal{B}}$ such that $b\hat{b} = \omega_i$: Say $b = be_j$ is of length $k$; then ${\mathcal{B}}_j$ must contain a unique element $\hat{b}$ of length $3m-k$ which moreover ends at $i$. This is seen by checking through each congruence. Then $b\hat{b}$ is a path of length $3m$ from $i$ to $i$ and it must therefore be equal to $\omega_i$, by Lemma \[lem:4.5\]. Hence $\hat{b}$ is the unique element $b'\in {\mathcal{B}}$ with $bb'=\omega_i$. \[cor:4.6\] $\Lambda$ has dimension $36m$. The next theorem completes the proof of Theorem \[th:main1\]. \[th:4.7\] $\Lambda$ is a symmetric algebra. We use the above basis to define a symmetrizing bilinear form. If $b, b'\in {\mathcal{B}}$, define $$(b, b'):= \mbox{ the coefficient of $\omega_i$ when $bb'$ is expanded in terms of ${\mathcal{B}}$. }$$ This extends to a bilinear form, and it is clearly associative. By (c) and (d) above, the Gram matrix of the bilinear form is non-singular, hence the form is non-degenerate. We show that the form is symmetric.
Let $b, b'\in {\mathcal{B}}$, where $b= e_ibe_j$ and $b'= e_kb'e_l$. Then we have $$(b, b') = \left\{\begin{array}{ll} 1 & j=k, i=l, \ell(bb') = 3m,\cr 0 & \mbox{else}, \end{array}\right.$$ and the same conditions characterize $(b', b)$, since $\ell(b'b) = \ell(bb')$; hence $(b, b') = (b', b)$. Proof of Theorem \[th:main2\] {#sec:proof2} ============================= Let $(Q,f)$ be the triangulation quiver associated to the tetrahedron. Then we have the involution $\bar{}: Q_1 \to Q_1$ on the set $Q_1$ of arrows of $Q$ which assigns to an arrow $\theta \in Q_1$ the arrow $\bar{\theta}$ with $s(\theta) = s(\bar{\theta})$ and $\theta \neq \bar{\theta}$. With this, we obtain another permutation $g: Q_1 \to Q_1$ such that $g(\theta) = {\mkern 5mu\overline{\mkern-5muf(\theta)\mkern-5mu}\mkern 5mu}$ for any $\theta \in Q_1$, as indicated in the introduction. Let $m \geq 2$ be a natural number, $\lambda \in K$, and $\Lambda(m,\lambda)$ the associated higher tetrahedral algebra. We will prove first that $\Lambda(m,\lambda)$ is a tame algebra. We divide the proof into several steps. \[prop:5.1\] For each $\lambda \in K \setminus \{ 0 \}$, $\Lambda(m,\lambda)$ degenerates to $\Lambda(m,0)$.
For each $t \in K$, consider the algebra $\Lambda(t)$ given by the quiver $Q$ and the relations: $$\begin{aligned} \gamma\delta &= \beta\varepsilon + t \lambda (\beta\varrho\omega)^{m-1} \beta\varepsilon, & \delta\eta &= \nu\omega, & \eta\gamma &= \xi\alpha, & \nu \mu &= \delta\xi , \\ \varrho\omega &= \varepsilon\eta + t \lambda (\varepsilon\xi\sigma)^{m-1} \varepsilon\eta, & \omega\beta &= \mu\sigma, & \beta\varrho &= \gamma\nu , & \mu \alpha &= \omega \gamma , \\ \xi\sigma &= \eta\beta + t \lambda (\eta\gamma\delta)^{m-1} \eta\beta, & \sigma\varepsilon &= \alpha\delta, & \varepsilon\xi &= \varrho\mu, & \alpha\nu &= \sigma\varrho, \\ \omit\rlap{\qquad\quad\ \ $\big(\theta f(\theta) f^2(\theta)\big)^{m-1} \theta f(\theta) g\big(f(\theta)\big) = 0$ for any arrow $\theta$ in $Q$.}\end{aligned}$$ Then $\Lambda(t)$, $t \in K$, is an algebraic family in the variety ${\operatorname{alg}}_d(K)$, with $d = 36m$. Observe that $\Lambda(0) \cong \Lambda(m,0)$ and $\Lambda(1) \cong \Lambda(m,\lambda)$. Fix $t \in K \setminus \{ 0 \}$, and take an element $a_t \in K$ with $a_t^{3(m-1)} = t$. Then there is an isomorphism of algebras $\varphi_t : \Lambda(1) \to \Lambda(t)$ such that $\varphi_t(\theta) = a_t \theta$ for any arrow $\theta$ in $Q$. This shows that $\Lambda(t) \cong \Lambda(1)$ for all $t \in K \setminus \{ 0 \}$. Then it follows from Proposition \[prop:2.2\] that $\Lambda(m,\lambda)$ degenerates to $\Lambda(m,0) = \Lambda(0)$. 
Let $\Omega(m)$ be the algebra given by quiver $\Delta$ of the form $$\begin{xy} 0;/r2.5pc/: (0.5,1.125)*+{1}="1"; (1.5,1.675)*+{2}="2"; (0,0.425)*+{3}="3"; (0,-0.525)*+{4}="4"; (-0.5,1.125)*+{5}="5"; (-1.5,1.675)*+{6}="6"; (0,2.8)*+{7}="7"; (2,0)*+{8}="8"; (-2,0)*+{9}="9"; \ar @{->}@/^.25ex/_{\alpha_1} "1";"7" \ar @{->}@/_1ex/_{\alpha_2} "2";"7" \ar @{->}@/^.5ex/_{\alpha_3} "3";"8" \ar @{->}@/_1ex/_{\alpha_4} "4";"8" \ar @{->}@/^.5ex/_{\alpha_5} "5";"9" \ar @{->}@/_1ex/_{\alpha_6} "6";"9" \ar @{->}@/^.5ex/_{\beta_5} "7";"5" \ar @{->}@/_1ex/_{\beta_6} "7";"6" \ar @{->}@/^.5ex/_{\beta_1} "8";"1" \ar @{->}@/_1ex/_{\beta_2} "8";"2" \ar @{->}@/^.5ex/_{\beta_3} "9";"3" \ar @{->}@/_1ex/_{\beta_4} "9";"4" \end{xy}$$ and the relations: $$\begin{aligned} \beta_1\alpha_1 &= \beta_2\alpha_2, & \beta_3\alpha_3 &= \beta_4\alpha_4, & \beta_5\alpha_5 &= \beta_6\alpha_6,\end{aligned}$$ $$\begin{aligned} \alpha_1 (\beta_5 \alpha_5 \beta_3 \alpha_3 \beta_1 \alpha_1)^{m-1} \beta_5 \alpha_5 \beta_3 \alpha_3 \beta_2 &= 0, & \alpha_2 (\beta_6 \alpha_6 \beta_4 \alpha_4 \beta_2 \alpha_2)^{m-1} \beta_6 \alpha_6 \beta_4 \alpha_4 \beta_1 &= 0, \\ \alpha_3 (\beta_1 \alpha_1 \beta_5 \alpha_5 \beta_3 \alpha_3)^{m-1} \beta_1 \alpha_1 \beta_5 \alpha_5 \beta_4 &= 0, & \alpha_4 (\beta_2 \alpha_2 \beta_6 \alpha_6 \beta_4 \alpha_4)^{m-1} \beta_2 \alpha_2 \beta_6 \alpha_6 \beta_3 &= 0, \\ \alpha_5 (\beta_3 \alpha_3 \beta_1 \alpha_1 \beta_5 \alpha_5)^{m-1} \beta_3 \alpha_3 \beta_1 \alpha_1 \beta_6 &= 0, & \alpha_6 (\beta_4 \alpha_4 \beta_2 \alpha_2 \beta_6 \alpha_6)^{m-1} \beta_4 \alpha_4 \beta_2 \alpha_2 \beta_5 &= 0. \end{aligned}$$ For each vertex $i$ of $\Delta$, we denote by $e_i$ the primitive idempotent of $\Omega(m)$ associated to $i$. Moreover, let $e = e_1 + e_2 + e_3 + e_4 + e_5 + e_6$. \[lem:5.2\]The following statements hold: (i) $\Omega(m)$ is a finite-dimensional algebra with $\dim_K \Omega(m) = 81 m + 3$. (ii) $\Lambda(m,0)$ is isomorphic to the idempotent algebra $e \Omega (m) e$. 
\(i) A direct check shows that $\dim_K e_i \Omega(m) = 9 m$ for $i \in \{1,2,3,4,5,6\}$, and $\dim_K e_j \Omega(m) = 9m+1$ for $j \in \{7,8,9\}$. Therefore, we obtain $\dim_K \Omega(m) = 81 m + 3$. \(ii) Consider the paths of length $2$ in $\Delta$ $$\begin{aligned} \delta &= \alpha_1 \beta_5, & \nu &= \alpha_1 \beta_6, & \varepsilon &= \alpha_2 \beta_5, & \varrho &= \alpha_2 \beta_6, & \alpha &= \alpha_3 \beta_1, & \sigma &= \alpha_3 \beta_2, \\ \gamma &= \alpha_4 \beta_1, & \beta &= \alpha_4 \beta_2, & \xi &= \alpha_5 \beta_3, & \eta &= \alpha_5 \beta_4, & \mu &= \alpha_6 \beta_3, & \omega &= \alpha_6 \beta_4.\end{aligned}$$ Then these paths satisfy the relations defining the algebra $\Lambda(m,0)$. Therefore, $e \Omega (m) e$ is isomorphic to $\Lambda(m,0)$. The algebra $\Omega (m)$ can be viewed as a blowup of the algebra $\Lambda(m,0)$. The reason to consider it here is as follows. The higher tetrahedral algebras $\Lambda(m,\lambda)$ have no visible degenerations to special biserial algebras. But the algebra $\Omega (m)$ admits a degeneration to a special biserial algebra, as we will show below. Then Proposition \[prop:2.1\] will imply that $\Omega (m)$ is a tame algebra, and consequently $\Lambda(m,0)$ is a tame algebra (see [@DS0 Theorem]).
For each $t \in K$, let $\Sigma(m,t)$ be the algebra given by the quiver $\Sigma$ of the form $$\begin{xy} 0;/r2pc/: (1,1.4)*+{x}="x"; (0,0)*+{y}="y"; (-1,1.4)*+{z}="z"; (0,2.8)*+{a}="a"; (2,0)*+{b}="b"; (-2,0)*+{c}="c"; \ar @{->}_{\alpha} "x";"a" \ar @{->}_{\beta} "b";"x" \ar @{->}_{\gamma} "y";"b" \ar @{->}_{\sigma} "c";"y" \ar @{->}_{\omega} "z";"c" \ar @{->}_{\delta} "a";"z" \ar@(dr,dl)^{\eta} "y";"y" \ar@(u,r)^{\varepsilon} "x";"x" \ar@(l,u)^{\mu} "z";"z" \end{xy}$$ and the relations: $$\begin{aligned} \beta \alpha &=0, & \sigma \gamma &= 0, & \delta \omega &= 0, & \varepsilon^2 &= t \varepsilon, & \eta^2 &= t \eta, & \mu^2 &= t \mu,\end{aligned}$$ $$\begin{aligned} t(\alpha\delta\mu\omega\sigma\eta\gamma\beta\varepsilon)^m &= \varepsilon(\alpha\delta\mu\omega\sigma\eta\gamma\beta\varepsilon)^m , & t(\varepsilon\alpha\delta\mu\omega\sigma\eta\gamma\beta)^m &= (\varepsilon\alpha\delta\mu\omega\sigma\eta\gamma\beta)^m \varepsilon, \\ \notag t(\gamma\beta\varepsilon\alpha\delta\mu\omega\sigma\eta)^m &= \eta(\gamma\beta\varepsilon\alpha\delta\mu\omega\sigma\eta)^m , & t(\eta\gamma\beta\varepsilon\alpha\delta\mu\omega\sigma)^m &= (\eta\gamma\beta\varepsilon\alpha\delta\mu\omega\sigma)^m \eta, \\ \notag t(\omega\sigma\eta\gamma\beta\varepsilon\alpha\delta\mu)^m &= \mu(\omega\sigma\eta\gamma\beta\varepsilon\alpha\delta\mu)^m , & t(\mu\omega\sigma\eta\gamma\beta\varepsilon\alpha\delta)^m &= (\mu\omega\sigma\eta\gamma\beta\varepsilon\alpha\delta)^m \mu,\end{aligned}$$ $$\begin{aligned} (\alpha\delta\mu\omega\sigma\eta\gamma\beta\varepsilon)^m &= (\varepsilon\alpha\delta\mu\omega\sigma\eta\gamma\beta)^m , & (\gamma\beta\varepsilon\alpha\delta\mu\omega\sigma\eta)^m &= (\eta\gamma\beta\varepsilon\alpha\delta\mu\omega\sigma)^m, \\ \notag (\omega\sigma\eta\gamma\beta\varepsilon\alpha\delta\mu)^m &= (\mu\omega\sigma\eta\gamma\beta\varepsilon\alpha\delta)^m ,\end{aligned}$$ $$\begin{aligned} (\delta\mu\omega\sigma\eta\gamma\beta\varepsilon\alpha)^m \delta &=0 , & \alpha 
(\delta\mu\omega\sigma\eta\gamma\beta\varepsilon\alpha)^m &=0 , & (\beta\varepsilon\alpha\delta\mu\omega\sigma\eta\gamma)^m \beta &=0 , \\ \notag \gamma (\beta\varepsilon\alpha\delta\mu\omega\sigma\eta\gamma)^m &=0 , & (\sigma\eta\gamma\beta\varepsilon\alpha\delta\mu\omega)^m \sigma &=0 , & \omega (\sigma\eta\gamma\beta\varepsilon\alpha\delta\mu\omega)^m &=0 .\end{aligned}$$ We note that for $t \in K \setminus \{ 0 \}$ the relations (3) follow from the relations (2), and the relations (4) from the relations (1) and (2). For example, we have the equalities $$\begin{aligned} t (\delta\mu\omega\sigma\eta\gamma\beta\varepsilon\alpha)^m \delta &= t \delta (\mu\omega\sigma\eta\gamma\beta\varepsilon\alpha\delta)^m = \delta (\mu\omega\sigma\eta\gamma\beta\varepsilon\alpha\delta)^m \mu \\ &= \delta \mu (\omega\sigma\eta\gamma\beta\varepsilon\alpha\delta\mu)^m = t \delta (\omega\sigma\eta\gamma\beta\varepsilon\alpha\delta\mu)^m = 0 ,\end{aligned}$$ because $\delta \omega = 0$, and hence $(\delta\mu\omega\sigma\eta\gamma\beta\varepsilon\alpha)^m \delta = 0$, for $t \in K \setminus \{ 0 \}$. For each vertex $i$ of $\Sigma$, we denote by $f_i$ the primitive idempotent of $\Sigma(m,t)$ associated to $i$. \[lem:5.3\] The following statements hold: (i) For each $t \in K$, $\Sigma(m,t)$ is a finite-dimensional algebra with $\dim_K \Sigma(m,t) = 81 m + 3$. (ii) $\Sigma(m,t) \cong \Sigma(m,1)$ for any $t \in K \setminus \{ 0 \}$. (iii) $\Sigma(m,0)$ is a special biserial algebra. \(i) It follows from the relations defining $\Sigma(m,t)$ that $\dim_K f_i \Sigma(m,t) = 9 m + 1$ for $i \in \{a,b,c\}$, and $\dim_K f_j \Sigma(m,t) = 18 m$ for $j \in \{x,y,z\}$. Hence, we obtain $\dim_K \Sigma(m,t) = 81 m + 3$. \(ii) Fix $t \in K \setminus \{ 0 \}$, and take an element $b_t \in K$ with $b_t^8 = t$.
Then there exists an isomorphism of algebras $\psi_t : \Sigma(m,1) \to \Sigma(m,t)$ such that $\psi_t(\varepsilon) = t^{-1} \varepsilon$, $\psi_t(\eta) = t^{-1} \eta$, $\psi_t(\mu) = t^{-1} \mu$, and $\psi_t(\theta) = b_t \theta$ for any arrow $\theta \in \{\alpha, \beta, \gamma, \sigma, \omega, \delta \}$. \(iii) This follows from the relations defining $\Sigma(m,0)$. \[lem:5.4\] The algebras $\Omega(m)$ and $\Sigma(m,1)$ are isomorphic. We shall prove that there is a well-defined isomorphism of algebras $\varphi : \Omega(m) \to \Sigma(m,1)$ such that $$\begin{aligned} \varphi(e_1) &= \varepsilon, & \varphi(e_2) &= f_x - \varepsilon, & \varphi(e_3) &= \eta, & \varphi(e_4) &= f_y - \eta, \\ \varphi(e_5) &= \mu, & \varphi(e_6) &= f_z - \mu, & \varphi(e_7) &= f_a, & \varphi(e_8) &= f_b, & \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \varphi(e_9) &= f_c, \\ \varphi(\alpha_1) &= \varepsilon \alpha, & \varphi(\alpha_2) &= \alpha - \varepsilon \alpha, & \varphi(\beta_1) &= \beta \varepsilon, & \varphi(\beta_2) &= - \beta + \beta\varepsilon, \\ \varphi(\alpha_3) &= \eta \gamma, & \varphi(\alpha_4) &= \gamma - \eta \gamma, & \varphi(\beta_3) &= \sigma \eta, & \varphi(\beta_4) &= - \sigma + \sigma \eta, \\ \varphi(\alpha_5) &= \mu \omega, & \varphi(\alpha_6) &= \omega - \mu \omega, & \varphi(\beta_5) &= \delta\mu, & \varphi(\beta_6) &= - \delta + \delta\mu. \end{aligned}$$ Observe that $$\begin{aligned} \varphi(e_1 + e_2) &= f_x, & \varphi(e_3 + e_4) &= f_y, & \varphi(e_5 + e_6) &= f_z, \\ \varphi(\alpha_1 + \alpha_2) &= \alpha , & \varphi(\alpha_3 + \alpha_4) &= \gamma , & \varphi(\alpha_5 + \alpha_6) &= \omega , \\ \varphi(\beta_1 - \beta_2) &= \beta , & \varphi(\beta_3 - \beta_4) &= \sigma , & \varphi(\beta_5 - \beta_6) &= \delta .
\end{aligned}$$ We have in $\Sigma(m,1)$ the following equalities $$\begin{aligned} \varphi(e_1^2) &= \varphi(e_1) = \varepsilon = \varepsilon^2 = \varphi(e_1)^2, \\ \varphi(e_2^2) &= \varphi(e_2) = f_x - \varepsilon = (f_x - \varepsilon)^2 = \varphi(e_2)^2, \\ \varphi(e_3^2) &= \varphi(e_3) = \eta = \eta^2 = \varphi(e_3)^2, \\ \varphi(e_4^2) &= \varphi(e_4) = f_y - \eta = (f_y - \eta)^2 = \varphi(e_4)^2, \\ \varphi(e_5^2) &= \varphi(e_5) = \mu = \mu^2 = \varphi(e_5)^2, \\ \varphi(e_6^2) &= \varphi(e_6) = f_z - \mu = (f_z - \mu)^2 = \varphi(e_6)^2, \\ \varphi(\beta_1) \varphi(\alpha_1) &= \beta \varepsilon^2 \alpha = \beta \varepsilon \alpha = (- \beta + \beta\varepsilon) (\alpha - \varepsilon \alpha) = \varphi(\beta_2) \varphi(\alpha_2), \\ \varphi(\beta_3) \varphi(\alpha_3) &= \sigma \eta^2 \gamma = \sigma \eta \gamma = (- \sigma + \sigma \eta) (\gamma - \eta \gamma) = \varphi(\beta_4) \varphi(\alpha_4), \\ \varphi(\beta_5) \varphi(\alpha_5) &= \delta\mu^2 \omega = \delta\mu \omega = (- \delta + \delta\mu) (\omega - \mu \omega) = \varphi(\beta_6) \varphi(\alpha_6) .\end{aligned}$$ It remains to show that the six zero relations defining $\Omega(m)$ correspond via $\varphi$ to the six commutativity relations (2), with $t=1$, defining $\Sigma(m,1)$. We will show this for the first two relations, because the proof for the other four is similar. 
We have the equalities $$\begin{aligned} \varphi(&\alpha_1) \big(\varphi(\beta_5) \varphi(\alpha_5) \varphi(\beta_3) \varphi(\alpha_3) \varphi(\beta_1) \varphi(\alpha_1)\big)^{m-1} \varphi(\beta_5) \varphi(\alpha_5) \varphi(\beta_3) \varphi(\alpha_3) \varphi(\beta_2) \\ &= \varepsilon \alpha (\delta\mu^2 \omega \sigma \eta^2 \gamma \beta \varepsilon^2 \alpha)^{m-1} \delta\mu^2 \omega \sigma \eta^2 \gamma (- \beta + \beta\varepsilon) \\ &= - \varepsilon \alpha (\delta\mu \omega \sigma \eta \gamma \beta \varepsilon \alpha)^{m-1} \delta\mu \omega \sigma \eta \gamma \beta + \varepsilon \alpha (\delta\mu \omega \sigma \eta \gamma \beta \varepsilon \alpha)^{m-1} \delta\mu \omega \sigma \eta \gamma \beta \varepsilon \\ &= - (\varepsilon \alpha \delta\mu \omega \sigma \eta \gamma \beta)^{m} + (\varepsilon \alpha \delta\mu \omega \sigma \eta \gamma \beta)^{m} \varepsilon = 0, \\ \varphi(&\alpha_2) \big(\varphi(\beta_6) \varphi(\alpha_6) \varphi(\beta_4) \varphi(\alpha_4) \varphi(\beta_2) \varphi(\alpha_2)\big)^{m-1} \varphi(\beta_6) \varphi(\alpha_6) \varphi(\beta_4) \varphi(\alpha_4) \varphi(\beta_1) \\ &= \varphi(\alpha_2) \big(\varphi(\beta_5) \varphi(\alpha_5) \varphi(\beta_3) \varphi(\alpha_3) \varphi(\beta_1) \varphi(\alpha_1)\big)^{m-1} \varphi(\beta_5) \varphi(\alpha_5) \varphi(\beta_3) \varphi(\alpha_3) \varphi(\beta_1) \\ &= (\alpha - \varepsilon \alpha) (\delta\mu \omega \sigma \eta \gamma \beta \varepsilon \alpha)^{m-1} \delta\mu \omega \sigma \eta \gamma \beta \varepsilon \\ &= (\alpha \delta\mu \omega \sigma \eta \gamma \beta \varepsilon)^{m} - \varepsilon (\alpha \delta\mu \omega \sigma \eta \gamma \beta \varepsilon)^{m} = 0. \end{aligned}$$ \[cor:5.5\] The algebra $\Omega(m)$ degenerates to the special biserial algebra $\Sigma(m,0)$. In particular, $\Omega(m)$ is a tame algebra. 
It follows from Lemmas \[lem:5.3\] and \[lem:5.4\] that $\Sigma(m,t)$, $t \in K$, is an algebraic family in the variety ${\operatorname{alg}}_d(K)$ with $d = 81 m + 3$ such that $\Sigma(m,t)\cong \Sigma(m,1) \cong \Omega(m)$ for any $t \in K \setminus \{ 0 \}$ and $\Sigma(m,0)$ is a special biserial algebra. Then it follows from Propositions \[prop:2.1\] and \[prop:2.2\] that $\Omega(m)$ is a tame algebra. \[prop:5.6\] For each $\lambda \in K$, $\Lambda(m,\lambda)$ is a tame algebra of non-polynomial growth. It follows from Lemma \[lem:5.2\] (ii), Corollary \[cor:5.5\] and [@DS0 Theorem] that $\Lambda(m,0)$ is a tame algebra. Then, applying Propositions \[prop:2.2\] and \[prop:5.1\], we conclude that $\Lambda(m,\lambda)$ is a tame algebra for any $\lambda \in K \setminus \{ 0 \}$. Let $\Lambda = \Lambda(m,\lambda)$ for an arbitrary $\lambda \in K$. Consider now the quotient algebra $\Gamma$ of $\Lambda$ by the ideal generated by the arrows $\delta$, $\nu$, $\varepsilon$, $\varrho$. Then $\Gamma$ is the algebra given by the quiver $$ \xymatrix@C=4.5pc@R=3pc{ 1 & 3 \ar[l]_{\alpha} \ar[ld]^(.2){\sigma} & 5 \ar[l]_{\xi} \ar[ld]^(.2){\eta} \\ 2 & 4 \ar[l]^{\beta} \ar[lu]_(.2){\gamma} & 6 \ar[l]^{\omega} \ar[lu]_(.2){\mu} }$$ and the relations $$\begin{aligned} && \omega \beta &= \mu \sigma, & \eta \gamma &= \xi \alpha, & \mu \alpha &= \omega \gamma, & \xi \sigma &= \eta \beta. &&\end{aligned}$$ Then $\Gamma$ is the tame minimal non-polynomial growth algebra $(30)$ from [@NoS]. Therefore, $\Lambda$ is of non-polynomial growth. We end this section with a Galois covering interpretation of the singular higher tetrahedral algebras. Let $m \geq 2$ be a natural number.
We denote by $B(m)$ the fully commutative algebra of the following quiver $$ \xymatrix@C=1.75pc@R=1.2pc@!=.5pc{ 1 && 3 \ar[ll] \ar[lldd] && 5 \ar[ll] \ar[lldd] & \ar@{->}[l] & && 6m-5 \ar@{-}[l] \ar@{-}[ld] && 6m-3 \ar[ll] \ar[lldd] && 6m-1 \ar[ll] \ar[lldd] \\ && && & \ar@{->}[lu] \ar@{->}[ld] & \cdots & \\ 2 && 4 \ar[ll] \ar[lluu] && 6 \ar[ll] \ar[lluu] & \ar@{->}[l] & && 6m-4 \ar@{-}[l] \ar@{-}[lu] && 6m-2 \ar[ll] \ar[lluu] && 6m \ar[ll] \ar[lluu] }$$ Consider the repetitive category $\widehat{B(m)}$ of $B(m)$. Then the Nakayama automorphism $\nu_{\widehat{B(m)}}$ of $\widehat{B(m)}$ admits an $m$-th root $\varphi_m$ such that $(\varphi_m)^m = \nu_{\widehat{B(m)}}$. Let $\Gamma(m)$ be the orbit algebra $\widehat{B(m)}/(\varphi_m)$ of $\widehat{B(m)}$ with respect to the infinite cyclic group $(\varphi_m)$ generated by $\varphi_m$ (see [@Sk2] for relevant definitions). Then we obtain the following proposition. \[prop:5.7\] The algebras $\Lambda(m,0)$ and $\Gamma(m)$ are isomorphic. We would like to stress that, for any $\lambda \in K \setminus \{ 0 \}$, the non-singular higher tetrahedral algebra $\Lambda(m,\lambda)$ is not the orbit algebra of the repetitive category of an algebra. Proof of Theorem \[th:main3\] {#sec:proof3} ============================= We show first that every simple $\Lambda$-module is periodic of period four. This will then tell us what the terms of a minimal projective bimodule resolution of $\Lambda$ must be (see Proposition \[prop:3.1\]). As for notation, we write $\Omega$ for syzygies of right $\Lambda$-modules, and we write $\Omega_{\Lambda^e}$ for syzygies of right $\Lambda^e$-modules ($\Lambda$-$\Lambda$-bimodules). \[prop:6.1\] Each simple $\Lambda$-module is periodic of period four. There is an exact sequence $$0\to S_i \to P_i \to P_x\oplus P_y \to P_j\oplus P_k \to P_i \to S_i \to 0$$ where the arrows adjacent to $i$ end at $j, k$ and start at $x, y$.
The automorphism $\varphi$ of $\Lambda$ induces an equivalence of the module category ${\operatorname{mod}}\Lambda$, with two orbits on simple modules. We only need to prove periodicity for one simple from each orbit. We will consider $S_1$ and $S_4$. \(1) We compute $\Omega^2(S_1)$ which we identify with the kernel of the map $d_1: P_6\oplus P_5\to P_1$ defined by $$d_1(a, b) := \nu a + \delta b,$$ for $a \in P_6$ and $b \in P_5$. Since $\nu\omega = \delta\eta$ and $\nu \mu = \delta \xi$, the kernel contains the submodule generated by $\phi$ and $\psi$, where $$\phi = (-\omega, \eta) \ \ \mbox{ and } \ \ \psi = (\mu, -\xi).$$ We will show that ${\operatorname{Ker}}d_1 = \phi\Lambda + \psi \Lambda$. Since we have one inclusion, it suffices to show that both spaces have the same dimension, that is, we must show that $\phi \Lambda + \psi \Lambda$ has dimension $6m+1$. We observe that $\phi \Lambda$ is isomorphic to $\Omega^{-1}(S_4)$ since $\omega, \eta$ are the arrows ending at vertex $4$. Similarly, $\psi \Lambda \cong \Omega^{-1}(S_3)$. In particular, $\dim_K \phi \Lambda = 6m-1 = \dim_K \psi \Lambda$. It follows that we must show that $\dim_K \phi \Lambda \cap \psi \Lambda = 6m-3$, that is, $$\dim_K \phi \Lambda/(\phi \Lambda\cap \psi \Lambda) = 2.$$ (1a) We identify the intersections of $\phi \Lambda$ and $\psi \Lambda$ with $0\oplus P_5$. We claim that each of $\phi \Lambda \cap (0\oplus P_5)$ and $\psi \Lambda\cap (0\oplus P_5)$ is 1-dimensional, spanned by $( 0, X_5^m)$. Indeed, suppose $\phi p = (0, z)$ for some $p\in \Lambda$ and $0\neq z$. We may assume that $p$ is a monomial in the arrows. To have $\omega p=0$ the monomial $p$ must have length $\geq 3m-1$. To have $\omega p =0$ and $\eta p = z\neq 0$, we must have that $p$ has length $3m-1$ and ends at vertex $4$, and then $\eta p = X_5^m$. For the converse, take $p= X_4^{m-1}\gamma\delta$. Similarly one proves the second statement. 
(1b) We claim that $\phi J^2 + \phi \gamma K$ is contained in the intersection $\phi \Lambda\cap \psi \Lambda$. Namely, we have $\phi \gamma = -\psi \alpha$, by the relations. Next, we have $$\begin{aligned} \tag{*} \label{eq:*0} \phi \beta = (-\omega\beta, \eta\beta) = -\psi\sigma - (0, \lambda X_5^{m-1}\eta\beta).\end{aligned}$$ Hence $\phi\beta\varepsilon = \psi\sigma\varepsilon - (0, \lambda X_5^m)$ (using Lemmas \[lem:4.3\], \[lem:4.4\], \[lem:4.5\]). By (1a) above, this belongs to the intersection and it follows from these that $\phi J^2 \subseteq \phi \Lambda \cap \psi \Lambda$. We note that if $\lambda = 0$, then $\Omega^2(S_i)$ has more than two minimal generators, and hence $S_i$ is not periodic of period $4$. (1c) Note that $\phi J^2 + \phi\gamma K$ has dimension $6m-3$. We have the chain of submodules $$\phi J^2 + \phi \gamma K \subseteq \phi \Lambda\cap \psi \Lambda \subseteq \phi \Lambda ,$$ and the quotient $\phi \Lambda /(\phi J^2 + \phi \gamma K)$ is spanned by the cosets of $\phi$ and $\phi \beta$. Assume for a contradiction that $\phi\beta \in \phi \Lambda \cap \psi \Lambda$. Then, by (\[eq:\*0\]), we have $(0,X_5^{m-1}\eta\beta) \in \psi \Lambda$, but this contradicts (1a). So $\phi\beta$ is not in the intersection, and therefore the dimension of $\phi \Lambda/ (\phi \Lambda\cap \psi \Lambda)$ is $2$, as required. (1d) Now it is easy to see that $S_1$ has period four. Namely, define $d_2: P_4\oplus P_3\to \Omega^2(S_1)$ by $$d_2(u, v):= \phi u + \psi v,$$ for $u \in P_4$ and $v \in P_3$. The kernel of $d_2$, that is, $\Omega^3(S_1)$ has dimension $2(6m)-(6m+1) = 6m-1$. We have seen that $\phi \gamma = - \psi\alpha$, and therefore $(\gamma, \alpha)\Lambda\subseteq {\operatorname{Ker}}(d_2)$. This submodule is isomorphic to $\Omega^{-1}(S_1)$ and has dimension $6m-1$. We deduce that $$\Omega^{-1}(S_1) \cong (\gamma, \alpha)\Lambda \cong \Omega^3(S_1).$$ So $S_1$ is periodic of period dividing $4$, and then equal to $4$. 
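To summarize the dimension bookkeeping in (1): $$\dim_K \Omega^2(S_1) = \dim_K(P_6\oplus P_5) - \dim_K {\operatorname{rad}}P_1 = 12m - (6m-1) = 6m+1 ,$$ $$\dim_K (\phi \Lambda\cap \psi \Lambda) = 2(6m-1) - (6m+1) = 6m-3 , \qquad \dim_K \Omega^3(S_1) = 2(6m) - (6m+1) = 6m-1 ,$$ and the last number is exactly $\dim_K \Omega^{-1}(S_1)$, as used in (1d).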
\(2) We compute $\Omega^2(S_4)$, which we identify with the kernel of $d_1: P_1\oplus P_2\to P_4$ defined as $$d_1(w,z) = \gamma w + \beta z,$$ for $w \in P_1$ and $z \in P_2$. This is analogous to (1); there is only a small difference in the formulae. Using the relations, the kernel of $d_1$ contains $\phi$ and $\psi$, where $$\phi = \big(\delta, -\varepsilon -\lambda(\rho\omega\beta)^{m-1}\varepsilon\big) \ \ \mbox{ and } \ \psi = (-\nu, \rho).$$ By the same arguments as in (1), to prove that ${\operatorname{Ker}}(d_1) = \phi \Lambda + \psi \Lambda$, we must show that $\dim_K \phi \Lambda/(\phi \Lambda\cap \psi \Lambda) = 2$. We have $\phi \eta = -\psi \omega$, which is in the intersection, and we have $$\phi\xi = -\psi\mu - (0, \lambda X_2^{m-1}\varepsilon\xi) .$$ As before one shows that $\phi J^2 = \psi J^2$ and hence $\phi J^2$ is contained in the intersection. Suppose $\phi\xi$ is in the intersection. Then it follows that $(0, -\lambda X_2^{m-1}\varepsilon\xi)$ is in $\psi \Lambda$, which contradicts the analogue of (1a). It follows that $\phi \Lambda/(\phi \Lambda\cap \psi \Lambda)$ is 2-dimensional. Then as in (1d) one concludes that $S_4$ has $\Omega$-period four. We use the notation as in Section \[sec:bimodule\], in particular the description of ${\mathbb{P}}_0$ and ${\mathbb{P}}_1$. For the higher tetrahedral algebra, we need to specify ${\mathbb{P}}_2$, which has generators corresponding to the minimal relations involving paths of length two. Each of these minimal relations has a term $\theta f(\theta)$ for $\theta$ an arrow, and this gives a bijection between arrows and minimal relations involving paths of length two. So we take $${\mathbb{P}}_2:= \oplus_{\theta \in Q_1} \Lambda(e_{s(\theta)}\otimes e_{t(f(\theta))})\Lambda.$$ We may denote the minimal relation with term $\theta f(\theta)$ by $\mu_{\theta}$.
Then the definition of $R$ in Section \[sec:bimodule\] specializes to $$R: {\mathbb{P}}_2\to {\mathbb{P}}_1, \ \ R(e_{s(\theta)}\otimes e_{t(f(\theta))}) := \pi(\mu_{\theta}).$$ \[lemma:6.2\] The homomorphism $R: {\mathbb{P}}_2\to {\mathbb{P}}_1$ induces a projective cover of $\Omega^2_{\Lambda^e}(\Lambda)$ in ${\operatorname{mod}}\Lambda^e$. In particular, $\Omega^3_{\Lambda^e}(\Lambda) = {\operatorname{Ker}}R$. This is similar to the proof of Lemma 7.2 of [@ESk-WSA]. By Propositions \[prop:3.1\] and \[prop:6.1\], we can take ${\mathbb{P}}_3 = \oplus_{i\in Q_0} \Lambda(e_i\otimes e_i)\Lambda$. For each vertex $i$ of $Q$, we define an element $\psi_i$ as follows. Let $\tau, \bar{\tau}$ be the arrows starting at $i$, and let $\theta, \bar{\theta}$ be the arrows ending at $i$. Set $$\psi_i:= (e_i\otimes e_{t(\theta)})\theta + (e_i\otimes e_{t(\bar{\theta})})\bar{\theta} - \tau(e_{t(\tau)}\otimes e_i) - \bar{\tau}(e_{t(\bar{\tau})}\otimes e_i).$$ Then we define a $\Lambda^e$-module homomorphism $S: {\mathbb{P}}_3\to {\mathbb{P}}_2$ by $$S(e_i\otimes e_i):= \psi_i, \ \mbox{for } i\in Q_0.$$ \[lemma:6.3\] The homomorphism $S:{\mathbb{P}}_3\to {\mathbb{P}}_2$ induces a projective cover of $\Omega^3_{\Lambda^e}(\Lambda)$ in ${\operatorname{mod}}\Lambda^e$. In particular, we have $\Omega^4_{\Lambda^e}(\Lambda) = {\operatorname{Ker}}(S).$ We know that the kernel of $R$ is $\Omega^3_{\Lambda^e}(\Lambda)$, and we know that it has minimal generators corresponding to the vertices of $Q$. As well, from the definition, the element $\psi_i$ does not lie in $({\operatorname{rad}}{\mathbb{P}}_2)^2$. Therefore, it is enough to show that $R(\psi_i) = 0$ for all $i$. The algebra automorphism $\varphi$ of $\Lambda$ defined in Section \[sec:proof1\], extends to an automorphism of $\Lambda^e$. One checks that it commutes with the map $R$ and that it takes $\psi_i$ to $\psi_{\varphi(i)}$. So it is enough to take $i=1$ and $i=4$. \(1) We compute $R(\psi_1)$.
This is equal to $$\begin{aligned} R\big((e_1\otimes e_4)\gamma &+ (e_1\otimes e_3)\alpha - \nu(e_6\otimes e_1) - \delta(e_5\otimes e_1)\big)\cr = \ & \pi(\delta\eta - \nu\omega)\gamma + \pi(\nu\mu - \delta\xi)\alpha -\nu \big(\pi(\mu\alpha -\omega\gamma)\big) - \delta\big(\pi(\eta\gamma - \xi\alpha)\big)\cr = \ & (e_1\otimes \eta\gamma + \delta\otimes \gamma - e_1\otimes \omega\gamma - \nu\otimes \gamma)\cr &+ (e_1\otimes \mu\alpha + \nu\otimes \alpha - e_1\otimes \xi\alpha - \delta\otimes \alpha)\cr &- (\nu\otimes \alpha + \nu\mu\otimes e_1 - \nu\otimes \gamma - \nu\omega \otimes e_1)\cr & - (\delta\otimes \gamma + \delta\eta\otimes e_1 - \delta\otimes \alpha - \delta\xi\otimes e_1).\end{aligned}$$ The terms of the form $\alpha_1\otimes \alpha_2$ for $\alpha_i$ arrows cancel. The terms in $(e_1\otimes e_5)\Lambda$ are $$e_1\otimes \eta\gamma - e_1\otimes \xi\alpha = e_1\otimes (\eta\gamma - \xi\alpha) = 0 .$$ Similarly, there are two terms in $(e_1\otimes e_6)\Lambda$ and two terms in $\Lambda(e_4\otimes e_1)$ and two terms in $\Lambda(e_3\otimes e_1)$, and they all cancel. Hence $R(\psi_1) = 0$. \(2) We compute $R(\psi_4)$. This is equal to $$\begin{aligned} \tag{*} \label{eq:*} R\big((e_4\otimes e_5)\eta &+ (e_4\otimes e_6)\omega - \gamma(e_1\otimes e_4) - \beta(e_2\otimes e_4)\big)\\ \notag &= \pi(\gamma\delta - \beta\varepsilon - \lambda X_4^{m-1}\beta\varepsilon)\eta + \pi(\beta\rho - \gamma\nu)\omega\\ \notag &\ \ \ -\gamma \big(\pi(\delta\eta - \nu\omega)\big) - \beta\big(\pi(\rho\omega - \varepsilon\eta - \lambda X_2^{m-1}\varepsilon\eta)\big) .\end{aligned}$$ We must choose a version of $X_4$ and of $X_2$. It is natural to take $X_4 = \beta\rho\omega$ and $X_2 = \rho\omega\beta$. We continue the calculation.
With this, (\[eq:\*\]) is equal to $$\begin{aligned} & (e_4\otimes \delta\eta + \gamma\otimes \eta - e_4\otimes \varepsilon\eta - \beta\otimes \eta) - \lambda \pi\big((\beta\rho\omega)^{m-1}\beta\varepsilon\big)\eta\cr &\ + \ (e_4\otimes \rho\omega + \beta\otimes \omega - e_4\otimes \nu\omega - \gamma\otimes \omega)\cr & \ - \ (\gamma\otimes \eta + \gamma\delta\otimes e_4 - \gamma\otimes \omega - \gamma\nu\otimes e_4)\cr & - (\beta\otimes \omega + \beta\rho \otimes e_4 - \beta\otimes \eta - \beta\varepsilon\otimes e_4) + \lambda\beta\Big(\pi\big((\rho\omega\beta)^{m-1}\varepsilon\eta\big)\Big) . \end{aligned}$$ The terms of the form $\alpha_1\otimes \alpha_2$ with $\alpha_i$ arrows all cancel. Using the relations $\delta\eta = \nu\omega$ and $\beta\rho = \gamma\nu$, four of the other terms cancel. This leaves $$-e_4\otimes \varepsilon\eta + e_4\otimes \rho\omega - \lambda \pi \big((\beta\rho\omega)^{m-1}\beta\varepsilon\big)\eta - \gamma\delta\otimes e_4 + \beta\varepsilon\otimes e_4 + \lambda \beta \pi\big((\rho\omega\beta)^{m-1}\varepsilon\eta\big).$$ The first two terms combine, and the fourth and fifth term combine, and we can rewrite the expression as $$\begin{aligned} \tag{**} \label{eq:**} \lambda (e_4\otimes X_2^{m-1}\varepsilon\eta) - \lambda \pi\big((\beta\rho\omega)^{m-1}\beta\varepsilon\big)\eta - \lambda (X_4^{m-1}\beta\varepsilon \otimes e_4) + \lambda \beta \pi\big((\rho\omega\beta)^{m-1}\varepsilon\eta\big).\end{aligned}$$ Now we combine the second and fourth term of (\[eq:\*\*\]), and we expand both. All terms except the ones $-\otimes e_4$ and $e_4\otimes -$ cancel, and we are left with $$\begin{aligned} \tag{***} \label{eq:***} \lambda\big((\beta\rho\omega)^{m-1}\beta\varepsilon \otimes e_4 - e_4 \otimes (\rho\omega\beta)^{m-1}\varepsilon\eta \big) .\end{aligned}$$ The first term of (\[eq:\*\*\*\]) is the negative of the third term in (\[eq:\*\*\]) since $\beta\rho\omega=X_4$. 
The second term of (\[eq:\*\*\*\]) is the negative of the first term of (\[eq:\*\*\]) since $\rho\omega\beta = X_2$. Hence, everything cancels and $R(\psi_4) = 0$, as required. \[th:6.4\] There is an isomorphism $\Omega^4_{\Lambda^e}(\Lambda) \cong \Lambda$ in ${\operatorname{mod}}\Lambda^e$. This is similar to the proof of Theorem 7.4 in [@ESk-WSA]. We have defined a symmetrizing bilinear form on $\Lambda$ in the proof of Theorem \[th:4.7\]. We define elements $\xi_i\in {\mathbb{P}}_3$ by $$\xi_i = \sum_{b\in {\mathcal{B}}_i} b\otimes b^* ,$$ where $\{ b^*: b\in {\mathcal{B}}\}$ is the dual basis corresponding to ${\mathcal{B}}$, defined by $(-,-)$. As in [@ESk-WSA], it follows that the map $$\theta: \Lambda \to {\mathbb{P}}_3, \ \mbox{with} \ \theta(e_i) = \xi_i \ \mbox{for all $i \in Q_0$},$$ is a monomorphism of $\Lambda$-$\Lambda$-bimodules. Moreover, one shows that $S(\xi_i)=0$, exactly as in [@ESk-WSA]. This only uses general properties of the dual basis and no details of the specific algebra. Furthermore, $\Omega^4_{\Lambda^e}(\Lambda)$ is free of rank $1$ as a left or right $\Lambda$-module. Namely, we have the exact sequence of bimodules $$0 \to \Omega^4_{\Lambda^e}(\Lambda) \to {\mathbb{P}}_3 \to {\mathbb{P}}_2 \to {\mathbb{P}}_1 \to {\mathbb{P}}_0 \to \Lambda\to 0.$$ We have ${\mathbb{P}}_0\cong {\mathbb{P}}_3$, and moreover ${\mathbb{P}}_1$ and ${\mathbb{P}}_2$ obviously have the same rank as free $\Lambda$-modules on each side. By the exactness, it follows that $\Lambda$ and $\Omega_{\Lambda^e}^4(\Lambda)$ have the same rank. Therefore, the map $\theta$ gives an isomorphism of $\Lambda$ with $\Omega^4_{\Lambda^e}(\Lambda)$. Alternatively, for the last step one may apply [@GSS] to show that $\Omega_{\Lambda^e}^4(\Lambda)$ must be isomorphic to $_1\Lambda_{\sigma}$ for some algebra automorphism $\sigma$, and therefore has rank $1$ on each side.
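For the rank comparison at the end of the proof, note that, as a left $\Lambda$-module, $\Lambda(e_i\otimes e_j)\Lambda \cong (\Lambda e_i)^{\dim_K e_j\Lambda}$. Since $\dim_K e_j\Lambda = 6m$ for every vertex $j$, and every vertex of $Q$ is the source of exactly two arrows, we obtain $$\operatorname{rank}_{\Lambda} {\mathbb{P}}_0 = \operatorname{rank}_{\Lambda} {\mathbb{P}}_3 = 6m \quad \mbox{and} \quad \operatorname{rank}_{\Lambda} {\mathbb{P}}_1 = \operatorname{rank}_{\Lambda} {\mathbb{P}}_2 = 12m ,$$ and the alternating sum of ranks along the exact sequence above gives $\operatorname{rank}_{\Lambda} \Omega^4_{\Lambda^e}(\Lambda) = 6m - 12m + 12m - 6m + 1 = 1$; the same count applies on the right.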
Theorem \[th:main3\] follows from Proposition \[prop:6.1\], Theorem \[th:6.4\], and the following proposition. \[prop:6.5\] Let $A = \Lambda(m,0)$. Then ${\operatorname{mod}}A$ does not admit a periodic simple module. Take $i \in \{1,3,5\}$. Observe that, for the indecomposable projective $A$-modules $P_i = e_i A$ and $P_{i+1} = e_{i+1} A$, we have ${\operatorname{rad}}P_i / {\operatorname{soc}}P_i \cong {\operatorname{rad}}P_{i+1} / {\operatorname{soc}}P_{i+1}$ in ${\operatorname{mod}}A$. Then, by general theory, $P_i / {\operatorname{soc}}P_i$ and $P_{i+1} / {\operatorname{soc}}P_{i+1}$ are not in stable tubes of the stable Auslander-Reiten quiver $\Gamma_A^s$ of $A$. Since $A$ is a symmetric algebra, we conclude that $S_i$ and $S_{i+1}$ are not periodic modules. Acknowledgements {#acknowledgements .unnumbered} ================ The research was done during the visit of the first named author at the Faculty of Mathematics and Computer Sciences in Toruń (June 2017). [99]{} , [Elements of the Representation Theory of Associative Algebras 1: Techniques of Representation Theory]{}, [London Mathematical Society Student Texts, vol. [65]{}]{}, Cambridge University Press, Cambridge, 2006. , Periodicity of self-injective algebras of polynomial growth, [J. Algebra]{} [443]{} (2015) 200–269. , Auslander-Reiten sequences with few middle terms and applications to string algebras, [Comm. Algebra]{} [15]{} (1987) 145–179. , On tame algebras and bocses, [Proc. London Math. Soc.]{} [56]{} (1988) 451–483. , Tameness of biserial algebras, [Arch. Math. (Basel)]{} [65]{} (1995) 399–407. , On the representation type of locally bounded categories, [Tsukuba J. Math.]{} [10]{} (1986) 63–72. , Galois coverings of representation-infinite algebras, [Comment. Math. Helv.]{} [62]{} (1987) 311–337. , Tame and wild matrix problems, in: [Representation Theory II]{}, in: Lecture Notes in Math., vol. [832]{}, Springer-Verlag, Berlin-Heidelberg, 1980, 242–258. 
, Periodic resolutions and self-injective algebras of finite type, [J. Pure Appl. Algebra]{} [214]{} (2010) 990–1000. , Periodic algebras, in: [Trends in Representation Theory of Algebras and Related Topics]{}, in: Europ. Math. Soc. Series Congress Reports, European Math. Soc., Zürich, 2008, pp. 201–251. , Weighted surface algebras, Preprint 2017, `http://arxiv.org/abs/1703.02346`. , Algebras of generalized dihedral type, Preprint 2017, `http://arxiv.org/abs/1706.00688`. , Algebras of generalized quaternion type, Preprint 2017, `http://arxiv.org/abs/1710.09640`. , On degenerations of tame and wild algebras, [Arch. Math. (Basel)]{} [64]{} (1995) 11–16. , The Hochschild cohomology ring of a selfinjective algebra of finite representation type, [Proc. Amer. Math. Soc.]{} [131]{} (2003) 3387–3393. , Triangulated Categories in the Representation Theory of Finite-Dimensional Algebras, [London Math. Soc. Lect. Note Ser.]{}, vol. [119]{}, Cambridge University Press, Cambridge, 1988. , Hochschild cohomology of finite-dimensional algebras, in: [Séminaire d’Algèbre Paul Dubreil et Marie-Paul Malliavin]{}, in: Lecture Notes in Math., vol. [1404]{}, Springer-Verlag, Berlin-Heidelberg, 1989, pp. 108–126. , Geometric methods in representation theory, in: [Representations of Algebras]{}, in: Lecture Notes in Math., vol. [944]{}, Springer-Verlag, Berlin-Heidelberg, 1982, pp. 180–258. , Tame minimal non-polynomial growth simply connected algebras, [Colloq. Math.]{} [73]{} (1997) 301–330. , Selfinjective biserial standard algebras, [J. Algebra]{} [138]{} (1991) 491–504. , Derived equivalences as derived functors, [J. London Math. Soc.]{} [43]{} (1991) 37–48. , [Tame Algebras and Integral Quadratic Forms]{}, in: Lecture Notes in Math., vol. [1099]{}, Springer-Verlag, Berlin-Heidelberg, 1984. D. Simson and A. Skowroński, [Elements of the Representation Theory of Associative Algebras 3: Representation-Infinite Tilted Algebras]{}, London Mathematical Society Student Texts, vol.
[72]{}, Cambridge University Press, Cambridge, 2007. , Selfinjective algebras of polynomial growth, [Math. Ann.]{} [285]{} (1989) 177–199. , Selfinjective algebras: finite and tame type, in: [Trends in Representation Theory of Algebras and Related Topics]{}, in: Contemp. Math., vol. [406]{}, Amer. Math. Soc., Providence, RI, 2006, pp. 169–238. , Representation-finite biserial algebras, [J. reine angew. Math.]{} [345]{} (1983) 172–181. , [Frobenius Algebras 1: Basic Representation Theory]{}, European Mathematical Society Textbooks in Mathematics, European Math. Soc. Publ. House, Zürich, 2011. , Tame biserial algebras, J. Algebra [95]{} (1985) 480–500. [^1]: The research was supported by the research grant DEC-2011/02/A/ST1/00216 of the National Science Center Poland.
--- abstract: 'Unlike nonlocal models, there is no need to introduce an internal length in the constitutive law for lattice models at the mesoscopic scale. Actually, the internal length is not explicitly introduced but rather governed by the mesostructure characteristics themselves. The influence of the mesostructure on the width of the fracture process zone, which is assumed to be correlated with the characteristic length of the homogenized quasi-brittle material, is studied. The influence of the ligament size (a structural parameter) is also investigated. This analysis provides recommendations/warnings when extracting an internal length required for nonlocal damage models from the material mesostructure.' address: - 'Univ. Grenoble Alpes, 3SR, F-38000 Grenoble, France' - 'CNRS, UMR 5521 3SR, F-38000 Grenoble, France' author: - Huu Phuoc Bui - Vincent Richefeu - Frédéric Dufour bibliography: - 'Refs\_BUI.bib' title: 'Studying the influence of inclusion characteristics on the characteristic length involved in quasi-brittle materials using the lattice element method' --- Quasi-brittle materials, characteristic length, internal length, fracture, Lattice Element Method. Introduction ============ Fracture of quasi-brittle materials is characterized by a zone of finite size around and ahead of the crack tip, in which damage occurs and causes the softening behavior of the material. This is the fracture process zone (FPZ). For instance, for concrete, the size (width) of the FPZ, denoted by [$\ell_{\mathrm{FPZ}}$]{} hereafter, is believed to be proportional to the maximum aggregate size $d_{\max}$, see, [e.g.]{}, [@Bazant.1983.155; @PijaudierCabot.1987.1512]. Therefore, in nonlocal models (gradient or integral form [@PijaudierCabot.1987.1512; @Giry.2011.3431; @PEERLINGS.1996.3391]), the FPZ size, which only depends on the internal length [$\ell_{\mathrm{c}}$]{} introduced, depends on (is proportional to) the maximum aggregate size.
Accordingly, neither loading nor structural effect is considered to affect the resulting size of the FPZ, except in the latest integral nonlocal model proposed in [@Giry.2011.3431]. In the latter, the internal length parameter evolves depending on the stress state during the damage process, and also depends on the intrinsic (characteristic) length that can be correlated with the aggregate size of the material. However, the correlation between the characteristic length and the aggregate size has not been explicitly calibrated yet. The literature often reports a linear or affine relation between [$\ell_{\mathrm{c}}$]{} and $d_{\max}$, see, [e.g.]{}, [@Otsuka2000; @Bazant.1989.115]. But actually, varying $d_{\max}$ in experiments may lead to a number of changes in the aggregate structure characterized by other parameters such as the volume fraction of aggregates, their size distribution, their fabric or connectivity. Basic questions may be raised: what affects the internal length of a nonlocal model? Is it only the maximum size of aggregates or some less obvious parameter(s)? Does the structure itself (size or ligament) play a role in the internal length? To address these questions, numerical simulations of uniaxial tensile tests are carried out using the lattice model, in which the geometry and mechanical properties of the material mesostructure are explicitly introduced. The output of the simulations is the FPZ size and the characteristic length of the material. The characteristic length is *a priori* regarded as the internal length that would be introduced in nonlocal models. The same notation [$\ell_{\mathrm{c}}$]{} is thus used in the following. From the lattice simulations, the relationship between the two lengths [$\ell_{\mathrm{FPZ}}$]{} and [$\ell_{\mathrm{c}}$]{} and some relevant characteristics of the material mesostructure is established.
The study is restricted to the case of two-dimensional analysis of a brittle elastic model material with circular inclusions, and to mode-I failure problems occurring with small deformations under quasi-static loading conditions. It is important to stress before reading the following that the inclusions and matrix have a brittle elastic behavior together with a highly simplified geometry. As a consequence, our observations and conclusions must be translated with caution to the case of real concrete. The lattice model used in our study is briefly recalled. The model is implemented in our in-house code written in the C++ programming language. The method used to assess the FPZ size and the characteristic length of the material is then presented, before numerous numerical experiments are performed to study the influence of the material mesostructure and of the structural parameter (ligament size) on these lengths. Numerical model =============== The lattice element method (LEM) is a convenient way to model the fracturing of quasi-brittle materials for problems in which discontinuities are dominant, since it provides a discrete representation of material disorder and failure. With the LEM, the micro-cracking, crack branching, crack tortuosity and bridging of quasi-brittle materials can easily be identified and captured. It allows the fracture process to be followed until complete failure. There exist two different types of lattice models. The first one consists of the classical lattice models, in which the material is discretized as a network of discrete 1D-elements that can transfer forces and possibly moments [@Schlangen.1992.25.153; @Schlangen1992105; @Schlangen1993; @vanMier.1995.201].
The second type of lattice models, called particle lattice models, is classified as a discrete element method [@Kikuchi1992] in which the material is discretized as an assemblage of rigid particles interconnected along their boundaries through normal and shear springs [@Kawai1978]. The models in this category also include the rigid-body-spring networks [@BolanderJr2000], bonded-particle model [@Potyondy1996], random particle models [@Bazant.1990.116], beam-particle model [@Addetta.2002.4; @Delaplace2005], and confinement-shear lattice model [@Cusatis2003]. The main advantage of particle lattice models with respect to classical lattice models is that they account for the fact that crack surfaces may act on each other, causing repulsive forces during the loading process. The particle lattice models are thus more suitable for predicting the failure behavior in mode II, or in mode I under cyclic loadings, whereas the classical ones are sufficient when mode-I failure prevails. In this work, only the mode-I failure of the material submitted to monotonic mechanical loadings is considered. Moreover, for studying the influence of the material mesostructure on the FPZ, which is related to the characteristic length of the material, a detailed description of tortuous crack patterns is important. Therefore, a lattice model based on the classical type, in which normal and shear springs are introduced, is used. The constitutive laws of the 1D-elements are simple elastic relations in the normal and tangential directions defined by each element, see Figure \[fig:local\_schem\]a. Only small perturbations are considered; the positions of the lattice nodes are assumed fixed, and the unknown variables are the node displacements ${\boldsymbol}{u}$. The axial direction ${\boldsymbol}{n}_0^{ij}$ and transverse direction ${\boldsymbol}{t}_0^{ij}$ associated with each element $ij$ thus remain fixed.
Length variations between nodes $i$ and $j$ are defined by $\delta_n^{ij} = ({\boldsymbol}{u}_i - {\boldsymbol}{u}_j)\cdot {\boldsymbol}{n}_0^{ij}$ and $\delta_t^{ij} = ({\boldsymbol}{u}_i - {\boldsymbol}{u}_j)\cdot {\boldsymbol}{t}_0^{ij}$ for the normal and tangential directions, respectively. The forces are related to these length variations by $f_n^{ij} = K_n^{ij} \delta_n^{ij} $ and $f_t^{ij} = K_t^{ij} \delta_t^{ij}$, where $K_n^{ij}$ and $K_t^{ij}$ are the normal and shear stiffnesses of the element, respectively. ![1D-element with its local coordinate system (a) and its effective width $A^{ij}$ (b).[]{data-label="fig:local_schem"}](local_schem){width="\columnwidth"} The approach consists in finding the set of node displacements $[{\boldsymbol}{u}]$ – among which some are imposed along the boundaries – that minimizes the total elastic energy of the system: $$\mathcal{U}_e([{\boldsymbol}{u}]) = \frac{1}{2} \sum_{ij} \left \{ K_n^{ij} (\delta_n^{ij})^2 + K_t^{ij}(\delta_t^{ij})^2 \right \}$$ To carry out this minimization, the conjugate gradient method is used with the following definition of the gradient: $$\frac{\partial \mathcal{U}_e}{\partial u_i^\alpha} = - {\boldsymbol}{e}_\alpha \cdot \sum_{j,\ i \in ij} \left \{ K_n^{ij} \delta_n^{ij} {\boldsymbol}{n}_0^{ij} + K_t^{ij} \delta_t^{ij} {\boldsymbol}{t}_0^{ij} \right \}$$ where ${\boldsymbol}{e}_\alpha$ stands for the two directions of the global frame. The damage (in the form of diffuse or macroscopic cracks) of the whole lattice system is accounted for by removing each element that breaks according to a criterion $\psi(f_n^{ij},\ f_t^{ij}) \geq 0$. The Mohr-Coulomb surface with a cut-off of the tensile strength [@BolanderJr1998569] could be adopted.
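The energy and gradient evaluation above can be sketched as follows (a minimal NumPy illustration; the function and variable names are ours, not taken from the paper's code, and the gradient sign convention follows directly from differentiating the energy expression):

```python
import numpy as np

def lattice_energy_grad(u, nodes, elements):
    """Total elastic energy U_e of a 2D spring lattice and its gradient
    with respect to the node displacements u (shape (n_nodes, 2)).
    elements: list of (i, j, Kn, Kt) tuples; names are illustrative."""
    grad = np.zeros_like(u)
    energy = 0.0
    for i, j, Kn, Kt in elements:
        n0 = nodes[i] - nodes[j]
        n0 = n0 / np.linalg.norm(n0)        # axial direction n0^ij
        t0 = np.array([-n0[1], n0[0]])      # transverse direction t0^ij
        dn = (u[i] - u[j]) @ n0             # delta_n^ij
        dt = (u[i] - u[j]) @ t0             # delta_t^ij
        energy += 0.5 * (Kn * dn**2 + Kt * dt**2)
        f = Kn * dn * n0 + Kt * dt * t0     # shared force term
        grad[i] += f                        # dU_e/du_i
        grad[j] -= f                        # dU_e/du_j
    return energy, grad
```

In an actual solver, this gradient would feed the conjugate gradient iterations on the free degrees of freedom while the boundary displacements stay imposed.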
However, we chose to use another model that has the advantage of being more generic while being expressed as a single function: $$\psi(f_n^{ij},\ f_t^{ij}) = \frac{f_n^{ij}}{A^{ij}\sigma_n^0} + \left ( \frac{f_t^{ij}}{A^{ij} \sigma_t^0} \right )^n - 1$$ where $\sigma_n^0$ and $\sigma_t^0$ are the ultimate stresses for pure normal and tangential loadings, respectively; $n$ is a positive parameter that changes the yield surface from a linear form ($n=1$) – corresponding to the classical Mohr-Coulomb criterion – to a non-linear form ($n>1$). In this study, $n=5$ is used. Let us now consider a system of lattice elements where small displacements are imposed on some nodes of the boundary. A reference solution $[{\boldsymbol}{u}_{\rm ref}]$, corresponding to the free displacements of the other nodes, can be found by minimizing $\mathcal{U}_e$ as described above. Provided that the elements remain elastic and intact, any other elastic state is a *uniform scaling* of the reference solution: $[{\boldsymbol}{u}] = \eta [{\boldsymbol}{u}_{\rm ref}]$. As a consequence, elastic forces can be scaled by the same factor, and it becomes possible to find, for each element, a factor $\eta^{ij}$ so that $\psi(\eta^{ij} f_n,\ \eta^{ij} f_t) = 0$. The state corresponding to the failure of the weakest element can thus be obtained by scaling the reference solution by the factor $\eta_{\min} = \min_{ij} \{ \eta^{ij} \}$, and then recorded. The next loading state results from another reference solution computed for the new configuration from which the broken element is removed. By repeating this procedure for each element failure, one by one, the loading course is controlled by these events rather than by a time-stepping scheme, which could involve more than one element removal within a single time step. That would result in non-physical solutions making the mechanical response dependent on the loading magnitude [@Delaplace2007].
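The event-driven selection of the breaking element can be sketched as follows (a sketch under the simplifying assumption $f_n > 0$, so that $\psi$ is increasing in $\eta$ and its root is unique; names are illustrative):

```python
def eta_at_failure(fn, ft, A, sn0, st0, n=5, tol=1e-12):
    """Smallest eta > 0 with psi(eta*fn, eta*ft) = 0, where
    psi(fn, ft) = fn/(A*sn0) + (ft/(A*st0))**n - 1.
    Sketch: assumes fn > 0 (tension), root found by bisection."""
    def psi(eta):
        return eta * fn / (A * sn0) + (abs(eta * ft) / (A * st0)) ** n - 1.0
    hi = 1.0
    while psi(hi) < 0.0:       # bracket the root
        hi *= 2.0
    lo = 0.0
    while hi - lo > tol * hi:
        mid = 0.5 * (lo + hi)
        if psi(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def weakest_element(forces, A, sn0, st0, n=5):
    """Event-driven step: index and scaling factor eta_min of the first
    element to break, given reference forces (fn, ft) per element."""
    etas = [eta_at_failure(fn, ft, A, sn0, st0, n) for fn, ft in forces]
    k = min(range(len(etas)), key=etas.__getitem__)
    return k, etas[k]
```

After scaling the reference state by `eta_min`, the weakest element is removed and a new reference solution is computed, exactly one rupture per event.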
With the LEM, heterogeneities appear *de facto* at the mesh level. The required disorder in the mesh introduces a variation in the lengths $\ell^{ij}$ and effective widths $A^{ij}$ of the elements. It results in an unwanted parasitic heterogeneity in the stiffness properties, which can be limited by accounting for the local geometry in the element behavior: $$K_n^{ij} = \frac{A^{ij}}{\ell^{ij}} \bar{K}_n^{\varphi} \quad{\rm and}\quad K_t^{ij} = \frac{A^{ij}}{\ell^{ij}} \bar{K}_t^{\varphi}$$ where $\bar{K}_n^{\varphi}$ and $\bar{K}_t^{\varphi}$ are the stiffnesses that can be set “uniformly” for a phase $\varphi$. The effective width $A^{ij}$ is the distance between the centroids $C_{ijk}$ and $C_{ijm}$ of the triangles adjacent to the element $ij$, projected onto the local direction ${\boldsymbol}{t}_0^{ij}$ as proposed in [@Cusatis20067154], see Figure \[fig:local\_schem\]b. Since the state of plane stress or plane strain is not explicitly defined in LEM-based simulations, the quantity $A^{ij}$ can also be regarded as a surface by assuming a unit length in the out-of-plane direction. In this picture, $\bar{K}_n^{\varphi}$ and $\bar{K}_t^{\varphi}$ have the dimension of a material stiffness. As a consequence of the weighting of the imposed stiffnesses (or moduli) $\bar{K}_n^{\varphi}$ and $\bar{K}_t^{\varphi}$ within a phase, the actual stiffnesses of the elements differ from each other. The targeted Young’s modulus $E^{\varphi}$ and Poisson’s ratio $\nu^{\varphi}$ of the phase $\varphi$ can be used to determine the element stiffnesses through the following relations: $$\bar{K}_n^{\varphi} = \frac{E^{\varphi}} {1-\nu^{\varphi}} \quad {\rm and}\quad \bar{K}_t^{\varphi} = \frac{E^{\varphi}(1-3\nu^{\varphi})} {1-(\nu^{\varphi})^2 }$$ These relations are derived from the equations given in [@Chang.2002.1941] for a regular triangular lattice, by replacing a factor $\sqrt{3}$ by $1$ (found empirically from a number of single-phase simulations).
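These relations translate into a few lines of code (the numeric values below are illustrative, not the paper's matrix parameters; helper names are ours):

```python
def phase_stiffnesses(E, nu):
    """Per-phase lattice moduli from the target elastic constants,
    following the relations quoted above:
    Kbar_n = E/(1 - nu),  Kbar_t = E*(1 - 3*nu)/(1 - nu**2)."""
    Kn_bar = E / (1.0 - nu)
    Kt_bar = E * (1.0 - 3.0 * nu) / (1.0 - nu**2)
    return Kn_bar, Kt_bar

def element_stiffnesses(A, l, Kn_bar, Kt_bar):
    """Geometric weighting K = (A/l) * Kbar that limits the parasitic
    mesh-induced heterogeneity of element stiffnesses."""
    return (A / l) * Kn_bar, (A / l) * Kt_bar
```

For instance, a hypothetical phase with $E = 30$ GPa and $\nu = 0.2$ gives $\bar{K}_n = 37.5$ GPa and $\bar{K}_t = 12.5$ GPa.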
From there, the heterogeneity intrinsic to the mesh geometry is limited as much as possible, and a structure of inclusions (grains) can be generated using take-and-place processes [@Wang_1999_533; @Hafner2006]. After generating the inclusion structure, different material phases are defined and different local mechanical properties are assigned to the elements falling in each phase. At the mesoscale, three phases can be distinguished: inclusion, matrix and interfacial transition zone (ITZ), see Figure \[fig:phaseDefinition\]. If both ends of an element are located in the same phase, then this element is assigned the mechanical properties of the corresponding phase (inclusion or matrix); otherwise it is considered as an interface or inclusion element depending on the location of its midpoint. If its midpoint is located within the grain, the element is classified as an inclusion element; otherwise it is classified as an ITZ element. The reason for this definition of ITZ elements is that the resulting fraction of inclusions (the ratio between the number of inclusion elements and the total number of elements) is closer to the desired fraction of inclusions in the material than with the definitions developed by other authors [@Schlangen.1992.25.153; @Lilliu.2003.927; @Sagar.2009.865]. In their models, all elements that connect two different zones of the grain structure are considered as ITZ elements. ![Distinction between inclusion, matrix and ITZ phases according to the location of a lattice element in the grain structure.[]{data-label="fig:phaseDefinition"}](ITZdefinition.pdf){width="0.6\columnwidth"} Assessment of characteristic length =================================== To account for damage in continuous (and homogenized) modeling of concrete, a length parameter is required [@PijaudierCabot.1987.1512; @PEERLINGS.1996.3391].
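The phase-assignment rule can be sketched as a small decision function (the boolean inputs stand for point-in-grain tests, e.g. against circular inclusions; this helper is ours, not the paper's code):

```python
def classify_element(end1_in_grain, end2_in_grain, mid_in_grain):
    """Phase of a lattice element following the rule described above:
    both ends in the same phase -> that phase; otherwise the midpoint
    decides between 'inclusion' and 'itz'."""
    if end1_in_grain and end2_in_grain:
        return "inclusion"
    if not end1_in_grain and not end2_in_grain:
        return "matrix"
    # the two ends lie in different phases: the midpoint decides
    return "inclusion" if mid_in_grain else "itz"
```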
This length, denoted by [$\ell_{\mathrm{c}}$]{} and called the *characteristic length*, is seen as an intrinsic property of the material; however, it is not so simple to determine and to connect with the heterogeneities at lower scales. The method proposed in [@Bazant.1989.115] is used here to assess this characteristic length for a material modeled by lattice elements. The basic idea is that the characteristic length of the material is approximated by the *effective* width $h$ of the zone in which the fracture energy of the material is dissipated. This effective width is defined as the ratio between the fracture energy $G_f$ (energy per unit area of crack surface) dissipated by the cracking that localizes in a narrow band of the specimen in a localized tensile test, and the *energy density* $W_s$ dissipated by the cracking that is nearly homogeneously distributed in the whole volume of a specimen of the same material in a distributed tensile test. Finally, the characteristic length is approximated by $h$, which can be assessed by $$\ell_{\mathrm{c}} \simeq h = \frac{G_f}{W_s}$$ To evaluate [$\ell_{\mathrm{c}}$]{} with LEM simulations, both numerical tensile tests (localized and distributed) have to be performed to determine $G_f$ and $W_s$. $G_f$ is determined from a tensile test performed on a notched specimen, so that the damage is localized, whereas $W_s$ has to be determined from a tensile test carried out on an unnotched specimen with a specific loading design such that the damage is distributed as homogeneously as possible in the specimen volume. To this end, numerical simulations of tensile tests using the lattice model can be performed in which the tensile loading is indirectly applied to the notched and unnotched specimens by elongating steel bars “glued” to the specimens, as proposed in [@Bazant.1989.115], see Figure \[fig:TD\_PIED\_40x160\].
These two tests were performed on numerical specimens of the same size; the loading is applied by means of lateral bars that are “glued” to the specimen and set 10 times stiffer than the material tested. The main difference between the two types of tensile tests is that the steel bars are glued only to the ends of the notched specimen, over a certain length, while they are glued to the unnotched specimen over its whole height. In the following, the tensile tests performed on notched specimens, where the **L**ocation of **D**amage is forced, are referred to as LD-tests. The tensile tests performed on unnotched specimens, designed to identify **D**istributed **D**amage, are referred to as DD-tests. These tests are known as the PIED (Pour Identifier l’Endommagement Diffus) tests in the French community, as introduced in [@Fokwa1992]. Note, however, that a diffuse damage is actually not achievable, which is why we prefer to talk about distributed rather than diffuse damage. In the lattice simulations, the steel bars with a width of $2$ mm are also discretized in 2D by lattice elements (2D mesh), but their stiffnesses are set 10 times greater than those of the material tested, and they always remain elastic. The steel bars are perfectly “glued” to the specimens [via]{} compatible nodes. ![Sketch of the specimens used to determine the characteristic length as proposed in [@Bazant.1989.115]. The tensile test performed on the notched specimen to obtain the localized cracking process (a) and on the unnotched specimen to obtain the distributed cracking process (b).[]{data-label="fig:TD_PIED_40x160"}](TD_PIED_tests.pdf){width="\columnwidth"} In LD-tests (a), a crack is initiated and then propagates until the specimen breaks.
The fracture energy $G_f$ is simply the sum of the elastic energy dissipated by the rupture of each element $ij$, divided by the total cracking surface: $$\label{eq:G_f} G_f = \frac{1}{2} \frac{\sum_{ij} A_{ij}^2 \left ( (\sigma_n^{0})^2/K_n^{ij} + (\sigma_t^{0})^2/K_t^{ij} \right ) } {\sum_{ij} A_{ij}}$$ The DD-tests (b) aim to avoid any onset of cracking, so that the straining and damage are as uniform as possible. The energy density $W_s$ is thus given by the total elastic energy dissipated within the specimen volume $V$: $$\label{eq:W_s} W_s = \frac{1}{2V} \sum_{ij} A_{ij}^2 \left ((\sigma_n^{0})^2/K_n^{ij} + (\sigma_t^{0})^2/K_t^{ij} \right )$$ The effective width $h$ of the fracture process zone (FPZ), denoted by [$\ell_{\mathrm{FPZ}}$]{} in the following, is another characteristic dimension that can be measured directly. We made this estimation from single tensile tests performed on notched specimens, by treating the fracture energy of each element similarly to acoustic emissions [@Maji.1998.27; @Haidar.2005.201]. A density map of the dissipated elastic energy can be drawn from the broken elements. Based on this map, the size of the FPZ can be determined by analyzing the density distribution of dissipated energy around the macrocrack. This distribution, when represented as a probability density function (pdf), can be fitted by a Gaussian distribution in order to extract a width. Instead, we choose to rely on the cumulative density function (cdf) of the dissipated energy to determine the size of the FPZ, since that curve can be more smoothly defined by sorting the dissipated energy along a direction. The direction chosen here is the one perpendicular to the mean direction of the final crack, which may not be strictly perpendicular to the loading direction, depending on the microstructure setting.
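The two energy measures and the resulting estimate $\ell_{\mathrm{c}} \simeq G_f/W_s$ can be sketched as follows (broken elements given as $(A^{ij}, K_n^{ij}, K_t^{ij})$ tuples; function names and the data layout are ours):

```python
def dissipated_energy(broken, sn0, st0):
    """Elastic energy released by the broken elements, i.e. the common
    numerator of the expressions for G_f and W_s:
    0.5 * sum over ij of A^2 * (sn0^2/Kn + st0^2/Kt)."""
    return 0.5 * sum(A**2 * (sn0**2 / Kn + st0**2 / Kt)
                     for A, Kn, Kt in broken)

def characteristic_length(broken_ld, broken_dd, sn0, st0, V):
    """l_c ~ h = G_f / W_s, with G_f from the LD test (energy per unit
    cracking surface) and W_s from the DD test (energy per unit volume V)."""
    Gf = dissipated_energy(broken_ld, sn0, st0) / sum(A for A, _, _ in broken_ld)
    Ws = dissipated_energy(broken_dd, sn0, st0) / V
    return Gf / Ws
```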
A fit of the cumulated form by a “Gaussian bell” allows one to assess [$\ell_{\mathrm{FPZ}}$]{} as four times the standard deviation $\sigma$ of the Gaussian curve. This choice corresponds to a width containing a bit more than 95% of the dissipated energy (provided that only one process zone exists). It is worth pointing out that the FPZ size and the characteristic length of the material determined by lattice simulations also depend on the mesh size, [i.e.]{} the lattice element size. This means that the LEM introduces a characteristic length through its mesh. An analysis of the mesh-size influence on the FPZ size is performed. A series of LD tensile tests is performed in which the specimen is discretized with five different mean values of the mesh size $l_{\mathrm{m}}$. Furthermore, for each discretization, five independent meshes are generated by randomly moving the nodes within a radius of $l_{\min}$ (the minimum mesh size) to take the mesh orientation effect on [$\ell_{\mathrm{FPZ}}$]{} into consideration. The dependence of the FPZ size on the mesh size is shown in Figure \[fig:lFPZ\_meshSize\]. $\bar{l}_{\mathrm{m}}$ is the average value of the discretization size. As expected, the FPZ size statistically tends to “zero” upon mesh refinement. Note, however, that the intercept of the fit is not exactly zero; its value is $0.18$ mm. This is probably due to the fact that only five discretizations were used and that no mesh finer than $1$ mm was generated, for the sake of saving computational time. Once the influence of the mesh on the material internal length is known, it can be subtracted from the relationship between the internal length and the inclusion properties. The latter defines the aim of the present study.
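A simplified estimate of [$\ell_{\mathrm{FPZ}}$]{} along these lines can be sketched as below. As a shortcut, $\sigma$ is taken as the energy-weighted standard deviation of the broken-element positions rather than fitted on the cumulative curve; for Gaussian-distributed data both yield the same $\sigma$ (this is our simplification, not the paper's exact procedure):

```python
import math

def fpz_width(positions, energies):
    """l_FPZ = 4 * sigma, with sigma the energy-weighted standard
    deviation of broken-element positions along the direction normal
    to the mean crack direction."""
    W = sum(energies)
    mean = sum(x * w for x, w in zip(positions, energies)) / W
    var = sum(w * (x - mean)**2 for x, w in zip(positions, energies)) / W
    return 4.0 * math.sqrt(var)
```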
![Evaluation of the FPZ size with respect to mesh size: the FPZ size [$\ell_{\mathrm{FPZ}}$]{} does statistically vanish under mesh refinement.[]{data-label="fig:lFPZ_meshSize"}](lFPZ_meshSize.pdf){width="\columnwidth"} Numerical experiments ===================== To study the role played by coarse inclusions in the internal length, a number of simulations have been performed. In the modeling of the material mesostructure, inclusions embedded in the matrix and separated from it by interfacial transition zones (ITZ) are considered. The inclusions, matrix and ITZs are assumed to be linear brittle elastic. The inclusions are also assumed to be stiffer and more resistant than the matrix, whereas the ITZs are assumed to be less stiff and of smaller strength than the matrix. In the following simulations, the stiffness and the strength of the inclusions are $10$ times larger than those of the matrix. In turn, the stiffness and the strength of the matrix are $2$ times larger than those of the ITZs. The elastic and strength parameters of the matrix are set to the values listed in Table \[tab:params\], and they are kept fixed for all simulations.

  Phase $\varphi$   $\bar{K}_n$ (GPa)   $\bar{K}_t$ (GPa)   $\sigma_n^{0}$ (MPa)   $\sigma_t^{0}$ (MPa)   $E$ (GPa)   $\nu$ (–)
  ----------------- ------------------- ------------------- ---------------------- ---------------------- ----------- -----------
  Matrix            16.50               5.10                6.07                   18.21                  13.20       0.20

The way coarse inclusions are structured – referred to as “grain structure” in the sequel – was restricted to two characteristics in this study: the mono-sized grain diameter $d$ and the surface fraction $P_{\mathrm{a}}$. In the ($P_{\mathrm{a}}$ – $d$) parameter space, shown in Figure \[fig:variationPath\], three variation paths were considered: 1. varying $d$ while the positions of inclusions remain the same, $P_{\mathrm{a}}$ thus varying roughly like $d^2$, 2.
varying $d$ while $P_{\mathrm{a}}$ is kept at $40$%[^1], 3. varying $P_{\mathrm{a}}$ for a given inclusion diameter $d=8$ mm. ![Three variation paths (*a*), (*b*) and (*c*) for the three-phase material ($3\varphi$) and the variation path (*a*) for the two-phase material ($2\varphi$) in the ($P_{\mathrm{a}}$ – $d$) parameter space for the monodisperse distribution of inclusions.[]{data-label="fig:variationPath"}](Agg_Variation.pdf){width="\columnwidth"} In addition to the variation of the grain structure, the presence of a weak interfacial transition zone between the inclusion and matrix phases is analyzed. Without weak ITZ, only two phases ($2\varphi$) are modeled, in the sense that the properties of the ITZs defined as in Figure \[fig:phaseDefinition\] are those of the matrix. With weak ITZ, a less stiff phase with a smaller strength is added in-between inclusions and matrix, bringing the number of phases to three ($3\varphi$). Typical force-displacement and stress-strain curves obtained for the LD and DD tests, respectively, are shown in Figure \[fig:typical\_curves\]. The corresponding crack patterns are also presented. It is seen that there is only one macro crack traversing the notched specimen of the LD test, while about fifteen macro cracks cross the unnotched specimen of the DD test. The numerical results exhibit a disrupted evolution due to the event-driven flow of the simulation. This differs from experiments, in which the displacement is controlled; the latter are indeed characterized by a monotonic increase of the displacement. Therefore, in order to obtain a corresponding response, the “envelope” of the numerical curve should be taken. The envelope curve is obtained by the so-called smoothing procedure, described as follows.
The points describing the specimen states are connected from the first to the last; as soon as a decrease of the displacement is observed, the computed load drop is followed vertically until an intersection with the original curve is found. The envelope curve then follows the original curve until a new decrease of the displacement is met, and the procedure is repeated. The zoom-in in Figure \[fig:force\_displacement\_TD\] illustrates the procedure. Note that envelope curves were also proposed in [@Arslan1995; @Vervuurt1997]. However, when using the envelope curve alone, some essential information may be lost, such as a possible snap-back. Also, the area under the envelope curve is overestimated. Hence, the values of $G_f$ and $W_s$ should not be taken from the corresponding areas under the envelopes of the force-displacement and stress-strain curves. Instead, $G_f$ and $W_s$ are directly computed from the stored elastic energies of the broken elements by Equations (\[eq:G\_f\]) and (\[eq:W\_s\]). ![image](TD_D6P45_NoRandomPos4.pdf){height="0.3\textheight"} ![image](PIED_D6P45_NoRandomPos4.pdf){height="0.3\textheight"} In all tests presented herein, the characteristic length $\ell_c^0$ intrinsic to the lattice mesh is determined by performing the LD and DD tests on several mesh configurations without inclusions. The intrinsic effective width of the FPZ, $\ell_{\mathrm{FPZ}}^0$, is also determined *via* direct measurements. These values are shown in the plots of lengths as if the inclusion diameter or the surface fraction were zero. Key features that may influence the FPZ size -------------------------------------------- ### Material mesostructure #### Path (*a*) For concrete materials, it is usual to deem that the characteristic length depends on the aggregate size.
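A minimal version of the envelope construction can be sketched as follows (simplified: displacement back-steps are discarded rather than bridged by explicit vertical segments, which yields the same monotonic-displacement skeleton of the curve; the helper is ours):

```python
def envelope(points):
    """Simplified envelope of an event-driven response curve: states
    whose displacement falls below the running maximum are discarded,
    approximating the vertical-drop construction described in the text.
    points: list of (displacement, load) pairs in simulation order."""
    env, dmax = [], float("-inf")
    for d, f in points:
        if d >= dmax:
            env.append((d, f))
            dmax = d
    return env
```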
Initial investigations with the model have therefore focused on the role of the inclusion diameter $d$ on the width [$\ell_{\mathrm{FPZ}}$]{} of the FPZ, varying $d$ while keeping the positions and the number of inclusions unchanged (variation path (*a*)). The evolution of the FPZ size [$\ell_{\mathrm{FPZ}}$]{} with respect to the size of the inclusions $d$ is shown in Figure \[fig:l\_FPZ\_varyPhases\]. In this plot and those that follow, each point corresponds to the mean of five values of [$\ell_{\mathrm{FPZ}}$]{} obtained with five independent random distributions of the inclusion positions in the specimen; the error bars give the standard deviation. The lattice mesh used in the simulations provides a width of the FPZ equal to $2.1$ mm. Besides, the best fits of the variation of the mean value of [$\ell_{\mathrm{FPZ}}$]{} with respect to the inclusion size $d$ for the two- and three-phase materials are shown in the figure as well. Note that these fits are calculated only from the mean values of [$\ell_{\mathrm{FPZ}}$]{} in the cases where inclusions are introduced, so the value of [$\ell_{\mathrm{FPZ}}$]{} of the homogeneous material is not taken into account in the fits. Also, the displayed fitted lines do not necessarily mean that an affine relation is established. They must rather be seen as a tendency, since the data present significant variations. As a consequence, the intersection of a fitted line with the vertical axis has no particular meaning; [i.e.]{}, one could also say the fit is only valid between the limits studied.
![Affine relationship between the width of FPZ [$\ell_{\mathrm{FPZ}}$]{} and the diameter $d$ of the inclusions, with ($3\varphi$) or without ($2\varphi$) weak interfacial transition zone between inclusions and matrix, observed using the variation path (*a*).[]{data-label="fig:l_FPZ_varyPhases"}](varyPhases.pdf){width="\columnwidth"}

The main observation from is that the inclusions, when introduced, have a strong effect on the FPZ size in both two- and three-phase materials. First, the mean values of [$\ell_{\mathrm{FPZ}}$]{} for the heterogeneous materials are greater than the value of [$\ell_{\mathrm{FPZ}}$]{} for the homogeneous one. Second, when inclusions are introduced, the fitted slope of the mean values of [$\ell_{\mathrm{FPZ}}$]{} of the three-phase material is greater than that of the two-phase material. This means that when the ITZ is taken into account, the inclusion size has a stronger effect on the variation of [$\ell_{\mathrm{FPZ}}$]{} than when the ITZ is not considered. So, according to our model, the internal length depends not only on the size of the inclusions but also on their constituents, and therefore on the presence of the ITZ. The second observation is probably explained by the increase of the ITZ fraction with the inclusion size in the three-phase material, see . Here, the ITZs act as attractive zones for crack propagation because of their lower strengths and stiffnesses. Accordingly, a greater ITZ fraction results in a larger mean value of [$\ell_{\mathrm{FPZ}}$]{} compared to that of the two-phase material (in which the ITZ fraction is zero). In the case of $d=4$ mm, the mean value of [$\ell_{\mathrm{FPZ}}$]{} of the three-phase material does not differ from that of the two-phase one.
This is related to the fact that the matrix prevails in the mesostructure for $d=4$ mm, as shown in , and thus few inclusions are found on the crack path.

![Evaluation of surface fraction of each phase.[]{data-label="fig:plotPhasesFraction"}](plotPhasesFraction.pdf){width="\columnwidth"}

Furthermore, larger standard deviations are observed when increasing the particle size in both the three-phase and the two-phase materials, although less clearly in the latter. We believe that increasing the inclusion size increases the inclusion fraction, and as a result the spatial distribution of inclusions plays a stronger role in the resulting value of [$\ell_{\mathrm{FPZ}}$]{}. In the two-phase material, comparing $d=8$ mm and $d=10$ mm, the standard deviation of [$\ell_{\mathrm{FPZ}}$]{} does not change significantly. This is because from $d=8$ mm on, the inclusions become dense in the mesostructure of the material, so that a change of the inclusion positions no longer has a strong effect on the value of [$\ell_{\mathrm{FPZ}}$]{}. In the three-phase material, however, the spatial distribution of inclusions still affects the variation of [$\ell_{\mathrm{FPZ}}$]{} even though the inclusions become dense. This is reflected by the larger standard deviation of [$\ell_{\mathrm{FPZ}}$]{} for $d=10$ mm than for $d=8$ mm in the three-phase material; there is no doubt that this is due to the effect of the ITZ.

#### Path (*b*)

A second series of tests is performed by following the variation path (*b*), that is, with fixed surface fraction of inclusions and varied inclusion diameter. The reason for this choice is that the fundamental role of the inclusion size $d$ must be checked while keeping the other parameters unchanged, to suppress their possible effect.
shows the plot of the mean value of the FPZ size [$\ell_{\mathrm{FPZ}}$]{} with respect to the inclusion size $d$ for the path (*b*) of variation. For the sake of comparison, the same plot for the path (*a*) is shown as well. Surprisingly, it shows that the mean value of the FPZ size does not depend on the inclusion size for the path (*b*) of variation. This means that the FPZ size developed in this type of model material (brittle elastic) may not always be related to the inclusion size itself, as usually observed in the literature. The observation is in agreement with that of [@Skar.zynski2011], in which the width of the FPZ was experimentally measured on the surface of concrete specimens using a Digital Image Correlation (DIC) technique. However, it contrasts with the results of [@Mihashi1996; @Otsuka2000] for concrete, in which the experiments were carried out with X-rays and three-dimensional Acoustic Emission techniques, leading to the conclusion that the width of the FPZ increases with the maximum inclusion size. For the path (*a*) of variation, as previously shown, the mean values of [$\ell_{\mathrm{FPZ}}$]{} increase with the inclusion size $d$, which also increases the inclusion surface fraction.

![Relation between the FPZ size [$\ell_{\mathrm{FPZ}}$]{} and inclusion size $d$ with respect to the variation paths (*a*) and (*b*).[]{data-label="fig:lFPZ_N55vsP45"}](lFPZ_N55vsP45.pdf){width="\columnwidth"}

shows the crack patterns and the corresponding values of [$\ell_{\mathrm{FPZ}}$]{} obtained when changing the inclusion size $d$ within the variation path (*b*). The figure shows, for each inclusion size $d$, only one random distribution of inclusions in the specimen. It seems that the position of inclusions around the notches has an essential role in the resulting FPZ size.
In fact, the crack is always initiated at the weak ITZ between the inclusion and matrix phases. With regard to the position of a notch – which can also be seen as a “weak link” – the crack then propagates *via* the development of microcracks, and at the end the macrocrack is formed by connecting the notch(es) and the broken elements (mainly in the ITZs). However, sometimes an inclusion lies just in front of the notch(es) and acts as an obstacle that prevents the rupture of elements in the vicinity of the notch(es), and consequently prevents the macrocrack from reaching the notch(es). In this case, the macrocrack is finally formed mainly by connecting the broken ITZ elements. Therefore, the spatial distribution (positions) of the inclusions actually has an important role in the FPZ size, in conjunction with the size of the inclusions. Nevertheless, for the case where the reference surface fraction of inclusions is kept almost constant (path (*b*)), the spacing between the inclusions seems to be constant regardless of the size of the particles, and thus the spatial distribution of the particles prevails more and more over their size in the resulting FPZ size [$\ell_{\mathrm{FPZ}}$]{}. Indeed, as shown in , in which four different sets of inclusion positions with a diameter of $6$ mm are considered, the resulting values of [$\ell_{\mathrm{FPZ}}$]{} differ depending on the spatial distribution of the inclusions with regard to the notch position. This explains why changing the size of the inclusions according to the path (*b*) does not change the value of [$\ell_{\mathrm{FPZ}}$]{} averaged over five random spatial distributions of inclusions.
On the contrary, within the path (*a*) of variation, changing the inclusion size leads to a change in the inclusion surface fraction together with the spacing between inclusions, and the FPZ size is affected not only by the size of the inclusions but also by the other structuring parameters (position and surface fraction of inclusions in our case). Still, the smaller the inclusion size, the larger the spacing between the particles in the path (*a*). This leads to the weaker influence of the spatial distribution of inclusions on the FPZ size, revealed in by the standard deviation increasing with the inclusion size.

#### Path (*c*)

A third series of tests was performed by following the variation path (*c*), in which the inclusion surface fraction is varied while keeping the inclusion size constant at $8$ mm, in order to evaluate the sole influence of the inclusion surface fraction (or equivalently the inclusion spacing, since the latter is inversely proportional to the former) on the FPZ size [$\ell_{\mathrm{FPZ}}$]{}. shows the variation of the mean value of [$\ell_{\mathrm{FPZ}}$]{} with respect to the inclusion surface fraction $P_{\mathrm{a}}$. For the sake of comparison, the results of the above studies for the variation paths (*a*) and (*b*) are plotted as well, but in the ([$\ell_{\mathrm{FPZ}}$]{}–$P_{\mathrm{a}}$) space. The main observation is that the mean value of [$\ell_{\mathrm{FPZ}}$]{} for the paths (*a*) and (*c*) does increase with the inclusion surface fraction $P_{\mathrm{a}}$, whereas that of the path (*b*) does not change. This is simply explained by the fact that the spacing between inclusions decreases with the increase of the inclusion surface fraction within the variation paths (*a*) and (*c*), whereas it seems to be “constant” (or hardly changed) within the path (*b*).
Comparing the path (*c*) with the path (*a*), it is observed, however, that the increase rate of [$\ell_{\mathrm{FPZ}}$]{} with respect to $P_{\mathrm{a}}$, represented by the fitted slope, is smaller for the path (*c*) than for the path (*a*). A suitable explanation for this observation is that within the path (*a*) both the size of and the spacing between the inclusions change (increase and decrease, respectively) with the increase of $P_{\mathrm{a}}$, whereas only the spacing of the inclusions decreases with the increase of $P_{\mathrm{a}}$ within the path (*c*). *Therefore, the observation could lead to the evidence that the FPZ size depends on both the inclusion spacing (which is just a consequence of the inclusion surface fraction) and the inclusion size*.

![Variation of [$\ell_{\mathrm{FPZ}}$]{} according to the inclusion surface fraction $P_{\mathrm{a}}$ of the three variation paths (*a*), (*b*) and (*c*).[]{data-label="fig:lFPZ_P45vsN55vsD8varyGrainDensity"}](lFPZ_P45vsN55vsD8varyGrainDensity.pdf){width="\columnwidth"}

#### Larger specimen width compared to inclusion size

The above studies of [$\ell_{\mathrm{FPZ}}$]{} are performed on slender specimens (small ligament size, [i.e.]{} on the order of $3 \times d$ compared to the inclusion size $d$). These slender specimens were used to keep the damage distribution in DD tests as homogeneous as possible for the studies of the characteristic length [$\ell_{\mathrm{c}}$]{} presented in . However, it was shown that the FPZ size [$\ell_{\mathrm{FPZ}}$]{} measured on these slender specimens strongly depends on the position of inclusions. This can be considered a drawback for an attempt to correlate the FPZ size and the inclusion size. Therefore, it is preferable to study [$\ell_{\mathrm{FPZ}}$]{} on a specimen that is large compared to the inclusion size.
To this end, tensile tests are performed on the specimen shown in with mono-sized inclusion structures, and the inclusion size is varied by taking the values $4$, $6$, $8$ and $10$ mm. The three variation paths above are also considered here.

![Specimen dimensions \[mm\].[]{data-label="fig:specimen90x60"}](specimen90x60.pdf){width="0.8\columnwidth"}

show the variations of [$\ell_{\mathrm{FPZ}}$]{} with respect to the inclusion size $d$ when keeping the number and the position of inclusions unchanged (path (*a*)) and when keeping the surface fraction of inclusions constant at $45$% (path (*b*)), respectively. shows the variation of [$\ell_{\mathrm{FPZ}}$]{} with respect to the surface fraction $P_{\mathrm{a}}$ when keeping the inclusion size constant at $8$ mm (path (*c*)). The variation of [$\ell_{\mathrm{FPZ}}$]{} with respect to the position of inclusions is less important than in the previous cases. Also, better fits are obtained, with higher correlation coefficients ($0.99$, $0.93$ and $0.96$ for the paths (*a*), (*b*) and (*c*), respectively).

![Variation of [$\ell_{\mathrm{FPZ}}$]{} according to the inclusion size $d$ of the variation path (*a*).[]{data-label="fig:lFPZ_N45_specimen90x60"}](lFPZ_N45_specimen90x60.pdf){width="\columnwidth"}

Comparing , it can be seen that, in contrast to the results obtained on the slender specimens, the inclusion size $d$ has a higher influence on the FPZ size [$\ell_{\mathrm{FPZ}}$]{} within the variation path (*b*) than within the variation path (*a*). Indeed, a higher value of the fitted slope is obtained within the variation path (*b*). This is likely because, when analyzing the larger specimen (compared to the inclusion size), the sensitivity of [$\ell_{\mathrm{FPZ}}$]{} to the position of inclusions is less important than on the slender specimen, and thus the role of the inclusion size in the FPZ size prevails over the position.
So, in conjunction with the influence of the inclusion surface fraction on the FPZ size (which can be observed in ), a higher influence of the inclusion size on the FPZ size is obtained within the path (*b*) than within the path (*a*), because varying the inclusion size in the path (*b*) is combined with a higher surface fraction of inclusions than in the path (*a*).

![Variation of [$\ell_{\mathrm{FPZ}}$]{} according to the inclusion size $d$ of the variation path (*b*).[]{data-label="fig:lFPZ_P45_specimen90x60"}](lFPZ_P45_specimen90x60.pdf){width="\columnwidth"}

![Variation of [$\ell_{\mathrm{FPZ}}$]{} according to the inclusion surface fraction $P_{\mathrm{a}}$ of the variation path (*c*).[]{data-label="fig:lFPZ_D8varyP_specimen90x60"}](lFPZ_D8varyP_specimen90x60.pdf){width="\columnwidth"}

Therefore, a partial conclusion is that the influence of the mesostructure on the FPZ size differs depending on the ratio between the macroscopic size (specimen size) and the mesoscopic size (inclusion size). When this ratio is small, the effect of the position of inclusions, or of the spacing between inclusions, prevails over the effect of the inclusion surface fraction. When the ratio is larger ($\geq 5$), the inverse influence is observed. In any case, the influence of the inclusion size on the FPZ size is always recognized.

### Ligament size

The specimen geometry is shown in and the dimensions of the specimens used in the numerical tests are given in : the specimens have the same size but different notch lengths, resulting in different ligament lengths of $90$, $80$, $65$ and $50$ mm, labeled L, M, S and XS, respectively, for convenience.
![Specimen geometry.[]{data-label="fig:ligament_Specimen"}](Specimen_ligamentSize.pdf){width="0.7\columnwidth"}

  Ligament size            Long (L)   Medium (M)   Small (S)   eXtra Small (XS)
  ------------------------ ---------- ------------ ----------- ------------------
  Specimen size: $a$       100        100          100         100
  Notch length: $c$        10         20           35          50
  Notch width: $d$         2          2            2           2
  Ligament length: $a-c$   90         80           65          50

  \[tab:ligament\_specimenDimensions\]

In order to study the influence of the maximum inclusion size $d_{\max}$ on the FPZ size, tensile tests are performed on specimens made of the material with a polydisperse inclusion structure, with $d_{\max}$ being $6.3$, $8$, $10$, $12.5$ and $16$ mm, and the reference inclusion surface fraction kept constant at $45$%. The minimum inclusion size $d_{\min}$ is $3.15$ mm. All the inclusion gradings are generated by Fuller’s curve, which is an “ideal” grading curve [@Fuller.1906.67]. In the study, for each inclusion grading up to $d_{\max}$, five inclusion structure realizations are generated with independently random distributions of inclusion positions. The specimens are loaded in tension by directly imposing the vertical displacement increment on the nodes of the top boundary of the specimens while vertically fixing the nodes of their bottom boundary. shows the relationship between the FPZ size [$\ell_{\mathrm{FPZ}}$]{} and the maximum inclusion size $d_{\max}$ for the specimens corresponding to the four ligament lengths L, M, S and XS. This figure also shows the FPZ size of the L, M, S and XS specimens in which no inclusion structure has been introduced. It is seen that introducing inclusion structures always results in a larger FPZ size than the one computed in the homogeneous cases. For a given value of $d_{\max}$, the mean value of [$\ell_{\mathrm{FPZ}}$]{} systematically increases with the ligament size. In addition, the increase rate of [$\ell_{\mathrm{FPZ}}$]{} also increases with $d_{\max}$.
This results in a higher slope of variation of [$\ell_{\mathrm{FPZ}}$]{} as a function of $d_{\max}$ for a larger ligament size. It is also observed that the increase rate of this slope decreases as the ligament size increases from the XS specimens to the L specimens. So, a stabilized value of the variation slope can be achieved for specimens with a ligament size on the order of the specimen width. When the ligament size is half the specimen width (and maybe lower, by extrapolation), as in the XS specimens, the variation slope is negligible, which means that the inclusion size appears to have no influence on the mean value of [$\ell_{\mathrm{FPZ}}$]{}. This may suggest that the FPZ does not have enough space to develop completely within specimens with a “too short” ligament length. Between these limits, the slope evolves progressively, indicating that both the inclusion structure and the specimen dimension itself can play a role in the FPZ size. The maximum-inclusion-size independence of [$\ell_{\mathrm{FPZ}}$]{} for the specimens with a too short ligament length is in agreement with the previous study performed on the specimen, which also has a short ligament length.

![Influence of the ligament length on the variation of the FPZ size [$\ell_{\mathrm{FPZ}}$]{} with respect to the maximum inclusion size $d_{\max}$: L (long ligament), M (medium ligament), S (small ligament), XS (extra small ligament).[]{data-label="fig:lFPZ_ligamentSize"}](lFPZ_ligamentSize.pdf){width="\columnwidth"}

shows the crack patterns (selected among several realizations of inclusion positions) and the values of [$\ell_{\mathrm{FPZ}}$]{} corresponding to the smallest maximum inclusion size ($d_{\max}=6.3$ mm) and the biggest one ($d_{\max}=16$ mm) for the two extreme ligament lengths (XS and L).
In the case of the XS specimens (), whatever the maximum inclusion size, a crack without bifurcation crosses the ligament by connecting ITZ elements along a path that seems to be the shortest. In the case of the L specimens (), by contrast, even if only one crack finally crosses the ligament, a number of microcracks occur on either side of the inclusions. As a consequence, the FPZ size is in direct proportion to the maximum inclusion size in the latter case. This is simply because the microcracks have enough space to develop in specimens with a large ligament.

Material characteristic length *versus* FPZ size {#sec:lc_vs_lFPZ}
------------------------------------------------

The influence of the material mesostructure on the FPZ size has been studied. The aim is now to question whether the same influence can be observed on the characteristic length of the material. Although many simulations were carried out to answer this question, we focus herein on two mesoscopic features that may influence the characteristic length [$\ell_{\mathrm{c}}$]{}: the inclusion size with fixed positions, and the inclusion size with fixed surface fraction. First, the lattice simulations are performed by varying the inclusion size while both the positions and the number of inclusions remain unchanged. This corresponds to the path (*a*) of mesostructure variation, in which the monodisperse diameter of inclusions is set to $4$, $6$, $8$ and then $10$ mm. shows the relation between the characteristic length [$\ell_{\mathrm{c}}$]{} and the inclusion size $d$. For a comparison with the FPZ size [$\ell_{\mathrm{FPZ}}$]{}, the relation between [$\ell_{\mathrm{FPZ}}$]{}, computed from the LD tests, and the inclusion size is shown as well. *The main observation is that [$\ell_{\mathrm{c}}$]{} and [$\ell_{\mathrm{FPZ}}$]{} have the same order of magnitude and the same trend with respect to the inclusion size*.
The increase in $d$ results in an increase of the standard deviations of [$\ell_{\mathrm{c}}$]{}, as previously observed in the variation of the FPZ size [$\ell_{\mathrm{FPZ}}$]{}. This is explained by the same reasons mentioned above for the FPZ size.

![Variation of the characteristic length of the material [$\ell_{\mathrm{c}}$]{} and the FPZ size [$\ell_{\mathrm{FPZ}}$]{} with respect to the inclusion size $d$ within the path (*a*) of variation.[]{data-label="fig:lc_lFPZ_N55"}](lc_lFPZ_N55.pdf){width="\columnwidth"}

The question now is whether the variation of the characteristic length [$\ell_{\mathrm{c}}$]{} with respect to the inclusion size $d$ still follows that of the FPZ size [$\ell_{\mathrm{FPZ}}$]{} if we vary only the size $d$ of the inclusions while keeping their surface fraction as constant as possible. To this end, the path (*b*) of mesostructure variation is used to study the influence of the inclusion size $d$ on the characteristic length [$\ell_{\mathrm{c}}$]{}, in which the “reference” surface fraction of inclusions is kept at $45$% when changing the inclusion size. shows the characteristic length of the material [$\ell_{\mathrm{c}}$]{} as a function of the inclusion size $d$; the plot of the FPZ size [$\ell_{\mathrm{FPZ}}$]{} versus $d$ is shown as well. It shows that increasing $d$ does not lead to an increase of [$\ell_{\mathrm{c}}$]{}, as previously observed in the case of [$\ell_{\mathrm{FPZ}}$]{}. For a fixed value of the inclusion size, the resulting characteristic length of the material varies with the spatial distribution of inclusions. However, the mean value of the characteristic length over the spatial distributions of inclusions seems to be unchanged with the increase of the inclusion size.
The reason for this insensitivity may be related to the fact that the spacing between the inclusions, and thus between the ITZs, changes insignificantly when the inclusion size is increased, as previously shown for the case of the FPZ size.

![Variation of the characteristic length of the material [$\ell_{\mathrm{c}}$]{} and the FPZ size [$\ell_{\mathrm{FPZ}}$]{} with respect to the inclusion size $d$ within the path (*b*) of variation.[]{data-label="fig:lc_lFPZ_P45"}](lc_lFPZ_P45.pdf){width="\columnwidth"}

Conclusion
==========

Two types of tensile tests were performed to study the key features that influence the FPZ size [$\ell_{\mathrm{FPZ}}$]{} and the material characteristic length [$\ell_{\mathrm{c}}$]{}. The assessment of [$\ell_{\mathrm{FPZ}}$]{} is achieved [via]{} localized damage (LD) tests, while [$\ell_{\mathrm{c}}$]{} is measured [via]{} both LD tests and distributed damage (DD) tests. The numerical simulations are performed on a brittle elastic model material with inclusions. The material is modeled as a three-phase material with the inclusion and matrix phases and the interfacial transition zone (ITZ) in between. Not only the mesostructure of the material but also the specimen geometry and the ligament size are varied in order to analyze their effect on the resulting FPZ size and material characteristic length. Five independent realizations of inclusion positions are generated for each mesostructure, and the values of [$\ell_{\mathrm{FPZ}}$]{} and [$\ell_{\mathrm{c}}$]{} averaged over these five realizations are used to analyze the effect of the mesostructure. The study points out the influences on [$\ell_{\mathrm{FPZ}}$]{} and [$\ell_{\mathrm{c}}$]{} of: the inclusion size with fixed surface fraction, the inclusion size with the number and position of inclusions unchanged, the inclusion surface fraction with fixed size, and the ligament size of the specimen.
From that extensive study, the following conclusions can be drawn:

-   It appears that it is not basically the size, but other parameters characterizing the inclusion structure of the material, such as the surface fraction, the fabric, the connectivity…, that strongly affect the size of the FPZ, and thus the characteristic length of the material.

-   The measured value of the FPZ size also depends on the specimen geometry and the ligament size of the specimens. Therefore, it is difficult to avoid the conclusion that the FPZ size is *not* an intrinsic property of the material, as usually believed. However, it seems true that the FPZ size remains of the same order when the tested system is the same (mesostructure, global geometry and dimensions, loading conditions…).

-   The assessment of the characteristic length of the material is essential for using its value as the internal length in nonlocal models. However, just like for the FPZ size, it is difficult to avoid structural effects in the method of measurement of the characteristic length.

This is a first step in studying the influence of inclusion properties on the characteristic length, and several interesting qualitative conclusions on the numerical model material have already been pointed out. For future work, we plan to study the effect of the mechanical properties, especially the ratios of the stiffnesses and strengths of the different material phases, on the resulting FPZ size and material characteristic length. Future developments will also aim at making the numerical model more representative of quasi-brittle materials, especially concrete.

Acknowledgements {#acknowledgements .unnumbered}
================

The authors gratefully acknowledge the financial support received from the BQR of Grenoble INP (contract reference: 2009/A14) for our work in the 3SR laboratory. The 3SR laboratory is part of the LabEx Tec 21 (Investissements d’Avenir – grant agreement n^o^ ANR-11-LABX-0030).
[^1]: Note, however, that the surface fraction of inclusions is not kept exactly constant at $40$% when changing the inclusion size. This is because the smaller the inclusion size, the greater the number of particles needed, resulting in a greater number of ITZ elements and consequently a smaller number of inclusion elements.
--- abstract: 'In the field of graph signal processing (GSP), directed graphs present a particular challenge for the “standard approaches” of GSP due to their asymmetric nature. The presence of negative- or complex-weight directed edges, a graphical structure used in fields such as neuroscience, critical infrastructure, and robot coordination, further complicates the issue. Recent results generalized the total variation of a graph signal to that of directed variation as a motivating principle for developing a graphical Fourier transform (GFT). Here, we extend these techniques to concepts of signal variation appropriate for indefinite and complex-valued graphs and use them to define a GFT for these classes of graph. Simulation results on random graphs are presented, as well as a case study of a portion of the fruit fly connectome.' author: - ' [^1]\' bibliography: - 'references.bib' title: Graph Signal Processing of Indefinite and Complex Graphs using Directed Variation ---

Introduction
============

Given a network represented by a graph, where vertices correspond to components and edges represent some relationship, a *graph signal* is a function or time series defined on the graph vertices. GSP is a developing technique for understanding dynamics evolving on discrete network structures [@shuman2013emerging; @sandryhaila2013discrete]. Recent applications include the analysis of interconnections between components in the brain [@raj2019spectral] and the interpretability of deep networks in machine learning [@anirudh2017margin]. GSP analysis has largely focused on extending signal processing concepts to graph signals, which are no longer defined on regular (Euclidean) domains. Most techniques trace their roots to the fields of algebraic signal processing [@puschel2008algebraic] or spectral graph theory [@chung1997spectral].
The GFT is a fundamental operator on graph signals, where the basis functions and frequencies are now defined from the eigenvectors and eigenvalues of a graph matrix (e.g., the graph Laplacian), respectively. In fact, for so-called *regular* graphs such as lines, rings, and grids, this approach is identical to the standard discrete Fourier transform with appropriate dimension and boundary conditions. The basic definition of the GFT [@shuman2013emerging] assumes undirected graphs with positive edges. Recent work developed a practical GFT for directed graphs [@shafipour2018directed]. In this approach, a transform matrix is obtained from optimization methods that aim to make the graph frequencies as uniformly distributed as possible. The authors introduce a two-step optimization procedure, which first aims to find the maximum directed variation, a generalization of the total variation. The second step seeks to minimize a spectral dispersion function, finding basis elements whose distribution of graph frequencies is smooth and falls within the achievable frequency range, in order to obtain a spread basis. For many applications of interest, the underlying network structure is directed or the edge weights can be negative or complex, and then the adjacency matrix is no longer positive and symmetric. Under these conditions, the eigenvectors of the Laplacian $\bm{L}$ no longer form a valid basis. The work in [@shafipour2018directed] considers only positive weights, which is not compatible with the network structure in many applications, such as biological neural networks, which are best modeled 1) as directed graphs through pre-synaptic to post-synaptic connections and 2) as indefinite (i.e., positive and negative) weighted graphs due to the presence of excitatory and inhibitory neurons. Furthermore, due to Dale’s law [@eccles1976electrical], each vertex should have only either positive or negative weights leaving it (with few exceptions), imposing additional structure on the graph.
In this paper we extend the work in [@shafipour2018directed] to directed graphs with positive, negative, or complex weights and/or graph signals. We introduce novel concepts of indefinite directed variation and complex directed variation. Furthermore, we introduce appropriate modifications of the greedy and feasible optimization strategies introduced in [@shafipour2018directed] based on these new notions of variation. In the following sections, we first review related work on GSP on directed graphs, focusing in particular on recent results from [@shafipour2018directed]. Next, we show how these concepts extend to indefinite and complex directed variation in order to perform GSP on indefinite or complex-weighted graphs, and address the necessary modifications to the algorithms in [@shafipour2018directed]. We exercise these new techniques on a set of randomly generated graphs, and finally perform a case study of a particular indefinite directed graph that models dynamics in a portion of the fruit fly connectome.

Background
==========

In the most basic framework of GSP, we are given a positive-weighted, undirected graph $\mathcal{G}=(\mathcal{V},\bm{A})$, where $\mathcal{V}$ is the set of vertices with $|\mathcal{V}|=N$ and $\bm{A}\in\mathbb{R}^{N\times N}$ is the adjacency matrix of $\mathcal{G}$ with $A_{ij}\geq0$. Additionally, a real-valued function $\bm{x}:\mathcal{V}\to\mathbb{R}^N$ is defined on the vertices of $\mathcal{G}$. A common approach [@shuman2013emerging] is to project $\bm{x}$ onto the eigenvectors of the graph Laplacian $\bm{L}\triangleq \bm{D}-\bm{A}$, where $\bm{D}$ is the diagonal degree matrix, i.e., $D_{ii}=\sum_j A_{ji}$. When $\bm{A}$ is nonnegative-symmetric, the eigenvalues $\lambda_i$ of $\bm{L}$ satisfy $0=\lambda_1<\lambda_2\leq\cdots\leq\lambda_N$ and the corresponding eigenvectors $v_i$ are linearly independent. This leads to the analogy that $\lambda_i$ corresponds to frequency and $v_i$ corresponds to the Fourier harmonics.
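As a concrete illustration of this classical construction, the following sketch (in NumPy; the example graph and variable names are ours, not from the paper) builds the Laplacian of a small ring graph, computes its orthonormal eigenbasis, and transforms a graph signal into and back out of the spectral domain:

```python
import numpy as np

# Unweighted 4-node ring graph (undirected, nonnegative weights).
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)

D = np.diag(A.sum(axis=0))   # degree matrix, D_ii = sum_j A_ji
L = D - A                    # combinatorial graph Laplacian

# For symmetric L, eigh returns real eigenvalues in ascending order
# and an orthonormal eigenvector basis (the Fourier harmonics).
lam, V = np.linalg.eigh(L)

x = np.array([1.0, 2.0, 0.5, -1.0])  # a graph signal on the vertices
x_hat = V.T @ x                      # spectral coefficients
x_rec = V @ x_hat                    # inverse transform recovers x
```

For this ring graph the Laplacian spectrum is $\{0, 2, 2, 4\}$, with the zero eigenvalue playing the role of the DC component, matching the frequency analogy described above.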
Letting $V=[v_1,\dots,v_N]$ defines a GFT $\tilde{\bm{x}}=V^\top \bm{x}$ that reduces to the classical discrete Fourier transform when $\mathcal{G}$ is an unweighted ring graph. This insight is the intuition for further analogues from classical signal processing. When the graph $\mathcal{G}$ is directed, this basic formulation using $\bm{L}$ breaks down. First, there is no uniform definition of a Laplacian for a directed graph. Some of these extensions may not produce evenly dispersed graph frequencies, as shown in [@shafipour2018directed], and others may produce degenerate eigenspaces. A more general approach is to use the Jordan decomposition [@sandryhaila2014discrete], but this is numerically unstable and can produce transforms that preserve neither the notion of DC signals nor the overall energy (i.e., it is a non-unitary transformation). An alternative motivation for defining a set of transform basis vectors and a corresponding notion of frequency for directed, nonnegative graphs was developed in [@shafipour2018directed]. This generalized the notion of the total variation (TV) of a signal defined on an undirected, nonnegative graph, defined as $$ \text{TV}(\bm{x}) = \bm{x}^\top \bm{L}\bm{x}=\sum_{i,j=1,j>i}^NA_{ij}(x_i-x_j)^2\,. $$ The primary insight is that $\text{TV}(\bm{v}_i)=\lambda_i$ for eigenvectors of $\bm{L}$ and that the right-hand side of the expression is amenable to generalization for the case of nonnegative directed graphs. To this end, [@shafipour2018directed] introduces the concept of directed variation (DV), defined as $$ \text{DV}(\bm{x})=\sum_{i,j=1}^NA_{ij}[x_i-x_j]^2_+\,, \label{eq_DV}$$ where $[x]_+=\max\{0,x\}$. Note that for undirected graphs $\text{DV}(\bm{x})=\text{TV}(\bm{x})$. The intuition behind this quantity is that if directed edges of a graph $\mathcal{G}$ represent the directed flow of a signal from higher values to lower ones, only net positive signal flows will contribute to DV.
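Both quantities can be computed directly from their definitions; the sketch below (our own, hedged illustration) also checks the stated equivalence of DV and TV on a symmetric graph:

```python
import numpy as np

def tv(A, x):
    # Total variation x^T L x for an undirected (symmetric) graph.
    L = np.diag(A.sum(axis=0)) - A
    return float(x @ L @ x)

def dv(A, x):
    # Directed variation: only "downhill" differences [x_i - x_j]_+ along
    # existing edges contribute.
    diff = np.maximum(x[:, None] - x[None, :], 0.0)
    return float(np.sum(A * diff ** 2))

# Symmetric example: DV and TV agree.
A_sym = np.array([[0., 1., 2.],
                  [1., 0., 0.],
                  [2., 0., 0.]])
x = np.array([0.5, -1.0, 2.0])

# Directed example: a single edge 0 -> 1 only "sees" flow from high to low,
# so DV(x) and DV(-x) generally differ.
A_dir = np.array([[0., 1.],
                  [0., 0.]])
```

On `A_dir`, the signal `[1, 0]` flows downhill along the edge and incurs variation, while `[-1, 0]` does not.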
Having motivated DV, [@shafipour2018directed] used intuition from TV to define the graph frequency of a unit vector $\bm{u}$ as $f\triangleq\text{DV}(\bm{u})$; thus, for an arbitrary orthogonal matrix $\bm{U}$, a directed GFT is defined as $\hat{\bm{x}}=\bm{U}^\top \bm{x}$, which is associated with a set of frequencies $f_k$ corresponding to each column $\bm{u}_k$ through $f_k=\text{DV}(\bm{u}_k)$. The majority of [@shafipour2018directed] then focuses on optimization routines for finding $\bm{U}$ that result in frequency components $f_k$ that are evenly spread.

Directed Variation for Indefinite and Complex Graphs
====================================================

Indefinite Directed Variation
-----------------------------

Unlike the work of [@shafipour2018directed], which focused primarily on approaches for analyzing graph functions on directed graphs with $A_{ij}\geq0$, in this section we first focus on adapting the techniques of [@shafipour2018directed] to the case where $A_{ij}$ can be both positive and negative, and $A_{ij} \neq A_{ji}$. In this case, the Laplacian can have negative, positive, or complex eigenvalues and non-orthogonal eigenvectors. This suggests that we need to adapt DV to properly account for the variation introduced by the negative components of $A_{ij}$. The natural adaptation is to extend DV to indefinite directed variation (IDV) via $$ \text{IDV}(\bm{x})=\sum_{i,j=1}^N[A_{ij}]_+[x_i-x_j]_+^2+[A_{ij}]_-[x_i-x_j]_-^2 $$ where $[x]_-= -\min\{0,x\}$. This is equivalent to DV for positive (or negative) directed graphs, and thus is equivalent to TV for undirected graphs.

### Complex DV

Note that the definition of IDV is readily extendable to a notion of directed variation for analysis when the graph signal or the directed adjacency matrix has complex values. Example use cases include multi-agent systems [@lin2014distributed], infrastructure networks [@sanchez2013ict], and neural networks [@frady2019robust].
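Before turning to the complex case, the IDV definition above can be evaluated directly; the sketch below is our own illustration, with helper functions `pos` and `neg` standing in for $[\cdot]_+$ and $[\cdot]_-$:

```python
import numpy as np

def pos(z):  # [z]_+ = max(0, z), elementwise
    return np.maximum(z, 0.0)

def neg(z):  # [z]_- = -min(0, z), elementwise
    return np.maximum(-z, 0.0)

def dv(A, x):
    d = x[:, None] - x[None, :]
    return float(np.sum(A * pos(d) ** 2))

def idv(A, x):
    # Positive weights penalize "downhill" differences, negative weights
    # penalize "uphill" ones.
    d = x[:, None] - x[None, :]
    return float(np.sum(pos(A) * pos(d) ** 2 + neg(A) * neg(d) ** 2))

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))  # indefinite weights
x = rng.standard_normal(6)
```

For a nonnegative matrix, `neg(A)` vanishes and IDV reduces to DV; the definition also implies the symmetry $\text{IDV}(-\bm{A},\bm{x})=\text{IDV}(\bm{A},-\bm{x})$.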
Let ${\operatorname{Re}}(\cdot)$ and ${\operatorname{Im}}(\cdot)$ denote the real and imaginary parts of the argument, respectively, and for complex $\bm{A}$ and $\bm{x}$ let $$\begin{aligned} \tilde{\bm{y}}&=\begin{bmatrix}{\operatorname{Re}}(\bm{A}) & -{\operatorname{Im}}(\bm{A})\\ {\operatorname{Im}}(\bm{A}) & {\operatorname{Re}}(\bm{A}) \end{bmatrix}\begin{bmatrix} {\operatorname{Re}}(\bm{x})\\{\operatorname{Im}}(\bm{x})\end{bmatrix}&\triangleq \tilde{\bm{A}}\tilde{\bm{x}} \end{aligned}$$ and then $$ \bm{A}\bm{x}=\tilde{\bm{y}}_{1:N}+i\tilde{\bm{y}}_{N+1:2N}\,, $$ where the subscripts indicate the first and second $N$ dimensions of $\tilde{\bm{y}}$, respectively. This equivalence between complex and real matrix algebras offers an immediate generalization of IDV to the case of complex graphs and graph signals, by defining the complex directed variation (CDV) of a complex signal as the IDV of the associated real-valued adjacency matrix and graph signal. Formally, given a complex, directed adjacency matrix $\bm{A}$ and graph signal $\bm{x}$, define $$\begin{aligned} &\text{CDV}(\bm{x})=\sum_{i,j=1}^{2N}[\tilde{A}_{ij}]_+[\tilde{x}_i-\tilde{x}_j]_+^2 +[\tilde{A}_{ij}]_-[\tilde{x}_i-\tilde{x}_j]_-^2\notag\\ &=\sum_{i,j=1}^N [{\operatorname{Re}}(A_{ij})]_+[{\operatorname{Re}}(x_i-x_j)]_+^2+[{\operatorname{Re}}(A_{ij})]_-[{\operatorname{Re}}(x_i-x_j)]_-^2\notag\\ &+[{\operatorname{Re}}(A_{ij})]_+[{\operatorname{Im}}(x_i-x_j)]_+^2+[{\operatorname{Re}}(A_{ij})]_-[{\operatorname{Im}}(x_i-x_j)]_-^2\notag\\ &+ [{\operatorname{Im}}(A_{ij})]_-[{\operatorname{Re}}(x_i)-{\operatorname{Im}}(x_j)]_+^2\notag\\&+[{\operatorname{Im}}(A_{ij})]_+[{\operatorname{Re}}(x_i)-{\operatorname{Im}}(x_j)]_-^2\notag\\ &+ [{\operatorname{Im}}(A_{ij})]_+[{\operatorname{Im}}(x_i)-{\operatorname{Re}}(x_j)]_+^2\notag\\&+[{\operatorname{Im}}(A_{ij})]_-[{\operatorname{Im}}(x_i)-{\operatorname{Re}}(x_j)]_-^2\end{aligned}$$

Minimizing Spectral Dispersion
------------------------------

Two methods for optimizing the spread of DV were introduced in
[@shafipour2018directed]. The first, called the *feasible* method, used gradient descent on Stiefel manifolds [@edelman1998geometry], which can be computationally prohibitive for large graphs due to the required matrix decompositions, as well as the nonconvexity of the overall optimization problem, which requires multiple initial conditions for the gradient descent. As an alternative, [@shafipour2018directed] also introduced a *greedy* heuristic that exploits submodularity; it is both highly efficient and uses basis vectors of a related undirected graph (this latter fact is desirable, as the resulting graph transform will be in a basis that is in some sense “natural”). Next, we discuss the modifications to these two approaches that are necessary to use IDV and CDV in place of DV.

### Feasible Gradient Descent Approach

The feasible gradient descent approach for minimizing spectral dispersion using IDV proceeds almost identically to what is described in [@shafipour2018directed]. The main difference is to replace DV with IDV in the objective function computations. This also changes the gradient computations, with the only substantial change occurring in the single-vector gradient $\bar{g}_i$ defined in [@shafipour2018directed Eq. (15)]. From the linearity of the derivative, in the case of IDV this becomes $$\begin{aligned} \bar{g}_i = &2\biggl([\bm{A}^\top_{\cdot i}]_+[\bm{u}-u_i\bm{1}_N]_+-[\bm{A}_{i\cdot}]_+[u_i\bm{1}_N-\bm{u}]_+\\ &-[\bm{A}^\top_{\cdot i}]_-[\bm{u}-u_i\bm{1}_N]_-+[\bm{A}_{i\cdot}]_-[u_i\bm{1}_N-\bm{u}]_-\biggr) \end{aligned}$$ in the notation of [@shafipour2018directed]. An extension of the gradient descent approach for CDV is also straightforward. We can compute the single-vector complex gradient of the now complex basis vector by using the same complex-to-real transform used to derive CDV from IDV.
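The complex-to-real transform underlying this extension can be checked numerically; the sketch below (our own illustration, under the conventions above) computes CDV as the IDV of the augmented real system:

```python
import numpy as np

def real_augment(A):
    # 2N x 2N real image [[Re A, -Im A], [Im A, Re A]] of a complex matrix A.
    return np.block([[A.real, -A.imag], [A.imag, A.real]])

def idv(A, x):
    pos = lambda z: np.maximum(z, 0.0)   # [z]_+
    neg = lambda z: np.maximum(-z, 0.0)  # [z]_-
    d = x[:, None] - x[None, :]
    return float(np.sum(pos(A) * pos(d) ** 2 + neg(A) * neg(d) ** 2))

def cdv(A, x):
    # CDV of a complex signal = IDV of the associated real matrix and signal.
    return idv(real_augment(A), np.concatenate([x.real, x.imag]))

rng = np.random.default_rng(1)
N = 5
A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
x = rng.standard_normal(N) + 1j * rng.standard_normal(N)

# Sanity check: the augmented product reproduces the complex product A @ x.
y_t = real_augment(A) @ np.concatenate([x.real, x.imag])
y = y_t[:N] + 1j * y_t[N:]
```

For a real matrix and signal, the imaginary blocks vanish and CDV reduces to IDV, as expected.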
For a complex adjacency matrix $\bm{A}$ and vector $\bm{u}$ (and transformed real-valued $\tilde{\bm{A}}$ and $\tilde{\bm{u}}$) we can define an associated real-valued gradient $\tilde{\bar{\bm{g}}}$ by $$\begin{aligned} \tilde{\bar{g}}_i = &2\biggl([\tilde{\bm{A}}^\top_{\cdot i}]_+[\tilde{\bm{u}}-\tilde{u}_i\bm{1}_{2N}]_+-[\tilde{\bm{A}}_{i\cdot}]_+[\tilde{u}_i\bm{1}_{2N}-\tilde{\bm{u}}]_+\\ &-[\tilde{\bm{A}}^\top_{\cdot i}]_-[\tilde{\bm{u}}-\tilde{u}_i\bm{1}_{2N}]_-+[\tilde{\bm{A}}_{i\cdot}]_-[\tilde{u}_i\bm{1}_{2N}-\tilde{\bm{u}}]_-\biggr) \end{aligned}$$ and then the complex gradient is $\bar{g}=\tilde{\bar{g}}_{1:N}+i\tilde{\bar{g}}_{N+1:2N}$. The remaining changes to the gradient descent approach are handled by taking the appropriate inner product on the complex space (i.e., using the conjugate transpose instead of the transpose), as outlined in [@wen2013feasible Sec. 4.2], the original reference for the feasible method used by [@shafipour2018directed].

### Greedy Heuristic Approach

Due to the computational complexity and non-convexity of the gradient descent approach, [@shafipour2018directed] introduced a greedy heuristic that uses the Laplacian of a corresponding undirected graph. Specifically, from a positive directed graph $\mathcal{G}=(\mathcal{V},\bm{A})$ they define the *underlying undirected graph* $\mathcal{G}^u=(\mathcal{V}, \bm{A}^u)$, where $A_{ij}^{u}=\max\{A_{ij},A_{ji}\}$. Since $\bm{A}^u$ is symmetric, the corresponding Laplacian $\bm{L}^u$ has orthogonal eigenvectors and can be used for a directed GFT. An additional motivation for considering $\mathcal{G}^u$ is that $\text{DV}(\bm{x})$ (with respect to $\mathcal{G}$) is bounded by $\lambda_{max}^u$, the maximum eigenvalue of $\bm{L}^u$. Noting that for directed graphs $\text{DV}(\bm{x})\neq\text{DV}(-\bm{x})$, the authors use a greedy search over the collection $\{\pm{\bm{v}_i}\}$, for $\bm{v}_i$ eigenvectors of $\bm{L}^u$, that chooses only one of $\pm \bm{v}_i$ for each $i$.
This relies on a proof of submodularity of the spread of DV derived using matroid theory. In the case of IDV, we use the underlying graph with adjacency matrix $A^{|u|}_{ij}=\max\{|A_{ij}|,|A_{ji}|\}$ and the eigenvectors of the corresponding Laplacian $\bm{L}^{|u|}$. The matroid conditions still hold, as the basis elements of $\bm{L}^{|u|}$ are orthonormal. Thus, the spread of IDV will also be submodular, and the greedy optimization of the spread can proceed as described in [@shafipour2018directed], albeit with the substitution of IDV in place of DV. The presence of complex weights presents an additional wrinkle in the greedy optimization approach. In the complex case, we should instead consider the effect of an arbitrary unit-modulus complex scalar $e^{i\theta}$ on the basis vectors, as opposed to simply $\pm1$. This presents a scalar-argument continuous optimization problem that must be solved for each basis vector at each step of the greedy approach. Furthermore, this optimization is not convex, so it would require several initial conditions to find the global minimum with gradient descent. As an alternative, we propose to initially compute the CDV of each basis vector multiplied by $e^{i\theta_k}$, for $\theta_k$ evenly spaced on a grid over $[0,2\pi)$, and then to proceed with the greedy algorithm, with the selection made over the rotation angles $\theta_k$ as opposed to only $\pm1$.

Results
=======

Simulation Results
------------------

We performed the following experiment to assess the impact of using the appropriate notion of variation. We considered ring lattice networks with $N=16$ nodes and degree 2, Erdős–Rényi networks with $N=16$ nodes and edge probability $p=0.2$, and stochastic block networks with three communities and $N_c=8$ nodes per community. All edges were directed, with weights uniformly drawn from $\{\pm1,\pm i\}$.
For each of these graphs $\mathcal{G}$, we created an indefinite graph $\mathcal{G}_I$ by replacing the $\pm i$ edges with $\pm1$, and a positive graph $\mathcal{G}_P$ by setting all weights to $1$. For 10,000 random instances of each of the above classes, we generated two random unit vectors in $\mathbb{R}^N$ and compared the resulting DVs using $\mathcal{G}_P$ to the IDVs using $\mathcal{G}_I$ and the CDVs using $\mathcal{G}$. Across all 60,000 comparisons we found that the relative orderings of the two vectors differed more than 35% of the time. This highlights the importance of IDV and CDV, as the frequency interpretation of a given basis element or signal can change substantially. To further understand the proposed methods, we compared them on a subset of $M=20$ graphs from each graph class above. Fig. \[fig:sims:a\] shows box-plots of the maximum IDV and CDV, indicating that the feasible method pushes the maximum frequency beyond that of the greedy method. Furthermore, these show that the distributions for particular classes of graphs can vary widely. Unlike the maximum, it is essentially impossible to directly compare dispersion, defined as $\delta_{IDV}(\bm{U}) = \sum_{i=1}^{N-1} [\text{IDV}(\bm{u}_{i+1}) - \text{IDV}(\bm{u}_i) ]^2$ (and similarly $\delta_{CDV}$), as dispersion is strongly correlated with the maximum (see Fig. \[fig:sims:b\]); this means the feasible method also tends to produce an increase in dispersion, despite a qualitatively smoother distribution. That said, each graph class does tend to produce relatively similar dispersions.

Fruit Fly Protocerebral Bridge
------------------------------

As biological neural networks were the primary motivator for developing IDV, in this example we consider a model of the fruit fly (*Drosophila melanogaster*) protocerebral bridge from [@kakaria2017ring]. This sub-system of the fruit fly brain is thought to be responsible for maintaining heading direction during the navigation process of the fruit fly. The adjacency matrix and graph signals are provided from a recent study [@kakaria2017ring].
It includes three primary structures: the ellipsoid body (nodes 0-31), the excitatory portion of the protocerebral bridge (nodes 32-49), and the inhibitory portion of the protocerebral bridge (nodes 50-59; see Fig. \[fig:fly\_adj\] (a)), as well as a simulation that produces ring attractor dynamics, which we use to define the graph signals in our analysis. The adjacency matrix in Fig. \[fig:fly\_adj\] is highly asymmetric, as synaptic signals only flow in one direction, and it follows Dale’s law (i.e., each neuron affects others with only positive or only negative weights). An example simulation is shown in Fig. \[fig:fly\_adj:b\], where the network is initially (0.5 s) subjected to a background noisy spiking current stimulus into the protocerebral bridge, followed by 4 s of background stimulus plus a feed-forward stimulus representing constant, periodic rotation of the fly’s heading, followed by another 0.5 s of background stimulus. Other than the repeating stimulus, we use the default parameters from [@kakaria2017ring].

Fig. \[fig:idv\_dv:a\] shows the resulting IDVs from the greedy and feasible methods. The feasible method results in more even spacing of the IDVs compared to the greedy method. Comparisons between IDV and DV from the greedy methods are shown in Fig. \[fig:idv\_dv:b\], showing considerable differences in the relative ordering of the harmonics despite identical basis elements. This indicates a substantially different frequency interpretation under the two measures, and thus the inclusion of negative weights in the graph is meaningful. It is more difficult to directly compare the feasible methods, as the graph harmonics produced are quite different. One clear point of comparison, however, is between the different “maximum frequency” harmonics produced by the first stage of the feasible method. Table \[tab:ff\_compare\] shows that while the greedy methods agree on the maximum frequency harmonic, the feasible methods produce quite different results.
Again, this further illustrates the importance of including both directivity and weight in the transform.

                                       Feas. IDV   Greedy IDV   Feas. DV   Greedy DV
  ------------------------------------ ----------- ------------ ---------- -----------
  **Max IDV**                          599.43      578.48       576.24     578.48
  **Max DV**                           559.51      569.86       570.39     569.86
  **$\delta_{IDV}$, $\delta_{CDV}$**   31286.5     33420.8      28780.3    38559.9

  : Comparison between greedy and gradient descent approaches \[tab:ff\_compare\]

IDGFT matrices generated using the feasible and greedy methods are shown in Fig. \[fig:fly\_xform:a\] and Fig. \[fig:fly\_xform:b\], respectively, with eigenvectors sorted by IDV. Despite the perceived benefits of the feasible method for this network, we find that the greedy transform more intuitively captures the structure of the network, leading to more interpretable graph Fourier analysis, whereas the feasible method spreads the energy more evenly across the different harmonics. Furthermore, the “discontinuities” in the greedy IDV values (Fig. \[fig:idv\_dv:a\]) actually correspond to important functional blocks within the highly structured network considered here.

The total power of the graph signal (over time) in Fig. \[fig:fly\_adj:b\] using the two transforms in Fig. \[fig:fly\_xform\] is shown in Fig. \[fig:graph\_power\]. Using visual inspection in conjunction with the distribution, we have divided the greedy harmonics into four groups, as shown in Fig. \[fig:graph\_power\] (bottom). The first grouping includes basis elements 0, 35, and 59, which coarsely measure signal in the entire network, in the ellipsoid body (nodes 0-31) vs. the protocerebral bridge (32-59), and in the interior inhibitory neurons (51-58) vs. the remainder of the protocerebral bridge. These three elements coarse-grain the signal power into these functional blocks. The second grouping covers spectral content primarily within the ellipsoid body.
Curiously, the third grouping includes not only the excitatory neurons in the protocerebral bridge, but also the first and last inhibitory neurons, presumably because they have a slightly different neighborhood structure from the other inhibitory neurons. This also leads to a fundamentally different graph signal for those boundary neurons, as is evident in Fig. \[fig:fly\_xform:b\]. The final group consists of the interior inhibitory neurons, which appear to add little additional spectral content to the graph beyond their average (captured in the first group).

![Power spectra of the graph signal from Fig. \[fig:fly\_adj:b\]. Top: Feasible IDV-based transform. Bottom: Greedy IDV-based transform, colored by frequency groupings. Group 1 corresponds to harmonics that capture coarse activity levels between the ellipsoid body (Group 2), the excitatory and boundary inhibitory neurons (Group 3), and the interior inhibitory neurons (Group 4).[]{data-label="fig:graph_power"}](stacked_idv_power.eps){width=".8\columnwidth"}

An interesting facet revealed through this graph Fourier analysis is that the two dominant components in the ellipsoid body, $\hat{x}_1(t)$ and $\hat{x}_5(t)$, result in periodic oscillations (at the input rotation period) that are 90$^\circ$ out of phase (see Fig. \[fig:time\_harmonics\]). As the inputs to the system are in the protocerebral bridge, these dynamics must be driven from there, and indeed we see that the top four components there have the same periodic structure ($\hat{x}_{41}$, $\hat{x}_{42}$, $\hat{x}_{45}$ and $\hat{x}_{49}$). Since the ellipsoid body then feeds back into the protocerebral bridge, this would seem to indicate that the ellipsoid body effectively aggregates and stabilizes several Fourier components and feeds them back in the reciprocal relationship between the two structures.

![Fluctuations of projections of the graph signals onto certain graph harmonics that exhibit periodic behavior consistent with the input rotation rate. Note that the bottom signals are 90$^\circ$ out of phase with those in the top.
These signals correspond to peaks in Groups 2 and 3 in Fig. \[fig:graph\_power\] and comprise $\approx$20% of the total graph signal power.[]{data-label="fig:time_harmonics"}](GFT_sigs){width=".8\columnwidth"}

Conclusions and Future Work
===========================

In summary, we have extended the techniques in [@shafipour2018directed] to develop a graph Fourier transform that can be used for indefinite- and complex-weighted directed graphs. We showed that much of the intuition and rationale behind the original motivation applies to the transforms presented here. Furthermore, as using only the absolute value of the weights can have a substantial impact on the frequency interpretation of graph harmonics and the resulting analysis, the sign or phase of the weights must be considered in addition to the directionality. Finally, we applied these tools to a simulated biological neural network and used graph Fourier analysis to derive insight into its dynamics. In future work, we will explore other algorithms and heuristics for the optimization of IDV and CDV, and apply the IDGFT and CDGFT to other applications.

Acknowledgement {#acknowledgement .unnumbered}
===============

We thank Dr. Grace Hwang for pointing us to the fly simulation and for helpful discussions.

[^1]: This work was supported by NSF award NCS/FO 1835279 and JHU/APL internal research and development funds
---
author:
- 'V. Pérez-Mesa, O. Zamora, D. A. García-Hernández, Y. Osorio, T. Masseron, B. Plez, A. Manchado, A. I. Karakas'
- 'M. Lugaro'
date: 'Received November 9, 2018; accepted February 7, 2019'
title: 'Exploring circumstellar effects on the lithium and calcium abundances in massive Galactic O-rich AGB stars'
---

[We previously explored the circumstellar effects on the Rb and Zr abundances in massive Galactic O-rich AGB stars. Here we are interested in clarifying the role of the extended atmosphere in the case of Li and Ca. Li is an important indicator of hot bottom burning (HBB) while the total Ca abundances in these stars could be affected by neutron captures.]{} [We report new Li and Ca abundances in massive Galactic O-rich AGB stars by using extended model atmospheres. The Li abundances were previously studied with hydrostatic models, while the Ca abundances have been determined here for the first time.]{} [We use a modified version of Turbospectrum and consider the presence of a gaseous circumstellar envelope and radial wind. The Li and Ca abundances are obtained from the 6708 $\AA$ Li I and 6463 $\AA$ Ca I resonance lines, respectively. In addition, we study the sensitivity of the pseudo-dynamical models to variations of the stellar and wind parameters.]{} [The Li abundances derived with the pseudo-dynamical models are very similar to those obtained from hydrostatic models (the average difference is 0.18 dex, $\sigma^{2}$ = 0.02), with no difference for Ca. The Li and Ca content in these stars is only slightly affected by the presence of a circumstellar envelope. We also found that the Li I and Ca I line profiles are not very sensitive to variations of the model wind parameters.]{} [The new Li abundances confirm the Li-rich (and super Li-rich) nature of the sample stars, supporting the activation of HBB in massive Galactic AGB stars.
This is in good agreement with the theoretical predictions for solar-metallicity AGB models from ATON, Monash, and NuGrid/MESA, but is at odds with the FRUITY database, which predicts no hot bottom burning leading to the production of Li. Most sample stars display nearly solar (within the estimated errors and considering possible NLTE effects) Ca abundances that are consistent with the available *s*-process nucleosynthesis models for solar-metallicity massive AGB stars, which predict an overproduction of $^{46}$Ca relative to the other Ca isotopes and the creation of the radioactive isotope $^{41}$Ca but no change in the total Ca abundance. A minority of the sample stars seem to show a significant Ca depletion (by up to 1.0 dex). Possible explanations are offered to explain their apparent and unexpected Ca depletion.]{}

Introduction
============

Stars with initial masses in the range between 0.8 and 8 M$_{\odot}$ end their lives with a phase of strong mass loss and thermal pulses (TP) on the asymptotic giant branch (AGB; e.g. @herwig05 [@karakaslattanzio14]). AGB stars are one of the main contributors to the chemical enrichment of the interstellar medium (ISM) in light elements (e.g. Li, C, N, F) and heavy elements (e.g. Rb, Zr, Tc, etc.), and thus to the chemical evolution of complex stellar systems such as galaxies and globular clusters. AGB stars are also an important source of dust in galaxies and the site of origin of the vast majority of meteoritic stardust grains. The low-mass AGB stars (M $<$ 3$-$4 M$_{\odot}$) become C-rich stars (C/O $>$ 1) because $^{12}$C is produced during the TP-AGB phase and carried to the stellar surface via the third dredge-up (TDU), transforming O-rich stars into C-rich ones [@herwig05; @karakaslattanzio07; @lugarochieffi11]. On the other hand, the more massive AGB stars (M $>$ 4-5 M$_{\odot}$) are O-rich (C/O $<$ 1) due to the activation of the so-called “hot bottom burning” (hereafter, HBB) process.
HBB converts $^{12}$C into $^{13}$C and $^{14}$N through the CN cycle via proton captures at the base of the convective envelope, preventing the formation of a carbon star [e.g. @sackmann92; @mazzitelli99]. The HBB models [e.g. @sackmann92; @mazzitelli99] also predict the production of $^{7}$Li via the “$^{7}$Be transport mechanism” [@cameron-fowler71], where Li should be detectable at the stellar surface regions (at least for a short time; see below). Regarding the *s*-process, the $^{13}$C($\alpha$,n)$^{16}$O reaction operates during the interpulse period and is the preferred neutron source in low-mass AGB stars [e.g. @lambert95; @abia01]. The neutrons are captured by iron nuclei and other heavy elements, forming *s*-elements that can later be dredged up to the stellar surface [e.g. @busso01; @karakaslattanzio14]. Another neutron source, $^{22}$Ne($\alpha$,n)$^{25}$Mg, requires higher temperatures and produces higher neutron densities (up to 10$^{13}$ n/cm$^{3}$) than the $^{13}$C($\alpha$,n)$^{16}$O reaction [see e.g. @vanraai12; @fishlock14]. The $^{22}$Ne($\alpha$,n)$^{25}$Mg reaction operates during the convective TP and dominates the production of *s*-process elements in the more massive (M $>$ 4-5 M$_{\odot}$) AGB stars [e.g. @garcia-hernandez06; @garcia-hernandez09; @garcia-hernandez13]. A different *s*-element pattern is expected depending on the dominant neutron source; in particular, a higher amount of Rb compared with neighboring elements like Zr and Sr. Interestingly, the free neutrons can also drive neutron captures on the light elements, including the Ca isotopes. While the total abundance of Ca is predicted not to vary in AGB stars by more than roughly 10% [e.g. @karakaslugaro16], the isotopic composition of Ca can be affected, mostly resulting in an overproduction of $^{46}$Ca relative to the other Ca isotopes [see also @wasserburg15].
Furthermore, the models predict the production of the radionuclide $^{41}$Ca (half-life 0.1 Myr), which can also be carried up to the stellar surface from the intershell region via the TDU, with maximum $^{41}$Ca/$^{40}$Ca ratios at the stellar surface of the order of 10$^{-4}$ [see e.g. @trigo-rodriguez09; @lugaro12; @lugaro14]. This isotope decays via electron captures and is also destroyed by neutron captures via $^{41}$Ca(n,$\alpha$)$^{38}$Ar and $^{41}$Ca(n,p)$^{41}$K. All of these interaction channels are uncertain [see @lugaro18 for a discussion], so the production of $^{41}$Ca could in principle lead to a decrease in the total Ca abundance. However, the cross section of the production channel of $^{41}$Ca, the $^{40}$Ca(n,$\gamma$)$^{41}$Ca reaction, is well determined [@dillmann09], and we do not expect major changes in the model predictions if any of the input physics related to $^{41}$Ca is modified. The first photometric identification of massive O-rich AGB stars was done in the Magellanic Clouds (MCs) about 30 years ago [@wood83]. These stars were found to be long-period variables ($\sim$500-800 days) of Mira type and enriched in heavy neutron-rich *s*-process elements [see @wood83 for more details]. Subsequent high-resolution optical spectroscopic observations of AGB stars in both MCs (LMC and SMC) discovered that these stars are Li-rich, confirming the activation of HBB [see e.g. @smithlambert89; @smithlambert90; @plez93; @smith95]. More detailed chemical analyses show that the Li-rich HBB stars in the SMC display low C isotopic ratios, close to the equilibrium values, as expected from HBB models; these stars, however, are not rich in Rb but rich in other *s*-process elements like Zr and Nd [@plez93]. This observation suggests that these low-metallicity HBB stars mainly produce *s*-process elements via the $^{13}$C neutron source [e.g. @abia01; @garcia-hernandez09; @karakas18].
More recently, candidate HBB stars have been identified in the very low-metallicity (\[Fe/H\] $\sim$ $-$1.6) dwarf galaxy IC 1613, but they are likely younger and more metal-rich than the average IC 1613 metallicity; one of these stars displays a strong Li I 6708 $\AA$ line and is probably Li-rich [@menzies15]. In our own Galaxy, high-resolution optical spectroscopic surveys of very luminous OH/IR stars show that most of the stars with long periods and high OH expansion velocities are Li-rich, which confirms them as truly massive HBB-AGB stars. The strong Rb overabundances coupled with mild Zr enhancements [@garcia-hernandez06; @garcia-hernandez07] confirm the activation of the $^{22}$Ne neutron source in the more massive O-rich AGB stars. More recently, observations of a few massive Galactic AGB stars at the beginning of the TP phase have confirmed that HBB is strongly activated at the early AGB stages and that the *s*-process is dominated by the $^{22}$Ne neutron source [@garcia-hernandez13]. The latter stars are super Li-rich (log$\varepsilon$(Li) up to $\sim$4 dex) and lack *s*-process elements (Rb, Zr and Tc), as predicted by the theoretical models [e.g. @vanraai12; @karakas12]. On the other hand, the Ca abundances have never been measured before in massive Galactic AGB stars; here we report the Ca abundances in a complete sample of such stars for the first time. The chemical abundance analyses of the massive AGB stars of our Galaxy and the MCs [generally OH/IR stars; @garcia-hernandez06; @garcia-hernandez07; @garcia-hernandez09] were made by using classical MARCS hydrostatic atmospheres [@gustafsson08]. These analyses confirm that the $^{22}$Ne neutron source dominates the production of *s*-elements in these stars, but the theoretical models cannot explain the extremely high Rb abundances and \[Rb/Zr\] ratios observed (especially in the lower-metallicity MC-AGB stars).
[@zamora14] constructed new pseudo-dynamical MARCS model atmospheres, in which the presence of a gaseous circumstellar envelope and radial wind are considered, and applied them to a small sample of O-rich AGB stars. The Rb abundances and \[Rb/Zr\] ratios obtained by [@zamora14] are much lower, in better agreement with the AGB nucleosynthesis models. More recently, [@perez-mesa17] reported the pseudo-dynamical Rb and Zr abundances in a larger sample of massive Galactic AGB stars, previously studied with hydrostatic models [see @garcia-hernandez06; @garcia-hernandez07], by using the more realistic [@zamora14] extended model atmospheres. The new Rb abundances and \[Rb/Zr\] ratios obtained by [@perez-mesa17] are much lower and in much better agreement with the AGB theoretical predictions, significantly resolving the mismatch between the observations and the nucleosynthesis models and confirming the earlier [@zamora14] preliminary results on a smaller sample of massive O-rich AGBs [see @perez-mesa17 for more details]. In this paper, we explore the circumstellar effects on the Li and Ca abundances by applying the [@zamora14] pseudo-dynamical model atmospheres to the sample of massive Galactic AGB stars of [@garcia-hernandez07] and the super Li-rich AGBs of [@garcia-hernandez13]. These new Li and Ca abundances are then compared with several AGB nucleosynthesis theoretical predictions: the ATON [@ventura09], Monash [@karakaslugaro16], NuGrid/MESA [@ritter18] and FRUITY [@cristallo15] models.

Observational data
==================

We have used the high-S/N (at least $\sim$30$-$50 around Li I 6708 $\AA$; see below) and high-resolution (R$\sim$50,000) optical ($\sim$4000$-$9000 $\AA$) echelle spectra for the [@garcia-hernandez06] sample (15 stars) of massive Galactic AGB stars, for which Rb and Zr abundances could be derived by [@perez-mesa17], as well as for the [@garcia-hernandez07] subsample (12 stars) of Li-detected stars not analysed by [@perez-mesa17].
In addition, we have analysed the high-quality optical echelle spectra of the three (RU Cyg, SV Cas and R Cen) massive Galactic AGB stars, two of them (SV Cas and R Cen) super Li-rich, reported by [@garcia-hernandez13]. The high-resolution spectra were obtained using the Utrecht Echelle Spectrograph (UES) at the 4.2 m William Herschel Telescope, the Cassegrain Echelle Spectrograph (CASPEC) at the ESO 3.6 m telescope, the Tull spectrograph at the 2.7 m Harlan J. Smith (HJS) Telescope and the UVES spectrograph at the ESO-VLT. Our final sample is thus composed of 30 stars, all of them with previous hydrostatic Li abundance determinations. It is to be noted here that our subsample of stars with previous Rb abundance determinations slightly differs from the [@perez-mesa17] sample mentioned above because the observed spectra are extremely red and the S/N ratios achieved for a given star can strongly vary from the blue to the red spectral regions (e.g. 10$-$20 at Ca I 6463 Å while $>$50$-$100 at Rb I 7800 $\AA$; see also below); i.e. six stars from [@perez-mesa17] display too low a S/N at Li I 6708 Å to estimate their Li abundances and were removed from the present sample. We first carried out an exhaustive study of the Li and Ca absorption spectral lines that can be useful for the extraction of the Li and Ca abundances in these stars. As previously found by [@garcia-hernandez07], we find the Li I 6708 $\AA$ line to be the best one for the abundance analysis; e.g. we discarded the subordinate and weaker Li I 8216 $\AA$ line because the synthetic spectra do not properly reproduce the observed stellar pseudo-continuum in this spectral region. Regarding the Ca absorption lines, we checked the strongest Ca I lines like those at 6122, 6162, 6439, 6463 and 6573 $\AA$ as well as the Ca II triplet at longer wavelengths ($\sim$ 8500 $\AA$). The Ca I 6463 $\AA$ line turned out to be the best Ca abundance indicator.
This is because the synthetic fits around the Ca I 6463 $\AA$ line (i.e. the stellar pseudo-continuum) are much better than for the rest of the Ca I lines; the stronger Ca I 6573 $\AA$ line also displays saturation effects. As we have mentioned in the Introduction, the isotopic Ca composition is theoretically expected to be affected by neutron captures. The Ca isotope ratios cannot be measured from the atomic Ca absorption lines (the atomic lines are intrinsically too broad, even at very high resolution). To support our Ca measurements from atomic lines, we additionally explored the possibility of detecting the most intense CaH [@shayesteh13; @alavi18] and CaO [@yurchenko16] bandheads (around $\sim$6850$-$6950 and 8650$-$8850 $\AA$, respectively) in the optical spectra of our sample stars. Unfortunately, spectral synthesis shows that no CaH and CaO molecular lines are detectable in these spectral regions, which are completely dominated by TiO. Thus, the Li and Ca abundances were determined from the Li I 6708 $\AA$ and Ca I 6463 $\AA$ lines, respectively, by using extended model atmospheres developed by us [see @zamora14; @perez-mesa17 for further information]. The atmospheric parameters (T$_{eff}$ and log$g$), additional observational information (variability period and OH expansion velocity) and the Li abundance derived from hydrostatic models are listed in Table \[table\_obs\_param\] for our sample stars [see @garcia-hernandez07; @garcia-hernandez13 for more details].

  IRAS name      $T_{eff}$ (K)   log $g$   v$_{exp}$(OH) (km s$^{-1}$)   Ref.   Period (days)   Ref.   log $\varepsilon(Li)_{static}$
  -------------- --------------- --------- ----------------------------- ------ --------------- ------ --------------------------------
  01085$+$3022   3300            $-$0.5    13                            1      560             1      2.4
  02095$-$2355   3300            $-$0.5    12[^$\dagger$^]{}             ...    659             2      1.6
  05027$-$2158   2800            $-$0.5    8                             2      368             1      1.1
  05098$-$6422   3000            $-$0.5    6                             3      394             3      $\leq-$1.0
  05151$+$6312   3000            $-$0.5    15                            3      628             4      $<$ 0.0
  05559$+$3825   2900            $-$0.5    12[^$\dagger$^]{}             6      590             5      0.6
  06300$+$6058   3000            $-$0.5    12                            3      440             6      0.7
  07304$-$2032   2700            $-$0.5    7                             4      509             1      0.9
  09429$-$2148   3300            $-$0.5    12                            3      650             1      2.2
  10261$-$5055   3000            $-$0.5    4                             2      317             1      $\leq-$1.0
  11081$-$4203   3000            $-$0.5    8[^$\dagger$^]{}              2      332             8      1.3
  14266$-$4211   2900            $-$0.5    9                             2      389             8      $\leq$ 0.0
  14337$-$6215   3300            $-$0.5    20                            5      ...             ...    2.4[^1^]{}
  15193$+$3132   2800            $-$0.5    3                             3      360             1      $\leq$ 0.0
  15211$-$4254   3300            $-$0.5    11                            2      ...             ...    2.3
  15255$+$1944   2900            $-$0.5    7                             3      425             1      1.0
  15576$-$1212   3000            $-$0.5    10                            3      415             1      1.1
  16030$-$5156   3000            $-$0.5    10[^$\dagger$^]{}             2      579             9      1.5
  16037$+$4218   2900            $-$0.5    4                             2      360             10     $\leq-$1.0
  16260$+$3454   3300            $-$0.5    12                            3      475             1      2.7
  17034$-$1024   3300            $-$0.5    8[^$\dagger$^]{}              2      346             1      $\leq$ 0.0
  18413$+$1354   3300            $-$0.5    15                            6      590             5      1.8
  18429$-$1721   3000            $-$0.5    7                             2      481             8      1.2
  19129$+$2803   3300            $-$0.5    11[^$\dagger$^]{}             2      420             10     3.1[^1^]{}
  19361$-$1658   3000            $-$0.5    8                             2      ...             ...    1.9
  20052$+$0554   3300            $-$0.5    16                            7      450             5      2.6
  20343$-$3020   3000            $-$0.5    8                             2      349             1      $\leq-$1.0
  RU Cyg         3000            $-$0.5    12[^$\dagger$^]{}             ...    442             11     2.0
  SV Cas         3000            $-$0.5    12[^$\dagger$^]{}             ...    456             11     3.5
  R Cen          3000            $-$0.5    5[^$\dagger$^]{}              ...    251             11     4.3
  -------------- --------------- --------- ----------------------------- ------ --------------- ------ --------------------------------

^$\dagger$^ Average OH expansion velocity adopted from sample stars with similar variability periods (see text).\
^1^\

Pseudo-dynamical models
=======================

In the chemical abundance analysis, we have followed the previous work by [@perez-mesa17]. In short, we have used the v12.2 modified version of the spectral synthesis code *Turbospectrum* [@alvarez98; @plez12], in which the presence of a circumstellar gas envelope and a radial wind are considered. In the analysis of the stars in our sample, we have assumed the atmosphere parameters (e.g. T$_{eff}$, log$g$, C/O, \[Fe/H\], macroturbulence) from [@garcia-hernandez07; @garcia-hernandez13] and the solar abundances from [@grevesse07].
Hydrodynamical wind models for AGB stars have been developed through the years [see the review by @hofner18 and references therein]. Recent pulsation-enhanced dust-driven outflow type models include time-dependent gas dynamics and dust formation, with polychromatic radiative transfer [e.g. @eriksson14; @hofner16]. Their predictive power for a particular star is, however, limited by the use of free parameters, e.g. the piston velocity and amplitude driving the pulsations. We therefore chose to use generic empirical models, based on observational determinations of the velocity law and simple physical hypotheses. The pseudo-dynamical models were constructed from the original MARCS hydrostatic model structure, extending the atmosphere radius by a wind out to $\sim$5 stellar radii and adding a radial velocity field. We have computed the stellar wind following mass conservation, radiative thermal equilibrium and a classical $\beta$-velocity law [see @zamora14; @perez-mesa17 for more details]. By adopting the atmospheric parameters from [@garcia-hernandez06; @garcia-hernandez07; @garcia-hernandez13], we generated a mini-grid of synthetic spectra for each sample star. Some parameters are fixed: stellar mass M = 2 M$_{\odot}$[^1], gravity log $g$ = $-$0.5, microturbulent velocity $\xi$ = 3 km/s, metallicity \[Fe/H\] = 0.0 and C/O = 0.5 [see @garcia-hernandez07 for more details]. However, for the mass-loss rate $\dot{M}$ and $\beta$ parameters, we use values between $\dot{M} \sim 10^{-9}-10^{-6} M_{\odot}yr^{-1}$ [^2] in steps of 0.5 dex and $\beta \sim 0.2-1.6$ in steps of 0.2, respectively. Finally, for the Li and Ca abundances we used values between log $\varepsilon(Li)$ $\sim$ 0.0 to $+$2.8 dex and log $\varepsilon(Ca)$ $\sim$ $+$5.0 to $+$7.0 dex, in steps of 0.1 dex.
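The wind structure described above (a classical $\beta$-velocity law plus mass conservation, extending the hydrostatic model out to $\sim$5 stellar radii) can be sketched as follows. This is an illustrative sketch, not the actual Turbospectrum/MARCS implementation; the inner boundary velocity `v0`, the stellar radius and all numerical values are assumed example values.

```python
import numpy as np

def beta_velocity(r, r_star, v_exp, beta, v0=0.5):
    """Classical beta-law: wind velocity (km/s) rising from an assumed
    inner value v0 at the photosphere toward the terminal v_exp(OH)."""
    return v0 + (v_exp - v0) * (1.0 - r_star / r) ** beta

def wind_density(r, mdot_msun_yr, v_kms):
    """Mass conservation: rho = Mdot / (4 pi r^2 v), in g/cm^3."""
    msun_g, yr_s = 1.989e33, 3.156e7
    mdot = mdot_msun_yr * msun_g / yr_s          # g/s
    return mdot / (4.0 * np.pi * r**2 * (v_kms * 1e5))

r_star = 3.0e13                                  # cm, an assumed AGB radius
radii = np.linspace(1.01 * r_star, 5.0 * r_star, 50)
v = beta_velocity(radii, r_star, v_exp=12.0, beta=0.8)
rho = wind_density(radii, mdot_msun_yr=1e-7, v_kms=v)
# The velocity rises monotonically toward v_exp(OH); the density falls off.
assert v[-1] > v[0] and rho[-1] < rho[0]
```

Larger $\beta$ values make the velocity rise more slowly, which is why $\beta$ is one of the free grid parameters varied in the fits.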
The parameters of the synthetic spectra that best fit the 6708 Å Li I and the 6463 Å Ca I profiles and their adjacent pseudo-continua are listed in Table \[table\_abundances\]. For the subsample of stars (15) from [@perez-mesa17] we have used the stellar and wind parameters obtained from the Rb I 7800 Å spectral region fits, for consistency and because the synthetic spectra are much less sensitive to variations of the model parameters in the Li I 6708 Å and Ca I 6463 Å spectral regions (see Section 3.1 below).

Sensitivity of the synthetic spectra to variations of the model parameters
--------------------------------------------------------------------------

![image](comparison_Li_M.png){width="9.15cm" height="6.7cm"} ![image](comparison_Li_Teff.png){width="9.15cm" height="6.7cm"} ![image](comparison_Li_beta.png){width="9.15cm" height="6.7cm"} ![image](comparison_Li_vexp.png){width="9.15cm" height="6.7cm"}

![image](comparison_newCa_M.png){width="9.15cm" height="6.7cm"} ![image](comparison_newCa_Teff.png){width="9.15cm" height="6.7cm"} ![image](comparison_newCa_beta.png){width="9.15cm" height="6.7cm"} ![image](comparison_newCa_vexp.png){width="9.15cm" height="6.7cm"}

We have analyzed the influence of variations in the stellar (T$_{eff}$) and wind ($\dot{M}$, $\beta$ and $v_{exp}$(OH)) parameters on the output synthetic spectra. Figures \[comparisons\_Li\] and \[comparisons\_Ca\] show, respectively, synthetic spectra in the spectral regions around the 6708 $\AA$ Li I and 6463 $\AA$ Ca I lines for different stellar and wind parameters. The Li I profile is not very sensitive to the wind parameters ($\dot{M}$, $\beta$ and $v_{exp}$(OH)). The Li I line is only slightly stronger with increasing $\dot{M}$ (Figure \[comparisons\_Li\]; top-left panel) and $\beta$ (Figure \[comparisons\_Li\]; bottom-left panel), while it is slightly weaker with increasing $v_{exp}$(OH) (Figure \[comparisons\_Li\]; bottom-right panel).
In addition, the Li I absorption line is stronger with decreasing T$_{eff}$ (Figure \[comparisons\_Li\]; top-right panel), with the pseudo-continuum (e.g. the TiO molecular bands) also being affected, as expected [see e.g. @garcia-hernandez07]. In the Ca case, the sensitivity of the synthetic spectra to variations in $\dot{M}$, $\beta$ and $v_{exp}$(OH) (Figure \[comparisons\_Ca\]; top-left, bottom-left and bottom-right panels, respectively) is even smaller than in the Li case. The Ca I 6463 Å spectral region displays weaker TiO molecular bands than the Li I 6708 Å spectral region and, consequently, the Ca I line and the pseudo-continuum are less affected by T$_{eff}$ variations (Figure \[comparisons\_Ca\]; top-right panel) than in the Li I region.

Abundance results
=================

The parameters of the best fits of [@garcia-hernandez06; @garcia-hernandez07; @garcia-hernandez13] to the observations and the hydrostatic Li abundances are listed in Table \[table\_obs\_param\]. In these fits, the static models used the solar abundances from [@grevesse98] for computing the Li abundances, while the new pseudo-dynamical models and the hydrostatic values shown in Table \[table\_abundances\] use the more recent solar abundances from [@grevesse07]. The Li hydrostatic abundances obtained by using [@grevesse98] and [@grevesse07] are practically the same.

![image](Li_line.png){width="9.1cm" height="6.7cm"} ![image](Ca_new_line.png){width="9.1cm" height="6.7cm"}

![image](Li_line_zoom.png){width="9.1cm" height="6.7cm"} ![image](Ca_new_line_zoom.png){width="9.1cm" height="6.7cm"}

In Figure \[Li\_Ca\_line\] we display the hydrostatic and pseudo-dynamical fits in the 6708 $\AA$ Li I and 6463 $\AA$ Ca I regions in four sample stars. The pseudo-dynamical models are similar to the hydrostatic ones, and properly reproduce the Li and Ca regions. The Li and Ca line profiles are not strongly affected by the presence of a circumstellar envelope and a radial wind.
The rest of the spectral fits are shown in Figures \[Li\_sample\] and \[Ca\_sample\] in Appendix A. In addition, Figure \[Li\_Ca\_zoom\] displays in more detail the Li and Ca regions for some sample stars, showing that the Ca I line is less sensitive (but still useful) to abundance variations than the Li one. In seven stars (IRAS 02095$-$2355, IRAS 09429$-$2148, IRAS 15211$-$4254, IRAS 16260$+$3454, IRAS 17034$-$1024, IRAS 18413$+$1354 and IRAS 19129$+$2803) the best spectral fit is different in the Li and Ca regions. In all cases, around the 6708 $\AA$ Li I line the best spectral fits give $T_{eff}$ = 3300 K, while around the 6463 $\AA$ Ca I line the best spectral fits provide a cooler $T_{eff}$ of 3000 K. A similar behaviour was previously found by [@garcia-hernandez06; @garcia-hernandez07] when comparing the Li I 6708 $\AA$ and Rb I 7800 $\AA$ regions. In addition, for some sample stars in which the OH expansion velocity is unknown (IRAS 02095$-$2355, IRAS 05559$+$3825, IRAS 11081$-$4203, IRAS 16030$-$5156, IRAS 17034$-$1024, IRAS 19129$+$2803, RU Cyg, SV Cas and R Cen), we explored the OH expansion velocity range displayed by other sample stars with similar variability periods. Because similar spectral fits are obtained for slightly different OH expansion velocities, for these stars we adopted the average OH expansion velocities from the values displayed by the sample stars with similar periods (see Table \[table\_obs\_param\]).
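The selection of the best-fitting synthetic spectrum from the mini-grid described in Section 3 can be sketched as a simple $\chi^2$ minimization over the abundance grid. In this hedged sketch, `synth` is a toy Gaussian absorption line standing in for a real spectral-synthesis call (e.g. Turbospectrum); its depth-abundance relation and all values are invented for illustration only.

```python
import numpy as np

def synth(wave, abundance, center=6707.8, width=0.5):
    """Toy synthetic spectrum: a Gaussian absorption line whose depth
    grows with abundance (an invented, purely illustrative relation)."""
    depth = min(0.9, 0.2 * 10 ** (abundance - 1.0))
    return 1.0 - depth * np.exp(-0.5 * ((wave - center) / width) ** 2)

def best_fit_abundance(wave, observed, grid):
    """Pick the grid abundance whose synthetic spectrum minimizes chi^2."""
    chi2 = [np.sum((observed - synth(wave, a)) ** 2) for a in grid]
    return grid[int(np.argmin(chi2))]

wave = np.linspace(6705.0, 6711.0, 200)
observed = synth(wave, 1.3)               # fake "observation" with log eps(Li) = 1.3
grid = np.arange(0.0, 2.9, 0.1)           # abundance grid in 0.1 dex steps, as in the text
assert abs(best_fit_abundance(wave, observed, grid) - 1.3) < 0.05
```

In practice each grid point is a full synthetic spectrum computed with the stellar and wind parameters of the star, and the fit quality is judged over the line and its adjacent pseudo-continuum.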
  IRAS name              $T_{eff}$ (K)    log $g$   $\beta$   $\dot{M}$ (M$_{\odot}$ yr$^{-1}$)   v$_{exp}$(OH) (km s$^{-1}$)   log $\varepsilon(Li)_{static}$   log $\varepsilon(Li)_{dyn}$   log $\varepsilon(Ca)_{static}$   log $\varepsilon(Ca)_{dyn}$
  ---------------------- ---------------- --------- --------- ----------------------------------- ----------------------------- -------------------------------- ----------------------------- -------------------------------- -----------------------------
  01085$+$3022[^\*^]{}   3300             $-$0.5    0.2       1.0$\times$10$^{-7}$                13                            2.4                              2.2                           ...                              ...
  02095$-$2355           3300[^\*\*^]{}   $-$0.5    0.8       1.0$\times$10$^{-9}$                12[^$\dagger$^]{}             1.6                              1.6                           6.3                              6.3
  05027$-$2158           2800             $-$0.5    0.4       1.0$\times$10$^{-7}$                8                             1.1                              1.1                           5.8                              5.8
  05098$-$6422           3000             $-$0.5    1.4       1.0$\times$10$^{-8}$                6                             $\leq-$1.0                       $\leq$ 0.0                    6.1                              6.1
  05151$+$6312           3000             $-$0.5    1.0       1.0$\times$10$^{-8}$                15                            $\leq$ 0.0                       $\leq$ 0.2                    6.0                              6.0
  05559$+$3825           2900             $-$0.5    1.6       1.0$\times$10$^{-7}$                12[^$\dagger$^]{}             0.6                              0.5                           5.8                              5.8
  06300$+$6058           3000             $-$0.5    0.2       1.0$\times$10$^{-7}$                12                            0.7                              0.8                           $\leq$ 5.8                       $\leq$ 5.8
  07304$-$2032           2700             $-$0.5    0.4       1.0$\times$10$^{-7}$                7                             0.9                              1.0                           $\leq$ 5.8                       $\leq$ 5.8
  09429$-$2148           3300[^\*\*^]{}   $-$0.5    1.6       1.0$\times$10$^{-8}$                12                            2.2                              2.2                           5.3                              5.3
  10261$-$5055           3000             $-$0.5    0.2       1.0$\times$10$^{-9}$                4                             $\leq-$1.0                       $\leq$ 0.0                    5.8                              5.8
  11081$-$4203           3000             $-$0.5    1.6       5.0$\times$10$^{-8}$                8[^$\dagger$^]{}              1.3                              1.1                           5.8                              5.8
  14266$-$4211           2900             $-$0.5    0.2       5.0$\times$10$^{-8}$                9                             $\leq$ 0.0                       0.2                           $\leq$ 5.3                       $\leq$ 5.3
  14337$-$6215[^\*^]{}   3300             $-$0.5    0.2       5.0$\times$10$^{-8}$                20                            2.4$^{1}$                        2.4$^{1}$                     ...                              ...
  15193$+$3132           2800             $-$0.5    1.6       1.0$\times$10$^{-9}$                3                             $\leq$ 0.0                       $\leq$ 0.0                    $\leq$ 5.8                       $\leq$ 5.8
  15211$-$4254           3300[^\*\*^]{}   $-$0.5    1.6       1.0$\times$10$^{-9}$                11                            2.3                              2.3                           5.8                              5.8
  15255$+$1944           2900             $-$0.5    0.2       5.0$\times$10$^{-7}$                7                             1.0                              0.9                           6.0                              6.0
  15576$-$1212           3000             $-$0.5    0.2       1.0$\times$10$^{-8}$                10                            1.1                              1.2                           5.7                              5.7
  16030$-$5156[^\*^]{}   3000             $-$0.5    0.2       1.0$\times$10$^{-8}$                10[^$\dagger$^]{}             1.5                              1.5                           ...                              ...
  16037$+$4218           2900             $-$0.5    1.2       1.0$\times$10$^{-8}$                4                             $\leq-$1.0                       $\leq$ 0.0                    6.3                              6.3
  16260$+$3454           3300[^\*\*^]{}   $-$0.5    0.2       1.0$\times$10$^{-9}$                12                            2.7                              2.6                           5.4                              5.4
  17034$-$1024           3300[^\*\*^]{}   $-$0.5    0.8       1.0$\times$10$^{-8}$                8[^$\dagger$^]{}              $\leq$ 0.0                       $\leq$ 0.0                    6.2                              6.2
  18413$+$1354           3300[^\*\*^]{}   $-$0.5    1.2       1.0$\times$10$^{-8}$                15                            1.8                              1.7                           $\leq$ 5.3                       $\leq$ 5.3
  18429$-$1721           3000             $-$0.5    0.2       1.0$\times$10$^{-8}$                7                             1.2                              1.1                           5.8                              5.8
  19129$+$2803           3300[^\*\*^]{}   $-$0.5    0.2       1.0$\times$10$^{-8}$                11[^$\dagger$^]{}             3.1$^{1}$                        3.1$^{1}$                     5.5                              5.5
  19361$-$1658           3000             $-$0.5    0.2       1.0$\times$10$^{-9}$                8                             1.9                              2.0                           5.8                              5.8
  20052$+$0554[^\*^]{}   3300             $-$0.5    0.2       1.0$\times$10$^{-7}$                16                            2.6                              2.3                           ...                              ...
  20343$-$3020           3000             $-$0.5    1.2       1.0$\times$10$^{-9}$                8                             $\leq-$1.0                       $\leq$ 0.0                    6.0                              6.0
  RU Cyg                 3000             $-$0.5    1.6       1.0$\times$10$^{-7}$                12[^$\dagger$^]{}             2.0                              1.7                           6.2                              6.2
  SV Cas                 3000             $-$0.5    1.6       1.0$\times$10$^{-7}$                12[^$\dagger$^]{}             3.5                              3.3                           6.3                              6.3
  R Cen$^{2}$            3000             $-$0.5    1.6       1.0$\times$10$^{-7}$                5[^$\dagger$^]{}              4.3                              4.0                           ...                              ...
  ---------------------- ---------------- --------- --------- ----------------------------------- ----------------------------- -------------------------------- ----------------------------- -------------------------------- -----------------------------

^$\dagger$^ Average OH expansion velocity adopted from sample stars with similar variability periods (see text).\
^\*^\
^\*\*^ The best-fit $T_{eff}$ in the Ca I 6463 Å  spectral region is cooler (3000 K) than the one around the Li I 6708 Å  line.\
^1^\
^2^\

The atmospheric and wind parameters as well as the Li and Ca abundances (or upper limits) from the best fits to the observed spectra are shown in Table \[table\_abundances\]. The new Li abundances determined from the extended models are very similar to those obtained with the hydrostatic models (see Table \[table\_abundances\]). A maximum difference between the hydrostatic and dynamical abundances ($\Delta$(log$\varepsilon$(Li))$_{_{static-dynamic}}$) of $+$0.3 dex is found for IRAS 20052$+$0554 and R Cen, while an average difference of $+$0.18 dex is found in our entire AGB sample. This indicates that the Li content in these stars is not strongly affected by the presence of a circumstellar envelope.
It is to be noted here that the latter number does not consider the four stars (IRAS 05098$-$6422, IRAS 10261$-$5055, IRAS 16037$+$4218 and IRAS 20343$-$3020) for which we get more conservative pseudo-dynamic Li upper limits ($\leq$0.0 dex) than from the hydrostatic models ($\leq-$1.0 dex). In addition, the Ca abundances from the hydrostatic and pseudo-dynamical models are practically identical, and the presence of a circumstellar envelope does not affect the derived Ca abundances at all. We have estimated the uncertainties in the derived Li and Ca abundances for the sample stars. For this, we have made small changes in the atmosphere parameters (${\Delta}T_{eff}$ = $\pm$100 K, ${\Delta}$log $g$ = $\pm$0.5, ${\Delta}Z$ = $\pm$0.2, ${\Delta}\xi$ = $\pm$0.5 km s$^{-1}$ and ${\Delta}FWHM$ = $\pm$50 m$\AA$) for the hydrostatic models, and also in the wind parameters (${\Delta}{\beta}$ = $\pm$0.2, ${\Delta}$log($\dot{M}/M_{\odot}$yr$^{-1}$) = $\pm$0.5 and ${\Delta}v_{exp}$(OH) = $\pm$5 km s$^{-1}$) for the pseudo-dynamic models. These small changes result in Li formal errors of $\pm$0.3 and $\pm$0.2 dex for the hydrostatic and pseudo-dynamic abundances, respectively, while the estimated Ca formal uncertainties are $\pm$0.5 dex for both the hydrostatic and pseudo-dynamic abundances.

Non-LTE effects on the Li I and Ca I lines
==========================================

Because the classical hydrostatic and our pseudo-dynamical synthetic spectra are constructed assuming local thermodynamic equilibrium (LTE) in the (extended) stellar atmosphere, we have explored the possible non-LTE (NLTE) effects on the 6708 Å Li I and 6463 Å Ca I resonance lines in order to clarify the sign and magnitude of the corrections to be applied to the hydrostatic and pseudo-dynamic Li and Ca abundances.
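A common way to turn the individual parameter perturbations listed above into a single formal abundance error is to combine the resulting abundance shifts in quadrature; this sketch illustrates that convention, not necessarily the authors' exact procedure, and the individual shift values below are invented placeholders, not the measured sensitivities.

```python
import math

# Hedged sketch of a formal abundance-error budget: the abundance shift
# produced by perturbing each model parameter (as in the text) is combined
# in quadrature. The shift values are invented placeholders.
shifts_dex = {
    "Teff +/-100 K":    0.15,
    "logg +/-0.5":      0.05,
    "[Fe/H] +/-0.2":    0.05,
    "xi +/-0.5 km/s":   0.05,
    "FWHM +/-50 mA":    0.05,
    "beta +/-0.2":      0.03,
    "log Mdot +/-0.5":  0.05,
    "vexp +/-5 km/s":   0.03,
}
total = math.sqrt(sum(s ** 2 for s in shifts_dex.values()))
print(f"total formal error ~ +/-{total:.2f} dex")
```

With these placeholder shifts the quadrature sum is of the same order as the $\pm$0.2$-$0.3 dex Li errors quoted in the text, dominated by the $T_{eff}$ term.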
The NLTE radiative transfer calculations for the 6708 Å Li I line were performed using the [MULTI]{} code [@carlsson86; @carlsson92] with the same Li atom model as in [@osorio11]. This Li atom model includes quantum mechanical calculations of electron and hydrogen collisional excitation as well as charge exchange with hydrogen [see more details in @osorio11]. The calculations were performed in the same grid of MARCS models used in [@osorio16], and the models are hydrostatic. For this study we focused on atmospheric models with log $g$ $=-0.5$, \[Fe/H\] = 0.0 and five different effective temperatures, T$_{eff}$ = [2500, 2600, 2700, 3300 and 3400]{} K. For the three coolest models, the NLTE abundance corrections $\Delta$(log$\varepsilon$(Li))$_{_{NLTE-LTE}}$ are $\gtrapprox$ $+$0.2 dex, reaching $+$0.3 dex at log$\varepsilon$(Li) = 0.0. The warmer model atmospheres display smaller NLTE abundance corrections of $\Delta$(log$\varepsilon$(Li))$_{_{NLTE-LTE}}$ $\sim$ 0.0 dex at log$\varepsilon$(Li) $\sim$ $+$2.0 and $\Delta$(log$\varepsilon$(Li))$_{_{NLTE-LTE}}$ $\sim$ $+$0.1 dex around log$\varepsilon$(Li) $\geq$ 3.0 and log$\varepsilon$(Li) $\leq$ 0.5. Our updated NLTE Li calculations thus confirm the Li-rich character of our massive O-rich AGB sample stars and show that the adoption of LTE in Li-rich AGB stars is likely to result in an underestimation of the Li abundances [e.g. @kiselman95; @abia99]. For the NLTE calculations of the 6463 Å Ca I line, the [MULTI]{} code was used with the same Ca atom model as in [@osorio18]. This Ca atom model also includes updated data for electron and hydrogen collisional excitation and charge exchange with hydrogen. The calculations were performed in two atmospheric hydrostatic models with T$_{eff}$/log$g$/\[Fe/H\] = 3000/$-$0.5/0.0 and 3300/$-$0.5/0.0, for which we found positive NLTE abundance corrections of $\sim$ $+$0.06 and $+$0.02 dex, respectively.
In short, our NLTE Ca calculations show that the use of LTE in massive O-rich AGB stars translates into a slight underestimation of the real Ca abundances (see Subsection 6.2).

Discussion
==========

Lithium
-------

Our Li results in massive Galactic AGB stars, including a circumstellar component in the analysis, do not reflect a dramatic change in the derived abundances, contrary to our previous findings on the Rb abundances in these stars [@zamora14; @perez-mesa17]. The Rb abundances obtained with pseudo-dynamical models are much lower (sometimes even by 1$-$2 dex) than the hydrostatic ones, being strongly affected by the presence of a circumstellar envelope. We have made several tests by changing the wind parameters (mass-loss rate $\dot{M}$, parameter $\beta$ and the terminal velocity $v_{exp}$(OH)) in the models, e.g. not fixing $\dot{M}$ and $\beta$ or assuming the wind model parameters from the best fits of the Rb I 7800 $\AA$ line [@perez-mesa17], but the Li abundances remain very similar (within 0.1 dex). As we have mentioned before, for consistency, we have fixed $\dot{M}$ and $\beta$ to those values obtained by [@perez-mesa17] from the Rb spectral fits (when available) because the synthetic spectra around 7800 $\AA$ are more sensitive to variations in the model wind parameters than the Li I 6708 Å spectral region. Our finding that the circumstellar effects are not so important for the Li I 6708 Å line could be somewhat surprising because the atomic parameters (e.g. excitation potential) of the Li I 6708 $\AA$ and Rb I 7800 $\AA$ resonance lines are quite similar. The lower Li abundance as compared with Rb, and therefore the lower Li I column density in the circumstellar envelope, however, likely explains the small abundance differences obtained between the pseudo-dynamical and hydrostatic models.
In addition, other factors such as the molecular blends in each wavelength range and the line formation depth could contribute to the different sensitivity of Rb and Li to the circumstellar effects. In particular, the line formation depth is extremely important because the velocity field could change the $\tau$-scale of the lines [see @nowotny10]. We have compared our new Li abundances with up-to-date solar metallicity massive AGB nucleosynthesis models with very different prescriptions for mass loss and convection: (i) ATON models [@ventura18] with the [@bloecker95] recipe for mass loss and the full spectrum of turbulence convective mixing [FST; e.g. @mazzitelli99]; (ii) Monash models [@karakaslugaro16] with the [@vassiliadis93] mass-loss prescription and the mixing length theory of convection [MLT; @bohm58]; (iii) FRUITY[^3] models [@cristallo15] with a pulsationally-driven mass-loss rate [see @straniero06] and the MLT of convection but under the formulae from [@straniero06]; and (iv) NuGrid/MESA models [@ritter18] with the mass-loss formula from [@bloecker95] and assuming convective boundary mixing [CBM; e.g. @ritter18]. The evolution of Li in massive HBB-AGB stars is strongly affected by several stellar parameters such as progenitor mass, metallicity, mass loss and convection model [see e.g. @mazzitelli99; @vanraai12]. During the AGB phase, the mass loss and the treatment of convection are the most important factors in the determination of the duration of HBB and the variation of the surface chemistry. For example, i) the massive AGB nucleosynthesis ATON models show strong Li abundance oscillations (by orders of magnitude) on timescales as short as $\sim$10$^{4}$ years [see e.g.
@mazzitelli99] and there may be negligible Li in the envelope for a significant (at least 20%) period of time; and ii) the Li minima and the duration of the Li-rich phase in the Monash models are less deep and longer, respectively, than in the ATON models [see @garcia-hernandez13 for more details]. This complex theoretical evolution of the Li abundance implies that the Li abundance distribution derived from the spectroscopic observations (e.g. the exact progenitor mass and evolutionary status are not known) can only be analyzed in a statistical way [@garcia-hernandez07]. Regarding the peak surface Li abundance during the AGB, the ATON models predict that it goes from log$\varepsilon$(Li) = 3.8 dex for M = 3.5 M$_{\odot}$ to 4.3 dex for M = 6.0 and 7.5 M$_{\odot}$, while in the Monash models it changes from log$\varepsilon$(Li) = 3.8 dex for M = 4.25 M$_{\odot}$ to log$\varepsilon$(Li) = 4.4 dex for M = 8 M$_{\odot}$. In the NuGrid/MESA models, Li production at Z = 0.02 is only predicted for M = 6 and 7 M$_{\odot}$, with peak surface Li abundances of 2.9 and 3.7 dex, respectively. However, the FRUITY models do not predict any production of Li, which is at odds with the Li overabundances observed in massive AGB stars in the Galaxy [e.g. @garcia-hernandez07; @garcia-hernandez13], the Magellanic Clouds [e.g. @plez93; @smith95; @garcia-hernandez09] and the Li-detected O-rich AGB star in the dwarf galaxy IC 1613 [e.g. @menzies15]. The pseudo-dynamic Li abundances obtained from our spectra are between $\sim$0.0 and 4.0 dex, with eight non Li-rich (log$\varepsilon$(Li) $<$ 0.5 dex), twenty Li-rich (0.5 $\leq$ log$\varepsilon$(Li) $\leq$ 3.2 dex) and two super Li-rich (log$\varepsilon$(Li) $>$ 3.2 dex) stars. Their great similarity with the hydrostatic Li abundances (and the relatively small positive NLTE corrections; Subsect. 5) means that the conclusions reached by [@garcia-hernandez07; @garcia-hernandez13] are unchanged and will not be repeated here.
In short, the Li-rich and super Li-rich character of the massive AGB stars in our sample confirms that they experience strong HBB [@garcia-hernandez07; @garcia-hernandez13]. This is in good agreement with the predictions from AGB nucleosynthesis models like the ATON, Monash and NuGrid/MESA ones, but in strong contrast with the FRUITY AGB models, which do not predict strong HBB and Li production in solar metallicity massive AGB stars.

Calcium
-------

This is the first work in which the Ca abundances have been obtained for a sample of massive AGB stars. Figure \[comparisons\_Ca\] shows that the 6463 $\AA$ Ca I line is not sensitive to changes in the stellar (T$_{eff}$) and wind ($\dot{M}$, $\beta$ and $v_{exp}$(OH)) parameters. Thus, we adopted the wind parameters from the Rb fits (when possible) or the Li fits, which are more sensitive to variations of the wind parameters. The hydrostatic and pseudo-dynamic abundances of Ca obtained from our spectra are identical, so the Ca abundances are not affected at all by the presence of a circumstellar envelope. The Ca abundances obtained are in the range log $\varepsilon(Ca)$ = 5.3$-$6.3 dex. The theoretical AGB nucleosynthesis models predict an important production of some Ca isotopes like the radioactive $^{41}$Ca but no significant change in the total Ca abundance (see Section 1). Note that here we consider log $\varepsilon(Ca)$ = 6.31 dex as the solar photospheric abundance [@grevesse07]. While the ATON models [@ventura18] do not include Ca, the Monash models predict Ca abundances in the range log $\varepsilon(Ca)$ = 6.31$-$6.35 dex for solar metallicity, i.e. a 12% increase at most relative to the initial value of 6.29 dex used in these models. In the same way, the FRUITY models predict solar Ca abundances for Z = 0.014.
In the NuGrid/MESA models, Ca at Z = 0.01 is predicted to vary from log $\varepsilon(Ca)$ = 6.11 to 6.18 dex in the range M = 3$-$7 M$_{\odot}$, while for Z = 0.02 the Ca abundances are between 6.44 and 6.48 dex. In Figure \[m6z014\] we show the evolution of Li, Ca and the radioactive isotope $^{41}$Ca as a function of time for the 6 M$_{\odot}$ model of solar metallicity from [@karakaslugaro16]. This figure shows that the $^{41}$Ca abundance increases as a consequence of nucleosynthesis and mixing during the TP-AGB phase, although the first increase of $^{41}$Ca occurred during the second dredge-up, after core helium burning. We can also see that the total elemental Ca abundance is, however, unchanged. The 6 M$_{\odot}$ model has TDU and HBB as described in [@karakaslugaro16], remains oxygen-rich as a consequence of HBB, and, as shown by Figure \[m6z014\], becomes Li-rich, with $\log \varepsilon$(Li) exceeding 3 for $\sim$80,000 years.

![Evolution of Li, Ca and $^{41}$Ca versus time for the 6 M$_{\odot}$ model of solar metallicity from [@karakaslugaro16].[]{data-label="m6z014"}](m6z014-surf.pdf){width="7.0cm" height="9.0cm"}

We find that most (20) sample stars display Ca abundance values $\sim$0.5$-$0.6 dex lower than the adopted Ca solar abundance of log $\varepsilon(Ca)$ = 6.31 dex. Although we cannot completely discard the possibility that some of our sample stars could indeed be slightly metal-poor[^4], their Ca abundances can be considered as nearly solar when taking into account our estimated Ca abundance errors ($\sim$0.5 dex) and the possible NLTE effects (see Subsect. 5). This is consistent with the predictions from the available *s*-process nucleosynthesis models for solar metallicity massive AGB stars, as mentioned above. However, a minority (5) of the sample stars seem to show a significant Ca depletion ($-$0.8 to $-$1.0 dex).
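Since log $\varepsilon$ abundances are logarithmic, a difference of $D$ dex corresponds to a linear factor of $10^{D}$. A quick numerical check of the depletions quoted in the text:

```python
# Converting dex differences on the log eps scale to linear factors.
def dex_to_factor(delta_dex):
    return 10.0 ** delta_dex

# 0.5-0.6 dex below the solar log eps(Ca) = 6.31 is a factor ~3.2-4.0 depletion:
assert 3.1 < dex_to_factor(0.5) < 3.2
assert 3.9 < dex_to_factor(0.6) < 4.0
# and the 0.8-1.0 dex depletion of the five most Ca-poor stars is a factor ~6-10:
assert 6.0 < dex_to_factor(0.8) < 6.5
assert dex_to_factor(1.0) == 10.0
```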
We explored whether the derived Ca abundances are correlated with the Li content, the wind parameters (mass-loss rate $\dot{M}$, beta parameter $\beta$ and terminal velocity $v_{exp}$(OH)) or other observational information such as variability periods, near-IR colors and IR excess, but our search proved to be negative, with the possible exception of the near-IR colors and the IR excess (see below). We have identified three possibilities (i.e. missed opacities in the stellar atmosphere models, Ca depletion into dust and line weakening phenomena), enumerated below, in order to understand this apparent (and unexpected) Ca depletion:

![image](Ca_vs_color.png){width="9.1cm" height="6.7cm"} ![image](Ca_vs_color_12-2um.png){width="9.1cm" height="6.7cm"}

i\) The low Ca abundances in our sample stars could be due to missed opacities in the stellar atmosphere models. Although the Ca spectral region (mainly dominated by the TiO molecule) is generally well modelled by us, @garcia-hernandez07 [@garcia-hernandez09] have reported the presence of strong and yet unidentified molecular bands in several spectral regions in the optical spectra of massive Galactic and extragalactic O-rich AGB stars, which suggests the presence of other opacity contributors not yet considered in the model atmospheres and in the construction of synthetic optical spectra for O-rich AGB stars.

ii\) Although the condensation of inorganic dust grains in the winds of evolved stars is still poorly understood, the observed Ca underabundances may also be due to the fact that Ca in our sample stars could be depleted into dust [see e.g. @lodders99 for a review]. Figure \[Ca\_vs\_color\] plots the Ca pseudo-dynamical abundances against the 2MASS J$-$K colors and the infrared excesses R = F(12$\mu$m)/F(2.2$\mu$m)[^5] (with fluxes at 2.2 and 12 $\mu$m from 2MASS and IRAS, respectively) in our sample stars.
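The infrared-excess ratio R = F(12$\mu$m)/F(2.2$\mu$m) can be computed from the catalogue data as sketched below. This is a hedged illustration: the 2MASS Ks magnitude is converted to a flux with the Ks zero-magnitude flux ($\approx$666.8 Jy, Cohen et al. calibration) and divided into the IRAS 12 $\mu$m flux; the input magnitude and flux are example values, not those of any particular sample star.

```python
# Sketch of the IR-excess ratio R = F(12 um)/F(2.2 um) from catalogue data.
def ks_mag_to_jy(ks_mag, zero_point_jy=666.8):
    """2MASS Ks magnitude -> flux density (Jy), assumed zero point ~666.8 Jy."""
    return zero_point_jy * 10.0 ** (-0.4 * ks_mag)

def ir_excess(f12_jy, ks_mag):
    """R = F(12 um) / F(2.2 um), with F(12 um) taken from the IRAS catalogue."""
    return f12_jy / ks_mag_to_jy(ks_mag)

# e.g. a hypothetical star with Ks = 2.0 mag and F(12 um) = 150 Jy:
R = ir_excess(150.0, 2.0)
```

A caveat noted in the text applies here too: these stars are strongly variable in the near- and mid-IR, so single-epoch magnitudes make R only a rough excess indicator.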
Curiously, the 5 stars with a significant Ca depletion are the redder ones (they display, on average, higher J$-$K colors) and most of them display a significant infrared excess, suggesting that they could be among the more evolved and/or dusty stars in our sample. However, the number of stars is still low, and Galactic massive AGB stars are known to display a large photometric variability in the near-IR and mid-IR ranges [see e.g. @garcia-hernandez07b]. [@dellagli14] studied the alumina dust (the amorphous state of Al$_{2}$O$_{3}$) production in O-rich circumstellar shells, which is expected to be fairly abundant in the winds of the more massive and O-rich AGB stars. By coupling AGB stellar nucleosynthesis and dust formation, the predicted production of alumina dust implies an important decrease (see below) in the abundance of gaseous Al in the AGB wind. The high fraction of gaseous Al condensed in Al$_{2}$O$_{3}$ (especially in their more massive AGB models) implies that gaseous Al is expected to be underabundant in the more massive HBB-AGB stars; something that is in good agreement with the only estimate of the Al content in massive HBB-AGB stars to date[^6], i.e. the Al content measured in a confirmed massive HBB-AGB star in the Large Magellanic Cloud [see @dellagli14 for more details]. Thus, similarly to the Al case, the gaseous Ca in massive Galactic O-rich AGB stars could be depleted into dust. For example, [@tielens90] proposed a dust condensation sequence in O-rich circumstellar regions, in which the formation of calcium-rich silicates such as gehlenite (Ca$_{2}$Al$_{2}$SiO$_{7}$), diopside (CaMgSi$_{2}$O$_{6}$) and anorthite (CaAl$_{2}$Si$_{2}$O$_{8}$) would be expected [see also @lodders99 and references therein]. However, [@speck00] suggested that the dust evolutionary path is different for the AGB and red supergiant (RSG) stars, although both condensation sequences eventually would lead to similar dust types.
Basically, the main difference between the RSG and AGB dust condensation sequences is that the RSG stars experience an evolutionary phase in which aluminium- and calcium-rich silicate condensation takes place, while this evolutionary phase is apparently not seen in the AGB stars. [@speck00] classified the spectra of a sample of AGB and RSG stars into several groups according to the observed appearance of the amorphous silicate infrared (IR) features around 10 $\mu$m. They found that the RSG IR spectra are better reproduced when calcium-rich silicates are considered, while the AGB stars are well reproduced with amorphous silicates only. Unfortunately, we have only three stars (RU Cyg, IRAS 07304$-$2032 and IRAS 15193$+$3139) in common with [@speck00] and they are not among the most Ca-poor stars in our sample. [@speck00] classified their spectra as *silicate A* (RU Cyg) and *silicate B* (IRAS 07304$-$2032 and IRAS 15193$+$3139) AGB types, all of them with no clear signs of calcium-rich silicates. Additional N-band IR spectroscopic observations of confirmed Galactic O-rich AGB stars (especially of those stars with significant Ca depletion) would be desirable in order to clarify whether their 10 $\mu$m amorphous silicate dust features could be better fitted by the inclusion of Ca-rich silicates. Ca-rich stardust grains from AGB stars have been recovered from meteorites, some of them belonging to the Group II population that probably originated from HBB-AGB stars [@lugaro17]. Both hibonite grains [e.g. @nittler08] and Ca-rich silicates [e.g. @nguyen04; @vollmer09] have been reported. iii\) Finally, line weakening phenomena could be another possibility to explain the lack of Ca in these sample stars. [@humphreys74] studied some high-luminosity M-type supergiants that show veiling of the metallic absorption lines; the veiling effect was found to be more pronounced in the near-IR than in the blue spectral regions.
[@humphreys74] proposed that the peculiar energy distributions of these stars and the veiling of the absorption lines may be explained by a combination of free-bound emission ($\lambda<$ 1.6 $\mu$m) and free-free emission ($\lambda>$ 1.6 $\mu$m) from electron-neutral H interactions arising in the extended atmosphere around the star plus the surrounding circumstellar shell of dust grains. Such a line weakening phenomenon, as observed in these peculiar M-supergiants, could also be present in similar dusty M-type long-period variables (with periods longer than 260 days) such as our sample stars. We have looked for additional metallic absorption lines in our Li and Ca spectral regions (e.g. the 6469 $\AA$ Fe I and 6484 $\AA$ Ni I lines) in order to check whether line weakening phenomena are affecting other metallic lines. Unfortunately, our optical spectra are severely dominated by the TiO molecule and no metallic lines are detected. We thus cannot confirm or rule out whether the lack of Ca in our minority of stars is due to possible line weakening phenomena affecting the optical spectra of massive Galactic O-rich AGB stars. Conclusions =========== We have reported new hydrostatic and pseudo-dynamical abundances of Li and Ca from the 6708 $\AA$ Li I and 6573 $\AA$ Ca I lines, respectively, in a complete sample of massive Galactic O-rich AGB stars by using a modified version of the spectral synthesis code *Turbospectrum*, which considers the presence of a circumstellar envelope with a radial wind. The new Li abundances from pseudo-dynamical models are very similar to those obtained from the hydrostatic models (the average difference is 0.18 dex), while they are identical for Ca. This indicates that the determination of the Li and Ca abundances in massive O-rich AGB stars is not strongly affected by the presence of a circumstellar envelope.
Indeed, we found that the Li I and Ca I line profiles are not very sensitive to variations of the wind parameters ($\dot{M}$, $\beta$ and $v_{exp}$(OH)). The new pseudo-dynamical abundances of Li (30 stars) confirm the Li-rich (and super Li-rich in some stars) character of our sample stars and the strong activation of the HBB process in massive Galactic AGB stars. This is in good agreement with the theoretical predictions from the most recent AGB nucleosynthesis models such as ATON, Monash and NuGrid/MESA, but at odds with the FRUITY database, which predicts no Li production by HBB in massive AGB stars at solar metallicity. For the first time we have obtained Ca abundances in a sample of massive Galactic AGB stars. Most of them (20) display nearly solar Ca abundances, within the estimated errors and/or considering possible NLTE effects. Their abundances are thus consistent with the predictions from the *s*-process nucleosynthesis models for massive AGB stars at solar metallicity. For example, such models predict some production of the radioactive $^{41}$Ca isotope but no change in the total Ca abundance. A minority of stars (5) show a significant Ca depletion (by $\sim$0.8$-$1.0 dex). Possible explanations for their apparent and unexpected Ca depletion include missing opacities in the stellar atmosphere models, Ca depletion into dust, and line weakening phenomena. The authors thank Flavia Dell’Agli for providing the Li peak abundances from the ATON models and Umberto Battino and Ashley Tattersall for providing the Li and Ca abundances from the NuGrid/MESA models. This work is based on observations at the 4.2 m William Herschel Telescope operated on the island of La Palma by the Isaac Newton Group in the Spanish Observatorio del Roque de Los Muchachos of the Instituto de Astrofísica de Canarias. Also based on observations with the ESO 3.6 m telescope at La Silla Observatory (Chile). V.P.M.
acknowledges the financial support from the Spanish Ministry of Economy and Competitiveness (MINECO) under the 2011 Severo Ochoa Program MINECO SEV$-$2011$-$0187. V.P.M., O.Z., D.A.G.H., T.M. and A.M. acknowledge support provided by MINECO under grant AYA$-$2017$-$88254-P. M.L. is a Momentum (“Lendület-2014” Programme) project leader of the Hungarian Academy of Sciences. This paper made use of the IAC Supercomputing facility HTCondor (http://research.cs.wisc.edu/htcondor/), partly financed by the Ministry of Economy and Competitiveness with FEDER funds, code IACA13$-$3E$-$2493. Complete sample {#Append_sample} =============== The best fits of the 6708 $\AA$ Li I and 6573 $\AA$ Ca I spectral regions of our sample of massive AGB stars are displayed in Figures \[Li\_sample\] and \[Ca\_sample\], respectively. The pseudo-dynamical models are similar to the hydrostatic ones, and properly reproduce the Li and Ca regions, which means that the Li and Ca line profiles are not strongly affected by the presence of a circumstellar envelope and a radial wind. ![image](appendix_Li_1.png){width="9.1cm" height="6.5cm"} ![image](appendix_Li_2.png){width="9.1cm" height="6.5cm"} ![image](appendix_Li_3.png){width="9.1cm" height="6.5cm"} ![image](appendix_Li_4.png){width="9.1cm" height="6.5cm"} ![image](appendix_Li_5.png){width="9.1cm" height="6.5cm"} ![image](appendix_Li_6.png){width="9.1cm" height="6.5cm"} ![image](appendix_Li_7.png){width="9.1cm" height="6.5cm"} ![image](appendix_Li_8.png){width="9.1cm" height="6.5cm"} ![image](appendix_newCa_1.png){width="9.1cm" height="6.5cm"} ![image](appendix_newCa_2.png){width="9.1cm" height="6.5cm"} ![image](appendix_newCa_3.png){width="9.1cm" height="6.5cm"} ![image](appendix_newCa_4.png){width="9.1cm" height="6.5cm"} ![image](appendix_newCa_5.png){width="9.1cm" height="6.5cm"} ![image](appendix_newCa_6.png){width="9.1cm" height="6.5cm"} Abia, C., Pavlenko, Y., de Laverny, P. 1999, , 351, 273 Abia, C., Busso, M., Gallino, R., et al.
2001, , 559, 1117 Alavi, S. F. & Shayesteh, A. 2018, MNRAS, 474, 2 Alvarez, R., & Plez, B. 1998, , 330, 1109 Bl[ö]{}ecker, T. 1995, , 297, 727 B[ö]{}hm-Vitense, E. 1958, , 46, 108 Busso, M., Gallino, R., Lambert, D. L., Travaglio, C., & Smith, V. V. 2001, , 557, 802 Cameron, A. G. W., & Fowler, W. A. 1971, , 164, 111 Carlsson, M. 1986, Uppsala Astronomical Observatory Reports, 33 Carlsson, M. 1992, Cool Stars, Stellar Systems, and the Sun, 26, 499 Chengalur, J. N., Lewis, B. M., Eder, J., & Terzian, Y. 1993, , 89, 189 Cristallo, S., Piersanti, L., Straniero, O., et al. 2011, , 197, 17 Cristallo, S., Straniero, O., Piersanti, L., & Gobrecht, D. 2015, , 219, 40 Dell’Agli, F., Garc[í]{}a-Hern[á]{}ndez, D. A., Rossi, C., et al. 2014, , 441, 1115 Delfosse, X., Kahane, C., & Forveille, T. 1997, , 320, 249 Di Criscienzo, M., Ventura, P., Garc[í]{}a-Hern[á]{}ndez, D. A., et al. 2016, , 462, 395 Dillmann, I., Domingo-Pardo, C., Heil, M., et al. 2009, , 79, 065805 Eriksson, K., Nowotny, W., H[ö]{}fner, S., et al. 2014, , 566, A95. Fishlock, C. K., Karakas, A. I., Lugaro, M., & Yong, D. 2014, , 797, 44 Garc[í]{}a-Hern[á]{}ndez, D. A., Garc[í]{}a-Lario, P., Plez, B., et al. 2006, Science, 314, 1751 Garc[í]{}a-Hern[á]{}ndez, D. A., Garc[í]{}a-Lario, P., Plez, B., et al. 2007, , 462, 711 Garc[í]{}a-Hern[á]{}ndez, D. A., Perea-Calderón, J. V., Bobrowsky, M., Garc[í]{}a-Lario, P.  2007, , 666, L33 Garc[í]{}a-Hern[á]{}ndez, D. A., Manchado, A., Lambert, D. L., et al. 2009, , 705, L31 Garc[í]{}a-Hern[á]{}ndez, D. A., Zamora, O., Yag[ü]{}e, A., et al. 2013, , 555, L3 Grevesse, N., & Sauval, A. J. 1998, , 85, 161 Grevesse, N., Asplund, M., & Sauval, A. J. 2007, , 130, 105 Groenewegen, M. A. T., & de Jong, T. 1998, , 337, 797 Gustafsson, B., Edvardsson, B., Eriksson, K., et al. 2008, , 486, 951 Herwig, F. 2005, , 43, 435 H[ö]{}fner, S., Bladh, S., Aringer, B., et al. 2016, , 594, A108. H[ö]{}fner, S., & Olofsson, H. 2018, Astronomy and Astrophysics Review, 26, 1 Hoppe, P., & Ott, U. 
1997, American Institute of Physics Conference Series, 402, 27 Humphreys, R. M. 1974, , 188, 75 Jim[é]{}nez-Esteban, F. M., Garc[í]{}a-Lario, P., Engels, D., & Manchado, A. 2006, , 458, 533 Jones, T. J., Bryja, C. O., Gehrz, R. D., et al. 1990, , 74, 785 Jorissen, A., Frayer, D. T., Johnson, H. R., Mayor, M., & Smith, V. V. 1993, , 271, 463 Karakas, A. I., Garc[í]{}a-Hern[á]{}ndez, D. A., & Lugaro, M. 2012, , 751, 8 Karakas, A., & Lattanzio, J. C. 2007, , 24, 103 Karakas, A. I., & Lattanzio, J. C. 2014, , 31, e030 Karakas, A. I., & Lugaro, M. 2016, , 825, 26 Karakas, A. I., Lugaro, M., Carlos, M., et al. 2018, , 477, 421 Kholopov, P. N., Samus, N. N., Frolov, M. S., et al. 1998, Combined General Catalogue of Variable Stars, 4.1 Ed (II/214A). (1998), Kiselman, D., & Plez, B. 1995, , 66, 429 Lambert, D. L., Smith, V. V., Busso, M., Gallino, R., & Straniero, O. 1995, , 450, 302 Lewis, B. M. 1994, , 93, 549 Lockwood, G. W. 1985, , 58, 167 Lodders, K., & Fegley, B., Jr. 1999, Asymptotic Giant Branch Stars, 191, 279 Lugaro, M., Doherty, C. L., Karakas, A. I., et al. 2012, Meteoritics and Planetary Science, 47, 1998 Lugaro, M., & Chieffi, A. 2011, Lecture Notes in Physics, Berlin Springer Verlag, 812, 83 Lugaro, M., Karakas, A. I., Bruno, C. G., et al. 2017, Nature Astronomy, 1, 0027 Lugaro, M., Ott, U., & Kereszturi, [Á]{}. 2018, Progress in Particle and Nuclear Physics, 102, 1 Lugaro, M., Heger, A., Osrin, D., et al. 2014, Science, 345, 650 Mazzitelli, I., D’Antona, F., & Ventura, P. 1999, , 348, 846 McSaveney, J. A., Wood, P. R., Scholz, M., Lattanzio, J. C., & Hinkle, K. H. 2007, , 378, 1089 Menzies, J. W., Whitelock, P. A., & Feast, M. W. 2015, , 452, 910 Nguyen, A. N., & Zinner, E. 2004, Science, 303, 1496 Nittler, L. R., Alexander, O., Gao, X., Walker, R. M., & Zinner, E. 1997, , 483, 475 Nittler, L. R., Alexander, C. M. O., Gallino, R., et al. 2008, , 682, 1450 Nowotny, W., H[ö]{}fner, S. & Aringer, B. 2010, , 514, 35 Osorio, Y., Barklem, P. 
S., Lind, K., & Asplund, M. 2011, , 529, A31 Osorio, Y., & Barklem, P. S. 2016, , 586, A120 Osorio, Y., et al. 2018, in press P[é]{}rez-Mesa, V., Zamora, O., Garc[í]{}a-Hern[á]{}ndez, D. A., et al. 2017, , 606, A20 Plez, B. 1990, MmSAI, 61, 765 Plez, B., Smith, V. V., & Lambert, D. L. 1993, , 418, 812 Plez, B. 2012, Turbospectrum: Code for spectral synthesis, Astrophysics Source Code Library 1205.004 Richards, J. W., Starr, D. L., Miller, A. A., et al. 2012, , 203, 32 Ritter, C., Herwig, F., Jones, S., et al. 2018, , 480, 538 Sackmann, I.-J., & Boothroyd, A. I. 1992, , 392, L71 Samus’, N. N., Kazarovets, E. V., Durlevich, O. V., Kireeva, N. N., & Pastukhova, E. N. 2017, Astronomy Reports, 61, 80 Sevenster, M. N., Chapman, J. M., Habing, H. J., Killeen, N. E. B., & Lindqvist, M. 1997, , 122, 79 Shayesteh, A., Ram, R. S., Bernath, P. F. 2013, J. Mol. Spectrosc., 288, 46 Sivagnanam, P., Le Squeren, A. M., Foy, F., & Tran Minh, F. 1989, , 211, 341 Slootmaker, A., Habing, H. J., & Herman, J. 1985, , 59, 465 Smith, V. V., & Lambert, D. L. 1989, , 345, L75 Smith, V. V., & Lambert, D. L. 1990, , 361, L69 Smith, V. V., Plez, B., Lambert, D. L., & Lubowich, D. A. 1995, , 441, 735 Speck, A. K., Barlow, M. J., Sylvester, R. J., & Hofmeister, A. M. 2000, , 146, 437 Straniero, O., Gallino, R., & Cristallo, S. 2006, Nuclear Physics A, 777, 311 te Lintel Hekkert, P., Versteege-Hensel, H. A., Habing, H. J., & Wiertz, M. 1989, , 78, 399 te Lintel Hekkert, P., Caswell, J. L., Habing, H. J., et al. 1991, , 90, 327 Tielens, A. G. G. M. 1990, From Miras to Planetary Nebulae: Which Path for Stellar Evolution?, 186 Trigo-Rodr[í]{}guez, J. M., Garc[í]{}a-Hern[á]{}ndez, D. A., Lugaro, M., et al. 2009, Meteoritics and Planetary Science, 44, 627 van Raai, M. A., Lugaro, M., Karakas, A. I., Garc[í]{}a-Hern[á]{}ndez, D. A., & Yong, D. 2012, , 540, A44 Vassiliadis, E., & Wood, P. R. 1993, , 413, 641 Ventura, P., & D’Antona, F. 
2009, , 499, 835 Ventura, P., Karakas, A., Dell’Agli, F., Garc[í]{}a-Hern[á]{}ndez, D. A., & Guzman-Ramirez, L. 2018, , 475, 2282 Vollmer, C., Hoppe, P., Stadermann, F. J., Floss, C., & Brenker, F. E. 2009, , 73, 7127 Watson, C. L., Henden, A. A., & Price, A. 2006, Society for Astronomical Sciences Annual Symposium, 25, 47 Wasserburg, G. J., Trippella, O., & Busso, M. 2015, , 805, 7 Whitelock, P., Menzies, J., Feast, M., et al. 1994, , 267, 711 Wood, P. R., Bessell, M. S., & Fox, M. W. 1983, , 272, 99 Wo[ź]{}niak, P. R., Williams, S. J., Vestrand, W. T., & Gupta, V. 2004, , 128, 2965 Yurchenko, S. N., Blissett, A., Asari, U., et al. 2016, MNRAS, 456, 4524 Zamora, O., Garc[í]{}a-Hern[á]{}ndez, D. A., Plez, B., & Manchado, A. 2014, , 564, L4 [^1]: The stellar mass was in all cases selected to be 2 M$_{\odot}$ because the temperature and pressure structure of the model atmosphere is practically identical for a 1 M$_{\odot}$ and a 10 M$_{\odot}$ model atmosphere and the output synthetic spectra are not sensitive to the mass of the star [see Fig. 1 in @plez90]. [^2]: Massive AGB stars with mass-loss rates higher than 10$^{-6}$ M$_{\odot}yr^{-1}$ are completely obscured in the optical, while our sample stars, still visible in the optical, should have lower mass-loss rates [see @zamora14; @perez-mesa17 for more details]. [^3]: FUll-Network Repository of Updated Isotopic Tables and Yields: http://fruity.oa-abruzzo.inaf.it/. [^4]: It is to be noted, however, that most sample stars are expected to be of solar metallicity [see a detailed discussion on this in @garcia-hernandez07]. [^5]: The infrared excess R probes the presence of circumstellar material emitting at 12 $\mu$m with respect to the stellar continuum at 2.2 $\mu$m [see e.g. @jorissen93]. [^6]: [@mcsaveney07] measured $\log \varepsilon$(Al) = 5.5 dex in the truly massive HBB-AGB star HV 2576 in the LMC (Z = -0.3 dex), while the Al solar abundance is 6.4 dex [@grevesse07].
The amount of gaseous Al depleted into dust is 0.6 dex, in good agreement with the [@dellagli14] AGB models that include dust formation.
--- abstract: 'We address robustness issues of self-triggered sampling with respect to model uncertainties, and propose a robust self-triggered sampling method. The approach is compared with existing methods in terms of sampling conservativeness and closed-loop system performance. The proposed method aims at filling the gap between the event-triggered and the self-triggered sampling paradigms concerning robustness with respect to model uncertainties, and it generalizes most of the existing self-triggered samplers implemented up to now.' author: - 'U. Tiberi, K.H. Johansson, Fellow, IEEE [^1] [^2]' bibliography: - 'IEEEabrv.bib' - './MyBib.bib' nocite: '[@lunze10; @GARCIA-TAC-2014; @KALLE-CDC-2009; @TABUADA-TAC-2007; @wang11; @MAZO-AUTOMATICA-2010; @ANTA-TAC-2010]' title: '**On the robustness of self-triggered sampling of nonlinear control systems.**' --- [***Index terms—*** Event-triggered control, Self-triggered control, Nonlinear systems, Sampled-data systems, Robust control. ]{} Introduction {#sec:introduction} ============ To cope with common drawbacks raised by periodic sampling in modern control systems, such as network utilization in networked control systems [@HESPANHA-IEEE-2007] or processor utilization in multi-task programming [@BUTTAZZO-BOOK], two novel sampling methods, referred to as event-based and self-triggered sampling, have recently been introduced, [@ASTROM-CDC-2002]–[@TIBERI-NAHS-2013]. Roughly speaking, event-based sampling consists in monitoring the system output at all times and updating the control signal only when some event is detected, whereas self-triggered sampling consists in *predicting* the event occurrence based on a system model and on the current system output. It has been shown that both approaches usually lead to an efficient utilization of shared resources without deteriorating the closed-loop performance. Nevertheless, they exhibit profound differences.
Event-based methods take decisions upon the detection of an event and can thus be categorized as *reactive* methods; on the contrary, self-triggered methods are *proactive* as they provide the next event occurrence time in advance. A notable benefit of event-based methods is that they seldom require a model of the plant, as the event occurrences are often determined only from the output measurements, whereas in self-triggered methods an accurate system model is generally required. Clearly, if the model is not sufficiently accurate, the closed-loop performance under self-triggered sampling may deteriorate or, in some cases, the closed-loop system may even become unstable. To the best of our knowledge, the problem of self-triggered control robustness versus parameter uncertainty for nonlinear systems has been little investigated, and existing methods exhibit severe limitations [@DIGENNARO-IJC-2013],[@TIBERI-PHDTHESIS]. For instance, the approach proposed in [@DIGENNARO-IJC-2013] relies on assumptions that hold only for a very narrow class of systems, thus limiting the applicable cases. Nevertheless, if such assumptions are relaxed, then both the cited methods can only guarantee a safety property of the closed-loop system which is weaker than common stability properties such as asymptotic stability or ultimate boundedness. In contrast, our method requires milder assumptions compared to the cited works, which extends its applicability to a larger number of cases. Inspired by the Lebesgue sampling rule [@ASTROM-CDC-2002], our approach ensures uniform ultimate boundedness or, in some cases, even asymptotic stability. In this note, we address both the local and the global stability cases. Finally, the proposed approach is compared with existing methods in terms of conservativeness of the sampling intervals and closed-loop performance. Notation and preliminaries {#sec:notation} ========================== The set of natural numbers is denoted with $\N$.
The set of real numbers is denoted with $\R$, the set of positive real numbers with $\R^+$ and the set of nonnegative real numbers with $\R^+_0$, i.e. $\R_0^+=\R^+\cup \{0\}$. The notation $\|v\|$ is used to indicate the Euclidean norm of a vector $v\in \mathbb R^{n}$ and $\mathcal B_{r}$ indicates the closed ball centered at the origin with radius $r$, i.e. $\mathcal B_{r}=\{v:\|v\|\le r \}$. Given a set $\mathcal D$, we denote its power set with $2^{\mathcal D}$. Given a signal $s: \R^{+} \to \mathbb R^{n}$, $s_{k}$ denotes its realization at time $t=t_{k}$, i.e. $s_{k}:=s(t_{k})$. A function $h:\cal D_p \times \cal D_q \to \R^n$ is said to be *Lipschitz continuous over $\cal D_p \times \cal D_q$* if $\|h(p_1,q)-h(p_2,q)\| \le L_{h,p}\|p_1-p_2\|$ for some $L_{h,p}>0$ and for all $p_1,p_2 \in \cal D_p, q \in \cal D_q$, and $\|h(p,q_1)-h(p,q_2)\| \le L_{h,q}\|q_1-q_2\|$ for some $L_{h,q}>0$ and for all $q_1,q_2 \in \cal D_q, p \in \cal D_p$. The constants $L_{h,p}$ and $L_{h,q}$ are called the *Lipschitz constant of $h$ with respect to $p$* and the *Lipschitz constant of $h$ with respect to $q$*, respectively. A continuous function $\alpha:[0,a) \to [0,\infty)$, $a>0$, is said to belong to class $\cal K$ if it is strictly increasing and $\alpha(0)=0$. If, in addition, $a=\infty$ and $\alpha(r) \to +\infty$ for $r \to +\infty$, then $\alpha$ is said to be of class $\cal K_{\infty}$. A continuous function $\beta:[0,a)\times [0,\infty)\to [0,\infty)$ is said to belong to class $\K\L$ if, for each fixed $s$, the mapping $\beta(r,s)$ belongs to class $\K$ with respect to $r$ and, for each fixed $r$, the mapping $\beta(r,s)$ is decreasing with respect to $s$ and $\beta(r,s) \to 0$ as $s\to \infty$.
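The Lipschitz constants above can be estimated numerically over a compact domain. The following sketch samples random point pairs; the map ($\sin$) and the interval are illustrative choices, not objects from this note:

```python
import numpy as np

# Hypothetical numerical sketch: estimating a Lipschitz constant of a scalar
# map over a compact interval by sampling random point pairs, in the spirit
# of the constants L_{h,p} and L_{h,q} defined above.

def estimate_lipschitz(h, lo, hi, n=5000, seed=0):
    rng = np.random.default_rng(seed)
    p = rng.uniform(lo, hi, size=n)
    q = rng.uniform(lo, hi, size=n)
    ratios = np.abs(h(p) - h(q)) / np.abs(p - q)   # difference quotients
    return np.max(ratios[np.isfinite(ratios)])     # guard against p == q

# sin is globally Lipschitz with constant 1, so the estimate approaches 1 from below.
L_est = estimate_lipschitz(np.sin, -np.pi, np.pi)
print(L_est)
```

Such sampling only gives a lower estimate of the true constant; it is meant to build intuition, not to replace the analytic bounds used later in the note.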
Given a system $\dot \xi=f(t,\xi)$, $\xi \in \mathbb R^{n}$, $\xi(t_{0})=\xi_{0}$, $f:\mathbb{R}^{+} \times \mathcal{D} \rightarrow \mathbb{R}^{n}$, where $f$ is Lipschitz continuous with respect to $\xi$ and piecewise continuous with respect to $t$, and where $\mathcal{D} \subset \mathbb{R}^{n}$ is a domain that contains the origin, we say that the solutions are UUB if there exist three constants $a,b,T>0$ independent of $t_{0}$ such that for all $\|\xi_{0}\|\le a$ it holds $\|\xi(t)\| \le b$ for all $t \ge t_{0}+T$, and globally UUB (GUUB) if $\|\xi(t)\| \le b$ for all $t \ge t_{0}+T$ and for arbitrarily large $a$. The value of $b$ is referred to as the *ultimate bound*. System architecture =================== We consider the system architecture depicted in Fig. \[fig:SysArchitecture\]. The control system includes an uncertain plant subject to external disturbances $w$ and a self-triggered controller, i.e. a controller which computes both the new control signal and its update instant. ![The control system architecture.[]{data-label="fig:SysArchitecture"}](./SysArchitecture1_cropped) The plant’s dynamics are of the form $$\label{eqn:nominal plant} \dot \xi = f(\eta,\xi,u,w)\,,$$ where $f$ is Lipschitz continuous and where $\xi \in \mathcal D_{\xi} \subseteq \R^{n_{\xi}}$ is the state vector, $u\in \mathcal D_{u} \subseteq \R^{n_{u}}$ is the input vector, $\eta $ is a vector of (possibly time-varying) uncertain parameters in a compact set $\mathcal D_\eta \subset \R^{n_{\eta}}$ and $w \in \mathcal D_{w}\subseteq \R^{n_{w}}$ is a piecewise continuous, bounded external disturbance vector with bound $\|w\|\le \bar w$. We assume that there exists a Lipschitz continuous state feedback control law $\kappa:\mathcal D_{\xi}\to \mathcal D_{u}$ such that the closed-loop dynamics $$\label{eqn:closed-loop CT} \dot \xi = f(\eta,\xi,\kappa(\xi),w)$$ are asymptotically stable for $w=0$ and UUB for all $w\in \mathcal{D}_w\backslash \{0\}$.
Our goal is to determine a function $\Gamma:\R^{2n}\to \R$ and to predict, at each time $t=t_k$, the time instant $t_{k+1}$ defined as $$\label{eqn:STC def} t_{k+1}=\min\{t > t_k:\Gamma(x,x_k)=0\}\,,$$ such that the sampled-data system $$\label{eqn:sampled-data} \dot x = f(\eta,x,\kappa(x_{k}),w)\,, \quad t\in [t_{k},t_{k+1})\,,$$ where $x \in \mathcal{D}_{\xi}$, is UUB for all $w\in \mathcal{D}_w$ and such that $t_{k+1}-t_k\ge h_{\min}$ for some $h_{\min}>0$ and for all $k$. Although existing self-triggered samplers may still apply for stabilizing uncertain systems of the form , the performance of the closed-loop system may not be acceptable or the system may even become unstable, as we will discuss in the next section. Motivating example {#sec:motivating example} ================== Consider the rigid-body control example in [@ANTA-TAC-2012], whose dynamics satisfy $$\begin{aligned} \label{eqn:rigid-body} \dot \xi_{1}&= u_{1}\,,\nonumber\\ \dot \xi_{2}&= u_{2}\,,\\ \dot \xi_{3}&= \eta \xi_{1}\xi_{2}\nonumber\,,\end{aligned}$$ and let $\eta$ be an uncertain parameter. Let $\eta_n=1$ be the assumed value of the uncertain parameter for designing both the continuous controller and the self-triggered sampler. With this setting, a control law that globally stabilizes system  if $\eta=\eta_n$ is given by $u_{1}=-\xi_{1}\xi_{2}-2\xi_{2}\xi_{3}-\xi_{1}-\xi_{3}$ and $u_{2}=2\xi_{1}\xi_{2}\xi_{3}-3\xi_{3}^{2}-\xi_{2}$, and a self-triggered sampler implementation considers the sampling rule $\Gamma(x,x_k):=\|x_k-x\|^{2}-0.79^2\sigma^2\|x\|^{2}$ where $0<\sigma<1$, see [@ANTA-TAC-2012]. If for the real system it holds $\eta=\eta_n$, then the responses of the continuous-time, the event-triggered and the self-triggered implementations of the controller would be fairly similar, as shown in Figure \[fig:EvoEx\].
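A minimal simulation sketch of the event-triggered implementation of this example can be written as follows, assuming forward-Euler integration and illustrative values for $\sigma$, the step size, the horizon and the initial state (none of which are taken from the note or from [@ANTA-TAC-2012]):

```python
import numpy as np

# Hypothetical simulation sketch of the rigid-body example under the
# event-triggered rule Gamma(x, x_k) = ||x_k - x||^2 - (0.79*sigma)^2 ||x||^2.
# Integration scheme and numerical values are illustrative choices.

def f(x, u, eta):
    """Open-loop rigid-body dynamics (eqn:rigid-body)."""
    return np.array([u[0], u[1], eta * x[0] * x[1]])

def kappa(x):
    """Stabilizing feedback designed for the nominal value eta_n = 1."""
    x1, x2, x3 = x
    return np.array([-x1 * x2 - 2 * x2 * x3 - x1 - x3,
                     2 * x1 * x2 * x3 - 3 * x3 ** 2 - x2])

def simulate(eta, x0, sigma=0.5, dt=1e-3, T=10.0):
    x = np.array(x0, dtype=float)
    x_k = x.copy()              # state at the last sampling instant
    u = kappa(x_k)              # control held constant between events
    events = 0
    for _ in range(int(T / dt)):
        # event check: resample when the relative error threshold is crossed
        if np.sum((x_k - x) ** 2) > (0.79 * sigma) ** 2 * np.sum(x ** 2):
            x_k = x.copy()
            u = kappa(x_k)
            events += 1
        x = x + dt * f(x, u, eta)
    return x, events

# Nominal case eta = eta_n = 1: the sampled trajectory stays well behaved.
x_final, n_events = simulate(eta=1.0, x0=[0.5, -0.5, 0.5])
print(np.linalg.norm(x_final), n_events)
```

Re-running `simulate` with a mismatched value such as `eta=8.0` mimics the situation analyzed in this section; the sketch is only meant to illustrate the sampling logic, not to reproduce the figures.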
Nevertheless, assume now that for the real system dynamics it holds instead $\eta=\eta_r$, with $\eta_r=8$, and assume that the same controller and self-triggered sampler as in the case $\eta=\eta_n$ are used. As shown in Figure \[fig:EvoDelta\], the closed-loop system performance deteriorates under self-triggered sampling, although both in the event-triggered and in the continuous case we experience a satisfactory response. This is because in the event-triggered scheme the condition $\Gamma(x,x_k)=0$ is constantly evaluated based on a continuous monitoring of the state $x(t)$, and the times $t_{k+1}$ defined in  are then correctly determined. In the self-triggered implementation, the times $t_{k+1}$ may differ from the ones defined in  since the prediction of when $\Gamma(x,x_k)=0$ is based on an imperfect model. Note that although the continuous-time controller exhibits a certain degree of robustness, this is unfortunately not enough to ensure good performance of its self-triggered implementation: the self-triggered strategy itself must also be robust. We further wish to highlight that the event-based sampling rule implicitly defined by the function $\Gamma(x,x_k)$ in this example only represents a sufficient condition for closed-loop stability. This means that the self-triggered implementation based on an imperfect model may not fulfill such a condition at all times, and then the closed-loop system stability is also jeopardized. ![System response with $\eta=\eta_n, \eta_n=1$](./EvoEx) . \[fig:EvoEx\] ![System response with $\eta=\eta_r, \eta_r=8$](./EvoDelta "fig:") \[fig:TrigEx\] . \[fig:EvoDelta\] Unfortunately, the inclusion of parameter uncertainties in the framework proposed in [@ANTA-TAC-2012] does not appear to be straightforward, and leaves room for future research. Nevertheless, in this note we follow a different approach by proposing a method which applies to every robustly stabilizable nonlinear system and which ensures UUB of the sampled-data system trajectories.
Robust self-triggered sampling {#sec:robust self-triggered sampling} ============================== In this section we present the main result of this note. We first consider the local stabilizability case and then the global stabilizability case. Local analysis: exponentially stabilizable systems. --------------------------------------------------- The proposed method is developed starting from a self-triggered implementation of the Lebesgue sampling rule. We recall that Lebesgue sampling consists in updating the control law every time the triggering condition $\|x_{k}-x(t)\|\le \delta$, $\delta >0$, is violated, [@ASTROM-CDC-2002]. Since self-triggered sampling consists in predicting event occurrences, its design requires an upper bound on the evolution of $\|x_{k}-x(t)\|$, which is given in the next result. \[lem:upper-bound\] Let $M_{1}$ and $M_{2}$ be two positive constants such that the trajectories of  satisfy $ \|\xi(t)\|\le M_{1} \|\xi_{k}\|+M_{2}\bar w\,, $ for all $t\ge t_k$. Then, the function $g(t):=x_{k}-x(t)$ is upper-bounded by $$\label{eqn:upper-bound} \|g(t)\| \le (M_{1}\|x_{k}\|+M_{2}\bar w)(e^{L(t-t_{k})}-1)\,,$$ where $L:=\max_{\eta\in \mathcal{D}_{\eta}} L_{f,u}L_{k,x}$, for all $t\in[t_{k},t_{k+1})$. \ A self-triggered sampler is devised by predicting the next time at which the function $\|g(t)\|$ hits the triggering threshold $\delta$, as done in [@TIBERI-AUTO-2013] for linear systems. This is equivalent to defining $\Gamma(x,x_k)=\|g(t)\|-\delta$ and predicting the time instant $t_{k+1}$ for which $\Gamma(x,x_k)=0$ as per . Such a prediction is performed by exploiting the bound , as stated in the following result.
\[prop:Lebesgue sts\] Consider the same notation as in Lemma \[lem:upper-bound\] and let $\delta$ be an arbitrary positive constant such that the trajectories of the perturbed system $\dot \psi=f(\eta,\psi,\kappa(\psi),w)+ Lg(t)$, where $\psi \in \mathcal D_{\xi}$ and $\| g(t)\|\le \delta$, are contained in the region of attraction $\mathcal{R}_a \subseteq \mathcal {D}_\xi$. Then, the self-triggered sampler $$\label{eqn:Lebesgue sts} t_{k+1}=t_{k}+\frac{1}{L}\ln \bigg(1+\frac{\delta }{M_{1}\|x_k\|+M_{2}\bar w} \bigg)\,,$$ ensures UUB of the sampled-data system . Moreover, there exists a positive constant $h_{\min}$ such that $t_{k+1}-t_{k}>h_{\min}$ for all $k$. \ The self-triggered sampler  applies to every exponentially stable system of the form , and it is robust with respect to parameter uncertainty and external disturbances. As  suggests, the inter-sampling intervals increase as $\delta$ does but, on the other hand, the size of the ultimate bound also increases, since the perturbation due to the sampling exhibits larger amplitudes. This means that $\delta$ can be regarded as a tuning parameter that encodes the trade-off between inter-sampling intervals and ultimate-bound size. While the self-triggered sampler  presents only a single tuning parameter, the following result provides more flexibility, since it allows the tuning of a few more parameters. \[thm:main\] Consider the same notation and assumptions as in Proposition \[prop:Lebesgue sts\], and let $\nu_0,\nu_1,\nu_2$ and $\nu_3$ be arbitrary positive constants such that $$\label{eqn:tuning rule sim} \max_{\substack{r\ge 0}} (M_1r+M_2\bar w)L\left[\left(1+\frac{\nu_0}{\nu_2r+\nu_3} \right)^{\frac{L}{\nu_1}}-1\right]\le \delta\,.$$ Then, the self-triggered sampler $$\label{eqn:universal sts} t_{k+1}=t_{k}+\frac{1}{\nu_1}\ln \bigg(1+\frac{\nu_0 }{\nu_2\|x_k\|+\nu_3} \bigg)\,,$$ ensures UUB of the sampled-data system . Moreover, there exists a positive constant $h_{\min}$ such that $t_{k+1}-t_{k}>h_{\min}$ for all $k$.
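As a numerical sketch (with illustrative constants, not values from the note), the sampler of Proposition \[prop:Lebesgue sts\] can be seen as the exact inversion of the bound of Lemma \[lem:upper-bound\] at the threshold $\delta$:

```python
import math

# Numerical sketch with hypothetical constants: the self-triggered rule
# (eqn:Lebesgue sts) inverts the bound (eqn:upper-bound) at level delta.

def g_bound(t, t_k, x_k_norm, L, M1, M2, wbar):
    """Upper bound on ||g(t)|| = ||x_k - x(t)|| from (eqn:upper-bound)."""
    return (M1 * x_k_norm + M2 * wbar) * (math.exp(L * (t - t_k)) - 1.0)

def next_sample_time(t_k, x_k_norm, L, M1, M2, wbar, delta):
    """Self-triggered rule (eqn:Lebesgue sts)."""
    return t_k + (1.0 / L) * math.log(1.0 + delta / (M1 * x_k_norm + M2 * wbar))

L_, M1, M2, wbar, delta = 2.0, 1.5, 0.5, 0.1, 0.2   # illustrative constants
t_k, x_norm = 0.0, 1.0

t_next = next_sample_time(t_k, x_norm, L_, M1, M2, wbar, delta)
# By construction the bound hits exactly delta at the predicted instant,
# and larger ||x_k|| yields shorter inter-sampling intervals.
print(t_next, g_bound(t_next, t_k, x_norm, L_, M1, M2, wbar))
```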
\ In addition to the size of the ultimate bound, the self-triggered sampler  also makes it possible to regulate the minimum and the maximum inter-sampling intervals. For instance, let us define $h_k:=t_{k+1}-t_k$ and $$\begin{aligned} h_{\min}&:=\frac{1}{\nu_1}\ln \bigg(1+\frac{\nu_0 }{\nu_2(M_1\|x_0\|+M_2\bar w)+\nu_3} \bigg) \,, \label{eqn:h_min}\\ h_{\textrm{mid}}&:=\frac{1}{\nu_1}\ln \bigg(1+\frac{\nu_0 }{\nu_2b+\nu_3} \bigg) \,,\label{eqn:h_mid}\\ h_{\max}&:=\frac{1}{\nu_1}\ln \bigg(1+\frac{\nu_0 }{\nu_3} \bigg) \,.\label{eqn:h_max}\end{aligned}$$ For all $k>0$ it holds $h_k\in [h_{\min},h_{\max}]$, while for a sufficiently large $k'$ it holds $h_k\in [h_{\textrm{mid}},h_{\max}]$ for all $k>k'$. This means that for $k>k'$ the closed-loop system can be viewed as a periodically sampled system with period $h^*$ and jitter $\tilde h_k$ such that $h^*\pm \tilde h_k \in [h_{\textrm{mid}},h_{\max}]$. If the system  with $w=0$ is exponentially stable for any time-varying $h_k \in [h_{\textrm{mid}},h_{\max}]$, then the self-triggered sampler  ensures exponential stability and not only UUB. This fact suggests a tuning method: for instance, it is enough to first compute the values of $h_{\textrm{mid}}$ and $h_{\max}$ that ensure exponential stability of , and then to tune the coefficients $\nu_i$’s according to –. As an example, one can consider the methods in [@SEURET-CDC-2009], [@BRIAT-SCL-2012] to compute $h_{\textrm{mid}}$ and $h_{\max}$ in the linear case, or [@HU-AUTO-2000] in the nonlinear case. Furthermore, the inter-sampling intervals asymptotically converge to a constant sampling period $h^*=h_{\max}$. Local analysis: asymptotically stabilizable systems. ---------------------------------------------------- In the previous section we presented a self-triggered formula which applies to *every* exponentially stabilizable system. In this section the results are extended to asymptotically stabilizable systems.
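The interval bounds $h_{\min}$, $h_{\textrm{mid}}$ and $h_{\max}$ defined above can be sketched numerically; since the rule is decreasing in $\|x_k\|$, the largest reachable state norm gives the shortest interval and $\|x_k\|=0$ the longest. All numbers below are hypothetical:

```python
import math

# Illustrative sketch of the inter-sampling bounds (eqn:h_min)-(eqn:h_max)
# for the sampler (eqn:universal sts). Numerical values are hypothetical.

def interval(nu0, nu1, nu2, nu3, r):
    """Inter-sampling time of (eqn:universal sts) as a function of r = ||x_k||."""
    return (1.0 / nu1) * math.log(1.0 + nu0 / (nu2 * r + nu3))

nu0, nu1, nu2, nu3 = 0.5, 2.0, 1.0, 0.1
M1, M2, wbar = 1.5, 0.5, 1.0      # trajectory bound constants (illustrative)
x0_norm, b = 2.0, 0.8             # initial state norm and ultimate bound

h_min = interval(nu0, nu1, nu2, nu3, M1 * x0_norm + M2 * wbar)  # worst case early on
h_mid = interval(nu0, nu1, nu2, nu3, b)   # once trajectories reach the ultimate bound
h_max = interval(nu0, nu1, nu2, nu3, 0.0) # as the state approaches the origin
print(h_min, h_mid, h_max)
```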
Unfortunately, the nice property of  that allows its utilization with every exponentially stabilizable system has no counterpart for asymptotically stabilizable systems. In fact, a key ingredient to obtain the self-triggered formula  relies on the bound $\|\xi(t)\|\le M_{1}\|\xi_{k}\|+M_{2}\bar w$ which applies to *every* exponentially stable system of the form . For asymptotically stabilizable nonlinear systems, such a bound is replaced by $\|\xi(t)\| \le \beta(\|\xi_k\|,0)+\gamma(\bar w)$, where $\beta$ is a class-$\mathcal{KL}$ function and $\gamma$ is an appropriate class-$\mathcal K$ function which depends on the system under examination. \[thm:asympt\] Assume that the system  is ISS with respect to $w$. Let $\delta$ be any positive constant such that the trajectories of the perturbed system $\dot \psi=f(\eta,\psi,\kappa(\psi),w)+ Lg(t)$, where $\psi \in \mathcal D_{\xi}$ and $\| g(t)\|\le \delta$, satisfy $\|\psi(t)\| \le \beta(\|\psi_k\|,t)+\gamma_1(\bar w)+\gamma_2(\delta)$ for some class-$\mathcal{KL}$ function $\beta$ and some class-$\mathcal K$ functions $\gamma_1, \gamma_2$, and assume that they are contained in the region of attraction $\mathcal{R}_a \subseteq \mathcal {D}_\xi$. Finally, let $\nu_0,\nu_1$ and $\nu_3$ be arbitrary positive constants and let $\nu_2:\R_0^+\to\R_0^+$ be any function such that $$\label{eqn:tuning rule sim asympt} \max_{\substack{r\ge 0}} (\beta(r,0)+\gamma_1(\bar w))L\left[\left(1+\frac{\nu_0}{\nu_2(r)+\nu_3} \right)^{\frac{L}{\nu_1}}-1\right]\le \delta\,.$$ Then, the self-triggered sampler $$\label{eqn:sts nonlinear} t_{k+1}=t_{k}+\frac{1}{\nu_1}\ln \bigg(1+\frac{\nu_0 }{\nu_2(\|x_k\|)+\nu_3} \bigg)\,,$$ ensures UUB of the sampled-data system . Moreover, there exists a positive constant $h_{\min}$ such that $t_{k+1}-t_{k}>h_{\min}$ for all $k$. \ The design of  does not require exact knowledge of the function $\beta$, but only of its behavior with respect to its first argument.
Since it is well known that for asymptotically stable systems there exists a Lyapunov function $V(\xi)$ such that $\alpha_1(\|\xi\|)\le V(\xi)\le\alpha_2(\|\xi\|)$ for some class-$\mathcal K$ functions $\alpha_1$ and $\alpha_2$, it is enough to set $\nu_2(r)=\alpha_1^{-1}(V(r))$. A notable case is when the closed-loop system admits a quadratic Lyapunov function, for which it holds $\alpha_1^{-1}(V(r))=cr$ for some positive $c$. In this case, the self-triggered sampler  reduces to . \[cor:asympt\] Consider the same assumptions as in Theorem   and assume that the closed-loop system with continuous control  admits a quadratic Lyapunov function. Then, the self-triggered sampler  ensures UUB of the sampled-data system  and there exists a positive constant $h_{\min}$ such that $t_{k+1}-t_{k}>h_{\min}$ for all $k$. \ The tuning rules presented in the previous section also apply, *mutatis mutandis*, to the self-triggered sampler  as long as the term $\nu_2\|x_k\|$ is replaced with $\nu_2(\|x_k\|)$ and the bound $\|\xi(t)\|\le M_1\|\xi_k\|+M_2\bar w$ is replaced with $\|\xi(t)\|\le \beta(\|\xi_k\|,0)+\gamma_1(\bar w)$. Global analysis. ---------------- The presented self-triggered sampler has been developed by modeling the deviation of the piecewise control from the continuous control as an external perturbation $g(t)$ on the system state. The duty of the sampling strategy is to keep such a perturbation bounded. Then, by exploiting a *bounded input - bounded state* property of the closed-loop system with continuous control, UUB of the sampled-data system is proved. The perturbation due to the proposed sampling rule is given by the left-hand side of  or . At this point, one may draw the conclusion that whenever the bounded input - bounded state property of the system  holds globally, then the self-triggered samplers – apply for any setting of the parameters $\nu_i$. Unfortunately, this is only partly true.
By taking a closer look at , we observe that its right-hand side is bounded if, and only if, the Lipschitz constant $L$ holds globally. In fact, there are many cases (e.g. polynomial systems) in which the system is locally, but not globally, Lipschitz continuous. Hence, the method would only apply to bounded regions, and UUB would be only semi-global. Moreover, the selection of $\delta$ shall ensure boundedness of the trajectories in the region in which the Lipschitz constant $L$ is computed. To tackle these issues, we recall that a local Lipschitz constant $L_f$ of a function $f$ depends on the domain in which it is computed. Let now $\mathcal F$ be the set of all the Lipschitz functions and let $Lip:2^{\R^n} \times \mathcal F \to \R_0^+$ be the operator that associates local Lipschitz constants $L_f$ to Lipschitz functions $f \in \mathcal F$ over subsets $\mathcal D\subseteq \R^n$. By observing that the proposed self-triggered sampler enforces the sets $\mathcal{B}_{d_k}$ to be invariant for all $t\in [t_k,t_{k+1})$, where $d_k=\beta(\|x_k\|,0)+\gamma_1(\bar w)+\gamma_2(\delta)$, it is not difficult to argue that there exists a function $\hat L:\R_0^+ \to \R_0^+$ such that $Lip(\mathcal{B}_{d_k},f) \le \hat L(\|x_k\|)$ for all $t\in [t_k,t_{k+1})$ and for all $k$, as stated in the next result. \[lem:1\] Let $\delta$ be any positive constant such that the trajectories of the perturbed system $\dot \psi=f(\eta,\psi,\kappa(\psi),w)+ Lg(t)$ with $\| g(t)\|\le \delta$ are UUB in any subset of $\R^{n_{\xi}}$. Let $\mathcal F$ be the set of all the Lipschitz continuous functions and let $Lip:2^{\R^n} \times \mathcal F \to \R_0^+$ be the operator that associates local Lipschitz constants $L_f$ to Lipschitz continuous functions $f \in \mathcal F$ over subsets $\mathcal D\subseteq \R^n$.
Then, by using the self-triggered sampler , there exists a function $\hat L:\R_0^+ \to \R_0^+$ such that $Lip(\mathcal{B}_{d_k},f) \le \hat L(\|x_k\|)$ for all $t\in [t_k,t_{k+1})$ and for all $k$. \ The next result represents a generalization of the self-triggered samplers presented in this note. \[thm:global\] Consider the same notation as in Theorem \[thm:asympt\] and in Lemma \[lem:1\] and assume that the closed-loop system with continuous control  is GUUB. Let $\nu_i:\R_0^+\to\R_0^+$, $i=0,\dots,3$, be any functions such that $$\label{eqn:tuning rule sim global} \max_{\substack{r\ge 0}} (\beta(r,0)+\gamma_1(\bar w))\hat L(r)\left[\left(1+\frac{\nu_0(r)}{\nu_2(r)+\nu_3(r)} \right)^{\frac{\hat L(r)}{\nu_1(r)}}-1\right]\le \delta\,.$$ Then, the self-triggered sampler $$\label{eqn:sts global} t_{k+1}=t_{k}+\frac{1}{\nu_1(\|x_k\|)}\ln \bigg(1+\frac{\nu_0(\|x_k\|) }{\nu_2(\|x_k\|)+\nu_3(\|x_k\|)} \bigg)\,,$$ ensures GUUB of the sampled-data closed-loop system. Moreover, there exists a positive constant $h_{\min}$ such that $t_{k+1}-t_{k}>h_{\min}$ for all $k$. \ Note that the functions $\nu_i$ are not required to have any particular form. In fact, their duty is simply to bound the term on the left-hand side of . In the next section we revisit the example of Section \[sec:motivating example\] with our method. Motivating example revisited {#sec:simulations} ============================ ![System response with $w=0$.[]{data-label="fig:EvoUBA"}](./EvoUBA) ![Inter-sampling times with $w=0$.[]{data-label="fig:TrigUBA"}](./TrigUBA) ![System response with $w=0.6$.[]{data-label="fig:EvoUBAW"}](./EvoUBAW) ![Inter-sampling times with $w=0.6$.[]{data-label="fig:TrigUBAW"}](./TrigUBAW) In this section we apply our method to the rigid-body control example described in Section \[sec:motivating example\]. We also compare our approach with existing methods in terms of conservativeness of the inter-sampling intervals and closed-loop performance.
The conservativeness of the inter-sampling times is measured in terms of average sampling time, while the closed-loop performance is evaluated through the average value $J_\textrm{avg}$ of a quadratic performance index $J$ given by $$J=\int_0^{T}\|x(\sigma)\|^2+\|u(\sigma)\|^2\,d\sigma\,.$$ We consider 25 initial conditions equally spaced on a ball of radius one and a simulation time $T= 15$ s. Using SOSTOOLS [@SOSTOOLS] we get a local parameter-dependent Lyapunov function that satisfies $3.2107\cdot 10^{-12}\|\xi\|^2 \le V(\eta,\xi) \le 3.6862 \cdot 10^{-12}\|\xi\|^2, \dot V \le -2.1273\cdot 10^{-13}\|\xi\|^4, \|\partial V/\partial \xi\| \le 2.34\cdot 10^{-13}\|\xi\|$ for all $\eta \in [1,8.2]$ on a ball of radius 5. We further get $L=61.1945$ over such a region. By setting $\delta = 2.8$ we get $b=3.9633$ as ultimate bound, and by considering $\mathcal B_1$ as initial condition set, we get $\|\psi\|_{\mathcal L_{\infty}}\le 4.7982$. Within this setting, we tune the proposed self-triggered sampler with $\nu_0(r) = 0.15\delta, \nu_1(r)=L, \nu_2(r)=10r$ and $\nu_3=10^{-6}$, for which we get $h_{\min}=0.142$ ms, $h_{\textrm{mid}}=0.180$ ms and $h_{\max}=211.6$ ms. The simulation results are reported in Table \[tab:tab1\], while the system response for a particular initial condition is depicted in Figures \[fig:EvoUBA\]–\[fig:TrigUBA\]. The self-triggered sampler [@ANTA-TAC-2012] provides the largest inter-sampling times, but also the worst performance. The other methods exhibit performance comparable to the continuous-time case, meaning that the parameter uncertainty is well handled. However, compared to the other methods providing the same performance, our approach provides the largest average inter-sampling intervals. Next, we evaluate the robustness with respect to external disturbances.
For this purpose, we modify the dynamics of  by setting $\dot \xi_2 = u_2+w$, where $w=0.6$ is an external disturbance acting on the system for $t \in [7.4,8.92]$ s. Within this setting, we set $\nu_0(r) = 0.05\delta, \nu_1(r)=L, \nu_2(r)=3.3r$ and $\nu_3=7.86$, for which we get $h_{\min}=0.009$ ms, $h_{\textrm{mid}}=0.048$ ms and $h_{\max}=0.288$ ms. As shown in Figure \[fig:EvoUBAW\] and as reported in Table \[tab:tab2\], the responses of the continuous-time, event-triggered and proposed self-triggered implementations of the controller are fairly similar, whereas the method in [@ANTA-TAC-2012] leads to a deterioration of the performance index $J_{avg}$. A good trade-off between performance and average inter-sampling time is instead provided by [@ANTA-TAC-2010]. Nevertheless, there are no rigorous proofs of its robustness. Just for the sake of comparison, we tuned our self-triggered sampler by assuming $w=0$ even though in reality $w = 0.6$. With this setting, there is no proof of robustness for our method either. Nevertheless, we obtained $J_{avg}=3.9413$ with an average sampling interval of $3.4$ ms, which outperforms [@ANTA-TAC-2010]. Notice that, comparing our method in the perturbed and unperturbed cases, the former exhibits a worsening of the average sampling period. This is because, in the tuning of the proposed self-triggered sampler when $w \neq 0$, a worst-case disturbance acting at all times has been considered. A method to reduce such conservativeness resorts to the utilization of disturbance observers, as described in [@TIBERI-NAHS-2013]. Nevertheless, differently from [@TIBERI-NAHS-2013], here we are dealing with an uncertain sampled-data nonlinear system, and to the best of our knowledge there are no results yet on observers for this class of systems. Finally, for $w=0.6$, our self-triggered sampler provides an ultimate bound $b=3.9624$, whereas the other methods provide an ultimate bound $b=2.5682$.
The larger ultimate bound in our case is due to the perturbation induced by the sampling, which adds to $w$, while in the other cases the ultimate bound only depends on the disturbance upper bound $\bar w$. Conclusions =========== In this note we addressed the problem of robustness with respect to model uncertainties of self-triggered sampling for nonlinear systems. We have shown that even if a continuous-time controller is robust, this is not sufficient to use an arbitrary self-triggered sampling scheme: the employed sampling scheme shall also be robust. In case of perfect model knowledge, the self-triggered sampler in [@ANTA-TAC-2012] outperforms our method, whereas in case of model uncertainties, our method appears to be more robust. A notable characteristic of the self-triggered sampler  lies in its similarity to existing methods, [@TIBERI-AUTO-2013],[@LEMMON-TAC-09],[@MAZO-AUTOMATICA-2010],[@TOLIC-MED-2012]. This means that the proposed self-triggered formula can be regarded as a generalization of existing self-triggered samplers, which in turn can be used to tune  by matching the coefficients $\nu_i$. For example, in the case of Lebesgue sampling , a tuning rule is given by $\nu_0= \delta, \nu_1=L,\nu_2=M_1$ and $\nu_3= M_{2}\bar w$. The tuning of the parameter $\nu_3$ can also be performed through the utilization of a disturbance observer, as motivated and explained in [@TIBERI-NAHS-2013]. During the process of coefficient matching, any parameter uncertainties can be easily included. Furthermore, the computational complexity of the proposed method is very low, since the next sampling instant can be entirely determined by evaluating a simple function and no numerical methods are required. Finally, we wish to highlight that in the system architecture definition we assumed that the self-triggered sampler is implemented on the controller side.
However, our method still applies even when the self-triggered sampler is implemented on the sensor side, as long as the controller updates are performed in correspondence with the output measurement transmission times. Any time delays can be easily accommodated by following the same lines as in [@TIBERI-AUTO-2013]. [Proof of Lemma \[lem:upper-bound\].]{} First of all, note that it holds $ \dot g(t)=-\dot x(t)$ and $g(t_{k})=0$ at each sampling instant. For $t\in [t_{k},t_{k+1})$, it holds $$\begin{aligned} g(t)=&\int_{t_{k}}^{t}-f(\eta(s),x(s),\kappa(x_{k}),d(s))\,ds \\ =&\int_{t_{k}}^{t}f(\eta(s),x(s),\kappa(x(s)),d(s))\nonumber \\ &-f(\eta(s),x(s),\kappa(x_{k}),d(s))\,ds \nonumber \\ &-\int_{t_{k}}^{t}f(\eta(s),x(s),\kappa(x(s)),d(s))\,ds\,.\end{aligned}$$ By taking the norm on both sides, and by recalling that exponential stability of  with $w=0$ ensures the existence of constants $M_1,M_2$ and $\lambda$ such that $\|\xi(t)\|\le M_1\|\xi_k\|e^{-\lambda(t-t_k)}+M_2\bar w$ for all $t\ge t_k$, it follows that $$\begin{aligned} \|g(t)\| \le& \int_{t_{k}}^{t}L_{f,u}L_{\kappa,x}\|x_{k}-x(s)\|\,ds \nonumber \\ &+\left\| \int_{t_{k}}^{t}f(\eta(s),x(s),\kappa(x(s)),d(s))\,ds \right\|\, \\ \le& \int_{t_{k}}^{t}L_{f,u}L_{\kappa,x}\|g(s)\|\,ds+M_{1}\|x_{k}\|+M_{2}\bar w\,\nonumber \\ \le & \int_{t_{k}}^{t}L\|g(s)\|\,ds+M_{1}\|x_{k}\|+M_{2}\bar w\,.\end{aligned}$$ By applying the Gronwall-Bellman inequality, it follows . [Proof of Proposition \[prop:Lebesgue sts\].]{} Converse Lyapunov theorems ensure the existence of a parameter-dependent Lyapunov function $V(\eta,x)$[^3] that satisfies $$\begin{aligned} c_1\|\xi\|^2\le V(\eta,\xi) &\le c_2\|\xi\|^2\,, \cr \frac{\partial V(\eta,\xi)}{\partial \xi}f(\eta,\xi,\kappa(\xi),d)&\le-c_3\|\xi\|^2+c_5\bar w\,, \cr \bigg\|\frac{\partial V(\eta,\xi)}{\partial \xi}\bigg\|&\le c_4\|\xi\| \,, \cr \end{aligned} \label{eqn:LyapunovCandidateNonlinear}$$ where $c_1,c_2,c_3,c_4$ and $c_5$ are positive constants.
The time derivative of $V$ along the trajectories of the sampled-data system , for $t \in (t_{k},t_{k+1})$, satisfies $$\begin{aligned} \dot V &\le -c_3\|x\|^2+c_5\bar w+c_4\|x\|L\|x_k-x\| \nonumber \\ &\le -c_3\|x\|^2+c_5\bar w+c_4\|x\|L\hat g(x_k,t)\,,\end{aligned}$$ where $\hat g(x_k,t)=(M_{1}\|x_{k}\|+M_{2}\bar w)(e^{L(t-t_{k})}-1)$. By setting $\hat g(x_k,t)=\delta$ it follows . Moreover, since the sampling rule  enforces $\|g(t)\|\le \delta$ for $t\in(t_k,t_{k+1})$, it follows that $\|x(t)\|\le M_1\|x_k\|+M_2\bar w+M_3\delta:=M(k)$. Hence, the Lyapunov derivative is further upper-bounded by $\dot V\le -c_3\|x\|^2+c_5\bar w+c_4M(k)L\delta \,.$ By observing that at each sampling instant $t=t_k$ it holds $\dot V \le -c_3\|x_k\|^2+c_5\bar w$, it follows that the trajectories are upper-bounded for all $t\in [t_k,t_{k+1})$. Finally, since the sampling rule enforces $M(k+1)\le M(k)$ for $\|x\| > b$, or $M(k) \le b$ for $\|x\| \le b$, the sampled-data system  is UUB. Moreover, the inter-sampling intervals are lower-bounded by $$h_{\min}:=\frac{1}{L}\ln\left(1+\frac{\delta}{M_1 c+M_2\bar w}\right)\,,$$ where $c = \max\{M(0),b\}$. Next, we have to prove that the sampling rule  guarantees that $x(t)\in \mathcal{R}_a$ for all $t\ge t_0$. Since by assumption the trajectories $\psi$ are confined in the region of attraction $\mathcal{R}_a$ for all $t \ge t_0$, since  enforces $\|g(t)\|\le \delta$ and since $\lim_{t\to t_k^+} x(t)= \lim_{t\to t_k^-} x(t)$ for all $k$, it follows that $x(t)\in \mathcal{R}_a$ for all $t\ge t_0$. [Proof of Theorem \[thm:main\].]{} The first part of the proof follows the same lines as the proof of Proposition \[prop:Lebesgue sts\]. This means that the Lyapunov function $V(x)$ satisfies $\dot V \le -c_3\|x\|^2+c_5\bar w+c_4\|x\|L\hat g(x_k,t) \,,$ where $\hat g(x_k,t)=(M_{1}\|x_{k}\|+M_{2}\bar w)(e^{L(t-t_{k})}-1)$.
By using the sampling rule , it follows that $$\begin{aligned} \dot V\le& -c_3\|x\|^2+c_5\bar w+c_4\|x\|L \nonumber \\ &\times (M_1\|x_k\|+M_2\bar w)\left[\left(1+\frac{\nu_0}{\nu_2\|x_k\|+\nu_3} \right)^{\frac{L}{\nu_1}}-1\right] \nonumber \\ \le& -c_3\|x\|^2+c_5\bar w+c_4\|x\|L\bar \delta \,,\end{aligned}$$ for all $t\in (t_k,t_{k+1})$, where $$\bar \delta :=\max_{r \in \R^+_0}(M_1r+M_2\bar w)\left[\left(1+\frac{\nu_0}{\nu_2r+\nu_3} \right)^{\frac{L}{\nu_1}}-1\right] \,.$$ Since $\bar \delta$ is bounded, UUB follows for all $t\in (t_k,t_{k+1})$. Moreover, at the sampling instants $t=t_k$ it holds $\dot V \le -c_3\|x\|^2+c_5\bar w$ and thus UUB holds in every interval of the form $[t_k,t_{k+1})$. Finally, continuity of the solution of the sampled-data system  ensures UUB for all $[t_k,t_{k+1})$ and for all $k$. The existence of a lower bound on the inter-sampling intervals can be proved by using the same arguments as in the proof of Proposition \[prop:Lebesgue sts\]. [Proof of Theorem \[thm:asympt\].]{} By observing that in the case of asymptotic stability the bound $\|\xi(t)\|\le M_{1}\|\xi_{k}\|+M_{2}\bar w$ becomes $\|\xi(t)\|\le \beta(\|\xi_k\|,0)+\gamma_1(\bar w)$, the upper bound  becomes $\|g(t)\|\le (\beta(\|x_{k}\|,0)+\gamma_1(\bar w))(e^{L(t-t_{k})}-1)\,.$ By using such a bound, the proof follows analogously to the proof of Theorem \[thm:main\], where the terms $M_1\|\xi_k\|$ and $M_2\bar w$ are replaced with $\beta(\|\xi_k\|,0)+\gamma_1(\bar w)$, and where the comparison functions in  are replaced with appropriate class-$\mathcal K$ functions [@KHALIL]. [^1]: K.H. Johansson is with ACCESS Linneaus Center, KTH Royal Institute of Technology, Stockholm, Sweden. Email: kallej@kth.se [^2]: U. Tiberi is with Volvo Group Trucks Technology, Göteborg, Sweden. Email: ubaldo.tiberi@volvo.com [^3]: In case of time-varying uncertainty, the Lyapunov function $V(\eta,x)$ shall be replaced with a parameter-independent Lyapunov function $V(x)$, see [@AMATO-BOOK].
--- abstract: 'In this article we want to review the state of research on the bullwhip effect in supply chains with stochastic lead times and give a contribution to quantifying the bullwhip effect. We analyze the models quantifying the bullwhip effect in supply chains with stochastic lead times and find advantages and disadvantages of their approaches to the bullwhip problem. Using real data we confirm that real lead times are stochastic and can be modeled by a sequence of independent identically distributed random variables. Moreover we modify a model where stochastic lead times and lead time demand forecasting are considered and give an analytical expression for the bullwhip effect measure which indicates that the distribution of a lead time and the delay parameter of the lead time demand prediction are the main factors of the bullwhip phenomenon. Moreover we analyze a recent paper of Michna and Nielsen [@mi:ni:13] adding simulation results.' author: - 'Zbigniew Michna[^1]' - Peter Nielsen - Izabela Ewa Nielsen title: '****' --- Introduction ============ Supply chains are networks of firms (supply chain members) which act in order to deliver a product to the end consumer. Supply chain members are concerned with optimizing their own objectives and this results in a poor performance of the supply chain. In other words, locally optimal policies of members do not result in a global optimum of the chain, and they yield the tendency of replenishment orders to increase in variability as one moves upstream in a supply chain. This effect was first recognized by Forrester [@fo:58] in the middle of the twentieth century and the term bullwhip effect was coined by Procter & Gamble management. The bullwhip effect is considered harmful because of its consequences which are (see e.g. Buchmeister et al.
[@bu:pa:pa:po:08]): excessive inventory investment, poor customer service levels, lost revenue, reduced productivity, more difficult decision-making, sub-optimal transportation, sub-optimal production etc. This makes it critical to find the root causes of the bullwhip effect and to quantify the increase in order variability at each stage of the supply chain. In the current state of research several main causes of the bullwhip effect are considered (see e.g. Lee et al. [@le:pa:wh:97a] and [@le:pa:wh:97b]): demand forecasting, non-zero lead time, supply shortage, order batching, price fluctuation and lead time forecasting (see Michna and Nielsen [@mi:ni:13]). To decrease the variance amplification in a supply chain (i.e. to reduce the bullwhip effect) we need to identify all factors causing the bullwhip effect and to quantify their impact on the effect. Many researchers have assumed a deterministic lead time and studied the influence of different methods of demand forecasting on the bullwhip effect, such as simple moving average, exponential smoothing, and minimum-mean-squared-error forecasts, when demands are independent identically distributed or constitute integrated moving-average, autoregressive or autoregressive moving-average processes (see Graves [@gr:99], Lee et al. [@le:so:ta:00], Chen et al. [@ch:dr:ry:si:00a] and [@ch:dr:ry:si:00b], Alwan et al. [@al:li:ya:03], Zhang [@zh:04] and Duc et al. [@du:lu:ki:08]). Moreover they quantify the impact of a deterministic lead time on the bullwhip effect, and it follows from their work that the lead time is one of the major factors influencing the size of the bullwhip effect in a given supply chain. Stochastic lead times were intensively investigated in inventory systems, see Bagchi et al. [@ba:ha:ch:86], Hariharan and Zipkin [@ha:zi:95], Mohebbi and Posner [@mo:po:98], Sarker and Zangwill [@sa:za:91], Song [@so:94a] and [@so:94b], Song and Zipkin [@so:zi:93] and [@so:zi:96] and Zipkin [@zi:86].
Most of these works consider the so-called exogenous lead times, that is, they do not depend on the system, e.g. the lead times are independent of the orders or of the capacity utilization of a supplier. Moreover these articles studied how the variable lead times affect the control parameter, the inventory level or the costs. One can also investigate the so-called endogenous lead times which depend on the system. This is analyzed in So and Zheng [@so:zh:03], showing by simulation the impact of endogenous lead times on the amplification of the order variance. Recently the impact of stochastic lead times on the bullwhip effect has been intensively investigated. The main aim of this article is to review papers devoted to stochastic lead times in supply chains in the context of the bullwhip effect, especially those which quantify the effect. Moreover we modify a model where stochastic lead times and lead time demand forecasting are considered. In this model we find an analytical expression for the bullwhip effect measure which indicates that the distribution of a lead time (the probability of the longest lead time and its expectation and variance) and the delay parameter of the lead time demand prediction are the main factors of the bullwhip phenomenon. In Tab. \[tabpapers\] we collect all the main articles in which models on the bullwhip effect with stochastic lead times are provided (except the famous works of Chen et al. [@ch:dr:ry:si:00a] and [@ch:dr:ry:si:00b] where a deterministic lead time is considered; some of them analyze the effect using simulation).

  Article                                Demands            Lead times      Forecasting
  -------------------------------------- ------------------ --------------- ----------------------------------------------------
  Chen et al. [@ch:dr:ry:si:00a]         AR(1)              deterministic   moving average of demands
  Chen et al. [@ch:dr:ry:si:00b]         AR(1)              deterministic   expon. smoothing of demands
  Chaharsooghi and Heydari [@ch:he:10]   deterministic      i.i.d.          –
  Chatfield et al. [@ch:ki:ha:ha:04]     AR(1)              i.i.d.          moving average of lead time demands
  Kim et al. [@ki:ch:ha:ha:06]           AR(1)              i.i.d.          moving average of lead time demands
  Duc et al. [@du:lu:ki:08]              AR(1), ARMA(1,1)   i.i.d.          the minimum-mean-squared-error forecast of demands
  Fioriolli et al. [@fi:fo:08]           AR(1)              i.i.d.          moving average of demands
  Michna and Nielsen [@mi:ni:13]         i.i.d.             i.i.d.          moving average of demands and lead times
  Reiner and Fichtinger [@re:fi:09]      dependent          i.i.d.          moving average of demands and lead times
  So and Zheng [@so:zh:03]               AR(1)              dependent       the minimum-mean-squared-error forecast of demands

  : Articles on the impact of lead time on the bullwhip effect[]{data-label="tabpapers"}

The remainder of the paper is structured as follows. In the next section a discussion of the bullwhip effect is presented along with its common definition. The section after that presents a brief study of real lead times in a supply chain, documenting their nature. The following section analyzes the current main models of supply chains with stochastic lead times, expanding and modifying some of the results. Finally conclusions and future research opportunities are presented. Supply chains and the bullwhip effect ===================================== In recent studies a supply chain is considered as a system of organizations, people, activities, information and resources involved in moving a product or service from suppliers to customers. More precisely a supply chain consists of customers, retailers, warehouses, distribution centers, manufacturers, plants, raw material suppliers etc. They are members or stages (echelons) of a given supply chain. A supply chain has (or is assumed to have) a linear order which means that at the bottom there are customers, above customers there is a retailer, above the retailer there is e.g. a manufacturer and so on. The linear order is determined by the flow of products which stream down from the supplier through the manufacturer, warehouse and retailer to the customers.
The financial and information flows can accompany the flow of products. The simplest supply chain can consist of customers (customers are not regarded as a stage), a retailer and a manufacturer (a supplier). At every stage (except customers) a member of a supply chain possesses a storehouse and uses a certain stock policy (a replenishment policy) in its inventory control to fulfill its customer (a member of the supply chain which is right below) orders in a timely manner. Commonly used replenishment policies are: the periodic review, the replenishment interval, the order-up-to level inventory policy (OUT policy), the $(s,S)$ policy, the continuous review and the reorder point (see e.g. Zipkin [@zi:00]). A member of a supply chain observes demands from the stage below and lead times from the stage above. Based on the previous demands and previous lead times and using a certain stock policy, each member of a chain places an order to its supplier. Thus at every stage one can observe demands from the stage below and replenishment orders sent to the stage above. The phenomenon of the variance amplification in replenishment orders as one moves up in a supply chain is called the bullwhip effect (see Disney and Towill [@di:to:03] and Geary et al. [@ge:di:to:06] for the definition and a historical review). Manson et al. [@mu:hu:ro:03] assert: “When each member of a group tries to maximize his or her benefit without regard to the impact on other members of the group, the overall effectiveness may suffer”. The bullwhip effect is the key example of a supply chain inefficiency. The main measure of the bullwhip effect is the ratio of variances, that is, if $q$ is a random variable describing demands (orders) of a member of the supply chain to a member above and $D$ is a random variable responsible for demands of the member below (e.g.
$q$ describes orders of a retailer to a manufacturer (supplier) and $D$ shows customer demands to the retailer) then the measure of performance of the bullwhip effect is the following $$BM=\frac{{{\bf Var}}(\mbox{orders})/{{\rm I\hspace{-0.8mm}E}}(\mbox{orders})}{{{\bf Var}}(\mbox{demands})/{{\rm I\hspace{-0.8mm}E}}(\mbox{demands})}=\frac{{{\bf Var}}q/{{\rm I\hspace{-0.8mm}E}}q}{{{\bf Var}}D/{{\rm I\hspace{-0.8mm}E}}D}\,.$$ In most models ${{\rm I\hspace{-0.8mm}E}}D= {{\rm I\hspace{-0.8mm}E}}q$. The value of $BM$ is greater than one in the presence of the bullwhip effect in a supply chain. If $BM$ is equal to one, then there is no variance amplification, whereas $BM$ smaller than one indicates dampening, which means that the orders are smoothed compared to the demands, indicating a push rather than a pull supply chain. Another very important parameter of the supply chain performance is the measure of the net stock amplification of a given supply chain member. More precisely, let $N_S$ be the level of the net stock of a supply chain member (e.g. a retailer or a supplier) and $D$ be demands observed from its downstream member (customers or a retailer); then the following measure $$NSM=\frac{{{\bf Var}}({\mbox{net stock}})}{{{\bf Var}}({\mbox{demands}})}=\frac{{{\bf Var}}({N_S})}{{{\bf Var}}{D}}$$ is also considered a critical performance measure. In many models it is assumed that the costs are proportional to $\sqrt{{{\bf Var}}(\mbox{orders})}$ and $\sqrt{{{\bf Var}}({N_S})}$. Establishing real lead time behavior ==================================== Despite lead times being widely considered one of the main causes of the bullwhip effect, limited literature exists investigating actual lead time behavior (see Tab. \[tabpapers\]). Most research to date is focused on lead time demand, with an assumption of either constant lead times or lead times that are independent identically distributed (i.i.d.).
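Given empirical order and demand series, such as the ones studied in this section, the measure $BM$ defined above can be estimated directly; a minimal plain-Python sketch (population variances, illustrative data):

```python
def bullwhip_measure(orders, demands):
    """Estimate BM = (Var q / E q) / (Var D / E D) from order and demand
    series; BM > 1 signals the bullwhip effect, BM < 1 dampening."""
    def mean(s):
        return sum(s) / len(s)

    def var(s):
        m = mean(s)
        return sum((x - m) ** 2 for x in s) / len(s)

    return (var(orders) / mean(orders)) / (var(demands) / mean(demands))
```

For example, orders $[8,12,8,12]$ against demands $[9,11,9,11]$ have equal means but a four times larger order variance, so $BM=4$.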
To support the assumptions used in references (see Chatfield et al. [@ch:ki:ha:ha:04], Kim et al. [@ki:ch:ha:ha:06], Duc et al. [@du:lu:ki:08] and Michna and Nielsen [@mi:ni:13]) – i.e. that lead times are i.i.d. – the lead time behavior from a manufacturing company is analyzed as an example in the following. The data used consists of 6,967 orders for one product, varying in quantity ordered, over a period of two years (481 work days) in a manufacturing company. On average 14.5 orders are received per day in the period; each order is to an individual customer in the same geographical region. To test whether or not lead times are in fact i.i.d., the following tests are employed: 1. autocorrelation (see e.g. Box and Jenkins [@bo:je:76]) for independence of the lead times. This is done on the average lead time per day as the individual orders cannot be ordered in time periods smaller than one day. 2. the Kolmogorov-Smirnov test (see e.g. Conover [@co:71]) is applied. The test is a widely used robust estimator for identical distributions [@co:71]. The method (as seen in Fig. \[test\_fig\]) relies on comparing samples of lead times and using the Kolmogorov-Smirnov test to determine whether or not these pairwise samples are identically distributed. In this research a 0.05 significance level is used and the ratio of pairwise comparisons that pass this significance test is the output of the analysis. Different sample sizes are used to determine if the lead times can be assumed to be similarly distributed in smaller time periods, and thus if it is fair to sample previous lead time observations to estimate the lead time distribution for planning purposes. For a detailed account of the method please refer to Nielsen et al. [@ni:14].
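The pairwise comparison procedure can be sketched as follows. This is a plain-Python illustration, not the authors' implementation; it uses the classical asymptotic two-sample acceptance threshold $c(\alpha)\sqrt{(n+m)/(nm)}$ with $c(0.05)\approx 1.358$.

```python
def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum distance
    between the empirical CDFs of samples a and b."""
    a, b = sorted(a), sorted(b)
    points = sorted(set(a) | set(b))
    cdf = lambda s, x: sum(1 for v in s if v <= x) / len(s)
    return max(abs(cdf(a, x) - cdf(b, x)) for x in points)

def pairwise_pass_ratio(samples, c_alpha=1.358):
    """Ratio of sample pairs whose KS statistic stays below the asymptotic
    0.05-level threshold c(alpha)*sqrt((n+m)/(n*m)); this ratio is the
    output of the comparison procedure described above."""
    passed, total = 0, 0
    for i in range(len(samples)):
        for j in range(i + 1, len(samples)):
            n, m = len(samples[i]), len(samples[j])
            threshold = c_alpha * ((n + m) / (n * m)) ** 0.5
            passed += ks_statistic(samples[i], samples[j]) < threshold
            total += 1
    return passed / total
```

In practice one would use a library routine (e.g. `scipy.stats.ks_2samp`) instead of the hand-rolled statistic; the sketch only makes the sampling-and-counting logic explicit.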
![Sample and comparison procedure using Kolmogorov-Smirnov test for pairwise comparisons.[]{data-label="test_fig"}](flow_chart.eps){width="13cm"}

![top: auto correlation plot of average lead time per day; bottom: partial auto correlation plot of average lead time per day.[]{data-label="iidts"}](acf_mean_lt.eps){height="12cm" width="13cm"}

An autocorrelation and a partial autocorrelation plot are shown in Fig. \[iidts\]; they clearly show that the average lead times per day can, for all practical purposes, be considered mutually independent. There are some minor indications that the average lead time on a given day depends slightly on recent average lead times. However, the correlation coefficients are small (approximately 0.1) and the penalty for assuming independence seems slight.

![Ratio of pairwise Kolmogorov-Smirnov comparisons found to be statistically similar, for different sample sizes.[]{data-label="fig_ks_samples"}](lead_time_ks_graph.eps){height="12cm" width="13cm"}

Fig. \[fig\_ks\_samples\] shows that even for large samples (500 vs. 500 observations) most of the comparisons are found to be statistically similar at a 0.05 or better level. This supports the assumption that the lead times are in fact identically distributed. The overall conclusion is that it is not wrong to assume that lead times are in fact i.i.d. The investigation also underlines that it is a grave oversimplification to assume that lead times are constant for individual orders. There is also no guarantee that lead times are in fact i.i.d. in any and all contexts.

Models with stochastic lead times
=================================

Having established that at least in some cases lead times can be considered i.i.d., the next step is to analyze the current state of research into supply chains where lead times are assumed to be stochastic. The lead time is typically regarded as the second main cause of the bullwhip effect after demand forecasting.
Lead times consist of two components: physical delays and information delays. Models do not distinguish these components, as the lead time is the time between the epoch when an order is placed by a member of a supply chain and the epoch when the product is delivered to that member. The assumption that the lead time is constant is rather unrealistic. Undoubtedly, in many supply chains physical and information delays are random, which means that a member of a supply chain does not know the values of the future lead times and has observed in the past that their values varied in a stochastic manner. For instance, in the paper of So and Zheng [@so:zh:03] the model of a supply chain with stochastic lead times is motivated by the semiconductor industry, where the dramatic boom-and-bust cycles cause the delivery lead times to be highly variable, ranging from several weeks during the low demand season to over several months during the high demand season. Moreover, in the models investigating the bullwhip effect one can decide how time is represented. There are two choices, that is, discrete or continuous time. We analyze stochastic techniques which use discrete time. We assume that the observations are made at integer moments of time, which means that time is represented in units of the review periods and nothing is known about the system in the time between observations. The main difference between models with stochastic lead times lies in the definition of the lead time demand forecast. Let us recall that the lead time demand at the beginning of a period $t$ (at a certain stage of the supply chain) is defined as follows $$\label{ltd} D_t^L=D_t+D_{t+1}+\ldots+D_{t+L_t-1}= \sum_{i=0}^{L_t-1}D_{t+i}$$ where $D_t, D_{t+1}, \ldots$ denote demands (from the stage below) during the periods $t, t+1,\ldots$ and $L_t$ is the lead time of the order placed at the beginning of the period $t$ (the order placed to the stage above). This value represents the demand during the lead time.
The demands come from the stage directly below and the lead times come from the supplier directly above, that is, they are the delivery lead times of the supplier immediately above the receiving supply chain member. This quantity is necessary to place an order. In practice the member of the supply chain does not know its value at the moment $t$ and needs to predict it in order to place an order. Thus if we want to analyze the bullwhip effect we need to look closer at the definitions of the lead time demand forecast $\widehat{D}_t^L$. The approaches to this problem vary greatly between models with stochastic lead times and some of them are not feasible in practice. Many articles on the bullwhip effect investigate different methods of demand forecasting under the assumption that lead times are constant. The problem of lead time demand prediction is much more complicated if lead times are stochastic; then mere demand forecasting is not sufficient to place an order. We will analyze the works which quantify the bullwhip effect in supply chains with stochastic lead times. In all the presented models we will consider a simple two stage supply chain consisting of customers, a retailer and a manufacturer. Moreover, we will assume that the retailer uses the order-up-to-level policy (which is optimal in the sense that it minimizes the total discounted linear holding and backorder costs); then the level of the inventory at time $t$ has to be $$\label{st} S_t=\widehat{D}_t^L+z\widehat{\sigma}_t\,,$$ where $\widehat{D}_t^L$ is the lead time demand forecast at the beginning of the period $t$ (that is, the prediction of the quantity given in (\[ltd\])), $$\label{sigmadl} \widehat{\sigma}_t^2={{\bf Var}}(D_t^L-\widehat{D}_t^L)$$ is the variance of the forecast error for the lead time demand, and $z$ is the normal z-score that specifies the probability that demand is fulfilled by the on-hand inventory; it can be found based on a given service level.
Usually $z=\Phi^{-1}(p/(p+h))$ where $\Phi^{-1}$ is the inverse of the standard normal cumulative distribution function and $h$ and $p$ are the unit inventory holding and backorder costs at the retailer, respectively. Moreover we need to notice that the definition of $\widehat{\sigma}_t^2$ differs between articles (see e.g. Chen et al. [@ch:dr:ry:si:00a], Duc et al. [@du:lu:ki:08] or Kim et al. [@ki:ch:ha:ha:06]), which results in slightly different formulas for the bullwhip effect measure (e.g. equality instead of inequality). In practice, instead of the variance we have to use the empirical variance of $D_t^L-\widehat{D}_t^L$. This complicates theoretical calculations considerably, but we must mention that the estimation of $\widehat{\sigma}_t^2$ increases the size of the bullwhip effect. Thus, under the above assumptions, the order quantity $q_t$ placed by the retailer at the beginning of a period $t$ is $$\label{qt} q_t=S_t-S_{t-1}+D_{t-1}\,.$$ Let us notice that negative values of $q_t$ are allowed, which correspond to returns.

Lead time demand forecasting using moving average {#kimchat}
--------------------------------------------------

Let us analyze the work of Kim et al. [@ki:ch:ha:ha:06] (see also Chatfield et al. [@ch:ki:ha:ha:04] for a simulation approach). In their approach the lead time demand forecast is defined as follows $$\widehat{D}_t^L=\frac{1}{n}\sum_{j=1}^{n}D_{t-j}^L\,,$$ where $n$ is the delay parameter of the prediction and $D_{t-j}^L$ is the previously known lead time demand of the order placed at the beginning of the period $t-j$. This method is practically feasible. The problem with the approach of Kim et al. [@ki:ch:ha:ha:06] lies in an impractical definition of the past lead time demands $D_{t-j}^L$. Namely they continue $$\widehat{D}_t^L=\frac{1}{n}\sum_{j=1}^{n}D_{t-j}^L= \frac{1}{n}\sum_{j=1}^{n}\sum_{i=0}^{L-1} D_{t-j+i}= \frac{1}{n}\sum_{i=0}^{L-1}\sum_{j=1}^n D_{t-j+i}\,,$$ where $L$ is a lead time.
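The order-up-to mechanics of eqs. (\[st\]) and (\[qt\]), which all the models below share, can be sketched in Python (a minimal illustration; the cost and forecast values are hypothetical):

```python
from statistics import NormalDist

def safety_factor(p, h):
    """z = Phi^{-1}(p/(p+h)): normal z-score for the critical ratio of the
    unit backorder cost p to the unit holding cost h."""
    return NormalDist().inv_cdf(p / (p + h))

def order_up_to_level(lt_demand_forecast, sigma_hat, z):
    """S_t of eq. (st): forecasted lead time demand plus safety stock."""
    return lt_demand_forecast + z * sigma_hat

def order_quantity(S_t, S_prev, D_prev):
    """q_t of eq. (qt); negative values correspond to returns."""
    return S_t - S_prev + D_prev

z = safety_factor(p=9.0, h=1.0)        # critical ratio 0.9
S1 = order_up_to_level(50.0, 5.0, z)
S0 = order_up_to_level(48.0, 5.0, z)
print(round(order_quantity(S1, S0, D_prev=10.0), 6))   # 12.0
```

Note that with a constant forecast error the safety stock cancels in $q_t$, so the order simply passes on the change in the lead time demand forecast plus the last observed demand.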
Firstly, if we assume that lead times are stochastic then with every lead time demand $D_{t-j}^L$ we associate a different lead time $L_{t-j}$. Moreover, this definition does not work even in the case of a deterministic lead time, because at the beginning of the period $t$ the values of the demands $D_{t-j+i}$ with $j\leq i$ are not known (they explain it is a "mirror image" and "equivalent in terms of a priori statistical analysis"). Let us analyze the bullwhip effect under the above setting but with small modifications (see also Michna et al. [@mi:ni:ni:13]). More precisely, let us consider the simplest supply chain, consisting of customers, a retailer and a supplier. We assume that the customer demands constitute an i.i.d. sequence $\{D_t\}_{t=-\infty}^\infty$. Moreover, lead times are deterministic and equal to $L$, where $L$ is a positive integer, that is, $L=1,2,\ldots$. It is assumed that the retailer's replenishment order policy is the order-up-to-level policy and his lead time demand forecast is based on the moving average method. Thus the forecast of the lead time demand at the beginning of the period $t$ based on the moving average method is as follows $$\label{ltdfma} \widehat{D}_t^L=\frac{1}{n}\sum_{i=0}^{n-1} D^L_{t-L-i}\,.$$ Let us notice that we have to go back with lead time demands at least to the period $t-L$ because we know demands only up to the epoch $t-1$. Moreover, let us recall that the demand forecast itself using the moving average is as follows $$\widehat{D}_t=\frac{1}{n}\sum_{j=1}^n D_{t-j}\,.$$ Thus substituting into eq.
(\[ltdfma\]) the known values of the previous lead time demands we get $$\begin{aligned} \widehat{D}_t^L&=&\frac{1}{n}\sum_{i=0}^{n-1}\sum_{j=0}^{L-1}D_{t-L-i+j}\nonumber\\ &=&\sum_{j=0}^{L-1}\frac{1}{n}\sum_{i=0}^{n-1}D_{t-L-i+j}\nonumber\\ &=&\sum_{j=0}^{L-1}\widehat{D}_{t-L+j+1}\nonumber\\ &=&\sum_{j=0}^{L-1}\widehat{D}_{t-j}\,.\label{ltdfkim}\end{aligned}$$ Applying the order-up-to-level policy we get that the inventory level of the retailer at time $t$ is given in (\[st\]). The error of the lead time demand forecast $\widehat{\sigma}_t$ is defined as in eq. (\[sigmadl\]). It is easy to notice that under the above assumptions $\widehat{\sigma}_t$ is independent of $t$. Thus the order quantity $q_t$ placed by the retailer at the beginning of a period $t$ is $$\begin{aligned} q_t&=&S_t-S_{t-1}+D_{t-1}\\ &=&\widehat{D}_t^L-\widehat{D}_{t-1}^L+D_{t-1}\\ &=&\sum_{j=0}^{L-1}\widehat{D}_{t-j}-\sum_{j=0}^{L-1}\widehat{D}_{t-1-j}+D_{t-1}\\ &=& \widehat{D}_t-\widehat{D}_{t-L}+D_{t-1}\end{aligned}$$ where in the second to last equality we use eq. (\[ltdfkim\]). To calculate the value of $q_t$ we need to consider two cases, that is, $L\geq n$ and $L<n$.
Thus in the case $L\geq n$ the order $q_t$ placed by the retailer is as follows $$q_t=\left(\frac{1}{n}+1\right)D_{t-1}+\frac{1}{n}\sum_{j=2}^n D_{t-j} -\frac{1}{n}\sum_{j=1}^n D_{t-L-j}$$ and $$\begin{aligned} {{\bf Var}}q_t&=&\left[\left(\frac{1}{n}+1\right)^2+\frac{2n-1}{n^2}\right]{{\bf Var}}D\\ &=&\left(1+\frac{4}{n}\right){{\bf Var}}D\,.\end{aligned}$$ In the case $L<n$ we get $$q_t=\left(\frac{1}{n}+1\right)D_{t-1}+\frac{1}{n}\sum_{j=2}^L D_{t-j} -\frac{1}{n}\sum_{j=1}^L D_{t-n-j}$$ and $$\begin{aligned} {{\bf Var}}q_t&=&\left[\left(\frac{1}{n}+1\right)^2+\frac{2L-1}{n^2}\right]{{\bf Var}}D\\ &=&\left(1+\frac{2}{n}+\frac{2L}{n^2}\right){{\bf Var}}D\,.\end{aligned}$$ \[ldet\] If the lead times are deterministic and positive integer valued, that is, $L=1,2,\ldots$, and lead time demands are forecasted using the moving average method, then the bullwhip effect measure is $$BM=\frac{{{\bf Var}}q_t}{{{\bf Var}}D}= \left\{ \begin{array}{ll} 1+\frac{2}{n}+\frac{2L}{n^2}& \mbox{if}\,\,\, L<n\\ 1+\frac{4}{n}& \mbox{if}\,\,\, L\geq n\,. \end{array} \right.$$ In Fig. \[det\] we plot the bullwhip effect measure for a deterministic lead time when lead time demands are predicted by the moving average method (see Prop. \[ldet\]). Let us notice that the bullwhip effect measure $BM(n)$ as a function of $n$ does not have any jump at $L$, that is, it crosses the point $n=L$ smoothly (compare Prop. \[ldet\] with the similar result of Kim et al. [@ki:ch:ha:ha:06]). ![The plot of the bullwhip effect measure as a function of $n$ where $L=7$.[]{data-label="det"}](det.eps){height="10cm" width="10cm"} Now we follow the work of Kim et al. [@ki:ch:ha:ha:06] with certain modifications to find the bullwhip effect measure in the presence of stochastic lead times (see also Michna et al. [@mi:ni:ni:13]).
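The absence of a jump at $n=L$ can be checked directly from Prop. \[ldet\]; a small sketch:

```python
def bm_deterministic(n, L):
    """Bullwhip measure of Prop. [ldet]: moving average lead time demand
    forecast with delay parameter n, deterministic lead time L."""
    return 1 + 2 / n + 2 * L / n**2 if L < n else 1 + 4 / n

# At n = L the two branches coincide: 1 + 2/L + 2L/L^2 = 1 + 4/L,
# so BM(n) crosses the point n = L without a jump.
for L in range(1, 20):
    assert abs((1 + 2 / L + 2 * L / L**2) - bm_deterministic(L, L)) < 1e-12
```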
We assume that the customer demands constitute an i.i.d. sequence $\{D_t\}_{t=-\infty}^\infty$, that the lead times $\{L_t\}_{t=-\infty}^\infty$ are also independent and identically distributed, and that the two sequences are mutually independent. Let us put ${{\rm I\hspace{-0.8mm}E}}D_t=\mu_D$, ${{\bf Var}}D_t=\sigma^2_D$, ${{\rm I\hspace{-0.8mm}E}}L_t=\mu_L$ and ${{\bf Var}}L_t=\sigma^2_L$. Additionally we need to assume that lead times are bounded random variables, that is, $L_i\leq M$ where $M$ is a positive integer. This assumption is not adopted in Kim et al. [@ki:ch:ha:ha:06], which makes their results slightly impractical because the boundedness is necessary to make the prediction of lead time demands. More precisely, we go back at least $M$ periods to forecast the lead time demand, that is, at time $t$ we know the lead time demands of times $t-M, t-M-1, \ldots$ and we may not know the lead time demands of times $t-M+1, t-M+2, \ldots$. As we will see later, we need to know the distribution of $L_t$ to calculate the bullwhip effect measure, that is, we assume that $${{\rm I\hspace{-0.8mm}P}}(L_t=k)=p_k$$ where $k=1,2,\ldots,M$ and $k$ is the number of periods (in practice we estimate these probabilities). Thus the prediction of the lead time demand at time $t$ using the moving average method with length $n$ is as follows $$\label{ltdp} \widehat{D}_t^L=\frac{1}{n}\sum_{j=0}^{n-1} D_{t-M-j}^L\,.$$ Once again let us notice that we may not know the lead time demands $D_{t-M+1}^L, D_{t-M+2}^L, \ldots$ at time $t$, which is why in the lead time demand forecast we use $D_{t-M}^L, D_{t-M-1}^L, \ldots$, the lead time demands up to time $t-M$. Thus by eq. (\[ltd\]) we get $$\label{ltdpc} \widehat{D}_t^L=\frac{1}{n}\sum_{j=0}^{n-1} \sum_{i=0}^{L_{t-M-j}-1}D_{t-M-j+i}\,.$$ As before, the retailer uses the order-up-to-level policy, thus the level of the inventory at time $t$ is given in eq. (\[st\]).
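Eqs. (\[ltdp\]) and (\[ltdpc\]) can be sketched directly (a minimal illustration with 0-indexed period lists; all names are ours):

```python
def lead_time_demand(demands, lead_times, s):
    """D_s^L of eq. (ltd): demand accumulated over the lead time of the
    order placed at the beginning of period s."""
    return sum(demands[s + i] for i in range(lead_times[s]))

def lt_demand_forecast(demands, lead_times, t, n, M):
    """Eq. (ltdp): moving average of the n most recent lead time demands
    that are fully known at time t. Since lead times are bounded by M,
    the lead time demands of periods t-M, t-M-1, ... are known."""
    return sum(lead_time_demand(demands, lead_times, t - M - j)
               for j in range(n)) / n

demands    = [1, 2, 3, 4, 5, 6, 7, 8]
lead_times = [2] * 8                   # constant lead time of 2 periods
print(lt_demand_forecast(demands, lead_times, t=6, n=2, M=2))  # (11 + 9)/2 = 10.0
```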
By the stationarity and independence of the sequences of demands and lead times one can show that $\widehat{\sigma}_t^2$ given in (\[sigmadl\]) does not depend on $t$. Hence we obtain $$\begin{aligned} q_t&=& \widehat{D}_t^L-\widehat{D}_{t-1}^L+D_{t-1}\nonumber\\ &=&\frac{1}{n}\sum_{j=0}^{n-1} D_{t-M-j}^L-\frac{1}{n}\sum_{j=0}^{n-1} D_{t-1-M-j}^L +D_{t-1}\nonumber\\ &=& \frac{1}{n} D_{t-M}^L-\frac{1}{n}D_{t-M-n}^L+D_{t-1}\nonumber\\ &=&\frac{1}{n}\sum_{i=0}^{L_{t-M}-1}D_{t-M+i}-\frac{1}{n}\sum_{i=0}^{L_{t-M-n}-1} D_{t-M-n+i}+D_{t-1}\,.\label{qtcal}\end{aligned}$$ \[thltdp1\] Under the above assumptions and for $n\geq M$ the bullwhip effect measure is the following $$BM=\frac{{{\bf Var}}q_t}{{{\bf Var}}D_t}= 1+\frac{2p_M}{n}+\frac{2\mu_L}{n^2}+\frac{2\mu^2_D\sigma^2_L}{\sigma^2_D n^2}\,.$$ Using the law of total variance we have $${{\bf Var}}q_t={{\rm I\hspace{-0.8mm}E}}({{\bf Var}}(q_t|L_{t-M}, L_{t-M-n}))+{{\bf Var}}{{\rm I\hspace{-0.8mm}E}}(q_t|L_{t-M}, L_{t-M-n})\,.$$ By eq. (\[qtcal\]) we get $${{\rm I\hspace{-0.8mm}E}}(q_t|L_{t-M}, L_{t-M-n})=\mu_D\left(\frac{L_{t-M}-L_{t-M-n}}{n}+1\right)\,.$$ Thus $$\label{varexp} {{\bf Var}}{{\rm I\hspace{-0.8mm}E}}(q_t|L_{t-M}, L_{t-M-n})=\frac{2\mu^2_D\sigma^2_L}{n^2}\,.$$ We need to consider two cases to find ${{\bf Var}}(q_t|L_{t-M}, L_{t-M-n})$. In the first case, $L_{t-M}<M$, we get $${{\bf Var}}(q_t|L_{t-M}, L_{t-M-n})=\sigma^2_D\left(\frac{L_{t-M}+L_{t-M-n}}{n^2}+1\right)\,.$$ If $L_{t-M}=M$ we have $${{\bf Var}}(q_t|L_{t-M}, L_{t-M-n})=\sigma^2_D\left(\frac{L_{t-M}+L_{t-M-n}}{n^2}+1+\frac{2}{n}\right)\,.$$ Finally we obtain $${{\bf Var}}(q_t|L_{t-M}, L_{t-M-n})=\sigma^2_D\left(\frac{L_{t-M}+L_{t-M-n}}{n^2}+1\right)+ \frac{2\sigma^2_D}{n}{{1\hspace{-1mm}{\rm I}}}\{L_{t-M}=M\}$$ where ${{1\hspace{-1mm}{\rm I}}}$ is the indicator function. Thus we get $${{\rm I\hspace{-0.8mm}E}}{{\bf Var}}(q_t|L_{t-M}, L_{t-M-n})=\sigma^2_D\left(\frac{2\mu_L}{n^2}+1\right)+\frac{2\sigma^2_D p_M}{n}$$ which together with eq.
(\[varexp\]) gives the assertion. The formula for the bullwhip effect measure in the case $n<M$ is more complicated and its derivation is rather cumbersome. In practice the case $n\geq M$ is more interesting, because we need to take $n$ large in the forecast to get a more precise prediction. Let us notice that if $L_t=M=L$ is deterministic, the formula of Th. \[thltdp1\] is consistent with Prop. \[ldet\]. We have to mention that in the formula of Th. \[thltdp1\] the term $\frac{2p_M}{n}$ gives the largest contribution to the bullwhip effect for large $n$ because it is of the order $O(1/n)$. This means that in reducing the bullwhip effect the probability of the largest lead time is very important. It is astonishing that if $p_M=0$ and we still go back $M$ periods in the prediction of lead time demands, then the bullwhip effect measure is reduced by the term of order $O(1/n)$ and is of the form $$\frac{{{\bf Var}}q_t}{{{\bf Var}}D_t}= 1+\frac{2\mu_L}{n^2}+\frac{2\mu^2_D\sigma^2_L}{\sigma^2_D n^2}\,.$$ Let us compare the values of the bullwhip effect measure under $p_M>0$ and $p_M=0$. More precisely, let $L_t$ have the discrete uniform distribution on $\{1,2,3\}$, that is, $p_k=1/3$ for $k=1,2,3$; then $M=3$, $\mu_L=2$ and $\sigma_L^2=2/3$. In the case $p_M=0$ we assume that $L_t$ has the discrete uniform distribution on $\{1,2\}$, that is, $p_k=1/2$ for $k=1,2$; then $\mu_L=1.5$ and $\sigma_L^2=1/4$ (and we still go back at least $M=3$ periods to predict the lead time demand). The results are in Tab. \[bm1\].

  $n$    $p_M>0$   $p_M=0$
  ----- --------- ---------
  3       2.259     1.555
  4       1.750     1.312
  5       1.506     1.200
  6       1.370     1.138
  7       1.285     1.102
  8       1.229     1.078
  9       1.189     1.061
  10      1.160     1.050
  11      1.137     1.041
  12      1.120     1.034
  13      1.106     1.029
  14      1.095     1.025
  15      1.085     1.022

  : The measure of the bullwhip effect as a function of $n$ for $M=3$ and $\sigma_D/\mu_D=0.5$.[]{data-label="bm1"}

Similarly we can calculate the bullwhip effect measure for longer lead times.
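The formula of Th. \[thltdp1\] is straightforward to evaluate for any bounded lead time distribution; the sketch below (function name and arguments are ours) reproduces the first entries of Tab. \[bm1\]:

```python
def bm_stochastic(n, probs, cv_D):
    """BM of Th. [thltdp1] for n >= M, where probs[k-1] = P(L_t = k) for
    k = 1..M and cv_D = sigma_D / mu_D is the demand coefficient of variation."""
    mu_L = sum(k * p for k, p in enumerate(probs, start=1))
    var_L = sum(k * k * p for k, p in enumerate(probs, start=1)) - mu_L**2
    p_M = probs[-1]
    return 1 + 2 * p_M / n + 2 * mu_L / n**2 + 2 * var_L / (cv_D**2 * n**2)

# Uniform lead times on {1, 2, 3} (M = 3) and sigma_D/mu_D = 0.5, as in Tab. [bm1]:
uniform3 = [1/3, 1/3, 1/3]
print(round(bm_stochastic(3, uniform3, 0.5), 3))   # 2.259
print(round(bm_stochastic(4, uniform3, 0.5), 3))   # 1.75
```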
More precisely, let $L_t$ have the discrete uniform distribution on $\{1,2,\ldots,7\}$, that is, $p_k=1/7$ for $k=1,2,\ldots, 7$; then $M=7$, $\mu_L=4$ and $\sigma_L^2=4$. In the case $p_M=0$ we assume that $L_t$ has the discrete uniform distribution on $\{1,2,\ldots,6\}$, that is, $p_k=1/6$ for $k=1,2,\ldots, 6$; then $\mu_L=3.5$ and $\sigma_L^2=3.916$ (and we still go back at least $M=7$ periods to predict the lead time demand). The results are in Tab. \[bm2\].

  $n$    $p_M>0$   $p_M=0$
  ----- --------- ---------
  7       1.857     1.782
  8       1.660     1.598
  9       1.525     1.473
  10      1.428     1.383
  11      1.356     1.316
  12      1.301     1.266
  13      1.258     1.226
  14      1.224     1.195
  15      1.196     1.170
  16      1.174     1.149
  17      1.155     1.132
  18      1.139     1.118

  : The measure of the bullwhip effect as a function of $n$ for $M=7$ and $\sigma_D/\mu_D=0.5$.[]{data-label="bm2"}

Stochastic lead times without forecasting
-----------------------------------------

In the paper of Duc et al. [@du:lu:ki:08] stochastic lead times are investigated under the assumption that they are independent and identically distributed. The simplest two stage supply chain is analyzed with a first-order autoregressive AR(1) demand process and an extension to a mixed first-order autoregressive-moving average ARMA(1,1) process. More precisely, the demands from customers to the retailer constitute a first-order autoregressive process AR(1), that is, $\{D_t\}_{t=-\infty}^\infty$ is a stationary sequence of random variables which satisfy $$\label{ard} D_t=\mu+\rho D_{t-1}+\epsilon_t$$ where $\mu>0$, $|\rho|<1$ and $\{\epsilon_t\}_{t=-\infty}^\infty$ is a sequence of independent identically distributed random variables such that ${{\rm I\hspace{-0.8mm}E}}\epsilon_t=0$ and ${{\bf Var}}\epsilon_t=\sigma^2$.
It is easy to notice that ${{\rm I\hspace{-0.8mm}E}}D_t=\mu_D=\frac{\mu}{1-\rho}$, ${{\bf Var}}D_t=\sigma^2_D=\frac{\sigma^2}{1-\rho^2}$ and the correlation coefficient ${{\bf Corr}}(D_t, D_{t+1})=\rho$. Moreover, it is assumed that the demands are forecasted using the minimum-mean-squared-error forecasting method. If $\widehat{D}_{t+i}$ denotes the forecast, made at the beginning of a period $t$, of the demand for the period $t+i$ (that is, $i$ periods ahead), then employing the minimum-mean-squared-error forecasting method we get $$\begin{aligned} \label{msef} \widehat{D}_{t+i}&=&{{\rm I\hspace{-0.8mm}E}}(D_{t+i}| D_{t-1}, D_{t-2},\dots)\nonumber\\ &=& \mu_D(1-\rho^{i+1})+\rho^{i+1}D_{t-1}\label{fd}\end{aligned}$$ where $D_{t-j}$, $j=1,2,\ldots$, are the demands which have been observed by the retailer up to the beginning of the period $t$. Then the lead time demand forecast at the beginning of the period $t$ is defined by Duc et al. [@du:lu:ki:08] as follows $$\widehat{D}_t^L=\sum_{i=0}^{L_t-1}\widehat{D}_{t+i}$$ where $\widehat{D}_{t+i}$ is given in eq. (\[msef\]). Let us notice that the above lead time demand forecast is not practically feasible because we do not know the value of $L_t$ at the beginning of the period $t$. In practice, to place an order we have to forecast demands and lead times, which means that in the above lead time demand forecast we need to substitute a lead time prediction $\widehat{L}_t$ for $L_t$. As in the previous model the retailer uses the order-up-to-level policy and the level of inventory $S_t$ is given in (\[st\]). The variance of the forecast error for the lead time demand and the order quantity $q_t$ placed by the retailer at the beginning of a period $t$ are defined in (\[sigmadl\]) and (\[qt\]), respectively. The main result of Duc et al. [@du:lu:ki:08] is the following.
\[duc1\] Under the above assumptions with the minimum-mean-squared-error forecasting method the bullwhip effect measure is $$\begin{aligned} \lefteqn{BM=\frac{{{\bf Var}}q_t}{{{\bf Var}}D_t}}\\ &=&\frac{1}{(1-\rho)^2}[(1-\rho^2)[1-2\rho{{\rm I\hspace{-0.8mm}E}}\rho^{L_t}]+2\rho^2{{\rm I\hspace{-0.8mm}E}}\rho^{2 L_t} -2\rho^3({{\rm I\hspace{-0.8mm}E}}\rho^{L_t})^2]+\frac{2\mu^2_D\sigma^2_L}{\sigma^2_D}\,.\end{aligned}$$ Duc et al. [@du:lu:ki:08] give numerical examples. They calculate the value of $BM$ for specific distributions of $L_t$ e.g. three-point distribution, geometric distribution, Poisson and discrete uniform distribution. The plots of $BM$ as a function of the autoregressive coefficient $\rho$ for a fixed $\sigma_D/\mu_D$ are presented. It is interesting that the minimal value of $BM$ is attained for $\rho$ around $-0.6$ and $-0.7$. The maximal value of $BM$ is for $\rho$ around $0.6$ or $1$. The authors of [@du:lu:ki:08] extend the results for ARMA(1,1) demand processes (the mixed first-order autoregressive-moving average process). In this case, the structure of demands is defined as follows $$D_t=\mu+\rho D_{t-1}+\epsilon_t-\theta\epsilon_{t-1}$$ where $\mu$, $\rho$ and $\epsilon_t$ are the same as in the case of AR(1) demand process and $|\theta|<1$. Then under the same assumptions (the order-up-to-level inventory policy and the minimum-mean-squared-error forecasting method) the bullwhip effect measure is given. 
\[duc2\] Under an ARMA(1,1) demand process with the minimum-mean-squared-error forecasting method the bullwhip effect measure is $$\begin{aligned} \lefteqn{BM=\frac{{{\bf Var}}q_t}{{{\bf Var}}D_t}}\\ &=&\frac{(1-\rho^2)(1-\theta)[1-\theta+2(\theta-\rho){{\rm I\hspace{-0.8mm}E}}\rho^{L_t}]+2(\rho-\theta)^2[{{\rm I\hspace{-0.8mm}E}}\rho^{2 L_t}-\rho({{\rm I\hspace{-0.8mm}E}}\rho^{L_t})^2]}{(1-\rho)^2 (1+\theta^2-2\rho\theta)}\\ &&\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, +\frac{2\mu^2_D\sigma^2_L}{\sigma^2_D}\,.\end{aligned}$$ Numerical results for the ARMA(1,1) demand process show the same trends as in the AR(1) case.

Stochastic lead times with forecasting
--------------------------------------

In the work of Michna and Nielsen [@mi:ni:13] the impact of lead time forecasting on the bullwhip effect is investigated. It is assumed that lead times and demands are forecasted separately, which seems to be a very natural and practical approach. More precisely, the lead time demand prediction is the following $$\label{eltd} \widehat{D}_t^L=\widehat{L}_t\widehat{D}_t= \frac{1}{mn}\sum_{i=1}^m L_{t-i}\sum_{j=1}^n D_{t-j}\,,$$ where we use the moving average method for lead times and demands with the delay parameters $m$ and $n$, respectively. Moreover, we assume that lead times and demands constitute i.i.d. sequences which are mutually independent. Under the same assumptions on the policy and the lead time demand forecast error as in the previous models, the following result is proven. \[bmmt\] The measure of the bullwhip effect in a two stage supply chain has the following form $$BM=\frac{{{\bf Var}}q_t}{{{\bf Var}}D_t}= \frac{2\sigma_L^2(m+n-1)}{m^2n^2}+\frac{2\mu_D^2\sigma_L^2}{\sigma_D^2m^2}+\frac{2\mu_L^2}{n^2}+\frac{2\mu_L}{n}+1\,.$$ The above theoretical model shows that one cannot avoid lead time forecasting when placing orders and that the variance of orders will increase dramatically if a crude estimation of the lead time (e.g.
small $m$) or no estimation is used (e.g. assuming a constant lead time when placing orders). Moreover, demand signal processing and lead time signal processing, that is, the practice of adjusting demand and lead time forecasts and consequently the parameters of the inventory replenishment rule, are the main and equally important causes of the bullwhip effect. To confirm the theoretical results derived in Michna and Nielsen [@mi:ni:13] we simulate the bullwhip effect measure in a supply chain which consists of three echelons. First we assume that customer demands are deterministic, that is, during every period (a week here) we observe the same constant demand $D$. Above the consumers in our supply chain we have a retailer, a manufacturer and a supplier. Between the manufacturer and the retailer there are stochastic lead times which form an i.i.d. sequence (that is, they are the delivery times of the manufacturer to the retailer). Similarly we observe random times between the supplier and the manufacturer, and they constitute an i.i.d. sequence. These two sequences are mutually independent. Moreover, we take the review period equal to a week (7 days) and the lead times are discrete uniform random variables taking the values $1,2,\ldots,7$. The retailer uses the order-up-to-level policy and the moving average method to predict lead times with the delay parameter $m$ (consumer demands are constant, equal to 5000, so they are not predicted by the retailer). Similarly the manufacturer places orders to its supplier, that is, he uses the order-up-to-level policy and the moving average method to predict lead times with the delay parameter $m$ and demands with the delay parameter $n$ (the retailer's orders are random due to the random lead times in his lead time demand forecast). Tab. \[bmmdet\] presents the simulation results, that is, the ratio of the variances of the manufacturer and the retailer orders (the variance of consumer demands is zero).
A common feature of these simulation results is the fact that the delay parameter of demand forecasting $n$ smooths the bullwhip effect much faster than the delay parameter of lead time forecasting $m$. Under the same assumptions as above we simulate the bullwhip effect, now assuming that customer demands are stochastic, i.i.d. with the uniform distribution on $(4500, 5500)$ and independent of lead times. In Tab. \[bmmsto1\] the bullwhip effect at the retailer stage is given, that is, the quotient of the retailer order variance and the customer demand variance. Tab. \[bmmsto2\] shows the same as Tab. \[bmmsto1\] but calculated theoretically using the formula of Th. \[bmmt\]. Here we get the reverse behavior compared to the case of deterministic demands, that is, the delay parameter of lead time forecasting $m$ smooths the bullwhip effect much faster than the delay parameter of demand forecasting $n$ (see Tab. \[bmmsto1\] and Tab. \[bmmsto2\]). Finally, in Tab. \[bmmsto3\] we have the bullwhip effect measure at the manufacturer stage, that is, the ratio of the manufacturer order variance and the customer demand variance (we could compute the quotient of the manufacturer order variance and the retailer order variance, but it is easy to obtain this given the ratio of the retailer order variance and the customer demand variance). The simulation results for the bullwhip effect at the manufacturer stage show that the delay parameter of demand forecasting $n$ and the delay parameter of lead time forecasting $m$ dampen the effect with similar strength.
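The formula of Th. \[bmmt\] can be evaluated in closed form; with the simulation's parameters (lead times uniform on $\{1,\ldots,7\}$, so $\mu_L=4$ and $\sigma_L^2=4$; demands uniform on $(4500,5500)$, so $\sigma_D/\mu_D=1/\sqrt{300}$) the sketch below reproduces the $m=n=1$ entry of Tab. \[bmmsto2\]:

```python
import math

def bm_forecast(m, n, mu_L, var_L, cv_D):
    """BM of Th. [bmmt]: moving average forecasts of lead times (length m)
    and demands (length n); cv_D = sigma_D / mu_D."""
    return (2 * var_L * (m + n - 1) / (m**2 * n**2)
            + 2 * var_L / (cv_D**2 * m**2)
            + 2 * mu_L**2 / n**2
            + 2 * mu_L / n + 1)

cv_D = 1 / math.sqrt(300)    # demands uniform on (4500, 5500): sigma_D^2 = 1000^2/12
print(round(bm_forecast(1, 1, mu_L=4, var_L=4, cv_D=cv_D), 1))   # 2449.0
```

The dominant term for small $m$ is $2\mu_D^2\sigma_L^2/(\sigma_D^2 m^2)$, which is why increasing the lead time forecast length $m$ dampens the effect so quickly here.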
  $m \backslash n$       1         2         6        10       20
  ------------------ --------- --------- -------- -------- --------
  1                   61.6592   17.9880   4.4984   3.3532   2.4048
  3                   39.1578   14.5778   4.3510   3.1641   2.4692
  6                   44.2991   14.4909   5.4784   3.0812   2.4720
  10                  42.9382   13.8638   4.3274   3.6823   2.4074
  15                  42.4075   14.5218   4.0920   3.1734   2.5155
  20                  43.292    14.194    4.150    3.165    2.744

  : The bullwhip effect measure for discrete uniform lead times and constant customer demands[]{data-label="bmmdet"}

  $m \backslash n$       1         2         6        10       20
  ------------------ --------- --------- -------- -------- --------
  1                   2506.6    2336.7    2392.4   2380.9   2549.4
  3                   310.13    280.31    269.90   267.19   265.30
  6                   107.60    78.70     68.65    71.21    68.86
  10                  67.313    37.270    26.465   25.997   25.889
  20                  46.8218   19.3528   9.2602   8.1362   7.4563

  : The bullwhip effect measure at the retailer stage for discrete uniform lead times and uniformly distributed customer demands[]{data-label="bmmsto1"}

  $m \backslash n$       1         2         6        10       20
  ------------------ --------- --------- -------- -------- --------
  1                   2449.0    2417.0    2404.6   2402.9   2401.9
  3                   310.33    280.55    270.08   268.89   268.19
  6                   109.00    80.055    69.956   68.820   68.160
  10                  65.800    37.220    27.255   26.135   25.485
  20                  47.400    19.105    9.2361   8.1258   7.4820

  : The bullwhip effect measure at the retailer stage for discrete uniform lead times and uniformly distributed customer demands calculated theoretically[]{data-label="bmmsto2"}

  $m \backslash n$       1         2          6         10        20
  ------------------ --------- ---------- --------- --------- ---------
  1                   194700    41742      10720     8024.0    5891.9
  3                   13495     4221.1     1191.6    856.245   676.579
  6                   5821.1    1216.1     364.729   226.268   171.459
  10                  3671.5    581.341    112.412   93.529    62.333
  20                  2840.2    316.308    37.244    23.710    18.972

  : The bullwhip effect measure at the manufacturer stage for discrete uniform lead times and uniformly distributed customer demands[]{data-label="bmmsto3"}

Conclusions and future research opportunities
=============================================

The main conclusion from our research is that stochastic lead times boost the bullwhip effect.
More precisely, we deduce from the presented models that the effect is amplified by an increase of the expected value, the variance and the probability of the largest lead time. Moreover, depending on the model, the delay parameter of the prediction of demands, the delay parameter of the prediction of lead times and the delay parameter of the prediction of lead time demands are crucial parameters which can dampen the bullwhip effect. We must also notice that in all the presented models the bullwhip effect measure contains the term $\frac{2\mu_D^2\sigma_L^2}{\sigma_D^2}$ (see Th. \[thltdp1\], \[duc1\], \[duc2\] and \[bmmt\]) and, except in the model of Duc et al. [@du:lu:ki:08], this term can be eliminated by forecasting (letting $n$ or $m$ tend to $\infty$). Future research on quantifying the bullwhip effect has to be aimed at stochastic lead times with a structure different from i.i.d. and at dependence between lead times and demands. One can investigate, for example, an AR(1) structure of lead times and the influence of the dependence between lead times and demands on the bullwhip effect. Another challenge in bullwhip modeling is the problem of lead time forecasting and its impact on the bullwhip effect. A member of a supply chain placing an order must forecast the lead time to determine an appropriate inventory level in order to fulfill its customer orders in a timely manner, which implies that lead times influence orders. In turn, orders can impact lead times. This feedback loop, which has to be quantified, can be the most important factor causing the bullwhip effect, and in our opinion quantifying it seems to be the most important challenge and the most difficult problem in bullwhip modeling. Another topic is the combination of methods for lead time forecasting and demand forecasting (to predict the lead time demand). Thus the spectrum of models which have to be investigated in order to quantify and find all causes of the bullwhip effect is very wide.
However, these problems do not seem to be easy to solve with analytical models alone.

Acknowledgments {#acknowledgments .unnumbered}
---------------

This work has been supported by the National Science Centre grant\
2012/07/B/HS4/00702.

[99]{}

Agrawal, S., Sengupta, R.N., Shanker, K. (2009). Impact of information sharing and lead time on bullwhip effect and on-hand inventory. [*European Journal of Operational Research*]{} [**192**]{}, pp. 576–593.

Alwan, L.C., Liu, J., Yao, D.Q. (2003). Stochastic characterization of upstream demand processes in a supply chain. [*IIE Transactions*]{} [**35**]{}, pp. 207–219.

Bagchi, U., Hayya, J., Chu, C. (1986). The effect of leadtime variability: The case of independent demand. [*Journal of Operations Management*]{} [**6**]{}, pp. 159–177.

Box, G.E.P., Jenkins, G.M. (1976). [*Time series analysis: Forecasting and control.*]{} Holden-Day, San Francisco.

Buchmeister, B., Pavlinjek, J., Palcic, I., Polajnar, A. (2008). Bullwhip effect problem in supply chains. [*Advances in Production Engineering and Management*]{} [**3**]{}(1), pp. 45–55.

Chaharsooghi, S.K., Heydari, J. (2010). LT variance or LT mean reduction in supply chain management: Which one has a higher impact on SC performance? [*International Journal of Production Economics*]{} [**124**]{}, pp. 475–481.

Chatfield, D.C., Kim, J.G., Harrison, T.P., Hayya, J.C. (2004). The bullwhip effect - Impact of stochastic lead time, information quality, and information sharing: a simulation study. [*Production and Operations Management*]{} [**13**]{}(4), pp. 340–353.

Chen, F., Drezner, Z., Ryan, J.K., Simchi-Levi, D. (2000a). Quantifying the bullwhip effect in a simple supply chain. [*Management Science*]{} [**46**]{}(3), pp. 436–443.

Chen, F., Drezner, Z., Ryan, J.K., Simchi-Levi, D. (2000b). The impact of exponential smoothing forecasts on the bullwhip effect. [*Naval Research Logistics*]{} [**47**]{}(4), pp. 269–286.

Conover, W.J. (1971).
[*Practical Nonparametric Statistics.*]{} New York, John Wiley & Sons.

Disney, S.M., Towill, D.R. (2003). On the bullwhip and inventory variance produced by an ordering policy. [*Omega*]{} [**31**]{}, pp. 157–167.

Duc, T.T.H., Luong, H.T., Kim, Y.D. (2008). A measure of the bullwhip effect in supply chains with stochastic lead time. [*The International Journal of Advanced Manufacturing Technology*]{} [**38**]{}(11-12), pp. 1201–1212.

Fioriolli, J.C., Fogliatto, F.S. (2008). A model to quantify the bullwhip effect in systems with stochastic demand and lead time. [*Proceedings of the 2008 IEEE IEEM*]{}, pp. 1098–1102.

Forrester, J.W. (1958). Industrial dynamics - a major break-through for decision-making. [*Harvard Business Review*]{} [**36**]{}(4), pp. 37–66.

Geary, S., Disney, S.M., Towill, D.R. (2006). On bullwhip in supply chains - historical review, present practice and expected future impact. [*International Journal of Production Economics*]{} [**101**]{}, pp. 2–18.

Graves, S.C. (1999). A single-item inventory model for a non-stationary demand process. [*Manufacturing and Service Operations Management*]{} [**1**]{}, pp. 50–61.

Hariharan, R., Zipkin, P. (1995). Customer-order information, leadtimes, and inventories. [*Management Science*]{} [**41**]{}, pp. 1599–1607.

Lee, H.L., Padmanabhan, V., Whang, S. (1997a). The bullwhip effect in supply chains. [*Sloan Management Review*]{} [**38**]{}(3), pp. 93–102.

Lee, H.L., Padmanabhan, V., Whang, S. (1997b). Information distortion in a supply chain: the bullwhip effect. [*Management Science*]{} [**43**]{}(3), pp. 546–558.

Lee, H.L., So, K.C., Tang, C.S. (2000). The value of information sharing in a two-level supply chain. [*Management Science*]{} [**46**]{}(5), pp. 626–643.

Li, C., Liu, S. (2013). A robust optimization approach to reduce the bullwhip effect of supply chains with vendor order placement lead time delays in an uncertain environment. [*Applied Mathematical Modeling*]{} [**37**]{}, pp. 707–718.
Kim, J.G., Chatfield, D., Harrison, T.P., Hayya, J.C. (2006). Quantifying the bullwhip effect in a supply chain with stochastic lead time. [*European Journal of Operational Research*]{} [**173**]{}(2), pp. 617–636.

Michna, Z., Nielsen, P. (2013). The impact of lead time forecasting on the bullwhip effect. arXiv preprint arXiv:1309.7374

Michna, Z., Nielsen, I.E., Nielsen, P. (2013). The bullwhip effect in supply chains with stochastic lead times. [*Mathematical Economics*]{} [**9**]{}, pp. 71–88.

Mohebbi, E., Posner, M.J.M. (1998). A continuous-review system with lost sales and variable lead time. [*Naval Research Logistics*]{} [**45**]{}, pp. 259–278.

Munson, C.L., Hu, J., Rosenblatt, M. (2003). Teaching the costs of uncoordinated supply chains. [*Interfaces*]{} [**33**]{}, pp. 24–39.

Nielsen, P., Michna, Z., Do, N.A.D. (2014). An empirical investigation of lead time distributions. [*IFIP Advances in Information and Communication Technology*]{} [**438**]{} (part I), pp. 435–442.

Reiner, G., Fichtinger, J. (2009). Demand forecasting for supply processes in consideration of pricing and market information. [*International Journal of Production Economics*]{} [**118**]{}, pp. 55–62.

Sarker, D., Zangwill, W. (1991). Variance effects in cyclic production systems. [*Management Science*]{} [**40**]{}, pp. 603–613.

So, K.C., Zheng, X. (2003). Impact of supplier’s lead time and forecast demand updating on retailer’s order quantity variability in a two-level supply chain. [*International Journal of Production Economics*]{} [**86**]{}, pp. 169–179.

Song, J. (1994). The effect of leadtime uncertainty in simple stochastic inventory model. [*Management Science*]{} [**40**]{}, pp. 603–613.

Song, J. (1994). Understanding the leadtime effects in stochastic inventory systems with discounted costs. [*Operations Research Letters*]{} [**15**]{}, pp.
85–93.

Song, J., Zipkin, P. (1993). Inventory control in a fluctuating demand environment. [*Operations Research*]{} [**41**]{}, pp. 351–370.

Song, J., Zipkin, P. (1996). Inventory control with information about supply chain conditions. [*Management Science*]{} [**42**]{}, pp. 1409–1419.

Zhang, X. (2004). The impact of forecasting methods on the bullwhip effect. [*International Journal of Production Economics*]{} [**88**]{}, pp. 15–27.

Zipkin, P. (1986). Stochastic leadtimes in continuous-time inventory models. [*Naval Research Logistics Quarterly*]{} [**33**]{}, pp. 763–774.

Zipkin, P.H. (2000). [*Foundations of Inventory Management.*]{} McGraw-Hill, New York.

[^1]: Corresponding author\
Email: zbigniew.michna@ue.wroc.pl\
Tel/fax: +48 71 3680335
---
abstract: 'An imploding shell of radiation is shown to create a 2-D black hole within the framework of the “$R=T$” theory. The radius of the horizon is given by $r_H=\frac{1}{2M}$, where $M$ is the mass of the black hole. The topology of the central singularity is that of a corner. The radiation emitted very far from the black hole is thermal with temperature $\Theta^{\rm rad}=\frac{\hbar M}{2\pi k}$. The back-reaction problem is solved to one-loop order.'
address: 'Blackett Laboratory, Imperial College, Prince Consort Road, London SW7 2BZ, U.K.'
author:
- 'F. Vendrell$^\ast$'
date: 'May 19, 1997'
title: 'A black hole in two-dimensional space-time'
---

Black holes in 2-D space-times are toy models to understand the quantum physics of 4-D black holes, in particular their thermal radiation [@Ha]. Since the general theory of relativity has no physical content in two dimensions [@Co], other 2-D gravity theories have been considered, in particular string theory for which there exist 2-D black hole solutions [@CGHS]. It is possible, however, to extract from the equations of general relativity a gravity theory by considering the formal limit $D\rightarrow 2$, where $D$ is the dimension of space-time [@Ma]. One obtains the equation $$\begin{aligned} R(x) = 8\pi G\,T^{\rm cl}(x), \label{RT}\end{aligned}$$ where $c=1$, $G$ is Newton’s constant and where $T^{\rm cl}(x)$ is the trace of the classical energy-momentum tensor $T_{\mu\nu}^{\rm cl}(x)$. This equation is supplemented by the continuity equation $$\begin{aligned} \nabla^\mu T_{\mu\nu}^{\rm cl} (x) = 0, \label{CE}\end{aligned}$$ which also follows from the equations of general relativity. A black-hole solution was obtained from these equations by Mann et al. by considering a static distribution of matter localised in space [@MST]. This black hole is an extension and generalisation of the solution of Brown et al. [@BHT].
It has a number of similarities with the Schwarzschild black hole; in particular, it may be created from a collapse of matter. However, there is no [*global*]{} set of coordinates for which the metric is Minkowskian very far from the black hole, and, furthermore, its thermal radiation does not originate from a [*global*]{} vacuum [@ST]. At least in this sense this black hole differs from the 4-D case. There is, however, a 2-D black hole based on Eqs. (\[RT\]) and (\[CE\]) which satisfies the properties above, as shown in the present Letter. This is formed from an imploding shell of radiation symmetric under space-reflection. The metric obtained is discontinuous when the shell is infinitesimally thin, as in the 3-D circularly symmetric collapse [@3D], but contrary to the 4-D spherically symmetric collapse, where it is possible to require its continuity (but not the continuity of its derivative) [@Sy]. The 2-D metric exhibits a coordinate singularity, where the horizon is located, and a central [*real*]{} singularity as well. The interior and exterior of the black hole are causally disconnected regions. A massless or massive particle inside the black hole will inevitably fall into the central singularity within a finite affine parameter or time. The topology of the singularity is that of a corner. I also show that this 2-D black hole emits thermal radiation whose temperature is proportional to its mass. The back-reaction on space-time may easily be analysed to one-loop order due to the simplicity of the model. Consider a 2-D space-time which is symmetric under space-reflection with respect to an origin and which is covered [*globally*]{} by the set of coordinates $(t,r)$, where $t\in{{\mathchoice{\hbox{$\sf\textstyle I\hspace{-.15em}R$}} {\hbox{$\sf\textstyle I\hspace{-.15em}R$}} {\hbox{$\sf\scriptstyle I\hspace{-.10em}R$}} {\hbox{$\sf\scriptscriptstyle I\hspace{-.11em}R$}}}}$ is the time and $r\geq 0$ is the distance to the origin.
I assume that this space-time may also be covered by a set of [*conformal*]{} coordinates $x^\pm=x^0\pm x^1\in{{\mathchoice{\hbox{$\sf\textstyle I\hspace{-.15em}R$}} {\hbox{$\sf\textstyle I\hspace{-.15em}R$}} {\hbox{$\sf\scriptstyle I\hspace{-.10em}R$}} {\hbox{$\sf\scriptscriptstyle I\hspace{-.11em}R$}}}}$: $$\begin{aligned} ds^2= C(x)\,dx^+\,dx^-, \label{conformal}\end{aligned}$$ such that, if $x^+\leq x^+_0$ and $x^1\geq0$, $$\begin{aligned} x^\pm = t \pm r, \label{rtx<} \end{aligned}$$ where $x^+_0>0$ is an arbitrary constant. Consider now an imploding shell of radiation localised at $x^+=x^+_0$ and defined in accordance with Eq. (\[CE\]) by $$\begin{aligned} T^{\rm cl} (x) =\frac{M}{2\pi G}\,\delta(x^+-x^+_0), \label{SW}\end{aligned}$$ where $M>0$ is a constant. Equation (\[RT\]) implies that the curvature is infinite where the shell is localised and that the conformal factor $C(x)$ satisfies the differential equation $$\begin{aligned} \partial_+\partial_- \ln \vert C(x)\vert = M\,C(x)\,\delta(x^+-x^+_0). \label{C(x)}\end{aligned}$$ I also assume that the space-time is Minkowskian inside the imploding shell: $$\begin{aligned} C(x)= 1, &\hspace{10mm}&\mbox{if $x^+<x^+_0$.} \label{BC} \end{aligned}$$ The problem consists of obtaining the conformal factor $C(x)$ outside the imploding shell and then extending there the $(t,r)$ coordinates in a sensible way. Then the entire accessible space-time will be known, since this is represented in the plane $(x^+,x^-)$ by the set of points whose radius $r$ is larger than zero, also called the [*physical region*]{}. Equation (\[rtx<\]) implies that the region $x^1<0$ is unphysical if $x^+\leq x^+_0$. The problem given by Eqs. (\[C(x)\]) and (\[BC\]) is not well defined for [*every*]{} definition of the delta function (or structure of the imploding shell), and if it is, the solution will depend on that definition (see below).
I assume here that the delta function is defined by $$\begin{aligned} \int_{-\infty}^{+\infty}du \, f(u)\,\delta(u)= \lim_{\epsilon\rightarrow 0^+} f(\epsilon) \equiv f(0^+),\end{aligned}$$ where the test function $f$ is continuous except maybe at $u=0$. This delta function may be represented by the limit of a sequence of normalised functions $\{\delta_n\}_{n\in{{\mathchoice{\hbox{$\sf\textstyle I\hspace{-.15em}N$}} {\hbox{$\sf\textstyle I\hspace{-.15em}N$}} {\hbox{$\sf\scriptstyle I\hspace{-.10em}N$}} {\hbox{$\sf\scriptscriptstyle I\hspace{-.11em}N$}}}}}$ vanishing for negative arguments (see fig. 1 for a particular representation). Define also the theta function $\theta$ by $$\begin{aligned} \theta(u) &=& \left\{ \begin{array}{rcl} 0, & \hspace{5mm} &\mbox{if $u< 0$,} \\ [2mm] 1, & &\mbox{if $u>0$,} \end{array} \right.\end{aligned}$$ and the function $m(x^+) = M\,\theta(x^+-x^+_0)$. Then integrating Eq. (\[C(x)\]) once gives $$\begin{aligned} \partial_- \ln \vert C(x)\vert = m(x^+)\,\left.C(x)\right\vert_{x^+=x^+_0+0^+},\end{aligned}$$ where the boundary condition (\[BC\]) has been used. This equation would have been different for another definition of the delta function [@DF]. Integrating again gives the discontinuous line element $$\begin{aligned} ds^2= \frac{dx^+\,dx^-}{1-m(x^+)\,(x^--\Delta)}, \label{dsx}\end{aligned}$$ where $\Delta$ is a constant. This result was first obtained in Ref. [@Ve] but is valid in our case only in the physical region which will be fixed below. The value of the conformal factor at $x^+=x^+_0$ may not be determined. This is because the solution of Eqs. (\[C(x)\]) and (\[BC\]), when the delta function is replaced by one of the functions $\delta_n$, depends strongly on $x$ in the interval $[x^+_0,x^+_0+1/n]$ if $n\gg1$. The solution becomes discontinuous in the limit $n\rightarrow \infty$ only.
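A quick numerical check of this convention (an illustrative sketch; the box-function representation $\delta_n=n$ on $(0,1/n)$ is assumed here, and the function names are ours) confirms that smearing against such a sequence of normalised functions vanishing for negative arguments picks out the right-hand limit $f(0^+)$ even when $f$ jumps at $u=0$:

```python
def delta_n(u, n):
    # normalised box functions vanishing for negative arguments:
    # delta_n(u) = n on (0, 1/n) and 0 elsewhere
    return float(n) if 0.0 < u < 1.0 / n else 0.0

def smeared(f, n, pts=10000):
    # midpoint-rule approximation of  int f(u) delta_n(u) du  over (0, 1/n)
    h = 1.0 / (n * pts)
    return sum(f((i + 0.5) * h) * delta_n((i + 0.5) * h, n) * h
               for i in range(pts))

# f jumps at u = 0: f(0^-) = -1 but f(0^+) = 1
f = lambda u: 1.0 + u if u > 0.0 else -1.0
```

As $n$ grows, `smeared(f, n)` tends to $f(0^+)=1$, not to $f(0)$ or $f(0^-)$.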
The constant $\Delta$ is arbitrary since the forms of both the metric (\[conformal\]) and the energy-momentum tensor (\[SW\]) are invariant under translation of $x^-$. The conformal factor $C(x)$ is singular at $x^-= x^-_H\equiv \frac{1}{M}+\Delta$, if $x^+> x^+_0$, and is negative only if $x^+> x^+_0$ and $x^->x^-_H$. Furthermore, it is equal to one on the surface defined by $x^-=\Delta$ and $x^1\geq 0$. This implies that the energy of the shock wave (\[SW\]) flowing through this surface is given by $$\begin{aligned} E = 2 \int_{\Delta}^\infty dx^+ \, \left. T^{{\rm cl}}(x)\right\vert_{x^-=\Delta} = \frac{M}{\pi G}\cdot \label{EM}\end{aligned}$$ To better understand the structure of the space-time obtained, the two sets of coordinates $y_I$ and $y_E$ are defined for $x^->x_H^-$ and $x^-<x_H^-$ respectively by $$\begin{aligned} &\mbox{\hspace{3mm}}& \left\{ \begin{array}{rcl} y^+_{^{^I_E}}(x^+) &=& x^+, \\ [2mm] y^-_{^{^I_E}}(x^-) &=& \pm\frac{1}{M}\,\ln \left\vert\,x^-_H-x^- \,\right\vert, \end{array} \right. \label{y}\end{aligned}$$ where $y_{I,E}^\pm\in{{\mathchoice{\hbox{$\sf\textstyle I\hspace{-.15em}R$}} {\hbox{$\sf\textstyle I\hspace{-.15em}R$}} {\hbox{$\sf\scriptstyle I\hspace{-.10em}R$}} {\hbox{$\sf\scriptscriptstyle I\hspace{-.11em}R$}}}}$ and the upper (lower) sign is related to the upper half-plane $x^->x_H^-$ (lower half-plane $x^-<x_H^-$) \[the subscripts are related to the Interior and Exterior of the black hole\]. In these coordinates the line element (\[dsx\]) becomes, if $y^+> x^+_0$, $$\begin{aligned} ds^2 = \left\{ \begin{array}{ccl} -dy_I^+\,dy_I^-, & \mbox{\hspace{3mm}}& \mbox{if $x^->x_H^-$,} \\[3mm] +dy_E^+\,dy_E^-, & &\mbox{if $x^-<x_H^-$.} \end{array} \right. \label{dsy}\end{aligned}$$ This implies that the geodesics expressed in terms of the $x$ and $y_{I,E}$ coordinates are straight lines in the half-planes $x^+<x^+_0$ and $x^+> x^+_0$ respectively.
The geodesics are continuous functions of their affine parameter, but the slopes of non-null geodesics are discontinuous across the shell because the curvature is infinite there. Their velocity $\dot{x}_f$ just outside the shell is given in terms of their initial velocity $\dot{x}_i$ by $\dot{x}_f^-=\dot{x}_i^-$ and $$\begin{aligned} \dot{x}_f^+= M\,\dot{x}_i^+\, \left(\,x^-_H-x^-_c\,\right),\end{aligned}$$ if $x_c$ is where they cross the shell. It takes an [*infinite*]{} proper-time or affine parameter for a time-like or null geodesic to reach or to move away from the coordinate singularity at $x^-=x^-_H$. This implies that the two regions $x^->x^-_H$ and $x^-<x^-_H$ are [*causally disconnected*]{} if $x^+> x^+_0$. There is no set of local coordinates centred at $x^-=x^-_H$ for which the metric is not singular there. Although space-time is flat everywhere if $x^+>x^+_0$, it is not possible to find a set of [*global*]{} coordinates for which [*i)*]{} the metric is Minkowskian and [*ii)*]{} which cover the half-plane $x^+>x^+_0$. At least two such sets of coordinates must be used, as for instance the $y_{I,E}$ coordinates. The $(t,r)$ coordinates are now extended in the right half-plane $x^+> x^+_0$. Contrary to the higher-dimensional cases, there is no differential equation for $r=r(x^+,x^-)$ which follows from the equations of motion. This stems from the fact that, in two dimensions, there is no angular contribution to the metric. In consequence, the function $r(x^+,x^-)$ is left undetermined for $x^+> x^+_0$. This is also the case for the function $t(x^+,x^-)$. It is however natural to assume that the curves $r=const.$ are geodesics outside the imploding shell as well.
By extending them across the shell continuously, the spatial coordinate $y^1$ is related to the radius by $$\begin{aligned} y^1_{^{^I_E}}(r) = \frac{x^+_0}{2}\mp \frac{1}{2M}\log\left\vert\,2r+x^-_H-x^+_0\,\right\vert.\end{aligned}$$ I require this expression to take the more familiar form $$\begin{aligned} y^1_{^{^I_E}}(r) = r_0 \mp r_H \log\left\vert\,r-r_H\,\right\vert, \label{rt}\end{aligned}$$ where $r_0=\displaystyle x^+_0/2\mp\ln 2/(2M)$, so that the constants $x_H^-$ and $r_H$ may be fixed: $$\begin{aligned} \begin{array}{rclcrcl} x_H^- &=& \displaystyle x^+_0 - \frac{1}{M}, &\hspace{5mm}& r_H &=& \displaystyle \frac{1}{2M}\cdot \label{rH} \end{array}\end{aligned}$$ The time is defined by $t = \mp y^0_{^{^I_E}}$ if $x^+>x^+_0$. The coordinates $(x^+,x^-)$ and $(t,r)$ are thus related through Eqs. (\[y\]) and (\[rt\]) by $$\begin{aligned} e^{-2Mt} &=& e^{\pm M x^+} \left\vert\, x^-_H-x^-\,\right\vert, \label{rtx>*}\\ [2mm] 2r-\frac{1}{M}&=& e^{\mp M (x^+-x^+_0)}\left(\,x^-_H-x^-\,\right). \label{rtx>}\end{aligned}$$ These equations show that the two curves $r=r_H$ and $x^-=x^-_H$ coincide if $x^+>x^+_0$ and that $x^-=x^-_H$ implies $t=+\infty$ if $x^+>x^+_0$. The amount of time $\delta t(r)$ for a curve $r=const.$ to cross the shell is given from Eqs. (\[rtx<\]), (\[rtx>\*\]) and (\[rtx>\]) by $$\begin{aligned} \delta t(r)= r-\frac{1}{2M}\ln\left\vert\, r-\frac{1}{2M}\,\right\vert \mp r_0 -x^+_0,\end{aligned}$$ and diverges when $r$ tends to $r_H$. The accessible space-time and its topology are now completely specified. They are represented in fig. 2. Inside the collapsing shell the line element in the $(t,r)$ coordinates is Minkowskian, and outside of it is given from Eqs. (\[dsy\]) and (\[rt\]) by $$\begin{aligned} ds^2 = {\rm sgn}\left(r-\frac{1}{2M}\right)\, \left[\,dt^2- \frac{dr^2}{\left(\,2Mr-1\,\right)^2}\,\right]. \label{dsrt}\end{aligned}$$ This line element is singular at $r=r_H$.
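As a numeric sanity check of the coordinate relations above (an illustrative sketch with arbitrarily chosen values of $M$ and $x^+_0$; only the exterior branch, i.e. the lower signs, is coded), one can verify that approaching $x^-=x^-_H$ from the exterior drives $r\rightarrow r_H=\frac{1}{2M}$ while $t$ grows without bound:

```python
import math

M, x0 = 0.5, 1.0                 # illustrative mass and shell position x_0^+
xH = x0 - 1.0 / M                # x_H^- = x_0^+ - 1/M
rH = 1.0 / (2.0 * M)             # horizon radius r_H = 1/(2M)

def r_exterior(xp, xm):
    # exterior branch of  2r - 1/M = exp(+M (x^+ - x_0^+)) (x_H^- - x^-)
    return 0.5 * (math.exp(M * (xp - x0)) * (xH - xm) + 1.0 / M)

def t_exterior(xp, xm):
    # exterior branch of  exp(-2 M t) = exp(-M x^+) |x_H^- - x^-|
    return (M * xp - math.log(abs(xH - xm))) / (2.0 * M)
```

For fixed $x^+>x^+_0$, letting $x^-$ tend to $x^-_H$ from below sends $r$ to $r_H$ and $t$ to $+\infty$, so the curves $r=r_H$ and $x^-=x^-_H$ coincide only at infinite coordinate time.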
The region $r\leq r_H$ outside the imploding shell is a [*trapped*]{} region because all ingoing and outgoing null geodesics with initial point in this region will be contained in it, and so no light ray comes out of it. Furthermore in this region the roles of the time and radius are exchanged, i.e. the curves $r=const.$ are space-like geodesics there. This means that [*all null and time-like geodesics with $r<r_H$ will reach the singularity at $r=0$*]{}. The nature of this singularity is analysed below. The boundary of the trapped region is the [*apparent horizon*]{} located at $r=r_H$ if $x^+>x^+_0$. The line element (\[dsrt\]) is not Minkowskian far from the black hole. It is useful to introduce in this connection the new set of coordinates $(T,R)$ defined by $T=t$ and $$\begin{aligned} R+R_H\log\left\vert\,R-R_H\,\right\vert = r_H \log\left\vert\,r-r_H\,\right\vert. \label{R}\end{aligned}$$ By assuming that the origins of the coordinates $r$ and $R$ coincide, one deduces $R_H=r_H$. Inside the collapsing shell the line element is not Minkowskian in these new coordinates, but outside of it and very far from the black hole it is asymptotically Minkowskian: $$\begin{aligned} ds^2 = {\rm sgn}\left(R-\frac{1}{2M}\right)\, \left[\,dT^2- \frac{dR^2}{\left(\,1-\frac{1}{2MR}\,\right)^2}\,\right]. \label{dsRT}\end{aligned}$$ This line element exhibits a singularity in the limit $R\rightarrow 0$, which does not appear in the metrics (\[dsx\]), (\[dsy\]) and (\[dsrt\]). So one may suspect that this is due to the bad behaviour of the coordinate $R$ and that this is not a property of space-time itself. But this is not the case. That the central singularity is [*real*]{} comes from the fact that it takes a [*finite*]{} amount of proper time or affine parameter for all time-like or null geodesics inside the apparent horizon to fall into $r=0$, and that it does not make any sense to extend them further [@HEWJ]. 
One expects indeed a space-time singularity to be located at $r=0$ since all the energy is concentrated there after the collapse. This singularity has the topology of a [*corner*]{} because the scalar curvature vanishes along any curve ending at $r=0$, but is not defined there. The remarkable result is that, contrary to the 3-D case [@SGA], but like the 4-D case [@Sy], [*the central singularity is located inside an apparent horizon*]{}. The line element (\[dsRT\]) thus describes a black hole. The constant $M$ is identified with its mass because of Eq. (\[EM\]) [@tH]. The existence of a spontaneous creation of particles in this space-time is shown by calculating the expectation value of the energy-momentum tensor in the incoming vacuum. This quantity will be denoted by $T_{\mu\nu}^{\rm rad}$. Outside the imploding shell and far from the black hole, Eqs. (\[y\]), (\[rt\]) and (\[R\]) imply that $x^\pm\approx x^\pm(Z^\pm)$, where $Z^\pm=T\pm R$. For both the scalar and Dirac massless fields, $T_{\mu\nu}^{\rm rad}$ is thus given in this region in terms of the Schwarzian derivative of $x^\pm(Z^\pm)$ [@Ve; @DFU], $$\begin{aligned} T_{\pm\pm}^{\rm rad}\left(Z^\pm\right) \approx -\frac{1}{24\pi}\, \left[\,\partial^2_\pm \ln\partial_\pm x^\pm - \frac{1}{2} \left(\partial_\pm \ln\partial_\pm x^\pm\right)^2\,\right], \nonumber\end{aligned}$$ and $T_{+-}^{\rm rad} \approx 0$. Noting furthermore that $x^+(Z^+)\approx Z^+$ and $x^-(Z^-)\approx x^-_H-\exp(-MZ^-)$, one obtains in this region $$\begin{aligned} \begin{array}{rlccrlc} \displaystyle T_{++}^{\rm rad}\left(Z^+\right) &\approx& 0, &\hspace{5mm}& \displaystyle T_{--}^{\rm rad}\left(Z^-\right) &\approx& \displaystyle \frac{M^2}{48\pi}\cdot \end{array}\end{aligned}$$ This shows that the black hole emits radiation.
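As a minimal numeric cross-check (with $\hbar=k=1$; the 2-D thermal-bath relation $T=\frac{\pi}{12}\Theta^2$ from [@Ve] is assumed), inverting that relation on the constant flux $T_{--}^{\rm rad}=\frac{M^2}{48\pi}$ indeed returns a temperature $\Theta=\frac{M}{2\pi}$:

```python
import math

def flux(M):
    # asymptotic outgoing flux  T_-- = M^2 / (48 pi)   (hbar = k = 1)
    return M ** 2 / (48.0 * math.pi)

def temperature_from_flux(F):
    # invert the 2-D thermal relation  F = (pi / 12) * Theta^2
    return math.sqrt(12.0 * F / math.pi)
```

Restoring units multiplies the result by $\hbar/k$, reproducing the temperature quoted in the abstract.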
Since the thermal average of the energy-momentum tensor equals $\frac{\pi k^2}{12\hbar^2}\,\Theta^2$ [@Ve], where $\Theta$ is the temperature, then $$\begin{aligned} \Theta^{\rm rad} = \frac{\hbar M}{2\pi k}\end{aligned}$$ is the temperature of the outgoing radiation detected by a distant [*inertial*]{} observer outside the imploding shell. The back-reaction on space-time of the emitted radiation is analysed to one-loop order by adding to Eq. (\[RT\]) the trace anomaly $T^{\rm rad}(x)=-\frac{\hbar}{24\pi}R(x)$ [@DFU; @BF]. One thus obtains $$\begin{aligned} R(x) = \alpha\,8\pi G \, T^{\rm cl}(x)\end{aligned}$$ where $\alpha=\frac{1}{1+\hbar G/3} < 1$. The mass and temperature of the black hole are then renormalised by a term of order Planck length squared and become smaller: $$\begin{aligned} \begin{array}{rclcrcl} M_{\rm BR} &=& \alpha\,M, &\hspace{5mm}& \Theta^{\rm rad}_{\rm BR} &=& \displaystyle \alpha\,\frac{\hbar M}{2\pi k}\cdot \end{array}\end{aligned}$$ Furthermore, from Eq. (\[rH\]), its radius becomes larger. If the “$R=T$” theory is considered instead of the 2-D theory of general relativity, the results of the present Letter show that there is a 2-D black hole whose metric, in a given set of coordinates, is Minkowskian far away. This 2-D black hole differs in essence from the Schwarzschild black hole by the fact that its central singularity is not [*physically genuine*]{} although real [@HEWJ], because the curvature does not diverge in its neighbourhood. Also, contrary to the 4-D case, this 2-D black hole grows without limit when its mass vanishes. Although the line element (\[dsRT\]) may not be directly obtained as a static solution from the equation of motion, it is physically relevant because it describes the result of a collapse. I would like to thank C. Isham, M. E. Ortiz, and S. Schreckenberg for helpful conversations and critical reading of the manuscript.
I also acknowledge support from the Société Académique Vaudoise and from the Swiss National Science Foundation. $\ast$Electronic address: f.vendrell@ic.ac.uk S. W. Hawking, Commun. Math. Phys. [**43**]{}, 199 (1975). P. Collas, Am. J. Phys. [**45**]{}, 833 (1977). C. G. Callan [*et al.*]{}, Phys. Rev. D [**45**]{}, R1005 (1992). R. B. Mann, Gen. Rel. Grav. [**24**]{}, 433 (1992); R. B. Mann, and S. F. Ross, Class. Quant. Grav. [**10**]{}, 1405 (1993). R. B. Mann, A. Shiekh, and L. Tarasov, Nucl. Phys. [**B341**]{}, 134 (1990); D. Christensen, and R. B. Mann, Class. Quantum Grav. [**9**]{}, 1769 (1992). J. D. Brown, M. Henneaux, and C. Teitelboim, Phys. Rev. D [**33**]{}, 319 (1986). R. B. Mann, and T. G. Steele, Class. Quantum Grav. [**9**]{}, 475 (1992). V. Husain, Phys. Rev. D [**50**]{}, R2361 (1994). J. L. Synge, Proc. Roy. Irish Acad. [**59**]{} A, 1 (1957). If the delta function had been defined by $\int_{-\infty}^{+\infty}du \, f(u)\,\delta(u)=f(0^-)$, then Eqs. (\[C(x)\]) and (\[BC\]) would imply $\partial_- \ln \vert C(x)\vert = m(x^+)$ and $C(x)= \exp\left[m(x^+)\,(x^--\Delta)\right]$, where $\Delta$ is a constant and $m(x^+)$ is defined as above. The problem is not well defined if $\int_{-\infty}^{+\infty}du \, f(u)\,\delta(u) =f(0)$. F. Vendrell, Helv. Phys. Acta [**70**]{}, 598 (1997). S. W. Hawking, and G. F. R. Ellis, [*The Large Scale Structure of Space-Time*]{} (Cambridge University Press, 1973); R. M. Wald, [*General Relativity*]{} (The University of Chicago Press, 1984); P. S. Joshi, [*Global Aspects in Gravitation and Cosmology*]{} (Oxford University Press, 1996). A. Staruszkiewicz, Acta Phys. Polon. [**24**]{}, 735 (1963); J. R. Gott III, and M. Alpert, Gen. Rel. Grav. [**16**]{}, 243 (1984). G. ’t Hooft, Int. J. Mod. Phys. A [**11**]{}, 4623 (1996). P. C. W. Davies, and S. A. Fulling, Proc. R. Soc. Lond. A [**354**]{}, 59 (1977); P. C. W. Davies, and W. G. Unruh, Proc. R. Soc. Lond. A [**356**]{}, 259 (1977). R. Balbinot, and R. 
Floreanini, Phys. Lett. [**151B**]{}, 401 (1985); N. Sanchez, Nucl. Phys. [**B266**]{}, 487 (1986).
---
abstract: 'We discuss the proton lifetime in pure gravity mediation models with non-universal Higgs soft masses. Pure gravity mediation offers a simple framework for studying SU(5) grand unified theories with split-supersymmetry-like spectra. We find that for much of the parameter space gauge coupling unification is quite good, leading to rather long lifetimes for the proton. However, for $m_{3/2}\sim 60$ TeV and $\tan\beta\sim 4$, for which gauge coupling unification is also good, the proton lifetime is short enough that it could be in reach of future experiments.'
---

FTPI-MINN-15/02\
UMN–TH–3417/15\
IPMU15-0011

Jason L. Evans$^a$, Natsumi Nagata$^{a,b}$, and Keith A. Olive$^a$

[*$^a$William I. Fine Theoretical Physics Institute, School of Physics and Astronomy,\
University of Minnesota, Minneapolis, MN 55455, USA*]{}\
[*$^b$Kavli IPMU (WPI), TODIAS, University of Tokyo, Kashiwa 277-8583, Japan*]{}

Introduction
============

After the initial run of the LHC, the constraints on new physics are rather severe [@lhc]. Although models can still be made to realize weak-scale mass spectra, sfermion masses of generic models like the constrained minimal supersymmetric standard model (CMSSM) [@cmssm] are now required to be larger than about a TeV. As a result, the naturalness of supersymmetry (SUSY) has been called into question. However, it was perhaps naive to expect nature to fall into our strict definition of naturalness with less than $10\%$ fine-tuning. Supersymmetry, with sfermion masses larger than a TeV, still solves the larger hierarchy problem associated with grand unification and/or the Planck scale. Furthermore, if the sfermion masses are set by the gravitino mass, $m_{3/2}$, and are larger than about 10 TeV, the gravitino lifetime is short enough that it decays before BBN [@gravitinoBBN].
Moreover, as the mass scale of the sfermions is pushed beyond the weak scale, the constraints on SUSY models from flavor and CP violation in the sfermion sector are greatly relaxed [@flavor]. These advantages, plus the fact that sfermion masses this large are consistent with a larger Higgs mass like the $126$ GeV Higgs boson seen at the LHC [@lhch], suggest we relax our strict definition of naturalness. Large sfermion masses like those found in split supersymmetry [@split] are realized in models such as pure gravity mediation (PGM) [@pgm; @Hall:2011jd], which can be parametrized by a single parameter, $m_{3/2}$ [@eioy]. This minimal model of pure gravity mediation is similar in many ways to minimal supergravity (mSUGRA). Universal masses equal to $m_{3/2}$ are imposed at the grand unified theory (GUT) scale based on the assumption that the Kähler manifold is flat for all matter fields. Unlike the CMSSM, gauginos do not get a tree-level mass. This is because the supersymmetry breaking field is not a singlet and so is excluded from coupling to the gauge kinetic function to leading order. Thus, the leading order contribution to the mass of the gauginos comes from anomaly mediation [@anom] and is loop suppressed relative to the sfermion masses. The $B$-term, which contributes to electroweak symmetry breaking, is identical to that in mSUGRA, $B=A-m_{3/2}$. However, since the $A$-terms of PGM are effectively zero, $B=-m_{3/2}$ and $B$ is fixed for a given value of $m_{3/2}$. This makes radiative electroweak symmetry breaking (EWSB) difficult. However, by adding a Giudice-Masiero term [@gm], $B$ is no longer fixed by $m_{3/2}$ alone, but also depends on the coupling of the Giudice-Masiero term. This additional freedom in $B$ makes radiative EWSB possible, but only for small values of $\tan\beta$. Once the Higgs mass constraint is taken into consideration, these models have a single free parameter which is some combination of $m_{3/2}$ and $\tan\beta$ [@eioy].
However, because $\tan\beta$ is restricted to be less than about 3, $m_{3/2}$ tends to be rather large. The constraints on $\tan\beta$ can be removed, if the Higgs soft masses at the GUT scale are taken to be non-universal [@eioy2]. In this case, $m_{3/2}$ can be taken to be smaller for larger values of $\tan\beta$. Another important motivation for SUSY is grand unification [@Georgi:1974sy]. In the Standard Model (SM), the gauge couplings approach each other as they are run up to the high scale [@Georgi:1974yf]. However, the quality of the coupling unification is less than convincing. If the SM is supersymmetrized, on the other hand, the unification of the gauge couplings becomes quite good [@Dimopoulos:1981yj]. Furthermore, grand unification in the SM would generate enormous quadratic divergences for the Higgs boson. However, these quadratic divergences are significantly reduced for supersymmetric grand unified theories, even if the sfermions are larger than a TeV. Clearly, grand unification is another motivation for PGM. The signatures of these simple PGM-type models are limited. One possible signature at the LHC for small $m_{3/2}$ is the wino [@winolhc]. For larger $m_{3/2}$, on the other hand, the wino cannot be seen at the LHC but could be a viable thermal relic dark matter candidate [@Hisano:2006nn]. If this is indeed the case, it could be seen by indirect detection experiments in the near future [@Bhattacherjee:2014dya]. However, this scenario is already under tension from existing indirect detection experiments [@wino]. The direct detection of wino dark matter is challenging as its scattering cross section with a nucleon is as small as $10^{-47}~\text{cm}^2$ [@winodd]. A Higgsino signature at the LHC is another possible observable which arises from tuning $\mu$ to be small [@higgsinolhc]. However, this is also difficult to see. 
This scenario could also have Higgsino-like dark matter which could possibly be seen in future indirect detection experiments [@baer]. The scattering cross section of the Higgsino with a nucleon is dependent on the size of the wino component of the LSP, and may be probed in future experiments [@Hisano:2012wm]. In this work, we will examine another possible signature of these models. Since the colored triplet Higgs gives threshold corrections to the gauge couplings when integrated out, the quality of the coupling unification determines the mass of the colored triplet Higgs [@Hisano:1992mh; @Hisano:1992jj; @Hisano:2013cqa] and so affects the lifetime of the proton. When the colored Higgs is integrated out, it also generates a dimension-five operator proportional to the down-type Yukawa couplings which leads to proton decay [@dim5protondecay]. Since this operator is proportional to the down-type Yukawa couplings, it will be enhanced for large $\tan\beta$. Proton decay from this dimension-five operator arises from a loop diagram with a Higgsino mass insertion [@higprotondec]. Proton decay of this type can then be suppressed for small $\mu$. When unification is not ideal and $\tan\beta$ is large, a larger Higgsino mass can increase the rate of proton decay from this dimension-five operator. Parameters of this size are viable in PGM models. Since proton decay of this type is also suppressed by $m_{3/2}$, the more interesting parameter space will be for smaller $m_{3/2}$ and larger $\tan\beta$. Therefore, we will need to consider non-universal Higgs soft masses. We will find that if $m_{3/2}$ is small and $\tan\beta$ is large, which is also consistent with the Higgs mass measurement, the proton lifetime may be in reach of future experiments. However, for much of the parameter space the lifetime tends to be well beyond the reach of future experiments.
We will also look at the quality of the gauge coupling unification determined by the deviation of the colored triplet Higgs mass, $M_{H_C}$, from the GUT scale as well as the deviation of $\left(M_X^2M_\Sigma\right)^{1/3}$, where $X$ represents the GUT-scale SU(5) gauge bosons that become massive and $\Sigma$ is the ${\bf 24}$ which breaks SU(5) at the GUT scale.

Minimal SUSY SU(5) GUT {#sec:MSGUT}
======================

In this section, we will outline the SU(5) SUSY GUT theory [@Dimopoulos:1981zb; @Sakai:1981gr] we will consider. Additional details on these models can be found in Appendix \[App:NotCon\]. The superpotential for this minimal SU(5) SUSY GUT is given by $$W=W_{\text{Higgs}}+W_{\text{Yukawa}} ~, \label{eq:superpotential}$$ where $$\begin{aligned} W_{\rm Higgs} &= \frac{1}{3}\lambda_{\Sigma}{\rm Tr}\Sigma^3 +\frac{1}{2}m_\Sigma {\rm Tr} \Sigma^2 +\lambda_H \bar{H}\Sigma H +m_H\bar{H} H ~, \label{superpotentialHiggs} \\ W_{\rm Yukawa} &= \frac{1}{4}h^{ij}\epsilon_{\hat{a}\hat{b}\hat{c}\hat{d}\hat{e}}\Psi_i^{\hat{a} \hat{b}} \Psi_j^{\hat{c}\hat{d}}H^{\hat{e}} -\sqrt{2} f^{ij}\Psi_i^{\hat{a}\hat{b}} \Phi_{j\hat{a}}\bar{H}_{\hat{b}}~, \label{superpotentialYukawa}\end{aligned}$$ and $\hat{a},\hat{b},\dots=1$–$5$ represent the SU(5) indices and $\epsilon_{\hat{a}\hat{b}\hat{c}\hat{d}\hat{e}}$ is the totally antisymmetric tensor with $\epsilon_{12345}=1$. $\Phi_i$ and $\Psi_i$ are the chiral superfields in the $\bar{\bf 5}$ and ${\bf 10}$ representations, respectively, with $i$ denoting the generation index. $H$ and $\bar H$ are the ${\bf 5}$ and $\bar {\bf 5}$ containing the minimal supersymmetric Standard Model (MSSM) doublets. In these expressions, we have assumed $R$-parity conservation which forbids terms like $\Psi\Phi\Phi$ and $H\Phi$.
The adjoint Higgs field, $\Sigma$, gets a vacuum expectation value (VEV) in the direction $$\langle \Sigma \rangle =V\cdot {\rm diag}(2,2,2,-3,-3)~,$$ breaking the SU(5) gauge group to the SM gauge groups SU(3)$_C\otimes$SU(2)$_{L}\otimes$U(1)$_Y$. Because SUSY remains unbroken when SU(5) is broken, we have $V=m_\Sigma/\lambda_\Sigma$. For this setup, the masses of $\Sigma_3$, $\Sigma_8$, $\Sigma_{24}$, and $H_C$ are given as $$M_{\Sigma}\equiv M_{\Sigma_8}=M_{\Sigma_3}=\frac{5}{2}\lambda_{\Sigma}V~, ~~~~~ M_{\Sigma_{24}}=\frac{1}{2}\lambda_{\Sigma}V~, ~~~~~ M_{H_C}=5\lambda_H V~,$$ while the $\mu$ term for the MSSM Higgs fields is $$\mu_0 =m_H -3\lambda_H V ~.$$ As is usually done, we tune the parameter $m_H$ to realize $\mu_0\ll m_H$, which is typically referred to as doublet-triplet splitting.[^1] In addition, the gauge interactions of the adjoint Higgs field yield an $X$-boson mass of $M_X=5\sqrt{2}g_5 V$, where $g_5$ is the unified gauge coupling constant. The components $\Sigma_{(3^*, 2)}$ and $\Sigma_{(3,2)}$ become the longitudinal components of the $X$ bosons, and thus do not appear as physical states. The Yukawa couplings $h^{ij}$ and $f^{ij}$ in Eq. (\[superpotentialYukawa\]) have redundant degrees of freedom, most of which are eliminated by the field redefinition of $\Psi$ and $\Phi$. Since $h^{ij}$ is a symmetric matrix, $h^{ij}$ and $f^{ij}$ have six and nine complex degrees of freedom, respectively. The field redefinitions of the SM fields form the U(3)$\otimes$U(3) transformation group, and thus the physical degrees of freedom turn out to be $(12+18)-9\times 2=12$. Among these degrees of freedom, six are the quark mass eigenvalues, four are the Cabibbo-Kobayashi-Maskawa (CKM) matrix parameters, and we are left with two phases [@Ellis:1979hy]. In this paper, we take the same basis used in Ref.
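As a quick numerical illustration of the mass relations above, the following sketch (ours, with purely hypothetical coupling values) evaluates the superheavy spectrum for given $\lambda_\Sigma$, $\lambda_H$, $g_5$, and $V$:

```python
import math

def gut_spectrum(lam_sigma, lam_h, g5, V):
    """Superheavy masses implied by <Sigma> = V diag(2,2,2,-3,-3)."""
    return {
        "M_Sigma":   2.5 * lam_sigma * V,            # M_{Sigma_8} = M_{Sigma_3}
        "M_Sigma24": 0.5 * lam_sigma * V,            # SM-singlet component
        "M_HC":      5.0 * lam_h * V,                # color-triplet Higgs
        "M_X":       5.0 * math.sqrt(2.0) * g5 * V,  # heavy SU(5) gauge bosons
    }

# Hypothetical inputs, chosen only to exercise the formulas:
spec = gut_spectrum(lam_sigma=0.5, lam_h=0.5, g5=0.7, V=1.0e16)
```

For comparable couplings all of these masses sit within an order of magnitude of $V$, which is why the threshold analysis of the next section resolves them only through logarithms.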
[@Hisano:1992jj] such that $$\begin{aligned} h^{ij}&= e^{i\varphi_i} \delta_{ij}f_{u_i}(Q_{{G}}) ~, \\ f^{ij}&= V^*_{ij} f_{d_j}(Q_{{G}}) ~,\end{aligned}$$ where $f_{u_i}(Q_{{G}})$ and $f_{d_j}(Q_{{G}})$ are the up-type and down-type Yukawa couplings, respectively, at a scale $Q_{{G}}$ around the GUT scale, and $V_{ij}$ is the CKM matrix. The phase factors $\varphi_i$ satisfy the condition $\sum_{i}\varphi_i=0$, and thus only two of them are independent. In this basis, the MSSM superfields are embedded into the SU(5) matter multiplets as $$\begin{aligned} \Psi_i&\ni \{ Q_i,~e^{-i\varphi_i}\overline{U}_i,~V_{ij}\overline{E}_j \}~,~~~~~~ \Phi_i\ni\{ \overline{D}_i,~L_i \}~.\end{aligned}$$ Then, Eq.  leads to $$\begin{aligned} W_{\rm Yukawa}&=f_{u_i} (Q^{a}_i\cdot H_2)\overline{U}_{ia}-V^*_{ij}f_{d_j} (Q^{a}_i\cdot H_1) \overline{D}_{ja}-f_{d_i} \overline{E}_i (L_i\cdot H_1)\nonumber \\[2pt] &-\frac{1}{2}e^{i\varphi_i}\epsilon_{abc}f_{u_i} (Q^{a}_i \cdot Q^{b}_i) H^c_C +V^*_{ij}f_{d_j}(Q^{a}_i\cdot L_j)\overline{H}_{Ca} \nonumber \\[2pt] &+f_{u_i}V_{ij}\overline{U}_{ia} \overline{E}_jH^a_C -V^*_{ij}f_{d_j}e^{-i\varphi_i}\epsilon^{abc} \overline{U}_{ia}\overline{D}_{jb}\overline{H}_{Cc}~. \label{eq:wyukawa}\end{aligned}$$ The new phase factors appear only in the couplings of the color-triplet Higgs multiplets. Mass Spectrum and Coupling Unification ====================================== To compute the proton decay rate, we need to evaluate the masses of the GUT-scale particles which induce the baryon-number violating interactions. In this section, we estimate these masses using the method discussed in Refs. [@Hisano:1992mh; @Hisano:1992jj; @Hisano:2013cqa]. The mass of the heavy particles is determined by first RG running the couplings to the scale where they approximately unify. 
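The parameter counting quoted above can be checked with trivial arithmetic; this small sketch (ours) just encodes the bookkeeping:

```python
def physical_yukawa_dof():
    """Real parameter count of h^{ij} (symmetric: 6 complex entries) and
    f^{ij} (general: 9 complex entries) after removing the U(3) x U(3)
    matter-field redefinitions (dim U(3) = 9, two factors)."""
    real_dof = 2 * 6 + 2 * 9   # 30 real parameters in h and f
    redefinitions = 9 * 2      # 18 removable by field redefinitions
    return real_dof - redefinitions

# The 12 survivors split as 6 quark masses + 4 CKM parameters + 2 GUT phases.
```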
Then, because the thresholds at the GUT scale depend on these superheavy particles, their masses can be determined by assuming the deviation in gauge coupling unification is solely due to these thresholds. Note, we will use the $\overline{\rm DR}$ scheme [@Siegel:1979wq] in the following calculation. At the scale $Q_G$ near the GUT scale, the one-loop matching conditions for the gauge coupling constants are as follows [@Weinberg:1980wa; @Hall:1980kf]: $$\begin{aligned} \frac{1}{g_1^2(Q_{G})}&=\frac{1}{g_G^2(Q_G)} +\frac{1}{8\pi^2}\biggl[ \frac{2}{5} \ln \frac{Q_{G}}{M_{H_C}}-10\ln\frac{Q_{G}}{M_X} \biggr]~,\nonumber \\ \frac{1}{g_2^2(Q_{G})}&=\frac{1}{g_G^2(Q_{G})} +\frac{1}{8\pi^2}\biggl[ 2\ln \frac{Q_{G}}{M_\Sigma}-6\ln\frac{Q_{G}}{M_X} \biggr]~,\nonumber \\ \frac{1}{g_3^2(Q_{G})}&=\frac{1}{g_G^2(Q_{G})} +\frac{1}{8\pi^2}\biggl[ \ln \frac{Q_{G}}{M_{H_C}}+3\ln \frac{Q_{G}} {M_\Sigma}-4\ln\frac{Q_{G}}{M_X} \biggr]~,\end{aligned}$$ where $g_G$ is the unified gauge coupling constant. Note that the conditions do not include constant (scale independent) terms since we use the $\overline{\rm DR}$ scheme for renormalization. Assuming the above equations contain the major thresholds for the gauge couplings, they can be used to solve for the masses $$\begin{aligned} \frac{3}{g_2^2(Q_{G})}- \frac{2}{g_3^2(Q_{G})} - \frac{1}{g_1^2(Q_G)} &=-\frac{3}{10\pi^2}\ln \biggl(\frac{Q_{G}}{M_{H_C}}\biggr) ~, \nonumber \\ \frac{5}{g_1^2(Q_{G})}- \frac{3}{g_2^2(Q_{G})} - \frac{2}{g_3^2(Q_{G})} &=-\frac{9}{2\pi^2}\ln\biggl( \frac{Q_{G}}{M_{G}}\biggr)~, \label{conditions}\end{aligned}$$ with $M_{\text{G}}\equiv (M_X^2M_{\Sigma})^{\frac{1}{3}}$. The above expressions allow us to find the masses of the heavy particles in the combination[^2], $M_{H_C}$ and $M_X^2M_{\Sigma}$. The value of $M_{H_C}$ and $M_G$ found from these relationships will be used below to find the lifetime of the proton. 
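The two relations in Eq. (\[conditions\]) can be inverted directly. The sketch below is our illustration, assuming the $\overline{\rm DR}$ couplings have already been run up to the scale $Q_G$:

```python
import math

def gut_masses(g1, g2, g3, QG):
    """Invert the two threshold relations: given the gauge couplings at the
    scale QG, return (M_HC, M_G) with M_G = (M_X**2 * M_Sigma)**(1/3)."""
    lhs_hc = 3.0 / g2**2 - 2.0 / g3**2 - 1.0 / g1**2
    lhs_g = 5.0 / g1**2 - 3.0 / g2**2 - 2.0 / g3**2
    # -(3/10 pi^2) ln(QG/M_HC) = lhs_hc  =>  M_HC = QG * exp((10 pi^2/3) lhs_hc)
    M_HC = QG * math.exp((10.0 * math.pi**2 / 3.0) * lhs_hc)
    # -(9/2 pi^2) ln(QG/M_G) = lhs_g    =>  M_G  = QG * exp((2 pi^2/9) lhs_g)
    M_G = QG * math.exp((2.0 * math.pi**2 / 9.0) * lhs_g)
    return M_HC, M_G
```

With exactly unified couplings both right-hand sides vanish and $M_{H_C}=M_G=Q_G$; any mismatch among the $g_i$ is exponentiated, which is why $M_{H_C}$ is such a sensitive probe of the quality of unification.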
Proton Decay ============ In the minimal SUSY GUT, proton decay is induced by the exchange of the color-triplet Higgs boson, and the dominant decay mode is, generally, $p\to K^+\bar{\nu}$ [@dim5protondecay]. We will only give details of the contributions from the colored Higgs boson since it will often be the dominant source of proton decay in PGM. At the GUT scale, the triplet Higgs boson is integrated out. The most important interaction for our considerations is the color-triplet Higgs exchange which we match at the scale $Q_G$ on to the dimension-five effective Lagrangian $${\cal L}_5^{\rm eff}= C^{ijkl}_{5L}{\cal O}^{5L}_{ijkl} +C^{ijkl}_{5R}{\cal O}^{5R}_{ijkl} ~~+~~{\rm h.c.}~,$$ where the effective operators ${\cal O}^{5L}_{ijkl}$ and ${\cal O}^{5R}_{ijkl}$ are defined by $$\begin{aligned} {\cal O}^{5L}_{ijkl}&\equiv\int d^2\theta~ \frac{1}{2}\epsilon_{abc} (Q^a_i\cdot Q^b_j)(Q_k^c\cdot L_l)~,\nonumber \\ {\cal O}^{5R}_{ijkl}&\equiv\int d^2\theta~ \epsilon^{abc}\overline{U}_{ia}\overline{E}_j\overline{U}_{kb} \overline{D}_{lc}~, \label{eq:dim5operators}\end{aligned}$$ and the Wilson coefficients $C^{ijkl}_{5L}$ and $C^{ijkl}_{5R}$ are given by $$\begin{aligned} C^{ijkl}_{5L}(Q_G)& =\frac{1}{M_{H_C}}f_{u_i} e^{i\varphi_i}\delta^{ij}V^*_{kl}f_{d_l}~,\nonumber \\ C^{ijkl}_{5R}(Q_G) &=\frac{1}{M_{H_C}}f_{u_i}V_{ij}V^*_{kl}f_{d_l} e^{-i\varphi_k} ~. \label{eq:wilson5}\end{aligned}$$ Note, the color indices must be completely antisymmetric for these interactions and as a result, only operators with at least two generations will be allowed. For this reason, the dominant decay modes contain a strange quark in their final state, *i.e.*, $p\to K^+\bar{\nu}$. As can be seen in Eq. , at the GUT scale the lepton and down-type quark Yukawa couplings should be equal. However, in running up from the weak scale, we find them to be quite different especially those for the first two generations. 
The difference is, however, easily compensated by effects above the GUT scale; for instance, the higher-dimensional operators induced at the Planck scale contribute to the Yukawa couplings, which may account for this difference [@Nath:1996qs; @Nath:1996ft; @Bajc:2002pg]. Because it is not known which of these values is close to the correct value for the Yukawa coupling at the GUT scale, in the discussion below, we use both the down-quark and lepton Yukawa couplings to calculate the proton lifetime. This will allow us to quantify our uncertainty in the lifetime of the proton. The relevant operators in Eq.  can be further reduced by keeping only those with the largest Yukawa couplings. We find that only the operators ${\cal O}^{5R}_{3312}$ and ${\cal O}^{5R}_{3311}$ yield a sizable contribution to proton decay, even though the contribution is suppressed by a flavor changing element of the CKM matrix. This contribution turns out to be dominant because of the large third generation Yukawa couplings involved [@higprotondec]. The relevant Wilson coefficients are then $$\begin{aligned} C^{3311}_{5R} (Q_G)=\frac{1}{M_{H_C}}f_tf_d(Q_G) e^{-i\varphi_1}V_{tb}V_{ud}^*~, \nonumber \\ C^{3312}_{5R}(Q_G)=\frac{1}{M_{H_C}}f_tf_s(Q_G) e^{-i\varphi_1}V_{tb}V_{us}^*~.\label{eq:wilson3genT}\end{aligned}$$ Notice that the coefficients include a common phase factor $e^{-i\varphi_1}$, which is therefore not important for proton decay. The Wilson coefficients in Eq.  are then evolved down to the SUSY scale. At the SUSY scale, the sfermions of these dimension-five operators are integrated out via the one-loop diagram found in Fig. \[fig:1loop\] of Appendix \[App:ProDec\]. The process proceeds via the exchange of either a charged wino or a Higgsino.[^3] In PGM, we generally have $|\mu|\gg |M_2|$ and so the contribution from Higgsino exchange dominates [@Hisano:2013exa].[^4] For these reasons, we focus on the charged Higgsino exchange process in what follows.
The loop diagram in Fig. \[fig:1loop\] is then matched onto the baryon-number violating four-fermion operators [@Weinberg:1979sa; @Wilczek:1979hc; @Abbott:1980zj] $${\cal L}^{\text{eff}}_6=C_i~ \epsilon_{abc}(u^a_{R1}d^b_{Ri}) (Q_{L3}^c \cdot L_{L3})~,$$ with $$C_i (Q_S)=\frac{f_tf_\tau}{(4\pi)^2}C^{*331i}_{5R}(Q_S) F(\mu, m_{\widetilde{t}_R}^2,m_{\tau_R}^2)~, \label{eq:susymatching}$$ where $i=1,2$, and $Q_S$ is the SUSY breaking scale taken to be around $m_{3/2}$. The loop function $F$ is found in Appendix \[App:ProDec\]. The above expression shows that the proton decay rate depends on the SUSY spectra through the loop function. We will see this dependence in Sec. \[sec:results\] for the PGM scenario. Note that the loop function is suppressed by the sfermion masses. Thus, we expect that for large $m_{3/2}$ the proton lifetime is long enough [@Hisano:2013exa; @Liu:2013ula] to evade the current bound, $\tau (p \to K^+ \bar{\nu}) > 5.9\times 10^{33}$ years [@Abe:2014mwa]. This can be compared to the weak-scale SUSY scenarios; in these cases, the proton decay rate is in general predicted to be so large that the minimal SUSY GUT is excluded [@Murayama:2001ur] and thus some additional conspiracy is required to realize a SUSY GUT. We now run the Wilson coefficients down to the hadronic scale, $Q_{\text{had}}=2$ GeV. 
The Lagrangian at this scale takes the form[^5] $${\cal L}(p\to K^+\bar{\nu}_\tau) =C_{usd} [\epsilon_{abc}(u_R^as_R^b)(d_L^c\nu_\tau)] +C_{uds} [\epsilon_{abc}(u_R^ad_R^b)(s_L^c\nu_\tau)] ~.$$ Using these Wilson coefficients, we then evaluate the partial decay width of the $p\to K^+ \bar{\nu}$ and find $$\Gamma(p\to K^+\bar{\nu})=\frac{m_p}{32\pi} \biggl(1-\frac{m_K^2}{m_p^2}\biggr)^2 |{\cal A}(p\to K^+\bar{\nu})|^2~, \label{tp5}$$ where $m_p$ and $m_K$ are the masses of proton and kaon, respectively, and $${\cal A}(p\to K^+\bar{\nu})= C_{usd}(Q_{\text{had}})\langle K^+\vert (us)_R ^{}d_L^{}\vert p\rangle + C_{uds}(Q_{\text{had}})\langle K^+\vert (ud)_R ^{}s_L^{}\vert p\rangle ~.$$ The hadron matrix elements in the above equation have been recently computed in Ref. [@Aoki:2013yxa] using a lattice simulation of QCD, $$\begin{aligned} \langle K^+\vert (us)_R ^{}d_L^{}\vert p\rangle&= -0.054(11)(9)~\text{GeV}^2 ~,\nonumber\\ \langle K^+\vert (ud)_R ^{}s_L^{}\vert p\rangle &= -0.093(24)(18)~\text{GeV}^2 ~,\end{aligned}$$ where the first and second parentheses represent statistical and systematic errors, respectively. The matrix elements are computed at the scale $Q_{\text{had}}=2$ GeV. Before concluding this section, we comment on other possible contributions to proton decay. Firstly, the dimension-five baryon-number violating operators in Eq.  can also be generated at the Planck scale, $M_P$. If the coefficients of the operators are ${\cal O}(1/M_P)$, that is, there is no suppression from Yukawa couplings, then they will give the dominant contribution to proton decay and result in a lifetime which is too short [@Dine:2013nga]. It is expected, however, that there is some underlying mechanism such as a flavor symmetry which is responsible for the structure of the Yukawa couplings. This symmetry could give additional suppression to these Planck-scale operators. 
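To make the arithmetic concrete, the sketch below (ours) folds the central values of the lattice matrix elements into Eq. (\[tp5\]) and converts the width to a lifetime in years. The Wilson coefficients passed in are hypothetical placeholders, treated as real numbers with phases neglected:

```python
import math

HBAR_GEV_SEC = 6.582e-25   # hbar in GeV*s
SEC_PER_YEAR = 3.156e7

def tau_p_kaon_years(C_usd, C_uds):
    """Partial lifetime for p -> K+ nu from the width formula above.
    C_usd, C_uds are Wilson coefficients in GeV^-2 (treated as real here)."""
    m_p, m_K = 0.938, 0.494                # proton and kaon masses, GeV
    me_usd, me_uds = -0.054, -0.093        # central lattice matrix elements, GeV^2
    amp = C_usd * me_usd + C_uds * me_uds  # dimensionless amplitude
    width = (m_p / (32.0 * math.pi)) * (1.0 - m_K**2 / m_p**2)**2 * amp**2  # GeV
    return HBAR_GEV_SEC / width / SEC_PER_YEAR

# Placeholder coefficients of order 1e-32 GeV^-2 give a lifetime of
# roughly 2e36 years; since the width scales as |C|^2, doubling the
# coefficients cuts the lifetime by a factor of four.
```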
In this paper, we assume that the contribution of these operators is less significant compared with the color Higgs contribution, and neglect them in the following analysis. Secondly, the exchange of the $X$ bosons will also induce proton decay. This decay mode is via a dimension-six GUT-scale effective operator and is thus usually subdominant compared to the contribution of the dimension-five operator discussed above. An approximate expression for the lifetime of the proton from the dimension-six operator is $$\begin{aligned} \tau(p\to e^+\pi^0) \simeq 3 \times 10^{35} \times \left(\frac{M_X}{1.0\times 10^{16}~\text{GeV}}\right)^4 ~. \label{tp6}\end{aligned}$$ There is a slight dependence on the masses of SUSY particles we have neglected. As can be seen from this expression, the proton decay width from the dimension-six operator will in general give lifetimes too long to be detected, at least much longer than the present bound: $\tau (p\to e^+\pi^0)> 1.4 \times 10^{34}~{\rm years}$ [@Shiozawa; @Babu:2013jba]. Pure Gravity Mediation ====================== As discussed above, the lifetime of the proton depends on the SUSY parameters. Motivated by the $126$ GeV Higgs boson [@lhch] and other cosmological considerations [@ego], we will analyze the proton lifetime for PGM models. The scalar potential of PGM takes the same form as that of mSUGRA $$\begin{aligned} V & = & \left|{\partial W \over \partial \phi^i}\right|^2 + \left( A_0 W^{(3)} + B_0 W^{(2)} + \text{h.c.}\right) + m_{3/2}^2 \phi^i \phi_i^* \, ,\end{aligned}$$ which is determined by the flat Kähler manifold[^6] and the superpotential $W$ is given in Eq. . $W^{(2)}$ and $W^{(3)}$ are the bi- and trilinear parts of the superpotential. For PGM, the SUSY breaking field is a non-singlet and strongly stabilized [@strongStab] which suppresses the gaugino masses and $A$-terms respectively. 
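Since the dimension-six estimate in Eq. (\[tp6\]) is a pure power law, it can be encoded in one line; the helper below is our illustration of that scaling:

```python
def tau_dim6_years(M_X_GeV):
    """Approximate p -> e+ pi0 lifetime (years) from the dimension-six
    operator, per the estimate above; M_X_GeV is the X-boson mass in GeV."""
    return 3.0e35 * (M_X_GeV / 1.0e16) ** 4

# Even M_X as low as ~5e15 GeV keeps the dimension-six lifetime above the
# current bound of 1.4e34 years, reflecting the steep quartic dependence.
```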
The gaugino masses are regenerated by anomalies and take the form[^7] $$\begin{aligned} M_{1} &=& \frac{33}{5} \frac{g_{1}^{2}}{16 \pi^{2}} m_{3/2}\ , \label{eq:M1} \\ M_{2} &=& \frac{g_{2}^{2}}{16 \pi^{2}} m_{3/2} \ , \label{eq:M2} \\ M_{3} &=& -3 \frac{g_3^2}{16\pi^2} m_{3/2}\ . \label{eq:M3}\end{aligned}$$ In order to account for radiative EWSB, mSUGRA is further modified by including a Giudice-Masiero term for the Higgs fields in the Kähler manifold [@gm]. This modifies the Higgs boson parameters to $$\begin{aligned} \mu &=& \mu_0 + c_H m_{3/2}\ , \label{eq:mu0} \\ B\mu &=& \mu_0 (A_0 - m_{3/2}) + 2 c_H m_{3/2}^2\ , \label{eq:Bmu0}\end{aligned}$$ where $\mu_0$ is the superpotential Higgs bilinear term found in $W^{(2)}$. This allows us to vary both $\mu$ and $B\mu$ independently in order to satisfy the EWSB conditions. This leaves $m_{3/2}$ and $c_H$ as free parameters. In this case, $\tan \beta$ is an output of the EWSB conditions, but in practice one can trade $c_H$ for $\tan \beta$ and use $m_{3/2}$ and $\tan \beta$ as free inputs. Since this simplest of PGM models tends to require small $\tan\beta$ and larger $m_{3/2}$, we will allow the Higgs soft masses to be free parameters. This will allow $\tan\beta$ to be larger and so allow for $m_{3/2}$ to be smaller [@eioy2]. As was seen in the previous sections, both larger $\tan \beta$ and smaller $m_{3/2}$ will lead to shorter lifetimes of the proton. We will not discuss the origin of these non-universal Higgs soft masses here. However, discussion about this can be found in Ref. [@eioy2]. Lastly, we note that the non-universal Higgs soft masses, $m_1$ and $m_2$, can also be parametrized in terms of the low scale values of $\mu$ and $m_A$ which are otherwise also outputs of the EWSB conditions. We will take advantage of this in the results below in order to zoom in on some features of the proton lifetime. 
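Eqs. (\[eq:M1\])–(\[eq:M3\]) can be evaluated directly; in this sketch (ours) the squared couplings are hypothetical weak-scale inputs, with $g_1$ GUT-normalized:

```python
import math

def agm_gaugino_masses(g1_sq, g2_sq, g3_sq, m32_GeV):
    """Anomaly-mediated gaugino masses (GeV) from the loop-suppressed
    formulas above; g1_sq is the GUT-normalized hypercharge coupling squared."""
    loop = 16.0 * math.pi ** 2
    M1 = (33.0 / 5.0) * g1_sq / loop * m32_GeV
    M2 = g2_sq / loop * m32_GeV
    M3 = -3.0 * g3_sq / loop * m32_GeV
    return M1, M2, M3

# With a hypothetical g2^2 ~ 0.43 and m_3/2 = 60 TeV, M2 comes out near
# 160 GeV, in the ballpark of the ~170 GeV wino mass quoted in the results
# section once threshold corrections are included.
```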
Results {#sec:results} ======= We are now in a position to discuss the proton lifetime and mass scales associated with gauge coupling unification in a variety of models which have varying degrees of non-universality in the Higgs sector. We begin by displaying in Fig. \[m1eqm2p\] the $m_1=m_2$ vs. $\tan \beta$ plane for fixed gravitino mass. This is a one-parameter extension of the minimal (two-parameter) PGM model and resembles NUHM1 models [@nuhm1]. In the left panel we have fixed $m_{3/2} = 60$ TeV. For this value of the gravitino mass, the Higgs mass lies between 124 and 128 GeV [^8] for $\tan \beta$ roughly between 4–9 as shown by the red dot-dashed curves. The thin blue lines show the values of the LSP (wino) mass[^9] and are solid for $\mu > 0$ and dashed for $\mu < 0$. The anomaly mediated contribution to $m_\chi$ for $m_{3/2} = 60$ TeV is 170 GeV. At low $\tan \beta$, threshold corrections from the heavy Higgs bosons and the Higgsinos increase the mass for $\mu < 0$ and decrease the mass for $\mu > 0$. At large $\tan \beta$, the wino mass, for both positive and negative $\mu$, tends to its anomaly mediated value. The curves end at high and low values of the Higgs soft masses due to the absence of radiative electroweak symmetry breaking. For large and negative values of $m_1^2 = m_2^2$ (the sign on the axis refers to the sign of the mass squared), the Higgs pseudoscalar mass squared is negative, and for large positive $m_1^2 = m_2^2$, the electroweak conditions yield $|\mu|^2 < 0$. The thicker black curves in Fig. \[m1eqm2p\] show the values of the proton lifetime. As discussed earlier, as there is some uncertainty as to how we match the Yukawa couplings at the GUT scale, we have results based on quark Yukawa couplings (shown by the solid curves) and results based on lepton Yukawa couplings (shown by the dashed curves). 
As one can see from the figure, the calculated proton lifetime is sensitive to $\tan \beta$ yet relatively insensitive to the value of $m_{1,2}$ for fixed gravitino mass. In general, the proton lifetime is lower at high $\tan \beta$ due to the increase in the down-type Yukawa couplings when $\tan \beta$ is increased, whereas the Higgs mass increases with $\tan \beta$. For these relatively low values of the gravitino mass used in the left panel, the proton lifetimes based on quark Yukawas drop below $5 \times 10^{34}$ years only when $\tan \beta > 7$ where $m_h > 127$ GeV. The lifetime increases rapidly at lower $\tan \beta$ and exceeds $5 \times 10^{35}$ years when $\tan \beta < 4$ where $m_h < 124$ GeV. However, the wino mass requires $\mu<0$ and $\tan\beta\gtrsim 6$. Recall that these lifetimes are computed from Eq. (\[tp5\]) and when the lifetime exceeds $3 \times 10^{35}$ years, the dominant contribution to the decay rate comes from the dimension-six operator given in Eq. (\[tp6\]). Proton lifetimes based on lepton Yukawas are significantly smaller (by a factor of roughly 20), so that $\tau_p^l < 5 \times 10^{33}$ years when $\tan \beta \gtrsim 6$ and is still smaller than $2 \times 10^{34}$ years when $\tan \beta > 4$. In the right panel of Fig. \[m1eqm2p\], we have taken $m_{3/2} = 100$ TeV and as expected the Higgs mass for a given value of $\tan \beta$ is higher. The range 124 – 128 GeV now requires $\tan \beta \simeq 3.5$ – 6. The uncorrected wino mass is now about 290 GeV and in the figure we see lower (higher) wino masses when $\mu > (<) 0$. The proton lifetimes are now significantly higher. At $\tan \beta = 6$, the quark based value of $\tau_p$ determined by the dimension-five operator is now $10^{36}$ years and increases as $\tan \beta$ is lowered. The lepton based lifetimes remain a factor of about 20 lower and may still be as low as $5 \times 10^{34}$ years at $\tan \beta = 6$.
To see more clearly the dependence of the proton lifetime on the PGM parameters, we show in the left panel of Fig. \[tpm32tb\] the behavior of the proton lifetime as a function of $\tan \beta$ for fixed $m_{3/2} = 60$ and 200 TeV with $m_1 = m_2 = 0$. The lifetime falls off monotonically with $\tan \beta$. The ratio between the quark and lepton evaluation of $\tau_p$ is seen to be nearly constant as $\tan \beta$ is varied with a ratio of about 20. We also see the substantial increase in $\tau_p$ when $m_{3/2}$ is increased to 200 TeV. The experimental limit on the proton lifetime, $\tau (p \to K^+ \bar{\nu}) > 5.9\times 10^{33}$ years [@Abe:2014mwa], is shown by the horizontal line. In the right panel, we show this increase in $\tau_p$ with $m_{3/2}$ for fixed values of $\tan \beta = 2$ and 5. In both panels, we also show the Higgs mass as a function of $\tan \beta$ and $m_{3/2}$ with its value displayed on the right edge of each panel. Restricting the Higgs mass to the range 124–128 GeV allows one to focus on the relevant ranges of either $\tan \beta$ or $m_{3/2}$ and hence on the predicted proton lifetime. In contrast to the proton lifetime, the relevant GUT mass scales, $M_{H_C}$ and $M_G$, are relatively insensitive to the PGM parameter choices as seen in Fig. \[mm32tb\]. As one can see in the left panel, there is very little dependence on $\tan \beta$. The mass parameter $M_G\equiv (M_X^2M_\Sigma)^{\frac{1}{3}}$ is always close to $10^{16}$ GeV independent of $m_{3/2}$ (as also seen in the right panel). While the color-triplet mass is insensitive to $\tan \beta$, it does have a mild dependence on the gravitino mass and ranges from a few $\times 10^{16}$ – few $\times 10^{17}$ GeV. Notice that in the weak-scale SUSY scenario the mass of the color-triplet Higgs multiplet is predicted to be around $10^{15}$ GeV [@Murayama:2001ur]. A heavier color triplet mass makes the proton lifetime long enough to evade the current experimental bound. 
Furthermore, in some of the parameter space of PGM, the GUT-scale parameters $M_{H_C}$ and $M_G$ are both of ${\cal O}(10^{16})$ GeV. In these cases, the threshold corrections at the GUT scale become very small, which implies the unification of the gauge couplings is quite good. In fact, for $m_{3/2}\sim 60$ TeV and $\tan\beta \sim 5$, we get good gauge coupling unification and a proton lifetime which could be in reach of future experiments. In Fig. \[muma\], we offer two additional planes which show the dependence of the proton lifetime on other PGM parameters. In the left panel, we plot the lifetime contours in the $m_1=m_2$, $m_{3/2}$ plane. This is again a NUHM1-like model and we have fixed $\tan \beta =5$. As in Fig. \[m1eqm2p\], the red-dashed curves show the Higgs mass contours which vary from about 124–128 GeV for the plane shown. As before, the curves extend across a limited range in $m_1=m_2$ where the EWSB conditions can be satisfied. At large positive $m_1^2$, $\mu^2$ goes to 0 (where the curve is cut off). At very small $\mu$, the Higgs mass increases rapidly, causing the sudden downturn in the mass contours. As expected, we see the wino mass varies considerably as $m_{3/2}$ is varied. For the range in $m_{3/2}$ shown, the proton lifetime varies from as low as $10^{33}$ years using the lepton Yukawas and low $m_{3/2}$ to as high as $10^{37}$ years using quark Yukawas and $m_{3/2} \approx 150$ TeV. In the right panel of Fig. \[muma\], we show a two-parameter extension of the two-parameter PGM similar to the NUHM2 [@nuhm2]. Results are displayed in the $\mu,m_A$ plane for fixed $\tan \beta = 5$ and $m_{3/2} = 60$ TeV. In this case, the EWSB conditions are used to solve for the two Higgs soft masses which now differ. As the Higgs mass is largely independent of $m_A$, the Higgs mass contours are nearly vertical. At the center of the plot, as $|\mu|$ gets to be very small, $m_h$ gets large and exceeds 130 GeV.
At large $|\mu|$, $m_h$ is always larger than 125 GeV in the ranges shown. The threshold corrections to the wino mass are sensitive to $\mu$ and $m_A$ and that accounts for the variation of $m_\chi$ as these parameters are varied. The proton lifetime varies between $10^{34}$ and $10^{36}$ years but shows significantly more variability. This is due to the competing effects of changing $\mu$. The proton lifetime depends both on the color-triplet Higgs mass and on $\mu$ itself.[^10] As $\mu$ is lowered, the color-triplet Higgs mass decreases which tends to decrease the proton lifetime. But as $|\mu|$ is further decreased, the proton lifetime dependence on $\mu$ overcomes its dependence on $M_{H_C}$ and the lifetime increases very rapidly at small $|\mu|$ seen by the sharp downturn in the contours near $\mu = 0$. These effects can be better understood by examining Figs. \[tpmu\] and \[mmu\] which show the behavior of the proton lifetime and GUT-scale masses, including the heavy Higgs mass, as a function of $\mu$ for fixed $\tan \beta$ and $m_{3/2}$. Here we see the first gradual and then rapid decrease in the color-triplet mass as $|\mu|$ is lowered from large values toward $\mu = 0$. There is no substantial difference in this behavior between the two values of $\tan \beta$ shown. Once again, we see that $M_G$ depends very little on our parameter choices and is always near $10^{16}$ GeV. Finally, in Fig. \[mmu\], we see the sharp increase in the proton lifetime as $|\mu|$ gets small.[^11] Here we see also that the Higgs mass rises sharply as $\mu$ tends to zero. It is important to recall that the lifetime plotted corresponds only to that given by the dimension-five operator given in Eq.  and would not exceed $3 \times 10^{35}$ years when the dimension-six operator is included. The latter is fairly insensitive to parameter choices. 
Conclusion and Discussion
=========================

As we await new results for physics beyond the standard model from the LHC, we have been forced to consider supersymmetric models with sfermion masses larger than what was previously considered ‘natural’. While a great deal of attention had been focused on relatively simple models such as the CMSSM or mSUGRA (with four and three parameters respectively) or the NUHM1,2 with five and six parameters, pure gravity mediation models can be described with as few as two parameters at the cost of a mass spectrum which approaches the PeV scale. As we hope the actual theory of nature lies within the realm of experimental science, it is imperative to find means to test these models. Here we have examined one additional possibility for testing these models despite their generally heavy mass spectra. PGM theories, with all their economy, are still able to resolve many of the questions that motivated their lower energy cousins (such as the CMSSM). These include the ability to achieve gauge coupling unification at the GUT scale, radiative breaking of the electroweak symmetry, the stability of the Higgs potential, and they also provide a suitable candidate for dark matter. The latter is definitely more difficult in PGM models, as the wino is usually the lightest supersymmetric particle and as such would require a wino mass near 3 TeV to supply the correct relic density. This pushes the gravitino mass up to several hundred TeV. Alternatives within PGM are possible if $\mu$ is relatively small and the Higgsino is the lightest supersymmetric particle [@eioy5] or if the theory contains additional vector-like states and bino-gluino co-annihilation controls the relic bino density [@vector], or even axion dark matter [@eioy5; @eioy4]. In contrast to their lower energy counterparts, PGM models have a relatively easy time obtaining a Higgs mass in agreement with the experimental measurement [@lhch].
Thus experimental verification of PGM models remains challenging. While there is the chance that the lightest supersymmetric particle is within reach of the LHC, the bulk of the PGM spectrum is not. Here we have calculated the proton lifetime in PGM models. We have found that typically the lifetime is long and in many cases significantly above the current experimental bounds. However, in cases where $m_{3/2}$ is relatively small and $\tan \beta$ is relatively high, the proton lifetime is low and may be at the level of current experimental searches. While proton decay itself cannot point directly to PGM supersymmetry, it may provide one more handle on an increasingly elusive theory beyond the standard model.

Acknowledgments {#acknowledgments .unnumbered}
===============

The work of J.E. and K.A.O. was supported in part by DOE grant DE-SC0011842 at the University of Minnesota. The work of N.N. is supported by Research Fellowships of the Japan Society for the Promotion of Science for Young Scientists.

Appendix {#appendix .unnumbered}
========

Minimal SU(5) Notation and Conventions \[App:NotCon\]
=====================================================

Here, we review the minimal SUSY SU(5) GUT [@Dimopoulos:1981zb; @Sakai:1981gr] and clarify our notation and conventions. In these models, the MSSM matter fields are embedded into the $\bar{\bf 5}$ and ${\bf 10}$ representations of the SU(5) gauge group for each generation. Let $\Phi_i$ and $\Psi_i$ be the chiral superfields in the $\bar{\bf 5}$ and ${\bf 10}$ representations, respectively, with $i$ denoting the generation index.
These fields decompose into the MSSM superfields as $$\begin{aligned} \Phi_i &= \begin{pmatrix} \bar{D}_{i1} \\ \bar{D}_{i2} \\ \bar{D}_{i3} \\ E_i \\ -N_i \end{pmatrix}~, ~~~~~~ \Psi_i=\frac{1}{\sqrt{2}} \begin{pmatrix} 0&\bar{U}_{i3}&-\bar{U}_{i2}&U^{1}_i&D^{1}_i \\ -\bar{U}_{i3}&0&\bar{U}_{i1}&U^{2}_i&D^{2}_i\\ \bar{U}_{i2}&-\bar{U}_{i1}&0&U^{3}_i&D^{3}_i\\ -U^{1}_i&-U^{2}_i&-U^{3}_i&0&\bar{E}_i \\ -D^{1}_i&-D^{2}_i&-D^{3}_i&-\bar{E}_i &0 \end{pmatrix}~,\end{aligned}$$ with $$L_i= \begin{pmatrix} N_i \\ E_i \end{pmatrix}~, ~~~~~~ Q^a_i= \begin{pmatrix} U^a_i \\ D^a_i \end{pmatrix}~,$$ where $a=1,2,3$ denotes the color index. The MSSM Higgs superfields, on the other hand, are embedded into a [**5**]{} and $\bar{\bf 5}$: $$H= \begin{pmatrix} H^1_C \\ H^2_C \\ H^3_C \\ H^+_2 \\ H^0_2 \end{pmatrix} ,~~~~~~\bar{H}= \begin{pmatrix} \bar{H}_{C1}\\ \bar{H}_{C2}\\ \bar{H}_{C3}\\ H^-_1 \\ -H^0_1 \end{pmatrix} ~,$$ where the last two components are the MSSM Higgs superfields, $$H_2= \begin{pmatrix} H^+_2 \\ H^0_2 \end{pmatrix} ,~~~~~~H_1= \begin{pmatrix} H^0_1 \\ H^-_1 \end{pmatrix} ~.$$ The other piece of the ${\bf 5}$ and $\bar{\bf 5}$ Higgs bosons are labeled by $H^a_C$ and $\bar{H}_{Ca}$ and will be referred to as the color-triplet Higgs bosons. The gauge boson of SU(5) is a ${\bf 24}$. In supersymmetry this corresponds to a real vector superfield, ${\cal V}^A$, where $A=1,\dots, 24$ represents the gauge index. 
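As a quick consistency check on the embedding above, the ${\bf 10}$ of each generation can be assembled numerically and verified to be antisymmetric. This is a minimal sketch; `ten_plet` is an illustrative helper name and the field values are arbitrary numeric placeholders, not physical quantities.

```python
import numpy as np

def _eps(i, j, k):
    # Levi-Civita symbol for three 0-based indices
    return (i - j) * (j - k) * (k - i) / 2

def ten_plet(ubar, u, d, ebar):
    """Assemble the generation-i 10-plet of SU(5) from its SM pieces.

    ubar, u, d are length-3 color vectors; ebar is a scalar.  All values
    here are arbitrary placeholders used only to test the structure.
    """
    psi = np.zeros((5, 5))
    for a in range(3):
        for b in range(3):
            # upper-left 3x3 block: epsilon_{abc} Ubar_c
            psi[a, b] = sum(_eps(a, b, c) * ubar[c] for c in range(3))
    psi[:3, 3], psi[3, :3] = u, [-x for x in u]    # U^a column / row
    psi[:3, 4], psi[4, :3] = d, [-x for x in d]    # D^a column / row
    psi[3, 4], psi[4, 3] = ebar, -ebar             # Ebar corner
    return psi / np.sqrt(2)

psi = ten_plet([1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0], 10.0)
assert np.allclose(psi, -psi.T)                 # the 10 is antisymmetric
assert np.isclose(psi[0, 1], 3.0 / np.sqrt(2))  # (1,2) entry is Ubar_3/sqrt(2)
```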
${\cal V}^A$ can be decomposed into the SM gauge fields, plus the additional massive gauge bosons of SU(5) breaking, as follows $${\cal V}\equiv {\cal V}^AT^A=\frac{1}{\sqrt{2}} \begin{pmatrix} \begin{matrix} G -\frac{2}{\sqrt{30}} B \end{matrix} & \begin{matrix} X^{\dagger 1} \\ X^{\dagger 2} \\ X^{\dagger 3} \end{matrix} & \begin{matrix} Y^{\dagger 1}\\ Y^{\dagger 2} \\ Y^{\dagger 3} \end{matrix} \\ \begin{matrix} X_1 & X_2 & X_3 \\ Y_1 & Y_2 & Y_3 \end{matrix} & \begin{matrix} \frac{1}{\sqrt{2}}W^3+\frac{3}{\sqrt{30}}B \\ W^- \end{matrix} & \begin{matrix} W^+ \\ - \frac{1}{\sqrt{2}}W^3+\frac{3}{\sqrt{30}}B \end{matrix} \end{pmatrix} ~,$$ where $T^A$ is the generator of the fundamental representation of SU(5), and $G$, $B$, and $W$ denote the MSSM gauge vector superfields with their associated generators. The massive gauge bosons associated with the breaking of SU(5), typically referred to as $X_a$ and $Y_a$, will be collectively called the $X$-bosons, with $$(X)^\alpha_a = \begin{pmatrix} X^1_a \\ X^2_a \end{pmatrix} \equiv \begin{pmatrix} X_a \\ Y_a \end{pmatrix}~.$$ Here $\alpha,\beta,\dots$ denote the SU(2)$_L$ indices. The simplest means of breaking SU(5) to the SM gauge symmetries $\text{SU}(3)_C \otimes \text{SU}(2)_{L} \otimes \text{U}(1)_Y$ is via an adjoint ${\bf 24}$, as discussed in the text. The ${\bf 24}$ decomposes as follows: $$\Sigma \equiv \Sigma^A T^A = \begin{pmatrix} \Sigma_8&\Sigma_{(3,2)} \\ \Sigma_{(3^*,2)} & \Sigma_3 \end{pmatrix} +\frac{1}{2\sqrt{15}} \begin{pmatrix} 2&0\\0&-3 \end{pmatrix} \Sigma_{24}~.$$ Without loss of generality and for simplicity, we assume that all SU(5) breaking occurs along the $\Sigma_{24}$ direction, which is displayed separately in the above equation. Proton Decay \[App:ProDec\] =========================== In this appendix, we give additional details of our calculation of the proton lifetime.
The important Wilson coefficients arising from integrating out the colored Higgs triplet are $$\begin{aligned} C^{3311}_{5R} (Q_G)=\frac{1}{M_{H_C}}f_tf_d(Q_G) e^{-i\varphi_1}V_{tb}V_{ud}^*~, \nonumber \\ C^{3312}_{5R}(Q_G)=\frac{1}{M_{H_C}}f_tf_s(Q_G) e^{-i\varphi_1}V_{tb}V_{us}^*~.\label{eq:wilson3gen}\end{aligned}$$ These coefficients are then evolved down to the SUSY scale using $$\frac{d}{d\ln Q} C_{5R}^{331l} = \frac{1}{16\pi^2}\biggl[ -\frac{12}{5}g_1^2-8g_3^2 +2f_t^2 +2f_\tau^2 \biggr]C_{5R}^{331l}~,$$ where $l=1,2$ and $Q$ is the renormalization scale. ![[*One-loop Higgsino-exchanging diagram which gives rise to the dominant contribution to the baryon-number violating four-Fermi operators. Gray dot indicates the dimension-five effective interaction, while black dot represents the Higgsino mass term.*]{}[]{data-label="fig:1loop"}](1loop.eps){height="60mm"} At the SUSY scale $Q_S$, the sfermions are integrated out via the diagram in Fig. \[fig:1loop\] to give $${\cal L}^{\text{eff}}_6=C_i~ \epsilon_{abc}(u^a_{R1}d^b_{Ri}) (Q_{L3}^c \cdot L_{L3})~,$$ with $$C_i (Q_S)=\frac{f_tf_\tau}{(4\pi)^2}C^{*331i}_{5R}(Q_S) F(\mu, m_{\widetilde{t}_R}^2,m_{\tau_R}^2)~,$$ where $i=1,2$ and $$\begin{aligned} F(M, m_1^2, m_2^2) &\equiv \frac{M}{m_1^2-m_2^2} \biggl[ \frac{m_1^2}{m_1^2-M^2}\ln \biggl(\frac{m_1^2}{M^2}\biggr) -\frac{m_2^2}{m_2^2-M^2}\ln \biggl(\frac{m_2^2}{M^2}\biggr) \biggr]~. 
\label{eq:funceq}\end{aligned}$$ These Wilson coefficients $C_i$, which are initially defined at the SUSY scale, are then run down to the weak scale using [@Abbott:1980zj] $$\frac{d}{d\ln Q}C_i = \biggl[\frac{\alpha_1}{4\pi}\biggl(-\frac{11}{10}\biggr) +\frac{\alpha_2}{4\pi}\biggl(-\frac{9}{2}\biggr) +\frac{\alpha_3}{4\pi}(-4) \biggr]C_i ~.$$ At the weak scale the Lagrangian takes the form $${\cal L}(p\to K^+\bar{\nu}_\tau) =C_{usd} [\epsilon_{abc}(u_R^as_R^b)(d_L^c\nu_\tau)] +C_{uds} [\epsilon_{abc}(u_R^ad_R^b)(s_L^c\nu_\tau)] ~,$$ with $$\begin{aligned} C_{usd}&= -V_{td}C_2(m_Z) ~, \nonumber \\ C_{uds}&= -V_{ts}C_1(m_Z)~.\end{aligned}$$ The new Wilson coefficients $C_{usd,uds}$ are then further run down to the hadronic scale $Q_{\text{had}}=2$ GeV. Below the electroweak scale, the RGEs of the Wilson coefficients are given by $$\frac{d}{d\ln Q}C_{usd,uds} =-\biggl[ 4 \frac{\alpha_s}{4\pi} +\biggl(\frac{4}{3}+\frac{4}{9}N_f\biggr) \frac{\alpha_s^2}{(4\pi)^2} \biggr]C_{usd,uds}~,$$ at the two-loop level [@Nihei:1994tx]. The solution of this equation is $$\begin{aligned} A_L \equiv \frac{C(Q_{\text{had}})}{C(m_Z)} =\biggl[ \frac{\alpha_s(Q_{\text{had}})}{\alpha_s(m_b)} \biggr]^{\frac{6}{25}}\biggl[ \frac{\alpha_s(m_b)}{\alpha_s(m_Z)} \biggr]^{\frac{6}{23}} \biggl[ \frac{\alpha_s(Q_{\text{had}})+\frac{50\pi}{77}} {\alpha_s(m_b)+\frac{50\pi}{77}} \biggr]^{-\frac{173}{825}} \biggl[ \frac{\alpha_s(m_b)+\frac{23\pi}{29}} {\alpha_s(m_Z)+\frac{23\pi}{29}} \biggr]^{-\frac{430}{2001}}~.\end{aligned}$$ This long-range renormalization factor is computed to be $A_L = 1.247$ and multiplies the Wilson coefficients defined at the weak scale.
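Both the loop function $F$ of Eq. \[eq:funceq\] and the closed-form solution for $A_L$ are straightforward to evaluate numerically. The sketch below checks that $F$ is symmetric in its mass arguments and linear in $M$ for $M^2 \ll m_i^2$ (the behavior quoted in the text), and that representative $\alpha_s$ inputs, which are assumptions of ours and not quoted in the paper, recover a value close to the stated $A_L = 1.247$.

```python
import math

def F(M, m1sq, m2sq):
    """Loop function F(M, m1^2, m2^2) appearing in the one-loop matching."""
    Msq = M * M
    return (M / (m1sq - m2sq)) * (
        m1sq / (m1sq - Msq) * math.log(m1sq / Msq)
        - m2sq / (m2sq - Msq) * math.log(m2sq / Msq)
    )

def A_L(a_had, a_mb, a_mz):
    """Two-loop long-range factor C(Q_had)/C(m_Z) from the closed-form solution."""
    return ((a_had / a_mb) ** (6 / 25)
            * (a_mb / a_mz) ** (6 / 23)
            * ((a_had + 50 * math.pi / 77) / (a_mb + 50 * math.pi / 77)) ** (-173 / 825)
            * ((a_mb + 23 * math.pi / 29) / (a_mz + 23 * math.pi / 29)) ** (-430 / 2001))

# F is symmetric under m1^2 <-> m2^2 and scales linearly with M when M^2 << m_i^2
assert abs(F(1.0, 4.0e6, 9.0e6) - F(1.0, 9.0e6, 4.0e6)) < 1e-15
assert abs(F(2.0, 4.0e6, 9.0e6) / F(1.0, 4.0e6, 9.0e6) - 2.0) < 1e-3

# representative inputs: alpha_s(2 GeV) ~ 0.30, alpha_s(m_b) ~ 0.22, alpha_s(m_Z) ~ 0.118
assert abs(A_L(0.30, 0.22, 0.118) - 1.247) < 0.01
```

The linear-in-$M$ behavior of $F$ at small $M$ is what makes the Higgsino contribution proportional to the Higgsino mass, as noted in the footnotes.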
The Wilson coefficients at the hadronic scale are then $$\begin{aligned} C_{usd,uds}(Q_{\text{had}})= C_{usd,uds}(m_Z) A_L ~.\end{aligned}$$ The partial decay width for $p\to K^+ \bar{\nu}$ is then found to be $$\Gamma(p\to K^+\bar{\nu})=\frac{m_p}{32\pi} \biggl(1-\frac{m_K^2}{m_p^2}\biggr)^2 |{\cal A}(p\to K^+\bar{\nu})|^2~,$$ where $m_p$ and $m_K$ are the proton and kaon masses, respectively, and $${\cal A}(p\to K^+\bar{\nu})= C_{usd}(Q_{\text{had}})\langle K^+\vert (us)_R ^{}d_L^{}\vert p\rangle + C_{uds}(Q_{\text{had}})\langle K^+\vert (ud)_R ^{}s_L^{}\vert p\rangle ~.$$ [99]{} G. Aad [*et al.*]{} \[ATLAS Collaboration\], arXiv:1405.7875 \[hep-ex\]. S. Chatrchyan [*et al.*]{} \[CMS Collaboration\], JHEP [**1406**]{} (2014) 055 \[arXiv:1402.4770 \[hep-ex\]\]. M. Drees and M. M. Nojiri, Phys. Rev. D [**47**]{} (1993) 376 \[arXiv:hep-ph/9207234\]; G. L. Kane, C. F. Kolda, L. Roszkowski and J. D. Wells, Phys. Rev.  D [**49**]{} (1994) 6173 \[arXiv:hep-ph/9312272\]; H. Baer and M. Brhlik, Phys. Rev. D [**53**]{} (1996) 597 \[arXiv:hep-ph/9508321\]; Phys. Rev.  D [**57**]{} (1998) 567 \[arXiv:hep-ph/9706509\]; J. R. Ellis, T. Falk, K. A. Olive and M. Schmitt, Phys. Lett. B [**388**]{} (1996) 97 \[arXiv:hep-ph/9607292\]; Phys. Lett. B [**413**]{} (1997) 355 \[arXiv:hep-ph/9705444\]; J. R. Ellis, T. Falk, G. Ganis, K. A. Olive and M. Schmitt, Phys. Rev. D [**58**]{} (1998) 095002 \[arXiv:hep-ph/9801445\]; V. D. Barger and C. Kao, Phys. Rev. D [**57**]{} (1998) 3131 \[arXiv:hep-ph/9704403\]; J. R. Ellis, T. Falk, G. Ganis and K. A. Olive, Phys. Rev. D [**62**]{} (2000) 075010 \[arXiv:hep-ph/0004169\]; H. Baer, M. Brhlik, M. A. Diaz, J. Ferrandis, P. Mercadante, P. Quintana and X. Tata, Phys. Rev.  D [**63**]{} (2001) 015007 \[arXiv:hep-ph/0005027\]; J. R. Ellis, T. Falk, G. Ganis, K. A. Olive and M. Srednicki, Phys. Lett. B [**510**]{} (2001) 236 \[arXiv:hep-ph/0102098\]; V. D. Barger and C. Kao, Phys. Lett. B [**518**]{} (2001) 117 \[arXiv:hep-ph/0106189\]; L. 
Roszkowski, R. Ruiz de Austri and T. Nihei, JHEP [**0108**]{} (2001) 024 \[arXiv:hep-ph/0106334\]; A. Djouadi, M. Drees and J. L. Kneur, JHEP [**0108**]{} (2001) 055 \[arXiv:hep-ph/0107316\]; U. Chattopadhyay, A. Corsetti and P. Nath, Phys. Rev. D [**66**]{} (2002) 035003 \[arXiv:hep-ph/0201001\]; J. R. Ellis, K. A. Olive and Y. Santoso, New Jour. Phys.  [**4**]{} (2002) 32 \[arXiv:hep-ph/0202110\]; H. Baer, C. Balazs, A. Belyaev, J. K. Mizukoshi, X. Tata and Y. Wang, JHEP [**0207**]{} (2002) 050 \[arXiv:hep-ph/0205325\]; R. Arnowitt and B. Dutta, arXiv:hep-ph/0211417. S. Weinberg, Phys. Rev. Lett.  [**48**]{}, 1303 (1982); J. Ellis, D. V. Nanopoulos, and M. Quiros, Phys. Lett. B [**174**]{}, 176 (1986); T. Moroi, M. Yamaguchi and T. Yanagida Phys. Lett. B [**342**]{}, 105 (1995) \[hep-ph/9409367\]; M. Kawasaki, K. Kohri, T. Moroi and A. Yotsuyanagi, Phys. Rev. D [**78**]{}, 065011 (2008) \[arXiv:0804.3745 \[hep-ph\]\]. R. H. Cyburt, J. Ellis, B. D. Fields, F. Luo, K. A. Olive and V. C. Spanos, JCAP [**0910**]{}, 021 (2009) \[arXiv:0907.5003 \[astro-ph.CO\]\]; R. H. Cyburt, J. Ellis, B. D. Fields, F. Luo, K. A. Olive and V. C. Spanos, JCAP [**1305**]{}, 014 (2013) \[arXiv:1303.0574 \[astro-ph.CO\]\]. F. Gabbiani, E. Gabrielli, A. Masiero and L. Silvestrini, Nucl. Phys. B [**477**]{}, 321 (1996) \[hep-ph/9604387\]; T. Moroi and M. Nagai, Phys. Lett. B [**723**]{}, 107 (2013) \[arXiv:1303.0668 \[hep-ph\]\]; D. McKeen, M. Pospelov and A. Ritz, Phys. Rev. D [**87**]{}, 113002 (2013) \[arXiv:1303.1172 \[hep-ph\]\]; W. Altmannshofer, R. Harnik and J. Zupan, JHEP [**1311**]{}, 202 (2013) \[arXiv:1308.3653 \[hep-ph\]\]; K. Fuyuto, J. Hisano, N. Nagata and K. Tsumura, JHEP [**1312**]{}, 010 (2013) \[arXiv:1308.6493 \[hep-ph\]\]; M. Baumgart, D. Stolarski and T. Zorawski, Phys. Rev. D [**90**]{}, 055001 (2014) \[arXiv:1403.6118 \[hep-ph\]\]. G. Aad [*et al.*]{} \[ATLAS Collaboration\], Phys. Lett. B [**716**]{}, 1 (2012) \[arXiv:1207.7214 \[hep-ex\]\]; S. 
Chatrchyan [*et al.*]{} \[CMS Collaboration\], Phys. Lett. B [**716**]{}, 30 (2012) \[arXiv:1207.7235 \[hep-ex\]\]. J. D. Wells, hep-ph/0306127; N. Arkani-Hamed and S. Dimopoulos, JHEP [**0506**]{}, 073 (2005) \[arXiv:hep-th/0405159\]; G. F. Giudice and A. Romanino, Nucl. Phys.  B [**699**]{}, 65 (2004) \[Erratum-ibid.  B [**706**]{}, 65 (2005)\] \[arXiv:hep-ph/0406088\]; N. Arkani-Hamed, S. Dimopoulos, G. F. Giudice and A. Romanino, Nucl. Phys.  B [**709**]{}, 3 (2005) \[arXiv:hep-ph/0409232\]; J. D. Wells, Phys. Rev.  D [**71**]{}, 015013 (2005) \[arXiv:hep-ph/0411041\]. M. Ibe, T. Moroi and T. T. Yanagida, Phys. Lett. B [**644**]{}, 355 (2007) \[hep-ph/0610277\]; M. Ibe and T. T. Yanagida, Phys. Lett. B [**709**]{}, 374 (2012) \[arXiv:1112.2462 \[hep-ph\]\]; M. Ibe, S. Matsumoto and T. T. Yanagida, Phys. Rev. D [**85**]{}, 095011 (2012) \[arXiv:1202.2253 \[hep-ph\]\]. L. J. Hall and Y. Nomura, JHEP [**1201**]{}, 082 (2012) \[arXiv:1111.4519 \[hep-ph\]\]; N. Arkani-Hamed, A. Gupta, D. E. Kaplan, N. Weiner and T. Zorawski, arXiv:1212.6971 \[hep-ph\]; A. Arvanitaki, N. Craig, S. Dimopoulos and G. Villadoro, JHEP [**1302**]{}, 126 (2013) \[arXiv:1210.0555 \[hep-ph\]\]; L. J. Hall, Y. Nomura and S. Shirai, JHEP [**1301**]{}, 036 (2013) \[arXiv:1210.2395 \[hep-ph\]\]. J. L. Evans, M. Ibe, K. A. Olive and T. T. Yanagida, Eur. Phys. J. C [**73**]{}, 2468 (2013) \[arXiv:1302.5346 \[hep-ph\]\]. M. Dine and D. MacIntire, Phys. Rev. D [**46**]{}, 2594 (1992) \[hep-ph/9205227\]; L. Randall and R. Sundrum, Nucl. Phys.  B [**557**]{}, 79 (1999) \[arXiv:hep-th/9810155\]; G. F. Giudice, M. A. Luty, H. Murayama and R. Rattazzi, JHEP [**9812**]{}, 027 (1998) \[arXiv:hep-ph/9810442\]; J. A. Bagger, T. Moroi and E. Poppitz, JHEP [**0004**]{}, 009 (2000) \[arXiv:hep-th/9911029\]; P. Binetruy, M. K. Gaillard and B. D. Nelson, Nucl. Phys.  B [**604**]{}, 32 (2001) \[arXiv:hep-ph/0011081\]. G. F. Giudice and A. Masiero, Phys. Lett.  B [**206**]{}, 480 (1988); K. Inoue, M. Kawasaki, M. 
Yamaguchi and T. Yanagida, Phys. Rev. D [**45**]{}, 328 (1992); E. Dudas, Y. Mambrini, A. Mustafayev and K. A. Olive, Eur. Phys. J. C [**72**]{}, 2138 (2012) \[arXiv:1205.5988 \[hep-ph\]\]. J. L. Evans, K. A. Olive, M. Ibe and T. T. Yanagida, Eur. Phys. J. C [**73**]{}, 2611 (2013) \[arXiv:1305.7461 \[hep-ph\]\]. H. Georgi and S. L. Glashow, Phys. Rev. Lett.  [**32**]{}, 438 (1974). H. Georgi, H. R. Quinn and S. Weinberg, Phys. Rev. Lett.  [**33**]{}, 451 (1974). S. Dimopoulos, S. Raby and F. Wilczek, Phys. Rev. D [**24**]{}, 1681 (1981); W. J. Marciano and G. Senjanovic, Phys. Rev. D [**25**]{}, 3092 (1982); M. B. Einhorn and D. R. T. Jones, Nucl. Phys. B [**196**]{}, 475 (1982); U. Amaldi, W. de Boer and H. Furstenau, Phys. Lett. B [**260**]{}, 447 (1991); P. Langacker and M. x. Luo, Phys. Rev. D [**44**]{}, 817 (1991). M. Cirelli, F. Sala and M. Taoso, JHEP [**1410**]{}, 033 (2014) \[Erratum-ibid.  [**1501**]{}, 041 (2015)\] \[arXiv:1407.7058 \[hep-ph\]\]. J. Hisano, S. Matsumoto, M. Nagai, O. Saito and M. Senami, Phys. Lett. B [**646**]{}, 34 (2007) \[hep-ph/0610249\]. B. Bhattacherjee, M. Ibe, K. Ichikawa, S. Matsumoto and K. Nishiyama, JHEP [**1407**]{}, 080 (2014) \[arXiv:1405.4914 \[hep-ph\]\]. T. Cohen, M. Lisanti, A. Pierce and T. R. Slatyer, JCAP [**1310**]{}, 061 (2013) \[arXiv:1307.4082\]; J. Fan and M. Reece, JHEP [**1310**]{}, 124 (2013) \[arXiv:1307.4400 \[hep-ph\]\]; M. Baumgart, I. Z. Rothstein and V. Vaidya, arXiv:1412.8698 \[hep-ph\]. J. Hisano, K. Ishiwata and N. Nagata, Phys. Lett. B [**690**]{}, 311 (2010) \[arXiv:1004.4090 \[hep-ph\]\]; J. Hisano, K. Ishiwata and N. Nagata, Phys. Rev. D [**82**]{}, 115007 (2010) \[arXiv:1007.2601 \[hep-ph\]\]; J. Hisano, K. Ishiwata, N. Nagata and T. Takesako, JHEP [**1107**]{}, 005 (2011) \[arXiv:1104.0228 \[hep-ph\]\]. N. Nagata and S. Shirai, JHEP [**1501**]{}, 029 (2015) \[arXiv:1410.4549 \[hep-ph\]\]. H. Baer, V. Barger and D. Mickelson, Phys. Lett. 
B [**726**]{}, 330 (2013) \[arXiv:1303.3816 \[hep-ph\]\]. J. Hisano, K. Ishiwata and N. Nagata, Phys. Rev. D [**87**]{}, 035020 (2013) \[arXiv:1210.5985 \[hep-ph\]\]. J. Hisano, H. Murayama and T. Yanagida, Phys. Rev. Lett.  [**69**]{}, 1014 (1992). J. Hisano, H. Murayama and T. Yanagida, Nucl. Phys. B [**402**]{}, 46 (1993) [\[hep-ph/9207279\]]{}. J. Hisano, T. Kuwahara and N. Nagata, Phys. Lett. B [**723**]{}, 324 (2013) \[arXiv:1304.0343 \[hep-ph\]\]. N. Sakai and T. Yanagida, Nucl. Phys. B [**197**]{}, 533 (1982); S. Weinberg, Phys. Rev. D [**26**]{}, 287 (1982). V. Lucas and S. Raby, Phys. Rev. D [**55**]{}, 6986 (1997) \[hep-ph/9610293\]; K. S. Babu and M. J. Strassler, hep-ph/9808447; T. Goto and T. Nihei, Phys. Rev. D [**59**]{}, 115009 (1999) \[hep-ph/9808255\]. S. Dimopoulos and H. Georgi, Nucl. Phys. B [**193**]{}, 150 (1981). N. Sakai, Z. Phys. C [**11**]{}, 153 (1981). Y. Nomura and S. Shirai, Phys. Rev. Lett.  [**113**]{}, 111801 (2014) \[arXiv:1407.3785 \[hep-ph\]\]. T. T. Yanagida, in private communication. J. R. Ellis, M. K. Gaillard and D. V. Nanopoulos, [Phys. Lett. B [**88**]{}, 320 (1979)]{}. W. Siegel, Phys. Lett. B [**84**]{}, 193 (1979). S. Weinberg, Phys. Lett. B [**91**]{}, 51 (1980). L. J. Hall, Nucl. Phys. B [**178**]{}, 75 (1981). P. Nath, Phys. Rev. Lett.  [**76**]{}, 2218 (1996) \[hep-ph/9512415\]. P. Nath, Phys. Lett. B [**381**]{}, 147 (1996) \[hep-ph/9602337\]. B. Bajc, P. Fileviez Perez and G. Senjanovic, hep-ph/0210374. N. Nagata and S. Shirai, JHEP [**1403**]{}, 049 (2014) \[arXiv:1312.7854 \[hep-ph\]\]. J. Hisano, D. Kobayashi, T. Kuwahara and N. Nagata, JHEP [**1307**]{}, 038 (2013) \[arXiv:1304.3651 \[hep-ph\]\]. S. Weinberg, Phys. Rev. Lett.  [**43**]{}, 1566 (1979). F. Wilczek and A. Zee, Phys. Rev. Lett.  [**43**]{}, 1571 (1979). L. F. Abbott and M. B. Wise, Phys. Rev. D [**22**]{}, 2208 (1980). M. Liu and P. Nath, Phys. Rev. D [**87**]{}, 095012 (2013) \[arXiv:1303.7472 \[hep-ph\]\]. K. 
Abe [*et al.*]{} \[Super-Kamiokande Collaboration\], Phys. Rev. D [**90**]{}, 072005 (2014) \[arXiv:1408.1195 \[hep-ex\]\]. H. Murayama and A. Pierce, Phys. Rev. D [**65**]{}, 055009 (2002) \[hep-ph/0108104\]. Y. Aoki, E. Shintani and A. Soni, Phys. Rev. D [**89**]{}, 014505 (2014) \[arXiv:1304.7424 \[hep-lat\]\]. M. Dine, P. Draper and W. Shepherd, JHEP [**1402**]{}, 027 (2014) \[arXiv:1308.0274 \[hep-ph\]\]. M. Shiozawa, talk presented at TAUP 2013, September 8–13, Asilomar, CA, USA. K. S. Babu, E. Kearns, U. Al-Binni, S. Banerjee, D. V. Baxter, Z. Berezhiani, M. Bergevin and S. Bhattacharya [*et al.*]{}, arXiv:1311.5285 \[hep-ph\]. J. L. Evans, M. A. G. Garcia and K. A. Olive, JCAP [**1403**]{}, 022 (2014) \[arXiv:1311.0052 \[hep-ph\]\]. J. L. Evans, M. Ibe, K. A. Olive and T. T. Yanagida, Eur. Phys. J. C [**74**]{}, no. 2, 2775 (2014) \[arXiv:1312.1984 \[hep-ph\]\]. E. Dudas, C. Papineau and S. Pokorski, JHEP [**0702**]{}, 028 (2007) \[hep-th/0610297\]. H. Abe, T. Higaki, T. Kobayashi and Y. Omura, Phys. Rev. D [**75**]{}, 025019 (2007) \[hep-th/0611024\]; E. Dudas, A. Linde, Y. Mambrini, A. Mustafayev and K. A. Olive, Eur. Phys. J. C [**73**]{}, 2268 (2013) \[arXiv:1209.0499 \[hep-ph\]\]. H. Baer, A. Mustafayev, S. Profumo, A. Belyaev and X. Tata, Phys. Rev.  D [**71**]{}, 095008 (2005) \[arXiv:hep-ph/0412059\]; H. Baer, A. Mustafayev, S. Profumo, A. Belyaev and X. Tata, [*JHEP*]{} [**0507**]{} (2005) 065, hep-ph/0504001; J. R. Ellis, K. A. Olive and P. Sandick, Phys. Rev.  D [**78**]{}, 075012 (2008) \[arXiv:0805.2343 \[hep-ph\]\]. G. Aad [*et al.*]{} \[ATLAS Collaboration\], Phys. Rev. D [**88**]{}, 112006 (2013) \[arXiv:1310.3675 \[hep-ex\]\]. J. Ellis, K. Olive and Y. Santoso, Phys. Lett.  B [**539**]{}, 107 (2002) \[arXiv:hep-ph/0204192\]; J. R. Ellis, T. Falk, K. A. Olive and Y. Santoso, Nucl. Phys. B [**652**]{}, 259 (2003) \[arXiv:hep-ph/0210205\]. J. L. Evans, M. Ibe, K. A. Olive and T. T. Yanagida, arXiv:1412.3403 \[hep-ph\]. K. Harigaya, M. 
Ibe and T. T. Yanagida, JHEP [**1312**]{}, 016 (2013) \[arXiv:1310.0643 \[hep-ph\]\]; K. Harigaya, K. Kaneta and S. Matsumoto, Phys. Rev. D [**89**]{}, 115021 (2014) \[arXiv:1403.0715 \[hep-ph\]\]; J. L. Evans and K. A. Olive, Phys. Rev. D [**90**]{}, 115020 (2014) \[arXiv:1408.5102 \[hep-ph\]\]. J. L. Evans, M. Ibe, K. A. Olive and T. T. Yanagida, Eur. Phys. J. C [**74**]{}, 2931 (2014) \[arXiv:1402.5989 \[hep-ph\]\]. T. Nihei and J. Arafune, Prog. Theor. Phys.  [**93**]{}, 665 (1995) \[hep-ph/9412325\]. [^1]: This is another fine-tuning besides that for the Higgs mass. Note that the fine-tuning for the Higgs mass becomes worse as the $\mu$ parameter is taken to be larger, while the doublet-triplet fine-tuning becomes less severe. This tension may explain why $\mu$ is much larger than the electroweak scale [@Nomura:2014asa; @Yanagida:2014]. [^2]: The third condition is used to determine $g_G^2(Q_G)$. [^3]: This is the dominant contribution to proton decay, unless there is flavor violation in the sfermion sector. In this paper, we assume there is no flavor violation in the sfermion sector. The flavor violating case is discussed in Ref. [@Nagata:2013sba]. [^4]: Higgsino exchange dominates in this limit because the gauginos and Higgsinos in the one-loop diagrams are required to flip their chirality, and thus their contribution to proton decay is proportional to their masses, as can be seen from the expression for the function $F$ given in Eq. . [^5]: For more details of how we arrived at this expression see Appendix \[App:ProDec\]. [^6]: If the Kähler manifold for the first two generations is no-scale like, these models can explain $g-2$ experiments [@Evans:2013uza]. However, in this case the proton decay calculation is more complicated because of an additional wino contribution but should give a similar order of magnitude for the proton lifetime. [^7]: The $A$-terms are also regenerated by anomalies. However, they are too small to be of importance. 
[^8]: We refer to this extended range of Higgs masses to account for the uncertainty in the calculation of the Higgs mass. Note also that the Higgs masses calculated here differ slightly from those calculated in [@eioy2] as here we are not imposing strict gauge coupling unification at the GUT scale. [^9]: The present lower limit on the wino mass from the LHC experiment is about 270 GeV [@Aad:2013yna]. [^10]: The proton decay rate directly depends on $\mu$ through the loop function $F$ in Eq. . When $|\mu| \ll m_{3/2}$, $F\propto \mu /m_{3/2}^2$, while if $\mu \gg m_{3/2}$, $F\propto \log(\mu^2/m_{3/2}^2)/\mu$, as can be seen from Eq. . [^11]: Our calculations are only valid for $|\mu|$ much greater than the wino mass.
--- author: - 'G. G. Sacco' - 'S. Randich' - 'E. Franciosini' - 'R. Pallavicini' - 'F. Palla' bibliography: - 'biblio.bib' date: 'Received 22 September 2006; accepted 27 November 2006' title: 'Lithium depleted stars in the young $\sigma$ Ori cluster[^1]' --- [Knowledge of the age distribution of stars in young clusters and associations is crucial to constrain models of star formation. HR diagrams of different young clusters and associations suggest the presence of age spreads, but the influence of errors on the derived ages is still largely debated. Determination of lithium abundances in low-mass stars represents an alternative and robust way to infer stellar ages.]{} [We measured lithium in a sample of low mass members of the young (4-5 Myr) $\sigma$ Ori cluster with the main goal of investigating its star formation history.]{} [Using the FLAMES multi-object spectrograph on VLT/UT2, we obtained spectra of 98 candidate cluster members. The spectra were used to determine radial velocities, to infer the presence of H$\alpha$ emission, and to measure the strength of the Li [i]{} 670.8 nm absorption line.]{} [Using radial velocities, H$\alpha$ and Li, together with information on X-ray emission, we identified 59 high probability cluster members. Three of them show severe Li depletion. The nuclear ages inferred for these highly depleted stars exceed 10-15 Myr; for two of them these values are in good agreement with the isochronal age, while for the third star the nuclear age exceeds the isochronal one.]{} Introduction ============ The time scale of stellar birth within molecular clouds is one of the main open issues in star formation theory. Two scenarios have been proposed to explain the observational results: Rapid Star Formation (RSF) or Slow Accelerating Star Formation (SASF) [@Elmegreen2000ApJ; @Palla2002ApJ].
HR diagrams, which are the main tool to determine Pre-Main Sequence (PMS) stellar ages, show a spread of $\simgr$10$^7$ years in individual clusters and associations, supporting the SASF model, but the effect of various sources of errors, both observational and theoretical, is largely debated [@Hartmann2001AJ; @Burningham2005MNRAS]. Lithium (Li) abundances can be used as an independent and robust method to determine ages of young objects [@Martin1998MNRAS; @Martin1998AJ], since low-mass stars in the range 0.5–0.08 $M_{\odot}$ deplete their initial Li content during the PMS phase. The timescale of Li depletion depends on mass, with higher mass stars (0.5-0.3 M$_{\odot}$) starting to burn it after 5-10 Myr and lower mass stars (M$<$0.2 M$_{\odot}$) after 20-30 Myr. [@Palla2005ApJL] employed the Li age-dating method among low-mass members of the Orion Nebula Cluster and found four Li-depleted stars with nuclear ages $\sim 10$ Myr. The $\sigma$ Ori cluster was discovered by ROSAT [@Walter1997MmSAI] around the O9.5 V binary star $\sigma$ Ori AB (distance 350$^{+166}_{-85}$ pc). Its low-mass stellar population has been intensively studied by photometry and low-resolution spectroscopy in the optical and near-infrared, while the X-ray properties have been investigated most recently by [@Franciosini2006AA] using XMM-Newton. An isochronal median age of 4.2$^{+2.7}_{-1.5}$ Myr has been determined for the cluster. Subsequently, @Oliveira2004MNRAS presented evidence for a large age spread ($\sim$30 Myr) in the I/I-J Color-Magnitude Diagram (CMD), and suggested that part of it could be due to photometric variability of the PMS stars. @Kenyon2005MNRAS have found [*bona fide*]{} cluster members with small Li line equivalent widths (EW), possibly indicating a certain amount of depletion.
Finally, @Jeffries2006MNRAS have discovered two kinematically separate populations of young PMS stars, one concentrated around $\sigma$ Ori AB sharing a common radial velocity with this star (v$_1=31.0\pm 0.5$ km/s), and the second one, more dispersed in the sky, with a radial velocity similar to that of the Orion OB1a and 1b associations (v$_2=23.8\pm 1.1$ km/s). In this Letter, we report on the discovery of three high-probability members of the main $\sigma$ Ori cluster with Li abundances a factor of about 1000 below the interstellar value. This result was obtained as part of a VLT/FLAMES survey to study membership, Li abundances and accretion diagnostics of a large sample of K and M stars around $\sigma$ Ori AB (Sacco et al., in preparation). Observations and Results ======================== Target selection and observations --------------------------------- We have selected 98 cluster candidates from previous studies. In order to secure a high fraction of cluster members, we assigned higher priority to stars detected in X-rays by @Franciosini2006AA and with isochronal ages $\simle$10 Myr from optical and infrared CMDs. All stars of the sample have an infrared counterpart in the 2MASS catalog [@Skrutskie2006AJ], while for 79 of them optical photometry is available in the literature. The sample stars were observed using FLAMES on VLT/UT2 [@Pasquini2002Messanger]. FLAMES was operated in MEDUSA mode with the high resolution ($\lambda/\Delta\lambda$=17000) HR15N grating covering the spectral range 647-679 nm. The Field of View (FoV) was centered at RA=05$^{h}$38$^{m}$48$^s$.9 and Dec=-02$^{d}$34$^{m}$22$^s$ and is almost coincident with the FoV of XMM-Newton [@Franciosini2006AA]. The observations were divided into six runs between October and December 2004, for a total exposure time on target of 4.3 hours. Data reduction was performed using the GIRAFFE girBLDRS pipeline 1.12, following the standard steps [@Blecha2004Man].
Sky subtraction was performed separately using the average of six or seven sky spectra. Membership ---------- We have measured radial velocities (RVs) using the IRAF[^2] task FXCOR by Fourier cross-correlation with two template spectra chosen among sample stars with no accretion signatures. For 86 stars we derived a final RV as the weighted average of the six values measured in the different runs, while the remaining 12 stars turned out to be candidate binaries and will be discussed in a forthcoming paper. The cluster RV was determined by fitting the observed RV distribution, shown in Figure \[fig:RV\_dist\], with the weighted sum of two Gaussian distributions, one for the cluster and the other for the field. The best fit yields a value of v$_C$=30.91$\pm$0.90 km/s for the cluster and v$_F$=43$\pm$36 km/s for the field. Our cluster velocity is in excellent agreement with that (v$_C$=31.0$\pm$0.5 km/s) determined by @Jeffries2006MNRAS. We considered as cluster members stars with RVs within 3$\sigma$ of the mean, yielding 61 members and 25 field stars. The statistical contamination of the cluster member sample, estimated by integrating the field star distribution between v$_{C}-3\sigma_{C}$ and v$_{C}+3\sigma_{C}$, is $\sim$2 stars. The second requirement on membership is the presence of H$\alpha$ in emission. Among the 25 RV non-members, 24 are field stars also according to H$\alpha$, while one star, which appears young based on H$\alpha$, has a radial velocity (v=23.73$\pm$0.45 km/s) similar to that of the second population discovered by @Jeffries2006MNRAS. Note that all the non-members have Li I 670.8 nm pseudo-equivalent width (pEW) smaller than 200 mÅ. Finally, among the 61 RV cluster members, 56 are PMS stars also from H$\alpha$ emission and from the strength of the Li line; two stars with H$\alpha$ in absorption and no Li are most likely field stars.
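The per-star RV averaging and the 3$\sigma$ membership cut described above can be sketched as follows. This is a minimal illustration, using the fitted cluster parameters quoted in the text; the six per-run values for the example star are made up.

```python
import numpy as np

def weighted_rv(rvs, errs):
    """Inverse-variance weighted mean RV and its uncertainty over the runs."""
    w = 1.0 / np.asarray(errs) ** 2
    mean = np.sum(w * np.asarray(rvs)) / np.sum(w)
    return mean, np.sqrt(1.0 / np.sum(w))

def is_member(rv, v_c=30.91, sigma_c=0.90):
    """3-sigma RV membership criterion around the fitted cluster Gaussian."""
    return abs(rv - v_c) <= 3.0 * sigma_c

# six runs for one (made-up) star, all with the same 0.4 km/s uncertainty
rv, err = weighted_rv([30.8, 31.2, 30.5, 31.0, 30.9, 31.1], [0.4] * 6)
assert is_member(rv)
assert not is_member(23.73)  # the second-population star is excluded by the cut
```

With equal per-run errors the weighted average reduces to the plain mean, and the combined uncertainty shrinks as $1/\sqrt{N}$.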
  ---------------------- --------------- ------------------ ------------------- ---------------
  Object                 v$_r$           H$\alpha$ EW       Log L$_X$           PMS
                         (km/s)          (Å)                (erg/s)             indicator
  SE51$^a$               29.5$\pm$0.5    $-$1.1             29.22               variable
  SWW127$^b$             33.2$\pm$0.5    $-$2.6             29.51               --
  J053914.5-022834$^c$   29.2$\pm$0.5    $-$4.6             30.17               NIR ex.
  ---------------------- --------------- ------------------ ------------------- ---------------

The remaining three RV cluster members, listed in Table \[tab:info\], have H$\alpha$ in emission and Li pEW less than 200 mÅ. As shown in Figure \[fig:spectra\], the Li line is clearly identified (EW=150 mÅ) in the spectrum of SE51, but not in those of SWW127 and J053914.5-022834. These stars have an X-ray counterpart and appear as PMS stars in the CMDs. Moreover, J053914.5-022834 shows excess in K-L [@Oliveira2004MNRAS] and SE51 is characterized by photometric variability. Considering that only two out of 22 field stars in our sample have an X-ray counterpart (two field stars are outside the XMM FoV) and that the probability of having a field star with RV between v$_C-3\sigma_C$ and v$_C+3\sigma_C$ is 0.056, the probability of finding, among the RV cluster members, three field stars with an X-ray counterpart is less than 5$\times$10$^{-3}$. We conclude that, although physical membership must be confirmed by, e.g., proper motion studies, we regard it as unlikely that all of them are non-members. Lithium depleted stars ---------------------- pEWs of the Li line were measured using the IRAF task SPLOT, by direct integration of the line profile over an interval of 0.2 nm. Measured pEWs may be affected by spectral veiling. We have estimated $r$, the ratio of the excess emission to the photospheric continuum, for all stars of the sample from the measurement of the EWs of three absorption lines (V I 662.5 nm, Ni I 664.3 nm, Fe I 666.3 nm) in their spectra and in those of 11 unveiled comparison stars with similar temperatures (see [@Palla2005ApJL]).
In those cases where we were not able to measure at least two lines, because of low S/N or high veiling, we considered $r$ indeterminate, and the measured Li pEWs are then lower limits to the true values. Derived $r$ values range between 0 and 1.4, with errors of about 0.1-0.2. Figure \[fig:pew\] shows pEWs of the Li line, corrected for veiling, for stars with available optical photometry. The median pEW is 590 mÅ, with a typical dispersion of $\sim$100 mÅ. The scatter in Li pEWs could be due to measurement errors, but we cannot exclude the presence of some partially depleted stars. The three stars reported in Table \[tab:info\] are not veiled ($r$=0.0-0.2) and their Li pEWs are less than 200 mÅ, indicating a large amount of Li depletion.

  ------------------ ------------------------- ------------------------------- ------------ ---------------- ----------------- --------------- ----------------- ----------------
  Object             T$_{\rm eff}$ (K)         Log(L/L$_{\odot}$)              Li EW        A(Li)            M$_{\rm HRD}$     t$_{\rm HRD}$   M$_{\rm Li}$      t$_{\rm Li}$
                     (R,I/I,J)                 (R,I/I,J)                       (mÅ)                          (M$_{\odot}$)     (Myr)           (M$_{\odot}$)     (Myr)
  SE51               3750$\pm$20/3750$\pm$60   -0.91$\pm$0.01/-0.91$\pm$0.02   150$\pm$50   $-$0.5$\pm$0.5   0.52–0.53         12–13           $0.49 \pm 0.04$   $10.5 \pm 0.6$
  SWW127             3645$\pm$15/3720$\pm$60   -0.92$\pm$0.01/-0.94$\pm$0.02   $<$90        $<$-1.0          0.42–0.49         8.4–14.2        $\geq 0.44$       $\geq 10$
  J053914.5-022834   3345$\pm$70               -1.14$\pm$0.06                  $<$100       $<$-0.7          $0.23 \pm 0.05$   $4.5 \pm 1.5$   $\geq 0.4$        $\geq 15$
  ------------------ ------------------------- ------------------------------- ------------ ---------------- ----------------- --------------- ----------------- ----------------

We have determined Li abundances for these three stars using curves of growth based on the @Pavlenko2001AstRep models and spectral code. Surface gravity was fixed at $\log g$=4.0 dex, the value expected for a 4 Myr old late-type star from evolutionary models.
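The veiling correction and the conversion from logarithmic Li abundance to a depletion factor can be sketched as below. The $(1+r)$ rescaling is assumed here as the standard dilution correction for an excess continuum (it is not spelled out explicitly in the text), and the function names are illustrative.

```python
def deveil(pew_obs, r):
    """Correct an observed pseudo-equivalent width for veiling.

    Excess continuum emission dilutes the line by a factor (1 + r), where r
    is the ratio of the excess to the photospheric continuum, so the
    photospheric pEW is recovered by rescaling (assumed standard correction).
    """
    return pew_obs * (1.0 + r)

def depletion_factor(a_li, a_li_initial=3.3):
    """Factor below the interstellar Li abundance (A(Li) is logarithmic)."""
    return 10.0 ** (a_li_initial - a_li)

assert deveil(295.0, 1.0) == 590.0     # r = 1 halves the observed line depth
assert depletion_factor(-0.5) > 1000.0  # SE51: depleted by a factor >~ 1000
```

For A(Li) $\leq -0.5$ the depletion factor relative to A(Li)=3.3 exceeds $10^{3.8}$, consistent with the "factor of about 1000" quoted in the text.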
The effective temperature of the depleted stars is given in Table \[tab:lidep\]: in the case of J053914.5-022834 it has been derived from the spectral type measured by , while T$_{\rm eff}$ of SE51 and SWW127 are estimated from the color indexes R-I and I-J, using the models of . The resulting Li abundances are listed in Table \[tab:lidep\] and are a factor of $\simgr$ 1000 below the interstellar value (A(Li)=3.3). We stress that even relatively large errors in T$_{\rm eff}$ and/or log g would not greatly affect Li abundances (or depletion factors), given the extremely small pEWs (or upper limits to pEWs). Similarly, the low abundances cannot be explained by veiling, since it would imply a value of $r\gg 1$, or by other sources of error, such as uncertainty in the pseudo-continuum. Contrary to the suggestion of @Kenyon2005MNRAS, we believe that the measured low pEWs represent real Li depletion, since Pavlenko (2001) models provide a correct treatment of the Li line formation in cool objects and pEWs below $\sim$170 mÅ imply that only TiO contributes to the 670.8 nm feature.

Discussion and Conclusion
=========================

The three Li depleted, high probability members of the $\sigma$ Ori cluster represent an example of post T Tauri stars (following the definition of ), and allow us to investigate its star formation history from a novel point of view. Isochronal ages (t$_{\rm HRD}$) and masses (M$_{\rm HRD}$) for the three Li-depleted stars are listed in Table \[tab:lidep\]. These values have been derived from the models of @Palla1999ApJ using the T$_{\rm eff}$ and luminosity range given in the table. In the last two columns we list the nuclear mass (M$_{\rm Li}$) and age (t$_{\rm Li}$) estimated following [@Palla2005ApJL], based on [@Bildsten1997ApJ]. The latter provides analytical prescriptions to derive the age vs.
Li abundance relations for a fully convective star undergoing gravitational contraction at a nearly constant T$_{\rm eff}$ and assuming fast and complete mixing. In Figure \[fig:bild2\] we display the mass vs. age relations at fixed luminosity (positive slope) and Li abundance (negative slope) predicted by the models. The intersection of the two curves gives the mass and nuclear age of the star. We find that the age and mass of SE51 and SWW127 derived from Li are in very good agreement with the isochronal values obtained using both R–I and I–J colors. On the contrary, the measured amount of Li depletion for J053914.5-022834 is too large for the isochronal age and low mass from evolutionary tracks. Considering that most of the remaining stars of our sample have pEWs consistent with the interstellar value, our results support the view that the bulk of the $\sigma$ Ori population has an age $\simle$ 4–6 Myr. However, three low-mass, high probability members are definitely older than $\sim$10 Myr. Therefore, we propose that $\sigma$ Ori has been forming stars on a time scale of $>$10-15 Myr, as found in the case of the Orion Nebula Cluster. Finally, we note that our sample has been selected requiring an isochronal age from CMDs less than $\sim$10 Myr. This bias has precluded the identification of additional Li-depleted, old stars that might exist in the $\sigma$ Ori cluster. Further observations at low spectral resolution to derive stellar parameters, as well as at high resolution to measure lithium abundances on a larger sample of stars, can help to fully characterize the star formation history of the cluster.

[^1]: Based on data collected at the ESO Very Large Telescope, Paranal Observatory, Chile \[program 074.D-0136(A)\]

[^2]: IRAF is distributed by the National Optical Astronomy Observatory, which is operated by the Association of Universities for Research in Astronomy, Inc., under contract to the National Science Foundation.
--- abstract: 'We investigate the particle and kinetic-energy densities for a system of $N$ fermions bound in a local (mean-field) potential $V(\bfr)$. We generalize a recently developed semiclassical theory \[J. Roccia and M. Brack, Phys. Rev. Lett. [**100**]{}, 200408 (2008)\], in which the densities are calculated in terms of the closed orbits of the corresponding classical system, to $D>1$ dimensions. We regularize the semiclassical results $(i)$ for the U(1) symmetry breaking occurring for spherical systems at $r=0$ and $(ii)$ near the classical turning points where the Friedel oscillations are predominant and well reproduced by the shortest orbit going from $r$ to the closest turning point and back. For systems with spherical symmetry, we show that there exist two types of oscillations which can be attributed to radial and non-radial orbits, respectively. The semiclassical theory is tested against exact quantum-mechanical calculations for a variety of model potentials. We find a very good overall numerical agreement between semiclassical and exact numerical densities even for moderate particle numbers $N$. Using a “local virial theorem”, shown to be valid (except for a small region around the classical turning points) for arbitrary local potentials, we can prove that the Thomas-Fermi functional $\tau_{\text{TF}}[\rho]$ reproduces the oscillations in the quantum-mechanical densities to first order in the oscillating parts.' author: - 'J. Roccia, M. Brack, and A. Koch' title: Semiclassical theory for spatial density oscillations in fermionic systems --- Introduction {#secint} ============ Recent experimental success confining fermion gases in magnetic traps [@jin] has led to renewed interest in theoretical studies of confined degenerate fermion systems at zero [@vig1; @glei; @bvz; @mar1; @mar2; @mar3; @vig2; @homa; @bm; @mue] and finite temperatures [@akde; @zbsb]. 
According to the density functional theory (DFT) [@hk; @ks; @dft], the local particle density $\rho({\bfr})$ is the key ingredient of a system of interacting fermions in that it contains all information about its ground state. In this paper we study the oscillations in the particle density $\rho({\bfr})$ and in different forms of the kinetic-energy density of $N$ fermions bound in a local potential $V(\bfr)$. Although we treat the particles as non-interacting, we keep in mind that this potential models the self-consistent Kohn-Sham (KS) potential obtained for an [*interacting system in the mean-field approximation*]{}. We shall also consider potentials with infinitely steep walls, so-called “billiards”, which have been shown to be good approximations to the self-consistent mean fields of quantum dots [@qdot] or metal clusters [@sciam] with many particles. A semiclassical theory for spatial density oscillations has been developed recently in [@rb]. Using Gutzwiller’s semiclassical Green function [@gubu], expressions for the oscillating parts of spatial densities of fermionic systems were given in terms of the [*closed orbits*]{} of the corresponding classical system. The semiclassical theory was shown in [@rb] to reproduce very accurately the quantum oscillations in the spatial densities of one-dimensional systems, even for moderate particle numbers $N$, and some general results have also been given for arbitrary higher-dimensional spherical potentials $V(r)$. In this paper, we present in more detail the semiclassical closed-orbit theory developed in [@rb] and apply it explicitly for a variety of potentials in $D>1$ dimensions. We find overall a good agreement between the quantum-mechanical and the semiclassical densities. The paper is organized as follows. In we give the basic definitions of the quantum-mechanical spatial densities.
In we discuss the asymptotic (extended) Thomas-Fermi (TF) limits for $N\to\infty$ and emphasize the existence of two types of density oscillations occurring in potentials for $D>1$ with spherical symmetry (except for isotropic harmonic oscillators). is devoted to the semiclassical closed-orbit theory for spatial density oscillations. In Secs. \[secpot\] - \[secscl1dim\] we review the basic equations and former results, including also details that were not presented in [@rb]. In Secs. \[secrad\] and \[secnonrad\] we extend the semiclassical theory to higher-dimensional systems ($D>1$) and test its results for various model potentials against exact quantum-mechanical densities. In Sect. \[secregul\] we discuss the regularization necessary in spherical systems for $D>1$ near the center ($r=0$), where a U(1) symmetry breaking occurs for $r>0$. In a separate publication [@circ], we have presented the analytical determination and classification of all closed orbits in the two-dimensional circular billiard and give analytical results of the semiclassical theory for the spatial density oscillations in this system. Some of the numerical results for the densities are included in of the present paper. In we present regularizations of the spatial densities near the classical turning points, where the semiclassical theory diverges, both for smooth potentials and for billiard systems. contains some general results valid for finite fermion systems such as trapped fermionic gases or metallic clusters. We discuss there, in particular, a “local virial theorem” and, as its direct consequence, the extended validity of the TF functional $\tau_{\text{TF}}[\rho]$. Throughout this paper, we only treat the zero-temperature ground state of an $N$-particle system. In the Appendix \[appcor\], we outline how to include finite temperatures for grand-canonical ensembles in the semiclassical theory. 
The quantum-mechanical densities {#secbas}
================================

Basic definitions and ingredients {#secdef}
---------------------------------

Let us recall some basic quantum-mechanical definitions, using the same notation as in [@bm]. We start from the stationary Schrödinger equation for particles with mass $m$, bound by a local potential $V(\bfr)$ with a discrete energy spectrum $\{E_n\}$: $$\left\{-\frac{\hbar^2}{2m}\,\nabla^2 + V(\bfr)\right\} \phi_n(\bfr) = E_n\, \phi_n(\bfr).$$ \[seq\] We order the spectrum and choose the energy scale such that $0 < E_1 \leq E_2 \leq \dots \leq E_n \leq \dots$. We consider a system with an even number $N$ of fermions with spin $s=1/2$ filling the lowest levels, and define the particle density by $$\rho(\bfr) := 2\!\!\sum_{E_n\leq \lambda}\!\! |\phi_n(\bfr)|^2, \qquad \int \rho(\bfr)\, \d^D r = N.$$ \[rho\] Hereby $\lambda$ is the Fermi energy and the factor 2 accounts for the fact that due to spin and time-reversal symmetry, each state $n$ is at least two-fold degenerate. Further degeneracies, which may arise for $D>1$, will not be spelled out but included in the summations over $n$. For the kinetic-energy density, we consider two different definitions [@foot] $$\begin{aligned} \tau(\bfr) \; &:=& \; - \frac{\hbar^2}{2m}\; 2\!\! \sum_{E_n\leq \lambda} \!\! \phi_n^*(\bfr)\nabla^2 \phi_n(\bfr)\,, \label{tau} \qquad\\ \tau_1(\bfr) \; &:=& \; \frac{\hbar^2}{2m}\; 2\!\! \sum_{E_n\leq \lambda} \!\! |\nabla\phi_n(\bfr)|^2, \label{tau1}\end{aligned}$$ which upon integration both yield the exact total kinetic energy. Due to the assumed time-reversal symmetry, the two above functions are related by $$\tau(\bfr) = \tau_1(\bfr) - \frac{1}{2}\,\frac{\hbar^2}{2m}\, \nabla^2 \rho(\bfr).$$ \[taurel\] An interesting, and for the following discussion convenient quantity is their average $$\xi(\bfr) := \frac{1}{2}\left[\tau(\bfr)+\tau_1(\bfr)\right].$$
\[xi\] For harmonic oscillators it has been observed [@bvz; @bm; @rkb1] that inside the system (i.e., sufficiently far from the surface region), $\xi(\bfr)$ is a smooth function of the coordinates, whereas $\tau(\bfr)$ and $\tau_1(\bfr)$, like the density $\rho(\bfr)$, exhibit characteristic shell oscillations that are opposite in phase for $\tau$ and $\tau_1$. We can express $\tau(\bfr)$ and $\tau_1(\bfr)$ in terms of $\xi(\bfr)$ and $\nabla^2\rho(\bfr)$: $$\begin{aligned} \tau(\bfr) &=& \xi(\bfr) - \frac{1}{4}\,\frac{\hbar^2}{2m}\,\nabla^2\rho(\bfr),\label{tauxi2}\\ \tau_1(\bfr) &=& \xi(\bfr) + \frac{1}{4}\,\frac{\hbar^2}{2m}\,\nabla^2\rho(\bfr), \label{tauxi}\end{aligned}$$ so that $\rho(\bfr)$ and $\xi(\bfr)$ can be considered as the basic densities characterizing our systems. Eqs.  – are exact for arbitrary potentials $V(\bfr)$. For any even number $N$ of particles they can be computed once the quantum-mechanical wave functions $\phi_n(\bfr)$ are known. As mentioned in the introduction, the potential $V(\bfr)$ can be considered to represent the self-consistent mean field of an interacting system of fermions obtained in the DFT approach. The single-particle wavefunctions $\phi_n(\bfr)$ are then the Kohn-Sham orbitals [@ks] and $\rho(\bfr)$ is (ideally) the ground-state particle density of the interacting system. For later reference we express the densities – in terms of the Green function in the energy representation, which in the basis $\left\{\phi_n(\bfr)\right\}$ is given by $$G(E,\bfr,\bfr') = \sum_n \frac{\phi_n(\bfr)\,\phi_n^*(\bfr')}{E + i\epsilon - E_n}\,, \qquad (\epsilon>0)\,.$$
\[green\] Using the identity $\displaystyle 1/(E+i \epsilon -E_n)={\cal P} [1/(E-E_n)]-i \pi \delta (E-E_n)$, where ${\cal P}$ is the Cauchy principal value, one can write the densities as $$\begin{aligned} \rho({\bf r})&=&-\frac{1}{\pi}\, \text{Im} \int_0^{\lambda} \d E \, G(E,{\bf r},{\bf r}') |_{{\bf r'}={\bf r}} \,, \label{rhog} \\ \tau({\bf r})&=& \frac{\hbar^2}{2\pi m}\, \text{Im} \int_0^{\lambda} \d E \, \nabla^2_{{\bf r}'}G(E,{\bf r},{\bf r}') |_{{\bf r}'={\bf r}} \,, \label{taug}\\ \tau_1({\bf r})&=& -\frac{\hbar^2}{2\pi m}\, \text{Im} \int_0^{\lambda} \d E \, \nabla_{{\bf r}}\nabla_{{\bf r}'}G(E,{\bf r},{\bf r}') |_{{\bf r}'={\bf r}} \,, \ \ \ \ \label{tau1g}\end{aligned}$$ whereby the subscript of the nabla operator $\nabla$ denotes the variable on which it acts. The density of states $g(E)$ of the system is given by a sum of Dirac delta functions, which can be expressed as a trace integral of the Green function: $$g(E) = \sum_n \delta(E-E_n) = -\frac{1}{\pi}\,\text{Im} \int \d^D r\, G(E,\bfr,\bfr')|_{{\bf r}'={\bf r}}.$$ \[dos\] The particle number can then also be obtained as $$N = N(\lambda) = 2\int_0^{\lambda}\! \d E\, g(E).$$ \[intdos\] Due to the discreteness of the spectrum, $N(\lambda)$ is a monotonously increasing staircase-function and consequently the function $\lambda(N)$, too, is a monotonously increasing staircase-function.

Asymptotic quantum-mechanical results {#secexa}
-------------------------------------

### Thomas-Fermi limits and oscillating parts {#sectf}

In the limit $N\to\infty$, the densities are expected to go over into the approximations obtained in the Thomas-Fermi (TF) theory [@matf]. These are given, for any local potential $V(\bfr)$, by $$\rho_{\text{TF}}(\bfr) = \frac{2}{\Gamma(D/2+1)} \left(\frac{m}{2\pi\hbar^2}\right)^{\!D/2} \left[\lambda_{\text{TF}}-V(\bfr)\right]^{D/2},$$ \[rhotf\] $$(\tau_1)_{\text{TF}}(\bfr) = \tau_{\text{TF}}(\bfr) = \xi_{\text{TF}}(\bfr)\,,$$ \[xitf\] $$\tau_{\text{TF}}(\bfr) = \frac{2}{\Gamma(D/2+1)}\,\frac{D}{D+2} \left(\frac{m}{2\pi\hbar^2}\right)^{\!D/2} \left[\lambda_{\text{TF}}-V(\bfr)\right]^{D/2+1}.$$
\[tautf\] These densities are defined only in the classically allowed regions where $\lambda_{\text{TF}}\geq V(\bfr)$, and the Fermi energy $\lambda_{\text{TF}}$ is defined such as to yield the correct particle number $N$ upon integration of $\rho_{\text{TF}}(\bfr)$ over all space. The direct proof that the quantum-mechanical densities, as defined in in terms of the wavefunctions of a smooth potential, reach the above TF limits for $N\to\infty$ is by no means trivial. It has been given for isotropic harmonic oscillators in arbitrary dimensions in Ref. [@bm]. The TF densities – fulfill the following functional relation: $$\begin{aligned} \hspace{-.4cm} \tau_{\text{TF}}(\bfr) &=& \tau_{\text{TF}}[\rho_{\text{TF}}(\bfr)]\,\nonumber\\ &=& \frac{\hbar^2}{2m} \frac{4\pi D}{(D+2)}\!\left[\frac{D}{4} \Gamma\left(\frac{D}{2}\right)\right]^{\!2/D}\!\rho_{\text{TF}}^{1+2/D}(\bfr)\,,~~~ \label{tautff}\end{aligned}$$ which will be investigated further below. For smooth potentials in $D>1$ dimensions, next-to-leading order terms in $1/N$ modify the smooth parts of the spatial densities, which are obtained in the extended Thomas-Fermi (ETF) model as corrections of higher order in $\hbar$ through an expansion in terms of gradients of the potential [@kirk]. These corrections usually diverge at the classical turning points and can only be used sufficiently far from the turning points, i.e., in the interior of the system. We do not reproduce the ETF densities here but refer to [@book] (chapter 4) where they are given for arbitrary smooth potentials in $D=2$ and 3 dimensions, and to [@bm] where explicit results are given for spherical harmonic oscillators in $D=2$ and 4 dimensions. This leads us to decompose the densities in the following way: $$\begin{aligned} \rho(\bfr) & = & \rho_{\text{ETF}}(\bfr) + \delta\rho(\bfr), \label{rhodec}\\ \tau(\bfr) & = & \tau_{\text{ETF}}(\bfr) + \delta\tau(\bfr), \label{taudec}\\ \tau_1(\bfr) & = & (\tau_1)_{\text{ETF}}(\bfr) + \delta\tau_1(\bfr), \label{tau1dec}\\ \xi(\bfr) & = & \xi_{\text{ETF}}(\bfr) + \delta\xi(\bfr). \label{xidec}\end{aligned}$$ For $D=1$ and for billiard systems [@notebil], the subscript ETF can be replaced by TF and the explicit relations – hold.
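As a quick numerical consistency check of the $D$-dimensional coefficient in the functional relation above (our own check, not part of the paper), it can be verified that it reduces to the familiar Thomas-Fermi coefficients in two and three dimensions:

```python
from math import gamma, pi

# Consistency check (ours, not from the paper) of the coefficient in the
# TF functional tau_TF[rho]: in units hbar^2/2m = 1,
#   c(D) = (4*pi*D/(D+2)) * ((D/4)*Gamma(D/2))**(2/D).
# For D = 3 this must equal the familiar (3/5)*(3*pi^2)**(2/3), and for
# D = 2 it must equal pi (where tau_TF = pi*rho_TF^2 in these units).
def c_tf(D):
    return (4.0 * pi * D / (D + 2)) * ((D / 4.0) * gamma(D / 2.0))**(2.0 / D)

print(c_tf(2), c_tf(3))
assert abs(c_tf(2) - pi) < 1e-12
assert abs(c_tf(3) - (3.0 / 5.0) * (3.0 * pi**2)**(2.0 / 3.0)) < 1e-12
```

The agreement follows analytically from $\Gamma(D/2+1) = (D/2)\,\Gamma(D/2)$.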
The oscillating parts $\delta\rho(\bfr)$ etc. are the main objects of this paper.

### Two types of oscillating parts in spherical systems {#secsep}

We have investigated the density oscillations in various potentials in $D>1$ dimensions with [*radial symmetry*]{} such that $V(\bfr)=V(r)$, where $r=|\bfr|$. We found that, generally, there exist two types of oscillations in their spatial densities:\
($i$) regular, short-ranged oscillations with a constant wavelength in the radial variable $r$ over the whole region, and\
($ii$) irregular, long-ranged oscillations whose wavelength decreases with increasing $r$.\
An example is shown in for a spherical billiard with unit radius containing $N=100068$ particles. Note the irregular, long-ranged oscillations of $\xi(r)$ around its bulk value [@notebil] $\xi_{\text{TF}}$ seen in the upper panel. In the lower panel, where we exhibit only an enlarged region around the bulk value, we see that $\tau(r)$ and $\tau_1(r)$ oscillate regularly around $\xi(r)$, but much faster than $\xi(r)$ itself and with opposite phases. The same two types of oscillations are also found in the particle density $\rho(r)$.

![\[osc\] Kinetic-energy density profiles of a $3D$ spherical billiard with $N=100068$ particles (units: $\hbar^2\!/2m=R=1$; all densities are divided by $N^{5/3}$). [*Upper panel:*]{} $\xi(r)$ (solid line) and its constant TF value $\xi_{\text{TF}}$ (dashed). [*Lower panel:*]{} $\tau(r)$ (dashed), $\tau_1(r)$ (dotted) and $\xi(r)$ (solid line). Note that in both panels, the vertical scale does not start at zero. ](fig1.eps){width="1.15\columnwidth"}

For radial systems, we can thus decompose the oscillating parts of the spatial densities defined in – as follows: $$\begin{aligned} \delta\rho(r) & = & \delta_{\text{r}}\rho(r) + \delta_{\text{irr}}\rho(r), \label{drhodec}\\ \delta\tau(r) & = & \delta_{\text{r}}\tau(r) + \delta_{\text{irr}}\tau(r), \label{dtaudec}\\ \delta\tau_1(r) & = & \delta_{\text{r}}\tau_1(r) + \delta_{\text{irr}}\tau_1(r), \label{dtau1dec}\\ \delta\xi(r) & = & \delta_{\text{irr}}\xi(r).\end{aligned}$$
\[dxidec\] Here the subscript “r” denotes the regular, short-ranged parts of the oscillations, while their long-ranged, irregular parts are denoted by the subscript “irr”. We emphasize that this separation of the oscillating parts does not hold close to the classical turning points. As we see in and in later examples, the oscillating parts defined above fulfill the following properties in the interior of the system (i.e., except for a small region around the classical turning points):

a\) For $D>1$, the irregular oscillating parts of $\tau(r)$ and $\tau_1(r)$ are asymptotically identical and equal to $\delta\xi(r)$: $$\delta_{\text{irr}}\tau(r) \simeq \delta_{\text{irr}}\tau_1(r) \simeq \delta_{\text{irr}}\xi(r) = \delta\xi(r).$$

b\) The irregular oscillations are [*absent*]{} (i.e., asymptotically zero) in the densities of all potentials in $D=1$ and, also, in isotropic harmonic oscillators (see Ref. [@bm]) and in linear potentials (see [@nlvt]) for arbitrary $D$.

c\) The regular oscillating parts of $\tau(r)$ and $\tau_1(r)$ are asymptotically equal with opposite sign: $$\delta_{\text{r}}\tau(r) \simeq -\,\delta_{\text{r}}\tau_1(r).$$ (This relation holds in particular for isotropic harmonic oscillators, for which it has been derived [@bm] asymptotically for $N\to\infty$ from quantum mechanics.)

These numerical findings will be understood and explained within the semiclassical theory developed in the following. Henceforth, the symbol $\delta$ will always denote the sum of both types of oscillating parts and the subscripts will only be used if reference is made to one particular type of oscillations.

Semiclassical closed-orbit theory {#secscl}
=================================

In this section we present the semiclassical theory, initiated by Gutzwiller (see [@gutz] and earlier references quoted therein, and [@gubu]), for the approximate description of quantum oscillations in terms of classical orbits. In we recall the trace formula for the density of states, and in we present the newly developed theory for spatial density oscillations [@rb].
In both cases, we limit ourselves – as in the previous section – to $N$ non-interacting fermions in a local potential $V(\bfr)$. The inclusion of finite temperatures in the semiclassical theory is dealt with in Appendix \[appcor\].

Brief review of periodic orbit theory for the density of states {#secpot}
---------------------------------------------------------------

Before deriving semiclassical expressions for the spatial densities, we remind the reader of the periodic orbit theory (POT) for the density of states. The starting point is the semiclassical approximation of the Green function which was derived by Gutzwiller [@gubu]: $$G_{\text{scl}}(E,{\bf r,r'}) = \alpha_{_D}\sum_{\gamma} \sqrt{|{\cal D}_\gamma|} \,e^{\frac{i}{\hbar} S_\gamma(E,{\bf r,r'})-i \mu_\gamma\frac{\pi}{2}}. \label{sclgreen}$$ The sum runs over all classical trajectories $\gamma$ leading from a point $\bfr$ to the point $\bfr'$ at fixed energy $E$. $S_\gamma(E,{\bf r,r'})$ is the action integral taken along the trajectory $\gamma$ $$S_\gamma(E,{\bf r,r'}) = \int_{\bfr}^{\bfr'} {\bf p}(E,{\bf q})\cdot \d{\bf q}\,,$$ \[actint\] whereby ${\bf p}(E,{\bf r})$ is the classical momentum with modulus $$p(E,{\bf r}) = \sqrt{2m\left[E-V({\bf r})\right]}\,,$$ \[pclass\] defined only inside the classically allowed region where $E\ge V(\bfr)$. ${\cal D}_\gamma$ is the Van Vleck determinant: $${\cal D}_\gamma = \frac{m^2}{p\,p'}\,{\cal D}_\perp\,, \qquad {\cal D}_\perp = \det\left(\partial\bfp_{\bot}/\partial\bfr_{\!\bot}'\right),$$ \[vleckdet\] where $\bfp_{\bot}$ and $\bfr_{\!\bot}'$ are the initial momentum and final coordinate, respectively, [*transverse*]{} to the orbit $\gamma$. The Morse index $\mu_\gamma$ counts the sign changes of the eigenvalues of the Van Vleck determinant along the trajectory $\gamma$ between the points $\bfr$ and $\bfr'$; it is equal to the number of conjugate points along the trajectory [@conjp]. The prefactor in is given by $$\alpha_{_D} = 2\pi\,(2\pi i \hbar)^{-(D+1)/2}.$$ The approximation of the Green function is now inserted into the r.h.s. of for the density of states $g(E)$.
Since $\bfr'=\bfr$ in the trace integral of , only closed orbits contribute to it. The running time $T_\gamma(E,\bfr)$ of these orbits, i.e., the time it takes the classical particle to run through the closed orbit, is given by $$T_\gamma(E,\bfr) = \frac{\partial S_\gamma(E,\bfr,\bfr)}{\partial E}\,.$$ \[time\] It was shown by Berry and Mount [@bemo] that to leading order in $\hbar$, the orbits with zero running time, $T_\gamma(E,\bfr)=0$, yield the smooth TF value of $g(E)$. In systems with $D>1$ higher-order terms in $\hbar$ also contribute, which can also be obtained from the ETF model (see, e.g., chapter 4 of [@book]). Separating smooth and oscillatory parts of the density of states by defining $$g(E) := \widetilde{g}(E) + \delta g(E),$$ \[dossep\] the oscillating part $\delta g(E)$ is, to leading order in $\hbar$, given by the [*semiclassical trace formula*]{} $$\delta g(E) \simeq \sum_{\text{PO}} {\cal A}_{\text{PO}}(E) \cos\!\left[\frac{1}{\hbar}\,S_{\text{PO}}(E) - \sigma_{\text{PO}}\frac{\pi}{2}\right],$$ \[trf\] where the sum runs over all [*periodic orbits*]{} (POs). For systems in which all orbits are isolated in phase space, Gutzwiller [@gutz] derived explicit expressions for the amplitudes ${\cal A}_{\text{PO}}(E)$, which depend on the stability of the orbits, and for the Maslov indices $\sigma_{\text{PO}}$. Performing the trace integral in along all directions transverse to each orbit $\gamma$ in the stationary phase approximation (SPA) leads immediately to the periodicity of the contributing orbits. The Maslov index $\sigma_{\text{PO}}$ collects all phases occurring in and in the SPA for the trace integral (see [@masl] for detailed computations of $\sigma_{\text{PO}}$). It has been shown [@crl] that $\sigma_{\text{PO}}$ is a canonical and topological invariant property of any PO. $S_{\text{PO}}(E)$ is the closed action integral $$S_{\text{PO}}(E) = \oint_{\text{PO}} \bfp(E,{\bf q})\cdot \d{\bf q}\,.$$ \[spo\] For smooth one-dimensional potentials, the trace formula is particularly simple and reads $$\delta g^{(D=1)}(E) = \frac{T_1(E)}{\pi\hbar} \sum_{k=1}^{\infty} (-1)^k \cos\!\left[\frac{k}{\hbar}\,S_1(E)\right],$$ \[trf1\] where the sum is over the repetitions $k\geq 1$ of the primitive orbit with action $S_1(E)$ and period $T_1(E)=S'_1(E)$.
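As a concrete illustration of the one-dimensional trace formula (our own sketch, not from the paper), consider the harmonic oscillator with $\hbar=\omega=1$, for which $S_1(E)=2\pi E$ and $T_1=2\pi$; the truncated alternating sum then builds up peaks precisely at the WKB spectrum $E_n = n + 1/2$:

```python
import numpy as np

# Hedged sketch: 1D harmonic oscillator (hbar = omega = 1), where
# S_1(E) = 2*pi*E and T_1 = 2*pi, so the trace-formula sum reduces to
#   delta_g(E) = 2 * sum_{k>=1} (-1)^k * cos(2*pi*k*E).
# Truncated at K terms, it peaks at the spectrum E_n = n + 1/2.
def delta_g(E, K=200):
    k = np.arange(1, K + 1)
    return 2.0 * np.sum((-1.0)**k * np.cos(2.0 * np.pi * k * E))

on_level = delta_g(5.5)    # all terms add coherently at E = n + 1/2
off_level = delta_g(5.0)   # terms cancel pairwise between levels
print(on_level, off_level)
assert on_level > 100.0 and abs(off_level) < 1e-6
```

The $(-1)^k$ factor is the Maslov phase of $k$ repetitions (two turning-point reflections per period), which shifts the peaks from integer to half-integer values of $E$.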
Equation is equivalent to the sum of delta functions in , using the spectrum obtained in the WKB approximation [@book; @beta]. For systems with $D>1$ with continuous symmetries (and hence also for integrable systems), the same type of trace formula holds, but the summation includes all degenerate families of periodic orbits and the amplitudes ${\cal A}_{\text{PO}}(E)$ and indices $\sigma_{\text{PO}}$ have different forms. For an overview of various trace formulae and the pertinent literature, as well as many applications of the POT, we refer to [@book].

Semiclassical approximation to the spatial densities {#secsclden}
----------------------------------------------------

In order to derive semiclassical expressions for the spatial densities defined in , we start from the expressions given in the equations – , which are functions of $\bfr$ and the Fermi energy $\lambda$, and replace the exact Green function $G(E,{\bf r,r'})$ by its semiclassical expansion . The energy integration can be done by parts, using and , and to leading order in $\hbar$ we obtain for the particle density $$\rho(\lambdab,{\bf r}) \simeq -\frac{2}{\pi}\,\text{Im}\left[\alpha_{_D} \sum_{\gamma} \frac{\hbar}{i\,T_\gamma(\lambdab,\bfr)}\, \sqrt{|{\cal D}_\gamma|}\; e^{\frac{i}{\hbar} S_\gamma(\lambdab,{\bf r,r}) - i\mu_\gamma\frac{\pi}{2}}\right].$$ \[rhosc\] Again, the orbits with zero running time $T(E,\bfr)=0$ yield, to leading order in $\hbar$, the smooth TF particle density ; the proof given in [@bemo] for the density of states applies also to the spatial densities discussed here. Like for the density of states, higher-order $\hbar$ corrections contribute also to the smooth part of $\rho(\bfr)$ in $D>1$ and will be included in their ETF expressions. The periodic orbits (POs), too, can only contribute to the smooth part of $\rho(\bfr)$, since their action integrals are independent of $\bfr$ and hence the phase in the exponent of is constant. Thus, [*a priori*]{} only [*non-periodic orbits*]{} (NPOs) contribute to the oscillating part of $\rho(\bfr)$.
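The partial-integration step invoked above can be spelled out as follows (our own sketch, with $f(E)$ collecting the slowly varying amplitude factors of one trajectory $\gamma$, and using $\partial S_\gamma/\partial E = T_\gamma$):

```latex
\int_0^{\lambda}\!\d E\; f(E)\, e^{\frac{i}{\hbar}S_\gamma(E)}
 = \left[\frac{\hbar}{i\,T_\gamma(E)}\, f(E)\,
    e^{\frac{i}{\hbar}S_\gamma(E)}\right]_{E=0}^{E=\lambda}
 - \int_0^{\lambda}\!\d E\;
   \frac{\d}{\d E}\!\left[\frac{\hbar\, f(E)}{i\,T_\gamma(E)}\right]
   e^{\frac{i}{\hbar}S_\gamma(E)}\,.
```

The boundary term at $E=\lambda$ gives the leading contribution; the remaining integral carries an extra factor of $\hbar$ and is therefore of higher order.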
The same holds also for the other spatial densities, so that we can write their semiclassical approximations as [@rb]: $$\begin{aligned} \delta\rho({\bf r}) &\simeq& -\frac{2}{\pi}\,\text{Im}\left[\alpha_{_D} \sum_{\text{NPO}} \frac{\hbar}{i\,T}\, \sqrt{|{\cal D}|}\; e^{i\Phi(\lambdab,{\bf r})}\right], \label{drhosc}\\ \delta\tau({\bf r}) &\simeq& -\frac{2}{\pi}\,\text{Im}\left[\alpha_{_D} \sum_{\text{NPO}} \frac{p^2(\lambdab,{\bf r})}{2m}\, \frac{\hbar}{i\,T}\, \sqrt{|{\cal D}|}\; e^{i\Phi(\lambdab,{\bf r})}\right], \label{dtausc}\\ \delta\tau_1({\bf r}) &\simeq& -\frac{2}{\pi}\,\text{Im}\left[\alpha_{_D} \sum_{\text{NPO}} Q(\lambdab,\bfr)\, \frac{p^2(\lambdab,{\bf r})}{2m}\, \frac{\hbar}{i\,T}\, \sqrt{|{\cal D}|}\; e^{i\Phi(\lambdab,{\bf r})}\right]. \label{dtau1sc}\end{aligned}$$ The sums are only over [*non-periodic orbits*]{} (NPOs) that lead from a point $\bfr$ back to the same point $\bfr$. For convenience, we have omitted the subscript “NPO” from all quantities in the above equations. The phase function $\Phi(\lambdab,\bfr)$ is given by $$\Phi(\lambdab,{\bf r}) = S(\lambdab,{\bf r,r})/\hbar - \mu\,\frac{\pi}{2}\,.$$ \[phase\] The quantity $Q(\lambdab,\bfr)$ appearing in for $\delta\tau_1(\bfr)$ is defined as $$Q(\lambdab,\bfr) = \frac{\bfp\cdot\bfp'}{p^2(\lambdab,\bfr)} = \cos\theta\,,$$ \[mismatch\] where $\bfp$ and $\bfp'$ are the short notations for the initial and final momentum, respectively, of a given closed orbit $\gamma$ at the point $\bfr$. These are obtained also from the action integral by the canonical relations $$\left.\nabla_{\bfr} S_\gamma(\lambdab,{\bf r,r'})\right|_{\bfr=\bfr'} = -\bfp\,, \qquad \left.\nabla_{\bfr'} S_\gamma(\lambdab,{\bf r,r'})\right|_{\bfr=\bfr'} = \bfp'\,.$$ \[pcanon\] Since $Q$ in depends on the angle $\theta$ between $\bfp$ and $\bfp'$, it may be called the “momentum mismatch function”, being +1 for $\bfp=\bfp'$ (i.e., for POs) and $-1$ for $\bfp=-\bfp'$ (e.g., for self-retracing NPOs). Note that the upper limit $\lambda$ of the energy integral in – has been replaced here by the smooth Fermi energy $\lambdab$ defined by $$N = 2\int_0^{\lambdab}\!\d E\; \widetilde{g}(E)\,, \qquad \lambda = \lambdab + \delta\lambda\,.$$ \[nsmooth\] The reason for this is the following. Since $\lambda(N)$ is a non-smooth staircase function, as mentioned at the end of , it is natural to expand it around its smooth part $\lambdab$ which can be identified with its TF value $\lambda_{\text{TF}}$ (or $\lambda_{\text{ETF}}$ for $D>1$). Taylor expanding equation using up to first order in $\delta\lambda$, we easily obtain an expression for its oscillating part (cf. [@clmrs]): $$\delta\lambda \simeq -\frac{1}{\widetilde{g}(\lambdab)} \int_0^{\lambdab}\!\d E\; \delta g(E)\,.$$
The quantity $\delta\lambda$ is of higher order in $\hbar$ than $\lambdab$ and can be considered as a small semiclassical correction; the $\delta g(E)$ in the integrand may be expressed through the trace formula . Now, the contribution of the zero-length orbits to yields formally the smooth (E)TF density, but taken at the exact (quantum) value of $\lambda$. The density should therefore be developed around the smooth (E)TF value $\lambdab$ before it can be identified with the standard (E)TF density. Its first variation with $\delta\lambda$ leads to a further smooth contribution which should be taken into account. The same holds for the other densities. The contribution of all finite-length orbits to is of higher order in $\hbar$ than the leading smooth (ETF) terms, so it is consistent to evaluate them at $\lambdab$. In one-dimensional systems, all smooth terms can be exactly controlled. The smooth part of the density may be written as $$\rho_{\text{TF}}(\lambda,x) \simeq \rho_{\text{TF}}(\lambdab,x) + \delta\lambda \left.\frac{\partial \rho_{\text{TF}}(\lambda,x)}{\partial\lambda}\right|_{\lambdab}.$$ \[rhotftay\] The first term on the r.h.s. is the standard TF density (for $D=1$). The second term, using and the fact that $g_{\text{TF}}(\lambda_{\text{TF}}) =T_1(\lambda_{\text{TF}})/2\pi\hbar$ for $D=1$, is found to exactly cancel the contribution of the periodic orbits to (evaluated at $\lambdab$), which has been explicitly calculated in [@rb] and given in Eq. (22) there. For $D>1$ dimensions, we cannot prove that the same cancellation of smooth terms takes place. Furthermore, for the circular billiard treated in [@circ] it is shown that the contributions of periodic and non-periodic orbits cannot be separated in the vicinity of bifurcations that occur for $D>1$ under variation of $\bfr$. For arbitrary local potentials in $D>1$ dimensions, it is in general a difficult task to evaluate all nonperiodic closed orbits. In non-integrable systems, the number of POs is known to grow exponentially with energy or some other chaoticity parameter (cf.
the Appendix H in [@chaos2] or, to a large extent, [@chaos3]); the number of NPOs is evidently even much larger. For the semiclassical density of states , the summation over POs is known not to converge in general (cf. [@chaos4]). For the semiclassical expressions – , however, the convergence of the sums over NPOs is appreciably improved due to the appearance of their periods $T(\lambdab,\bfr)$ in the denominators. In practice, we find that it is sufficient to include only a finite number of shortest orbits, as illustrated for example  in below. The expressions – are only valid if the NPOs going through a given point $\bfr$ are [*isolated*]{}. In systems with continuous symmetries, [*caustic points*]{} exist in which the Van Vleck determinant ${\cal D}_\perp$ becomes singular. The same happens at points where [*bifurcations*]{} of NPOs occur. In such cases, [*uniform approximations*]{} can be developed which lead to finite semiclassical expressions; these will be presented in Sec. \[secregul\] and in [@circ]. We should also emphasize that the semiclassical approximations are not valid in regions close to the classical turning points $\bfr_\lambda$ defined by $V(\bfr_\lambda)=\lambdab$. Since the classical momentum $p(\lambdab,\bfr_\lambda)$ in becomes zero there, the spatial density always diverges at the turning points. Furthermore the running time $T(\lambdab,{\bf r})$, which appears in the denominator of all densities – , may turn to zero at the turning point for certain orbits. To remedy these divergences, one has to resort to the technique of linearizing a smooth potential $V(\bfr)$ around the classical turning points, which is familiar from WKB theory [@wkb]. We shall discuss this in detail in . Our semiclassical formulae – can also be applied to billiard systems in which a particle moves freely inside a given domain and is ideally reflected at its boundary. 
The only modification is that for a given orbit, each reflection at the boundary contributes one extra unit to the Morse index $\mu$ in , since the difference in the semiclassical reflection phases between a soft and a hard wall is $\pi/2$. A detailed application of our formalism to the two-dimensional circular billiard, including a complete determination of all closed orbits of this system, has been given in [@circ]. Local virial theorem {#secdlvt} -------------------- ### Statement and test of the theorem We now shall discuss a result which can be directly inferred from the semiclassical equations – , without detailed knowledge of the NPOs that contribute to them in a particular potential. Since the modulus of the momentum $p(\lambdab,\bfr)$ depends only on position and Fermi energy, but not on the orbits, we have taken it outside the sum over the NPOs. Comparing the prefactors in and and using , we immediately find [@rb] the relation $$\delta \tau({\bf r})\simeq[\lambdab-V({\bf r})] \,\delta \rho({\bf r}) \,. \label{lvt}$$ This is exactly the local virial theorem (LVT) that was derived in [@bm] from the quantum-mechanical densities in the asymptotic limit $N\to\infty$ for isotropic harmonic oscillators. Here we obtain it explicitly from our semiclassical approximation. Since no further assumption about the potential or the contributing NPOs has been made, the LVT holds for arbitrary integrable or non-integrable systems in arbitrary dimensions with local potentials $V(\bfr)$ and hence also for [*interacting fermions in the mean-field approximation given by the DFT*]{}. We recall, however, that is not expected to be valid close to the classical turning points. No such theorem holds for the density $\delta\tau_1({\bf r})$, since it depends on the relative directions of the momenta $\bfp$ and $\bfp'$ of each contributing orbit through the factor $Q(\lambdab,\bfr)$ appearing under the sum in . 
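As an independent numerical illustration of the LVT (our own check, not part of the paper), the sketch below builds the exact quantum densities of the 1D harmonic oscillator with $\hbar=m=\omega=1$ and $N=40$ particles, subtracts the 1D TF parts ($\lambdab = N/2 = 20$, $\rho_{\text{TF}}=2p_{\lambdab}/\pi$, $\tau_{\text{TF}}=\rho_{\text{TF}}(\lambdab-V)/3$), and tests $\delta\tau \simeq (\lambdab - V)\,\delta\rho$ in the interior of the system:

```python
import numpy as np
from math import factorial, pi, sqrt
from numpy.polynomial.hermite import hermval

# Hedged check of the LVT for the 1D harmonic oscillator
# (hbar = m = omega = 1, N = 2*M particles, smooth Fermi energy = M).
M = 20
lam = float(M)
x = np.linspace(-4.0, 4.0, 2001)   # interior region (turning point ~ 6.3)
V = 0.5 * x**2

rho = np.zeros_like(x)
tau = np.zeros_like(x)
for n in range(M):
    c = np.zeros(n + 1)
    c[n] = 1.0
    phi = hermval(x, c) * np.exp(-x**2 / 2) / sqrt(2.0**n * factorial(n) * sqrt(pi))
    d2phi = np.gradient(np.gradient(phi, x), x)
    rho += 2.0 * phi**2                 # spin factor 2
    tau += -0.5 * 2.0 * phi * d2phi     # hbar^2/2m = 1/2 in these units

# subtract the 1D TF (smooth) parts
p_lam = np.sqrt(2.0 * (lam - V))
drho = rho - 2.0 * p_lam / pi
dtau = tau - (2.0 * p_lam / pi) * (lam - V) / 3.0

resid = dtau - (lam - V) * drho
ratio = np.sqrt(np.mean(resid**2) / np.mean(dtau**2))
corr = np.corrcoef(dtau, (lam - V) * drho)[0, 1]
print(f"relative LVT residual: {ratio:.3f}, correlation: {corr:.3f}")
assert ratio < 0.5 and corr > 0.9
```

The residual is small compared with the oscillation amplitude, as expected from the semiclassical argument; it grows only when the grid is extended toward the classical turning points.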
![\[chaos\] (Color online) Oscillating part of spatial densities of $N=632$ particles in the nearly chaotic potential with $\kappa=0.6$ ($\hbar=m=1$). [*Top:*]{} The solid (black) line gives the r.h.s. of the LVT , the dashed (red) line gives $\delta \tau(x,y)$, and the dotted (blue) line gives $\delta \tau_1(x,y)$, all taken along the line $y=x/\sqrt{3}$. [*Bottom:*]{} $\delta \xi(x,y)$ along $y=x/\sqrt{3}$. ](fig2.eps){width="1\columnwidth"} In we test explicitly for the coupled two-dimensional quartic oscillator $$V(x,y)=\frac{1}{4}\,(x^4+y^4)- \frac{\kappa}{2}\, x^2 y^2\,, \label{vqo}$$ whose classical dynamics is almost chaotic in the limits $\kappa=1$ and $\kappa\to -\infty$ [@btu; @erda], but in practice also for $\kappa=0.6$ (see, e.g., [@marta]). We have computed its wavefunctions using the code developed in [@marta]. In the upper panel of we show the left side (dashed line) and the right side (solid line) of the LVT for this system with $N=632$ particles, using the exact densities along the line $y=x/\sqrt{3}$, i.e., $\delta\rho(x,x/\sqrt{3})$ and $\delta\tau(x,x/\sqrt{3})$. The agreement between the two sides is seen to be very good, except in the surface region. We also show $\delta\tau_1(x,x/\sqrt{3})$ (dotted line). This demonstrates that the leading contributing NPOs in this system are not self-retracing. Correspondingly, the quantity $\delta\xi(x,x/\sqrt{3})$ in the lower panel is seen not to be negligible. $D=1$ dimensional systems {#secscl1dim} ------------------------- In a one-dimensional potential $V(x)$ there is only linear motion along the $x$ axis. As discussed in [@rb], the only types of NPOs are those running from a given point $x$ to one of the turning points and back, including $k\geq 0$ full periodic oscillations between both turning points. We name the two types of orbits the “+” orbits that start from any point $x\neq 0$ towards the [*closest*]{} turning point and return to $x$, and the “$-$” orbits that are first reflected from the [*farthest*]{} turning point. 
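For a concrete 1D potential, the ingredients of such orbit sums reduce to two quadratures per orbit. The following sketch (our own illustration; 1D quartic potential $V(x)=x^4/4$, $\hbar=m=1$, with arbitrarily chosen values of $\lambdab$ and $x$) computes the action $S_+$ and running time $T_+$ of the primitive "+" orbit and checks the canonical relation $T=\partial S/\partial\lambdab$ that lets the periods appear in the denominators of the density sums:

```python
import math

V = lambda x: 0.25*x**4            # 1D quartic potential (hbar = m = 1)

def action_and_period(lam, x, n=4000):
    """Primitive "+" orbit: from x to the turning point x_lam and back.
    The substitution x' = x_lam - s^2 removes the turning-point singularity."""
    x_lam = (4.0*lam)**0.25
    smax  = math.sqrt(x_lam - x)
    def fS(s):                     # integrand of S_+ = 2 * int p dx'
        return 4.0*s*math.sqrt(max(2.0*(lam - V(x_lam - s*s)), 0.0))
    def fT(s):                     # integrand of T_+ = 2 * int dx'/p
        if s == 0.0:               # finite limit: lam - V ~ V'(x_lam) s^2
            return 4.0/math.sqrt(2.0*x_lam**3)
        return 4.0*s/math.sqrt(2.0*(lam - V(x_lam - s*s)))
    def simpson(f):
        h = smax/n
        tot = f(0.0) + f(smax)
        for i in range(1, n):
            tot += (4 if i % 2 else 2)*f(i*h)
        return tot*h/3.0
    return simpson(fS), simpson(fT)

lam, x, h = 10.0, 0.5, 1e-4
S, T  = action_and_period(lam, x)
Sp, _ = action_and_period(lam + h, x)
Sm, _ = action_and_period(lam - h, x)
print(T, (Sp - Sm)/(2*h))          # running time equals dS/d(lambda-bar)
```

The "$-$" orbits and the $k>0$ repetitions are obtained from the same two quadratures plus multiples of the full-period action.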
Clearly, these orbits have opposite initial and final momenta: $p=-p'$, so that the momentum mismatch function equals $Q(\lambdab,x)=-1$. Consequently, one obtains from directly the relations $$\delta\tau_1(x) \simeq -\,\delta\tau(x)\,, \qquad \delta\xi(x) \simeq 0\,. \label{tautau1}$$ Note that these results do not hold near the classical turning points, where the semiclassical approximation breaks down (cf. ; see also the example in , where $\delta\xi(x)$ is small inside the system but becomes comparable to $\delta\rho(x)$ near the turning points.) The explicit evaluation of for $D=1$ was done in [@rb] for smooth potentials; the result in the present notation is $$\delta\rho(x) \simeq -\,\frac{2m}{\pi}\sum_{k=0,\pm}^{\infty}(-1)^k\, \frac{\cos[S_\pm^{(k)}(\lambdab,x)/\hbar]}{p(\lambdab,x)\,T_\pm^{(k)}(\lambdab,x)}\,, \label{drhosc1}$$ where $S_\pm^{(k)}$ are the actions of the “+” and “$-$” type NPOs (including $k$ full periods), and $T_\pm^{(k)}$ are their running times defined by . A numerical example was given in [@rb] for the quartic oscillator in one dimension $$V(x)=x^4/4\,. \label{Vx4}$$ Unfortunately, an error occurred in the drawing of Fig. 1 in [@rb]; the present is its corrected version. In the upper panel, it is seen that the semiclassical approximation for $\delta\rho(x)$ agrees very well with the quantum result, and in the lower panel the relations and between the quantum results are seen to be well fulfilled. The only sizable deviations occur very near the classical turning point, as expected. ![\[1dx4drho\] (Color online) [*Upper panel:*]{} Oscillating part $\delta\rho(x)$ of the particle density of $N$=40 particles in the quartic potential (without spin degeneracy; units: $\hbar=m=1$). Dots (black) show the quantum-mechanical result; the solid line (red) shows the semi-classical result , and the dashed line (blue) the approximation (for $D$=1) valid for small $x$ values. [*Lower panel:*]{} Tests of relations and between the quantum-mechanical densities for the same system. Solid line (red): $\delta\tau(x)$, dashed line (blue): $-\delta\tau_1(x)$, dotted line (black): r.h.s. of . 
\[Corrected figure from [@rb].\] ](fig3.eps){width="0.95\columnwidth"} We emphasize that the Friedel oscillations near the surface are dominated by the primitive “+” orbit (with $k=0$). Its contribution diverges, however, since its running time $T_+^{(0)}(\lambdab,x)$ tends to zero there. This divergence can be remedied in the WKB-type linear approximation to the potential which we discuss in [@nlvt] for smooth potentials, or by the short-time propagator for hard-wall potentials (i.e., billiard systems) discussed in . First we will, however, examine the strictly linear potential for which the WKB approximation is exact. ### The linear potential {#secscllin} In [@nlvt], we give the exact quantum-mechanical densities for the one-dimensional potential $V(x)=ax$. Although this potential does not bind any particles, its density close to the turning point will be of use in . Here we give its semiclassical analysis. Since a particle cannot be bound in this potential, the only closed classical orbit starting from a point $x$ is the primitive orbit “+” ($k=0$) going to the turning point $x_\lambda=\lambdab/a$ and back to $x$. Its action is $$S_+(x)=S_+^{(0)}(x) = 2\int_x^{x_\lambda} p(\lambdab,x')\,{\rm d}x' = \frac{4\sqrt{2m}}{3a}\,(\lambdab-ax)^{3/2} = \frac43\,\hbar\,|z_\lambda|^{3/2}\,, \label{splinap}$$ where the last equalities make use of the quantities defined as $$\sigma = \left(\frac{2ma}{\hbar^2}\right)^{1/3} \label{sigma}$$ and $$z_\lambda = \frac{\sigma}{a}\,(ax-\lambdab)\,, \qquad \rho_0 = 2\left(\frac{2ma}{\hbar^2}\right)^{1/3} = 2\sigma\,. \label{zmu}$$ Using for $D=1$, we obtain the semiclassical contribution of this orbit to the spatial density \[cf. Eq. (23) of [@rb] with $\sigma=+$, $k=1$\] $$\delta\rho(x) = -\,\frac{\rho_0}{4\pi}\, \frac{\cos[S_+(x)/\hbar]}{|z_\lambda|}\,, \label{drholinsc}$$ which is identical to the asymptotic expression for the exact quantum-mechanical result [@nlvt]. Thus, the orbit “+” creates the Friedel oscillations. Using the LVT and $Q=-1$ in , we obtain immediately the expression for the kinetic-energy densities $$\delta\tau(x) = -\,\delta\tau_1(x) = -\,\frac{\hbar^2\sigma^3}{4\pi m}\, \cos[S_+(x)/\hbar]\,, \label{dtaulinsc}$$ which is identical to the asymptotic quantum result [@nlvt]. The expression diverges at the classical turning point $x_\lambda$. 
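The closed-form action of the linear potential is easy to verify numerically. In this sketch (our own check, with illustrative values of $a$ and $\lambdab$), the defining quadrature $S_+ = 2\int_x^{x_\lambda}p\,{\rm d}x'$ is compared with the two equivalent closed forms $(4\sqrt{2m}/3a)(\lambdab-ax)^{3/2} = (4\hbar/3)|z_\lambda|^{3/2}$, where $z_\lambda=\sigma(x-x_\lambda)$ and $\sigma=(2ma/\hbar^2)^{1/3}$:

```python
import math

m = hbar = 1.0
a, lam, x = 2.0, 5.0, 1.0              # slope of V(x) = a x; Fermi energy; start point
x_t   = lam/a                          # turning point x_lambda
sigma = (2*m*a/hbar**2)**(1/3)
z     = sigma*(x - x_t)

# two closed forms of the primitive "+" action
S_closed = (4*math.sqrt(2*m)/(3*a))*(lam - a*x)**1.5
S_airy   = (4*hbar/3)*abs(z)**1.5

# midpoint-rule quadrature of S_+ = 2 * int_x^{x_t} sqrt(2m(lam - a x')) dx'
n = 100000
h = (x_t - x)/n
S_num = sum(2*math.sqrt(2*m*(lam - a*(x + (i + 0.5)*h)))*h for i in range(n))

print(S_closed, S_airy, S_num)         # all three should agree
```

The same scaling variable $z_\lambda$ reappears below in the exact Airy-function densities of the linear potential.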
To avoid this divergence one has to use the exact expressions [@nlvt], which can be considered as the regularized contributions of the primitive “+” orbit near the turning points. ### The 1-dimensional box For the one-dimensional box of length $L$, Eq.  has to be modified by omitting the phase factor $(-1)^k$, since each turning point gives two units to the Morse index. Using $\rho_{\text{TF}} =2/\pi \sqrt{2m\lambda_{\text{TF}}/\hbar^2}$ and summing over all $k$, one finds that it reproduces exactly the quantum-mechanical $\rho(x)$ in the large-$N$ limit, so that the semiclassical approximation here is asymptotically exact. $D>1$ dimensional potentials with spherical symmetry {#secrad} ---------------------------------------------------- In this section we discuss potentials in $D>1$ with spherical symmetry, so that $V(\bfr)=V(r)$ depends only on the radial variable $r=|\bfr|$. The particle number $N$ is chosen such that energy levels with angular-momentum degeneracy are filled so that all spatial densities, too, depend only on $r$. In such systems, the two kinds of oscillations discussed in can always be separated clearly in the central region $r\simeq 0$. Indeed, this behavior is explained by the fact that the angular momentum of the orbits is conserved. Therefore, the shape of a closed orbit whose starting point $r$ approaches the center of the potential tends to become flattened and concentrated near a radial periodic orbit. Thus, close to the center there are only two types of non-periodic orbits: Firstly, the [*radial*]{} orbits of the same types “$+$” and “$-$” as discussed for the one-dimensional case, with opposite momenta $\bfp=-\bfp'$, leading to the same kind of oscillations that we know for $D=1$. Secondly, [*non-radial*]{} orbits which near $r=0$ have almost equal momenta $\bfp\simeq\bfp'$, so that they become nearly periodic. 
Semiclassically, the two types of radial and non-radial NPOs are responsible precisely for the two kinds of oscillations which we described in . The regular short-ranged oscillations, denoted $\delta_{\text{r}}\rho(r)$ etc., can be attributed to the radial “+” and “$-$” orbits. The long-ranged irregular oscillations, denoted $\delta_{\text{irr}}\rho(r)$ etc., must be attributed to the non-radial NPOs: these lead to slow oscillations because their actions are almost independent of the starting point near $r=0$. The contributions of the radial NPOs in radially symmetric systems have already been anticipated in [@rb]; they will be discussed in the following section. In particular, as for $D=1$, the primitive “+” orbit is seen to be solely responsible for the Friedel oscillations near the surface of a $D>1$ dimensional spherical system. Non-radial orbits can only occur if there exist classical trajectories which intersect themselves at a given point $\bfr$. As is well known from classical mechanics, such orbits do not exist in isotropic harmonic oscillators (and in the Coulomb potential). This explains the fact that no irregular long-ranged oscillations are found in the densities of harmonic oscillators [@bm] (or, trivially, in any one-dimensional potential). We emphasize that for $D=2$, all closed NPOs are isolated except if they start at $r=0$, in which case they form degenerate families due to the radial symmetry (cf. ). In $D>2$ dimensions, however, the non-radial NPOs starting at $r>0$ also have continuous rotational degeneracies. For the corresponding families of orbits, the Van Vleck determinant ${\cal D}_\gamma$ in the semiclassical Green function becomes singular at all points $r$. This divergence can be removed [@strut] by going one step back in the derivation of . 
In the convolution integral for the time-dependent propagator, one has to perform a sufficient number of intermediate integrals exactly rather than in the stationary-phase approximation (for details, see [@strut] where this was done to obtain the trace formula for systems with continuous symmetries). As a result, the semiclassical amplitudes of the degenerate families of orbits are of lower order in $\hbar$ than for isolated orbits and thus have a larger weight. In our present case, the $\hbar$ dependence of the ratio of amplitudes between the irregular and the regular oscillations, e.g., in the particle density, becomes: $$\frac{|\delta_{\text{irr}}\rho(r)|}{|\delta_{\text{r}}\rho(r)|} \propto \hbar^{-(D-2)/2}. \qquad (D>1) \label{family}$$ The same ratio holds also for the other spatial densities. This can be seen, e.g., in for the spherical billiard in $D=3$, where the amplitude of the irregular oscillations is larger than that of the radial oscillations (except near $r=0$). In passing, we note that for spherical billiards with radius $R$, the energy dependence of the semiclassical results scales with the dimensionless variable $p_\lambda R/\hbar$, and the ratio becomes $|\delta_{\text{irr}}\rho/\delta_{\text{r}}\rho| \propto [p_\lambda R/\hbar]^{(D-2)/2}$. It should be stressed that the separation into two classes of NPOs, and hence the two types of oscillations, is not possible in systems in $D>1$ dimensions without radial symmetry. This will be illustrated in . A further complication in systems with $D>1$ is that the NPOs can undergo bifurcations under the variation of the starting point $r$. At these bifurcations, new NPOs or POs are created. This is discussed extensively in a separate publication [@circ] on the two-dimensional circular billiard. For this system, a complete classification of all NPOs could be made and analytical expressions for their actions and Van Vleck determinants have been derived. 
### Contributions of the radial orbits: Earlier results {#secradlin} Recall that since all radial NPOs fulfill $\bfp'=-\bfp$, they have $Q(\lambdab,r)=-1$ under the sum in . Therefore we immediately obtain the semiclassical relation [@rb] $$\delta_{\text{r}}\tau_1(r) \simeq -\delta_{\text{r}}\tau(r)\,. \label{tautau1r}$$ Indeed, this was found to be fulfilled, sufficiently far from the turning point, for all quantum systems discussed in . In order to derive some of the other forms of local virial theorems discussed in , it is important to notice the action of the differential operator $\nabla$ on the semiclassical density in . The leading-order contributions in $\hbar$ (i.e., the terms of the largest [*negative*]{} power of $\hbar$) come from the phase $\Phi(\lambdab,{\bf r})$ given in . From the canonical relations we find $$\nabla\, e^{i\Phi(\lambdab,\bfr)} = \frac{i}{\hbar}\,(\bfp'-\bfp)\, e^{i\Phi(\lambdab,\bfr)}\,, \label{nablaphas}$$ and $$\nabla^2 e^{i\Phi(\lambdab,\bfr)} \simeq -\,\frac{1}{\hbar^2}\,(\bfp'-\bfp)^2\, e^{i\Phi(\lambdab,\bfr)}\,, \label{lapphas}$$ which occurs for each NPO under the summation in . For the radial orbits, one therefore obtains with the following differential equation for $\delta_{\text{r}}\rho(r)$, which was already given in [@bm]: $$-\,\frac{\hbar^2}{8m}\,\nabla^2\delta_{\text{r}}\rho(r) \simeq [\lambdab-V(r)]\,\delta_{\text{r}}\rho(r)\,. \label{lapeqv}$$ For small distances $r$ from the center so that $V(r)\ll\lambdab$, becomes the universal Laplace equation $$-\,\frac{\hbar^2}{8m}\,\nabla^2\delta_{\text{r}}\rho(r) \simeq \lambdab\,\delta_{\text{r}}\rho(r)\,, \label{lapeq}$$ which was obtained asymptotically from the quantum-mechanical densities of isotropic harmonic oscillators in [@bm]. It has the general solution $$\delta_{\text{r}}\rho(r) = (-1)^{^{M_{\text{s}}\!-1}}\frac{m}{\hbar\,T_{\text{r1}}(\lambdab)} \left(\frac{p_\lambda}{4\pi\hbar r}\right)^{\!\nu} \!\!J_\nu(2rp_\lambda/\hbar)\,. \label{delrhorad}$$ Here $J_\nu(z)$ is a Bessel function with index $\nu=D/2-1$, $M_{\text{s}}=M+1$ is the number of filled main shells [@noteM], $T_{\text{r1}}$ is the period of the primitive radial full oscillation and $p_\lambda=(2m\lambdab)^{1/2}$ is the Fermi momentum. 
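For $D=3$ (i.e., $\nu=1/2$) the general solution reduces to the spherical Bessel function $j_0$, i.e., $\delta_{\text{r}}\rho(r)\propto\sin(2rp_\lambda/\hbar)/r$, and the universal Laplace equation can be verified directly. A small numerical check (our own, with an arbitrary value of $\lambdab$):

```python
import math

m = hbar = 1.0
lam = 3.0
p_l = math.sqrt(2*m*lam)         # Fermi momentum
q   = 2*p_l/hbar                 # wave number of the density oscillation

f = lambda r: math.sin(q*r)/r    # D=3 solution (~ (1/r)^(1/2) J_{1/2}(q r))

def laplacian(func, r, h=1e-4):
    """Radial Laplacian in 3D: (1/r) d^2(r f)/dr^2, by central differences."""
    g = lambda s: s*func(s)
    return (g(r + h) - 2*g(r) + g(r - h))/(h*h)/r

r = 0.8
lhs = -hbar**2/(8*m)*laplacian(f, r)
rhs = lam*f(r)
print(lhs, rhs)                  # the two sides of the Laplace equation agree
```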
The normalization of cannot be obtained from the linear equation ; we have determined it from the calculation presented in . For harmonic oscillators, where $T_{\text{r1}}=2\pi/\omega$, equation becomes identical with the result in [@bm], Eq. (69), that was derived from quantum mechanics in the large-$N$ limit. The quantity $\delta_{\text{r}}\rho(r)$ can also be calculated directly from , including only the radial NPOs. The summation over their repetitions goes exactly like in the one-dimensional case done in [@rb], except for the evaluation of the determinant ${\cal D}_\perp$. This determinant becomes singular at $r=0$ due to the continuous degeneracy of the “+” and “$-$” orbits: the point $r=0$ is a caustic point for all radially symmetric systems with $D>1$. The regularization of this singularity, leading precisely to the result $\eq{delrhorad}$, is discussed in below. ### Isotropic harmonic oscillators in $D$ dimensions {#sechoscl} We now investigate the densities in the isotropic harmonic oscillator (IHO) potential in $D$ dimensions defined as $$V(r)=\frac{m}{2}\,\omega^2r^2\,,\qquad r=|\bfr|\,,\quad \bfr\in{\mathbb R}^D\,. \label{vho}$$ First we mention the well-known fact that in IHO potentials with arbitrary $D>1$, all orbits with nonzero angular momentum are periodic, forming ellipses which may degenerate to circles or radial librations. Hence the only NPOs are the radial orbits “+” and “$-$”. Since we have just seen that in the leading-order semiclassical approximation, $\delta_{\text{r}}\tau_1(r)=-\delta_{\text{r}}\tau(r)$, it follows that $\delta\xi(r)=0$ to leading order like for $D=1$, thus explaining the smooth behavior of $\xi(r)$ for IHOs [@bm]. For the IHO potentials, the transverse determinant ${\cal D}_{\bot}$ can be easily computed. It is diagonal and reads $$|{\cal D}_{\bot}(\lambdab,r)|=\left[\frac{m \lambdab}{r p(\lambdab,r)}\right]^{D-1},$$ which does not depend on the type and the repetition number $k$ of the orbit. 
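This form of ${\cal D}_\bot$ can be traced back to the explicit transverse dynamics of the IHO (our own consistency check, not part of the text): a small initial transverse momentum $p_\perp$ evolves into a transverse displacement $p_\perp\sin(\omega T)/(m\omega)$ after time $T$, so each transverse direction contributes a factor $m\omega/\sin(\omega T_+)$; with $\omega T_+ = \pi - 2\arcsin(r/r_\lambda)$ this indeed equals $m\lambdab/(rp)$, independently of the orbit type and repetition number:

```python
import math

m = omega = 1.0
lam = 4.0
r_l = math.sqrt(2*lam/m)/omega              # classical turning radius r_lambda

for r in (0.3, 0.9, 1.7):                   # sample starting points r < r_lambda
    p  = math.sqrt(2*m*(lam - 0.5*m*omega**2*r**2))
    wT = math.pi - 2*math.asin(r/r_l)       # omega * T_+ for the primitive "+" orbit
    print(m*omega/math.sin(wT), m*lam/(r*p))   # the two expressions coincide
```

The identity follows from $\sin(\pi-2\arcsin u)=2u\sqrt{1-u^2}$ with $u=r/r_\lambda$ and $p=m\omega\sqrt{r_\lambda^2-r^2}$.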
Following and [@rb], we compute $\delta\rho(r)$ as a sum over the contributions of the “$+$” and “$-$” orbits, which is given by $$\delta \rho(r) \simeq -\,\frac{2m}{\pi\, p(\lambdab,r)} \left[\frac{m\lambdab}{2\pi\hbar\, r\, p(\lambdab,r)}\right]^{(D-1)/2} \sum_{k=0,\pm}^{\infty} \frac{\sin\!\big[S_\pm^{(k)}(\lambdab,r)/\hbar-\mu_\pm^{(k)}\pi/2-(D+3)\pi/4\big]}{T_\pm^{(k)}(\lambdab,r)}\,. \label{rho_HO}$$ Here we have used the analytical form of the actions and periods $$S_\pm^{(k)}(\lambdab,r) = (2k+1)\,\frac{\pi\lambdab}{\omega} \mp \left[r\, p(\lambdab,r) + \frac{2\lambdab}{\omega}\arcsin\!\left(\frac{r}{r_\lambda}\right)\right],$$ $$T_\pm^{(k)}(\lambdab,r) = (2k+1)\,\frac{\pi}{\omega} \mp \frac{2}{\omega}\arcsin\!\left(\frac{r}{r_\lambda}\right).$$ We compute the Morse indices following Gutzwiller [@gutz1]. Each turning point contributes a phase of $\pi/2$. In addition, we evaluate the number of extra conjugate points including their multiplicities depending on the dimension, contributing a phase $\pi(D-1)/2$ each (they are most easily determined from the propagator of the harmonic oscillator in the time representation). The final result for the Morse indices is $$\mu_{+}^{(k)}=2 k D+1\,, \qquad \mu_{-}^{(k)}=2 k D +D\,. \label{Mascorr}$$ We note that the equation is consistent with results derived in [@mue] from the quantum mechanical density $\rho(r)$. ![\[fig\_2D\_HO\] (Color online) Oscillating part of the spatial particle density times $r^3$ for the 4$D$ IHO with $N=632502$, i.e. with $M=50$ filled shells (units: $\hbar=m=\omega=1$). Dots are the quantum results. The solid (red) line is the analytical expression (\[rho\_HO\]) using the Morse indices given in , and the dashed (blue) line is the asymptotic formula (\[delrhorad\]) valid close to $r=0$. ](fig4.eps){width="1\columnwidth"} Fig. \[fig\_2D\_HO\] shows a comparison of the semiclassical results with the exact quantum result for the case $D=4$. We have multiplied both by a factor $r^3$ since the semiclassical determinant ${\cal D}_{\bot}$ diverges at $r=0$ which is a caustic point due to the spherical symmetry. This divergence will be regularized in the following section. Using the Morse indices and for $\lambdab$ the expression $\lambdab~=~\hbar\omega~[M+(D+1)/2]$ [@bm], we can perform the summation over $k$ in analytically for small $r$, as was done in [@rb] for the 1$D$ case. 
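The closed forms of the actions and periods quoted above can be verified independently (our own check): $S_+^{(0)} = \pi\lambdab/\omega - rp - (2\lambdab/\omega)\arcsin(r/r_\lambda)$ must equal the quadrature $2\int_r^{r_\lambda}p\,{\rm d}r'$, and the period must again satisfy $T=\partial S/\partial\lambdab$:

```python
import math

m = omega = 1.0
r = 0.7                                   # starting point of the radial orbit

def S_plus(lam):
    """Closed form of the primitive "+" action in the IHO (k = 0)."""
    r_l = math.sqrt(2*lam/m)/omega
    p   = math.sqrt(2*m*(lam - 0.5*m*omega**2*r**2))
    return math.pi*lam/omega - r*p - (2*lam/omega)*math.asin(r/r_l)

def S_quad(lam, n=2000):
    """2 * int_r^{r_l} p dr', with the substitution r' = r_l sin(theta)."""
    r_l = math.sqrt(2*lam/m)/omega
    t0  = math.asin(r/r_l)
    h   = (math.pi/2 - t0)/n
    f   = lambda t: 2*m*omega*r_l**2*math.cos(t)**2
    tot = f(t0) + f(math.pi/2)
    for i in range(1, n):
        tot += (4 if i % 2 else 2)*f(t0 + i*h)
    return tot*h/3

lam, h = 4.0, 1e-5
T_formula = math.pi/omega - (2/omega)*math.asin(r/math.sqrt(2*lam/m))
dS = (S_plus(lam + h) - S_plus(lam - h))/(2*h)
print(S_plus(lam), S_quad(lam))           # action: closed form vs. quadrature
print(T_formula, dS)                      # period: closed form vs. dS/d(lambda-bar)
```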
The result then is exactly that given in with $T_{\text{r1}}(\lambdab)=2\pi/\omega$, but replacing the Bessel function $J_\nu(z)$ by its asymptotic expression for large argument $z$, i.e., using $$J_\nu(z) \simeq \sqrt{\frac{2}{\pi z}}\,\cos(z-\nu\pi/2-\pi/4)\,. \label{Besasy}$$ ### Regularization close to the center {#secregul} In this section we compute the contribution of radial NPOs to the semiclassical particle density close to the center of an arbitrary potential with radial symmetry. As stressed in the last section, the semiclassical Green function for $D>1$ is not defined at $r=0$ where ${\cal D}_{\bot}$ diverges. The reason is the caustic that occurs there: fixing the position of the point $r=r'=0$ does not uniquely determine a closed orbit (periodic or non-periodic) which belongs to a continuously degenerate family due to the spherical symmetry. A standard method to solve this problem is to introduce the mixed phase-space representation of the Green function close to the diverging point, as proposed initially by Maslov and Fedoriuk [@mf]. Here we follow more specifically the procedure outlined in [@ruj]. The mixed representation of the Green function can be approximated in a form analogous to that in the coordinate representation. This is due to the smoothness of the phase-space torus which implies that no diverging points can occur simultaneously in position and momentum (cf. [@mf]). Following Gutzwiller, we use for every classical trajectory $\gamma$ an “intrinsic” (or local) coordinate system $\bfr=(r_\parallel,\bfr_\perp)$, where the coordinate $r_\parallel$ is taken along the trajectory and $\bfr_\perp$ is the vector of all other coordinates transverse to it; $\bfp=(p_\parallel,\bfp_\perp)$ is the corresponding system for the momentum. 
We next re-write the coordinate-representation of the Green function as the inverse Fourier transform of the mixed Green function with respect to the final transverse momentum $\bfp'_\perp$: $$\begin{aligned} &&\hspace{-1cm}G_{\text{scl}}(E,\bfr,r'_\parallel,\bfr'_\perp) = \frac{1}{(-2 i \pi \hbar)^{(D-1)/2}} \sum_\gamma \int \d \bfp'_\perp \nonumber\\ && \hspace{1.cm} \times~ {\widehat{G}}_\gamma(E,\bfr,r'_\parallel,\bfp'_\perp)\, \exp\left(\frac{i}{\hbar}\, \bfr'_\perp\cdot\bfp'_\perp\right)\!,~~ \label{def_green_ft}\end{aligned}$$ where the sum is over all classical trajectories $\gamma$ starting at $\bfr$ and ending at $(r'_\parallel,\bfp'_\perp)$ in phase space. Hereby the contribution of the orbit $\gamma$ to the semiclassical mixed representation of the Green function is given by [@mf]: $$\begin{aligned} &&\hspace{-1cm}{\widehat{G}}_\gamma(E,\bfr,r'_\parallel,\bfp'_\perp)=\alpha_D\, \mathrm{\widehat{\cal D}}_\gamma (E,\bfr,r'_\parallel,\bfp'_\perp)\nonumber\\ && \hspace{1.cm}\times \exp {\bigg (\frac{i}{\hbar} \widehat{S}_\gamma (E,\bfr,r'_\parallel,\bfp'_\perp)-\frac{i\pi}{2} \widehat{\mu}_\gamma\bigg)},~ \label{def_mix_green}\end{aligned}$$ where $\widehat{S}$ is the Legendre transform of the action $S$ between the variables $\bfr'_\perp$ and $\bfp'_\perp$: $$\widehat{S}_\gamma(E,\bfr,r'_\parallel,\bfp'_\perp) = S_\gamma(E,\bfr,r'_\parallel,\bfr'_\perp) -{\bf r'}_{\!\perp}\cdot\bfp'_\perp\,. \label{legtra}$$ Since in the mixed-representation Green function, we have to evaluate the action $\widehat{S}$ for radial orbits with [*fixed*]{} momentum close to the center, the rotational symmetry in position is removed and $\widehat{G}$ is regular. 
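The defining property of this Legendre transform is $\partial\widehat{S}/\partial\bfp'_\perp = -\bfr'_\perp$. As a minimal sketch (our own toy example, not from the text), this can be checked for a free particle in 2D: for a straight trajectory from the origin to $(x',y')$ at energy $E$ one has $S=p\sqrt{x'^2+y'^2}$ with $p=\sqrt{2mE}$, and eliminating $y'$ in favor of $p'_y$ gives the simple mixed action $\widehat{S}=x'\sqrt{p^2-p_y'^2}$:

```python
import math

m, E = 1.0, 2.0
p  = math.sqrt(2*m*E)                 # total momentum, fixed by the energy
xp = 1.5                              # fixed final coordinate along the orbit

def S_hat(py):                        # mixed action S - y' p'_y for this orbit
    return xp*math.sqrt(p*p - py*py)

py = 0.7
yp = xp*py/math.sqrt(p*p - py*py)     # final transverse coordinate of that orbit
h  = 1e-6
dS = (S_hat(py + h) - S_hat(py - h))/(2*h)
print(dS, -yp)                        # Legendre property: dS_hat/dp'_y = -y'
```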
The Van Vleck determinant in this representation is $$\label{detp} \mathrm{\widehat{\cal D}}_\gamma = \frac{m}{|p_\parallel p'_\parallel|^{1/2}}\, |\mathrm{\widehat{\cal D}}_{\!\bot\gamma} |^{1/2}\,, \quad \mathrm{\widehat{\cal D}}_{\!\bot \gamma}= \det \bigg(\dfrac{\partial\bfp_\perp}{\partial\bf p'_\perp}\bigg) \,, $$ and the Morse index becomes \begin{eqnarray} \widehat{\mu}_\gamma = \left\{ \begin{array}{ll} \mu_\gamma&\hbox{for positive eigenvalue of} ~\det\bigg(\dfrac{\partial{\bf r'}_{\!\bot}}{\partial{\bf p'}_{\!\bot}}\bigg )\,,\nonumber\\ \nonumber\\ \mu_\gamma+1&\hbox{for negative eigenvalue of} ~\det\bigg(\dfrac{\partial{\bf r'}_{\!\bot}}{\partial{\bf p'}_{\!\bot}}\bigg )\,. \end{array}\right.\nonumber \end{eqnarray} Far from singular points, the evaluation of \eq{def_green_ft} using the stationary phase approximation (SPA) yields \cite{spa} the standard semiclassical Green function \eq{sclgreen}. After performing the $\hbar$ expansion and the integration over the energy similarly as in \cite{rb}, the oscillating part of the particle density is given by: \begin{eqnarray} \hspace{-1.5cm} \delta \rho (\bf r) &=& 2 \sum_{\gamma} \text{Im} \bigg \{ \frac{i \alpha_D \hbar }{\pi T} \int \d{\bf p'}_{\!\bot} \widehat{G}(\lambdab,{\bf r},r_{\parallel},{\bf p'}_{\!\bot})\nonumber \\ &&\hspace{2.3cm}\times \exp \bigg ( \frac{i}{\hbar} {\bf r}_{\!\bot} \cdot{\bf p'}_{\!\bot}\bigg) \bigg \}. \label{drhomix} \end{eqnarray} Close to the center of the potential, we now replace the non-radial NPOs by the radial orbits $\gamma=$''$\pm$'' with their $k$-th repetitions. For non-periodic orbits in the radial direction $r$ we have $r_\parallel=r$. 
We neglect the higher orders in ${\bf r}_{\!\bot}$, leading to the following approximations: \begin{eqnarray} && |\det (\partial{\bf p}_{\!\bot}/\partial{\bf p'}_{\!\bot})|\;\approx\; 1\,,\nonumber\\ && |p_{_{\parallel}}|\approx |p'_{_{\parallel}}|\;\approx\; p_\parallel(\lambdab,{\bf p'}_{\!\bot}) := \sqrt{2 m \lambdab-{\bf p'}_{\!\bot}^2}~,\hspace*{0.8cm} \nonumber\\ && (r,{\bf r}_{\!\bot})\;\approx\; (r,0)\,,\nonumber\\ && \widehat{S}_{\pm}^{(k)}\;\approx\; (k+1/2)\,S_{\text{r1}} \mp 2rp'_{_{\parallel}}\,,\nonumber\\ && T_{\pm}^{(k)}\;\approx\;(k+1/2)\, T_{\text{r1}}\,. \label{tpmapp} \end{eqnarray} Furthermore, we approximate the action $S_{\text{r1}}$ of the primitive periodic diameter orbit by $S_{\text{r1}}\approx 2 \pi \hbar[M+(D+1)/2]$. This is exact for IHOs where $S_{\text{r1}}=2\pi\lambdab/\omega$ and $\lambdab~=~\hbar\omega~[M+(D+1)/2]$ can be used \cite{bm}; for arbitrary radial potentials it corresponds to a radial WKB quantization, whereby $M$ is a ``main shell'' quantum number that has to be suitably chosen \cite{noteM}. Also, we assume that each eigenvalue of $\det(\partial{\bf r'}_{\!\bot}/\partial{\bf p'}_{\!\bot})$ is negative (positive) for the orbits ``$+$'' (``$-$''), leading to $\widehat{\mu}_{+}^{(k)}=\widehat{\mu}_{-}^{(k)}$. This is again exact for IHOs; for other radial potentials we have verified its validity numerically. With these approximations, the sum over the repetitions of all radial orbits can be performed exactly as in the previous section. The oscillating part of the particle density then simplifies to: \begin{equation} \delta_{\text{r}} \rho(r) = \frac{(-1)^{M}m}{ (2 \pi \hbar)^{D-1} T_{\text{r1}}} \int \d {\bf p'}_{\!\bot} \frac{\cos[2rp_\parallel(\lambdab,{\bf p'}_{\!\bot})/\hbar]} {p_\parallel(\lambdab,{\bf p'}_{\!\bot})}\,. \end{equation} The integration has to be taken over half the solid angle in the $(D-1)$-dimensional transverse momentum space, avoiding a double-counting of the two orbits. 
So it is natural to make a change of variables to dimensionless hyper-spherical coordinates. Using the integral representation of the Bessel functions [@abro] $$J_\nu(z) = \frac{2\,(z/2)^\nu}{\sqrt{\pi}\;\Gamma(\nu+1/2)} \int_0^1(1-t^2)^{\nu-1/2}\cos(zt)\,{\rm d}t\,,$$ we obtain exactly the same result as in , confirming its normalization. We stress that this regularization is only valid near the center, i.e., for $r\simeq 0$, as can be seen in the example of , where the result is displayed by the dashed line. The reason is that for larger values of $r$, the approximations are no longer valid. If one restricts oneself to the leading contributions of the primitive orbits “+” and “$-$” with $k=0$, a “global uniform” approximation can be made which interpolates smoothly between the regularized result near $r=0$ and the correct semiclassical contributions obtained from at larger $r$. This uniform approximation is derived and used in [@circ] for the $2D$ circular billiard system which we briefly discuss in the following section. ### The two-dimensional circular billiard {#secdisc} The two-dimensional circular billiard, which can be taken as a realistic model for quantum dots with a large number $N$ of particles, has been investigated semiclassically in [@circ], where all its periodic and nonperiodic closed orbits have been classified analytically. We discuss there also the various bifurcations at specific values of the radial variable $r$, at which POs bifurcate from NPOs or pairs of NPOs are born. At these bifurcations, the semiclassical amplitudes in – must be regularized by suitable uniform approximations. We refer to [@circ] for the details and reproduce here some numerical results to illustrate the quality of the semiclassical approximation. ![\[disk606\] Particle density in the two-dimensional disk billiard with radius $R$, containing $N=606$ particles (units: $\hbar^2\!/2m=R=1$), divided by $N$. The solid line is the quantum result, the dotted line the semiclassical result with all regularizations (see [@circ] for details). 
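The integral representation quoted above can be validated against the defining power series $J_\nu(z)=\sum_k(-1)^k(z/2)^{\nu+2k}/[k!\,\Gamma(\nu+k+1)]$; a self-contained check (ours) using only the standard library, with the substitution $t=\sin\theta$ to remove the end-point singularity:

```python
import math

def J_int(nu, z, n=20000):
    """Bessel integral representation (cf. Abramowitz-Stegun 9.1.20):
    J_nu(z) = 2 (z/2)^nu / (sqrt(pi) Gamma(nu+1/2)) * int_0^1 (1-t^2)^(nu-1/2) cos(zt) dt,
    evaluated with t = sin(theta) and composite Simpson quadrature."""
    pref = 2*(z/2)**nu/(math.sqrt(math.pi)*math.gamma(nu + 0.5))
    h = (math.pi/2)/n
    f = lambda th: math.cos(th)**(2*nu)*math.cos(z*math.sin(th))
    tot = f(0.0) + f(math.pi/2)
    for i in range(1, n):
        tot += (4 if i % 2 else 2)*f(i*h)
    return pref*tot*h/3

def J_series(nu, z, terms=40):
    """Defining power series of J_nu(z), adequate for moderate z."""
    return sum((-1)**k*(z/2)**(nu + 2*k)/(math.factorial(k)*math.gamma(nu + k + 1))
               for k in range(terms))

for nu, z in ((0, 5.0), (1, 3.0), (0.5, 2.0)):
    print(J_int(nu, z), J_series(nu, z))   # the two evaluations agree
```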
](fig5.eps){width="1.15\columnwidth"} shows the total particle density $\rho(r)$ for $N=606$ particles in the circular billiard. The solid line gives the quantum result, obtained from using the solutions of the Schrödinger equation with Dirichlet boundary conditions, which are given in terms of cylindrical Bessel functions. The dotted line gives the semiclassical result, obtained by summing over the $\sim$ 30 shortest NPOs. (Hereby we used the regularization of the radial “+” and “$-$” orbits at $r=0$ by , that of the primitive “+” orbit near $r=R$ by given in below, and uniform approximations for the bifurcations of some of the non-radial NPOs as described in detail in [@circ].) We see that, indeed, a satisfactory approximation of the quantum density can be obtained in terms of the shortest classical orbits of this system. ![\[disk9834\] Oscillating parts of kinetic-energy densities, $\delta\tau_1(r)$ (fast oscillations) and $\delta\xi(r)$ (slow oscillations) for $N=9834$, divided by $N^{5/3}$. [*Solid lines:*]{} exact quantum results. [*Dashed lines:*]{} semiclassical results and units as in . ](fig6.eps){width="1.1\columnwidth"} In we demonstrate explicitly the contributions of non-radial NPOs to the kinetic-energy densities $\tau_1(r)$ and $\xi(r)$ close to the center, calculated as in but for $N=9834$ particles. We clearly see that $\delta\xi(r)$ is not smooth; its slow, irregular oscillations are due to non-radial NPOs which have the form of polygons with $2k$ reflections ($k=1,2,\dots$) at the boundary and one corner at a point $r$ close to the center. The first $k_{\text{max}}=20$ of them were included with the appropriate regularization at $r=0$ where they are degenerate with the $k$-th repetitions of the diameter PO (see [@circ] for details). The agreement between quantum and semiclassical results is again satisfactory; the discrepancy that sets in for $r\simg 0.18$ is due to the omission of more complicated non-radial orbits. 
The quantity $\delta\tau_1(r)$, on the other hand, clearly exhibits both kinds of oscillations according to : the slow irregular part, which is identical with $\delta\xi(r)$, is modulated by the regular fast oscillations due to the radial orbits. $D>1$ dimensional systems without continuous symmetries {#secnonrad} ------------------------------------------------------- In $D>1$ dimensional systems without continuous symmetries, it is in general not possible to find the classical orbits analytically. As in POT, the search of closed orbits must then be done numerically. A practical problem in such systems is also that the densities as functions of $D$ coordinates are not easily displayed. For tests and comparisons of various approximations or of the local virial theorems, we have to resort to taking suitable one-dimensional cuts (i.e., projections) of the densities. In the following paragraph, we discuss a class of integrable billiard systems, in which all closed classical orbits can easily be found and their semiclassical contributions to the densities can be analytically obtained. These are $D$-dimensional polygonal billiards that tessellate the full space under repeated reflections at all borders. We illustrate the method for the example of a rectangular billiard. Although this does not correspond to any physical system (unless experimentally manufactured as a rectangular quantum dot with many electrons), it is a useful model without spherical symmetry that allows for analytical calculation of the classical orbits and their properties. ### Billiards tessellating flat space: the rectangular billiard {#secrect} For billiards, classical trajectories are straight lines which are reflected at the boundary according to the specular law. Let us consider a two-dimensional billiard that tessellates the plane, such as the rectangular billiard shown in . Choose a trajectory starting at a point $P$, reflected at the point $R_0$ and reaching the point $P_1$. 
Now reflect the billiard at the side containing the point $R_0$; the image $R_0P_1'$ of the segment $R_0P_1$ then continues $PR_0$ along the straight line $PP_1'$. The next portion of the trajectory, after reflection at $R_1$, can be found by reflecting the new billiard at the side containing $R_1'$. This process can be repeated until the trajectory ends. To get the closed trajectories at $P$ we have to compute all images of $P$ in the images of the billiard. A straight line joining $P$ and an image of $P$ then gives a closed orbit. Thus, constructing all images of $P$ by simple geometry enables one to compute all trajectories and their related initial and final momenta for any $D$-dimensional polygonal billiard that tessellates the $D$-dimensional Euclidean space. Note that the Jacobian $\displaystyle {\cal D}_{\bot}$ for these systems is easily computed and equals $(p/L_{\text{NPO}})^{D-1}$ where $L_{\text{NPO}}$ is the length of the orbit. ![\[fig\_tab\] Images (triangles and crosses) of a point $P(x,y)$ (full circle) for a rectangular billiard. Joining by a straight line the full circle to a cross gives a non-periodic orbit whereas joining to a triangle gives a periodic orbit. ](fig7.eps){width="0.95\columnwidth"} We illustrate this method for the case of a 2$D$ rectangular billiard with side lengths $Q_x$ and $Q_y$. There are four types of images of $P(x,y)$; one leading to POs and three (labeled by the index a, b and c) leading to NPOs (see Fig. \[fig\_tab\]). Table \[table1\] lists the basic ingredients to compute the spatial densities, using $L(x,y)=2 \sqrt{x^2+y^2}$ and $f(x,y,\lambdab)$, with $p_\lambda=(2m\lambdab)^{1/2}$. 
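The unfolding construction is easy to implement. In the sketch below (our own illustration, with hypothetical rectangle dimensions and starting point), the images of $P(x,y)$ lie at $(2k_xQ_x+s_x x,\,2k_yQ_y+s_y y)$ with $s_x,s_y=\pm1$; the choice $s_x=s_y=+1$ gives the POs, the other three sign choices the three NPO families, and the image distances are the orbit lengths $L$ entering the amplitudes:

```python
import math

def orbit_lengths(x, y, Qx, Qy, kmax=3):
    """Closed-orbit lengths at P=(x,y) in a Qx-by-Qy rectangular billiard,
    obtained as distances from P to its mirror images (unfolding)."""
    npo, po = [], []
    for kx in range(-kmax, kmax + 1):
        for ky in range(-kmax, kmax + 1):
            for sx in (1, -1):
                for sy in (1, -1):
                    L = math.hypot(2*kx*Qx + sx*x - x, 2*ky*Qy + sy*y - y)
                    if L == 0.0:
                        continue            # P itself (kx = ky = 0, sx = sy = +1)
                    (po if (sx, sy) == (1, 1) else npo).append(L)
    return sorted(npo), sorted(po)

x, y, Qx, Qy = 0.3, 0.9, 1.0, 2.0
npo, po = orbit_lengths(x, y, Qx, Qy)
print(npo[0])   # shortest NPO: back and forth to the nearest wall, L = 2x = 0.6
print(po[0])    # shortest PO through P: one full bounce across the short side, L = 2 Qx
```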
From , for $D=2$ we obtain $$\begin{aligned} \delta \rho(x,y) & = & \sum_{k_x,k_y=-\infty}^{\infty}\; \sum_{l=\text{a},\text{b},\text{c}} \delta \rho_{l}(x,y)\,,\label{rho_rect}\\ \delta \tau_1(x,y) & = & \sum_{k_x,k_y=-\infty}^{\infty}\; \sum_{l=\text{a},\text{b},\text{c}} \delta \tau_{1l}(x,y)\,, \label{tau1_rect}\end{aligned}$$ where the partial contributions $\delta\rho_l(x,y)$ and $\delta\tau_{1l}(x,y)$ for the orbits of types $l$ = a, b, and c are given in . $\delta \tau(x,y)$ is obtained from (\[rho\_rect\]) using the LVT (\[lvt\]). ![\[fig\_dens\_rect\] (Color online) Oscillating part of the spatial densities for a rectangular billiard with sides $Q_x=2^{1/4}$ and $Q_y=3^{1/4}$ for $N=2000$ along the line $x=Q_x/2$ (units: $\hbar^2=2m=1$). Solid (black) lines are the semiclassical results using , and ; dashed (red) lines are the quantum-mechanical results. ](fig8.eps){width="1.02\columnwidth"} ![\[fig\_dens\_rect2\] (Color online) Same system as in . Here, selected contributions to of the primitive NPOs with $k=0$ are shown. The full (black) line gives the contributions of the primitive self-retracing orbits, and the dashed (red) line that of all other primitive orbits. ](fig9.eps){width="1\columnwidth"} We now present numerical results for the rectangular billiard with side lengths $Q_x=2^{1/4}$, $Q_y=3^{1/4}$ (units: $\hbar^2=2m=1$), containing $N=2000$ particles. In we show the quantities $\delta\rho$ (top), $\delta\tau$ (center) and $\delta\tau_1$ (bottom) as functions of $y$ with fixed $x=Q_x/2$. Dashed lines are the quantum-mechanical results, solid lines the semiclassical ones using , and . We see that summing over all orbits yields very good agreement, except close to the boundary where the Friedel oscillations were not regularized. In we display selected contributions of some of the primitive orbits $(k=0)$ to the particle density $\delta\rho(x,y)$. 
The solid line gives the contribution of self-retracing orbits with $\bfp=-\bfp'$, and the dashed line that of the other primitive NPOs. It is evident that no clear separation of regular short-ranged and irregular long-ranged oscillations can be made here. Regularization near surface {#secsurf} =========================== As we have pointed out in the previous section, the semiclassical approximation of density oscillations in terms of classical orbits breaks down near the classical turning point due to the diverging amplitude of the primitive “+” orbit (with $k=0$), which close to the surface is responsible for the Friedel oscillations. In order to regularize this diverging amplitude, different techniques must be used for smooth potentials and for billiards with reflecting walls. Smooth potentials {#secfriedlin} ----------------- In smooth potentials $V(\bfr)$, the divergence can be regularized by linearizing the potential near the classical turning points, as is done in the standard WKB approximation [@wkb]. In the surface region close to a turning point, the exact results for linear potentials given in [@nlvt] can then be used. We demonstrate this first for the one-dimensional case, and then illustrate it also for potentials in $D=3$ with spherical symmetry. ### Linear approximation to a smooth 1$D$ potential {#seclinap} We start from an arbitrary smooth binding potential $V(x)$ and approximate it linearly around the turning point $x_\lambda$ defined by $V(x_\lambda)=\lambdab$. Without loss of generality, we assume $x_\lambda>0$. Expanding $V(x)$ around $x_\lambda$ up to first order in $x-x_\lambda$, we get the approximated potential $$V_{\text{lin}}(x) = \lambdab + a\,(x-x_\lambda)\,, \qquad a = V'(x_\lambda)>0\,. \label{vlinap}$$ We can therefore apply the results of [@nlvt]. The oscillating part of the density near the turning point then becomes $$\delta\rho_{\text{lin}}(x) = \rho_0 \left\{ [{\rm Ai}'(z_\lambda)]^2 - z_\lambda\,{\rm Ai}^2(z_\lambda) - \frac{1}{\pi}\,|z_\lambda|^{1/2}\,\Theta(x_\lambda-x) \right\}, \label{drholinap}$$ where ${\rm Ai}(z)$ is the Airy function, $\Theta(x)$ the Heaviside step function, the last term is the subtracted TF part, and $$\rho_0 = 2\left(\frac{2ma}{\hbar^2}\right)^{\!1/3}, \qquad z_\lambda = \left(\frac{2ma}{\hbar^2}\right)^{\!1/3}(x-x_\lambda)\,. \label{linap}$$
The oscillating parts of the kinetic-energy densities $\tau(x)$ and $\xi(x)$ become in the same approximation $$\delta\tau_{\text{lin}}(x) = \frac{2a}{3} \left\{ {\rm Ai}(z_\lambda)\,{\rm Ai}'(z_\lambda) - z_\lambda\,[{\rm Ai}'(z_\lambda)]^2 + z_\lambda^2\,{\rm Ai}^2(z_\lambda) - \frac{1}{\pi}\,|z_\lambda|^{3/2}\,\Theta(x_\lambda-x) \right\}, \label{dtaulinap}$$ $$\delta\xi_{\text{lin}}(x) = -\frac{a}{3} \left\{ {\rm Ai}(z_\lambda)\,{\rm Ai}'(z_\lambda) + 2z_\lambda\,[{\rm Ai}'(z_\lambda)]^2 - 2z_\lambda^2\,{\rm Ai}^2(z_\lambda) + \frac{2}{\pi}\,|z_\lambda|^{3/2}\,\Theta(x_\lambda-x) \right\}. \label{dxilinap}$$ In the next step, we introduce uniform linearized approximations, in which the argument $z_\lambda$ in , , and is not as given in , but replaced by $$\widetilde{z}_\lambda = -\left[\frac{3\,S_+(x)}{4\hbar}\right]^{2/3}, \label{zuni}$$ where $S_+(x)$ is the correct action of the “+” orbit for the given potential $V(x)$. One can show that this relation is exact for the linear potential; it is uniform for other smooth potentials in that it holds locally at the turning point and yields the correct phase of the oscillation at all other distances from the turning point. ![\[x4friedun\] Oscillating parts of densities for the quartic potential with $N=40$ particles (units $\hbar=m=1$), shown on the same scale. [*Solid lines:*]{} exact quantum-mechanical results, [*dotted lines:*]{} uniform linearized approximations , and with the argument ${\widetilde z}_\lambda$ given in . ](fig10.eps){width="1\columnwidth"} Figure \[x4friedun\] shows numerical results for these uniform approximations for the quartic oscillator with $N=40$ particles, compared to the exact quantum results. We see that the uniform linearized approximation reproduces very well the Friedel oscillations near the turning point in all three densities. The phase of the oscillations is seen to be correct at all distances. The amplitudes are not exact in the asymptotic region, i.e., near $x=0$. This is not surprising, since the contributions of all “$-$” orbits and those of the “+” orbit with $k>0$ are missing in this approximation. We see that $\delta\xi(x)$ vanishes inside the system, as expected from the semiclassical leading-order result on the r.h.s. of .
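The Airy-function bracket of the density formula and the uniform argument can both be checked numerically with nothing but the standard library. The sketch below (our own check; the parameter values $a$, $\lambdab$ and $x$ are arbitrary) sums ${\rm Ai}$ from its Maclaurin series, verifies $\frac{\rm d}{{\rm d}z}\big\{[{\rm Ai}'(z)]^2-z\,{\rm Ai}^2(z)\big\} = -{\rm Ai}^2(z)$ (i.e., the bracket is the accumulated $|{\rm Ai}|^2$ of the filled states of the linear potential), and confirms that $-[3S_+(x)/4\hbar]^{2/3}$ equals $(2ma/\hbar^2)^{1/3}(x-x_\lambda)$ for a linear potential, as stated above:

```python
import math

def airy(z, nmax=200):
    """Ai(z) and Ai'(z) from the Maclaurin series; adequate for |z| <~ 5."""
    c = [3**(-2/3) / math.gamma(2/3),       # Ai(0)
         -(3**(-1/3)) / math.gamma(1/3),    # Ai'(0)
         0.0]
    for k in range(nmax):                   # recurrence from Ai''(z) = z Ai(z)
        c.append(c[k] / ((k + 3) * (k + 2)))
    ai = sum(ck * z**k for k, ck in enumerate(c))
    aip = sum(k * ck * z**(k - 1) for k, ck in enumerate(c) if k >= 1)
    return ai, aip

def bracket(z):
    """The Airy bracket [Ai'(z)]^2 - z Ai^2(z) of the linearized density."""
    ai, aip = airy(z)
    return aip**2 - z * ai**2

# 1) finite-difference check of d/dz [Ai'^2 - z Ai^2] = -Ai^2
z0, h = -2.0, 1e-5
fd_lhs = (bracket(z0 + h) - bracket(z0 - h)) / (2 * h)
fd_rhs = -airy(z0)[0]**2

# 2) uniform argument for a linear potential V(x) = a*x (hbar = m = 1)
a, lam, x = 1.3, 2.0, 0.5
x_t = lam / a                               # classical turning point
n = 20000                                   # Simpson rule for S_+ = 2 * int p dx
hg = (x_t - x) / n
S = 0.0
for i in range(n + 1):
    xi = x + i * hg
    w = 1 if i in (0, n) else (4 if i % 2 else 2)
    S += w * math.sqrt(max(2.0 * (lam - a * xi), 0.0))  # p = sqrt(2m(lam - V))
S *= 2.0 * hg / 3.0
z_uni = -(3.0 * S / 4.0) ** (2.0 / 3.0)
z_exact = (2.0 * a) ** (1.0 / 3.0) * (x - x_t)          # sigma * (x - x_lambda)
```

For a non-linear potential one would simply keep the numerical action integral and use $\widetilde z_\lambda$ directly.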
However, near the turning point, where the semiclassical approximation breaks down, the magnitude of $\delta\xi(x)$ is comparable to – and for the quartic potential even larger than – that of $\delta\rho(x)$. (Note that all three density oscillations are shown on the same vertical scale.) ### Linear approximation to smooth radially symmetric potentials in $D>1$ {#radlinap} We now start from an arbitrary smooth binding potential with radial symmetry, $V(\bfr)=V(r)$, $r=|\bfr|$, in $D>1$ dimensions. As above, we replace it by its linear approximation around the turning point $r_\lambda$ analogously to : $$V_{\text{lin}}(\bfr) = \lambdab + \bfa\cdot(\bfr-\bfr_\lambda)\,, \qquad \bfa = \nabla V(r)\big|_{r_\lambda}\,. \label{vlinaprad}$$ Due to the spherical symmetry of $V(r)$, the components of the vector $\bfa$ are determined by its constant radial magnitude: $$a_i = a\,\frac{x_{i\lambda}}{r_\lambda}\,, \qquad a = |\bfa| = V'(r_\lambda)\,.$$ Therefore, we may choose the radial variable $r$ along any of the Cartesian axes $x_i$, and the results $$\rho(x_i) = -\,\frac{\rho_{i0}^{\,3}}{48\pi} \left\{ {\rm Ai}(z_{i\lambda})\,{\rm Ai}'(z_{i\lambda}) + 2z_{i\lambda}\,[{\rm Ai}'(z_{i\lambda})]^2 - 2z_{i\lambda}^2\,{\rm Ai}^2(z_{i\lambda}) \right\}, \quad (D=3)\quad \label{rholin3}$$ where $\rho_{i0}=2\sigma_ia_i$, taken from [@nlvt] for the linear potential in $D>1$, apply with the replacements $x_i\rightarrow r$, $z_{i\lambda}\rightarrow \sigma(ar-\lambda)$. As in for $D=1$, we may then subtract their ETF contribution. Finally we introduce the uniform approximation to their oscillating parts near the surface with the argument expressed in terms of the action $S_+(r)$ of the primitive radial “+” orbit of the given radial potential $V(r)$: $$\widetilde{z}_\lambda = -\left[\frac{3\,S_+(r)}{4\hbar}\right]^{2/3}. \label{zunirad}$$ ![\[3hoairy\] Oscillating part of particle density for the $3D$ IHO with $N=22960$ particles ($M_{\text{s}}=40$) (units $\hbar=m=\omega=1$). [*Solid lines:*]{} exact results, [*dashed lines:*]{} uniform linearized approximation (\[rholin3\]) from [@nlvt] with argument ${\widetilde z}_\lambda$ given in . [*Upper panel:*]{} smooth part in $\delta\rho(r)$ taken as TF density, [*lower panel:*]{} smooth part in $\delta\rho(r)$ taken as ETF density.
](fig11.eps){width="1.09\columnwidth"} In we show numerical results for this approximation for the 3-dimensional IHO with $M_{\text{s}}=40$ occupied shells. The upper panel shows the exact result for $\delta\rho(r)$ (solid line), whereby only the TF approximation was used for its smooth part: $\delta\rho(r)=\rho(r)-\rho_{\text{TF}}(r)$. We notice that the oscillations in the interior are not symmetric about the zero line, which is due to smooth errors in the TF density. In the lower panel, the ETF corrections have been included in $\delta\rho(r)$; now the oscillations are symmetric about zero. The price paid for this is that $\delta\rho(r)$ diverges at the classical turning point. The uniform linear approximation with the argument , shown in both panels by the dashed lines, reproduces well the Friedel oscillation near the surface. In the interior, it fails due to the missing contributions of the repetitions ($k>0$) of the “+” and of all “$-$” orbits. Once more, these results demonstrate that the Friedel oscillations near the surface are semiclassically explained by the primitive “+” orbit alone. Its diverging amplitudes according to must, however, be regularized by the uniform linear approximation. ![\[3horhotot\] Total particle density for the $3D$ IHO with $N=3080$ particles ($M_{\text{s}}=20$) (units $\hbar=m=\omega=1$). [*Solid lines:*]{} exact result. [*Crosses:*]{} semiclassical result for $\delta\rho(r)$ in , summed up to $k_{\text{max}}=15$, plus $\rho_{\text{ETF}}(r)$. [*Dashed line:*]{} uniform linearized approximation (\[rholin3\]) from [@nlvt] with argument ${\widetilde z}_\lambda$ in . ](fig12.eps){width="1.18\columnwidth"} In we show the total density for the $3D$ IHO with $M_{\text{s}}=20$ filled shells. The solid line is the exact quantum result . The crosses give the semiclassical result as the sum $\rho_{\text{ETF}}(r)+\delta\rho(r)$, where the latter is calculated from the sum over the NPOs in up to $k_{\text{max}}=15$.
We see that the semiclassical result reproduces very accurately the exact result up to $r\sim 5.9$, which is rather close to the turning point $r_\lambda\sim 6.48$ where it diverges. The linearized approximation is shown by the dashed line; it approximates the exact density closely above $r\sim 5.8$. Thus, switching from the semiclassical approximation to the linearized one around $r\sim 5.85$ allows one to obtain a very good approximation of the density in all points. Billiard systems {#secfriedbil} ---------------- In billiards with reflecting walls, the above linearization is not possible since the slope of the potential is always infinite at the classical turning points. The amplitude of the primitive “+” orbit can in such systems be regularized by using the following uniform approximation of the Green function for short times [@agam; @bemo]: $$\begin{aligned} G_{\text{scl}}^{\text{(un)}}(E,{\bf r},{\bf r'})&=&\frac{m \pi}{i \hbar (2 \pi \hbar)^{D/2}} \sum_{\gamma} \bigg|\frac{S}{p_{_{\parallel}}p'_{_{\parallel}}} \det \frac{\partial{\bf p}_{\!\bot}}{\partial{\bf r'}_{\!\bot}}\bigg|^{1/2} \nonumber\\ &&\hspace{.5cm} \times~ H^{(1)}_{D/2-1}\bigg ( S/\hbar-\mu \pi/2 \bigg ),\label{green_st}\end{aligned}$$ where $H^{(1)}_{\nu}(x)$ is the Hankel function of the first kind. To evaluate the corresponding uniform approximation for the particle density, we have to take the imaginary part of and perform the integration over the energy. This last step is not easily done analytically in general, since $H^{(1)}_{\nu}(x)$ is not a simple oscillatory function of the energy. In the following, we give results for the contributions to the particle density $\rho(\bfr)$ in two special cases. Unfortunately, we have not been able to derive the corresponding contributions to the kinetic-energy densities. 
### Arbitrary 2$D$ billiard {#secfriedel2d} For billiards in $D$=2 dimensions with arbitrary boundaries, the uniform contribution to the particle density becomes (see [@agam] for details): $$\begin{aligned} \delta \rho_+^{\text{(un)}}(d) = -\frac{p_{\lambda} J_1(2 d p_{\lambda}/\hbar )} {2\pi \hbar d \sqrt{1-d/R}}\,, \label{friedreg2d}\end{aligned}$$ where $d$ is the distance from the boundary and $R$ its curvature radius at the reflection point. Hereby it is assumed that $d$ is small enough so that there is only one “+” orbit going to the boundary and back to the given starting point. Note that the curvature radius $R$ is negative if the boundary is convex at the turning point. ### Spherical billiards in $D$ dimensions For spherical billiards in $D$ dimensions with radius $R$, the energy integral over can also be performed, and the regularized contribution of the primitive “+” orbit becomes $$\delta\rho_{+}^{\text{(un)}}(r) = -\,\rho_{\text{TF}}^{(D)}\, 2^{\nu}\,\Gamma(\nu+1)\, \frac{J_\nu(z)}{z^{\nu}} \left(\frac{r}{R}\right)^{\!-(D-1)/2}, \label{drfriedd}$$ where $\rho_{\text{TF}}^{(D)}$ is the TF density given in , and $$\nu=D/2\,, \qquad z=2(R-r)\,p_\lambda/\hbar\,.$$ For $D=3$, the expression agrees with a result derived by Bonche [@bonc] using the multiple-reflection expansion of the Green function introduced by Balian and Bloch [@babl]. For $D=1$ (one-dimensional box), the result is also found from the exact solution. As mentioned above, the contribution is responsible for the Friedel oscillations in the densities near the boundary $r=R$. It is interesting to perform the spatial integral of over the volume of the billiard. Using the formula ([@grry], 6.561.14, with $\mu=-\nu$) $$\int_0^\infty \frac{J_\nu(x)}{x^{\nu}}\,{\rm d}x = \frac{\sqrt{\pi}}{2^{\nu}\,\Gamma(\nu+\tfrac12)}\,,$$ the integral can be done in the limit $p_\lambda\to\infty$ (i.e., for large particle numbers), and the asymptotically leading term yields the following contribution to the particle number: $$N_{\text{S}} \simeq -\,\frac{\pi^{(1-D)/2}}{2^{D}\,\Gamma\!\left(\frac{D+1}{2}\right)} \left(\frac{p_\lambda}{\hbar}\right)^{\!D-1} {\cal S}_D\,, \label{weylsurf}$$ where ${\cal S}_D$ is the hypersurface of the $D$-dimensional sphere: $${\cal S}_D = \frac{2\,\pi^{D/2}}{\Gamma(D/2)}\,R^{\,D-1}.$$
We note that corresponds precisely to the surface term in the Weyl expansion [@weyl] of the particle number $N$. The Fermi energy $\lambda_{\text{TF}}$ in hereby has to be replaced by the corresponding quantity $\lambda_{\text{Weyl}}$ obtained by integrating the Weyl-expanded density of states to the particle number $N$. The role of the “+” orbit in contributing the surface term to the Weyl expansion of the density of states has been demonstrated by Zheng [@zheng] for arbitrary billiards in $D=2$ dimensions. General results for finite fermion systems {#traps} ========================================== After having presented our semiclassical theory for spatial density oscillations and tested it in various model potentials, we shall now discuss some of its results in the general context of finite fermion systems. Besides the trapped fermionic gases [@jin] mentioned already in the introduction, we have in mind also self-bound molecular systems with local pseudopotentials, such as clusters of alkali metals [@clust], treated in the mean-field approach of DFT with the local density approximation (LDA), for which the KS-potential is local [@dft]. Local virial theorem {#seclvt} -------------------- One of our central results was given in Eq.  which we repeat for the present discussion: $$\delta \tau({\bf r})\simeq[\lambdab-V({\bf r})] \,\delta \rho({\bf r}) \,. \label{lvtr}$$ We call it the “local virial theorem” (LVT) because it connects the oscillating parts of the kinetic and potential energy densities locally at any given point $\bfr$. While the well-known virial theorem relates, both classically and quantum-mechanically, [*integrated*]{} (i.e., averaged) kinetic and potential energies to each other, the LVT in does this [*locally*]{} at any point $\bfr$. We recall that hereby the Fermi energy $\lambdab$ of the averaged system is defined by Eq. . 
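The LVT is easy to test in the simplest confined system, the 1D box, where all sums are elementary (this is our own test case, not taken from the paper; we use the Laplacian-type kinetic density $-\sum_n\varphi_n\varphi_n''$ for $\tau$, which is the form obeying the LVT, and the gradient form for $\tau_1$). With $\hbar=2m=L=1$, $\varphi_n(x)=\sqrt{2}\sin(n\pi x)$, $M$ doubly occupied levels and the Weyl choice $k_F=(M+\tfrac12)\pi$, $\lambdab=k_F^2$:

```python
import math

M = 150                          # doubly occupied levels (N = 2M fermions)
kF = (M + 0.5) * math.pi         # Weyl Fermi momentum
lam = kF ** 2                    # Fermi energy (hbar = 2m = 1)
x = 0.115                        # arbitrary test point inside the box

s0 = sum(math.cos(2 * n * math.pi * x) for n in range(1, M + 1))
s2 = sum(n**2 * math.cos(2 * n * math.pi * x) for n in range(1, M + 1))

# oscillating parts (smooth averages of sin^2, cos^2 already subtracted):
drho  = -2.0 * s0 - 1.0          # rho(x) - rho_TF,  rho_TF = 2 kF/pi = 2M + 1
dtau  = -2.0 * math.pi**2 * s2   # from tau  = -2 sum phi phi''  (Laplacian form)
dtau1 = +2.0 * math.pi**2 * s2   # from tau1 = +2 sum |phi'|^2   (gradient form)

# D = 1 Friedel form of the hard-wall result near the boundary:
friedel = -(2 * kF / math.pi) * math.sin(2 * kF * x) / (2 * kF * x)
```

Within a few percent one finds $\delta\tau\simeq\lambdab\,\delta\rho$ (the LVT with $V=0$) and $\delta\tau_1\simeq\lambdab\cos(\pi)\,\delta\rho$ (all 1D orbits are self-retracing), and the same data reproduce the $D=1$ Friedel oscillation near the wall.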
Since no particular assumptions need be made [@noteB] to derive semiclassically from the basic equations and , the LVT holds for arbitrary local potentials, and hence also for systems of interacting fermions in the mean-field approximation given by the DFT-LDA-KS approach. This is in itself an interesting basic result. It may also be of practical interest, because it allows one to determine kinetic energy densities from the knowledge of particle densities, which are in general easier to measure experimentally. We leave it as a challenge to the condensed matter community, in particular those working with trapped ultracold fermionic atoms, to verify the LVT experimentally. Other forms of local virial theorems have been derived in [@bm] from the exact quantum-mechanical densities of isotropic harmonic oscillators in arbitrary dimensions. A Schrödinger-like (integro-)differential equation for the particle density $\rho(r)$ has also been derived in [@bm]. It would lead beyond the scope of the present paper to discuss these results and their generalization to arbitrary local potentials based upon our semiclassical theory. This will be done in a forthcoming publication [@nlvt], where we also give exact expressions for spatial densities in linear potentials of which we have already made use in . Extended validity of the TF kinetic-energy functional ----------------------------------------------------- Presently we discuss the direct functional relation between the particle and kinetic energy densities obtained in the Thomas-Fermi model. While Eq.  is exact only when applied to the TF expressions and of the (smooth) densities, we shall now show that a semiclassically approximate relation holds also between the oscillating exact densities: $$\tau(\bfr) \simeq \tau_{\text{TF}}[\rho(\bfr)]\,. \label{tfofrho}$$ Eq.  states that the TF relation holds approximately, for arbitrary local potentials $V(\bfr)$, also for the [*exact quantum-mechanical densities including their quantum oscillations*]{}.
This had been observed numerically already earlier [@brfest], but without understanding of the reason for its validity. The proof of is actually very easy, having the LVT at hand. Inserting $\rho(\bfr)=\rho_{\text{TF}}(\bfr)+\delta\rho(\bfr)$ into and Taylor expanding around $\rho_{\text{TF}}(\bfr)$, we obtain $$\tau_{\text{TF}}[\rho(\bfr)] = \tau_{\text{TF}}[\rho_{\text{TF}}(\bfr)] + \frac{{\rm d}\,\tau_{\text{TF}}[\rho]}{{\rm d}\rho}\,\bigg|_{\rho_{\text{TF}}(\bfr)}\,\delta\rho(\bfr) + {\cal O}\!\left[(\delta\rho)^2\right].$$ Using the obvious identity $\tau_{\text{TF}}[\rho_{\text{TF}}(\bfr)] =\tau_{\text{TF}}(\bfr)$ and the fact that ${\rm d}\tau_{\text{TF}}[\rho_{\text{TF}}(\bfr)]/{\rm d}\rho_{\text{TF}}(\bfr)= [\lambdab-V({\bf r})]$, we see immediately with that, to first order in the oscillating parts, we have indeed the relation $$\tau_{\text{TF}}[\rho(\bfr)] \simeq \tau_{\text{TF}}(\bfr)+\delta\tau(\bfr) = \tau(\bfr)\,. \label{taufct}$$ We stress that, although the TF expression for all three kinetic energy densities $\tau(\bfr)$, $\tau_1(\bfr)$ and $\xi(\bfr)$ is the same \[cf.\], the relation holds only for $\tau(\bfr)$. The reason is that the LVT also only holds for this kinetic energy density, as discussed explicitly in the previous sections. In Figs. \[chaos1\] and \[tf2D\] we present numerical tests of the relation for the two-dimensional coupled quartic oscillator , which represents a classically chaotic system, with two different particle numbers. An example for the three-dimensional spherical billiard, which is a good approximation for the self-consistent mean field of very large alkali metal clusters [@sciam], is shown in Fig. \[tf3D\]. We see that in all cases, the relation between the exact quantum-mechanical densities $\tau(\bfr)$ and $\rho(\bfr)$ is extremely well fulfilled; only close to the classical turning points, where the LVT does not hold, do we see a slight deviation. Obviously, the terms of order ${\cal O}\left[(\delta\rho)^2\right]$, neglected in the above derivation, play practically no role in the interior of the systems – even for moderate particle numbers $N$, as seen in or in the examples given in Ref. [@brfest] (and reproduced in [@bvz]).
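A minimal numerical illustration of this extended TF relation (our own example, not from the paper): for the 1D box with $\hbar=2m=L=1$ and spin degeneracy 2, the TF functional reads $\tau_{\text{TF}}[\rho]=\pi^2\rho^3/12$, and feeding it the exact oscillating density reproduces the exact kinetic-energy density in the interior to a fraction of a percent:

```python
import math

M = 150                                   # doubly occupied levels in a unit box

def rho(x):
    """Exact quantum density, phi_n = sqrt(2) sin(n pi x), spin factor 2."""
    return sum(4.0 * math.sin(n * math.pi * x)**2 for n in range(1, M + 1))

def tau(x):
    """Exact kinetic density -2 sum phi*phi'' (the form obeying the LVT)."""
    return sum(4.0 * (n * math.pi)**2 * math.sin(n * math.pi * x)**2
               for n in range(1, M + 1))

def tau_TF(r):
    """1D Thomas-Fermi functional tau_TF[rho] = pi^2 rho^3 / 12 (hbar = 2m = 1)."""
    return math.pi**2 * r**3 / 12.0
```

The agreement degrades only close to the walls, where the LVT itself breaks down.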
![\[chaos1\] (Color online) TF functional relation for the same system as in ($N=632$ particles). Cuts along the diagonal $x=y$. The solid (black) line is the l.h.s., and the dashed (red) line is the r.h.s. of . ](fig13.eps){width="1.\columnwidth"} ![\[tf2D\] (Color online) Same as in for $N=42$ particles. Cuts along the line $y=x/\sqrt{3}$. ](fig14.eps){width="1.\columnwidth"} ![\[tf3D\] Test of the TF functional relation for $N=100068$ particles in the three-dimensional spherical billiard (lines as in , units $\hbar^2\!/2m=R=1$; both densities divided by $N^{5/3}$). Note that in the vertical direction of the figure, only a very small excerpt around the bulk value is displayed. ](fig15.eps){width="1\columnwidth"} This result might come as a surprise, since it is well known from the ETF model that [*for smooth densities*]{} the gradient corrections to the functional $\tau_{\text{TF}}[\rho]$ do play an important role for obtaining the correct average kinetic energy. (For three-dimensional systems, the first of them is the famous Weizsäcker correction [@ws].) Examples for this are given in chapter 4.4 of [@book]. However, if gradient corrections up to a given order were consistently added to and used with the [*exact*]{} density $\rho(\bfr)$, the agreement seen in the above figures would be completely spoiled. Summary and concluding remarks {#secsum} ============================== We have presented a semiclassical theory, initiated in [@rb], for the oscillating parts of the spatial densities in terms of closed non-periodic orbits (NPOs), while the smooth parts of the densities are given by the (extended) Thomas-Fermi (TF) theory. Our equations – are the analogues of the semiclassical trace formula for the density of states in terms of periodic orbits. For spherical systems, two kinds of oscillations in the spatial densities can be separated, as is implied in Eqs.
– : regular, short-ranged ones (denoted by the symbol $\delta_{\text{r}}$) that we can attribute to the librating NPOs in the radial direction, and irregular, long-ranged ones (denoted by $\delta_{\text{irr}}$) that are due to non-radial NPOs and therefore only exist in $D>1$ dimensions. The simple nature of the radial NPOs leads immediately to a number of relations between the regular parts of the oscillations, such as Eqs. , , or the universal form for $\delta_{\text{r}}\rho(r)$ valid near $r=0$. It also explains that the kinetic-energy density $\xi(r)$ defined in has no rapid regular oscillations, as implied in , but is smooth for all one-dimensional systems, as well as for isotropic harmonic oscillators [@bm] and linear potentials [@nlvt] in arbitrary $D$ dimensions, since these contain no non-radial NPOs. In spherical systems, the semiclassical expansion in terms of NPOs is expected to work best for filled “main shells” where the total energy has a pronounced local minimum. This is also discussed in Ref. [@circ] on the two-dimensional circular billiard, for which a complete classification of all NPOs (in addition to the periodic orbits) has been made and the semiclassical theory for the spatial density oscillations has been studied analytically. The semiclassical approximation for the density oscillations is, indeed, found there to work best for the closed-shell systems with filled main shells. But even for “mid-shells” systems with half-filled main shells and for most intermediate systems, the agreement of the semiclassical densities with the quantum-mechanical ones has turned out in [@circ] to be very satisfactory. Based on the semiclassical theory, we were able to generalize the “local virial theorem” (LVT) given in and , which had earlier been derived from exact results for isotropic harmonic oscillators [@bm], to arbitrary local potentials $V(\bfr)$. 
We emphasize that the LVT is valid (semiclassically) also for an [*interacting $N$-fermion system*]{} bound by the self-consistent Kohn-Sham potential obtained within the framework of DFT and might be verified experimentally in finite fermionic systems. We are grateful to M. Gutiérrez, M. Seidl, D. Ullmo, T. Kramer, M.V.N. Murthy, A.G. Magner and S.N. Fedotkin for helpful discussions. After posting of the first preprint of this publication, A.G. Magner kindly brought Ref. [@bonc] to our attention. A.K. acknowledges financial support by the Deutsche Forschungsgemeinschaft (Graduierten-Kolleg 638). Inclusion of finite temperature in the semiclassical theory {#appcor} =========================================================== In this Appendix we give a short sketch of how to include finite temperatures in the semiclassical formalism. Extensions of semiclassical trace formulae to finite temperatures have been used already long ago in the context of nuclear physics [@mako] and more recently in mesoscopic physics [@ruj]. We shall present here a derivation by means of a suitable folding function, which has proved useful also in the corresponding microscopic theory [@bq]. For a grand-canonical ensemble of fermions embedded in a heat bath with fixed temperature, the variational energy is the so-called grand potential $\Omega$ defined by [@foot2] $$\Omega = \langle {\hat H} \rangle - TS - \lambda\,\langle {\hat N} \rangle\,, \label{omega}$$ where ${\hat H}$ and ${\hat N}$ are the Hamilton and particle number operators, respectively, $T$ is the temperature in energy units (i.e., we put the Boltzmann constant $k_B$ equal to unity), $S$ is the entropy, and $\lambda$ the chemical potential. Note that both energy and particle number are conserved only on the average.
For non-interacting particles, we can write the Helmholtz free energy $F$ as $$F = \langle {\hat H} \rangle - TS = 2\sum_n E_n \nu_n - TS\,, \label{free}$$ where $E_n$ is the energy spectrum of ${\hat H}$ and $\nu_n$ are the Fermi occupation numbers $$\nu_n = \frac{1}{1+\exp{\left(\frac{E_n - \lambda}{T}\right)}}\,, \label{nuocc}$$ and the entropy is given by $$S=-2\sum_n \,[\nu_n \log\nu_n + (1-\nu_n) \log (1-\nu_n)] \,. \label{Sent}$$ The chemical potential $\lambda$ is determined by fixing the average particle number $$N = \langle {\hat N} \rangle = 2\sum_n \nu_n\,. \label{avnum}$$ It can be shown [@bq] that the above quantities $N$, $F$ and $S$ may be expressed in terms of a convoluted [*finite-temperature level density*]{} $g_T(E)$ as $$F=2\int_{-\infty}^\lambda E\,g_T(E)\,\d E\,. \label{gtrel}$$ The function $g_T(E)$ is defined by a convolution of the “cold” ($T=0$) density of states $$g_T(E)=\int_{-\infty}^\infty g(E')\,f_T(E-E') \, \d E' =\sum_n f_T(E-E_n)\,, \label{gT}$$ whereby the folding function $f_T(E)$ is given as $$f_T(E) = \frac{1}{4T\cosh^2\!\left(\frac{E}{2T}\right)}\,. \label{fT}$$ Note that all sums in – run over the complete (infinite) spectrum of the Hamiltonian ${\hat H}$. It is now easily seen that $$N = 2 \int_{-\infty}^\lambda g_T(E)\,\d E\,.$$ To show that the integral gives, indeed, the correct free energy including the “heat energy” $-TS$ needs some algebraic manipulations. From $F$, the entropy $S$ can always be gained by the canonical relation $$S=-\,\frac{\partial F}{\partial T}\,. \label{canonent}$$ The same convolution can now be applied also to the semiclassical trace formula for the oscillating part of the density of states, which we re-write as $$\delta g(E) \simeq \sum_{\text{PO}} {\cal A}_{\text{PO}}(E)\, {\rm e}^{\frac{i}{\hbar}S_{\text{PO}}(E)\,-\,i\sigma_{\text{PO}}\frac{\pi}{2}}\,. \label{trfc}$$ The oscillating part $\delta g_T(E)$ of the finite-temperature level density is obtained by the convolution of with the function $f_T(E)$ as in .
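The Fourier-transform pair behind the resulting temperature modulation factor can be verified numerically (our own check; the values of $T$ and of the scaled period are arbitrary). Assuming the Fermi folding function $f_T(E)=1/[4T\cosh^2(E/2T)]$ of [@bq], its transform is $\pi T\Tau/\sinh(\pi T\Tau)$:

```python
import math

def f_T(E, T):
    """Fermi folding function; its primitive is the Fermi occupation factor."""
    return 1.0 / (4.0 * T * math.cosh(E / (2.0 * T))**2)

def ft_numeric(tau, T, cut=60.0, n=8000):
    """Fourier transform  int f_T(E) exp(i tau E) dE  (real by symmetry),
    computed by composite Simpson integration on [-cut*T, cut*T]."""
    h = 2.0 * cut * T / n
    s = 0.0
    for i in range(n + 1):
        E = -cut * T + i * h
        w = 1 if i in (0, n) else (4 if i % 2 else 2)
        s += w * f_T(E, T) * math.cos(tau * E)
    return s * h / 3.0

def ft_analytic(tau, T):
    """Closed form: the temperature modulation factor pi*T*tau / sinh(pi*T*tau)."""
    u = math.pi * T * tau
    return u / math.sinh(u) if u != 0.0 else 1.0
```

At $\Tau=0$ the transform reduces to the normalization $\int f_T(E)\,{\rm d}E = 1$, so the $T=0$ trace formula is recovered for short orbits.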
In the spirit of the stationary-phase approximation, we take the slowly varying amplitude ${\cal A}_{\text{PO}}(E)$ outside of the integration and approximate the action in the phase by $$S_{\text{PO}}(E') \simeq S_{\text{PO}}(E) + (E'-E)\,T_{\text{PO}}(E)\,,$$ so that the result becomes a modified trace formula $$\delta g_T(E) \simeq \sum_{\text{PO}} {\cal A}_{\text{PO}}(E)\, {\tilde f}_T(\Tau_{\text{PO}}(E))\, {\rm e}^{\frac{i}{\hbar}S_{\text{PO}}(E)\,-\,i\sigma_{\text{PO}}\frac{\pi}{2}}\,, \label{trft}$$ where $\Tau_{\text{PO}}(E) = T_{\text{PO}}(E)/\hbar$ and the temperature modulation factor ${\tilde f}_T$ is given by the Fourier transform of the convolution function $f_T$: $${\tilde f}_T(\Tau) = \int_{-\infty}^{\infty} f_T(E)\,{\rm e}^{i\Tau E}\,\d E\,.$$ The Fourier transform of the function is known, so that $${\tilde f}_T(\Tau) = \frac{\pi T \Tau}{\sinh(\pi T \Tau)}\,. \label{modT}$$ The “hot” trace formula with the modulation factor has been obtained in [@ruj; @mako]. For the spatial densities, we can proceed exactly in the same way. For the particle density, e.g., the microscopic expression is replaced by $$\rho_T(\bfr) = 2 \sum_n |\varphi_n(\bfr)|^2\,\nu_n\,, \label{rhoT}$$ where the sum again runs over the complete spectrum. Starting from the semiclassical expression for $\delta\rho(r)$ at $T=0$, we rewrite it as $$\delta\rho_0(\lambdab,\bfr) \simeq \sum_{\text{NPO}} {\cal A}_{\text{NPO}}(\lambdab,\bfr)\, {\rm e}^{i\Phi_{\text{NPO}}(\lambdab,\bfr)}\,, \label{drhoscfoll}$$ where the amplitude ${\cal A}_{\text{NPO}}$ collects all the prefactors of the phase in . The finite-$T$ expression is given by the convolution integral $$\delta\rho_T(\lambdab,\bfr) \simeq \int_{-\infty}^{\infty} \delta\rho_0(\lambdab-E,\bfr)\, f_T(E)\,\d E\,.$$ Expanding the phase under the integral as above, we arrive at $$\delta\rho_T(\lambdab,\bfr) \simeq \sum_{\text{NPO}} {\cal A}_{\text{NPO}}(\lambdab,\bfr)\, {\tilde f}_T(\Tau_{\text{NPO}}(\lambdab,\bfr))\, {\rm e}^{i\Phi_{\text{NPO}}(\lambdab,\bfr)}\,, \label{drhoscT}$$ where $\Tau_{\text{NPO}}=T_{\text{NPO}}(\lambdab,\bfr)/\hbar$ is the period of the NPO in units of $\hbar$. The corresponding expressions for the other spatial densities are obvious. For the smooth parts of the densities, we recall that the (E)TF theory at $T>0$ is well known and refer to chapter 4.4.3 of [@book] for the main results and relevant literature. Other types of correlations can be included in the semiclassical theory in the same way, as soon as a suitable folding function $f_{\text{corr}}(E)$ corresponding to $f_T(E)$ in and its Fourier transform are known (see, e.g., Ref. [@kaz09]). [31]{} B. DeMarco and D.S.
Jin, Science [**285**]{}, 1703 (1999); B. DeMarco, S.B. Papp and D.S. Jin, Phys. Rev. Lett. [**86**]{}, 5409 (2001); A. Görlitz [*et al.*]{}, Phys. Rev. Lett. [**87**]{}, 130402 (2001); A.G. Truscott [*et al.*]{}, Science [**291**]{}, 2570 (2001); F. Schreck [*et al.*]{}, Phys. Rev. Lett. [**87**]{}, 080403 (2001); C.A. Regal [*et al.*]{}, Nature (London) [**424**]{}, 47 (2003); M.W. Zwierlein [*et al.*]{}, Phys. Rev. Lett. [**91**]{}, 250401 (2003); C.A. Regal [*et al.*]{}, Phys. Rev. Lett. [**92**]{}, 040403 (2004); M.W. Zwierlein [*et al.*]{}, Nature (London) [**435**]{}, 1046 (2005); G.B. Partridge [*et al.*]{}, Science [**311**]{}, 503 (2006). P. Vignolo, A. Minguzzi and M.P. Tosi, Phys. Rev. Lett. [**85**]{}, 2850 (2000). F. Gleisberg, W. Wonneberger, U. Schlöder and C. Zimmermann, Phys. Rev. A [**62**]{}, 063602 (2000). M. Brack and B. van Zyl, Phys. Rev. Lett. [**86**]{}, 1574 (2001). A. Minguzzi, N.H. March and M.P. Tosi, Eur. Phys. J. D [**15**]{}, 315 (2001). A. Minguzzi, N.H. March and M.P. Tosi, Phys. Lett. A [**281**]{}, 192 (2001). N.H. March and L.M. Nieto, Phys. Rev. A [**63**]{}, 044502 (2001). P. Vignolo and A. Minguzzi, J. Phys. B: At. Mol. Opt. Phys. [**34**]{}, 4653 (2001). I.A. Howard, N.H. March and L.M. Nieto, Phys. Rev. A [**66**]{}, 054501 (2002). M. Brack and M.V.N. Murthy, J. Phys. A: Math. Gen. [**36**]{}, 1111 (2003). E.J. Mueller, Phys. Rev. Lett. [**93**]{}, 190404 (2004). Z. Akdeniz, P. Vignolo, A. Minguzzi and M.P. Tosi, Phys. Rev. A [**66**]{}, 055601 (2002). B. van Zyl, R.K. Bhaduri, A. Suzuki and M. Brack, Phys. Rev. A [**67**]{}, 023609 (2003). P. Hohenberg and W. Kohn, Phys. Rev. [**136**]{}, B864 (1964). W. Kohn and L.J. Sham, Phys. Rev. [**137**]{}, A1697 (1965); [*ibidem*]{} [**140**]{}, A1133 (1965). R.M. Dreizler and E.K.U. Gross: [*Density Functional Theory*]{} (Springer-Verlag, Berlin, 1990). S. M. Reimann, M. Persson, P. E. Lindelof, and M. Brack, Z. Phys. B [**101**]{}, 377 (1996). M. Brack, Scientific American, December 1997, p. 50.
J. Roccia and M. Brack, Phys. Rev. Lett. [**100**]{}, 200408 (2008). M.C. Gutzwiller: [*Chaos in classical and quantum mechanics*]{} (Springer, New York, 1990). M. Brack and J. Roccia, J. Phys. A: Math. Theor. [**42**]{}, 355210 (2009). Note that in the standard literature on DFT, $\tau(\bfr)$ sometimes denotes what we here call $\tau_1(\bfr)$. See also Ref. [@dft], chapter 5.5, for a discussion and further literature on the various forms of the kinetic-energy density. R.K. Bhaduri and L.F. Zaifman, Can. J. Phys. [**57**]{}, 1990 (1979); C. Guet and M. Brack, Z. Phys. A [**297**]{}, 247 (1980). N.H. March, Adv. Phys. [**6**]{}, 1 (1957). J. G. Kirkwood, Phys. Rev. [**44**]{}, 31 (1933). M. Brack and R.K. Bhaduri: [*Semiclassical Physics*]{}, revised edition (Westview Press, Boulder, CO, USA, 2003). In billiard systems with hard-wall reflection, there exists no gradient expansion of the potential and therefore the smooth parts of the densities are given by their TF values. M. Brack [*et al.*]{}, work in preparation;\ cf. also Sec. VI in: arXiv:0903.2172v3 \[math-ph\] (2009). M.C. Gutzwiller, J. Math. Phys. [**12**]{}, 343 (1971). Conjugate points are those in which a fan of slightly perturbed orbits, obtained by small changes of the initial momentum (or position) of a given orbit, focuses again on the unperturbed orbit, see [@gubu]. M.V. Berry and K.E. Mount, Rep. Prog. Phys. [**35**]{}, 315 (1972). S.C. Creagh and R.G. Littlejohn, Phys. Rev. A [**44**]{}, 836 (1991); A. Sugita, Ann. Phys. (N.Y.) [**288**]{}, 277 (2001); M. Pletyukhov and M. Brack, J. Phys. A [**36**]{}, 9449 (2003). S.C. Creagh, J.M. Robbins, R.G. Littlejohn, Phys. Rev. A [**42**]{}, 1907 (1990). M.V. Berry and M. Tabor, Proc. R. Soc. Lond. A [**349**]{}, 101 (1976). M. Centelles, P. Leboeuf, A.G. Monastra, J. Roccia, P. Schuck, and X. Viñas, Phys. Rev. C [**74**]{}, 034332 (2006). M.V. Berry, Ann. Phys. (N.Y.) [**131**]{}, 163 (1981). W. Parry and M. Pollicott, Ann. Math. [**118**]{}, 573 (1983).
B. Eckhardt and E. Aurell, Europhys. Lett. [**9**]{}, 509 (1989). A.B. Migdal: [*Qualitative Methods in Quantum Theory*]{} (W.A. Benjamin, Inc., Reading, 1977), chapter 3. O. Bohigas, S. Tomsovic, and D. Ullmo, Phys. Rep. [**223**]{}, 43 (1993). A.B. Eriksson and P. Dahlqvist, Phys. Rev. E [**47**]{}, 1002 (1993). M. Gutiérrez, M. Brack, K. Richter, A. Sugita, J. Phys. A [**40**]{}, 1525 (2007). V.M. Strutinsky and A.G. Magner, Sov. J. Part. Nucl. [**7**]{}, 138 (1976). For arbitrary radial potentials in $D>1$, the determination of $M_{\text{s}}$ is not as straightforward as for IHOs. It is, however, a well-known phenomenon that such systems exhibit nearly-degenerate “main shells”, see e.g. M. Brack, J. Damg[å]{}rd, A.S. Jensen, H.C. Pauli, V.M. Strutinsky, and C.Y. Wong, Rev. Mod. Phys. [**44**]{}, 320 (1972). The values of $M_{\text{s}}$ (or the corresponding particle numbers $N$) are best determined by looking for pronounced minima in the oscillating part $\delta E(N)$ of the total energy, the so-called “shell-correction energy”. M.C. Gutzwiller, J. Math. Phys. [**8**]{}, 1979 (1967). V.P. Maslov and M.V. Fedoriuk: [*Semiclassical Approximation in Quantum Mechanics*]{} (Reidel, Dordrecht, 1981). K. Richter, D. Ullmo, R. Jalabert, Phys. Rep. [**276**]{}, 1 (1996). The second derivatives of $\widehat{S}$ with respect to ${\bf p'}_{\!\bot}$ together with (\[detp\]) yield the expected determinant on the r.h.s. of (\[vleckdet\]). Besides, the SPA also changes the topology of the classical trajectories in the phase space to have final position $\bfr'_\perp=\bfr_\perp$ instead of final momentum $\bfp'_\perp$. Note that the relation $\partial \widehat{S}/\partial{\bf p'}_{\!\bot}=-{\bf r'}_{\!\bot}$ guarantees that the Legendre transformation conserves Hamilton’s equations of motion (i.e., that it corresponds to a canonical transformation). M. Abramowitz and I.A. Stegun: [*Handbook of Mathematical Functions*]{} (Dover, 9th printing, New York, 1970). O. Agam, Phys. Rev.
B [**54**]{}, 2607 (1996). P. Bonche, Nucl. Phys. A [**191**]{}, 609 (1972). R. Balian and C. Bloch, Ann. Phys. (N. Y.) [**69**]{} 76 (1972). I. S. Gradshteyn and I. M. Ryzhik: [*Table of Integrals, Series, and Products*]{} (Academic Press, New York, 5th edition, 1994). see, e.g., H.P. Baltes and E.R. Hilf: [*Spectra of Finite Systems*]{} (B.-I. Wissenschaftsverlag, Mannheim, 1976). W.-M. Zheng, Phys. Rev. E [**60**]{}, 2845 (1999). W. A. de Heer, W. D. Knight, M. Y. Chou, and M. L. Cohen, in [*Solid State Physics, Vol. 40*]{}, eds. H. Ehrenreich and D. Turnbull (Acad. Press, New York, 1987), p. 93;\ J. Pedersen, S. Bj[ø]{}rnholm, J. Borggreen, K. Hansen, T. P. Martin, and H. D. Rasmussen, Nature [**353**]{}, 733 (1991);\ W. A. de Heer, Rev. Mod. Phys. [**65**]{}, 611 (1993);\ M. Brack, Rev. Mod. Phys. [**65**]{}, 677 (1993). We point out that the LVT holds also at critical points $\bfr$ connected with symmetry breaking and bifurcations, where the semiclassical amplitudes in – have to be regularized by uniform approximations (cf. ; see also Ref. [@circ] for the circular billiard). M. Brack in: [*From nuclei to bose condensates*]{}, Festschrift for the 65th birthday of Rajat K. Bhaduri (Regensburg and Chennai, April 2000), p. 35. C.F. v. Weizs[ä]{}cker, Z. Phys. [**96**]{}, 431 (1935). V.M. Kolomietz, A.G. Magner, and V.M. Strutinsky, Yad. Fiz. [**29**]{}, 1478 (1979); A.G. Magner, V.M. Kolomietz, and V.M. Strutinsky, Izvestiya Akad. Nauk SSSR, Ser. Fiz. [**43**]{}, 142 (1979). M. Brack and P. Quentin, Nucl. Phys, A [**361**]{}, 35 (1981). Note that in this Appendix, $T$ and $S$ without subscript denote temperature and entropy, respectively, while the same symbols with subscripts “PO” or “NPO” denote the periods and actions of the classical orbits, as used everywhere else in this paper. M. Brack and J. Roccia, Int. J. Mod. Phys. E, in print; preprint see arXiv:0911.0284 \[math-ph\]. 
| Type of orbits | a | b | c |
|---|---|---|---|
| Image points of $P(x,y)$ | $(2 k_x Q_x+x,\,2 k_y Q_y-y)$ | $(2 k_x Q_x-x,\,2 k_y Q_y+y)$ | $(2 k_x Q_x-x,\,2 k_y Q_y-y)$ |
| Orbit length | $L(k_x Q_x,\,k_y Q_y-y)$ | $L(k_x Q_x-x,\,k_y Q_y)$ | $L(k_x Q_x-x,\,k_y Q_y-y)$ |
| $\theta$ | $\theta_{\text{a}}= -2 \arctan \big( \frac{k_y Q_y-y}{k_x Q_x} \big)$ | $\theta_{\text{b}}= 2 \arctan \big( \frac{k_y Q_y}{k_x Q_x-x} \big)$ | $\theta_{\text{c}}= \pi$ |
| Contribution to $\delta \rho$ | $\delta \rho_{\text{a}} = f(k_x Q_x,\,k_y Q_y-y,\,1)$ | $\delta \rho_{\text{b}} = f(k_x Q_x-x,\,k_y Q_y,\,1)$ | $\delta \rho_{\text{c}} = f(k_x Q_x-x,\,k_y Q_y-y,\,0)$ |
| Contribution to $\delta \tau_1$ | $\delta \tau_{1{\text{a}}}= \lambdab \cos(\theta_{\text{a}})\, \delta \rho_{\text{a}}$ | $\delta \tau_{1{\text{b}}}= \lambdab \cos(\theta_{\text{b}})\, \delta \rho_{\text{b}}$ | $\delta \tau_{1{\text{c}}}= \lambdab \cos(\theta_{\text{c}})\, \delta \rho_{\text{c}}$ |
--- abstract: 'Following the early paper of Goldreich & Julian (1969), polar-cap models have usually assumed that the closed sector of a pulsar magnetosphere corotates with the neutron star. Recent work by Melrose & Yuen has been a reminder that in an oblique rotator, the induction field arising from the time-varying magnetic flux density cannot be completely screened. The principal consequence is that the plasma does not corotate with the star. Here it is shown that the physics of the polar cap is not changed at the altitudes of the radio emission source. But the presence of a plasma drift velocity in the corotating frame of reference does provide a mechanism whereby the net charge of the star can be maintained within a stable band of values. It also shows directly how electron injection and acceleration occur in the outer gap of the magnetosphere. It is consistent with radio-loud pulsars in the Fermi LAT catalogue of $\gamma$-emitters all having positive polar-cap charge density.' author: - | P. B. Jones[^1]\ University of Oxford, Department of Physics, Denys Wilkinson Building,\ Keble Road, Oxford OX1 3RH, U.K. title: Polar caps in the presence of an induction field --- \[firstpage\] pulsars: general - plasmas - stars: neutron Introduction ============ The present consensus is that the source of the universal coherent radio emission in pulsars is in the open sector of the magnetosphere at low altitudes above the polar cap. In this region, the ${\bf B}$-field is so large that it must be regarded as fixed and model-independent, and basic physical considerations can be used to determine the nature of the acceleration field ${\bf E}_{\parallel}$ and the composition of the plasma. (Throughout this paper, the parallel and perpendicular directions are with respect to the local magnetic flux density ${\bf B}$.) 
The assumption of plasma corotation in the closed sector of the magnetosphere has been an essential feature of polar-cap models, and follows from the paper of Goldreich & Julian (1969) which treated the aligned rotating neutron star with no induction component, ${\bf E}^{ind} = 0$, in the observer-frame electric field. In this case, the electric field generated by the rotation is exactly screened by the Goldreich-Julian charge density $\rho_{GJ}$. The presence of an induction field in an observable pulsar whose magnetic moment is at an angle $\psi$ with the rotation angular velocity ${\bf \Omega}$ has usually not been the subject of comment. But two recent papers by Melrose & Yuen (2012, 2014) have examined the consequences of the induction field given directly in the observer frame, in terms of the time-dependence of ${\bf B}$, by Faraday’s law. They express the field as ${\bf E}^{ind} = {\bf E}^{ind}_{\parallel} + {\bf E}^{ind}_{\perp}$ and consider the screening of these components separately. The parallel component can, in principle, be screened precisely by a charge density $\tilde{\rho}$ whilst the perpendicular component causes a plasma drift velocity in the corotating frame. They note that non-corotation must affect the distinction between open and closed sectors of the magnetosphere. The consequences of this for polar-cap physics and for the outer gap of the magnetosphere are the subject of the present brief paper. A further problem that has been the subject of little comment is the question of how currents flowing from the polar cap return so as to maintain the net charge of the star within a stable and limited band of values. Polar-cap models have ignored this problem, though its treatment is implicit in force-free models of the outer magnetosphere. 
In the computational aspects of this latter work, the neutron-star radius was artificially large, approximately $0.2R_{LC}$, where $R_{LC}$ is the light cylinder radius, and the boundary condition assumed on the neutron-star surface is an equipotential with space-charge-limited flow. The force-free solutions specify the charge and current densities at all points in the magnetosphere. We refer to Bai & Spitkovsky (2010) for some informative computed distributions of these quantities. The assumption of corotation in the closed magnetosphere creates problems in understanding, for example in a polar-cap ${\bf \Omega}\cdot{\bf B} < 0$ pulsar, how the outflow of positive charge in the open sector can be compensated by a flux of electrons. However, the plasma drift calculated by Melrose & Yuen does obviate this difficulty and enables us to see, at least qualitatively, how the necessary charge stability can be maintained. We refer to Melrose & Yuen (2012, 2014) for a full discussion of the induction field and related topics but give here in Section 2 a very brief summary of the relevant results. These are considered in relation to the closed magnetosphere in Section 3, the polar cap and coherent emission processes in Section 4, and the outer gap in Section 5. Unless otherwise stated, neutron stars with polar-cap ${\bf \Omega}\cdot {\bf B} < 0$ are assumed. The reason for this is that the polar-cap flux of positive particles has been shown to be a non-steady-state plasma, principally of ions and protons. The acceleration field ${\bf E}_{\parallel}$ is limited by the reverse flux of electrons produced by ion motion through blackbody radiation. The accelerated plasma is relativistic but not ultra-relativistic and is an ideal basis for the growth of Langmuir modes and turbulence. Pair creation is not significant except at very high magnetic flux densities, in which case it actually suppresses growth of the Langmuir mode. 
The properties of the mode do not otherwise depend on the polar magnetic field and so differ little over a wide range of objects, from normal pulsars to millisecond pulsars (MSP) and are consistent with the almost universal spectral properties of the radio emission. We refer to Jones (2015) and papers cited therein for further details of the case for ${\bf \Omega}\cdot{\bf B} <0$ pulsars. Possible observable consequences of plasma drift in the corotating frame for the ${\bf \Omega}\cdot{\bf B} > 0$ case are considered briefly in Section 6. The induction field and screening ================================= The induction field ${\bf E}^{ind}$ is given directly in the observer frame, in terms of the time-dependence of ${\bf B}$, by Faraday’s law. It is divergence-free, $\nabla\cdot{\bf E}^{ind} = 0 $, and so cannot be universally screened by the magnetosphere plasma. The parallel component can be screened completely, in principle, by a time-dependent charge density $\tilde{\rho} = - \nabla\cdot{\bf E}^{ind}_{\parallel}/(4\pi)$ provided the plasma contains an adequate density of charged particles. For the dipole field assumed, this can be expressed as, $$\begin{aligned} \tilde{\rho} = -\rho_{GJ0}\left(\cos\theta - \cos\theta_{m}\cos\psi\right)\frac{9\cos\theta_{m}(1 + \cos^{2}\theta_{m})}{2(1 + 3\cos^{2}\theta_{m})^{2}},\end{aligned}$$ (Melrose & Yuen 2014) in which $\theta$ and $\phi$ are observer-frame spherical polar coordinates defined with respect to the rotation axis and $\rho_{GJ0}(\eta) = \Omega B/2\pi c$ is the Goldreich-Julian charge density defined for $\theta = 0$ and $\psi = \pi$, where ${\bf \Omega}\cdot{\bf B} =\Omega B\cos\psi$. Here $\eta$ is the polar coordinate in units of the neutron star radius $R$. Equation (1) is stationary in the corotating frame. 
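The screening densities are simple enough to evaluate numerically. The following is a minimal sketch (ours, not from Melrose & Yuen) of equations (1) and (3), in units of $\rho_{GJ0}$, using the magnetic colatitude $\theta_{m}$ of equation (2) below; the function names and the fixed value of $\omega t$ are our choices.

```python
import numpy as np

def cos_theta_m(theta, phi, psi, omega_t=0.0, sign=-1.0):
    # eq. (2); 'sign' is that of the polar-cap Omega.B (here taken negative)
    return (np.cos(psi) * np.cos(theta)
            + sign * np.sin(psi) * np.sin(theta) * np.cos(phi - omega_t))

def rho_tilde(theta, phi, psi, **kw):
    # eq. (1): charge density screening E^ind_parallel, in units of rho_GJ0
    cm = cos_theta_m(theta, phi, psi, **kw)
    return (-(np.cos(theta) - cm * np.cos(psi))
            * 9.0 * cm * (1.0 + cm**2) / (2.0 * (1.0 + 3.0 * cm**2)**2))

def rho_GJ(theta, phi, psi, **kw):
    # eq. (3): the general Goldreich-Julian density, in units of rho_GJ0
    cm = cos_theta_m(theta, phi, psi, **kw)
    return -0.5 * (3.0 * np.cos(theta) * cm - np.cos(psi))

def rho_s(theta, phi, psi, **kw):
    # total screening density rho_s = rho_GJ + rho_tilde (the Fig. 1 quantity)
    return rho_GJ(theta, phi, psi, **kw) + rho_tilde(theta, phi, psi, **kw)
```

As a consistency check, for an aligned rotator ($\psi = 0$) the induction term vanishes and $\rho_{s}$ reduces to $\rho_{GJ}$.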
The magnetic colatitude $\theta_{m}$ is, $$\begin{aligned} \cos\theta_{m} = \cos\psi\cos\theta \pm \sin\psi\sin\theta\cos(\phi - \omega t),\end{aligned}$$ in which the appropriate sign is that of the polar-cap ${\bf \Omega}\cdot{\bf B}$. The general value of the Goldreich-Julian density is, $$\begin{aligned} \rho_{GJ} = -\frac{1}{2} \rho_{GJ0}\left(3\cos\theta\cos\theta_{m} - \cos\psi\right).\end{aligned}$$ The component ${\bf E}^{ind}_{\parallel}$ is then completely removed. (The consequent polarization current density given by the continuity equation exactly replaces the displacement current component $\partial{\bf E}^{ind}_{\parallel}/c\partial t$ in Maxwell’s equation.) We shall adopt the approximation that $\tilde{\rho}$ can be treated as a perturbation to the aligned corotating state (for small $\psi$, see Melrose & Yuen 2014) so that the total screening charge density is $\rho_{s} = \rho_{GJ} + \tilde{\rho}$. Melrose & Yuen show that the effect of the component ${\bf E}^{ind}_{\perp}$ on the plasma is quite different. In the plane locally perpendicular to ${\bf B}$, the plasma has a drift velocity $\tilde{\bf v} = c{\bf E}^{ind}_{\perp}\times{\bf B}/B^{2}$ relative to the corotating frame. The polarization current density in this plane is perpendicular to the drift velocity. Owing to the very large values of $B$ for a neutron-star plasma, the Alfvén velocity is $v_{A} \gg c$ so that this component of the polarization current density is negligible and can give no significant screening of ${\bf E}^{ind}_{\perp}$. The drift velocity given by Melrose & Yuen (2012) for dipole field geometry is precisely azimuthal on the magnetic axis so that in the vicinity of the polar cap we require only this component. 
Directly from these authors’ equation (10) it is, $$\begin{aligned} \tilde{v}_{\phi} = \pm \frac{2\Omega R\eta\sin\psi\cos\theta_{m}\cos(\phi - \Omega t)} {1 + 3\cos^{2}\theta_{m}},\end{aligned}$$ relative to the corotating frame (Melrose & Yuen 2012) in which it is time-independent. Again the correct sign is that of ${\bf \Omega}\cdot{\bf B}$. The degree to which polar-cap physics is affected by $\tilde{v}_{\phi}$ is described in Sections 3 and 4. The closed magnetosphere ======================== The closed magnetosphere has an atmosphere close to local thermodynamic equilibrium. Above that atmosphere, it is divided into sectors of positive and negative screening charge density separated by a null surface. The neutrality of its atmosphere requires that the electron and proton chemical potentials satisfy the relation $\mu_{e}(z) - \mu_{e}(0) = \mu_{p}(z) - \mu_{p}(0)$ at altitudes $z$ within it, and this is achieved through the presence of a small non-zero $E_{\parallel}$ which balances the surface gravitational acceleration $g_{s}$, $$\begin{aligned} {\rm e}E_{\parallel} = - (m_{p} - m_{e})\frac{g_{s}}{2}, \hspace{1cm} g_{s} < 0.\end{aligned}$$ The scale height is of the order of $10^{-1}$ cm and the field within it is $E_{\parallel} \approx 100$ V cm$^{-1}$. The boundary condition at $z = 0$ allows free transit of electrons, protons or positive ions so that electric fields which do not allow an exponentially decreasing number density of either electrons or protons at the top of the atmosphere are given by ${\rm e}E_{\parallel}(z) < m_{e}g_{s}$ or ${\rm e}E_{\parallel}(z) > - m_{p}g_{s}$, respectively. There are two reasons why such an electric field component should exist from time to time to disturb the local thermodynamic equilibrium. 
Firstly, the screening charge density $\rho_{s}$ specified by equations (1) and (3) is stationary in the corotating frame but is a function of $\phi$ and has to be supplied by plasma drifting locally perpendicular to ${\bf B}$ with velocity $\tilde{\bf v}$. This azimuthal variation of $\rho_{s}$ is shown in Figure 1: the extent to which it is significant is an increasing function of $\psi$. This takes no account of the additional charge density which is required to maintain the open-closed sector boundary condition. The correct charge density on a flux line at any instant has to be maintained by transit of electrons or protons from the surface atmosphere and a combination of drift and motion parallel with ${\bf B}$. Also, the flux of protons and ions (in an ${\bf \Omega}\cdot{\bf B} < 0$ pulsar) in the open sector leads to a rate of change of the order of $\partial E_{\parallel}/\partial t\sim -\pi u_{0}^{2}\rho_{GJ0}c/2R^{2}$, in which $u_{0}$ is the open polar-cap radius. This rate of change in $E_{\parallel}$ must also be compensated by electron transit at $z = 0$. It would be present even if there were no induction field. ![The screening charge density $\rho_{s}$ is stationary in the corotating frame but must be supplied by the drifting plasma. It is shown here in units of $\rho_{GJ0}$ at $t = 0$ as a function of the azimuthal angle $\phi$ for fixed values of the obliquity $\psi$ and polar angle $\theta$. The current flow from the neutron-star surface required to maintain the correct charge density is determined by the rate of variation of $\rho_{s}$ as a function of $\phi$. Each relation is labelled $(\psi, \theta)$ in degrees.](pbj115fig1.eps){width="84mm"} In either case a reverse transfer of protons must also be present, but here the very great mass difference becomes significant in that the electrons move with relativistic speeds at quite low levels of $E_{\parallel}$. 
Compensation then occurs on any flux line at a rate limited only by the velocity of light, as described by Melrose & Yuen (2014). In the absence of an induction field, there would be a problem in understanding how the outward flux of electrons could move perpendicular to ${\bf B}$ and so be enabled to cross the light cylinder. The presence of the drift velocity $\tilde{\bf v}$ solves this directly. In the following Section we attempt to describe the effect of drift into the open magnetosphere. The polar cap and open sector of the magnetosphere ================================================== We shall assume that the open magnetosphere divides into two sectors as first envisaged by Arons & Scharlemann (1979). Its precise definition requires a demarcation between those particles that cross the light cylinder and those that do not: it depends on knowledge of the ${\bf E}$ and ${\bf B}$ fields near the light cylinder. Numerical plasma physics, such as the work of Bai & Spitkovsky (2010) in the force-free approximation, indicates that the cross-sectional area of the open magnetosphere is roughly circular at the inner radius of $0.2\eta_{LC}$ adopted by them. We refer to flux lines on which the effect of the polar-cap acceleration field first described by Muslimov & Tsygan (1992) is not cancelled by flux-line curvature at higher altitudes as the active sector within which the energy of outward-accelerated particles is the source of the coherent radio emission. For favourable flux-line curvature, with $\sigma = \eta^{3}\rho_{s}(\eta)$ an increasing function of $\eta$, equation (6) shows that further acceleration occurs beyond that of the Lense-Thirring effect. The remaining open flux lines will be referred to as the return sector. This is obviously not a precise definition, but those lines on which a zero in ${\bf \Omega}\cdot{\bf B}$ exists inside the light cylinder are certainly in the return sector. This division is shown in Fig. 
2 which also shows the drift relative to the boundary of the open field lines as it would be if there were no inductive field. To be specific, we adopt a circular form for the open magnetosphere boundary ${\bf u}_{0}$ with radius $u_{0} = 1.6\times 10^{4}P^{-1/2}$ cm for a $1.4$ $M_{\odot}$ star (Harding & Muslimov 2001). Above the polar cap and for dipole geometry, the open magnetosphere consists of a long narrow tube of flux lines whose cross-sectional area given by the open sector radius $u_{0}(\eta) = \eta^{3/2}u_{0}(1)$ varies little over lengths of the order of the neutron-star radius. Thus if there were no division into active and return sectors, the potential at any point inside the tube away from the polar-cap surface could be expressed as, $$\begin{aligned} \Phi({\bf u},\eta) = \pi \left(u_{0}^{2} - u^{2}\right)\left(\rho - \rho_{s}\right).\end{aligned}$$ This elementary approximation is adequate, particularly as the reduction in cross-sectional area caused by the division into active and return sectors is uncertain and reduces the potential, $\Phi \rightarrow \zeta\Phi$, where $1/4 < \zeta <1/2$ for a semi-circular shape. Equation (6) assumes the condition $\Phi = 0$ on the open-closed boundary and that it is maintained in the presence of drift. The induction field component ${\bf E}^{ind}_{\perp}$ causing drift is present in the open sector, and its effect on particle motion follows at once by considering motion parallel with, and perpendicular to, the flux lines. 
On the basis of equation (4), a particle entering an active dipole-field sector at altitude $\eta_{1}$ drifts a distance, $$\begin{aligned} b_{\perp} & = & \frac{R}{v_{\parallel}} \int^{\eta_{2}}_{\eta_{1}} d\eta \tilde{v}_{\phi}\left(\frac{\eta_{2}}{\eta}\right)^{3/2} \nonumber \\ & = & H\eta_{2}^{3/2}\left|\eta_{2}^{1/2} - \eta_{1}^{1/2}\right|, \hspace{1cm} H = \frac{\Omega R^{2}\sin\psi}{v_{\parallel}}\end{aligned}$$ perpendicular to the flux line it intersected at entry whilst moving, inward or outward, with velocity $v_{\parallel}$ to $\eta_{2}$. The basic unit of length is $H = 300P^{-1}\sin\psi$ cm for $R = 1.2\times 10^{6}$ cm and $v_{\parallel} = c$, where $P$ is the rotation period in seconds. The velocity in equation (7) is the instantaneous $\tilde{v}_{\phi}$, not its average over a rotation period. With this evaluation, it is easy to see that drift of protons or ions into the active open sector from the closed has little effect on the polar-cap physics of normal pulsars at altitudes less than $\eta_{1} \approx 10$ which contain the source of coherent radio emission (Hassall et al 2012). In this case, the drift $b_{\perp}$ at $\eta_{2} \approx 10$ is small compared with the width of the active flux tube and the generation of radio frequencies is not impaired. This is illustrated by the example shown in Fig. 2 for $\sin\psi = 1$. However, for $\eta_{2} \gg 10$, it is possible that a fraction of protons accelerated from the neutron-star surface will enter the closed sector, but the immediately observable consequences of this, if any, are unclear. However, in the case of millisecond pulsars (MSP), the drift is much more significant, although the light-cylinder radius is small, $\eta_{LC} \sim 20$. Even so, ion and proton acceleration remains consistent with radio emission spectra broadly of the universal form, as is observed (Jenet et al 1998, Kramer et al 1999, Espinoza et al 2013). 
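The numbers quoted above are easy to reproduce; a short check (ours) of $H$ and of equation (7), using the values $R = 1.2\times 10^{6}$ cm and $v_{\parallel} = c$ assumed in the text, and the open-sector radius $u_{0}(\eta) = \eta^{3/2}u_{0}(1)$ with $u_{0}(1) = 1.6\times 10^{4}P^{-1/2}$ cm:

```python
import math

R, c = 1.2e6, 2.998e10                        # neutron-star radius (cm), c (cm/s)

def H(P, sin_psi=1.0):
    # drift unit H = Omega R^2 sin(psi) / v_parallel ~ 300 P^-1 sin(psi) cm
    return (2.0 * math.pi / P) * R**2 * sin_psi / c

def b_perp(eta1, eta2, P, sin_psi=1.0):
    # eq. (7): transverse drift between altitudes eta1 and eta2
    return H(P, sin_psi) * eta2**1.5 * abs(math.sqrt(eta2) - math.sqrt(eta1))

def u0(eta, P):
    # open flux-tube radius, Harding & Muslimov (2001) normalization
    return eta**1.5 * 1.6e4 / math.sqrt(P)

# normal pulsar, P = 1 s, sin(psi) = 1: drift up to eta = 10 versus tube width
ratio = b_perp(1.0, 10.0, 1.0) / u0(10.0, 1.0)
```

For $P = 1$ s this gives $H \approx 300$ cm and $b_{\perp}/u_{0}(10) \approx 0.04$, consistent with the statement that the drift is small compared with the width of the active flux tube.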
The direction of drift is such that $\tilde{v}_{\phi} > 0$, that is, super-rotation. It is possible to confirm, directly from equation (10) of Melrose & Yuen (2012), that its direction is identical for both signs of ${\bf \Omega}\cdot{\bf B}$. Drift into the return open sector is a different matter. In the absence of an induction field, initial acceleration of a test particle arising from the Lense-Thirring effect would be nullified in equation (6) by the unfavourable flux-line curvature, for which $\sigma(\eta)$ is a decreasing function of $\eta$. Given free transit of protons (or ions) at the neutron-star surface, the consequence would be stasis unless secondary electron-positron plasma exists whose overall charge density could adjust so as to accommodate to a null point and an adverse flux-line curvature. But in the absence of pairs, input of protons from the positive sector of the closed magnetosphere merely contributes to this state and drift continues azimuthally through the return open sector into the closed. Input of electrons from the closed sector beyond the null point on any flux line is followed by outward acceleration to the light cylinder. Melrose & Yuen caution that their published $\tilde{\rho}$ and $\tilde{\bf v}$ may be unreliable near the light cylinder, but we shall use them to estimate that the total electron current passing from the closed sector is of the order of, $$\begin{aligned} \int^{\eta_{LC}}_{\eta_{null}} d\eta Ru_{0}(\eta)\rho_{GJ0}(\eta)\tilde{v}_{\phi} \nonumber \\ = Hcu_{0}(1)\rho_{GJ0}(1)\left(\eta^{1/2}_{LC} - \eta^{1/2}_{null} \right),\end{aligned}$$ estimated using the instantaneous $\tilde{v}$. We can see that $H \eta_{LC}^{1/2} = R\sin\psi \eta^{-1/2}_{LC}$ is of the same order as $u_{0}(1)$ so that, qualitatively, the electron current given by equation (8) is adequate to compensate for the outward flux of positive charge. 
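The $\eta$-dependence quoted in equation (8) follows from simple scaling: with $u_{0}(\eta) \propto \eta^{3/2}$, $\rho_{GJ0}(\eta) \propto \eta^{-3}$ for a dipole field, and $\tilde{v}_{\phi} \propto \eta$ from equation (4), the integrand scales as $\eta^{-1/2}$. This can be verified symbolically (our check script):

```python
import sympy as sp

eta, eta_null, eta_LC = sp.symbols('eta eta_null eta_LC', positive=True)

# scalings of u0(eta), rho_GJ0(eta) and v_phi(eta) in the eq. (8) integrand
integrand = eta**sp.Rational(3, 2) * eta**-3 * eta    # = eta**(-1/2)
result = sp.integrate(integrand, (eta, eta_null, eta_LC))

# difference from the quoted sqrt(eta_LC) - sqrt(eta_null) dependence
check = sp.simplify(result - 2 * (sp.sqrt(eta_LC) - sp.sqrt(eta_null)))
```

The overall factor of 2 is absorbed into the order-of-magnitude prefactor $Hcu_{0}(1)\rho_{GJ0}(1)$.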
The outer gap ============= Although the non-corotation may not be exactly as described by Melrose & Yuen, it does have implications for the outer-gap model introduced by Cheng, Ho & Ruderman (1986), who first noticed the possible significance of the null surface in what we have termed the return sector of the magnetospheric charge density distribution. In the ${\bf \Omega}\cdot{\bf B} < 0$ case, electrons injected beyond it are accelerated outward to high energies at the light cylinder and could, in principle, be the source of such incoherent radiation as is observed. But finding the source of the electrons, or of electron-positron pairs, has always been a problem. Single-photon pair production by curvature radiation is not usually possible, and so a further source, photon-photon collisions between curvature photons and polar-cap blackbody radiation, was later introduced by Cheng, Ruderman & Zhang (2000). There has been much work on outer-gap models of $\gamma$-emission in the MSP and other pulsars, but particularly in the MSP, the above problem remains. The non-corotation flux of electrons we have referred to in the final paragraph of the previous Section is, in principle, a solution. Electron drift into this sector occurs naturally in the ${\bf \Omega}\cdot{\bf B} < 0$ case and the integrated flux estimated by equation (8) is of the same order of magnitude as the Goldreich-Julian active-sector flux from the polar cap. It is known that high-energy electrons (or positrons) must be present in the vicinity of the light cylinder because MSPs feature in large numbers in the Fermi LAT catalogue of $\gamma$-emitting pulsars (Abdo et al 2013). But it is extremely unlikely that the formation of a dense plasma of outward-moving secondary pairs by curvature radiation is possible in the outer gap in any model. It must be presumed that the outward-moving particles are exclusively electrons and that Goldreich-Julian fluxes are sufficient to produce the observed $\gamma$-ray spectrum. 
A further feature of the catalogue is the absence of any radio-quiet MSP (defined as a flux density $S_{1400} < 30$ $\mu$Jy), which is considered notable even though their detection procedures are necessarily different. If the outer gap is the source of $\gamma$-emission in the MSP, the neutron star must have polar-cap ${\bf \Omega}\cdot{\bf B} < 0$. This is consistent with the source of the universal radio emission being an ion-proton plasma at an ${\bf \Omega}\cdot{\bf B} < 0$ polar cap (see Jones 2015). We do not expect radio-frequency emission from an ${\bf \Omega}\cdot{\bf B} > 0$ polar cap, and in such a neutron star, particles drifting into the outer-gap region would necessarily be positively charged, that is, ions or protons, whose acceleration would produce negligible incoherent emission. Conclusions =========== Although the arguments of Melrose & Yuen (2014) are persuasive, these authors bear in mind the relative ubiquity of corotation and suggest that it might be enforced by some mechanism as yet unknown. But we have discounted this possibility and in this paper have taken their results for screening charge density and drift velocity as correct, although this cannot be so near the neutron-star surface (because, as they point out, they assume a point-dipole field) or near the light cylinder (where, for example, the denominator term in the Goldreich-Julian charge density has been neglected). The case in which ${\bf \Omega}\cdot{\bf B} < 0$ has been chosen for discussion because we consider that such neutron stars are the sources of the almost universal coherent emission that is observed, whether in normal pulsars or in MSP (Jones 2013, 2015). The fact that screening of the induction field component ${\bf E}^{ind}_{\parallel}$ may be incomplete at radii near the light cylinder is likely to have interesting consequences for particle acceleration in that region, as noted by Melrose & Yuen (2012). 
In relation to the latter comment, we note that electron drift into the region of the return sector beyond the null surface at once solves the problem of how the outer gap is populated. Observable incoherent emission, particularly $\gamma$-emission is then possible in neutron stars which are not capable of supporting pair creation. But the polar cap in a magnetosphere that is not in strict corotation has, in principle, properties which are unchanged insofar as radio emission is concerned, certainly if the source is at altitudes $\eta \sim 10$ as favoured by Hassall et al (2012). The presence of drift relative to the corotating frame also solves, in principle, the return current problem. It follows that there appear to be no obvious observational consequences of non-corotation for radio emission. The closed sector is in a dynamic state. Time-dependent currents needed to maintain the screening $\rho_{s}$ and the open-closed boundary condition are always present. Particle flow inward to the atmosphere is likely to be dissipative but the local heating arising from a Goldreich-Julian flux distributed over much of the neutron-star surface is unlikely to be observable. The possibility that counter-flowing beams of oppositely charged particles might co-exist in a more detailed description of non-corotation is of interest in the case of ${\bf \Omega}\cdot{\bf B} > 0$ neutron stars. For an ion-proton beam with counter-flowing electrons, Langmuir modes exist whose wave-vectors are parallel with the ion-proton velocity and so, for this spin sense, probably directed outward towards the observer. They require both beams to have quite small Lorentz factors, but the kinematics of the process allows the generation of substantial radio-frequency power. We refer to Jones (2014) for the methods of analysis which lead to this conclusion. The observation of unusual or low flux-density sources in the Square Kilometre Array (SKA) would be interesting in this context. 
Acknowledgments {#acknowledgments .unnumbered} =============== I thank the anonymous referee for some very helpful recommendations which have improved the presentation of this work. [99]{} Abdo A. A., et al., 2013, ApJS, 208, 17 Arons J., Scharlemann E. T., 1979, ApJ, 231, 854 Bai X.-N., Spitkovsky A., 2010, ApJ, 715, 1282 Cheng K. S., Ho C., Ruderman M., 1986, ApJ, 300, 500 Cheng K. S., Ruderman M., Zhang L., 2000, ApJ, 537, 964 Espinoza C. M., et al., 2013, MNRAS, 430, 571 Goldreich P., Julian W. H., 1969, ApJ, 157, 869 Harding A. K., Muslimov A. G., 2001, ApJ, 556, 987 Hassall T. E., et al., 2012, A&A, 543, A66 Jenet F. A., Anderson S. B., Kaspi V. M., Prince T. A., Unwin S. C., 1998, ApJ, 498, 365 Jones P. B., 2013, MNRAS, 435, L11 Jones P. B., 2014, MNRAS, 445, 2297 Jones P. B., 2015, MNRAS, 450, 1420 Kramer M., Lange C., Lorimer D. R., Backer D. C., Xilouris K. M., Jessner A., Wielebinski R., 1999, ApJ, 526, 957 Melrose D. B., Yuen R., 2012, ApJ, 745, 169 Melrose D. B., Yuen R., 2014, MNRAS, 437, 262 Muslimov A. G., Tsygan A. I., 1992, MNRAS, 255, 61 Ruderman M. A., Sutherland P. G., 1975, ApJ, 196, 51 \[lastpage\] [^1]: E-mail: p.jones1@physics.ox.ac.uk
--- abstract: 'We interpret heterotic M-theory in terms of $h$-cobordism: the statement that the eleven-manifold is a product of the ten-manifold and an interval is translated into the statement that the former is a cobordism of the latter which is a homotopy equivalence. In the non-simply connected case, which is important for model building, the interpretation is then in terms of $s$-cobordism, so that the cobordism is a simple-homotopy equivalence. This gives constraints on the possible cobordisms depending on the fundamental groups and hence provides a characterization of possible compactification manifolds using the Whitehead group (a quotient of the algebraic K-theory of the integral group ring of the fundamental group) and a distinguished element, the Whitehead torsion. We also consider the effect on the dynamics via diffeomorphisms and general dimensional reduction, and comment on the effect on F-theory compactifications.' --- [ **Constraints on heterotic M-theory from s-cobordism** ]{} Hisham Sati [^1] Department of Mathematics\ University of Maryland\ College Park, MD 20742 Introduction ============ A major goal of string theory is to provide a unification of fundamental interactions. This includes constructing the standard model via string compactifications [@GSW2], most notably via heterotic M-theory [@HW1] [@HW2]. Eleven-dimensional spacetime is taken to be an interval $I$ times a ten-manifold $M^{10}$, with the two boundaries each supporting an $E_8$ gauge theory. One of the boundaries is called the hidden sector and the other is the visible sector, in which the structure group is broken down to a realistic symmetry group. The ten-manifold $M^{10}$ is typically taken to be Minkowski space ${{\mathbb R}}^{1,3}$ times a Calabi-Yau threefold $X^6$. In the visible sector one usually works with SU(5) or SO(10) $\subset E_8$ and breaks this group further (at least in principle) to the standard model group (ideally) SU(3)$\times$SU(2)$\times$U(1). 
Physical and mathematical constraints on the (Calabi-Yau) manifold $X^6$ and bundles on $X^6$ have recently been reviewed in [@He]. Wilson lines are needed to break the gauge group from the grand unified (GUT) group to the standard model group [@BOS] [@W85]. In order to introduce Wilson lines, the manifold $M^{10}$ must have a nontrivial fundamental group. Starting with a simply connected Calabi-Yau manifold, one gets a smooth non-simply connected Calabi-Yau manifold by dividing by a freely acting discrete symmetry $X^6 \mapsto X^6/\Gamma$, where $\Gamma$ is a discrete group of finite order $|\Gamma|$, and the resulting fundamental group is $\pi_1(X^6/\Gamma) =\Gamma$. Important choices for the finite group include $\Gamma={{\mathbb Z}}_2$, which breaks SU(5) down to SU(3)$\times$SU(2)$\times$U(1), and $\Gamma= {{\mathbb Z}}_3\times {{\mathbb Z}}_3$ or ${{\mathbb Z}}_6$, which break SO(10) down to SU(3)$\times$SU(2)$\times {\rm U(1)}^2$ [@GSW2]. A major area of research involves choosing $\Gamma$ so that one gets the standard model, not just as far as the symmetry groups are concerned but also accounting for example for correct generations and spectra of particles. 
A sampler of fundamental groups of Calabi-Yau threefolds $X^6$ applied in the heterotic setting includes: ${{\mathbb Z}}_2$ [@BD], ${{\mathbb Z}}_2 \times {{\mathbb Z}}_2$ [@DOPR], ${{\mathbb Z}}_3 \times {{\mathbb Z}}_3$ [@BOPR] [@BHOP], ${{\mathbb Z}}_8 \times {{\mathbb Z}}_8$ constructed in [@GP] on which rank 5 bundles are constructed in [@BBD], abelian surface fibrations over ${{\mathbb C}}P^1$ with (abelianization of) fundamental group ${{\mathbb Z}}_n \times {{\mathbb Z}}_n$ are considered in [@Sch] [@DGS], complete intersection Calabi-Yau manifolds with fundamental groups which include [@CD] ${{\mathbb Z}}_3$, ${{\mathbb Z}}_3 \times {{\mathbb Z}}_2$, ${{\mathbb Z}}_3 \times {{\mathbb Z}}_3$, ${{\mathbb Z}}_5$, ${{\mathbb Z}}_5 \times {{\mathbb Z}}_2$, ${{\mathbb Z}}_5 \times {{\mathbb Z}}_5$, and the quaternion group $Q_8$, the latter being closely related to the construction in [@BH] of Calabi-Yau threefolds with nonabelian fundamental groups, roughly speaking a semidirect product of ${{\mathbb Z}}_8$ with a quaternion group. Torsion curves, important for instanton corrections to the heterotic minimally supersymmetric standard model (MSSM), are studied in [@BKOS] for the quintic as well as for threefolds with fundamental groups ${{\mathbb Z}}_3 \times {{\mathbb Z}}_3$. Almost all known Calabi-Yau threefolds are simply connected. For example, only 16 out of about 500 million hypersurfaces in complex 4-dimensional toric varieties have nontrivial fundamental groups, and the only groups which occur are ${{\mathbb Z}}_2$, ${{\mathbb Z}}_3$ or ${{\mathbb Z}}_5$ [@BK]. All elliptically fibered Calabi-Yau threefolds are simply connected, with the exception of fibrations over an Enriques base. In [@DOPW] elliptic fibrations without section, i.e. torus bundles, with nontrivial fundamental group are constructed. Another class of examples with no section is the Schoen family [@Sc] which are fiber products of two rational elliptic surfaces. 
Free finite group actions on these are classified (under certain conditions) [@BD2] giving fundamental groups $\pi_1(X) \in \{ {{\mathbb Z}}_2, {{\mathbb Z}}_3, {{\mathbb Z}}_4, {{\mathbb Z}}_2\times{{\mathbb Z}}_2, {{\mathbb Z}}_5, {{\mathbb Z}}_6, {{\mathbb Z}}_2\times{{\mathbb Z}}_4, {{\mathbb Z}}_3\times{{\mathbb Z}}_3\}$. [^2] In another class of threefolds, the complete intersections in products of projective spaces, an exhaustive search [@Br] of the 7890 such threefolds leads to many interesting fundamental groups including ${{\mathbb Z}}_i$ (for $i=2,3,4,5,6,8,10,12$), ${{\mathbb Z}}_2 \times {{\mathbb Z}}_j$ (for $j=2,4,8, 10$), ${{\mathbb Z}}_4 \times {{\mathbb Z}}_k$ (for $k=4,8$), ${{\mathbb Z}}_5 \times {{\mathbb Z}}_5$, ${{\mathbb Z}}_8 \times {{\mathbb Z}}_8$, as well as semidirect products ${{\mathbb Z}}_3 \rtimes {{\mathbb Z}}_4$, ${{\mathbb Z}}_4 \rtimes {{\mathbb Z}}_4$, ${{\mathbb Z}}_5 \times {{\mathbb Z}}_{10}$ and groups involving the quaternion group $Q_8$, namely ${{\mathbb Z}}_2 \times Q_8$, ${{\mathbb Z}}_2 \times {{\mathbb Z}}_2 \times Q_8$, ${{\mathbb Z}}_4 \rtimes Q_8$, ${{\mathbb Z}}_8 \rtimes Q_8$ (for a complete list see [@Br]). In this paper we seek constraints on the possible fundamental groups coming from global considerations, namely from looking at the relation between the heterotic boundary and bounding M-theory. We first interpret this relation as a cobordism which connects one boundary component to the other through the eleven-dimensional bulk. We take one of the two boundary components and the bulk to be of the same homotopy type. It is natural to ask when such cobordisms are trivial, that is, when they are of product (or “cylinder") form, as is usually the case in heterotic M-theory. When the fundamental groups of both the eleven-manifold $Y^{11}$ and the ten-manifold $M^{10}$ are trivial then we consider the cobordism as an $h$-cobordism ($h$ is for homotopy). 
When the fundamental groups are equal but nontrivial then we view heterotic M-theory as an $s$-cobordism ($s$ is for simple homotopy). Since the dimension of the nontrivial part of $M^{10}$, namely the Calabi-Yau threefold, is six, the h-cobordism [@Mil0] and the s-cobordism [@Ker] [@Maz] [@Sta] theorems can be applied. In both cases we are assuming that inclusions of the boundaries in $Y^{11}$ are homotopy equivalences. The case when $\pi_1(Y^{11})$ is nontrivial is discussed extensively in [@DMW-Spinc] in relation to the partition functions and to type IIA string theory. The obstruction to finding a cobordism that is of the cylinder type is the Whitehead torsion of the inclusion $\tau (Y^{11}, M^{10})$, which is an element of the Whitehead group of the fundamental group ${\rm Wh}(\pi_1(M^{10}))$. The Whitehead group is extensively studied and is well-known for finite groups (see [@Ol]), which is the case we mainly study as such groups seem to be the most interesting for model building. Given our identification of heterotic M-theory as an $s$-cobordism, we are able to distinguish fundamental groups that allow trivial cobordisms from those which do not. We view this as providing global consistency constraints on heterotic compactifications with a view toward model building. We summarize the main point of this article with the following. Consider heterotic M-theory with the $E_8$ heterotic string theory on each of the two boundary components. Then $(i)$. M-theory is an $s$-cobordism for one of the two connected components of the boundary. $(ii)$. Consistency requires the Whitehead torsion, in the Whitehead group of the integral group ring of the fundamental group of the boundary component, to vanish. Since the use of $h$- and $s$-cobordism and the Whitehead torsion is novel in the context of heterotic M-theory and is perhaps not widely known in theoretical physics in general, we choose to take an expository route to arrive at our conclusions. 
We provide the description of heterotic M-theory in terms of $h$- and $s$-cobordism in section \[sec cob\]. Then in section \[sec ww\] we look at constraints on the fundamental group, coming from the Whitehead group in section \[sec Wh\] and from the Whitehead torsion in section \[sec tau\]. We provide many examples along the way and then in section \[sec ex\] we consider representative examples explicitly appearing in model building. We then consider the dynamical aspects in section \[sec dyn\], emphasizing the main points of this article. We first consider automorphisms, including diffeomorphisms and issues of orientation, in section \[sec aut\], and then we consider dynamical aspects of compactifications in section \[sec com\]. Heterotic M-theory as an $h$-cobordism and $s$-cobordism {#sec cob} ======================================================== In this section we set up heterotic M-theory as a cobordism, first as an $h$-cobordism and then as an $s$-cobordism. Viewed from M-theory the data involves an eleven-dimensional manifold $Y^{11}$ which is a product $[0,1]\times M^{10}$ together with an $E_8$ bundle on each of $M^{10} \times \{0\}$ and $M^{10}\times \{1\}$. We will consider this from a ten-dimensional point of view, where we will have a cobordism taking one boundary component to the other. #### $H$-cobordism. A compact connected eleven-manifold $Y^{11}$ whose boundary $\partial Y^{11}$ is the disjoint union of two closed manifolds $M^{10}$ and $M'^{10}$, $\partial Y^{11}=M^{10} \cup M'^{10}$, is called an $h$-cobordism, provided the inclusions of $M^{10}$ into $Y^{11}$ and of $M'^{10}$ into $Y^{11}$ are both homotopy equivalences. The pair $(Y^{11}, M^{10})$ is called an $h$-cobordism with base $M^{10}$ and top $M'^{10}$. A smooth $h$-cobordism is one where $Y^{11}$ is a smooth manifold. A trivial or product $h$-cobordism is of the form $M^{10} \times [0,1]$. 
If $Y^{11}$ is simply-connected, then the $h$-cobordism theorem can be applied (see [@Mil0]) to give that $Y^{11}$ is diffeomorphic to the product $M^{10}\times [0,1]$. This is the configuration that is usually considered in heterotic M-theory [@HW1] [@HW2]. We can consider a more detailed description, which will be useful in section \[sec tau\] and section \[sec dyn\]. An eleven-dimensional cobordism $(Y^{11}; M_0^{10}, f_0, M_1^{10}, f_1)$ consists of a compact oriented eleven-manifold $Y^{11}$, two closed ten-manifolds $M_0^{10}$ and $M_1^{10}$, a disjoint decomposition $\partial Y^{11}=\partial_0 Y^{11} \coprod \partial_1Y^{11}$ of the boundary $\partial Y^{11}$ of $Y^{11}$ and orientation-preserving diffeomorphisms $f_0: M_0^{10} \to \partial_0Y^{11}$ and $f_1: (M_1^{10})^- \to \partial_1Y^{11}$. By $X^{-}$ we mean the manifold $X$ taken with the opposite orientation. On the boundary $\partial Y^{11}$ we use the orientation with respect to the decomposition $TY^{11}=T\partial Y^{11} \oplus {{\mathbb R}}$ coming from an inward normal field to the boundary. If $\partial_0Y^{11}=M_0^{10}$, $\partial_1Y^{11}=(M_1^{10})^-$, and $f_0$ and $f_1$ are the identity maps, then the $h$-cobordism can be referred to as $(Y^{11};\partial_0Y^{11}, \partial_1Y^{11})$. An $h$-cobordism over $M_0^{10}$ is trivial if it is diffeomorphic relative $M_0^{10}$ to the trivial $h$-cobordism $(M_0^{10}\times [0,1]; M_0^{10}\times \{0\}, (M_0^{10}\times \{1\})^{-})$. #### The fundamental group. The $h$-cobordism theorem can be applied only when the fundamental group is trivial. Next we consider the more interesting case when the fundamental group is not necessarily trivial. We will assume that $\pi_1(Y^{11})\cong \pi_1(M^{10})$. The fundamental group functor takes products to products, that is, the fundamental group is multiplicative. 
For $M^{10}={{\mathbb R}}^{1,3}\times X$, we have $ \pi_1(M^{10})\cong \pi_1({{\mathbb R}}^{1,3}) \times \pi_1(X) \cong \pi_1(X)$, so that the fundamental group of $M^{10}$ is determined by that of the Calabi-Yau threefold $X$. The generalization from Minkowski to other four-dimensional spacetimes gives an obvious modification, which depends on whether or not the latter is simply connected. Next we consider the appropriate description of heterotic M-theory when $\pi_1(M^{10})\neq 0$. By our assumption, this is equivalent to taking $\pi_1(Y^{11}) \neq 0$, considered in [@DMW-Spinc]. #### $S$-cobordism. Let $M^{10}$ be a connected compact 10-manifold with fundamental group ${\Gamma}$, and consider the family ${\ensuremath{\mathcal F}}$ of all $h$-cobordisms built on $M^{10}$. These are connected compact 11-manifolds $Y^{11}$ with exactly two boundary components, one of which is $M^{10}$ and the other of which is some other manifold $M'^{10}$ such that $Y^{11}$ is homotopy equivalent to both $M^{10}$ and $M'^{10}$. There is a map $\tau : \mathcal{F} \to {\rm Wh}({\Gamma})$ called the [*Whitehead torsion*]{} which induces a natural one-to-one correspondence from $\mathcal{F}/\sim$ to ${\rm Wh}({\Gamma})$, where $\sim$ is the equivalence relation induced by diffeomorphisms $Y^{11} \to Y'^{11}$ which are the identity on $M^{10}$. If $Y^{11}$ is the “trivial" $h$-cobordism $Y^{11}=M^{10} \times [0,1]$, then $\tau (Y^{11})=0$. This is an application of the Barden-Mazur-Stallings theorem [@Ker] [@Maz] [@Sta] (see [@Ro] for a review). If the fundamental group ${\Gamma}$ is such that its Whitehead group ${\rm Wh}({\Gamma})$ is trivial, then certainly the Whitehead torsion will vanish and we are back to the case of an $h$-cobordism. Consequently, $Y^{11}$ is diffeomorphic (relative $M^{10}$) to a product $M^{10} \times [0,1]$. In particular, the other boundary component $M'^{10}$ is diffeomorphic to $M^{10}$. 
There is a bijection, given by the Whitehead torsion $\tau (Y^{11}, M^{10})$, between the set of diffeomorphism classes of $h$-cobordisms $(Y^{11},M^{10})$ with a given base $M^{10}$ and the set ${\rm Wh}(\pi_1(M^{10}))$. The cylinder corresponds to 0 under this bijection. We will consider this in much more detail in the following sections. Note that there are several versions of the $s$-cobordism (and $h$-cobordism) theorem depending on the category of spaces within which we are working; for example we could work with homeomorphisms rather than diffeomorphisms (but here we are assuming all spaces to be smooth). However, if we start with a homotopy equivalence then we might not be able to extend it to a diffeomorphism. Consider a large class of manifolds called [*aspherical*]{}, which are ones for which all homotopy groups vanish except the first one, i.e. the fundamental group. Let $Y$ and $M$ be aspherical spaces and let $\alpha: \pi_1( M) \to \pi_1(Y)$ be an isomorphism. Then, by the Theorem of Hurewicz, $\alpha$ is induced by a homotopy equivalence. It is an open conjecture of Borel from 1955 that this can be extended to a homeomorphism. The strengthening to smooth manifolds fails [@DH]. Note that we can work in categories of spaces other than that of smooth manifolds, since the $h$- and $s$-cobordism arguments work for piecewise linear (PL) and topological spaces. This implies, for example, that orbifolds are also included in our discussion, for which we would choose the category of topological spaces. The Whitehead group and Whitehead torsion {#sec ww} ========================================= We now consider the Whitehead group and Whitehead torsion in our setting of heterotic M-theory via algebraic K-theory of the group ring of the fundamental group and give the main properties which are useful for us. 
The Whitehead group {#sec Wh} ------------------- Algebraic K-theory roughly characterizes how, in passing from a field to an arbitrary ring, notions of linear algebra related to the general linear group and vector spaces might extend. One measure of failure of such an extension is $K_1(R)$, the algebraic K-theory of an associative ring $R$. Let $\widetilde{K}_1(R)$ be the cokernel of the map $K_1({{\mathbb Z}}) \to K_1(R)$ induced by the canonical ring homomorphism ${{\mathbb Z}}\to R$. Since ${{\mathbb Z}}$ is a ring with a Euclidean algorithm, the homomorphism det: $K_1({{\mathbb Z}}) \to \{ \pm\}$, given by $[A] \mapsto {\rm det}(A)$, is a bijection. Hence $\widetilde{K}_1(R)$ is the quotient of $K_1(R)$ by a cyclic group of order two generated by the class of the $1\times1$-matrix $(-1)$. We are interested in the case when $R$ is a group algebra ${{\mathbb Z}}[\Gamma]$ of the fundamental group $\Gamma=\pi_1(M^{10})$, that is, in integer linear combinations of elements of ${\Gamma}$. Define the [*Whitehead group*]{} ${\rm Wh}(\Gamma)$ of a group $\Gamma$ to be the cokernel of the map $\Gamma \times \{\pm \} \to K_1({{\mathbb Z}}[\Gamma])$ which sends $({{\gamma}}, \pm 1)$ to the class of the invertible $1\times1$-matrix $(\pm {{\gamma}})$. In other words, ${\rm Wh}({\Gamma})$ is the quotient of $K_1({{\mathbb Z}}[{\Gamma}])$ by the image of $\{ \pm {{\gamma}}: {{\gamma}}\in {\Gamma}\}$, that is $ {\rm Wh}({\Gamma})= K_1({{\mathbb Z}}[{\Gamma}]) / \{ \pm {{\gamma}}: {{\gamma}}\in {\Gamma}\}\;. $ The zero element $0 \in {\rm Wh}({\Gamma})$ is represented by the identity matrix $I_n$ for any positive integer $n$. Note that one can define the Whitehead group of the fundamental group by choosing a base point, as is usual in the fundamental group. However, the end result will be independent of the choice of the base point. Therefore, one should think of $\pi_1(M)$ in ${\rm Wh}(\pi_1(M))$ as the [*fundamental groupoid*]{} of $M$. 
Note also that the Whitehead group can be viewed either additively or multiplicatively. In the first point of view, this corresponds to adding two cobordisms by connecting one ‘cylinder’ to another over a ten-dimensional section, while an instance of the second point of view is a ‘flip’. #### Example 1. Trivial case. Consider the case when the fundamental group is trivial. Then the group algebra ${{\mathbb Z}}[1]={{\mathbb Z}}$ is a ring with a Euclidean algorithm, so that the determinant induces an isomorphism $K_1({{\mathbb Z}}) \buildrel{\cong}\over{\to} \{\pm\}$ and the Whitehead group ${\rm Wh}(\{1\})$ of the trivial group vanishes. Hence any $h$-cobordism over a simply-connected closed $M^{10}$ is trivial. Thus, as expected in this case, $s$-cobordism reduces to $h$-cobordism. #### Example 2. Finite cyclic groups. ${\rm Wh}({\Gamma})$ is torsion-free for a finite cyclic group ${\Gamma}$. For example, ${\rm Wh}({{\mathbb Z}}_p)$, $p$ an odd prime, is the free abelian group of rank $(p-3)/2$ and ${\rm Wh}({{\mathbb Z}}_2)=0$. We will consider many more examples in section \[sec ex\]. #### Properties of the Whitehead group. We are interested in the case when the fundamental group $\Gamma$ is a finite group. For such a group the following useful properties hold [@Mil] [@ADS] [@Ol] 1. [*Functoriality:*]{} ${\rm Wh}(\Gamma)$ is a covariant functor of $\Gamma$, that is, any homomorphism $f: \Gamma_1 \to \Gamma_2$ induces a homomorphism $f_*: {\rm Wh}(\Gamma_1) \to {\rm Wh}(\Gamma_2)$. 2. [*Trivial group:*]{} Let ${\Gamma}=\pi_1(M)$ be trivial. Then from $K_1({{\mathbb Z}})={{\mathbb Z}}_2$ one gets ${\rm Wh}(\pi_1(M)) = {\rm Wh}(1)=1$. This is example 1 above. 3. [*Low order:*]{} Whitehead showed that ${\rm Wh}(\Gamma)=1$ if $|\Gamma| \leq 4$. This implies, for instance, that ${{\mathbb Z}}_2$, ${{\mathbb Z}}_3$, ${{\mathbb Z}}_4$ and ${{\mathbb Z}}_2 \times {{\mathbb Z}}_2$ have trivial Whitehead group and hence lead to (desirable) trivial $h$-cobordisms. 4. 
[*Rank:*]{} By a result of Bass, ${\rm Wh}(\Gamma)$ is a finitely generated abelian group of rank $r(\Gamma)-q(\Gamma)$, where $r(\Gamma)$ is the number of irreducible real representations of $\Gamma$ and $q(\Gamma)$ is the number of irreducible rational representations of $\Gamma$. Explicitly, $q(\Gamma)$ is the number of conjugacy classes of cyclic subgroups of $\Gamma$ and $r(\Gamma)$ is the number of conjugacy classes of unordered pairs $\{ {{\gamma}}, {{\gamma}}^{-1}\}$. 5. [*Free product:*]{} The Whitehead group of a free product decomposes as a direct sum, ${\rm Wh}(G \ast H)={\rm Wh}(G) \oplus {\rm Wh}(H)$. Unfortunately, there is no corresponding formula for Cartesian products. For example, ${\rm Wh}({{\mathbb Z}}_3)=0$ and ${\rm Wh}({{\mathbb Z}}_4)=0$ but ${\rm Wh}({{\mathbb Z}}_3 \times {{\mathbb Z}}_4) \cong {{\mathbb Z}}$. More on this will be discussed in section \[sec ex\]. #### The torsion subgroup. We have seen above that the Whitehead group of a cyclic group ${{\mathbb Z}}_p$ of prime order $p$ is torsion-free. While these groups form an important class of fundamental groups we are considering, we should consider other cases as well. In particular, there could be groups ${\Gamma}$ for which ${\rm Wh}({\Gamma})$ is torsion. The torsion in the algebraic K-group is ${\rm Tor}(K_1({{\mathbb Z}}[{\Gamma}]))=(\pm) \times {\Gamma}^{\rm ab} \times SK_1({{\mathbb Z}}[{\Gamma}])$, where ${\Gamma}^{\rm ab}$ is the abelianization of ${\Gamma}$ (that is the first homology group $H_1(M^{10})$) and $SK_1({{\mathbb Z}}[{\Gamma}])=\ker (K_1({{\mathbb Z}}[{\Gamma}]) \to K_1({{\mathbb Q}}[{\Gamma}]))$. This kernel of the change of coefficients homomorphism is the full torsion subgroup of ${\rm Wh}({\Gamma})$. #### Properties of the torsion subgroup. The torsion subgroup $SK_1({{\mathbb Z}}[{\Gamma}])$ of ${\rm Wh}({\Gamma})$ is highly nontrivial [@ADS] [@Wa74] [@Ol]. Some of the useful properties are 1. 
The torsion subgroup of ${\rm Wh}(\Gamma)$ is isomorphic to $SK_1({{\mathbb Z}}[\Gamma])$. 2. The torsion in ${\rm Wh}({\Gamma})$ comes from $SL(2,{{\mathbb Z}}[{\Gamma}])$. 3. $SK_1({{\mathbb Z}}[\Gamma])$ is non-vanishing for all groups of the form $\Gamma \cong ({{\mathbb Z}}_p)^n$, $n\geq 3$ and $p$ an odd prime. 4. $SK_1({{\mathbb Z}}[\Gamma])=1$ if $\Gamma \cong {{\mathbb Z}}_{p^n}$ or ${{\mathbb Z}}_{p^n}\times {{\mathbb Z}}_p$ (for any prime $p$, and any $n$), if $\Gamma\cong ({{\mathbb Z}}_2)^n$ (any $n$), or if $\Gamma$ is any dihedral, quaternion, or semidihedral 2-group. 5. The classes of finite groups $\Gamma$ for which ${\rm Wh}(\Gamma)=1$, or $SK_1({{\mathbb Z}}[\Gamma])=1$, are [*not*]{} closed under products. This provides many nontrivial examples using products. For a finitely generated fundamental group ${\Gamma}$ the vanishing of the Whitehead group ${\rm Wh}({\Gamma})$ is equivalent to the statement that each $h$-cobordism over a closed connected $M^{10}$ is trivial. Knowing that all $h$-cobordisms over a given manifold are trivial is useful, but strong. Alternatively, we could have ${\rm Wh}({\Gamma})$ nontrivial while the distinguished element, the Whitehead torsion $\tau$, is zero. Whitehead torsion {#sec tau} ----------------- The Whitehead torsion, which is essentially a linking matrix for handles in the handle decomposition of the manifold, serves as an obstruction to the reduction of an $h$-cobordism to a product. We have encountered above many situations where the Whitehead group is not trivial. In certain cases these elements, including the distinguished element given by the Whitehead torsion, can be characterized. This characterization can be geometric due to the realization theorem which says that every Whitehead torsion comes from a manifold (see [@Ker]). 
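As a quick sanity check on the rank statements above, Bass's formula ${\rm rank}\,{\rm Wh}(\Gamma)=r(\Gamma)-q(\Gamma)$ can be evaluated mechanically for finite abelian groups, where every conjugacy class is a singleton. The following brute-force sketch is our illustration, not part of the paper; the function name `wh_rank` and its representation of the group are assumptions of this sketch.

```python
from itertools import product

def wh_rank(orders):
    """Rank of Wh(G) for the finite abelian group G = Z_{n1} x ... x Z_{nk},
    via Bass's formula rank = r - q, where r is the number of unordered
    pairs {g, g^-1} and q is the number of cyclic subgroups (conjugacy
    classes are singletons since G is abelian)."""
    elems = list(product(*(range(n) for n in orders)))
    zero = tuple(0 for _ in orders)

    def inv(g):
        return tuple((-x) % n for x, n in zip(g, orders))

    def add(a, b):
        return tuple((x + y) % n for x, y, n in zip(a, b, orders))

    # r: orbits of the inversion involution g -> g^-1
    r = len({frozenset((g, inv(g))) for g in elems})

    # q: distinct cyclic subgroups <g>, generated by repeated addition
    def cyclic(g):
        acc, cur = {zero}, g
        while cur != zero:
            acc.add(cur)
            cur = add(cur, g)
        return frozenset(acc)

    q = len({cyclic(g) for g in elems})
    return r - q

print(wh_rank([5]))     # 1: rank Wh(Z_5) = (5-3)/2 = 1
print(wh_rank([2]))     # 0: Wh(Z_2) = 0
print(wh_rank([3, 4]))  # 1: Wh(Z_3 x Z_4) = Wh(Z_12) has rank 1
```

This reproduces, for instance, the free rank of ${\rm Wh}({{\mathbb Z}}_3 \times {{\mathbb Z}}_4) \cong {{\mathbb Z}}$ quoted in the free-product property; note that the formula sees only the free rank, not the torsion subgroup $SK_1({{\mathbb Z}}[\Gamma])$.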
First, note that a map $f: Y^{11}\to M_0^{10}$ induces a homomorphism $f_*: {\rm Wh}(\pi_1(Y^{11})) \to {\rm Wh}(\pi_1(M_0^{10}))$ on the corresponding Whitehead groups such that ${\rm id}_*={\rm id}$, $(g \circ f)_*=g_* \circ f_*$, and $f \simeq g$ implies that $f_*=g_*$. Next, the Whitehead torsion of our eleven-dimensional $h$-cobordism $(Y^{11}; M_0^{10}, f_0, M_1^{10}, f_1)$ over $M_0^{10}$, $ \tau (Y^{11}, M_0^{10}) \in {\rm Wh}(\pi_1(M_0^{10}))\;, $ is defined to be the preimage of the Whitehead torsion $\tau (M_0^{10}\buildrel{f_0}\over{\longrightarrow} \partial_0Y^{11} \buildrel{\iota_0}\over{\longrightarrow} Y^{11})\in {\rm Wh}(\pi_1(Y^{11}))$ under the isomorphism $(\iota_0 \circ f_0)_*: {\rm Wh}(\pi_1(M_0^{10})) \buildrel{\cong}\over{\longrightarrow} {\rm Wh}(\pi_1(Y^{11}))$, where $\iota_0: \partial_0Y^{11} \hookrightarrow Y^{11}$ is the inclusion (see [@KL]). Next we will consider the simple situation when the diffeomorphisms are the identity. #### Geometric definition of Whitehead torsion. There is a description of Whitehead torsion at the level of chain complexes [@Mil] [@Co]. Let ${\ensuremath{\mathcal W}}(M^{10})$ be the collection of all pairs of finite complexes $(Y^{11}, M^{10})$ such that $M^{10}$ is a strong deformation retract of $Y^{11}$. For any two objects $(Y^{11}_1, M^{10})$, $(Y^{11}_2, M^{10}) \in {\ensuremath{\mathcal W}}$ define an equivalence $(Y^{11}_1, M^{10}) \sim (Y^{11}_2, M^{10})$ if and only if $Y^{11}_1$ and $Y^{11}_2$ are simple homotopy equivalent relative to the subcomplex $M^{10}$. Define ${\rm Wh}(M^{10})= {\ensuremath{\mathcal W}}/\sim$ and let $[Y^{11}_1, M^{10}]$ and $[Y^{11}_2, M^{10}]$ be two classes in ${\rm Wh}(M^{10})$. 
For $Y^{11}_1 \bigsqcup_{M^{10}} Y^{11}_2$, the union of $Y^{11}_1$ and $Y^{11}_2$ glued along the common subcomplex $M^{10}$, an abelian group structure can be defined on the Whitehead group ${\rm Wh}(M^{10})$ by $[Y^{11}_1, M^{10}] \oplus [Y^{11}_2, M^{10}]= [Y^{11}_1 \bigsqcup_{M^{10}} Y^{11}_2, M^{10}]$. The universal cover $(\widetilde{Y}^{11}, \widetilde{M}^{10})$ of an element $(Y^{11}, M^{10})$ in ${\ensuremath{\mathcal W}}$ can be equipped with the CW-complex structure lifted from the CW-structure of $(Y^{11}, M^{10})$. The inclusion $\widetilde{M}^{10} \subset \widetilde{Y}^{11}$ is a homotopy equivalence. Let $C_*(\widetilde{Y}^{11}, \widetilde{M}^{10})$ be the cellular chain complex of $(\widetilde{Y}^{11}, \widetilde{M}^{10})$. The covering action of $\pi_1(Y^{11})$ on $(\widetilde{Y}^{11}, \widetilde{M}^{10})$ induces an action on $C_*(\widetilde{Y}^{11}, \widetilde{M}^{10})$ and makes it a finitely generated free acyclic chain complex of ${{\mathbb Z}}[\pi_1(Y^{11})]$-modules. In addition to the boundary map $\partial$, there is a contraction map $\delta$ of degree $+1$ on $C_*(\widetilde{Y}^{11}, \widetilde{M}^{10})$ such that $\partial \delta + \delta \partial ={\rm id}$ and $\delta^2=0$. The module homomorphism $\partial + \delta: \bigoplus_{i=0}^\infty C_{2i+1}(\widetilde{Y}^{11}, \widetilde{M}^{10}) \to \bigoplus_{i=0}^\infty C_{2i}(\widetilde{Y}^{11}, \widetilde{M}^{10}) $ is an isomorphism of ${{\mathbb Z}}[\pi_1(Y^{11})]$-modules. The source and the target of this homomorphism are finitely generated free modules with a basis we choose coming from the CW-structure on $(\widetilde{Y}^{11}, \widetilde{M}^{10})$. Consider the matrix of this homomorphism $\partial + \delta$ which is an invertible matrix with entries in ${{\mathbb Z}}[\pi_1(Y^{11})]$ and hence lies in $GL(n, {{\mathbb Z}}[\pi_1(Y^{11})])$ for some $n$. 
Now take the image of this matrix in ${\rm Wh}(\pi_1(Y^{11}))$ via a map $\tau$, sending $(Y^{11}, M^{10})$ to an element of ${\rm Wh}(\pi_1(Y^{11}))$. More explicitly, let $ \cdots \buildrel{\partial}\over{\longrightarrow} C_{i+1} \buildrel{\partial}\over{\longrightarrow} C_{i} \buildrel{\partial}\over{\longrightarrow} \cdots C_{0} \buildrel{\partial}\over{\longrightarrow} 0 $ be the complex which calculates the homology $H_*(Y^{11}, M^{10};{{\mathbb Z}}[{\Gamma}])$ of the inclusion $M^{10} \subset Y^{11}$. Each $C_i$ is a finitely generated free ${{\mathbb Z}}[{\Gamma}]$-module. Up to orientation and translation by an element in ${\Gamma}$, each $C_i$ has a preferred basis over ${{\mathbb Z}}[{\Gamma}]$ coming from the $i$-simplices added to get from $M^{10}$ to $Y^{11}$ in some triangulation of the universal covering spaces. The group $Z_i$ of $i$-cycles is the kernel of $\partial : C_i \to C_{i-1}$ and the group $B_i$ of $i$-boundaries is the image of $\partial: C_{i+1} \to C_i$. Since $M^{10} \subset Y^{11}$ is a deformation retract, homotopy invariance of homology gives that $H_*=0$, so that $B_*=Z_*$. Let ${\ensuremath{\mathcal M}}_i \in GL({{\mathbb Z}}[\pi_1(M^{10})])$ be the matrices representing the isomorphism $B_i \oplus B_{i-1} \cong C_i$ coming from a choice of splitting of the short exact sequence $0 \to B_i \to C_i \to B_{i-1} \to 0$. Let $[{\ensuremath{\mathcal M}}_i] \in {\rm Wh}(\pi_1(M^{10}))$ be the corresponding equivalence classes. The Whitehead torsion is then $ \tau(Y^{11}, M^{10})= \sum (-1)^i [{\ensuremath{\mathcal M}}_i] \in {\rm Wh}(\pi_1(M^{10}))\;. $ Note that the Whitehead group is identified as a quotient of $K_1({{\mathbb Z}}[{\Gamma}])$ by the subgroup generated by the units of the form $\pm {{\gamma}}$ for ${{\gamma}}\in {\Gamma}=\pi_1(M_0^{10})$. In the present context, this ensures the independence of the choice of ${{\mathbb Z}}[{\Gamma}]$-basis within the cellular equivalence class of ${{\mathbb Z}}[{\Gamma}]$-bases. #### Properties of Whitehead torsion. 
The Whitehead torsion has existence and uniqueness properties. 1\. [*Existence.*]{} Given $\alpha \in {\rm Wh}(\pi_1(M^{10}))$, there exists an $h$-cobordism $Y^{11}$ with $\tau (Y^{11})=\alpha$. This implies that if the Whitehead group is nontrivial then we can find a cobordism for every element in that group. In order to get a trivial $h$-cobordism, that is one of cylinder type, we have to make sure that the element of ${\rm Wh}({\Gamma})$ we identify for our spaces will be the zero element. This is of course not guaranteed to occur. 2\. [*Uniqueness.*]{} $\tau (Y^{11})= \tau (Y'^{11})$ if and only if there exists a diffeomorphism $f: Y^{11} \to Y'^{11}$ such that $f|_M={\rm id}_M$. This tells us that we are allowed to “deform" $Y^{11}$ in a nice way and still be able to get the same type of cobordism. In particular, for $Y^{11}$ with $\tau(Y^{11})=0$ we can always find a diffeomorphic $Y'^{11}$ for which the property that the Whitehead torsion is zero is preserved. #### Elements of finite order in the Whitehead group. We have seen that the Whitehead group of products of finite cyclic groups may contain torsion. Elements of finite order can be characterized as follows [@Mil]. Consider an orthogonal representation ${\Gamma}\to O(n)$ of the finite group ${\Gamma}$. This representation gives rise to a ring homomorphism $\rho: {{\mathbb Z}}[{\Gamma}] \to {\rm \mathbb{M}}_n ({{\mathbb R}})$, where ${\rm \mathbb{M}}_n({{\mathbb R}})$ is the algebra of $n \times n$ matrices over the real numbers. This induces a group homomorphism $\rho_*: \widetilde{K}_1({{\mathbb Z}}[{\Gamma}]) \to \widetilde{K}_1({\rm \mathbb{M}}_n({{\mathbb R}}))\cong \widetilde{K}_1({{\mathbb R}}) \cong {{\mathbb R}}^+$. Since ${{\mathbb R}}^+$ has no elements of finite order, there is the corresponding homomorphism ${\rm Wh}({\Gamma}) \to {{\mathbb R}}^+$. 
Therefore, an element $\omega \in {\rm Wh}({\Gamma})$ has finite order if and only if $\rho_*(\omega)=1$ for every orthogonal representation $\rho$ of ${\Gamma}$. #### Elements of ${\rm Wh}({\Gamma})$ as matrices and the representation dimension. Nontrivial elements of the Whitehead group can be represented by matrices, usually of small size. The [*representation dimension*]{} of a group ${\Gamma}$ is said to be less than or equal to $m$, with notation $r$-$\dim {\Gamma}\leq m$, if every element of ${\rm Wh}({\Gamma})$ can be realized as a matrix in $GL(m, {{\mathbb Z}}[{\Gamma}])$. If ${\Gamma}$ is finite then $r$-$\dim {\Gamma}\leq 2$. Furthermore, the representation dimension of the finite group ${\Gamma}$ satisfies $r$-$\dim {\Gamma}\leq 1$ if and only if ${\Gamma}$ admits no epimorphic mapping onto any of the following (see [@Sh]): [*1.*]{} the generalized quaternion group, [*2.*]{} the binary tetrahedral, octahedral, or icosahedral groups, [*3.*]{} and the groups ${{\mathbb Z}}_{p^2} \times {{\mathbb Z}}_{p^2}$, ${{\mathbb Z}}_p \times {{\mathbb Z}}_p \times {{\mathbb Z}}_p$, ${{\mathbb Z}}_p \times {{\mathbb Z}}_2 \times {{\mathbb Z}}_2 \times {{\mathbb Z}}_2$, ${{\mathbb Z}}_4 \times {{\mathbb Z}}_2 \times {{\mathbb Z}}_2$, and ${{\mathbb Z}}_4 \times {{\mathbb Z}}_4$, for $p$ a prime. Thus $r$-$\dim {\Gamma}\leq 1$ for all finite simple groups. However, if we take products then the size of the matrix can grow (see expression for an explicit matrix). Further examples in heterotic M-theory {#sec ex} ======================================= We have already seen many classes of examples both for the Whitehead group in section \[sec Wh\] and for the Whitehead torsion in section \[sec tau\]. We now provide more examples and in particular ones which appear explicitly in model building (cf. the introduction). #### Tori and free abelian groups. 
The fundamental group of the circle is the free abelian group ${{\mathbb Z}}$, so that the corresponding Whitehead group is trivial, ${\rm Wh} ({{\mathbb Z}})=0$. For the $n$-torus $T^n$, the fundamental group $\pi_1(T^n)={{\mathbb Z}}^n$. This free abelian group of rank $n$ also has a trivial Whitehead group, ${\rm Wh}(\pi_1(T^n))=0$, since the Whitehead group of any finitely generated free abelian group vanishes by the Bass-Heller-Swan theorem. It follows from the theorem of Bass about the rank of the Whitehead group that ${\rm Wh}(\Gamma)$ of a finite cyclic group $\Gamma$ is zero if and only if $\Gamma$ has order 1, 2, 3, 4, or 6 [@Mil]. #### Cyclic groups. Suppose ${\Gamma}$ is a finite group. Then ${\rm Wh}({\Gamma})$ is finitely generated, and ${\rm rank}({\rm Wh}({\Gamma}))$ is the difference between the number of irreducible representations of ${\Gamma}$ over ${{\mathbb R}}$ and the number of irreducible representations of ${\Gamma}$ over ${{\mathbb Q}}$. For $\Gamma$ a cyclic group ${{\mathbb Z}}_p$ of order $p$, an odd prime, the numbers of representations are $q({{\mathbb Z}}_p)=2$ and $r({{\mathbb Z}}_p)=\frac{1}{2}(p+1)$, respectively. This implies that ${\rm Wh}({{\mathbb Z}}_p)$ is the free abelian group of rank $(p-3)/2$ and that ${\rm Wh}({{\mathbb Z}}_2)=0$. Alternatively, note that ${{\mathbb Z}}_p$ has $(p-1)/2$ inequivalent two-dimensional irreducible representations over ${{\mathbb R}}$, but one $(p-1)$-dimensional irreducible representation over ${{\mathbb Q}}$ (since ${{\mathbb Q}}[{{\mathbb Z}}_p] \cong {{\mathbb Q}}\times {{\mathbb Q}}(\zeta)$, $\zeta$ a primitive $p$-th root of unity, and $[{{\mathbb Q}}(\zeta) : {{\mathbb Q}}]=p-1$), so ${\rm rank}({\rm Wh}({{\mathbb Z}}_p))=\frac{p-1}{2}+1 -2= (p-3)/2$. Note that we have already seen that ${\rm Wh}({{\mathbb Z}}_k)=0$ for $k=2,3,4,6$. #### Units in the group ring. 
Consider the integral group ring ${{\mathbb Z}}[{{\mathbb Z}}_p]$ of the finite cyclic group ${{\mathbb Z}}_p$ and let $\zeta$ be a primitive $p$th root of unity with corresponding ring of cyclotomic integers ${{\mathbb Z}}[\zeta]$. The pullback square of rings $ \xymatrix{ {{\mathbb Z}}[{{\mathbb Z}}_p] \ar[rr] \ar[d] && {{\mathbb Z}}[\zeta] \ar[d] \\ {{\mathbb Z}}\ar[rr] && \mathbb{F}_p }\;, $ where $\mathbb{F}_p$ is the field with $p$ elements, implies that the $(p-1)$st power of any unit in ${{\mathbb Z}}[\zeta]$ comes from a unit in ${{\mathbb Z}}[{{\mathbb Z}}_p]$. An example of a unit in ${{\mathbb Z}}[\zeta]$ is $(\zeta + \zeta^{-1})^r$ for any integer $r$. This is invariant under complex conjugation in ${{\mathbb Z}}[\zeta]$ (this corresponds to invariance under the orientation duality discussed in section \[sec aut\]). #### The quintic and the cyclic group of order 5. The quintic threefold plays an important role as a prototype example of compactification on Calabi-Yau manifolds. Consider the one-parameter family of quintic threefolds ${\ensuremath{\mathcal Q}}:= \{ z_1^5 + z_2^5 +z_3^5 + z_4^5 + z_5^5 + \psi^5 z_1z_2 z_3 z_4 z_5=0\} \subset {{\mathbb C}}P^4$. The defining equation is invariant under the ${{\mathbb Z}}_5 \times {{\mathbb Z}}_5 \subset {\rm PGL}(5,{{\mathbb C}})$ group action $ [z_1 : z_2 : z_3 : z_4 : z_5]\mapsto [z_2 : z_3 : z_4 : z_5 : z_1]\;, \qquad [z_1 : z_2 : z_3 : z_4 : z_5]\mapsto [\zeta z_1 : \zeta^2 z_2 : \zeta^3 z_3 : \zeta^4 z_4 : z_5]\;, $ where $\zeta=e^{2\pi i/5}$. The fixed points lie on ${{\mathbb C}}P^4-{\ensuremath{\mathcal Q}}$, so that ${\ensuremath{\mathcal Q}}/{{\mathbb Z}}_5$ and ${\ensuremath{\mathcal Q}}/({{\mathbb Z}}_5\times {{\mathbb Z}}_5)$ are smooth Calabi-Yau threefolds. Any of the six different ${{\mathbb Z}}_5$ subgroups in ${{\mathbb Z}}_5 \times {{\mathbb Z}}_5$ can be used.
The Whitehead group of ${{\mathbb Z}}_5=\{t~|~t^5=1\}$ is ${\rm Wh}({{\mathbb Z}}_5)={{\mathbb Z}}$ with generator the torsion $\tau (u)$ of the unit $u=1-t+t^2 \in {{\mathbb Z}}[{{\mathbb Z}}_5]$ [@Mil]. Since $u=1-t+t^2=t\,(t+t^{-1}-1)$, the identity $(t+t^{-1} -1)(t^2 + t^{-2} -1)=1$ indeed shows that $u$ is a unit. The homomorphism $\alpha: {{\mathbb Z}}[{{\mathbb Z}}_5] \to {{\mathbb C}}$, sending $t$ to $\zeta$, also sends $\{\pm {{\gamma}}: {{\gamma}}\in {\Gamma}\}$ to the roots of unity in ${{\mathbb C}}$, and hence $x \mapsto |\alpha (x)|$ defines a homomorphism from ${\rm Wh}({{\mathbb Z}}_5)$ into ${{\mathbb R}}^*_+$, the positive real numbers. This can be used to show that no power of $u$ is a trivial unit. Indeed, $|\alpha (u)|= |\,\zeta+\zeta^{-1}-1\,| = |2\cos(2\pi/5) - 1| \approx 0.4 \neq 1$, so that $\tau(u)$ is an element of infinite order in ${\rm Wh}({{\mathbb Z}}_5)$. Note that the unit $u$ is self-conjugate, and that the automorphism $t \mapsto t^2$ of ${{\mathbb Z}}_5$ carries $u$ to $u^{-1}$. In fact, for $\Gamma$ finite abelian, every element of ${\rm Wh}(\Gamma)$ is self-conjugate [@Mil] (see the last paragraph in section \[sec tau\]). We see from the example of the quintic that, a priori, there are countably infinitely many elements in the Whitehead group of the fundamental group of the quintic. Unless the Whitehead torsion is the zero element, there will be an obstruction to having a trivial $h$-cobordism and hence to a consistent relation to heterotic M-theory. Therefore, it is an interesting problem to compute the Whitehead torsion of the quintic. Recall from the end of section \[sec Wh\] that the full torsion subgroup of the Whitehead group is given by $SK_1({{\mathbb Z}}[{\Gamma}])$. Therefore, one way to tell that ${\rm Wh}({\Gamma})$ is nontrivial is to detect torsion via $SK_1({{\mathbb Z}}[{\Gamma}])$. #### Products of abelian groups. We now consider products of abelian groups, in particular of cyclic groups. [*1.
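Both claims about $u$ can be checked by direct computation in ${{\mathbb Z}}[{{\mathbb Z}}_5]$, modeling group-ring elements as coefficient lists indexed by the exponent of $t$. The following quick sanity check is our own illustration, not part of the original computation in [@Mil]:

```python
import cmath

def mul(a, b):
    # multiply two elements of Z[Z_5]; coefficients are indexed by exponent mod 5
    c = [0] * 5
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[(i + j) % 5] += ai * bj
    return c

# t + t^{-1} - 1 and t^2 + t^{-2} - 1   (t^{-1} = t^4, t^{-2} = t^3 in Z_5)
v = [-1, 1, 0, 0, 1]
w = [-1, 0, 1, 1, 0]
assert mul(v, w) == [1, 0, 0, 0, 0]   # the quoted identity: the product is 1

# u = 1 - t + t^2 = t * (t + t^{-1} - 1), so u is a unit as well
u = [1, -1, 1, 0, 0]
t = [0, 1, 0, 0, 0]
assert mul(t, v) == u

# |alpha(u)| with alpha(t) = exp(2 pi i/5) is bounded away from 1,
# so tau(u) has infinite order in Wh(Z_5)
zeta = cmath.exp(2j * cmath.pi / 5)
alpha_u = sum(c * zeta**k for k, c in enumerate(u))
assert abs(abs(alpha_u) - 0.381966) < 1e-5
```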
Products of groups of even order.*]{} For even order, we have already seen that the Whitehead group of the smallest noncyclic group, ${{\mathbb Z}}_2 \times {{\mathbb Z}}_2$, is zero. Next we consider products of ${{\mathbb Z}}_2$ with ${{\mathbb Z}}_4$ and so on. We use the following two general formulae [@Ol] for the torsion part of the Whitehead group $ SK_1({{\mathbb Z}}[ ({{\mathbb Z}}_2)^k \times {{\mathbb Z}}_{2^n}])\cong \left[ \oplus_{r=1}^k \binom{k}{r} \cdot ({{\mathbb Z}}_{2^{r-1}}) \right] \oplus \left[ \oplus_{s=2}^n ({{\mathbb Z}}_{2^s})\right]$ and $SK_1({{\mathbb Z}}[({{\mathbb Z}}_2)^2 \times {{\mathbb Z}}_{2^n}]) \cong {{\mathbb Z}}_2^{n-1}$. For instance, the following cases can then be deduced: [*1.*]{} $SK_1({{\mathbb Z}}[{{\mathbb Z}}_4 \times {{\mathbb Z}}_4])\cong {{\mathbb Z}}_2$. [*2.*]{} $SK_1({{\mathbb Z}}[{{\mathbb Z}}_2 \times {{\mathbb Z}}_2 \times {{\mathbb Z}}_4])\cong {{\mathbb Z}}_2$. [*3.*]{} $SK_1({{\mathbb Z}}[({{\mathbb Z}}_2)^3 \times {{\mathbb Z}}_4])\cong ({{\mathbb Z}}_2)^3 \times {{\mathbb Z}}_4$. This last case is curious in that ${\rm Wh}(\Gamma)=\Gamma$. We can also use the general formula $SK_1({{\mathbb Z}}[{{\mathbb Z}}_{4} \times {{\mathbb Z}}_{2^n}]) \cong ({{\mathbb Z}}_2)^{(n-1)}$ to deduce other relevant groups. For example, $SK_1({{\mathbb Z}}[{{\mathbb Z}}_4 \times {{\mathbb Z}}_8])\cong ({{\mathbb Z}}_2)^2$, $SK_1({{\mathbb Z}}[{{\mathbb Z}}_4 \times {{\mathbb Z}}_{16}])\cong ({{\mathbb Z}}_2)^3$, $SK_1({{\mathbb Z}}[{{\mathbb Z}}_4 \times {{\mathbb Z}}_{32}])\cong ({{\mathbb Z}}_2)^4$, etc. [*2. Products of groups of odd order.*]{} Next we consider the case when the orders of the groups in the products are odd. We will look at groups of the form $({{\mathbb Z}}_p)^k$, ${{\mathbb Z}}_{p^2} \times {{\mathbb Z}}_{p^n}$ and $({{\mathbb Z}}_p)^2 \times {{\mathbb Z}}_{p^n}$, as well as combinations involving three factors, using general results from reference [@Ol].
$(i)$ The torsion subgroup $SK_1({{\mathbb Z}}[{\Gamma}])$ is trivial if ${\Gamma}$ is cyclic or an elementary 2-group, or of type ${{\mathbb Z}}_p \oplus {{\mathbb Z}}_{p^n}$. However, $SK_1({{\mathbb Z}}[{\Gamma}])$ is nontrivial for most abelian groups [@ADS]. If ${\Gamma}=({{\mathbb Z}}_p)^k$, $p$ odd, then $SK_1({{\mathbb Z}}[{\Gamma}])$ is a ${{\mathbb Z}}_p$-vector space of dimension $(p^k-1)/(p-1) - \binom{p+k-1}{p}$. For example, for ${\Gamma}=({{\mathbb Z}}_3)^3$, the torsion subgroup is $SK_1({{\mathbb Z}}[({{\mathbb Z}}_3)^3]) \cong ({{\mathbb Z}}_3)^3$. $(ii)$ For $p$ an odd prime, $SK_1({{\mathbb Z}}[{{\mathbb Z}}_{p^2} \times {{\mathbb Z}}_{p^n}]) \cong ({{\mathbb Z}}_p)^{(p-1)(n-1)}$. $(iii)$ For $p$ an odd prime, $SK_1({{\mathbb Z}}[({{\mathbb Z}}_p)^2\times {{\mathbb Z}}_{p^n} ]) \cong ({{\mathbb Z}}_p)^{np(p-1)/2} $. Let $p$ be an odd prime and $\Gamma$ an elementary abelian $p$-group of rank $k$. Then [@Ste] $SK_1({{\mathbb Z}}[\Gamma])$ is an elementary abelian $p$-group of rank $(p^k-1)/(p-1) - \binom{p+k-1}{p}$. In particular $SK_1({{\mathbb Z}}[\Gamma])\neq 0$ for $k\geq 3$. For example, the following table can be formed (see also [@Ste]) $ \begin{array}{|c|c|} \hline \Gamma & SK_1({{\mathbb Z}}[\Gamma])\\ \hline \hline {{\mathbb Z}}_{p^2}\times {{\mathbb Z}}_{p^2} ~(p=3,5,7) & ({{\mathbb Z}}_p)^{p-1}\\ \hline {{\mathbb Z}}_{p^2}\times {{\mathbb Z}}_{p}\times {{\mathbb Z}}_p ~(p=3,5,7) & ({{\mathbb Z}}_p)^{p(p-1)}\\ \hline {{\mathbb Z}}_{27} \times {{\mathbb Z}}_9 & ({{\mathbb Z}}_3)^4\\ \hline {{\mathbb Z}}_{27}\times {{\mathbb Z}}_3 \times {{\mathbb Z}}_3 & ({{\mathbb Z}}_3)^9\\ \hline {{\mathbb Z}}_9 \times {{\mathbb Z}}_9 \times {{\mathbb Z}}_3 & ({{\mathbb Z}}_3)^{15} \times ({{\mathbb Z}}_9)^2\\ \hline \end{array} $ #### Nonabelian groups. We have already seen examples of nonabelian groups in section \[sec Wh\]. In addition, [*1.
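The ranks quoted in $(i)$-$(iii)$ and in the table can be tabulated mechanically. The following small sketch is our own; the formulas are exactly those quoted above from [@Ol] and [@Ste]:

```python
from math import comb

def sk1_rank_elementary(p, k):
    # rank of SK_1(Z[(Z_p)^k]) as a Z_p-vector space, p an odd prime
    return (p**k - 1) // (p - 1) - comb(p + k - 1, p)

def sk1_rank_p2_pn(p, n):
    # number of Z_p factors in SK_1(Z[Z_{p^2} x Z_{p^n}]), p an odd prime
    return (p - 1) * (n - 1)

def sk1_rank_pp_pn(p, n):
    # number of Z_p factors in SK_1(Z[(Z_p)^2 x Z_{p^n}]), p an odd prime
    return n * p * (p - 1) // 2

# SK_1 vanishes for elementary abelian rank k <= 2 and first appears at k = 3
assert all(sk1_rank_elementary(p, 2) == 0 for p in (3, 5, 7))
assert sk1_rank_elementary(3, 3) == 3          # (Z_3)^3          -> (Z_3)^3

# rows of the table above
assert sk1_rank_p2_pn(3, 2) == 2               # Z_9  x Z_9       -> (Z_3)^2
assert sk1_rank_p2_pn(3, 3) == 4               # Z_27 x Z_9       -> (Z_3)^4
assert sk1_rank_pp_pn(3, 2) == 6               # Z_9  x Z_3 x Z_3 -> (Z_3)^6
assert sk1_rank_pp_pn(3, 3) == 9               # Z_27 x Z_3 x Z_3 -> (Z_3)^9
```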
Crystallographic groups.*]{} $SK_1({{\mathbb Z}}[\Gamma])=0$ for $\Gamma$ a dihedral group, or the binary tetrahedral or icosahedral group [@Mag] [@Ste]. [*2. The quaternion group.*]{} The Whitehead group ${\rm Wh}(Q_8)$ of the quaternion group $Q_8$ of order 8 is isomorphic to $\pm V$, where $V={{\mathbb Z}}_2\times {{\mathbb Z}}_2$ is Klein’s 4-group [@Ke]. Note that $V$ is the factor group $Q_8/\{\pm 1\}$, where $\{\pm 1\}$ is the commutator subgroup of $Q_8$. [*3. Products with abelian groups.*]{} If $\Gamma$ is any (nonabelian) quaternion or semidihedral 2-group, then for all $k \geq 0$, the torsion subgroup is $ SK_1({{\mathbb Z}}[\Gamma \times ({{\mathbb Z}}_2)^k]) \cong ({{\mathbb Z}}_2)^{2^k-k-1}$. [*4. Nonabelian groups with specified abelianization.*]{} For instance, for $\Gamma$ of order $|\Gamma|=16$ the torsion subgroup is given by $ SK_1({{\mathbb Z}}[\Gamma]) \cong \left\{ \begin{array}{ll} 1 & {\rm if~} \Gamma^{\rm ab} \cong {{\mathbb Z}}_2 \times {{\mathbb Z}}_2 {\rm ~~or~~} {{\mathbb Z}}_2 \times {{\mathbb Z}}_2 \times {{\mathbb Z}}_2\\ {{\mathbb Z}}_2 & {\rm if~} \Gamma^{\rm ab} \cong {{\mathbb Z}}_4 \times {{\mathbb Z}}_2. \end{array} \right. $ #### Finding Whitehead groups via transfer. Looking at inclusions tells us about the corresponding Whitehead groups. We will consider several situations. [*1.*]{} Consider the cyclic group ${{\mathbb Z}}_{2k+1}$ of order $2k+1$ as a subgroup of the cyclic group ${{\mathbb Z}}_{4k+2}$ of order $4k+2$. Then the transfer $i^*: {\rm Wh}({{\mathbb Z}}_{4k+2}) \to {\rm Wh}({{\mathbb Z}}_{2k+1})$, corresponding to $i: {{\mathbb Z}}_{2k+1} \hookrightarrow {{\mathbb Z}}_{4k+2}$, is onto for all $k$ [@Kw]. [*2.*]{} Now consider the inclusion $i: {{\mathbb Z}}_{2k} \hookrightarrow {{\mathbb Z}}_{2k}\oplus {{\mathbb Z}}_2$. Then the transfer $i^*: {\rm Wh}({{\mathbb Z}}_{2k} \oplus {{\mathbb Z}}_2) \to {\rm Wh}({{\mathbb Z}}_{2k})$ is onto if and only if $k=1,2$ or 3 [@Kw2].
Since ${\rm Wh}({{\mathbb Z}}_{2k})=0$ for $k=1,2$ and 3, this means that ${\rm Wh}({{\mathbb Z}}_{2k} \oplus {{\mathbb Z}}_2)$ is trivial for these values of $k$. [*3.*]{} Now let ${\Gamma}$ be a finite abelian group of odd order. Then $i^*: {\rm Wh}({\Gamma}\oplus {{\mathbb Z}}_2) \to {\rm Wh}({\Gamma})$ is onto [@Kw2]. This can then tell us whether ${\rm Wh}({\Gamma}\oplus {{\mathbb Z}}_2)$ is trivial from whether or not the Whitehead group of ${\Gamma}$ itself is trivial. In general, a surjection ${\Gamma}\to {\Gamma}'$ of finite abelian groups induces a surjection $SK_1({{\mathbb Z}}[{\Gamma}]) \to SK_1({{\mathbb Z}}[{\Gamma}'])$ [@ADS]. #### Semidirect products. For finite ${\Gamma}$, the torsion subgroup of the Whitehead group is trivial, $SK_1(R[{\Gamma}])=1$ for all rings $R$ of integers in number fields, if and only if ${\Gamma}$ is a semidirect product of two cyclic groups of relatively prime orders [@ADOS]. In general, we can determine the ranks of the torsion-free parts of these groups using Bass’ theorem. Given the above rules and results, it is a straightforward exercise to find the Whitehead groups of the fundamental groups appearing in the literature of model building (reviewed partially in the introduction). This includes, for instance, the groups appearing in [@Br]. #### Whitehead torsion. The approach in this paper can also guide us to anticipate conditions on cobordisms when constructing Calabi-Yau threefolds with fundamental groups of certain types. Recall that just because the Whitehead group is nontrivial does not mean that the particular element at hand, the Whitehead torsion, is nontrivial. That is, one still has to compute the Whitehead torsion (geometrically), which we do not do here. We consider examples where elements in the torsion subgroup of the Whitehead group can be explicitly characterized (see [@Ol]).
$(i)$ For $\Gamma={{\mathbb Z}}_4 \times {{\mathbb Z}}_2 \times {{\mathbb Z}}_2= \langle g \rangle \times \langle h_1 \rangle \times \langle h_2 \rangle$, the torsion subgroup is $SK_1({{\mathbb Z}}[\Gamma]) \cong {{\mathbb Z}}_2$, and the nontrivial element is represented by the matrix $ \left[ \begin{array}{cc} 1+ 8(1-g^2)(1+h_1)(1+h_2)(1-g) & -(1-g^2)(1+h_1)(1+h_2)(3+g)\\ &\\ -13(1-g^2)(1+h_1)(1+h_2)(3-g) & 1+ 8(1-g^2)(1+h_1)(1+h_2)(1+g) \end{array} \right] \in {\rm GL}(2, {{\mathbb Z}}[\Gamma])\;. \label{22} $ In this case, one would have to check for a given $h$-cobordism built out of $Y^{11}$ and $M^{10}$ whether the corresponding Whitehead torsion is the zero element or the nontrivial element represented by the matrix above. $(ii)$ For $\Gamma={{\mathbb Z}}_3 \times Q_8=\langle g \rangle \times \langle a, b \rangle$, where $Q_8$ is the quaternion group of order 8, the torsion subgroup is $SK_1({{\mathbb Z}}[\Gamma])\cong {{\mathbb Z}}_2$, and the nontrivial element is represented by the unit $ 1+ (2-g-g^2)(1-a^2)\left( 3g + a + 4g^2a+4(g^2-g)b + 8ab \right) \in ({{\mathbb Z}}[\Gamma])^*\;. $ Again, one would check the geometry to see which of the two elements one gets. It would be very interesting to calculate the Whitehead torsion explicitly for interesting classes of non-simply connected Calabi-Yau manifolds. As far as we know, no such calculations exist. One approach could be to find an explicit Morse function (which does not seem easy). Dynamical aspects {#sec dyn} ================= In this section we consider some dynamical aspects of heterotic M-theory as they arise in connection to the Whitehead group and Whitehead torsion. We consider the effect of diffeomorphisms as well as orientation characters in section \[sec aut\] and then consider dynamical constraints on general compactifications in heterotic M-theory in section \[sec com\]. Automorphisms {#sec aut} ------------- #### Diffeomorphism.
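As a consistency check (our own, not from [@Ol]), one can verify directly in the group ring that the displayed matrix lies in $GL(2, {{\mathbb Z}}[\Gamma])$: its determinant works out to be exactly $1$, using $X^2=8X$ and $Xg^2=-X$ for $X=(1-g^2)(1+h_1)(1+h_2)$. Elements of ${{\mathbb Z}}[{{\mathbb Z}}_4\times{{\mathbb Z}}_2\times{{\mathbb Z}}_2]$ are modeled as dictionaries keyed by exponent triples:

```python
from itertools import product

MOD = (4, 2, 2)   # Gamma = Z_4 x Z_2 x Z_2 = <g> x <h1> x <h2>

def mul(a, b):
    # product in Z[Gamma]: exponents add componentwise modulo MOD
    c = {}
    for (x, m), (y, n) in product(a.items(), b.items()):
        key = tuple((xi + yi) % mi for xi, yi, mi in zip(x, y, MOD))
        c[key] = c.get(key, 0) + m * n
    return {k: v for k, v in c.items() if v != 0}

def add(a, b, sign=1):
    c = dict(a)
    for k, v in b.items():
        c[k] = c.get(k, 0) + sign * v
    return {k: v for k, v in c.items() if v != 0}

def scal(n, a):
    return {k: n * v for k, v in a.items()}

one = {(0, 0, 0): 1}
g, h1, h2 = {(1, 0, 0): 1}, {(0, 1, 0): 1}, {(0, 0, 1): 1}

# X = (1 - g^2)(1 + h1)(1 + h2)
X = mul(mul(add(one, mul(g, g), -1), add(one, h1)), add(one, h2))

a11 = add(one, scal(8, mul(X, add(one, g, -1))))      # 1 + 8X(1-g)
a12 = scal(-1, mul(X, add(scal(3, one), g)))          # -X(3+g)
a21 = scal(-13, mul(X, add(scal(3, one), g, -1)))     # -13X(3-g)
a22 = add(one, scal(8, mul(X, add(one, g))))          # 1 + 8X(1+g)

det = add(mul(a11, a22), mul(a12, a21), -1)
assert det == one    # the displayed matrix indeed lies in SL(2, Z[Gamma])
```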
We study the effect of diffeomorphisms on our cobordisms, starting with a visible sector $M_0^{10}$. Two eleven-dimensional cobordisms $(Y^{11}; M_0^{10}, f_0, M_1^{10}, f_1)$ and $(Y'^{11}; M_0^{10}, f'_0, M_1'^{10}, f'_1)$ over $M_0^{10}$ are diffeomorphic relative to $M_0^{10}$ if there is an orientation-preserving diffeomorphism $F: Y^{11} \to Y'^{11}$ such that $F \circ f_0=f'_0$. Indeed in [@FM] the quantum integrand in the M-theory effective action is shown to be invariant under the group of Spin diffeomorphisms of $Y^{11}$ which act freely on the space of metrics. On the other hand, the effective action of the heterotic string is invariant under diffeomorphisms $\varphi: M^{10} \to M^{10}$ which lift to the Spin bundle and to the $E_8$ vector bundles [@W-tool]. The global anomaly is absent for arbitrary choices of the Spin $M^{10}$ and the two $E_8$ vector bundles. In addition to the many examples that we have considered so far, one might be able to generate others using diffeomorphisms. In a sense, constructing manifolds with cobordisms for which the Whitehead torsion is nontrivial would be easier than calculating the Whitehead torsion for a given fixed cobordism. The idea is to take a cobordism and glue it to another after a ‘twist’ via an automorphism, i.e. a diffeomorphism in our case. This may give rise to a nonzero Whitehead torsion. This requires the study of the mapping torus as is done with the global anomalies in the heterotic effective action, e.g. in [@W-tool]. #### Scale and intervals. In the discussion so far we have used unit intervals $[0,1]$ to characterize the cobordism. In the physical set-up of Horava-Witten [@HW1] [@HW2] we have a length scale imposed by the dynamics in the theory. In the above formulation, we can introduce this length scale by simply replacing the unit interval by the interval $[0, L]$ or $[-L, L]$, with $L$ the (dynamical) length in the eleventh direction. #### Manifolds with non-positive sectional curvature.
It is interesting to note that ${\rm Wh}(\Gamma)$ is trivial for $\Gamma$ the fundamental group of a closed manifold with all sectional curvatures $\leq 0$ [@FJ]. Therefore, although not Calabi-Yau (see [@HLW] Theorem 2.3), such spaces are admissible for $s$-cobordism (see [@Rares]). #### The Whitehead torsion relative to left vs. right boundary. We ask whether it makes a difference to take the Whitehead torsion relative to the left boundary vs. taking it relative to the right boundary. There is a duality theorem which relates the Whitehead torsion relative to one boundary to that relative to the other boundary [@Mil]. For any orientable $h$-cobordism $(Y^{11}, M^{10}, M'^{10})$ we have the relation between $\tau (Y^{11}, M'^{10})$ and $\tau (Y^{11}, M^{10})$ as $ \tau (Y^{11}, M'^{10})= \overline{\tau} (Y^{11}, M^{10})\;, $ where $\overline{\tau}$ is the conjugate of $\tau$, defined as follows. If $a=\sum n_i {{\gamma}}_i$ is an element of ${{\mathbb Z}}[{\Gamma}]$, with $n_i\in {{\mathbb Z}}, {{\gamma}}_i\in{\Gamma}$, then the conjugate of $a$ is the element $\sum n_i {{\gamma}}_i^{-1}$. This conjugation operation is an anti-automorphism of the group ring with corresponding automorphism on $GL({{\mathbb Z}}[{\Gamma}])$ given by sending each matrix to its conjugate transpose. Passing to the abelianized group $K_1({{\mathbb Z}}[{\Gamma}])$ gives an automorphism and hence an automorphism also of the quotient ${\rm Wh}({\Gamma})$. We see that ‘reversing’ the direction of the cobordism, that is going from $M'^{10}$ to $M^{10}$ instead of from $M^{10}$ to $M'^{10}$, will result only in a mild modification in having to deal with the conjugate torsion. For large classes of examples in which we are interested, there is even a simplification. If ${\Gamma}$ is finite abelian then every element $\omega$ of ${\rm Wh}({\Gamma})$ is self-conjugate, $\omega= \overline{\omega}$. This in particular holds for the distinguished element, the Whitehead torsion.
Therefore, for finite abelian fundamental groups working with the Whitehead torsion relative to $M^{10}$ is equivalent to working with the Whitehead torsion relative to $M'^{10}$. #### Remark on the $E_8$ gauge bundles. General boundary conditions for M-theory on a manifold with boundary are considered in [@DFM] [@DMW-boundary]. The left and right boundaries in heterotic M-theory each carry an $E_8$ bundle which, in the process of model building, is to be broken down to a realistic group. Each of the two bundles is characterized by a degree-four characteristic class, $a_L$ for left and $a_R$ for right. As explained in [@DFM], the eleven-dimensional spacetime provides a homotopy between the left and right connections, so that the $E_8$ bundles on the boundaries necessarily have $a_L=a_R$; this is the case in the non-supersymmetric model in [@FH]. However, in (the supersymmetric) Horava-Witten theory, $a_L + a_R=\frac{1}{2}p_1(Y^{11})$. In order to overcome this difficulty, the authors of [@DFM] give a parity-invariant formulation of the C-field in M-theory by passing from $Y^{11}$ to $Y^{11}_d$, the orientation double cover of $Y^{11}$, and defining the C-field to be a parity invariant $E_8$ cocycle on $Y^{11}_d$. This is done via a nontrivial deck transformation $\sigma$ on $Y^{11}_d$, so that a parity-invariant $E_8$ cocycle is one for which the differential character corresponding to the C-field satisfies $\sigma^* ([\check{C}])=[\check{C}]^{\cal{P}}$, where the action of the parity ${\cal P}$ is $[\check{C}]^{\cal P}= [\check{C}]^*$. While this solves the parity problem, it uses boundary conditions which lead to a Bianchi identity for the C-field different from the one in [@HW2]. We should keep these subtleties, which are always present, in mind when dealing with bundles (though we do not directly deal with them in this paper). Nevertheless, we next describe how this appears in our current context.
#### Orientation characters and twisted group algebras of the fundamental group. The orientation character $\omega(M_0^{10}): \pi_1(M_0^{10}) \to {{\mathbb Z}}_2=\{\pm 1\}$ sends a loop ${{\gamma}}: S^{1} \to M_0^{10}$ to $\omega({{\gamma}})=+1$ (respectively, $-1$) if ${{\gamma}}$ is orientation-preserving (respectively, orientation-reversing). Thus, in the oriented case $\omega ({{\gamma}})=+1$ for all ${{\gamma}}\in {\Gamma}$; that is, $\omega$ is trivial if and only if $M_0^{10}$ is orientable. This has the following effect on the integral group ring of the fundamental group. The orientation character defines a twisted involution (an anti-automorphism) on the group ring ${{\mathbb Z}}[{\Gamma}]$ given on group elements by ${{\gamma}}\mapsto \omega({{\gamma}})\, {{\gamma}}^{-1}$, i.e. $\pm {{\gamma}}^{-1}$ according to whether ${{\gamma}}$ is orientation-preserving or orientation-reversing. The resulting group ring is denoted ${{\mathbb Z}}[{\Gamma}]^\omega$. Let us consider this in more detail. An involution on ${{\mathbb Z}}[{\Gamma}]$ is a function ${{\mathbb Z}}[{\Gamma}] \to {{\mathbb Z}}[{\Gamma}]$, taking an element $a$ to an element $\overline{a}$ satisfying: $\overline{(a + b)}=\overline{a} + \overline{b}$, $\overline{(ab)}=\overline{b}\cdot \overline{a}$,  $\overline{\overline{(a)}}=a$, and $\overline{1}=1\in {{\mathbb Z}}[{\Gamma}]$. This gives rise to the $\omega$-twisted involution on ${{\mathbb Z}}[{\Gamma}]$, defined as the map from ${{\mathbb Z}}[{\Gamma}]$ to ${{\mathbb Z}}[{\Gamma}]$ given by $ a=\sum_{{{\gamma}}\in {\Gamma}} n_{{{\gamma}}}{{\gamma}}\longmapsto \overline{a}=\sum_{{{\gamma}}\in {\Gamma}} \omega({{\gamma}}) n_{{\gamma}}{{\gamma}}^{-1}\;, \quad n_{{\gamma}}\in {{\mathbb Z}}. $ In this case we have to use $\omega$-twisted cohomology and fundamental class in evaluating expressions in the theory.
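The twisted involution is simple to model concretely. Below is a sketch of our own, taking ${\Gamma}={{\mathbb Z}}_2$ with the generator represented by an orientation-reversing loop, so $\omega(\gamma)=(-1)^\gamma$; the involution axioms listed above are then checked on random group-ring elements:

```python
import random

N = 2                          # Gamma = Z_2; exponents taken mod 2
omega = lambda g: (-1) ** g    # orientation character: generator reverses orientation

def mul(a, b):
    # product in Z[Z_N]: coefficients indexed by exponent mod N
    c = [0] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[(i + j) % N] += ai * bj
    return c

def bar(a):
    # omega-twisted involution: sum n_g g  ->  sum omega(g) n_g g^{-1}
    c = [0] * N
    for g, n in enumerate(a):
        c[(-g) % N] += omega(g) * n
    return c

one = [1, 0]
assert bar(one) == one                                   # bar(1) = 1
for _ in range(100):
    a = [random.randint(-5, 5) for _ in range(N)]
    b = [random.randint(-5, 5) for _ in range(N)]
    assert bar(bar(a)) == a                              # involutive
    assert bar([x + y for x, y in zip(a, b)]) == \
           [x + y for x, y in zip(bar(a), bar(b))]       # additive
    assert bar(mul(a, b)) == mul(bar(b), bar(a))         # anti-multiplicative
```

For trivial $\omega$ this reduces to the untwisted conjugation $\sum n_i\gamma_i \mapsto \sum n_i\gamma_i^{-1}$ used in the duality formula of the previous paragraph.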
Starting from the cellular ${{\mathbb Z}}[\pi_1(M_0^{10})]$-module chain complex $C(\widetilde{M}_0^{10})$, the $\omega(M_0^{10})$-twisted involution on ${{\mathbb Z}}[\pi_1(M_0^{10})]$ can be used to define the left ${{\mathbb Z}}[\pi_1(M_0^{10})]$-module structure on the dual cochain complex $C^*(\widetilde{M}_0^{10})={\rm Hom}_{{{\mathbb Z}}[\pi_1(M_0^{10})]}\left( C(\widetilde{M}_0^{10}), {{\mathbb Z}}[\pi_1(M_0^{10})] \right)$. When $M_0^{10}$ is compact, [^3] the fundamental class is given by $[M_0^{10}] \in H_{10}(M_0^{10}; {{\mathbb Z}}^{\omega(M)})$ such that the cap product defines ${{\mathbb Z}}[\pi_1]$-module isomorphisms $ [M_0^{10}] \cap - ~: H^*_{\omega(M)} (\widetilde{M}_0^{10}) \buildrel{\cong}\over{\longrightarrow} H_{10-*}(\widetilde{M}^{10}_0) $ with $\widetilde{M}^{10}_0$ the universal cover of $M_0^{10}$. Quantities, e.g. ones appearing in the effective action and the corresponding partition function, should be formulated using this fundamental class. There is a duality formula for the Whitehead torsion which takes into account the orientation character. Let $Y^{11}$ be an eleven-dimensional $h$-cobordism and let $\omega: \Gamma \to \{\pm 1\}$ be the orientation character. This gives rise to an anti-involution on the integral group ring ${{\mathbb Z}}[{\Gamma}]$ by sending a group element $g$ to $\omega(g)\, g^{-1}$, as above, and hence leads to an involution $*$ on the Whitehead group ${\rm Wh}({\Gamma})$. Then Milnor’s duality formula is cast as $ \tau (Y^{11}, M'^{10})= \tau (Y^{11}, M^{10})^*\;. $ #### Effect on F-theory. Recently there has been a lot of research activity in model building using F-theory (see [@De] and references therein). F-theory can be considered as a limit of M-theory on a 2-torus when the volume of the two-torus becomes very small.
This means that constraints on the possible fundamental groups of $Y^{11}$, assumed to have a 2-torus factor, will have an effect on the possible fundamental groups of the space on which F-theory is considered. Nontrivial fundamental groups in this context are considered in [@Br2]. Therefore, we expect that our discussion in the heterotic/M-theory setting will have, via duality, consequences for fundamental groups in F-theory. This is strengthened by the fact that in a class of models which admit perturbative heterotic duals, the F-theory and heterotic computations match [@DW]. It would be interesting to perform explicit checks of this in relevant examples. Compactification {#sec com} ---------------- We have considered in general the relation between M-theory on a general eleven-manifold and heterotic string theory on a general ten-manifold $M_0^{10}$. There are two aspects to this. First, for consistency the theory should make sense on any admissible manifold, and so studying this might give insight into understanding the theory further. Second, there are certain favorable types of spaces for model building. We have in mind that $M_0^{10}$ is a product (or a bundle) of a Calabi-Yau threefold $X^6$ with a four-dimensional spacetime. In general, the latter can be taken to be a general four-manifold that solves the equations of motion and does/does not break supersymmetry according to the goal one has in mind. It can be taken to be flat Minkowski or something close. We study such situations in this section and consider whether the choice of four-dimensional spacetime changes the discussion we have had so far. We take M-theory on an eleven-manifold $Y^{11}=Z^7 \times N^4$, where $N^4$ is spacetime and $Z^7$ is a seven-dimensional cobordism of the Calabi-Yau threefold $X^6$.
This always exists because the Stiefel-Whitney numbers of a Calabi-Yau threefold are zero: $w_1=0$ because of orientation, $w_2=0$ because of Spin, and $w_3=0$ because both $w_1$ and $w_2$ are zero; then the Stiefel-Whitney numbers $w_1w_5[X^6]$, $w_2w_4[X^6]$, and $w_3w_3[X^6]$ are all zero. The heterotic ten-manifold is of the form $M_0^{10}=X^6 \times N^4$. #### The $h$-cobordism of a product. Let $(Z^7; X_0^6, X_1^6)$ be a seven-dimensional $h$-cobordism for the Calabi-Yau threefold $X_0^6$, and let $N^4$ be a closed four-manifold. Then we can form an eleven-dimensional $h$-cobordism $(Z^7 \times N^4; X_0^6 \times N^4, X_1^6\times N^4)$. From the cut and paste properties of the Whitehead torsion (see [@Mil] [@We] [@KL]), we get that the torsions are related as follows $ \tau (Z^7 \times N^4, X_0^6 \times N^4)= \tau (Z^7, X_0^6)~ \chi (N^4)\;, \label{eq zcs} $ where $\chi(N^4)$ is the Euler characteristic of $N^4$. Thus the value of this invariant will determine whether we can relate the discussion of torsion in eleven/ten dimensions to that in seven/six dimensions. The former is the global picture we have built so far, and the latter corresponds to the actual situation studied in model building, that is, the fundamental groups appearing as examples are those of $X^6$ and not (necessarily) of $M^{10}$. If spacetime were compact and odd-dimensional then the Euler characteristic would vanish identically. In that case, the torsion would vanish. For example, if we take spacetime to be the circle $S^1$ then $Z^7 \times S^1 \approx X_0^6 \times S^1 \times [0,1] \approx X_1^6 \times S^1 \times [0,1]$, i.e. the torsion vanishes. In particular, this gives $X_0^6 \times S^1 \approx X_1^6 \times S^1$. #### Product with a torus and Wall’s finiteness obstruction. The circle $S^1$ has fundamental group $\pi_1(S^1)\cong {{\mathbb Z}}$. If we consider the product $S^1 \times Y$, then what is the corresponding Whitehead group in terms of that of the factors?
There is in fact a direct sum decomposition [@BHS] $ {\rm Wh}({{\mathbb Z}}\times \Gamma) \cong {\rm Wh}(\Gamma) \oplus \widetilde{K}_0({{\mathbb Z}}[\Gamma]) \oplus N$, where $N$ is a sum of Nil-groups. For the 2-torus with fundamental group ${{\mathbb Z}}^2$, the process can be repeated. It might seem that for this product we can have a nonzero Whitehead group for the product manifold even though that group for the factors is zero. However, the element of $\widetilde{K}_0({{\mathbb Z}}[\pi_1(X)])$ called Wall’s finiteness obstruction detects whether or not $X^6$ has the homotopy type of a finite CW-complex. If we are within the category of such spaces then this element of the class group vanishes. #### Spacetime with flat structure. A manifold admits a flat structure if the tangent bundle is isomorphic to a flat vector bundle, i.e. admits a flat connection. Even for such manifolds, one can have nonzero Euler characteristic. For example, take the connected sum $N^4=(\Sigma_3 \times \Sigma_3)~ \#_{i=1}^6(S^1 \times S^3)$, where $\Sigma_3$ is a surface of genus 3. The product $\Sigma_3 \times \Sigma_3$ is almost parallelizable and the product of spheres $S^1 \times S^3$ is parallelizable. Then the Euler characteristic is $\chi(N^4)=4$ (see [@Sm]). In this example, the fundamental group is the free product $\pi_1(N^4)={\Gamma}_1 \ast {\Gamma}_2$, where ${\Gamma}_1$ is the direct product of two copies of a non-abelian surface group and ${\Gamma}_2$ is free of rank 6. In fact, $S^1 \times S^3$ can be replaced by any parallelizable four-manifold. #### Compact vs. noncompact spacetime. So far we have taken $N^4$ to be compact. For compact manifolds, the existence of a smooth Lorentzian metric is equivalent to the manifold having a vanishing Euler characteristic (see [@St]). However, the situation gets modified in the presence of singularities (see [@Ma]). What if it is not compact?
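The value $\chi(N^4)=4$ follows from two standard facts: multiplicativity, $\chi(A\times B)=\chi(A)\,\chi(B)$, and the connected-sum formula for closed 4-manifolds, $\chi(A\# B)=\chi(A)+\chi(B)-\chi(S^4)=\chi(A)+\chi(B)-2$. A quick check (our own illustration):

```python
def chi_surface(genus):
    # Euler characteristic of a closed orientable surface
    return 2 - 2 * genus

def chi_connected_sum_4d(parts):
    # chi(A # B) = chi(A) + chi(B) - chi(S^4) for closed 4-manifolds; chi(S^4) = 2
    chi = parts[0]
    for p in parts[1:]:
        chi += p - 2
    return chi

chi_sigma3 = chi_surface(3)                     # chi(Sigma_3) = -4
chi_s1_x_s3 = 0 * 2                             # chi(S^1) = 0, so chi(S^1 x S^3) = 0

# N^4 = (Sigma_3 x Sigma_3) # 6 copies of (S^1 x S^3)
chi_n4 = chi_connected_sum_4d([chi_sigma3 * chi_sigma3] + [chi_s1_x_s3] * 6)
assert chi_n4 == 4                              # chi(N^4) = 4, as quoted
```

With this value, the product relation above gives $\tau(Z^7\times N^4,\, X_0^6\times N^4)=4\,\tau(Z^7, X_0^6)$ for this choice of spacetime.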
Noncompact spacetimes are more desirable for the purpose of equipping spacetime with a Lorentzian structure; all noncompact manifolds admit a Lorentzian metric. On the other hand, every noncompact manifold admits vector fields with any specified set of isolated zeros. This suggests that noncompact manifolds with nonzero (appropriate notion of) [^4] Euler characteristic are abundant. Note that for noncompact Riemann surfaces, the Cohn-Vossen theorem gives the inequality $\int_\Sigma K dA \leq 2\pi \chi(\Sigma)$ (see e.g. [@Ku]). In general one works with $L^2$-Euler characteristics. For example, the Euler characteristic of an Asymptotically Locally Euclidean (ALE) space corresponding to the Lie algebra of type $A_n$ is $n+1$. It is important to note that it should be checked whether the product formula for the torsion extends to the noncompact case. Furthermore, strictly speaking, in the noncompact case we have to use the noncompact version of the $s$-cobordism theorem, for which the Whitehead torsion lives in a new group, which fits into an exact sequence involving the Whitehead group and algebraic $K_0$, as well as information about the ends [@Si2]. Some aspects of the behavior of ends in M-theory are discussed in [@DMW-corner]. In the following few paragraphs we describe a way of studying the Whitehead torsion via other invariants, namely the Reidemeister torsion [@Mil] and the Ray-Singer torsion [@RS]. This then provides a setting for making some direct connections to phenomenology. #### Relation of the Whitehead torsion to Reidemeister torsion. The Whitehead torsion $\tau$ is closely related to Reidemeister torsion (or R-torsion) $\Delta$; the former generalizes the latter but is a more delicate invariant. Algebraically, the Whitehead torsion is more general than R-torsion in that it is also defined for noncommutative rings (such as the group ring of the fundamental group ${{\mathbb Z}}[\pi_1(X)]$) for which the determinant, needed for the R-torsion, is not defined.
The R-torsion is a topological invariant which distinguishes spaces which are homotopy equivalent but not homeomorphic, and is defined for spaces whose fundamental group $\pi$ is finite and for which the homology with coefficients in a certain $\pi$-representation vanishes. The R-torsion is defined in more general situations than Whitehead torsion, since any homotopy equivalence is a homology equivalence. Furthermore, R-torsion has two advantages over the Whitehead torsion: \(i) It is more likely to be defined. \(ii) Its value is an honest real number, instead of being an element of a somewhat esoteric group. On the other hand, when defined, the Whitehead torsion is a sharper invariant. When they are both defined, the R-torsion is a function of the Whitehead torsion. That is, for each unitary (orthogonal) representation $\rho$ of the fundamental group $\pi$, the R-torsion is the absolute value of the determinant of the complex (real) matrix induced by $\rho$ from any matrix representation of the Whitehead torsion. One can find a useful criterion for when the Whitehead torsion is zero by studying the R-torsion. For concreteness, let $h : \pi_1(M^{10}) \to O(n)$ be an orthogonal representation of the fundamental group $\pi=\pi_1(M^{10})$. Then $h$ extends to a unique homomorphism from the group ring ${{\mathbb Z}}[\pi]$ to the ring ${\cal M}_n({{\mathbb R}})$ of all real $n \times n$ matrices and determines a homomorphism $h_*: {\rm Wh}(\pi) \to \overline{K}_1({{\mathbb R}}) \cong {{\mathbb R}}^+$. Suppose that the Whitehead torsion $\tau(Y^{11}; M^{10}) \in {\rm Wh}(\pi)$ is defined and suppose that $\pi$ is a finite group. Then it follows from the identity relating the two torsions [@Mil] $ \Delta_h(Y^{11}; M^{10})= h_*\tau (Y^{11}; M^{10}) $ that $\tau(Y^{11}; M^{10})$ is an element of finite order in ${\rm Wh}(\pi)$ if and only if the R-torsion is $\Delta_h(Y^{11}; M^{10})=1$ for all possible orthogonal representations $h$ of $\pi$.
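For the earlier example ${\Gamma}={{\mathbb Z}}_5$ with unit $u=1-t+t^2$, this criterion is easy to evaluate numerically: take the 2-dimensional orthogonal representation $h(t)=$ rotation by $2\pi/5$, extend linearly to the group ring, and compute $|\det h(u)|$. A sketch of our own:

```python
import math

def rotation(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_add(A, B, sign=1):
    return [[A[i][j] + sign * B[i][j] for j in range(2)] for i in range(2)]

def det2(A):
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

I = [[1.0, 0.0], [0.0, 1.0]]
R = rotation(2 * math.pi / 5)          # h(t): orthogonal representation of Z_5

# h(u) for u = 1 - t + t^2, extending h linearly to the group ring
hu = mat_add(mat_add(I, R, -1), mat_mul(R, R))
d = abs(det2(hu))

# |det h(u)| = |alpha(u)|^2 = (2 cos(2 pi/5) - 1)^2, approximately 0.146 != 1,
# so h_* tau(u) != 1 and tau(u) has infinite order in Wh(Z_5)
assert abs(d - (2 * math.cos(2 * math.pi / 5) - 1) ** 2) < 1e-9
assert d < 0.99
```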
If $\pi$ is finite abelian, then $\tau (Y^{11}; M^{10})=0$ if and only if $\Delta_h(Y^{11}; M^{10})=1$ for all possible such representations $h$. Since the R-torsion is easier to calculate, this gives a concrete way of checking whether the Whitehead torsion vanishes without having to go through the difficult task of calculating it explicitly. #### Examples of when R-torsion is defined and the Whitehead torsion is not. There are examples in which the Whitehead torsion cannot be defined but the R-torsion can (see [@Mil]). For instance, the Whitehead torsion $\tau (S^1)$ of the circle $S^1$ cannot be defined since the module $H_0(\hat{S}^1)$ for the universal cover $\hat{S}^1$ is not zero, and is not a free ${{\mathbb Z}}[\pi]$-module. On the other hand, the R-torsion is defined; if the homomorphism $h$ from the fundamental group $\pi_1(S^1)$ to the units $\mathbb{F}^\times$ in a field $\mathbb{F}$ maps a generator to the field element $x\neq 1$, then the associated R-torsion $\Delta_h(S^1) \in \mathbb{F}^\times /\pm h(\pi_1)$ is well-defined and equal to $1-x$, up to multiplication by $\pm x^{m}$ for some $m\in {{\mathbb Z}}$. Another example is a knot complement $X$ in the 3-sphere with $h: \pi_1(X) \to \mathbb{F}^\times$ mapping each loop with linking number $+1$ to the field element $x \neq 1$. Then the R-torsion is well-defined, and is equal to $(1-x)/A(x)$, where $A$ is the Alexander polynomial of the knot. #### Effect on phenomenology. The Ray-Singer torsion, which is an analytic analog of R-torsion and which coincides with it for compact Riemannian manifolds, has direct physical applications. The Ray-Singer torsion can be defined using determinants of Laplacians. In this form it has a natural connection to one-loop amplitudes. For example, this torsion governs the threshold corrections for the heterotic string [@BCOF]. 
In M-theory compactifications on manifolds with $G_2$ holonomy, the GUT scale $M_{\rm GUT}$ is essentially given by the Ray-Singer torsion $\Delta_{RS}(\Sigma)$ via $M_{\rm GUT}^3= \Delta_{RS}(\Sigma)/V_\Sigma$, where $V_\Sigma$ is the volume of the corresponding 3-cycle $\Sigma$ [@FW]. For example, when $\Sigma=S^3/{{\mathbb Z}}_q$ is a lens space, on which there is a Wilson line of eigenvalues $\left( e^{2\pi i(2m/q)}, e^{2\pi i(2m/q)}, e^{2 \pi i(m/q)}, e^{-2\pi i(3m/q)}, e^{-2\pi i(3m/q)} \right)$ with $m$ and $q$ coprime integers, then the Ray-Singer torsion for the lens space is $\Delta_{RS}(\Sigma)=4q\sin^2(5\pi m/q)$. Now, the more delicate Whitehead torsion can be partially studied by considering the R-torsion (or Ray-Singer torsion) as above. It should be an obstruction to supersymmetry in heterotic M-theory. The breaking scale would be the intermediate 5-dimensional scale, and only gravitationally mediated to the visible sector. It would be interesting to see how this works explicitly. #### Higher-dimensional compactifications. If we take our eleven-manifold $Y^{11}$ to be a product of two manifolds, where the internal manifold is of dimension lower than 6, then we can no longer apply the $s$-cobordism arguments we have been using. In particular, the $s$-cobordism theorem fails in dimension five, and it is an open problem in dimension four (see [@CS] [@Ch]). For example, there exists an $h$-cobordism $(W^5, T^4, T^4)$, where $T^4$ is the four-dimensional torus, for which there is no diffeomorphism from $W^5$ to $T^4 \times [0,1]$. Since ${\rm Wh}(\pi_1(T^4))=0$, the $s$-cobordism theorem indeed fails in five dimensions. For topological spaces, the theorem fails in both four and five dimensions [@Si]; one might say that we could apply the $s$-cobordism theorem in this case to the spacetime part rather than the internal part, now that spacetime has grown to admissible dimensions. 
This certainly can be done and will give consistency conditions depending on fundamental groups of spacetime (the arguments we have outlined will go through with the obvious changes). However, we would then not be studying fundamental groups for purposes of particle physics but rather for purposes of cosmology. The author would like to thank Jonathan Rosenberg for useful discussions on the Whitehead torsion and Kenji Fukaya and the referee for useful comments. He also acknowledges the hospitality of the Department of Physics and the Department of Mathematics at the National University of Singapore where part of this work was done. [99]{} R. C. Alperin, R. K. Dennis, R. Oliver, and M. R. Stein, [*$SK_1$ of finite abelian groups II*]{}, Invent. Math. [**87**]{} (1987), no. 2, 253–302. R. C. Alperin, R. K. Dennis and M. R. Stein, [*The nontriviality of $SK_1({{\mathbb Z}}\pi)$*]{}, in Orders, Group Rings and Related Topics, Lecture Notes in Math., vol. 353, Springer-Verlag, New York, 1973, 1–7. A. Bak, V. Bouchard, and R. Donagi, [*Exploring a new peak in the heterotic landscape*]{}, J. High Energy Phys. [**06**]{} (2010) 108, 1–31, \[[arXiv:0811.1242]{}\] \[[hep-th]{}\]. H. Bass, A. Heller, and R. Swan, [*The Whitehead group of a polynomial extension*]{}, Publ. Math. Inst. Hautes Études Sci. [**22**]{} (1964) 61–79. V. Batyrev and M. Kreuzer, [*Integral cohomology and mirror symmetry for Calabi-Yau 3-folds*]{}, Mirror symmetry V, 255–270, AMS/IP Stud. Adv. Math., 38, Amer. Math. Soc., Providence, RI, 2006, \[[math.AG/0505432]{}\]. M. Bershadsky, S. Cecotti, H. Ooguri, and C. Vafa, [*Kodaira-Spencer theory of gravity and exact results for quantum string amplitudes*]{}, Commun. Math. Phys. [**165**]{} (1994) 311–428, \[[arXiv:hep-th/9309140]{}\]. L. Borisov and Z. Hua, [*On Calabi-Yau threefolds with large nonabelian fundamental groups*]{}, Proc. Amer. Math. Soc. [**136**]{} (2008), no. 5, 1549–1551, \[[arXiv:math/0609728]{}\] \[[math.AG]{}\]. V. Bouchard and R. 
Donagi, [*An SU(5) heterotic standard model*]{}, Phys. Lett. [**B633**]{} (2006) 783–791, \[[arXiv:hep-th/0512149]{}\]. V. Bouchard and R. Donagi, [*On a class of non-simply connected Calabi-Yau threefolds*]{}, Comm. Numb. Theor. Phys. [**2**]{} (2008) 1–61, \[[arXiv:0704.3096]{}\] \[[math.AG]{}\]. V. Braun, [*On free quotients of complete intersection Calabi-Yau manifolds*]{}, \[[arXiv:1003.3235]{}\] \[[hep-th]{}\]. V. Braun, [*Discrete Wilson lines in F-theory*]{}, \[[arXiv:1010.2520]{}\] \[[hep-th]{}\]. V. Braun, Y.-H. He, B. A. Ovrut, and T. Pantev, [*The exact MSSM spectrum from string theory*]{}, J. High Energy Phys. [**0605**]{} (2006) 043, \[[arXiv:hep-th/0512177]{}\]. V. Braun, M. Kreuzer, B. A. Ovrut, and E. Scheidegger, [*Worldsheet instantons and torsion curves, part A: Direct computation*]{}, J. High Energy Phys. [**0710**]{} (2007) 022, \[[arXiv:hep-th/0703182]{}\]. V. Braun, B. A. Ovrut, T. Pantev, and R. Reinbacher, [*Elliptic Calabi-Yau threefolds with ${{\mathbb Z}}_3 \times {{\mathbb Z}}_3$ Wilson lines*]{}, J. High Energy Phys. [**0412**]{} (2004) 062, \[[arXiv:hep-th/0410055]{}\]. J. D. Breit, B. A. Ovrut, and G. C. Segre, [*$E_6$ symmetry breaking in the superstring theory*]{}, Phys. Lett. [**B158**]{} (1985) 33–39. P. Candelas and R. Davies, [*New Calabi-Yau manifolds with small Hodge numbers*]{}, \[[arXiv:0809.4681]{}\] \[[hep-th]{}\]. S. Cappell and J. Shaneson, [*On 4-dimensional $s$-cobordisms*]{}, J. Differential Geom. [**22**]{} (1985), no. 1, 97–115. W. Chen, [*Smooth $s$-cobordisms of elliptic 3-manifolds*]{}, J. Differential Geom. [**73**]{} (2006), no. 3, 413–490. M. M. Cohen, A Course in Simple-Homotopy Theory, GTM 10, Springer-Verlag, New York-Berlin, 1973. M. Davis and J.-C. Hausmann, [*Aspherical manifolds without smooth or PL structure*]{}, Lect. Notes in Math., Vol. 1370, Springer-Verlag, New York, 1989, pp. 135-142. F. Denef, [*Les Houches lectures on constructing string vacua*]{}, \[[arXiv:0803.1194]{}\] \[[hep-th]{}\]. E. 
Diaconescu, D. S. Freed and G. Moore, [*The M-theory 3-form and $E_8$ gauge theory*]{}, Elliptic cohomology, 44–88, Cambridge Univ. Press, Cambridge, 2007, \[[arXiv:hep-th/0312069]{}\]. R. Donagi, P. Gao, and M. B. Schulz, [*Abelian fibrations, string junctions, and flux/geometry duality*]{}, \[[arXiv:0810.5195]{}\] \[[hep-th]{}\]. R. Donagi, B. A. Ovrut, T. Pantev, and R. Reinbacher, [*SU(4) instantons on Calabi-Yau threefolds with ${{\mathbb Z}}_2 \times {{\mathbb Z}}_2$ fundamental group*]{}, J. High Energy Phys. [**0401**]{} (2004) 022, \[[arXiv:hep-th/0307273]{}\]. R. Donagi, B. A. Ovrut, T. Pantev, and D. Waldram, [*Standard models from heterotic M-theory*]{}, Adv. Theor. Math. Phys. [**5**]{} (2002) 93–137, \[[arXiv:hep-th/9912208]{}\]. R. Donagi and M. Wijnholt, [*Model building with F-theory*]{}, \[[arXiv:0802.2969]{}\] \[[hep-th]{}\]. M. Fabinger and P. Horava, [*Casimir effect between world-branes in heterotic M-theory*]{}, Nucl. Phys. [**B580**]{} (2000) 243–263, \[[arXiv:hep-th/0002073]{}\]. F. T. Farrell and L. E. Jones, [*Topological rigidity for compact nonpositively curved manifolds*]{}, Proc. Sympos. Pure Math. [**54**]{}, Part 3, American Mathematical Society, Providence, R.I. (1993), 229–274. D. S. Freed and G. W. Moore, [*Setting the quantum integrand of M-theory*]{}, Commun. Math. Phys. [**263**]{} (2006) 89–132, \[[arXiv:hep-th/0409135]{}\]. T. Friedmann and E. Witten, [*Unification scale, proton decay, and manifolds of $G_2$ holonomy*]{}, Adv. Theor. Math. Phys. [**7**]{} (2003) 577–617, \[[arXiv:hep-th/0211269]{}\]. M. B. Green, J. H. Schwarz, and E. Witten, [Superstring Theory]{}, vol. 2, Cambridge Univ. Press, Cambridge, 1988. M. Gross and S. Popescu, [*Calabi-Yau threefolds and moduli of abelian surfaces I*]{}, Compositio Math. [**127**]{} (2001), 169–228. Y.-H. He, [*An algorithmic approach to heterotic string phenomenology*]{}, Mod. Phys. Lett. A [**25**]{} (2010) 79–90, \[[arXiv:1001.2419]{}\] \[[hep-th]{}\]. G. Heier, S. S. Y. Lu, and B. 
Wong, [*On the canonical line bundle and negative holomorphic sectional curvature*]{}, Math. Res. Lett. [**17**]{} (2010) 1101–1110. P. Horava and E. Witten, [*Heterotic and type I string dynamics from eleven dimensions*]{}, Nucl. Phys. [**B460**]{} (1996) 506–524, \[[arXiv:hep-th/9510209]{}\]. P. Horava and E. Witten, [*Eleven-dimensional supergravity on a manifold with boundary*]{}, Nucl. Phys. [**B475**]{} (1996) 94–114, \[[arXiv:hep-th/9603142]{}\]. M. E. Keating, [*On the K-theory of the quaternion group*]{}, Mathematika [**20**]{} (1973) 59–62. M. Kervaire, [*Le théorème de Barden-Mazur-Stallings*]{}, Comment. Math. Helv. [**40**]{} (1965) 31–42. M. Kreck and W. Lück, The Novikov Conjecture: Geometry and Algebra, Birkhäuser, Basel, 2005. W. Kühnel, Differential Geometry: Curves – Surfaces – Manifolds, Amer. Math. Soc., Providence, RI, 2006. K. W. Kwun, [*Transfer homomorphisms of Whitehead groups of some cyclic groups*]{}, Amer. J. Math. [**93**]{} (1971) 310–316. K. W. Kwun, [*Transfer homomorphisms of Whitehead groups of some cyclic groups II*]{}, Lecture Notes in Math. [**298**]{}, 437–440, Springer, Berlin, 1972. B. Magurn, [*$SK_1$ of dihedral groups*]{}, J. Algebra [**51**]{} (1978), no. 2, 399–415. L. Markus, [*Line element fields and Lorentz structures on differentiable manifolds*]{}, Ann. of Math. (2) [**62**]{} (1955), 411–417. B. Mazur, [*Relative neighborhoods and the theorems of Smale*]{}, Ann. of Math. (2) [**77**]{} (1963) 232–249. J. Milnor, Lectures on the $h$-cobordism theorem, Princeton Univ. Press, Princeton, NJ, 1965. J. Milnor, [*Whitehead torsion*]{}, Bull. Amer. Math. Soc. [**72**]{} (1966) 358–426. R. Oliver, Whitehead groups of finite groups, Cambridge Univ. Press, Cambridge, 1988. R. Rares, [*On the topology and differential geometry of Kähler threefolds*]{}, PhD Dissertation, Stony Brook University, 2005. D. B. Ray and I. M. Singer, [*Analytic torsion for complex manifolds*]{}, Ann. Math. (2) [**98**]{} (1973) 154–177. J. 
Rosenberg, Algebraic K-theory and its Applications, Springer-Verlag, New York, 1994. H. Sati, [*Geometry of Spin and Spin${}^c$ structures in the M-theory partition function*]{} \[[arXiv:1005.1700]{}\] \[[hep-th]{}\]. H. Sati, [*Duality and cohomology in M-theory with boundary*]{}, \[[arXiv:1012.4495]{}\] \[[hep-th]{}\]. H. Sati, [*Corners in M-theory*]{}, J. Phys. [**A44**]{} (2011) 255402, \[[arXiv:1101.2793]{}\] \[[hep-th]{}\]. C. Schoen, [*On fiber products of rational elliptic surfaces with section*]{}, Math. Zeitschrift [**197**]{}(2) (1988) 177–199. M. B. Schulz, [*Calabi-Yau duals of torus orientifolds*]{}, J. High Energy Phys. [**0605**]{} (2006) 023, \[[arXiv:hep-th/0412270]{}\]. V. V. Sharko, Functions on Manifolds: Algebraic and Geometric Aspects, American Mathematical Society, Providence, RI, 1993. L. C. Siebenmann, [*Disruption of low-dimensional handlebody theory by Rohlin’s theorem*]{}, Topology of Manifolds, 57–76, Markham, Chicago, Ill., 1970. L. C. Siebenmann, [*Infinite simple homotopy types*]{}, Indag. Math. [**32**]{} (1970) 479–495. J. Smillie, [*Flat manifolds with non-zero Euler characteristic*]{}, Comment. Math. Helv. [**52**]{} (1977), no. 3, 453–455. J. Stallings, [*On infinite processes leading to differentiability in the complement of a point*]{}, in Differential and Combinatorial Topology, 245–254, Princeton Univ. Press, Princeton, NJ, 1965. N. Steenrod, The Topology of Fiber Bundles, Princeton Univ. Press, Princeton, NJ, 1951. M. R. Stein, [*Whitehead groups of finite groups*]{}, Bull. Amer. Math. Soc. [**84**]{} (1978) 201–212. C. T. C. Wall, [*Norms of units in group rings*]{}, Proc. London Math. Soc. [**29**]{} (1974), 593–632. S. Weinberger, The Topological Classification of Stratified Spaces, Univ. of Chicago Press, Chicago, Ill., 1994. E. Witten, [*Symmetry breaking patterns in superstring models*]{}, Nucl. Phys. [**B258**]{} (1985) 75–100. E. Witten, [*Topological tools in 10-dimensional physics*]{}, Int. J. Mod. Phys. 
A [**1**]{} (1986) 39–64. [^1]: e-mail: [hsati@math.umd.edu]{}\ Current address: Department of Mathematics, University of Pittsburgh, 139 University Place, Pittsburgh, PA 15260. [^2]: A free quotient of the manifold corresponding to ${{\mathbb Z}}_3 \times {{\mathbb Z}}_3$ by the quaternion group is given in [@CD]. [^3]: $M_0^{10}$ does not necessarily have to be a manifold, but just a Poincaré duality complex. [^4]: Note that there are various definitions and versions of the Euler characteristic in the noncompact setting.
--- abstract: | In honor of Alan Turing’s hundredth birthday, I unwisely set out some thoughts about one of Turing’s obsessions throughout his life, the question of physics and free will.  I focus relatively narrowly on a notion that I call Knightian freedom: a certain kind of in-principle physical unpredictability that goes beyond probabilistic unpredictability. Other, more metaphysical aspects of free will I regard as possibly outside the scope of science. I examine a viewpoint, suggested independently by Carl Hoefer, Cristi Stoica, and even Turing himself, that tries to find scope for freedom in the universe’s boundary conditions rather than in the dynamical laws.  Taking this viewpoint seriously leads to many interesting conceptual problems.  I investigate how far one can go toward solving those problems, and along the way, encounter (among other things) the No-Cloning Theorem, the measurement problem, decoherence, chaos, the arrow of time, the holographic principle, Newcomb’s paradox, Boltzmann brains, algorithmic information theory, and the Common Prior Assumption.  I also compare the viewpoint explored here to the more radical speculations of Roger Penrose. The result of all this is an unusual perspective on time, quantum mechanics, and causation, of which I myself remain skeptical, but which has several appealing features.  Among other things, it suggests interesting empirical questions in neuroscience, physics, and cosmology; and takes a millennia-old philosophical debate into some underexplored territory. author: - 'Scott Aaronson[^1]' bibliography: - 'thesis.bib' title: The Ghost in the Quantum Turing Machine --- Postcard from Alan M. Turing to Robin Gandy, March 1954 (reprinted in Hodges [@hodges]) It reads, in part: > The Universe is the interior of the Light Cone of the Creation > > Science is a Differential Equation.  
Religion is a Boundary Condition “Arthur Stanley” refers to Arthur Stanley Eddington, whose books were a major early influence on Turing. Introduction\[INTRO\] ===================== When I was a teenager, Alan Turing was at the top of my pantheon of scientific heroes, above even Darwin, Ramanujan, Einstein, and Feynman.  Some of the reasons were obvious: the founding of computer science, the proof of the unsolvability of the Entscheidungsproblem, the breaking of the Nazi Enigma code, the unapologetic nerdiness and the near-martyrdom for human rights.  But beyond the facts of his biography, I idolized Turing as an über-reductionist: the scientist who had gone further than anyone before him to reveal the mechanistic nature of reality.  Through his discovery of computational universality, as well as the Turing Test criterion for intelligence, Turing finally unmasked the pretensions of anyone who claimed there was anything more to mind, brain, or the physical world than the unfolding of an immense computation.  After Turing, it seemed to me, one could assert with confidence that all our hopes, fears, sensations, and choices were just evanescent patterns in some sort of *cellular automaton*: that is, a huge array of bits, different in detail but not in essence from Conway’s famous Game of Life,[^2] getting updated in time by simple, local, mechanistic rules. So it’s striking that Turing’s own views about these issues, as revealed in his lectures as well as private correspondence, were much more complicated than my early caricature.  As a teenager, Turing devoured the popular books of Sir Arthur Eddington, who was one of the first (though not, of course, the last!) to speculate about the implications of the then-ongoing quantum revolution in physics for ancient questions about mind and free will.  
Later, as a prize from his high school in 1932, Turing selected John von Neumann’s just-published *Mathematische Grundlagen der Quantenmechanik* [@vonneumann:qm]: a treatise on quantum mechanics famous for its mathematical rigor, but also for its perspective that the collapse of the wavefunction ultimately involves the experimenter’s mental state.  As detailed by Turing biographer Andrew Hodges [@hodges], these early readings had a major impact on Turing’s intellectual preoccupations throughout his life, and probably even influenced his 1936 work on the theory of computing. Turing also had a more personal reason for worrying about these deep questions.  In 1930, Christopher Morcom—Turing’s teenage best friend, scientific peer, and (probably) unrequited love—died from tuberculosis, sending a grief-stricken Turing into long ruminations about the nature of personal identity and consciousness.  Let me quote from a remarkable disquisition, entitled Nature of Spirit, that the 19-year-old Turing sent in 1932 to Christopher Morcom’s mother. > It used to be supposed in Science that if everything was known about the Universe at any particular moment then we can predict what it will be through all the future.  This idea was really due to the great success of astronomical prediction.  More modern science however has come to the conclusion that when we are dealing with atoms and electrons we are quite unable to know the exact state of them; our instruments being made of atoms and electrons themselves.  The conception then of being able to know the exact state of the universe then really must break down on the small scale.  This means then that the theory which held that as eclipses etc. are predestined so were all our actions breaks down too.  We have a will which is able to determine the action of the atoms probably in a small portion of the brain, or possibly all over it.  The rest of the body acts so as to amplify this.  
(Quoted in Hodges [@hodges]) The rest of Turing’s letter discusses the prospects for the survival of the spirit after death, a topic with obvious relevance to Turing at that time.  In later years, Turing would eschew that sort of mysticism.  Yet even in a 1951 radio address *defending* the possibility of human-level artificial intelligence, Turing still brought up Eddington, and the possible limits on prediction of human brains imposed by the uncertainty principle: > If it is accepted that real brains, as found in animals, and in particular in men, are a sort of machine it will follow that our digital computer suitably programmed, will behave like a brain.  \[But the argument for this conclusion\] involves several assumptions which can quite reasonably be challenged.  \[It is\] necessary that this machine should be of the sort whose behaviour is in principle predictable by calculation.  We certainly do not know how any such calculation should be done, and it was even argued by Sir Arthur Eddington that on account of the indeterminacy principle in quantum mechanics no such prediction is even theoretically possible.[^3] (Reprinted in Shieber [@shieber:book]) Finally, two years after his sentencing for homosexual indecency, and a few months before his tragic death by self-poisoning, Turing wrote the striking aphorisms that I quoted earlier: The universe is the interior of the light-cone of the Creation.  Science is a differential equation.  Religion is a boundary condition. The reason I’m writing this essay is that I *think* I now understand what Turing could have meant by these remarks.  Building on ideas of Hoefer [@hoefer], Stoica [@stoica], and others, I’ll examine a perspective—which I call the freebit perspective, for reasons to be explained later—that locates a nontrivial sort of freedom in the universe’s boundary conditions, even while embracing the mechanical nature of the time-evolution laws.  
We’ll find that a central question, for this perspective, is how well complicated biological systems like human brains can *actually* be predicted: not by hypothetical Laplace demons, but by prediction devices compatible with the laws of physics.  It’s in the discussion of this predictability question (and *only* there) that quantum mechanics enters the story. Of course, the idea that quantum mechanics might have *something* to do with free will is not new; neither are the problems with that idea or the controversy surrounding it.  While I chose Turing’s postcard for the opening text of this essay, I also could have chosen a striking claim by Niels Bohr, from a 1932 lecture about the implications of Heisenberg’s uncertainty principle: > \[W\]e should doubtless kill an animal if we tried to carry the investigation of its organs so far that we could tell the part played by the single atoms in vital functions.  In every experiment on living organisms there must remain some uncertainty as regards the physical conditions to which they are subjected, and the idea suggests itself that the minimal freedom we must allow the organism will be just large enough to permit it, so to say, to hide its ultimate secrets from us.  (Reprinted in [@bohr]) Or this, from the physicist Arthur Compton: > A set of known physical conditions is not adequate to specify precisely what a forthcoming event will be.  These conditions, insofar as they can be known, define instead a range of possible events from among which some particular event will occur.  When one exercises freedom, by his act of choice he is himself adding a factor not supplied by the physical conditions and is thus himself determining what will occur.  That he does so is known only to the person himself.  From the outside one can see in his act only the working of physical law. [@compton] I want to know: > *Were Bohr and Compton right or weren’t they?  
Does quantum mechanics (specifically, say, the No-Cloning Theorem or the uncertainty principle) put interesting limits on an external agent’s ability to scan, copy, and predict human brains and other complicated biological systems, or doesn’t it?* Of course, one needs to spell out carefully what one means by interesting limits, an external agent, the ability to scan, copy, and predict, and so forth.[^4]  But once that’s done, I regard the above as an unsolved scientific question, and a big one.  Many people seem to think the answer is obvious (though they disagree on what it is!), or else they reject the question as meaningless, unanswerable, or irrelevant. In this essay I’ll argue strongly for a different perspective: that we can easily imagine worlds consistent with quantum mechanics (and all other known physics and biology) where the answer to the question is yes, and other such worlds where the answer is no.  And we don’t yet know which kind we live in.  The most we can say is that, like $\mathsf{P}$ versus $\mathsf{NP}$ or the nature of quantum gravity, the question is well beyond our *current* ability to answer. Furthermore, the two kinds of world lead, not merely to different philosophical stances, but to different visions of the remote future.  Will our descendants all choose to upload themselves into a digital hive-mind, after a technological singularity that makes such things possible?  Will they then multiply themselves into trillions of perfect computer-simulated replicas, living in various simulated worlds of their own invention, inside of which there might be further simulated worlds with still more replicated minds?  What will it be *like* to exist in so many manifestations: will each copy have its own awareness, or will they comprise a single awareness that experiences trillions of times more than we do?  Supposing all this to be possible, is there any reason why our descendants might want to hold back on it? 
Now, if it turned out that Bohr and Compton were wrong—that human brains were as probabilistically predictable by external agents as ordinary digital computers equipped with random-number generators—then the freebit picture that I explore in this essay would be falsified, to whatever extent it says anything interesting.  It should go without saying that I see the freebit picture’s vulnerability to future empirical findings as a feature rather than a bug. In summary, I’ll make no claim to show here that the freebit picture is *true*.  I’ll confine myself to two weaker claims: 1. That the picture is *sensible* (or rather, not obviously much crazier than the alternatives): many considerations one might think would immediately make a hash of this picture, fail to do so for interesting reasons. 2. That the picture is *falsifiable*: there are open empirical questions that need to turn out one way rather than another, for this picture to stand even a *chance* of working. I ask others to take this essay as I do: as an exercise in what physicists call model-building.  I want to see *how far I can get* in thinking about time, causation, predictability, and quantum mechanics in a certain unusual but apparently-consistent way.  Resolving the millennia-old free will debate isn’t even on the table!  The most I can hope for, if I’m lucky, is to construct a model whose strengths and weaknesses help to move the debate slightly forward. Free Will Versus Freedom\[FWFREEDOM\] ------------------------------------- There’s one terminological issue that experience has shown I need to dispense with before anything else.  In this essay, I’ll sharply distinguish between free will and another concept that I’ll call freedom, and will mostly concentrate on the latter. 
By free will, I’ll mean a metaphysical attribute that I hold to be largely outside the scope of science—and which I can’t even *define* clearly, except to say that, if there’s an otherwise-undefinable thing that people have tried to get at for centuries with the phrase free will, then free will is that thing!  More seriously, as many philosophers have pointed out, free will seems to combine two distinct ideas: first, that your choices are free from any kind of external constraint; and second, that your choices are not arbitrary or capricious, but are willed by you.  The second idea—that of being willed by you—is the one I consider outside the scope of science, for the simple reason that no matter what the empirical facts were, a skeptic could always deny that a given decision was really yours, and hold the true decider to have been God, the universe, an impersonating demon, etc.  I see no way to formulate, in terms of observable concepts, what it would even mean for such a skeptic to be right or wrong. But crucially, the situation seems different if we set aside the will part of free will, and consider only the free part.  Throughout, I’ll use the term *freedom*, or *Knightian freedom*, to mean a certain strong kind of physical unpredictability: a lack of determination, even probabilistic determination, by knowable external factors.  That is, a physical system will be free if and only if it’s unpredictable in a sufficiently strong sense, and freedom will simply be that property possessed by free systems.  A system that’s not free will be called mechanistic. Many issues arise when we try to make the above notions more precise.  
For one thing, we need a definition of unpredictability that does *not* encompass the merely probabilistic unpredictability of (say) a photon or a radioactive atom—since, as I’ll discuss in Section \[KNIGHTIANPHYS\], I accept the often-made point that *that* kind of unpredictability has nothing to do with what most people would call freedom, and is fully compatible with a system’s being mechanistic.  Instead, we’ll want what economists call Knightian unpredictability, meaning unpredictability that we lack a reliable way even to quantify using probability distributions.  Ideally, our criteria for Knightian unpredictability will be so stringent that they won’t encompass systems like the Earth’s weather—for which, despite the presence of chaos, we arguably *can* give very well-calibrated probabilistic forecasts. A second issue is that unpredictability seems observer-relative: a system that’s unpredictable to one observer might be perfectly predictable to another.  This is the reason why, throughout this essay, I’ll be interested less in particular methods of prediction than in the *best* predictions that could ever be made, consistent both with the laws of physics and with the need not to destroy the system being studied. This brings us to a third issue: it’s not obvious what should count as destroying the system, or which interventions a would-be predictor should or shouldn’t be allowed to perform.  For example, in order to ease the prediction of a human brain, should a predictor first be allowed to replace each neuron by a functionally-equivalent microchip?  How would we decide whether the microchip *was* functionally equivalent to the neuron? I’ll offer detailed thoughts about these issues in Appendix \[MEAN\].  For now, though, the point I want to make is that, once we *do* address these issues, it seems to me that freedom—in the sense of Knightian unpredictability by any external physical observer—is perfectly within the scope of science.  
We’re no longer talking about ethics, metaphysics, or the use of language: only about whether such-and-such a system is or isn’t physically predictable in the relevant way!  A similar point was recently made forcefully by the philosopher Mark Balaguer [@balaguer], in his interesting book *Free Will as an Open Scientific Problem*.  (However, while I strongly agree with Balaguer’s basic thesis, as discussed above I reject any connection between freedom and merely probabilistic unpredictability, whereas Balaguer seems to accept such a connection.) Surprisingly, my experience has been that many scientifically-minded people will happily accept that humans plausibly *are* physically unpredictable in the relevant sense.  Or at least, they’ll accept my own position, that whether humans *are or aren’t* so predictable is an empirical question whose answer is neither obvious nor known.  Again and again, I’ve found, people will concede that chaos, the No-Cloning Theorem, or some other phenomenon might make human brains physically unpredictable—indeed, they’ll seem oddly indifferent to the question of whether they do or don’t!  But they’ll never fail to add: even if so, who cares?  we’re just talking about unpredictability!  that obviously has nothing to do with *free will*! For my part, I grant that free will can’t be *identified* with unpredictability, without doing violence to the usual meanings of those concepts.  Indeed, it’s precisely because I grant this that I write, throughout the essay, about freedom (or Knightian freedom) rather than free will.  I insist, however, that unpredictability has *something* to do with free will—in roughly the same sense that verbal intelligence has *something* to do with consciousness, or optical physics has *something* to do with subjectively-perceived colors.  That is, some people might see unpredictability as a pale empirical shadow of the true metaphysical quality, free will, that we really want to understand.  
But the great lesson of the scientific revolution, going back to Galileo, is that understanding the empirical shadow of something is *vastly* better than not understanding the thing at all!  Furthermore, the former might already be an immense undertaking, as understanding human intelligence and the physical universe turned out to be (even setting aside the mysteries of consciousness and metaphysics).  Indeed I submit that, for the past four centuries, “start with the shadow” has been a spectacularly fruitful approach to unravelling the mysteries of the universe: one that’s succeeded where greedy attempts to go “behind the shadow” have failed.  If one likes, the goal of this essay is to explore what happens when one applies a “start with the shadow” approach to the free-will debate.

Personally, I’d go even further than claiming a vague connection between unpredictability and free will.  Just as displaying intelligent behavior (by passing the Turing Test or some other means) might be thought a *necessary condition* for consciousness if not a sufficient one, so I tend to see Knightian unpredictability as a necessary condition for free will.  In other words, if a system were completely predictable (even probabilistically) by an outside entity—not merely in principle but in practice—then I find it hard to understand why we’d still want to ascribe free will to the system.  Why not admit that we now fully understand what makes this system tick?

However, I’m aware that many people sharply reject the idea that unpredictability is a necessary condition for free will.  Even if a computer in another room perfectly predicted all of their actions, days in advance, these people would still call their actions free, so long as they themselves chose the actions that the computer also predicted for them.  In Section \[UPLOADING\], I’ll explore some of the difficulties that this position leads to when carried to science-fiction conclusions.  
For now, though, it’s not important to dispute the point.  I’ll happily settle for the weaker claim that unpredictability has *something* to do with free will, just as intelligence has something to do with consciousness.  More precisely: in both cases, even when people *think* they’re asking purely philosophical questions about the latter concept, much of what they want to know often turns out to hinge on grubby empirical questions about the former concept![^5]  So if the philosophical questions seem too ethereally inaccessible, then we might as well focus for a while on the scientific ones.

Note on the Title\[TITLE\]
--------------------------

The term “ghost in the machine” was introduced in 1949 by Gilbert Ryle [@ryle].  His purpose was to ridicule the notion of a “mind-substance”: a mysterious entity that exists outside of space and ordinary physical causation; has no size, weight, or other material properties; is knowable by its possessor with absolute certainty (while the minds of others are *not* so knowable); and somehow receives signals from the brain and influences the brain’s activity, even though it’s nowhere to be found *in* the brain.  Meanwhile, a *quantum Turing machine*, defined by Deutsch [@deutsch:qc] (see also Bernstein and Vazirani [@bv]), is a Turing machine able to exploit the principle of quantum superposition.  As far as anyone knows today [@aar:np], our universe seems to be efficiently simulable by—or even isomorphic to—a quantum Turing machine, which would take as input the universe’s quantum initial state (say, at the Big Bang), then run the evolution equations forward.

Level\[LEVEL\]
--------------

Most of this essay should be accessible to any educated reader.  In a few sections, though, I assume familiarity with basic concepts from quantum mechanics, or (less often) relativity, thermodynamics, Bayesian probability, or theoretical computer science.  
When I *do* review concepts from those fields, I usually focus only on the ones most relevant to whatever point I’m making.  To do otherwise would make the essay even more absurdly long than it already is!  Readers seeking an accessible introduction to some of the established theories invoked in this essay might enjoy my recent book *Quantum Computing Since Democritus* [@aar:qcsd].[^6]

In the main text, I’ve tried to keep the discussion extremely informal.  I’ve found that, with a contentious subject like free will, mathematical rigor (or the pretense of it) can easily obfuscate more than it clarifies.  However, for interested readers, I did put some more technical material into appendices: a suggested formalization of Knightian freedom in Appendix \[MEAN\]; some observations about prediction, Kolmogorov complexity, and the universal prior in Appendix \[KOLMOG\]; and a suggested formalization of the notion of freebits in Appendix \[KNIGHTIAN\].

FAQ\[FAQ\]
==========

In discussing a millennia-old conundrum like free will, a central difficulty is that *almost everyone already knows what he or she thinks*—even if the certainties that one person brings to the discussion are completely at odds with someone else’s.  One practical consequence is that, no matter how I organize this essay, I’m bound to make a large fraction of readers impatient; some will accuse me of dodging the real issues by dwelling on irrelevancies.  So without further ado, I’ll now offer a Frequently Asked Questions list.  In the thirteen questions below, I’ll engage determinists, compatibilists, and others who might have strong *a priori* reasons to be leery of my whole project.  I’ll try to clarify my areas of agreement and disagreement, and hopefully convince the skeptics to read further.  Then, after developing my own ideas in Sections \[KNIGHTIANPHYS\] and \[FIO\], I’ll come back and address still further objections in Section \[OBJECTIONS\]. 
Narrow Scientism\[SCIENTISM\]
-----------------------------

**For thousands of years, the free-will debate has encompassed moral, legal, phenomenological, and even theological questions.  You seem to want to sweep all of that away, and focus exclusively on what some would consider a narrow scientific issue having to do with physical predictability.  Isn’t that presumptuous?**

On the contrary, it seems presumptuous *not* to limit my scope!  Since it’s far beyond my aims and abilities to address all aspects of the free-will debate, as discussed in Section \[FWFREEDOM\] I decided to focus on one issue: the physical and technological questions surrounding how well human and animal brains can ever be predicted, in principle, by external entities that also want to keep those brains alive.  I focus on this for several reasons: because it seems underexplored; because I might have something to say about it; and because even if what I say is wrong, the predictability issue has the appealing property that *progress* on it seems possible.  Indeed, even if one granted—which I don’t—that the predictability issue had nothing to do with the true mystery of free will, I’d still care about the former at least as much as I cared about the latter!

However, in the interest of laying all my cards on the table, let me offer some brief remarks on the moral, legal, phenomenological, and theological aspects of free will.  On the moral and legal aspects, my own view is summed up beautifully by the Ambrose Bierce poem:

> “There’s no free will,” says the philosopher
>
> “To hang is most unjust.”
>
> “There’s no free will,” assent the officers
>
> “We hang because we must.” [@bierce]

For the foreseeable future, I can’t see that the legal or practical implications of the free-will debate are nearly as great as many commentators have made them out to be, for the simple reason that (as Bierce points out) any implications would apply symmetrically, to accused and accuser alike. 
But I would go further: I’ve found many discussions about free will and legal responsibility to be downright *patronizing*.  The subtext of such discussions usually seems to be:

> We, the educated, upper-class people having this conversation, *should* accept that the entire concept of “should” is quaint and misguided, when it comes to the uneducated, lower-class sorts of people who commit crimes.  Those poor dears’ upbringing, brain chemistry, and so forth absolve them of any real responsibility for their crimes: the notion that they had the “free will” to choose otherwise is just naïve.  My friends and I are *right* because we accept that enlightened stance, while other educated people are *wrong* because they fail to accept it.  For us educated people, of course, the relevance of the categories “right” and “wrong” requires no justification or argument.

Or conversely:

> Whatever the truth, we educated people *should* maintain that all people are responsible for their choices—since otherwise, we’d have no basis to punish the criminals and degenerates in our midst, and civilization would collapse.  For *us*, of course, the meaningfulness of the word “should” in the previous sentence is not merely a useful fiction, but is clear as day.

On the phenomenological aspects of free will: if someone claimed to know, from introspection, either that free will exists or that it doesn’t exist, then of course I could never refute that person to his or her satisfaction.  But precisely because one can’t decide between conflicting introspective reports, in this essay I’ll be exclusively interested in what can be learned from scientific observation and argument.  Appeals to inner experience—including my own and the reader’s—will be out of bounds.  Likewise, while it might be impossible to avoid grazing the mystery of consciousness in a discussion of human predictability, I’ll do my best to avoid addressing that mystery head-on. 
On the theological aspects of free will: probably the most relevant thing to say is that, even if there existed an omniscient God who knew all of our future choices, that fact wouldn’t concern us in this essay, *unless* God’s knowledge could somehow be made manifest in the physical world, and used to *predict* our choices.  In that case, however, we’d no longer be talking about theological aspects of free will, but simply again about scientific aspects.

Bait-and-Switch\[BAITSWITCH\]
-----------------------------

**Despite everything you said in Section \[FWFREEDOM\], I’m still not convinced that we can learn anything about free will from an analysis of unpredictability.  Isn’t that a shameless bait-and-switch?**

Yes, but it’s a shameless bait-and-switch with a distinguished history!  I claim that, *whenever* it’s been possible to make definite progress on ancient philosophical problems, such progress has almost always involved a similar bait-and-switch.  In other words: one replaces an unanswerable philosophical riddle $Q$ by a merely scientific or mathematical question $Q^{\prime}$, which captures *part* of what people have wanted to know when they’ve asked $Q$.  Then, with luck, one solves $Q^{\prime}$.

Of course, even if $Q^{\prime}$ is solved, centuries later philosophers might still be debating the exact relation between $Q$ and $Q^{\prime}$!  And further exploration might lead to *other* scientific or mathematical questions—$Q^{\prime\prime}$, $Q^{\prime\prime\prime}$, and so on—which capture aspects of $Q$ that $Q^{\prime}$ left untouched.  But from my perspective, this process of breaking off answerable parts of unanswerable riddles, then trying to answer those parts, is the closest thing to philosophical progress that there is.

Successful examples of this breaking-off process fill intellectual history.  
The use of calculus to treat infinite series, the link between mental activity and nerve impulses, natural selection, set theory and first-order logic, special relativity, Gödel’s theorem, game theory, information theory, computability and complexity theory, the Bell inequality, the theory of common knowledge, Bayesian causal networks—each of these advances addressed questions that could rightly have been called “philosophical” before the advance was made.  And after each advance, there was *still* plenty for philosophers to debate about truth and provability and infinity, space and time and causality, probability and information and life and mind.  But crucially, it seems to me that the technical advances transformed the philosophical discussion as philosophical discussion *itself* rarely transforms it!  And therefore, if such advances don’t count as philosophical progress, then it’s not clear that anything should.

Appropriately for this essay, perhaps the *best* precedent for my bait-and-switch is the Turing Test.  Turing began his famous 1950 paper “Computing Machinery and Intelligence” [@turing:ai] with the words:

> I propose to consider the question, “Can machines think?”

But after a few pages of ground-clearing, he wrote:

> The original question, “Can machines think?” I believe to be too meaningless to deserve discussion.

So with legendary abruptness, Turing simply *replaced* the original question by a different one: “Are there imaginable digital computers which would do well in the imitation game”—i.e., which would successfully fool human interrogators in a teletype conversation into *thinking* they were human?  Though some writers would later accuse Turing of conflating intelligence with the mere simulation of it, Turing was perfectly clear about what he was doing:

> I shall replace the question by another, which is closely related to it and is expressed in relatively unambiguous words ... 
> We cannot altogether abandon the original form of the problem, for opinions will differ as to the appropriateness of the substitution and we must at least listen to what has to be said in this connexion \[sic\].

The claim is not that the new question, about the imitation game, is *identical* to the original question about machine intelligence.  The claim, rather, is that the new question is a worthy candidate for what we *should* have asked or *meant* to have asked, if our goal was to learn something new rather than endlessly debating definitions.  In math and science, the process of revising one’s original question is often the core of a research project, with the actual answering of the revised question being the relatively easy part!

A good replacement question $Q^{\prime}$ should satisfy two properties:

- $Q^{\prime}$ should capture some *aspect* of the original question $Q$—so that an answer to $Q^{\prime}$ would be *hard to ignore* in any subsequent discussion of $Q$.

- $Q^{\prime}$ should be precise enough that one can see what it would mean to make *progress* on $Q^{\prime}$: what experiments one would need to do, what theorems one would need to prove, etc.

The Turing Test, I think, captured people’s imaginations precisely because it succeeded so well at (a) and (b).  Let me put it this way: if a digital computer were built that aced the imitation game, then *it’s hard to see what more science could possibly say* in support of machine intelligence being possible.  Conversely, if digital computers were proved unable to win the imitation game, then it’s hard to see what more science could say in support of machine intelligence *not* being possible.  Either way, though, we’re no longer slashing air, trying to pin down the true meanings of words like “machine” and “think”: we’ve hit the relatively-solid ground of a science and engineering problem.  Now if we want to go further we need to *dig* (that is, do research in cognitive science, machine learning, etc).  
This digging might take centuries of backbreaking work; we have no idea if we’ll ever reach the bottom.  But at least it’s something humans know how to do and have done before.  Just as important, diggers (unlike air-slashers) tend to uncover countless treasures besides the ones they were looking for.

By analogy, in this essay I advocate replacing the question of whether humans have free will, by the question of how accurately their choices can be predicted, in principle, by external agents compatible with the laws of physics.  And while I don’t pretend that the replacement question is identical to the original, I do claim the following: if humans turned out to be arbitrarily predictable in the relevant sense, then *it’s hard to see what more science could possibly say in support of free will being a chimera.*  Conversely, if a fundamental reason were discovered why the appropriate prediction game *couldn’t* be won, then it’s hard to see what more science could say in support of free will being real.

Either way, I’ll try to sketch the research program that confronts us if we take the question seriously: a program that spans neuroscience, chemistry, physics, and even cosmology.  Not surprisingly, much of this program consists of problems that scientists in the relevant fields are already working on, or longstanding puzzles of which they’re well aware.  But there are also questions—for example, about the past macroscopic determinants of the quantum states occurring in nature—which as far as I know *haven’t* been asked in the form they take here.

Compatibilism\[COMPAT\]
-----------------------

**Like many scientifically-minded people, I’m a *compatibilist*: someone who believes free will can exist even in a mechanistic universe.  
For me, free will is “as real as baseball,” as the physicist Sean Carroll memorably put it.[^7]  That is, the human capacity to weigh options and make a decision exists in the same sense as Sweden, caramel corn, anger, or other complicated notions that might interest us, but that no one expects to play a role in the fundamental laws of the universe.  As for the fundamental laws, I believe them to be completely mechanistic and impersonal: as far as they know or care, a human brain is just one more evanescent pattern of computation, along with sunspots and hurricanes.  Do you dispute any of that?  What, if anything, can a compatibilist take from your essay?**

I have a lot of sympathy for compatibilism—certainly more than for an incurious mysticism that doesn’t even try to reconcile itself with a scientific worldview.  So I hope compatibilists will find much of what I have to say compatible with their own views!

Let me first clear up a terminological confusion.  Compatibilism is often defined as the belief that free will is compatible with *determinism*.  But as far as I can see, the question of determinism versus indeterminism has almost nothing to do with what compatibilists actually believe.  After all, most compatibilists happily accept quantum mechanics, with its strong indeterminist implications (see Question \[QMHV\]), but regard it as having almost no bearing on their position.  No doubt some compatibilists find it important to stress that *even if classical physics had been right*, there still would have been no difficulty for free will.  But it seems to me that one can be a compatibilist even while denying that point, or remaining agnostic about it.  In this essay, I’ll simply define compatibilism to be the belief that free will is compatible with a *broadly mechanistic worldview*—that is, with a universe governed by impersonal mathematical laws of *some* kind.  
Whether it’s important that those laws be probabilistic (or chaotic, or computationally universal, or whatever else), I’ll regard as internal disputes within compatibilism.

I can now come to the question: is my perspective compatible with compatibilism?  Alas, at the risk of sounding lawyerly, I can’t answer without a further distinction!  Let’s define *strong compatibilism* to mean the belief that the statement “Alice has free will” is compatible with the actual, physical existence of a machine that predicts all of Alice’s future choices—a machine whose predictions Alice herself can read and verify after the fact.  (Where by “predict,” we mean in roughly the same sense that quantum mechanics predicts the behavior of a radioactive atom: that is, by giving arbitrarily-accurate probabilities, in cases where deterministic prediction is physically impossible.)  By contrast, let’s define *weak compatibilism* to mean the belief that “Alice has free will” is compatible with Alice living in a mechanistic, law-governed universe—but *not* necessarily with her living in a universe where the prediction machine can be built.

Then my perspective is compatible with weak compatibilism, but incompatible with strong compatibilism.  My perspective *embraces* the mechanical nature of the universe’s time-evolution laws, and in that sense is proudly compatibilist.  On the other hand, I care whether our choices can *actually* be mechanically predicted—not by hypothetical Laplace demons but by physical machines.  I’m troubled if they are, and I take seriously the possibility that they aren’t (e.g., because of chaotic amplification of unknowable details of the initial conditions).

Quantum Flapdoodle\[FLAPDOODLE\]
--------------------------------

**The usual motivation for mentioning quantum mechanics and mind in the same breath has been satirized as “quantum mechanics is mysterious, the mind is also mysterious, ergo they must be related somehow!”  
Aren’t you worried that, merely by writing an essay that *seems* to take such a connection seriously, you’ll fan the flames of pseudoscience?  That any subtleties and caveats in your position will quickly get lost?**

Yes!  Even though I can only take responsibility for what I write, not for what various Internet commentators, etc. might mistakenly *think* I wrote, it would be distressing to see this essay twisted to support credulous doctrines that I abhor.  So for the record, let me state the following:

- I don’t think quantum mechanics, or anything else, lets us bend the universe to our will, except through interacting with our external environments in the ordinary causal ways.  Nor do I think that quantum mechanics says everything is holistically connected to everything else (whatever that means).  Proponents of these ideas usually invoke the phenomenon of *quantum entanglement* between particles, which can persist no matter how far apart the particles are.  But contrary to a widespread misunderstanding encouraged by generations of quantum mystics, it’s an elementary fact that entanglement does *not* allow instantaneous communication.  More precisely, quantum mechanics is local in the following sense: if Alice and Bob share a pair of entangled particles, then nothing that Alice does to her particle only can affect the probability of any outcome of any measurement that Bob performs on his particle only.[^8]  Because of the famous Bell inequality, it’s crucial that we don’t interpret the concept of locality to mean *more* than that!  But quantum mechanics’ revision to our concept of locality is so subtle that neither scientists, mystics, nor anyone else anticipated it beforehand.

- I don’t think quantum mechanics has vindicated Eastern religious teachings, any more than (say) Big Bang cosmology has vindicated the Genesis account of creation.  
In both cases, while there are interesting parallels, I find it dishonest to seek out only the points of convergence while ignoring the many inconvenient parts that don’t fit!  Personally, I’d say that the quantum picture of the world—as a complex unit vector $\left\vert \psi\right\rangle $ evolving linearly in a Hilbert space—is not a close match to *any* pre-$20^{th}$-century conception of reality.

- I don’t think quantum mechanics has overthrown Enlightenment ideals of science and rationality.  Quantum mechanics does overthrow the naïve realist vision of particles with unambiguous trajectories through space, and it does raise profound conceptual problems that will concern us later on.  On the other hand, the point is still to describe a physical world external to our minds by positing a state for that world, giving precise mathematical rules for the evolution of the state, and testing the results against observation.  Compared to classical physics, the reliance on mathematics has only increased; while the Enlightenment ideal of describing Nature as we find it to be, rather than as intuition says it must be, is emphatically upheld.

- I don’t think the human brain is a quantum computer in any interesting sense.  As I explain in [@aar:qcsd], at least three considerations lead me to this opinion.  First, it would be nearly miraculous if complicated entangled states—which today, can generally survive for at most a few seconds in near-absolute-zero laboratory conditions—could last for any appreciable time in the hot, wet environment of the brain.  (Many researchers have made some version of that argument, but see Tegmark [@tegmark:qmbrain] for perhaps the most detailed version.)  Second, the sorts of tasks quantum computers are known to be good at (for example, factoring large integers and simulating quantum systems) seem like a terrible fit to the sorts of tasks that *humans* seem to be good at, or that could have plausibly had survival value on the African savannah!  
Third, and most importantly, I don’t see anything that the brain being a quantum computer would plausibly help to *explain*.  For example, why would a conscious quantum computer be any less mysterious than a conscious *classical* computer?  My conclusion is that, *if* quantum effects play any role in the brain, then such effects are almost certainly short-lived and microscopic.[^9]  At the macro level of most interest to neuroscience, the evidence is overwhelming that the brain’s computation and information storage are classical.  (See Section \[PENROSE\] for further discussion of these issues in the context of Roger Penrose’s views.)

- I don’t think consciousness is in any sense necessary to bring about the reduction of the wavefunction in quantum measurement.  And I say that, despite freely confessing to unease with all existing accounts of quantum measurement!  My position is that, to whatever extent the reduction of the wavefunction is a real process at all (as opposed to an artifact of observers’ limited perspectives, as in the Many-Worlds Interpretation), it must be a process that can occur even in interstellar space, with no conscious observers anywhere around.  For otherwise, we’re forced to the absurd conclusion that the universe’s quantum state evolved linearly via the Schrödinger equation for billions of years, *until* the first observers arose (who: humans? monkeys? aliens?) and looked around them—at which instant the state suddenly and violently collapsed!

If one likes, whatever I *do* say about quantum mechanics and mind in this essay will be said in the teeth of the above points.  In other words, I’ll regard points (a)-(e) as sufficiently well-established to serve as useful *constraints*, which a new proposal ought to satisfy as a prerequisite to being taken seriously. 
Brain-Uploading: Who Cares?\[UPLOADING\]
----------------------------------------

**Suppose it were possible to upload a human brain to a computer, and thereafter predict the brain with unlimited accuracy.  Who cares?  Why should anyone even worry that that would create a problem for free will or personal identity?**

For me, the problem comes from the observation that it seems impossible to give any operational difference between a perfect predictor of your actions, and a second copy or instantiation of yourself.  If there are two entities, both of which respond to every situation exactly as you would, then by what right can we declare that only one such entity is the “real” you, and the other is just a predictor, simulation, or model?  But having multiple copies of you in the same universe seems to open a Pandora’s box of science-fiction paradoxes.  Furthermore, these paradoxes aren’t merely metaphysical: they concern *how you should do science* knowing there might be clones of yourself, and which predictions and decisions you should make.

Since this point is important, let me give some examples.  Planning a dangerous mountain-climbing trip?  Before you go, make a backup of yourself—or two or three—so that if tragedy should strike, you can restore from backup and then continue life as if you’d never left.  Want to visit Mars?  Don’t bother with the perilous months-long journey through space; just use a brain-scanning machine to fax yourself there as pure information, whereupon another machine on Mars will construct a new body for you, functionally identical to the original.

Admittedly, some awkward questions arise.  For example, after you’ve been faxed to Mars, what should be done with the original copy of you left on Earth?  Should it be destroyed with a quick, painless gunshot to the head?  Would *you* agree to be faxed to Mars, knowing that that’s what would be done to the original?  
Alternatively, if the original were left alive, then what makes you sure you would wake up as the copy on Mars?  At best, wouldn’t you have 50/50 odds of still finding yourself on Earth?  Could that problem be solved by putting a thousand copies of you on Mars, while leaving only one copy on Earth?  Likewise, suppose you return unharmed from your mountain-climbing trip, and decide that the backup copies you made before you left are now an expensive nuisance.  If you destroy them, are you guilty of murder?  Or is it more like suicide?  Or neither?

Here’s a purer example of such a puzzle, which I’ve adapted from the philosopher Nick Bostrom [@bostrom].  Suppose an experimenter flips a fair coin while you lie anesthetized in a white, windowless hospital room.  If the coin lands heads, then she’ll create a thousand copies of you, place them in a thousand identical rooms, and wake each one up.  If the coin lands tails, then she’ll wake you up without creating any copies.  You wake up in a white, windowless room just like the one you remember.  Knowing the setup of the experiment, at what odds should you be willing to bet that the coin landed heads?  Should your odds just be 50/50, since the coin was fair?  Or should they be biased 1000:1 in favor of the coin having landed heads—since if it *did* land heads, then there are a thousand of you confronting the same situation, compared to only one if the coin landed tails?

Many people immediately respond that the odds should be 50/50: they consider it a metaphysical absurdity to adjust the odds based on the number of copies of yourself in existence.  (Are we to imagine a warehouse full of souls, with the odds of any particular soul being taken out of the warehouse proportional to the number of suitable bodies for it?)  However, those who consider 50/50 the obvious answer should consider a slight variant of the puzzle.  
Suppose that, if the coin lands tails, then as before the experimenter leaves a single copy of you in a white room.  If the coin lands heads, then the experimenter creates a thousand copies of you and places them in a thousand windowless rooms.  Now, though, 999 of the rooms are painted blue; only one of the rooms is white like you remember.

You wake up from the anesthesia and find yourself in a white room.  *Now* what posterior probability should you assign to the coin having landed heads?  *If* you answered 50/50 to the first puzzle, then a simple application of Bayes’ rule implies that, in the *second* puzzle, you should consider it overwhelmingly likely that the coin landed tails.  For if the coin landed heads, then presumably you had a 99.9% probability of being one of the 999 copies who woke up in a blue room.  So the fact that you woke up in a white room furnishes powerful evidence about the coin.  Not surprisingly, many people find *this* result just as metaphysically unacceptable as the 1000:1 answer to the first puzzle!  Yet as Bostrom points out, it seems mathematically inconsistent to insist on 50/50 as the answer to *both* puzzles.

Probably the most famous paradox of brain-copying was invented by William Newcomb, then popularized by Robert Nozick [@nozick] and Martin Gardner [@gardner:newcomb].  In *Newcomb’s paradox*, a superintelligent Predictor presents you with two closed boxes, and offers you a choice between opening the first box only or opening both boxes.  Either way, you get to keep whatever you find in the box or boxes that you open.  The contents of the first box can vary—sometimes it contains \$1,000,000, sometimes nothing—but the second box always contains \$1,000.

Just from what was already said, it seems that it must be preferable to open both boxes.  For whatever you would get by opening the first box only, you can get \$1,000 more by opening the second box as well.  
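(Before getting to the catch in Newcomb’s problem, here is the Bayes’-rule arithmetic behind Bostrom’s variant made fully explicit. This is just a sketch of the calculation described above, under the contestable assumption—the one the puzzle trades on—that you should reason as a uniformly random copy of yourself.)

```python
from fractions import Fraction

# Bostrom's variant: a fair coin is flipped while you're anesthetized.
#   Tails -> one copy of you wakes in a white room.
#   Heads -> 1000 copies wake up: 999 in blue rooms, 1 in a white room.
p_heads = Fraction(1, 2)

# Likelihood of waking in a WHITE room, assuming you are a
# uniformly random copy (the assumption the puzzle trades on):
p_white_given_heads = Fraction(1, 1000)
p_white_given_tails = Fraction(1, 1)

# Bayes' rule for P(heads | you woke in a white room):
posterior_heads = (p_heads * p_white_given_heads) / (
    p_heads * p_white_given_heads + (1 - p_heads) * p_white_given_tails
)

print(posterior_heads)      # 1/1001
print(1 - posterior_heads)  # 1000/1001
```

On these assumptions the posterior for heads comes out to 1/1001: waking in a white room really is overwhelming evidence for tails, which is exactly the conclusion that the 50/50 answer to the first puzzle forces on you.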
But here’s the catch: using a detailed brain model, the Predictor has already foreseen your choice.  *If* it predicted that you would open both boxes, then the Predictor left the first box empty; while if it predicted that you would open the first box only, then the Predictor put \$1,000,000 in the first box.  Furthermore, the Predictor has played this game hundreds of times before, both with you and with other people, and its predictions have been right every time.  Everyone who opened the first box ended up with \$1,000,000, while everyone who opened both boxes ended up with only \$1,000.  Knowing all of this, what do you do? Some people dismiss the problem as contradictory—arguing that, if the assumed Predictor exists, then you have no free will, so there’s no use fretting over how many boxes to open since your choice is already predetermined anyway.  Among those willing to play along, opinion has been split for decades between one-boxers and two-boxers.  Lately, though, the one-boxers seem to have been gaining the upper hand—and reasonably so in my opinion, since by the assumptions of the thought experiment, the one-boxers *do* always walk away richer! As I see it, the real problem is to *explain* how one-boxing could possibly be rational, given that, at the time you’re contemplating your decision, the million dollars are either in the first box or not.  Can a last-minute decision to open both boxes somehow reach backwards in time, causing the million dollars that would have been in the first box to disappear?  Do we need to distinguish between your actual choices and your dispositions, and say that, while one-boxing is admittedly irrational, *making yourself into the sort of person* who one-boxes is rational? 
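In purely expected-value terms, the one-boxers’ advantage is easy to quantify.  Here is a standard illustration (my own sketch, not from the essay), parameterized by the Predictor’s accuracy:

```python
from fractions import Fraction

def expected_payoff(one_box, p_correct):
    """Expected dollars, given that the Predictor is right with probability p_correct."""
    if one_box:
        # Box 1 holds $1,000,000 exactly when the Predictor foresaw one-boxing.
        return p_correct * 1_000_000
    # Two-boxing: box 1 is empty unless the Predictor erred; box 2 always pays $1,000.
    return (1 - p_correct) * 1_000_000 + 1_000

p = Fraction(99, 100)             # a Predictor that is right 99% of the time
print(expected_payoff(True, p))   # 990000
print(expected_payoff(False, p))  # 11000
```

Of course, this calculation quietly assumes that your choice and the prediction remain correlated at decision time, which is just what the two-boxing camp disputes; the puzzle is to explain *why* that correlation should hold.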
While I consider myself a one-boxer, the only justification for one-boxing that makes sense to me goes as follows.[^10]  In principle, you could base your decision of whether to one-box or two-box on anything you like: for example, on whether the name of some obscure childhood friend had an even or odd number of letters.  However, this suggests that the problem of predicting whether you will one-box or two-box is you-complete.[^11]  In other words, if the Predictor can solve this problem reliably, then it seems to me that it must possess a simulation of you so detailed as to constitute another *copy* of you (as discussed previously). But in that case, to whatever extent we want to think about Newcomb’s paradox in terms of a freely-willed decision at all, we need to imagine *two* entities separated in space and time—the flesh-and-blood you, and the simulated version being run by the Predictor—that are nevertheless tethered together and share common interests.  *If* we think this way, then we can easily explain why one-boxing can be rational, even without backwards-in-time causation.  Namely, as you contemplate whether to open one box or two, who’s to say that you’re not actually the simulation?  If you are, then of course your decision can affect what the Predictor does in an ordinary, causal way. For me, the takeaway is this.  *If* any of these technologies—brain-uploading, teleportation, the Newcomb predictor, etc.—were actually realized, then all sorts of woolly metaphysical questions about personal identity and free will would start to have *practical consequences*.  Should you fax yourself to Mars or not?  Sitting in the hospital room, should you bet that the coin landed heads or tails?  Should you expect to wake up as one of your backup copies, or as a simulation being run by the Newcomb Predictor?  
These questions all seem empirical, yet one can’t answer them without taking an implicit stance on questions that many people would prefer to regard as outside the scope of science. Thus, the idea that we can escape all that philosophical crazy-talk by declaring that the human mind is a computer program running on the hardware of the brain, and that’s all there is to it, strikes me as ironically backwards.  Yes, we can say that, and we might even be right.  But far from bypassing all philosophical perplexities, such a move lands us in a *swamp* of them!  For now we need to give some account of how a rational agent ought to make decisions and scientific predictions, in situations where it knows it’s only one of several exact copies of itself inhabiting the same universe. Many will try to escape the problem by saying that such an agent, being (by assumption) just a computer program, simply does whatever its code determines it does given the relevant initial conditions.  For example, if a piece of code says to bet heads in a certain game, then all agents running that code will bet heads; if the code says to bet tails, then the agents will bet tails.  Either way, an *outside* observer who knew the code could easily calculate the probability that the agents will win or lose their bet.  So what’s the philosophical problem? For me, the problem with this response is simply that it gives up on science as *something agents can use to predict their future experiences.*  The agents wanted science to tell them: “given such-and-such physical conditions, here’s what you *should* expect to see, and why.”  Instead they’re getting the worthless tautology: “if your internal code causes you to expect to see $X$, then you expect to see $X$, while if your internal code causes you to expect to see $Y$, then you expect to see $Y$.”  But the same could be said about *anything*, with no scientific understanding needed!  
To paraphrase Democritus,[^12] it seems like the ultimate victory of the mechanistic worldview is also its defeat. As far as I can see, the only hope for avoiding these difficulties is if—because of chaos, the limits of quantum measurement, or whatever other obstruction—minds *can’t* be copied perfectly from one physical substrate to another, as can programs on standard digital computers.  So that’s a possibility that this essay explores at some length.  To clarify, we can’t use any philosophical difficulties that would arise if minds were copyable, as evidence for the empirical claim that they’re *not* copyable.  The universe has never shown any particular tendency to cater to human philosophical prejudices!  But I’d say the difficulties provide more than enough reason to *care* about the copyability question. Determinism versus Predictability\[DETERPREDICT\] ------------------------------------------------- **I’m a determinist: I believe, not only that humans lack free will, but that everything that happens is completely determined by prior causes.  So why should an analysis of mere unpredictability change my thinking at all?  After all, I readily admit that, despite being metaphysically determined, many future events are unpredictable in practice.  But for me, the fact that we can’t predict something is *our* problem, not Nature’s!** There’s an observation that doesn’t get made often enough in free-will debates, but that seems extremely pertinent here.  Namely: *if you stretch the notion of determination far enough, then events become determined so trivially that the notion itself becomes vacuous.* For example, a religious person might maintain that all events are predetermined by God’s Encyclopedia, which of course only God can read.  
Another, secular person might maintain that *by definition*, the present state of the universe contains all the data needed to determine future events, even if those future events (such as quantum measurement outcomes) aren’t *actually* predictable via present-day measurements.  In other words: if, in a given conception of physics, the present state does *not* fully determine all the future states, then such a person will simply add hidden variables to the present state until it does so. Now, if our hidden-variable theorist isn’t careful, and piles on additional requirements like spatial locality, then she’ll quickly find herself contradicting one or more of quantum mechanics’ no-hidden-variable theorems (such as the Bell [@bell], Kochen-Specker [@ks], or PBR [@pbr] theorems).  But the bare assertion that everything is determined by the current state is no more disprovable than the belief in God’s Encyclopedia. To me, this immunity from any possible empirical discovery shows just how *feeble* a concept determinism really is, unless it’s supplemented by further concepts like locality, simplicity, or (best of all) actual predictability.  A form of determinism that applies not merely to our universe, but to any *logically possible* universe, is not a determinism that has fangs, or that could credibly threaten any notion of free will worth talking about. Predictability in the Eye of the Beholder\[BEHOLDER\] ----------------------------------------------------- **A system that’s predictable to one observer might be unpredictable to another.  Given that predictability is such a relative notion, how could it possibly play the fundamental role you need it to?** This question was already briefly addressed in Section \[FWFREEDOM\], but since it arises so frequently, it might be worth answering again.  
In this essay, I call a physical system $S$ “predictable” if (roughly speaking) there’s *any possible technology*, *consistent with the laws of physics*, that would allow an external observer to gain enough information about $S$, without destroying $S$, to calculate well-calibrated probabilities (to any desired accuracy) for the outcomes of all possible future measurements on $S$ within some allowed set.  Of course, this definition introduces many concepts that require further clarification: for example, what do we mean by destroying $S$?  What does it mean for probabilities to be well-calibrated?  Which measurements on $S$ are allowed?  What can the external observer be assumed to know about $S$ before encountering it?  For that matter, what exactly counts as an external observer, or a physical system?  I set out my thoughts about these questions, and even suggest a tentative formal definition of Knightian freedom in terms of other concepts, in Appendix \[MEAN\]. For now, though, the main point is that, whenever I talk about whether a system can be predicted, the word “can” has basically the same meaning as when physicists talk about whether information can get from point $A$ to point $B$.  Just like in the latter case, we don’t care whether the two points are *literally* connected by a phone line, so too in the former case, we don’t care whether the requisite prediction machine has actually been built, or could plausibly be built in the next millennium.  Instead, we’re allowed to imagine *arbitrarily-advanced technologies*, so long as our imaginations are constrained by the laws of physics.[^13] (Observe that, were our imaginations not constrained *even* by physics, we’d have to say that *anything whatsoever* can happen, except outright logical contradictions.  So in particular, we’d reach the uninteresting conclusion that *any* system can be perfectly predicted—by God, for example, or by magical demons.  For more, see Question \[DETERPREDICT\].) 
Quantum Mechanics and Hidden Variables\[QMHV\] ---------------------------------------------- **Forget about free will or Knightian uncertainty: I deny even that *probability* plays any fundamental role in physics.  For me, like for Einstein, the much-touted randomness of quantum mechanics merely shows that we humans haven’t yet discovered the underlying deterministic rules.  Can you prove that I’m wrong?** With minimal use of Occam’s Razor, yes, I can!  In 1926, when Einstein wrote his famous aphorism about God and dice, the question of whether quantum events were truly random or merely pseudorandom could still be considered metaphysical.  After all, common sense suggests we can never say with confidence that *anything* is random: the most we can ever say is that *we* failed to find a pattern in it. But common sense is flawed here.  A large body of work, starting with that of Bell in the 1960s [@bell], has furnished evidence that quantum measurement outcomes *can’t* be governed by any hidden pattern, but must be random in just the way quantum mechanics says they are.  Crucially, this evidence doesn’t circularly assume that quantum mechanics is the final theory of nature.  Instead, it assumes just a few general principles (such as spatial locality and no cosmic conspiracies), together with the results of specific experiments that have already been done.  Since these points are often misunderstood, it might be worthwhile to spell them out in more detail. Consider the *Bell inequality*, whose violation by entangled particles (in accord with quantum mechanics) has been experimentally demonstrated more and more firmly since the 1980s [@aspect].  
From a modern perspective, Bell simply showed that certain games, played by two cooperating but non-communicating players Alice and Bob, can be won with greater probability if Alice and Bob share entangled particles than if they merely share correlated *classical* information.[^14]  Bell’s theorem is usually presented as ruling out a class of theories called *local hidden-variable theories*.  Those theories sought to explain Alice and Bob’s measurement results in terms of ordinary statistical correlations between two random variables $X$ and $Y$, which are somehow associated with Alice’s and Bob’s particles respectively, and which have the properties that nothing Alice does can affect $Y$ and that nothing Bob does can affect $X$.  (One can imagine the particles flipping a coin at the moment of their creation, whereupon one of them declares: “OK, if anyone asks, I’ll be spinning up and you’ll be spinning down!”) In popular treatments, Bell’s theorem is taken to demonstrate the reality of what Einstein called “spooky action at a distance.”[^15]  However, as many people have pointed out over the years—see, for example, my 2002 critique [@aar:rev] of Stephen Wolfram’s *A New Kind of Science* [@wolfram]—one can also see Bell’s theorem in a different way: as using the *assumption* of no instantaneous communication to address the even more basic issue of *determinism*.  From this perspective, Bell’s theorem says the following:

> *Unless* Alice and Bob’s particles communicate faster than light, the results of all possible measurements that Alice and Bob could make on those particles *cannot have been determined prior to measurement*—not even by some bizarre, as-yet-undiscovered uncomputable law—assuming the statistics of all the possible measurements agree with the quantum predictions.  Instead, the results *must* be generated randomly on-the-fly in response to whichever measurement is made, just as quantum mechanics says they are. 
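The game formulation can be made concrete with the CHSH game, the standard example behind Bell’s theorem.  The sketch below (my construction, not the essay’s) enumerates every deterministic classical strategy and compares the best one to the quantum value:

```python
import itertools, math

# CHSH game: a referee sends uniform random bits x to Alice and y to Bob, who
# answer with bits a and b without communicating; they win iff a XOR b = x AND y.
def win_prob(alice, bob):
    # alice[x] is Alice's answer on input x; likewise bob[y].
    wins = sum(alice[x] ^ bob[y] == x & y for x in (0, 1) for y in (0, 1))
    return wins / 4

# Best over all 4 x 4 deterministic strategy pairs.  (Shared classical
# randomness can't help, since it just averages over deterministic strategies.)
best_classical = max(win_prob(a, b)
                     for a in itertools.product((0, 1), repeat=2)
                     for b in itertools.product((0, 1), repeat=2))
quantum_value = math.cos(math.pi / 8) ** 2  # achievable with one shared entangled pair

print(best_classical)            # 0.75
print(round(quantum_value, 4))   # 0.8536
```

Experimentally observed winning probabilities above 3/4 are what rule out the local hidden-variable theories described above.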
The above observation was popularized in 2006 by John Conway and Simon Kochen, who called it the Free Will Theorem [@conwaykochen].  Conway and Kochen put the point as follows: if there’s no faster-than-light communication, *and* Alice and Bob have the free will to choose how to measure their respective particles, then the particles must have their own free will to choose how to respond to the measurements. Alas, Conway and Kochen’s use of the term free will has generated confusion.  For the record, *what Conway and Kochen mean by free will has only the most tenuous connection to what most people (including me, in this essay) mean by it!*  Their result might more accurately be called the freshly-generated randomness theorem.[^16]  For the indeterminism that’s relevant here is only probabilistic: indeed, Alice and Bob could be replaced by simple dice-throwing or quantum-state-measuring automata without affecting the theorem at all.[^17] Another recent development has made the conflict between quantum mechanics and determinism particularly vivid.  It’s now known how to exploit Bell’s theorem to generate so-called Einstein-certified random bits for use in cryptographic applications [@pironio; @vaziranividick], starting from a much smaller number of seed bits that are known to be random.[^18]  Here Einstein-certified means that, *if* the bits pass certain statistical tests, then they *must* be close to uniformly-random, unless nature resorted to cosmic conspiracies between separated physical devices to bias the bits. Thus, if one wants to restore determinism while preserving the empirical success of quantum mechanics, then one has to posit a conspiracy in which every elementary particle, measuring device, and human brain potentially colludes.  Furthermore, this conspiracy needs to be so diabolical as to leave essentially no trace of its existence!  
For example, in order to explain why we can’t exploit the conspiracy to send faster-than-light signals, one has to imagine that the conspiracy prevents our own brains (or the quantum-mechanical random number generators in our computers, etc.) from making the choices that would cause those signals to be sent.  To my mind, this is no better than the creationists’ God, who planted fossils in the ground to confound the paleontologists. I should say that at least one prominent physicist, Gerard ’t Hooft, actually advocates such a cosmic conspiracy [@thooft:freewill] (under the name superdeterminism); he speculates that a yet-undiscovered replacement for quantum mechanics will reveal its workings.[^19]  For me, though, the crux is that once we start positing conspiracies between distant regions of spacetime, or between the particles we measure and our own instruments or brains, determinism becomes consistent with *any* possible scientific discovery, and therefore retreats into vacuousness.  As the extreme case, as pointed out in Question \[DETERPREDICT\], someone could always declare that everything that happens was determined by God’s unknowable book listing everything that will ever happen!  That sort of determinism can never be falsified, but has zero predictive or explanatory power. In summary, I think it’s fair to say that *physical indeterminism is now a settled fact to roughly the same extent as evolution, heliocentrism, or any other discovery in science*.  So *if* that fact is considered relevant to the free-will debate, then all sides might as well just accept it and move on!  (Of course, we haven’t yet touched the question of whether physical indeterminism *is* relevant to the free-will debate.) 
The Consequence Argument\[CONSEQUENCE\]
---------------------------------------

**How does your perspective respond to Peter van Inwagen’s Consequence Argument?**

Some background for non-philosophers: the Consequence Argument [@inwagen] is an attempt to formalize most people’s intuition for why free will is incompatible with determinism.  The argument consists of the following steps:

(i) If determinism is true, then our choices today are determined by whatever the state of the universe was (say) $100$ million years ago, when dinosaurs roamed the earth.

(ii) The state of the universe $100$ million years ago is clearly outside our ability to alter.

(iii) Therefore, if determinism is true, then our choices today are outside our ability to alter.

(iv) Therefore, if determinism is true, then we don’t have free will.

(A side note: as discussed in Question \[COMPAT\], the traditional obsession with determinism here seems unfortunate to me.  What people really mean to ask, I think, is whether free will is compatible with *any* mechanistic account of the universe, regardless of whether the account happens to be deterministic or probabilistic.  On the other hand, one could easily rephrase the Consequence Argument to allow for this, with the state of the universe $100$ million years ago now fully determining the *probabilities* of our choices today, if not the choices themselves.  And I don’t think that substitution would make any essential difference to what follows.) One can classify beliefs about free will according to how they respond to the Consequence Argument.  If you accept the argument as well as its starting premise of determinism (or mechanism), and hence also the conclusion of no free will, then you’re a *hard determinist (or mechanist)*.  If you accept the argument, but reject the conclusion by denying the starting premise of determinism or mechanism, then you’re a *metaphysical libertarian*.  
If you reject the argument by denying that steps (iii) or (iv) follow from the previous steps, then you’re a *compatibilist*. What about the perspective I explore here?  It denies step (ii)—or rather, it denies the usual notion of the state of the universe $100$ million years ago, insisting on a distinction between macrofacts and microfacts about that state.  It agrees that the past *macro*facts—such as whether a dinosaur kicked a particular stone—have an objective reality that is outside our ability to alter.  But it denies that we can always speak in straightforward causal terms about the past *micro*facts, such as the quantum state $\left\vert \psi \right\rangle $ of some particular photon impinging on the dinosaur’s tail. As such, my perspective can be seen as an example of a little-known view that Fischer [@fischer] calls *multiple-pasts compatibilism*.  As I’d put it, multiple-pasts compatibilism agrees that the past microfacts about the world determine its future, and it also agrees that the past *macro*facts are outside our ability to alter.  However, it maintains that there might be many possible settings of the past microfacts—the polarizations of individual photons, etc.—that all coexist in the same past-ensemble.  By definition, such microfacts can’t possibly have made it into the history books, and a multiple-pasts compatibilist would deny that they’re necessarily outside our ability to alter.  Instead, our choices today might play a role in selecting *one* past from a giant ensemble of macroscopically-identical but microscopically-different pasts. While I take the simple idea above as a starting point, there are two main ways in which I try to go further than it.  First, I insist that whether we *can* make the needed distinction between microfacts and macrofacts is a *scientific* question—one that can only be addressed through detailed investigation of the quantum decoherence process and other aspects of physics.  
Second, I change the focus from unanswerable metaphysical conundrums about what determines what, toward empirical questions about the actual *predictability* of our choices (or at least the probabilities of those choices) given the past macrofacts.  I argue that, by making progress on the predictability question, we can learn something about whether multiple-pasts compatibilism is a *viable* response to the Consequence Argument, even if we can never know for certain whether it’s the *right* response.

Paradoxes of Prediction\[PARADOXES\]
------------------------------------

**You say you’re worried about the consequences for rational decision-making, Bayesian inference, and so forth if our choices were all mechanically predictable.  Why isn’t it reassurance enough that, logically, predictions of an agent’s behavior can never be known ahead of time to the agent itself?  For if they *were* known ahead of time, then in real life—as opposed to Greek tragedies, or stories like Philip K. Dick’s *Minority Report* [@dick]—the agent could simply defy the prediction by doing something else!  Likewise, why isn’t it enough that, because of the Time Hierarchy Theorem from computational complexity, predicting an agent’s choices might require as much computational power as the agent itself expends in making those choices?**

The obvious problem with such computational or self-referential arguments is that they can’t possibly prevent one agent, say Alice, from predicting the behavior of a *different* agent, say Bob.  And in order to do that, Alice doesn’t need unlimited computational power: *she only needs a bit more computational power than Bob has.*[^20]  Furthermore, Bob’s free will actually seems *more* threatened by Alice predicting his actions than by Bob predicting his own actions, supposing the latter were possible!  
This explains why I won’t be concerned in this essay with computational obstacles to prediction, but only with obstacles that arise from Alice’s physical inability to gather the requisite information about Bob. Admittedly, as MacKay [@mackay], Lloyd [@lloyd:freewill], and others have stressed, if Alice wants to predict Bob’s choices, then she needs to be careful not to *tell* Bob her predictions before they come true!  And that does indeed make predictions of Bob’s actions an unusual sort of knowledge: knowledge that can be falsified by the very fact of Bob’s learning it. But unlike some authors, I don’t make much of these observations.  For even if Alice can’t tell Bob what he’s going to do, it’s easy enough for her to demonstrate to him afterwards that *she* knew.  For example, Alice could put her prediction into a sealed envelope, and let Bob open the envelope only after the prediction came true.  Or she could send Bob a *cryptographic commitment* to her prediction, withholding the decryption key until afterward.  If Alice could do these things reliably, then it seems likely that Bob’s self-conception would change just as radically as if he knew the predictions in advance. Singulatarianism\[SINGULAR\] ---------------------------- **How could it possibly make a difference to anyone’s life whether his or her neural computations were buffeted around by microscopic events subject to Knightian uncertainty?  Suppose that only ordinary quantum randomness and classical chaos turned out to be involved: how on earth could that matter, outside the narrow confines of free-will debates?  
Is the variety of free will that apparently interests you—one based on the physical unpredictability of our choices—really a variety worth wanting, in Daniel Dennett’s famous phrase [@dennett:elbow]?** As a first remark, if there’s *anything* in this debate that all sides can agree on, hopefully they can agree that the truth (whatever it is) doesn’t care what we want, consider worth wanting, or think is necessary or sufficient to make our lives meaningful! But to lay my cards on the table, my interest in the issues discussed in this essay was sparked, in part, by considering arguments of the so-called Singulatarians.  These are people who look at current technological trends—including advances in neuroscience, nanotechnology, and AI, as well as the dramatic increases in computing power—and foresee a technological singularity perhaps $50$-$200$ years in our future (not surprisingly, the projected timescales vary).  By this, they mean not a mathematical singularity, but a rapid phase transition, perhaps analogous to the appearance of the first life on earth, the appearance of humanity, or the invention of agriculture or of writing.  In Singulatarians’ view, the next such change will happen around the time when humans manage to build artificial intelligences that are smarter than the smartest humans.  It stands to reason, the Singulatarians say, that such AIs will realize they can best further their goals (whatever those might be) by building AIs even smarter than themselves—and then the super-AIs will build yet smarter AIs, and so on, presumably until the fundamental physical limits on intelligence (whatever they are) are reached. Following the lead of science-fiction movies, of course one might wonder about the role of humans in the resulting world.  Will the AIs wipe us out, treating us as callously as humans have treated each other and most animal species?  Will the AIs keep us around as pets, or as revered (if rather dimwitted) creators and ancestors?  
Will humans be invited to upload their brains to the post-singularity computer cloud—with each human, perhaps, living for billions of years in his or her own simulated paradise?  Or will the humans simply merge their consciousnesses into the AI hive-mind, losing their individual identities but becoming part of something unimaginably greater? Hoping to find out, many Singulatarians have signed up to have their brains cryogenically frozen upon their (biological) deaths, so that some future intelligence (before or after the singularity) might be able to use the information therein to revive them.[^21]  One leading Singulatarian, Eliezer Yudkowsky, has written at length about the irrationality of people who *don’t* sign up for cryonics: how they value social conformity and not being perceived as weird over a non-negligible probability of living for billions of years.[^22] With some notable exceptions, academics in neuroscience and other relevant fields have tended to dismiss Singulatarians as nerds with hyperactive imaginations: people who have no idea how great the difficulties are in modeling the human brain or building a general-purpose AI.  Certainly, one could argue that the Singulatarians’ *timescales* might be wildly off.  And even if one accepts their timescales, one could argue that (almost by definition) the unknowns in their scenario are so great as to negate any *practical* consequences for the humans alive now.  For example, suppose we conclude—as many Singulatarians have—that the greatest problem facing humanity today is how to ensure that, when superhuman AIs are finally built, those AIs will be friendly to human concerns.  The difficulty is: *given our current ignorance about AI, how on earth should we act on that conclusion?*  Indeed, how could we have any confidence that whatever steps we *did* take wouldn’t backfire, and increase the probability of an *un*friendly AI? 
Yet on questions of principle—that is, of what the laws of physics could ultimately allow—I think the uncomfortable truth is that it’s the Singulatarians who are the scientific conservatives, while those who reject their vision as fantasy are scientific radicals.  For at some level, all the Singulatarians are doing is taking conventional thinking about physics and the brain to its logical conclusion.  If the brain is a meat computer, then given the right technology, why *shouldn’t* we be able to copy its program from one physical substrate to another?  And why couldn’t we then run multiple copies of the program in parallel, leading to all the philosophical perplexities discussed in Section \[UPLOADING\]? Maybe the conclusion is that we should all become Singulatarians!  But given the stakes, it seems worth exploring the possibility that there are scientific reasons why human minds *can’t* be casually treated as copyable computer programs: not just practical difficulties, or the sorts of question-begging appeals to human specialness that are child’s-play for Singulatarians to demolish.  If one likes, the origin of this essay was my own refusal to accept the lazy cop-out position, which answers the question of whether the Singulatarians’ ideas are *true* by repeating that their ideas are crazy and weird.  If uploading our minds to digital computers is indeed a fantasy, then I demand to know what it is about the physical universe that *makes* it a fantasy. The Libet Experiments\[LIBET\] ------------------------------ **Haven’t neuroscience experiments already proved that our choices aren’t nearly as unpredictable as we imagine?  Seconds before a subject is conscious of making a decision, EEG recordings can already detect the neural buildup to that decision.  
Given that empirical fact, isn’t any attempt to ground freedom in unpredictability doomed from the start?** It’s important to understand what experiments have and haven’t shown, since the details tend to get lost in popularization.  The celebrated experiments by Libet (see [@libet]) from the 1970s used EEGs to detect a readiness potential building up in a subject’s brain up to a second and a half before the subject made the freely-willed decision to flick her finger.  The implications of this finding for free will were avidly discussed—especially since the subject might not have been *conscious* of any intention to flick her finger until (say) half a second before the act.  So, did the prior appearance of the readiness potential prove that what we perceive as conscious intentions are just window-dressing, which our brains add after the fact? However, as Libet acknowledged, an important gap in the experiment was its lack of an adequate control: how often did the readiness potential form *without* the subject flicking her finger (which might indicate a decision that was vetoed at the last instant)?  Because of this gap, it was unclear to what extent the signal Libet found could actually be used for prediction. More recent experiments—see especially Soon et al. [@soonetal]—have tried to close this gap by using fMRI scans to predict *which* of two buttons a person would press.  Soon et al. [@soonetal] report that they were able to do so four or more seconds in advance, with success probability significantly better than chance (around $60\%$).  The question is, how much should we read into these findings? My own view is that the quantitative aspects are crucial when discussing these experiments.  For compare a (hypothetical) ability to predict human decisions a full minute in advance, to an ability to predict the same decisions $0.1$ seconds in advance, in terms of the intuitive threat to free will.  The two cases seem utterly different!  
A minute seems like clearly enough time for a deliberate choice, while $0.1$ seconds seems like clearly *not* enough time; on the latter scale, we are only talking about physiological reflexes.  (For intermediate scales like $1$ second, intuition—or at least my intuition—is more conflicted.) Similarly, compare a hypothetical ability to predict human decisions with 99% accuracy, against an ability to predict them with 51% accuracy.  I expect that only the former, and not the latter, would strike anyone as threatening or even uncanny.  For it’s obvious that human decisions are *somewhat* predictable: if they weren’t, there would be nothing for advertisers, salespeople, seducers, or demagogues to exploit!  Indeed, with zero predictability, we couldn’t even talk about *personality* or *character* as having any stability over time.  So *better-than-chance* predictability is just too low a bar for clearing it to have any relevance to the free-will debate.  One wants to know: *how much* better than chance?  Is the accuracy better than what my grandmother, or an experienced cold-reader, could achieve? Even within the limited domain of button-pressing, years ago I wrote a program that invited the user to press the ‘G’ or ‘H’ keys in any sequence—‘GGGHGHHHHGHG’—and that tried to predict which key the user would press next.  The program used only the crudest pattern-matching—in the past, was the subsequence GGGH more likely to be followed by G or H?  Yet humans are so poor at generating random digits that the program regularly achieved prediction accuracies of around $70\%$—no fMRI scans needed![^23] In summary, I believe neuroscience might *someday* advance to the point where it completely rewrites the terms of the free-will debate, by showing that the human brain is physically predictable by outside observers in the same sense as a digital computer.  But it seems nowhere close to that point today.  
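The essay doesn’t give the program’s code, but the crude pattern-matching it describes fits in a few lines of Python. The sketch below is my own reconstruction (all names are invented): for each new keypress, it looks at the longest recent context that has occurred before, and guesses whichever key most often followed it.

```python
from collections import defaultdict

def predict_next(history: str, order: int = 4) -> str:
    """Guess the user's next key ('G' or 'H') from their history so far,
    using the longest matching context of length up to `order`."""
    for k in range(min(order, len(history)), 0, -1):
        context = history[-k:]
        counts = defaultdict(int)
        # Which key followed each earlier occurrence of this context?
        for i in range(len(history) - k):
            if history[i:i + k] == context:
                counts[history[i + k]] += 1
        if counts:
            return max(counts, key=counts.get)
    return 'G'  # no history yet: guess arbitrarily

# A repeated pattern is trivially exploitable:
print(predict_next("GGGHGGGH"))  # → 'G' (context 'GGGH' was followed by 'G')

# Accuracy of online prediction on a human-like, not-very-random sequence:
keys = "GGGHGHHHHGHGGHGHHGGHGH"
hits = sum(predict_next(keys[:i]) == keys[i] for i in range(1, len(keys)))
print(hits / (len(keys) - 1))
```

Against a perfectly random sequence this predictor would hover at $50\%$; the point of the anecdote is that real humans trying to “act randomly” leave far more structure than that.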
Brain-imaging experiments have succeeded in demonstrating predictability with better-than-chance accuracy, in limited contexts and over short durations.  Such experiments are deeply valuable and worth trying to push as far as possible.  On the other hand, the mere *fact* of limited predictability is something that humans knew millennia before brain-imaging technologies became available.

Mind and Morality\[MIND\]
-------------------------

**Notwithstanding your protests that you won’t address the mystery of consciousness, your entire project seems geared toward the old, disreputable quest to find some sort of dividing line between real, conscious, biological humans and digital computer programs, even supposing that the latter could perfectly emulate the former.  Many thinkers have sought such a line before, but most scientifically-minded people regard the results as dubious.  For Roger Penrose, the dividing line involves neural structures called microtubules harnessing exotic quantum-gravitational effects (see Section \[PENROSE\]).  For the philosopher John Searle [@searle:redis], the line involves the brain’s unique biological causal powers: powers whose existence Searle loudly asserts but never actually explains.  For you, the line seems to involve hypothesized limits on predictability imposed by the No-Cloning Theorem.  But regardless of where the line gets drawn, let’s discuss where the rubber meets the road.  Suppose that in the far future, there are trillions of emulated brains (or ems) running on digital computers.  In such a world, would you consider it acceptable to murder an em (say, by deleting its file), simply because it lacked the Knightian unpredictability that biological brains might or might not derive from amplified quantum fluctuations?
If so, then isn’t that a cruel, arbitrary, meatist double standard—one that violates the most basic principles of your supposed hero, Alan Turing?** For me, this *moral* objection to my project is possibly the most pressing objection of all.  Will *I* be the one to shoot a humanoid robot pleading for its life, simply because the robot lacks the supposedly-crucial freebits, or Knightian unpredictability, or whatever else the magical stuff is supposed to be that separates humans from machines? Thus, perhaps my most important reason to take the freebit picture seriously is that it *does* suggest a reply to the objection: one that strikes me as both intellectually consistent and moral.  I simply need to adopt the following ethical stance: *I’m against any irreversible destruction of knowledge, thoughts, perspectives, adaptations, or ideas, except possibly by their owner.*  Such destruction is worse the more valuable the thing destroyed, the longer it took to create, and the harder it is to replace.  From this basic revulsion to *irreplaceable loss*, hatred of murder, genocide, the hunting of endangered species to extinction, and even (say) the burning of the Library of Alexandria can all be derived as consequences. Now, what about the case of deleting an emulated human brain from a computer memory?  The same revulsion applies in full force—*if the copy deleted is the last copy in existence*.  If, however, there are other extant copies, then the deleted copy can always be restored from backup, so deleting it seems at worst like property damage.  For biological brains, by contrast, whether such backup copies *can* be physically created is of course exactly what’s at issue, and the freebit picture conjectures a negative answer. 
These considerations suggest that the moral status of ems really *could* be different from that of organic humans, but for straightforward practical reasons that have nothing to do with meat chauvinism, or with question-begging philosophical appeals to human specialness.  The crucial point is that even a program that passed the Turing Test would revert to looking crude and automaton-like, *if you could read, trace, and copy its code*.  And whether the code *could* be read and copied might depend strongly on the machine’s physical substrate.  Destroying something that’s both as complex as a human being *and* one-of-a-kind could be regarded as an especially heinous crime. I see it as the great advantage of this reply that it makes no direct reference to the first-person experience of *anyone*, neither biological humans nor ems.  On this account, we don’t *need* to answer probably-unanswerable questions about whether or not ems would be conscious, in order to construct a workable moral code that applies to them.  Deleting the last copy of an em in existence should be prosecuted as murder, *not* because doing so snuffs out some inner light of consciousness (who is anyone else to know?), but rather because it deprives the rest of society of a unique, irreplaceable store of knowledge and experiences, precisely as murdering a human would.

Knightian Uncertainty and Physics\[KNIGHTIANPHYS\]
==================================================

Having spent almost half the essay answering *a priori* objections to the investigation I want to undertake, I’m finally ready to start the investigation itself!  In this section, I’ll set out and defend two propositions, both of which are central to what follows.
The first proposition is that *probabilistic uncertainty (like that of quantum-mechanical measurement outcomes) can’t possibly, by itself, provide the sort of indeterminism that could be relevant to the free-will debate.*  In other words, *if* we see a conflict between free will and the deterministic predictability of human choices, then we should see the same conflict between free will and *probabilistic* predictability, assuming the probabilistic predictions are as accurate as those of quantum mechanics.  Conversely, if we hold free will to be compatible with quantum-like predictability, then we might as well hold free will to be compatible with *deterministic* predictability also.  In my view, for a form of uncertainty to be relevant to free will, a necessary condition is that it be what the economists call *Knightian uncertainty*.  Knightian uncertainty simply refers to uncertainty that we lack a clear, agreed-upon way to quantify—like our uncertainty about the existence of extraterrestrial life, as opposed to our uncertainty about the outcome of a coin toss. The second proposition is that, in current physics, there appears to be only one source of Knightian uncertainty that could possibly be both fundamental and relevant to human choices.  That source is *uncertainty about the microscopic, quantum-mechanical details of the universe’s initial conditions (or the initial conditions of our local region of the universe)*.  In classical physics, there’s no known fundamental principle that prevents a predictor from learning the relevant initial conditions to whatever precision it likes, without disturbing the system to be predicted.  But in quantum mechanics there *is* such a principle, namely the uncertainty principle (or from a more modern standpoint, the No-Cloning Theorem).
It’s crucial to understand that this source of uncertainty is separate from the randomness of quantum measurement *outcomes*: the latter is much more often invoked in free-will speculations, but in my opinion it shouldn’t be.  If we know a system’s quantum state $\rho$, then quantum mechanics lets us calculate the probability of any outcome of any measurement that might later be made on the system.  But if we *don’t* know the state $\rho$, then $\rho$ itself can be thought of as subject to Knightian uncertainty. In the next two subsections, I’ll expand on and justify the above claims.

Knightian Uncertainty\[KNIGHTIAN\]
----------------------------------

A well-known argument maintains that the very concept of free will is *logically incoherent*.  The argument goes like this:

> Any event is either determined by earlier events (like the return of Halley’s comet), or else not determined by earlier events (like the decay of a radioactive atom).  If the event is determined, then clearly it isn’t free.  But if the event is *un*determined, it isn’t free either: it’s merely arbitrary, capricious, and random.  Therefore no event can be free.

As I’m far from the first to point out, the above argument has a gap, contained in the vague phrase “arbitrary, capricious, and random.”  An event can be arbitrary, in the sense of being undetermined by previous events, without being random in the narrower technical sense of being generated by some known or knowable probabilistic process.  The distinction between “arbitrary” and “random” is not just word-splitting: it plays a huge practical role in computer science, economics, and other fields.
To illustrate, consider the following two events:

> $E_{1}=$ Three weeks from today, the least significant digit of the Dow Jones average will be even
>
> $E_{2}=$ Humans will make contact with an extraterrestrial civilization within the next $500$ years

For both events, we are ignorant about whether they will occur, but we are ignorant in completely different ways.  For $E_{1}$, we can quantify our ignorance by a probability distribution, in such a way that almost any reasonable person would agree with our quantification.  For $E_{2}$, we can’t. For another example, consider a computer program that has a bug appearing only when a call to the random number generator returns the result $3456$.  That’s not necessarily a big deal—since with high probability, the program would need to be run thousands of times before the bug reared its head.  Indeed, today many problems are solved using *randomized algorithms* (such as Monte-Carlo simulation), which *do* have a small but nonzero probability of failure.[^24]  On the other hand, if the program has a bug that only occurs when the user *inputs* $3456$, that’s a much more serious problem.  For how can the programmer know, in advance, whether $3456$ is an input (maybe even the *only* input) that the user cares about?  So a programmer *must* treat the two types of uncertainty differently: she can’t just toss them both into a bin labeled “arbitrary, capricious, and random.”  And indeed, the difference between the two types of uncertainty shows up constantly in theoretical computer science and information theory.[^25] In economics, the second type of uncertainty—the type that can’t be objectively quantified using probabilities—is called *Knightian uncertainty*, after Frank Knight, who wrote about it extensively in the 1920s [@knight].
Knightian uncertainty has been invoked to explain phenomena from risk-aversion in behavioral economics to the 2008 financial crisis (and was popularized by Taleb [@taleb] under the name black swans).  An agent in a state of Knightian uncertainty might describe its beliefs using a convex set of probability distributions, rather than a single distribution.[^26]  For example, it might say that a homeowner will default on a mortgage with some probability between $0.1$ and $0.3$, but within that interval, be unwilling to quantify its uncertainty further.  The notion of probability intervals leads naturally to various generalizations of probability theory, of which the best-known is the *Dempster-Shafer theory of belief* [@shafer]. What does any of this have to do with the free-will debate?  As I said in Section \[FWFREEDOM\], from my personal perspective Knightian uncertainty seems like a *precondition* for free will as I understand the latter.  In other words, I *agree* with the free-will-is-incoherent camp when it says that a random event can’t be considered free in any meaningful sense.  Several writers, such as Kane [@fourviews], Balaguer [@balaguer], Satinover [@satinover], and Koch [@koch], have speculated that the randomness inherent in quantum-mechanical wavefunction collapse, were it relevant to brain function, could provide all the scope for free will that’s needed.  But I think those writers are mistaken on this point. For me, the bottom line is simply that it seems like a sorry and pathetic free will that’s ruled by ironclad, externally-knowable statistical laws, and that retains only a ceremonial role, analogous to spinning a roulette wheel or shuffling a deck of cards.  Should we say that a radioactive nucleus has free will, just because (according to quantum mechanics) we can’t predict exactly when it will decay, but can only calculate a precise probability distribution over the decay times?  
That seems perverse—especially since given *many* nuclei, we can predict almost perfectly what *fraction* will have decayed by such-and-such a time.  Or imagine an artificially-intelligent robot that used nuclear decays as a source of random numbers.  Does anyone seriously maintain that, if we swapped out the actual decaying nuclei for a mere pseudorandom computer simulation of such nuclei (leaving all other components unchanged), the robot would suddenly be robbed of its free will?  While I’m leery of arguments from obviousness in this subject, it really *does* seem to me that if we say the robot has free will in the first case then we should also say so in the second. And thus, I think that the free-will-is-incoherent camp would be right, *if* all uncertainty were probabilistic.  But I consider it far from obvious that all uncertainty *is* (usefully regarded as) probabilistic.  Some uncertainty strikes me as Knightian, in the sense that rational people might never even reach agreement about how to assign the probabilities.  And while Knightian uncertainty might or might not be relevant to predicting human choices, I definitely (for reasons I’ll discuss later) don’t think that current knowledge of physics or neuroscience lets us exclude the possibility. At this point, though, we’d better hear from those who reject the entire concept of Knightian uncertainty.  Some thinkers—I’ll call them *Bayesian fundamentalists*—hold that Bayesian probability theory provides the only sensible way to represent uncertainty.  On that view, Knightian uncertainty is just a fancy name for someone’s failure to carry a probability analysis far enough. In support of their position, Bayesian fundamentalists often invoke the so-called *Dutch book arguments* (see for example Savage [@savage]), which say that any rational agent satisfying a few axioms must behave *as if* its beliefs were organized using probability theory.  
Intuitively, even if you claim not to have any opinion whatsoever about (say) the probability of life being discovered on Mars, I can still elicit a probabilistic prediction from you, by observing which bets about the question you will or won’t accept. However, a central assumption on which the Dutch book arguments rely—basically, that a rational agent shouldn’t mind taking at least one side of any bet—has struck many commentators as dubious.  And if we drop that assumption, then the path is open to Knightian uncertainty (involving, for example, convex *sets* of probability distributions). Even if we accept the standard derivations of probability theory, the bigger problem is that Bayesian agents can have different priors.  If one strips away the philosophy, Bayes’ rule is just an elementary mathematical fact about how one should update one’s prior beliefs in light of new evidence.  So one can’t use Bayesianism to justify a belief in the existence of *objective* probabilities underlying all events, unless one is also prepared to defend the existence of an objective prior.  In economics, the idea that all rational agents can be assumed to start with the same prior is called the *Common Prior Assumption*, or CPA.  Assuming the CPA leads to some wildly counterintuitive consequences, most famously *Aumann’s agreement theorem* [@aumann].  That theorem says that two rational agents with common priors (but differing information) can never agree to disagree: as soon as their opinions on any subject become common knowledge, their opinions must be equal. The CPA has long been controversial; see Morris [@morris] for a summary of arguments for and against.  To my mind, though, the real question is: *what could possibly have led anyone to take the CPA seriously in the first place?* Setting aside methodological arguments in economics[^27] (which don’t concern us here), I’m aware of two substantive arguments in favor of the CPA.  
The first argument is that, if two rational agents (call them Alice and Bob) have different priors, then Alice will realize that *if she had been born Bob*, she would have had Bob’s prior, and Bob will realize that if he had been born Alice, he would have had Alice’s.  But if Alice and Bob are indeed rational, then why should they assign any weight to personal accidents of their birth, which are clearly irrelevant to the objective state of the universe?  (See Cowen and Hanson [@cowenhanson] for a detailed discussion of this argument.) The simplest reply is that, even if Alice and Bob accepted this reasoning, they would *still* generally end up with different priors, unless they furthermore shared the same *reference class*: that is, the set of all agents who they imagine they could have been if they weren’t themselves.  For example, if Alice includes all humans in her reference class, while Bob includes only those humans capable of understanding Bayesian reasoning such as he and Alice are engaging in now, then their beliefs will differ.  But requiring agreement on the reference class makes the argument circular—presupposing, as it does, a God’s-eye perspective transcending the individual agents, the very thing whose existence or relevance was in question.  Section \[INDEXFREE\] will go into more detail about indexical puzzles (that is, puzzles concerning the probability of your own existence, the likelihood of having been born at one time rather than another, etc).  But I hope this discussion already makes clear how much debatable metaphysics lurks in the assumption that a single Bayesian prior governs (or *should* govern) every probability judgment of every rational being. The second argument for the CPA is more ambitious: it seeks to tell us *what* the true prior is, not merely that it exists.  According to this argument, any sufficiently intelligent being ought to use what’s called the *universal prior* from algorithmic information theory.  
This is basically a distribution that assigns a probability proportional to $2^{-n}$ to every possible universe describable by an $n$-bit computer program.  In Appendix \[KOLMOG\], I’ll examine this notion further, explain why some people [@hutter; @schmidhuber] have advanced it as a candidate for the true prior, but also explain why, despite its mathematical interest, I don’t think it can fulfill that role.  (Briefly, a predictor using the universal prior can be thought of as a superintelligent entity that figures out the right probabilities almost as fast as is information-theoretically possible.  But that’s conceptually very different from an entity that *already knows* the probabilities.)

Quantum Mechanics and the No-Cloning Theorem\[NOCLONE\]
-------------------------------------------------------

While defending the meaningfulness of Knightian uncertainty, the last section left an obvious question unanswered: where, in a law-governed universe, could we possibly *find* Knightian uncertainty? Granted, in almost any part of science, it’s easy to find systems that are effectively subject to Knightian uncertainty, in that we don’t yet have models for the systems that capture all the important components and their probabilistic interactions.  The earth’s climate, a country’s economy, a forest ecosystem, the early universe, a high-temperature superconductor, or even a flatworm or a cell are examples.  Even if our probabilistic models of many of these systems are improving over time, none of them come anywhere close to (say) the quantum-mechanical model of the hydrogen atom: a model that answers essentially *everything* one could think to ask within its domain, modulo an unavoidable (but precisely-quantifiable) random element. However, in all of these cases (the earth’s climate, the flatworm, etc.), the question arises: what grounds could we ever have to think that Knightian uncertainty was *inherent* to the system, rather than an artifact of our own ignorance?
Of course, one could have asked the same question about probabilistic uncertainty, before the discovery of quantum mechanics and its no-hidden-variable theorems (see Section \[QMHV\]).  But the fact remains that today, we don’t have any physical theory that demands Knightian uncertainty in anything like the way that quantum mechanics demands probabilistic uncertainty.  And as I said in Section \[KNIGHTIAN\], I insist that the merely probabilistic aspect of quantum mechanics can’t do the work that many free-will advocates have wanted it to do for a century. On the other hand, no matter how much we’ve learned about the dynamics of the physical world, there remains one enormous source of Knightian uncertainty that’s been hiding in plain sight, and that receives surprisingly little attention in the free-will debate.  This is our ignorance of the relevant *initial conditions*.  By this I mean both the initial conditions of the entire universe or multiverse (say, at the Big Bang), and the indexical conditions, which characterize the part of the universe or multiverse in which *we* happen to reside.  To make a prediction, of course one needs initial conditions as well as dynamical laws: indeed, outside of idealized toy problems, the initial conditions are typically the larger part of the input.  Yet leaving aside recent cosmological speculations (and genericity assumptions, like those of thermodynamics), the specification of initial conditions is normally not even considered the *task* of physics.  So, if there are no laws that fix the initial conditions, or even a distribution over possible initial conditions—if there aren’t even especially promising *candidates* for such laws—then why isn’t this just what free-will advocates have been looking for? It will be answered immediately that there’s an excellent reason why not.  
Namely, whatever else we do or don’t know about the universe’s initial state (e.g., at the Big Bang), clearly nothing about it was determined by any of *our* choices!  (This is the assumption made explicit in step (ii) of van Inwagen’s Consequence Argument from Section \[CONSEQUENCE\].) The above answer might strike the reader as conclusive.  Yet *if* our interest is in actual, physical predictability—rather than in the metaphysical concept of determination—then notice that it’s no longer conclusive at all.  For we still need to ask: how much can we *learn* about the initial state by making measurements?  This, of course, is where quantum mechanics might become relevant. It’s actually easiest for our purposes to forget the famous *uncertainty principle*, and talk instead about the *No-Cloning Theorem*.  The latter is simply the statement that there’s no physical procedure, consistent with quantum mechanics, that takes as input a system in an arbitrary quantum state $\left\vert \psi\right\rangle $,[^28] and outputs two systems *both* in the state $\left\vert \psi \right\rangle $.[^29]  Intuitively, it’s not hard to see why: a measurement of (say) a qubit $\left\vert \psi\right\rangle =\alpha\left\vert 0\right\rangle +\beta\left\vert 1\right\rangle $ reveals only a single, probabilistic bit of information about the continuous parameters $\alpha$ and $\beta$; the rest of the information vanishes forever.  
The proof of the No-Cloning Theorem (in its simplest version) is as easy as observing that the cloning map,$$\begin{aligned} \left( \alpha\left\vert 0\right\rangle +\beta\left\vert 1\right\rangle \right) \left\vert 0\right\rangle & \longrightarrow\left( \alpha\left\vert 0\right\rangle +\beta\left\vert 1\right\rangle \right) \left( \alpha \left\vert 0\right\rangle +\beta\left\vert 1\right\rangle \right) \\ & =\alpha^{2}\left\vert 0\right\rangle \left\vert 0\right\rangle +\alpha \beta\left\vert 0\right\rangle \left\vert 1\right\rangle +\alpha \beta\left\vert 1\right\rangle \left\vert 0\right\rangle +\beta^{2}\left\vert 1\right\rangle \left\vert 1\right\rangle ,\end{aligned}$$ acts nonlinearly on the amplitudes, but in quantum mechanics, unitary evolution must be linear. Yet despite its mathematical triviality, the No-Cloning Theorem has the deep consequence that quantum states have a certain privacy: unlike classical states, they can’t be copied promiscuously around the universe.  One way to gain intuition for the No-Cloning Theorem is to consider some striking cryptographic protocols that rely on it.  In *quantum key distribution* [@bb84]—something that’s already (to a small extent) been commercialized and deployed—a sender, Alice, transmits a secret key to a receiver, Bob, by encoding the key in a collection of qubits.  The crucial point is that if an eavesdropper, Eve, tries to learn the key by measuring the qubits, then the very fact that she measured the qubits will be detectable by Alice and Bob—so Alice and Bob can simply keep trying until the channel is safe.  
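Returning to the proof of the No-Cloning Theorem sketched above: the nonlinearity can be checked concretely. The short Python sketch below (my own illustration, not from the essay) writes the would-be cloning map directly on amplitude vectors and shows that its action on a superposition disagrees with what linearity would demand.

```python
import math

def tensor(u, v):
    """Kronecker product of two state vectors, represented as plain lists."""
    return [x * y for x in u for y in v]

def clone(psi):
    """The hypothetical cloning map |psi>|0> -> |psi>|psi>,
    written directly on the amplitudes of a single qubit."""
    return tensor(psi, psi)

zero = [1.0, 0.0]
one  = [0.0, 1.0]
plus = [1 / math.sqrt(2), 1 / math.sqrt(2)]   # (|0> + |1>) / sqrt(2)

# If cloning were linear, clone(plus) would have to equal
# (clone(zero) + clone(one)) / sqrt(2).  It doesn't:
linear_guess = [(a + b) / math.sqrt(2) for a, b in zip(clone(zero), clone(one))]
print(clone(plus))    # ≈ [0.5, 0.5, 0.5, 0.5]
print(linear_guess)   # ≈ [0.707, 0.0, 0.0, 0.707]
```

The first vector is the honest clone $\left(\alpha\left\vert 0\right\rangle+\beta\left\vert 1\right\rangle\right)^{\otimes 2}$ with its cross terms; the second is what any linear (hence any unitary) evolution would have produced. Since the two differ, no unitary can implement cloning.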
*Quantum money*, proposed decades ago by Wiesner [@wiesner] and developed further in recent years [@aar:qcopy; @achristiano; @knots], would exploit the No-Cloning Theorem even more directly, to create cash that can’t be counterfeited according to the laws of physics, but that can nevertheless be verified as genuine.[^30]  A closely related proposal, *quantum software copy-protection* (see [@aar:qcopy; @achristiano]), would exploit the No-Cloning Theorem in a still more dramatic way: to create quantum states $\left\vert \psi_{f}\right\rangle $ that can be used to evaluate some function $f$, but that can’t feasibly be used to create more states with which $f$ can be evaluated.  Research on quantum copy-protection has shown that, at least in a few special cases (and maybe more broadly), it’s possible to create a physical object that

1. interacts with the outside world in an interesting and nontrivial way, yet

2. effectively hides from the outside world the information needed to predict how the object will behave in future interactions.

When put that way, the *possible* relevance of the No-Cloning Theorem to free-will discussions seems obvious!  And indeed, a few bloggers and others[^31] have previously speculated about a connection.  Interestingly, their motivation for doing so has usually been to *defend compatibilism* (see Section \[COMPAT\]).  In other words, they’ve invoked the No-Cloning Theorem to explain why, despite the mechanistic nature of physical law, human decisions will nevertheless remain unpredictable *in practice*.  In a discussion of this issue, one commenter[^32] opined that, while the No-Cloning Theorem *does* put limits on physical predictability, human brains will also remain unpredictable for countless more prosaic reasons that have nothing to do with quantum mechanics.  Thus, he said, invoking the No-Cloning Theorem in free-will discussions is like hiring the world’s most high-powered lawyer to get you out of a parking ticket.
Personally, though, I think the world’s most high-powered lawyer might ultimately be needed here!  For the example of the Singulatarians (see Section \[SINGULAR\]) shows why, in these discussions, it doesn’t suffice to offer merely practical reasons why copying a brain state is hard.  Every practical problem can easily be countered by a speculative technological answer—one that assumes a future filled with brain-scanning nanorobots and the like.  If we want a proposed obstacle to copying to survive unlimited technological imagination, then the obstacle had better be grounded in the laws of physics. So as I see it, the real question is: once we disallow quantum mechanics, does there remain any *classical* source of fundamental unclonability in physical law?  Some might suggest, for example, the impossibility of keeping a system perfectly isolated from its environment during the cloning process, or the impossibility of measuring continuous particle positions to infinite precision.  But either of these would require very nontrivial arguments (and if one wanted to invoke continuity, one would also have to address the apparent breakdown of continuity at the Planck scale).  There are also formal analogues of the No-Cloning Theorem in various classical settings, but none of them seem able to do the work required here.[^33]  As far as current physics can say, *if* copying a bounded physical system is fundamentally impossible, then the reason for the impossibility seems ultimately traceable to quantum mechanics. Let me end this section by discussing *quantum teleportation*, since nothing more suggestively illustrates the philosophical work that the No-Cloning Theorem can potentially do.  Recall, from Section \[UPLOADING\], the paradox raised by teleportation machines.  Namely, after a perfect copy of you has been reconstituted on Mars from the information in radio signals, what should be done with the original copy of you back on Earth?  Should it be euthanized? 
Like other such paradoxes, this one need not trouble us if (because of the No-Cloning Theorem, or whatever other reason) we drop the assumption that such copying is possible—at least with great enough fidelity for the copy on Mars to be a second instantiation of you.  However, what makes the situation more interesting is that there *is* a famous protocol, discovered by Bennett et al. [@bbcjpw], for teleporting an arbitrary quantum state $\left\vert \psi\right\rangle $ by sending classical information only.  (This protocol also requires quantum entanglement, in the form of *Bell pairs* $\frac{\left\vert 00\right\rangle +\left\vert 11\right\rangle }{\sqrt{2}}$, shared between the sender and receiver in advance, and one Bell pair gets used up for every qubit teleported.) Now, a crucial feature of the teleportation protocol is that, in order to determine which classical bits to send, the sender needs to perform a measurement on her quantum state $\left\vert \psi\right\rangle $ (together with her half of the Bell pair) that *destroys* $\left\vert \psi\right\rangle $.  In other words, in quantum teleportation, the destruction of the original copy is not an extra decision that one needs to make; rather, it happens as an inevitable byproduct of the protocol itself!  Indeed this must be so, since otherwise quantum teleportation could be used to violate the No-Cloning Theorem.

The Freebit Picture\[FREEBITS\]
-------------------------------

At this point one might interject: theoretical arguments about the No-Cloning Theorem are well and good, but even if accepted, they still don’t provide any concrete picture of how Knightian uncertainty could be relevant to human decision-making. So let me sketch a possible picture (the only one I can think of, consistent with current physics), which I call the freebit picture.  At the Big Bang, the universe had some particular quantum state $\left\vert \Psi\right\rangle $.
If known, $\left\vert \Psi\right\rangle $ would of course determine the universe’s future history, modulo the probabilities arising from quantum measurement.  However, *because* $\left\vert \Psi\right\rangle $ is the state of the whole universe (including us), we might refuse to take a God’s-eye view of $\left\vert \Psi\right\rangle $, and insist on thinking about $\left\vert \Psi\right\rangle $ differently than about an ordinary state that we prepare for ourselves in the lab.  In particular, we might regard at least some (not all) of the qubits that constitute $\left\vert \Psi\right\rangle $ as what I’ll call *freebits*.  A freebit is simply a qubit for which the most complete physical description possible involves Knightian uncertainty.  While the details aren’t so important, I give a brief mathematical account of freebits in Appendix \[FREESTATES\].  For now, suffice it to say that a *freestate* is a convex set of quantum mixed states, and a freebit is a $2$-level quantum system in a freestate. Thus, by the *freebit picture*, I mean the picture of the world according to which 1. due to Knightian uncertainty about the universe’s initial quantum state $\left\vert \Psi\right\rangle $, at least some of the qubits found in nature are regarded as freebits, and 2. the presence of these freebits makes predicting certain future events—possibly including some human decisions—physically impossible, even probabilistically and even with arbitrarily-advanced future technology. Section \[CHAOS\] will say more about the biological aspects of the freebit picture: that is, the actual chains of causation that could in principle connect freebits to (say) a neuron firing or not firing.  In the rest of this section, I’ll discuss some physical and conceptual questions about freebits themselves. Firstly, why is it important that freebits be qubits, rather than *classical* bits subject to Knightian uncertainty?  The answer is that only for qubits can we appeal to the No-Cloning Theorem. 
 Even if the value of a classical bit $b$ can’t be determined by measurements on the entire rest of the universe, a superintelligent predictor could always learn $b$ by non-invasively measuring $b$ *itself*.  But the same is not true for a qubit.

Secondly, isn’t Knightian uncertainty in the eye of the beholder?  That is, why couldn’t one observer regard a given qubit as a freebit, while a different observer, with more information, described the same qubit by an ordinary quantum mixed state $\rho$?  The answer is that our criterion for what counts as a freebit is extremely stringent.  Given a $2$-level quantum system $S$, *if a superintelligent demon could reliably learn the reduced density matrix $\rho$ of $S$, via arbitrary measurements on anything in the universe (including $S$ itself), then $S$ is not a freebit.*  Thus, to qualify as a freebit, $S$ must be a freely moving part of the universe’s quantum state $\left\vert \Psi\right\rangle $: it must not be possible (even in principle) to trace $S$’s causal history back to any physical process that generated $S$ according to a known probabilistic ensemble.  Instead, our Knightian uncertainty about $S$ must (so to speak) go all the way back, and be traceable to uncertainty about the initial state of the universe.

To illustrate the point: suppose we detect a beam of photons with varying polarizations.  For the most part, the polarizations look uniformly random (i.e., like qubits in the maximally mixed state).  But there is a slight bias toward the vertical axis, and the bias is slowly changing over time, in a way not fully accounted for by our model of the photons.  So far, we can’t rule out the possibility that freebits might be involved.  However, suppose we later learn that the photons are coming from a laser in another room, and that the polarization bias is due to drift in the laser that can be characterized and mostly corrected.  Then the scope for freebits is correspondingly reduced. 
Someone might interject: but *why* was there drift in the laser?  Couldn’t freebits have been responsible for the drift itself?  The difficulty is that, *even if so*, we still couldn’t use those freebits to argue for Knightian uncertainty in the laser’s *output*.  For between the output photons, and whatever freebits might have caused the laser to be configured as it was, there stands a classical, macroscopic intermediary: the laser itself.  If a demon had wanted to predict the polarization drift in the output photons, the demon could simply have traced the photons back to the laser, then *non-invasively measured the laser’s classical degrees of freedom*—cutting off the causal chain there and ignoring any further antecedents.  In general, given some quantum measurement outcome $Q$ that we’re trying to predict, if there exists a classical observable $C$ that could have been non-invasively measured long before $Q$, and that, if measured, would have let the demon probabilistically predict $Q$ to arbitrary accuracy (in the same sense that radioactive decay is probabilistically predictable), then I’ll call $C$ a *past macroscopic determinant (PMD)* for $Q$.

In the freebit picture, we’re exclusively interested in the quantum states—if there are any!—that *can’t* be grounded in PMDs, but can only be traced all the way back to the early universe, with no macroscopic intermediary along the way that screens off the early universe’s causal effects.  The reason is simple: because such states, if they exist, are the only ones that our superintelligent demon, able to measure all the macroscopic observables in the universe, would *still* have Knightian uncertainty about.  In other words, such states are the only possible freebits. 
Of course this immediately raises a question:

> **(\*) In the actual universe, *are* there any quantum states that can’t be grounded in PMDs?**

A central contention of this essay is that pure thought doesn’t suffice to answer question (\*): here we’ve reached the limits of where conceptual analysis can take us.  There are possible universes consistent with the rules of quantum mechanics where the requisite states exist, and other such universes where they don’t exist, and deciding which kind of universe *we* inhabit seems to require scientific knowledge that we don’t have. Some people, while agreeing that logic and quantum mechanics don’t suffice to settle question (\*), would nevertheless say we can settle it using simple facts about astronomy.  At least near the surface of the earth, they’d ask, what quantum states could there possibly be that *didn’t* originate in PMDs?  Most of the photons impinging on the earth come from the sun, whose physics is exceedingly well-understood.  Of the subatomic particles that could conceivably tip the scales of (say) a human neural event, causing it to turn out one way rather than another, some might have causal pathways that lead back to other astronomical bodies (such as supernovae), the earth’s core, etc.  But it seems hard to imagine how any of the latter possibilities *wouldn’t* serve as PMDs: that is, how they wouldn’t effectively screen off any Knightian uncertainty from the early universe. To show that the above argument is inconclusive, one need only mention the cosmic microwave background (CMB) radiation.  CMB photons pervade the universe, famously accounting for a few percent of the static in old-fashioned TV sets.  Furthermore, many CMB photons are believed to reach the earth having maintained quantum coherence ever since being emitted at the so-called *time of last scattering*, roughly $380,000$ years after the Big Bang.  Finally, unlike (say) neutrinos or dark matter, CMB photons readily interact with matter.  
In short, we’re continually bathed with at least one type of radiation that seems to satisfy most of the freebit picture’s requirements! On the other hand, no sooner is the CMB suggested for this role than we encounter two serious objections.  The first is that the time of last scattering, when the CMB photons were emitted, is separated from the Big Bang itself by $380,000$ years.  So if we wanted to postulate CMB photons as freebit carriers, then we’d also need a story about why the hot early universe should *not* be considered a PMD, and about how a qubit might have made it intact from the Big Bang—or at any rate, from as far back as current physical theory can take us—to the time of last scattering.  The second objection asks us to imagine someone *shielded* from the CMB: for example, someone in a deep underground mine.  Would such a person be bereft of Knightian freedom, at least while he or she remained in the mine? Because of these objections, I find that, while the CMB might be one *piece* of a causal chain conveying a qubit to us from the early universe (without getting screened off by a PMD), it can’t possibly provide the full story.  It seems to me that convincingly answering question (\*) would require something like a census of the possible causal chains from the early universe to ourselves that are allowed by particle physics and cosmology.  I don’t know whether the requisite census is beyond present-day science, but it’s certainly beyond *me*!  Note that, if it could be shown that *all* qubits today can be traced back to PMDs, and that the answer to (\*) is negative, then the freebit picture would be falsified. Amplification and the Brain\[CHAOS\] ------------------------------------ We haven’t yet touched on an obvious question: *once freebits have made their way into (say) a brain, by whatever means, how could they then tip the scales of a decision?  
*But it’s not hard to suggest plausible answers to this question, without having to assume anything particularly exotic about either physics or neurobiology.  Instead, one can appeal to the well-known phenomenon of chaos (i.e., sensitive dependence on initial conditions) in dynamical systems. The idea that chaos in brain activity might somehow underlie free will is an old one.  However, that idea has traditionally been rejected, on the sensible ground that a classical chaotic system is still perfectly deterministic!  Our inability to measure the initial conditions to unlimited precision, and our consequent inability to predict very far into the future, seem at best like practical limitations. Thus, a revised idea has held that the role of chaos for free will might be to take quantum fluctuations—which, as we know, are *not* deterministic (see Section \[QMHV\])—and amplify those fluctuations to macroscopic scale.  However, this revised idea has also been rejected, on the (again sensible) ground that, even if true, it would only make the brain a *probabilistic* dynamical system, which still seems mechanistic in any meaningful sense. The freebit picture makes a further revision: namely, it postulates that chaotic dynamics in the brain can have the effect of amplifying *freebits* (i.e., microscopic Knightian uncertainty) to macroscopic scale.  If nothing else, this overcomes the elementary objections above.  Yes, the resulting picture might still be wrong, but not for some simple *a priori* reason—and to me, that represents progress! It’s long been recognized that neural processes relevant to cognition can be sensitive to microscopic fluctuations.  An important example is the opening and closing of a neuron’s sodium-ion channels, which normally determines whether and for how long a neuron fires.  This process is modeled probabilistically (in particular, as a Markov chain) by the standard *Hodgkin-Huxley equations* [@hodgkinhuxley] of neuroscience.  
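The amplification mechanism that this section leans on—chaotic sensitivity to initial conditions—can be illustrated with a standard toy example (my own illustration, emphatically not a model of any neural process): the logistic map at $r = 4$.  Two trajectories whose starting points differ only in the twelfth decimal place track each other for a few dozen steps, then become macroscopically different.

```python
def logistic_orbit(x0, r=4.0, steps=80):
    """Iterate the logistic map x -> r*x*(1-x), which is chaotic at r = 4."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

# Two trajectories whose initial conditions differ by one part in 10^12:
a = logistic_orbit(0.2)
b = logistic_orbit(0.2 + 1e-12)

# The orbits agree almost perfectly at first ...
early_gap = max(abs(x - y) for x, y in zip(a[:10], b[:10]))
# ... but the tiny discrepancy roughly doubles per step, and within the
# 80 steps it has been amplified to a macroscopic difference.
late_gap = max(abs(x - y) for x, y in zip(a, b))
```

The point of the toy model is only this: in a chaotic system, a perturbation far below any practical measurement precision is not averaged away but exponentially amplified—which is exactly the property a freebit would need in order to eventually matter at the scale of an ion channel.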
Of course, one then has to ask: is the apparent randomness in the behavior of the sodium-ion channels ultimately traceable back to quantum mechanics (and if so, by what causal routes)?  Or does the randomness merely reflect our ignorance of relevant classical details? Balaguer [@balaguer] put the above question to various neuroscientists, and was told either that the answer is unknown or that it’s outside neuroscience’s scope.  For example, he quotes Sebastian Seung as saying:

> The question of whether \[synaptic transmission and spike firing\] are ‘truly random’ processes in the brain isn’t really a neuroscience question.  It’s more of a physics question, having to do with statistical mechanics and quantum mechanics.

He also quotes Christof Koch as saying:

> At this point, we do not know to what extent the random, i.e. stochastic, neuronal processes we observe are due to quantum fluctuations (à la Heisenberg) that are magnified by chemical and biological mechanisms or to what extent they just depend on classical physics (i.e. thermodynamics) and statistical fluctuations in the underlying molecules.

In his paper “A scientific perspective on human choice” [@sompolinsky], the neuroscientist Haim Sompolinsky offers a detailed review of what’s known about the brain’s sensitivity to microscopic fluctuations.  Though skeptical about any role for such fluctuations in cognition, he writes: > In sum, given the present state of our understanding of brain processes and given the standard interpretation of quantum mechanics, we cannot rule out the possibility that the brain is truly an indeterministic system; that because of quantum indeterminism, there are certain circumstances where choices produced by brain processes are not fully determined by the antecedent brain process and the forces acting on it.  If this is so, then the first prerequisite \[i.e., indeterminism\] of metaphysical free will ... 
> may be consistent with the scientific understanding of the brain.[^34]

To make the issue concrete: suppose that, with godlike knowledge of the quantum state $\left\vert \Psi\right\rangle $ of the entire universe, you wanted to change a particular decision made by a particular human being—and do so *without* changing anything else, except insofar as the other changes flowed from the changed decision itself.  Then the question that interests us is: *what sorts of changes to $\left\vert \Psi \right\rangle $ would or wouldn’t suffice to achieve your aim?*  For example, would it suffice to change the energy of a single photon impinging on the subject’s brain?  Such a photon might get absorbed by an electron, thereby slightly altering the trajectories of a few molecules near one sodium-ion channel in one neuron, thereby initiating a chain of events that ultimately causes the ion channel to open, which causes the neuron to fire, which causes other neurons to fire, etc.  If that sort of causal chain is plausible—which, of course, is an empirical question—then at least as far as neuroscience is concerned, the freebit picture would seem to have the raw material that it needs.

Some people might shrug at this, and regard our story of the photon that broke the camel’s back as so self-evidently plausible that the question isn’t even scientifically interesting!  So it’s important to understand that there are two details of the story that matter enormously, if we want the freebit picture to be viable.  The first detail concerns the *amount of time* needed for a microscopic change in a quantum state to produce a macroscopic change in brain activity.  Are we talking seconds? hours? days?[^35]  The second detail concerns *localization*.  
We’d like our change to the state of a single photon to be surgical in its effects: it should change a person’s neural firing patterns enough to alter that person’s actions, but any other macroscopic effects of the changed photon state should be mediated through the change to the brain state.  The reason for this requirement is simply that, if it fails, then a superintelligent predictor might non-invasively learn about the photon by measuring its *other* macroscopic effects, and ignoring its effects on the brain state. To summarize our questions: > *What are the causal pathways by which a microscopic fluctuation can get chaotically amplified, in human or other animal brains?  What are the characteristic timescales for those pathways?  What side effects do the pathways produce, separate from their effects on cognition?*[^36] In Section \[FALSIFY\], I’ll use these questions—and the freebit picture’s dependence on their answers—to argue that the picture makes falsifiable predictions.  For now, I’ll simply say that these questions strike me as wide open to investigation, and not only in principle.  That is, I can easily imagine that in (say) fifty years, neuroscience, molecular biology, and physics will be able to say more about these questions than they can today.  And crucially, the questions strike me as scientifically interesting regardless of one’s philosophical predilections.  Indeed, one could reject the freebit picture completely, and still see progress on these questions as a rising tide that lifts all boats in the scientific understanding of free will.  The freebit picture seems unusual only in *forcing* us to address these questions. Against Homunculi\[HOMUNCULUS\] ------------------------------- A final clarification is in order about the freebit picture.  
One might worry that freebits are playing the role of a homunculus: the secret core of a person; a smaller person inside who directs the brain like a manager, puppeteer, or pilot; Ryle’s ghost in the machine.  But in philosophy and cognitive science, the notion of a homunculus has been rightly laughed off the stage.  Like the theory that a clock can only work if there’s a smaller clock inside, the homunculus theory blithely offers a black box where a *mechanism* is needed, and it leads to an obvious infinite regress: who’s in charge of the homunculus? Furthermore, if this were really the claim at issue, one would want to know: why do humans (and other animals) *have* such complicated brains in the first place?  Why shouldn’t our skulls be empty, save for tiny freebit antennae for picking up signals from the Big Bang? But whatever other problems the freebit picture has, I think it’s innocent of the charge of homunculism.  On the freebit picture—as, I’d argue, on *any* sane understanding of the world—the physical activity of the brain retains its starring role in cognition.  To whatever extent your true self has *any* definite location in spacetime, that location is in your brain.  To whatever extent your behavior is predictable, that predictability ultimately derives from the predictability of your brain.  And to whatever extent your choices have an author, *you* are their author, not some homunculus secretly calling the shots. But if this is so, one might ask, then *what role could possibly be left for freebits*, besides the banal role of an unwanted noise source, randomly jostling neural firing patterns this way and that?  Perhaps the freebit picture’s central counterintuitive claim is that freebits *can* play a more robust role than that of glorified random-number generator, without usurping the brain’s causal supremacy.  
Or more generally: *an organized, complex system can include a source of pure Knightian unpredictability, which foils probabilistic forecasts made by outside observers, yet need not play any role in explaining the system’s organization or complexity.*  While I confess that this claim is strange, I fail to see any *logical* difficulty with it, nor do I see any way to escape it if the freebit picture is accepted. To summarize, on the freebit picture, freebits are simply part of the explanation for how a brain can reach decisions that are not probabilistically predictable by outside observers, and that are therefore free in the sense that interests us.  As such, on this picture freebits are just one ingredient among many in the physical substrate of the mind.  I’d no more consider them the true essence of a person than the myelin coating that speeds transmission between neurons. Freedom from the Inside Out\[FIO\] ================================== The discussion in Section \[HOMUNCULUS\] might remind us about the importance of stepping back.  Setting aside any other concerns, *isn’t it anti-scientific insanity to imagine that our choices today could correlate nontrivially with the universe’s microstate at the Big Bang?  Why shouldn’t this idea just be thrown out immediately?* In this section, I’ll discuss an unusual perspective on time, causation, and boundary conditions: one that, *if* adopted, makes the idea of such a correlation seem not particularly crazy at all.  The interesting point is that, despite its strangeness, this perspective seems perfectly compatible with everything we know about the laws of physics.  The perspective is not new; it was previously suggested by Carl Hoefer [@hoefer] and independently by Cristi Stoica [@stoica; @stoica:qm].  (As Hoefer discusses, centuries before either of them, Kant appears to have made related observations in his *Critique of Practical Reason*, while trying to reconcile moral responsibility with free will!  
However, Kant’s way of putting things strikes me as obscure and open to other interpretations.)  Adopting Hoefer’s terminology, I’ll call the perspective Freedom from the Inside Out, or FIO for short. The FIO perspective starts from the familiar fact that the known equations of physics are *time-reversible*: any valid solution to the equations is still a valid solution if we replace $t$ by $-t$.[^37]  Hence, there seems to be no particular reason to imagine time as flowing from past to future.  Instead, we might as well adopt what philosophers call the Block Universe picture, where the whole $4$-dimensional spacetime manifold is laid out at once as a frozen block.[^38]  Time is simply one more coordinate parameterizing that manifold, along with the spatial coordinates $x,y,z$.  The equations *do* treat the $t$ coordinate differently from the other three, but not in any way that seems to justify the intuitive notion of $t$ flowing in a particular direction, any more than the $x,y,z$ coordinates flow in a particular direction.  Of course, with the discovery of special relativity, we learned that the choice of $t$ coordinate is no more unique than the choice of $x,y,z$ coordinates; indeed, an event in a faraway galaxy that you judge as years in the future, might well be judged as years in the past by someone walking past you on the street.  To many philosophers, this seems to make the argument for a Block-Universe picture even stronger than it had been in Newtonian physics.[^39] The Block-Universe picture is sometimes described as leaving no room for free will—but that misses the much more important point, that the picture also leaves no room for *causation*!  
If we adopt this mentality consistently, then *it’s every bit as meaningless to say that Jack and Jill went up the hill because of the prior microstate of the universe, as it is to say that Jack and Jill went up the hill because they wanted to.*  Indeed, we might as well say that Jack and Jill went up the hill because of the *future* microstate of the universe!  Or rather, the concept of because plays no role in this picture: there are the differential equations of physics, and the actual history of spacetime is one particular solution to those equations, and that’s that. Now, the idea of Freedom from the Inside Out is simply to embrace the Block-Universe picture of physics, then turn it on its head.  We say: if this frozen spacetime block has no intrinsic causal arrows anyway, then *why not annotate it with causal arrows ourselves*, in whichever ways seem most compelling and consistent to us?  And at least *a priori*, who’s to say that some of the causal arrows we draw can’t point backwards in time—for example, from human mental states and decisions to earlier microstates consistent with them?  Thus, we might decide that yesterday your brain absorbed photons in certain quantum states *because* today you were going to eat tuna casserole, and running the differential equations backward from the tuna casserole produced the photons.  In strict Block-Universe terms, this seems *absolutely* no worse than saying that you ate tuna casserole today because of the state of the universe yesterday. I’ll let Hoefer explain further: > The idea of freedom from the inside out is this: we are perfectly justified in viewing our own actions *not* as determined by the past, *nor* as determined by the future, but rather as simply determined (to the extent that this word sensibly applies) *by ourselves, by our own wills*.  In other words, they need not be viewed as *caused* or *explained* by the physical states of other, vast regions of the block universe.  
Instead, we can view our own actions, *qua* physical events, as primary explainers, determining—in a very partial way—physical events outside ourselves to the past and future of our actions, in the block.  We adopt the perspective that the determination or explanation that matters is from the *inside* (of the block universe, where we live) *outward*, rather than from the *outside* (e.g. the state of things on a time slice 1 billion years ago) *in*.  And we adopt the perspective of downward causation, thinking of our choices and intentions as primary explainers of our physical actions, rather than letting microstates of the world usurp this role.  We are free to adopt these perspectives because, quite simply, physics—including \[a\] postulated, perfected deterministic physics—is perfectly compatible with them. [@hoefer p. 207-208, emphases in original] Some readers will immediately object as follows: > Yes, causality in the Block Universe might indeed be a subtle, emergent concept.  But doesn’t the FIO picture take that observation to a ludicrous extreme, by turning causality into a free-for-all?  For example, why couldn’t an FIO believer declare that the dinosaurs went extinct $65$ million years ago *because* if they hadn’t, today I might not have decided to go bowling? The reply to this objection is interesting.  To explain it, we first need to ask: if the notion of causality appears nowhere in fundamental physics, then why does it *look* like past events constantly cause future ones, and never the reverse?  Since the late $19^{th}$ century, physicists have had a profound answer to that question, or at least a reduction of the question to a different question. The answer goes something like this: causality is an emergent phenomenon, associated with the Second Law of Thermodynamics.  
Though no one really knows why, the universe was in a vastly improbable, low-entropy state at the Big Bang—which means, for well-understood statistical reasons, that the further in time we move *away* from the Big Bang, the greater the entropy we’ll find.  Now, the creation of reliable memories and records is essentially always associated with an increase in entropy (some would argue by definition).  And in order for us, as observers, to speak sensibly about $A$ causing $B$, we must be able to create records of $A$ happening *and then* $B$ happening.  But by the above, this will essentially never be possible unless $A$ is closer in time than $B$ to the Big Bang. We’re now ready to see how the FIO picture evades the unacceptable conclusion of a causal free-for-all.  It does so by *agreeing* with the usual account of causality based on entropy increase, in all situations where the usual account is relevant.  While Hoefer [@hoefer] and Stoica [@stoica; @stoica:qm] are not explicit about this point, I would say that on the FIO picture, it can only make sense to draw causal arrows backwards in time, in those rare situations where entropy is *not* increasing with time. What are those situations?  To a first approximation, they’re the situations where physical systems are allowed to evolve reversibly, free from contact with their external environments.  In practice, such perfectly-isolated systems will almost always be microscopic, and the reversible equation relevant to them will just be the Schrödinger equation.  One example of such a system would be a photon of cosmic background radiation, which was emitted in the early universe and has been propagating undisturbed ever since.  But these are precisely the sorts of systems that I’d consider candidates for freebits!  As far as I can see, if we want the FIO picture not to lead to absurdity, then we can entertain the possibility of backwards-in-time causal arrows *only* for these systems.  
For this is where the normal way of drawing causal arrows breaks down, and we have nothing else to guide us. The Harmonization Problem\[HARMONIZATION\] ------------------------------------------ There’s another potential problem with the FIO perspective—Hoefer [@hoefer] calls it the Harmonization Problem—that is so glaring that it needs to be dealt with immediately.  The problem is this: once we let certain causal arrows point backward in time, say from events today to the microstate at the Big Bang, we’ve set up what a computer scientist would recognize as a giant *constraint satisfaction problem*.  Finding a solution to the differential equations of physics is no longer just a matter of starting from the initial conditions and evolving them forward in time.  Instead, we can now also have constraints involving *later* times—for example, that a person makes a particular choice—from which we’re supposed to propagate backward.  But if this is so, then *why should a globally-consistent solution even exist, in general?*  In particular, what happens if there are *cycles* in the network of causal arrows?  In such a case, we could easily face the classic grandfather paradox of time travel to the past: for example, if an event $A$ causes another event $B$ in its future, but $B$ also causes $\operatorname*{not}\left( A\right) $ in its past—which in turn causes $\operatorname*{not}\left( B\right) $, which in turn causes $A$.  Furthermore, even if globally-consistent solutions *do* exist, one is tempted to ask what algorithm Nature uses to build a solution up.  Should we imagine that Nature starts at the initial conditions and propagates them forward, but then backtracks if a contradiction is found, adjusting the initial conditions slightly in an attempt to avoid the contradiction?  What if it gets into an infinite loop this way? 
With hindsight, we can see the discussion from Section \[UPLOADING\] about brain-uploading, teleportation, Newcomb’s Paradox, and so on as highlighting the same concerns about the consistency of spacetime—indeed, as using speculative future technologies to make those concerns more vivid.  Suppose, for example, that your brain has been scanned, and a complete physical description of it (sufficient to predict all your future actions) has been stored on a computer on Pluto.  And suppose you then make a decision.  Then from an FIO perspective, the question is: can we take your decision to explain, not only the state of your brain at the moment of your choice, and not only various microstates in your past lightcone[^40] (insofar as they need to be compatible with your choice), but *also what’s stored in the computer memory billions of kilometers away*?  Should we say that you and the computer now make your decisions in synchrony, whatever violence that might seem to do to the locality of spacetime?  Or should we say that the very act of copying your brain state removed your freedom where previously you had it, or proved that you were never free in the first place? Fortunately, it turns out that in the freebit picture, *none of these problems arise.*  The freebit picture might be rejected for other reasons, but it can’t be charged with logical inconsistency or with leading to closed timelike curves.  The basic observation is simply that we have to distinguish between what I’ll call *macrofacts* and *microfacts*.  A macrofact is a fact about a decohered, classical, macroscopic property of a physical system $S$ at a particular time.  More precisely, a macrofact is a fact $F$ that could, in principle, be learned by an external measuring device without disturbing the system $S$ in any significant way.  
Note that, for $F$ to count as a macrofact, it’s not necessary that anyone has ever known $F$ or will ever know $F$: only that $F$ *could have been* known, if a suitable measuring device had been around at the right place and the right time.  So for example, there is a macrofact about whether or not a Stegosaurus kicked a particular rock 150 million years ago, even if no human will ever see the rock, and even if the relevant information can no longer be reconstructed, even in principle, from the quantum state of the entire solar system.  There are also macrofacts constantly being created in the interiors of stars and in interstellar space.

By contrast, a microfact is a fact about an undecohered quantum state $\rho$: a fact that couldn’t have been learned, even in principle, without the potential for altering $\rho$ (if the measurement were performed in the wrong basis).  For example, the polarization of some particular photon of the cosmic microwave background radiation is a microfact.  A microfact might or might not concern a freebit, but the quantum state of a freebit is always a microfact.

Within the freebit picture, the solution to the Harmonization Problem is now simply to impose the following two rules.

1. Backwards-in-time causal arrows can point only to microfacts, never to macrofacts.

2. No microfact can do double duty: if it is caused by a fact to its future, then it is not itself the cause of anything, nor is it caused by anything else.

Together, these rules readily imply that no cycles can arise, and more generally, that the causality graph never produces a contradiction.  For whenever they hold, the causality graph will be a *directed acyclic graph* (a dag), with all arrows pointing forward in time, except for some dangling arrows pointing backward in time that never lead anywhere else.

Rule (1) is basically imposed by fiat: it just says that, for all the events we actually observe, we must seek their causes only to their past, never to their future.
This rule might someday be subject to change (for example, if closed timelike curves were discovered), but for now, it seems like a pretty indispensable part of a scientific worldview.  By contrast, rule (2) can be justified by appealing to the No-Cloning Theorem.  If a microfact $f$ is directly caused by a macrofact $F$ to its future, then at some point a *measurement* must have occurred (or more generally, some decoherence event that we can *call* a measurement), in order to amplify $f$ to macroscopic scale and correlate it with $F$.  In the freebit picture, we think of the correlation with $F$ as completely fixing $f$, which explains why $f$ can’t also be caused by anything other than $F$.  But why can’t $f$, itself, cause some macrofact $F^{\prime}$ (which, by rule (1), would need to be to $f$’s future)?  Here there are two cases: $F^{\prime}=F$ or $F^{\prime}\neq F$.  If $F^{\prime}=F$, then we’ve simply created a harmless $2$-element cycle, where $f$ and $F$ cause each other.  It’s purely by convention that we disallow such cycles, and declare that $F$ causes $f$ rather than the reverse.  On the other hand, if $F^{\prime}\neq F$, then we have two independent measurements of $f$ to $f$’s future, violating the No-Cloning Theorem.[^41]  Note that this argument wouldn’t have worked if $f$ had been a macrofact, since macroscopic information *can* be measured many times independently. Subtle questions arise when we ask about the possibility of microfacts causing other microfacts.  Rules (1) and (2) allow that sort of causation, as long as it takes place forward in time—or, if backward in time, that it consists of a single dangling arrow only.  If we wanted, without causing any harm we could allow long chains (and even dags) of microfacts causing other microfacts backward in time, possibly originating at some macrofact to their future.  
We would need to be careful, though, that none of those microfacts ever caused any facts to their future, since that would create the possibility of cycles.  A simpler option is just to declare the entire concept of causality irrelevant to the microworld.  On that view, whenever a microfact $f$ causes another microfact $f^{\prime}$, unitarity makes it just as legitimate to say that $f^{\prime}$ causes $f$, or that neither causes the other.  Because of the reversibility of microscopic laws, the temporal order of $f$ and $f^{\prime}$ is irrelevant: if $U\left\vert \psi\right\rangle =\left\vert \varphi \right\rangle $, then $U^{\dagger}\left\vert \varphi\right\rangle =\left\vert \psi\right\rangle $.  This view would regard causality as inextricably bound up with the thermodynamic arrow of time, and therefore with *ir*reversible processes, and therefore with macrofacts.

Microfacts and Macrofacts\[MICROMACRO\]
---------------------------------------

An obvious objection to the distinction between microfacts and macrofacts is that it’s poorly-defined.  The position of a rock might be obviously a macrofact, and the polarization of a photon obviously a microfact, but there is a continuous transition between the two.  Exactly how decohered and classical does a fact have to be, before it counts as a macrofact for our purposes?  This, of course, is reminiscent of the traditional objection to Bohr’s Copenhagen Interpretation of quantum mechanics, and in particular, to its unexplained yet crucial distinction between the microworld of quantum mechanics and the macroworld of classical observations.

Here, my response is basically to admit ignorance.  The freebit picture is not particularly sensitive to the precise *way* we distinguish between microfacts and macrofacts.  But if the picture is to make sense, it does require that there *exist* a consistent way to make this distinction.
(Or at least, it requires that there exist a consistent way to *quantify* the macro-ness of a fact $f$.  The degree of macro-ness of $f$ might then correspond to the effort of will needed to affect $f$ retrocausally, with the effort becoming essentially infinite for ordinary macroscopic facts!) One obvious way to enforce a macro/micro distinction would be via a *dynamical collapse theory*, such as those of Ghirardi, Rimini, and Weber [@grw] or Penrose [@penrose:shadows].  In these theories, all quantum states periodically undergo spontaneous collapse to some classical basis, at a rate that grows with their mass or some other parameter, and in such a way that the probability of spontaneous collapse is close to $0$ for all quantum systems that have so far been studied, but close to $1$ for ordinary classical systems.  Unfortunately, the known dynamical-collapse theories tend to suffer from technical problems, such as small violations of conservation of energy, and of course there is as yet no experimental evidence for them.  More fundamentally, I personally cannot believe that Nature would solve the problem of the transition between microfacts and macrofacts in such a seemingly ad hoc way, a way that does so much violence to the clean rules of linear quantum mechanics. As I’ll discuss further in Section \[MWI\], my own hope is that a principled distinction between microfacts and macrofacts could ultimately emerge from cosmology.  In particular, I’m partial to the idea that, in a deSitter cosmology like our own, *a macrofact is simply any fact of which the news is already propagating outward at the speed of light*, so that the information can never, even in principle, be gathered together again in order to uncause the fact.  A microfact would then be any fact for which this propagation *hasn’t* yet happened.  The advantage of this distinction is that it doesn’t involve the slightest change to the principles of quantum mechanics.  
The disadvantage is that the distinction is teleological: the line between microfacts and macrofacts is defined by what *could* happen arbitrarily far in the future.  In particular, this distinction implies that if, hypothetically, we could surround the solar system by a perfect reflecting boundary, then within the solar system, there would no longer be *any* macrofacts!  It also implies that there can be no macrofacts in an *anti*-deSitter (adS) cosmology, which *does* have such a reflecting boundary.  Finally, it suggests that there can probably be few if any macrofacts inside the event horizon of a black hole.  For even if the information in the black hole interior *eventually* emerges, in coded form, in the Hawking radiation, the Hawking evaporation process is so slow ($\thicksim10^{67}$ years for a solar-mass black hole) that there would seem to be plenty of time for an observer outside the hole to gather most of the radiation, and thereby prevent the information from spreading further.[^42]

Because I can’t see a better alternative—and also, because I rather like the idea of cosmology playing a role in the foundations of quantum mechanics!—my current inclination is to bite the bullet, and accept these and other implications of the macro/micro distinction I’ve suggested.  But there’s enormous scope here for better ideas (or, of course, new developments in physics) to change my thinking.

Further Objections\[OBJECTIONS\]
================================

In this section, I’ll present five objections to the freebit picture, together with my responses.  Some of these objections are obvious, and are generic to *any* analysis of freedom that puts significant stock in the actual unpredictability of human choices.  Others are less obvious, and are more specific to the freebit picture.
The Advertiser Objection\[AD\]
------------------------------

The first objection is simply that human beings are depressingly predictable in practice: if they weren’t, then they wouldn’t be so easily manipulable!  So, does surveying the sorry history of humankind—in which most people, most of the time, did exactly the boring, narrowly self-interested things one might have expected them to do—furnish any evidence at all for the *existence* of freebits?

**Response.**  I already addressed a related objection in Section \[LIBET\], but this one seems so important that it’s worth returning to it from a different angle.

It’s obvious that humans are at least partly predictable—and sometimes *extremely* predictable, depending on what one is trying to predict.  For example, it’s vanishingly unlikely that tomorrow, the CEO of General Motors will show up to work naked, or that Noam Chomsky will announce his support for American militarism.  On the other hand, that doesn’t mean we know how to program a computer to *simulate* these people, anticipating every major or minor decision they make throughout their lives!  It seems crucial to maintain the distinction between the partial predictability that even the most vocal free-will advocate would concede, and the physics-like predictability of a comet.  To illustrate, imagine a machine that correctly predicted *most* decisions of *most* people: what they’ll order for dinner, which movies they’ll select, which socks they’ll take out of the drawer, and so on.  (In a few domains, this goal is already being approximated by recommendation systems, such as those of Amazon and Netflix.)  But imagine that the machine was regularly blindsided by the most *interesting, consequential, life-altering* decisions.  In that case, I suspect most people’s intuitions about their own freedom would be shaken slightly, but would basically remain intact.
By analogy, for most computer programs that arise in practice, it’s easy to decide whether they halt, but that hardly decreases the importance of the *general* unsolvability of the halting problem.  Perhaps, as Kane [@fourviews] speculates, we truly exercise freedom only for a relatively small number of self-forming actions (SFAs)—that is, actions that help to *define* who we are—and the rest of the time are essentially running on autopilot.  Perhaps these SFAs are common in childhood and early adulthood, but become rare later in life, as we get set in our ways and correspondingly more predictable.

Having said this, I concede that the intuition in favor of humans’ predictability is a powerful one.  Indeed, even supposing humans *did* have the capacity for Knightian freedom, one could argue that that capacity can’t be particularly important, if almost all humans choose to run on autopilot almost all of the time!

However, against the undeniable fact of humans so often being manipulable like lab rats, there’s a *second* undeniable fact, which should be placed on the other side of the intuitive ledger.  This second fact is the conspicuous failure of investors, pundits, intelligence analysts, and so on *actually to predict*, with any reliability, what individuals or even entire populations will do.  Again and again the best forecasters are blindsided (though it must be admitted that *after the fact*, forecasters typically excel at explaining the inevitability of whatever people decided to do!).

The Weather Objection\[WEATHER\]
--------------------------------

What’s special about the human brain?  If we want to describe it as having Knightian freedom, then why not countless *other* complicated dynamical systems, such as the Earth’s weather, or a lava lamp?

**Response.**  For systems like the weather or a lava lamp, I think a plausible answer can actually be given.
These systems are indeed unpredictable, but the unpredictability seems much more probabilistic than Knightian in character.  To put it another way, the famous butterfly effect seems likely to be an artifact of *deterministic* models of those systems; one expects it to get washed out as soon as we model the systems’ microscopic components probabilistically.  To illustrate, imagine that the positions and momenta of all the molecules in the Earth’s atmosphere had been measured to roughly the maximum precision allowed by quantum mechanics; and that, on the basis of those measurements, a supercomputer had predicted a $23\%$ probability of a thunderstorm in Shanghai at a specific date next year.  Now suppose we changed the initial conditions by adding a single butterfly.  In classical, deterministic physics, that could certainly change whether the storm happens, due to chaotic effects—but that isn’t the relevant question.  The question is: how much does adding the butterfly change the *probability* of the storm?  The answer seems likely to be hardly at all.  After all, the original $23\%$ was already obtained by averaging over a huge number of possible histories.  So unless the butterfly somehow changes the *general features* of the statistical ensemble, its effects should be washed out by the unmeasured randomness in the millions of *other* butterflies (and whatever else is in the atmosphere). Yet with brains, the situation seems plausibly different.  For brains seem balanced on a knife-edge between order and chaos: were they as orderly as a pendulum, they couldn’t support interesting behavior; were they as chaotic as the weather, they couldn’t support rationality.  More concretely, a brain is composed of neurons, each of which (in the crudest idealization) has a firing rate dependent on whether or not the sum of signals from incoming neurons exceeds some threshold.  
As such, one expects there to be many molecular-level changes one could make to a brain’s state that don’t affect the overall firing pattern at all, but a few changes—for example, those that push a critical neuron just over the edge to firing or not firing—that affect the overall firing pattern drastically.  So for a brain—unlike for the weather—a single freebit *could* plausibly influence the probability of some macroscopic outcome, even if we model all of the system’s constituents quantum-mechanically.

A closely-related difference between brains and the weather is that, while both are presumably chaotic systems able to amplify tiny effects, only in the case of brains are the amplifications likely to have irreversible consequences.  Even if a butterfly flapping its wings can cause a thunderstorm halfway around the world, a butterfly almost certainly won’t change the average *frequency* of thunderstorms—at least, not without changing something other than the weather as an intermediary.  To change the frequency of thunderstorms, one needs to change the trajectory of the earth’s *climate*, something less associated with butterflies than with macroscopic forcings (for example, increasing the level of carbon dioxide in the atmosphere, or hitting the earth with an asteroid).  With brains, by contrast, it seems perfectly plausible that a single neuron’s firing or not firing could affect the rest of a person’s life (for example, if it caused the person to make a *very* bad decision).

The Gerbil Objection\[GERBIL\]
------------------------------

Even if it’s accepted that brains are very different from lava lamps or the weather considered purely as dynamical systems, one could attempt a different *reductio ad absurdum* of the freebit picture, by constructing a physical system that *behaves* almost exactly like a brain, yet whose Knightian uncertainty is decoupled from its intelligence in what seems like an absurd way.
Thus, consider the following thought experiment: on one side of a room, we have a digital computer, whose internal operations are completely deterministic.  The computer is running an AI program that easily passes the Turing test: many humans, let’s say, have maintained long Internet correspondences with the AI on diverse subjects, and not one ever suspected its consciousness or humanity.  On the other side of the room, we have a gerbil in a box.  The gerbil scurries in its box, in a way that we can imagine to be subject to at least some Knightian uncertainty.  Meanwhile, an array of motion sensors regularly captures information about the gerbil’s movements and transmits it across the room to the computer, which uses the information as a source of random bits for the AI.  Being extremely sophisticated, of course the AI doesn’t make *all* its decisions randomly.  But if it can’t decide between two roughly-equal alternatives, then it sometimes uses the gerbil movements to break a tie, much as an indecisive human might flip a coin in a similar situation. The problem should now be obvious.  By assumption, we have a system that *acts* with human-level intelligence (i.e., it passes the Turing test), and that’s *also* subject to Knightian uncertainty, arising from amplified quantum fluctuations in a mammalian nervous system.  So if we believe that humans have a capacity for freedom because of those two qualities, then we seem *obligated* to believe that the AI/gerbil hybrid has that capacity as well.  Yet if we simply disconnect the computer from the gerbil box, then the AI loses its capacity for freedom!  For then the AI’s responses, though they might still *seem* intelligent, could be unmasked as mechanistic by anyone who possessed a copy of the AI’s program.  
Indeed, even if we replaced the gerbil box by an ordinary quantum-mechanical random number generator, the AI’s responses would still be *probabilistically* predictable; they would no longer involve Knightian uncertainty.

Thus, a believer in the freebit picture seems forced to an insane conclusion: that the gerbil, though presumably oblivious to its role, is like a magic amulet that gives the AI a capacity for freedom it wouldn’t have had otherwise.  Indeed, the gerbil seems uncomfortably close to the soul-giving potions of countless children’s stories (stories that always end with the main character realizing that she *already had a soul*, even without the potion!).  Yet, if we reject this sort of thinking in the gerbil-box scenario, then why shouldn’t we also reject it for the human brain?  With the brain, it’s true, the Knightian-indeterminism-providing gerbil has been moved physically closer to the locus of thought: now it scurries around in the synaptic junctions and neurons, rather than in a box on the other side of the room.  But why should proximity make a difference?  *Wherever* we put the gerbil, it just scurries around aimlessly!  Maybe the scurrying is probabilistic, maybe it’s Knightian, but either way, it clearly plays no more *explanatory* role in intelligent decision-making than the writing on the Golem’s forehead.

In summary, it seems the only way to rescue the freebit picture is to paper over an immense causal gap—between the brain’s Knightian noise and its cognitive information processing—with superstition and magic.

**Response.**  Of all the arguments directed specifically against the freebit picture, this one strikes me as the most serious, which is why I tried to present it in a way that would show its intuitive force.

On reflection, however, there *is* at least one potentially-important difference between the AI/gerbil system and the brain.
In the AI/gerbil system, the intelligence and Knightian noise components were *cleanly separable* from one another.  That is, the computer *could* easily be disconnected from the gerbil box, and reattached to a *different* gerbil box, or an ordinary random-number generator, or nothing at all.  And after this was done, there’s a clear sense in which the AI would still be running the exact same program: only the indeterminism-generating peripheral would have been swapped out.  For this reason, it seems best to think of the gerbil as simply *yet another part of the AI’s external environment*—along (for example) with all the questions sent to the AI over the Internet, which could *also* be used as a source of Knightian indeterminism. With the brain, by contrast, it’s not nearly so obvious that the Knightian indeterminism source *can* be physically swapped out for a different one, without destroying or radically altering the brain’s *cognitive* functions as well.  Yes, we can imagine futuristic nanorobots swarming through a brain, recording all the macroscopically measurable information about the connections between neurons and the strengths of synapses, then building a new brain that was macroscopically identical to the first, differing only in its patterns of microscopic noise.  But how would we know whether the robots had recorded *enough* information about the original brain?  What if, in addition to synaptic strengths, there was also cognitively-relevant information at a smaller scale?  Then we’d need more advanced nanorobots, able to distinguish even smaller features.  Ultimately, we could imagine robots able to record *all* classical or even quasi-classical features.  But by definition, the robots would then be measuring features that were *somewhat* quantum-mechanical, and therefore inevitably changing those features. 
Of course, this is hardly a conclusive argument, since maybe there’s a gap of many orders of magnitude between (a) the smallest possible scale of cognitive relevance, and (b) the scale where the No-Cloning Theorem becomes relevant.  In that case, at least in principle, the nanorobots really *could* complete their scan of the brain’s cognitive layer without risking the slightest damage to it; only the (easily-replaced?) Knightian indeterminism layer would be altered by the nanorobots’ presence.  Whether this is possible is an empirical question for neuroscience. However, the discussion of brain-scanning raises a broader point: that, against the gerbil-box scenario, we need to weigh some other, older thought experiments where many people’s intuitions go the other way.  Suppose the nanorobots *do* eventually complete their scan of all the macroscopic, cognitively-relevant information in *your* brain, and suppose they then transfer the information to a digital computer, which proceeds to run a macroscopic-scale simulation of your brain.  Would that simulation *be* you?  If your original brain were destroyed in this process, or simply anesthetized, would you expect to wake up as the digital version?  (Arguably, this is not even a philosophical question, just a straightforward empirical question asking you to predict a future observation!) Now, suppose you believe there’s some conceivable digital doppelgänger that would *not* be you, and you also believe that the difference between you and it resides somewhere in the physical world. Then since (by assumption) the doppelgänger is functionally indistinguishable from you, it would seem to follow that the difference between you and it *must* reside in functionally-irrelevant degrees of freedom, such as microscopic ones.  Either that, or else the boundary between the functional and non-functional degrees of freedom is not even sharp enough for the doppelgängers to be created in the first place. 
My conclusion is that *either* you can be uploaded, copied, simulated, backed up, and so forth, leading to all the puzzles of personal identity discussed in Section \[UPLOADING\], or else you *can’t* bear the same sort of uninteresting relationship to the non-functional degrees of freedom in your brain that the AI bore to the gerbil box.

To be clear, nothing I’ve said even hints at any *sufficient condition* on a physical system, in order for the system to have attributes such as free will or consciousness.  That is, even if human brains were subject to Knightian noise at the microscopic scale, and even if the sources of such noise could *not* be cleanly separated from the cognitive functions, human beings might still fail to be truly conscious or free—whatever those things mean!—for other reasons.  At most, I’m investigating plausible *necessary* conditions for Knightian freedom as defined in this essay (and hence, in my personal perspective, for free will also).

The Initial-State Objection\[INITIAL\]
--------------------------------------

The fourth objection holds that the notion of freebits from the early universe nontrivially influencing present-day events is not merely strange, but inconsistent with known physics.  More concretely, Stenger [@stenger] has claimed that it *follows* from known physics that the initial state at the Big Bang was essentially random, and can’t have encoded any interesting information.[^43]  His argument is basically that the temperature at the Big Bang was enormous; and in quantum field theory (neglecting gravity), such extreme temperatures seem manifestly incompatible with any nontrivial structure.

**Response.**  Regardless of whether Stenger’s *conclusion* holds, today there are strong indications, from cosmology and quantum gravity, that something has to be wrong with the above *argument* for a thermal initial state.
First, by this argument, the universe’s entropy should have been maximal at the Big Bang, but the Second Law tells us that the entropy was minimal!  Stenger [@stenger] notices the obvious problem, and tries to solve it by arguing that the entropy at the Big Bang really *was* maximal, given the tiny size of the observable universe at that time.  On that view, the reason why entropy can increase as we move away from the Big Bang is simply that the universe is expanding, and with it the dimension of the state space.  But others, such as Carroll [@carroll:eternity] and Penrose [@penrose:road], have pointed out severe problems with that answer.  For one thing, if the dimension of the state space can increase, then we give up on *reversibility*, a central feature of quantum mechanics.  For another, this answer has the unpalatable implication that the entropy should turn around and decrease in a hypothetical Big Crunch.  The alternative, which Carroll and Penrose favor, is to hold that *despite* the enormous temperature at the Big Bang, the universe’s state then was every bit as special and low-entropy as the Second Law demands, but its specialness must have resided in *gravitational* degrees of freedom that we don’t yet fully understand. The second indication that the thermal Big Bang picture is incomplete is that *quantum field theory has misled us in a similar way before*.  In 1975, Hawking [@hawking] famously used quantum field theory in curved spacetime to calculate the temperature of black holes, and to prove the existence of the Hawking radiation by which black holes eventually lose their mass and disappear.  However, Hawking’s calculation seemed to imply that the radiation was thermal—so that in particular, it couldn’t encode any nontrivial information about objects that fell into the black hole.  
This led to the *black-hole information-loss paradox*, since quantum-mechanical reversibility forbids the quantum states of the infalling objects simply to disappear in this way.  Today, largely because of the *AdS/CFT correspondence* (see [@deboer]), there’s a near-consensus among experts that the quantum states of infalling objects *don’t* disappear as Hawking’s calculation suggested.  Instead, at least from the perspective of someone outside the black hole, the infalling information should stick around on or near the event horizon, in not-quite-understood quantum-gravitational degrees of freedom, before finally emerging in garbled form in the Hawking radiation.  And if quantum field theory says otherwise, that’s because quantum field theory is only a limiting case of a quantum theory of gravity. The AdS/CFT correspondence is just one realization of the *holographic principle*, which has emerged over the last two decades as a central feature of quantum theories of gravity.  It’s now known that many physical theories have both a $D$-dimensional bulk formulation and a $\left( D-1\right) $-dimensional boundary formulation.  In general, these two formulations *look* completely different: states that are smooth and regular in one formulation might look random and garbled in the other; questions that are trivial to answer in one formulation might seem hopeless in the other.[^44]  Nevertheless, there exists an isomorphism between states and observables, by which the boundary formulation holographically encodes everything that happens in the bulk formulation.  As a classic example, if Alice jumps into a black hole, then she might perceive herself falling smoothly toward the singularity.  Meanwhile Bob, far from the black hole, might describe exactly the same physical situation using a dual description in which Alice never makes it past the event horizon, and instead her quantum information gets pancaked across the horizon in a horrendously complicated way.  
In other words, not only is it not absurd to suppose that a disorganized mess of entropy on the boundary of a region could be isomorphic to a richly-structured state in the region’s interior, but there are now detailed examples where that’s exactly what happens.[^45]

The bottom line is that, when discussing extreme situations like the Big Bang, it’s *not okay* to ignore quantum-gravitational degrees of freedom simply because we don’t yet know how to model them.  And including those degrees of freedom seems to lead straight back to the unsurprising conclusion that *no one knows* what sorts of correlations might have been present in the universe’s initial microstate.

The Wigner’s-Friend Objection\[MWI\]
------------------------------------

A final objection comes from the Many-Worlds Interpretation of quantum mechanics—or more precisely, from taking seriously the possibility that a conscious observer could be measured in a coherent superposition of states.

Let’s start with the following thought experiment, called *Wigner’s friend* (after Eugene Wigner, who wrote about it in 1962 [@wigner:friend]).  An intelligent agent $A$ is placed in a coherent superposition of two different mental states, like so:$$\left\vert \Phi\right\rangle =\frac{\left\vert 1\right\rangle \left\vert A_{1}\right\rangle +\left\vert 2\right\rangle \left\vert A_{2}\right\rangle }{\sqrt{2}}.$$ We imagine that these two states correspond to two different questions that the agent could be asked: for example, $\left\vert 1\right\rangle \left\vert A_{1}\right\rangle $ represents $A$ being asked its favorite color, while $\left\vert 2\right\rangle \left\vert A_{2}\right\rangle $ denotes $A$ being asked its favorite ice cream flavor.
Then, crucially, a second agent $B$ comes along and measures $\left\vert \Phi\right\rangle $ in the basis$$\left\{ \frac{\left\vert 1\right\rangle \left\vert A_{1}\right\rangle +\left\vert 2\right\rangle \left\vert A_{2}\right\rangle }{\sqrt{2}},\frac{\left\vert 1\right\rangle \left\vert A_{1}\right\rangle -\left\vert 2\right\rangle \left\vert A_{2}\right\rangle }{\sqrt{2}}\right\} , \label{basis}$$ in order to verify that $A$ really *was* in a superposition of two mental states, not just in one state or the other. Now, there are many puzzling questions one can ask about this scenario: most obviously, what is it like for $A$ to be manipulated in this way?  If $A$ perceives itself in a definite state—either $A_{1}$ *or* $A_{2}$, but not both—then will $B$’s later measurement of $A$ in the basis (\[basis\]) appear to $A$ as a violation of the laws of physics? However, let’s pass over this well-trodden ground, and ask a different question more specific to the freebit picture.  According to that picture, $A$’s free decision of how to answer whichever question it was asked should be correlated with one or more freebits $w$.  But if we write out the combined state of the superposed $A$ and the freebits,$$\frac{\left\vert 1\right\rangle \left\vert A_{1}\right\rangle +\left\vert 2\right\rangle \left\vert A_{2}\right\rangle }{\sqrt{2}}\otimes\left\vert w\right\rangle ,$$ then a problem becomes apparent.  Namely, the same freebits $w$ need to do double duty, correlating with $A$’s decision in both the $\left\vert 1\right\rangle $ branch and the $\left\vert 2\right\rangle $ branch!  In other words, even supposing microscopic details of the environment could somehow explain what happens in one branch, how could the *same* details explain both branches?  As $A_{1}$ contemplated its favorite color, would it find itself oddly constrained by $A_{2}$’s simultaneous contemplations of its favorite ice cream flavor, or vice versa? 
One might think we could solve this problem by stipulating that $w$ is split into two collections of freebits, $\left\vert w\right\rangle =\left\vert w_{1}\right\rangle \otimes\left\vert w_{2}\right\rangle $, with $\left\vert w_{1}\right\rangle $ corresponding to $A_{1}$’s response and $\left\vert w_{2}\right\rangle $ corresponding to $A_{2}$’s.  But this solution quickly runs into an exponential explosion: if we considered a state like$$\frac{1}{2^{500}}\sum_{x\in\left\{ 0,1\right\} ^{1000}}\left\vert x\right\rangle \left\vert A_{x}\right\rangle ,$$ we would find we needed $2^{1000}$ freebits to allow each $A_{x}$ to make a yes/no decision independently of all the other $A_{x}$’s. Another obvious way out would be if the freebits were entangled with $A$:$$\left\vert \Phi^{\prime}\right\rangle =\frac{\left\vert 1\right\rangle \left\vert A_{1}\right\rangle \left\vert w_{1}\right\rangle +\left\vert 2\right\rangle \left\vert A_{2}\right\rangle \left\vert w_{2}\right\rangle }{\sqrt{2}}.$$ The problem is that there seems to be no way to *produce* such entanglement, without violating quantum mechanics.  If $w$ is supposed to represent microscopic details of $A$’s environment, ultimately traceable to the early universe, then it would be extremely mysterious to find $A$ and $w$ entangled.  Indeed, in a Wigner’s-friend experiment, such entanglement would show up as a *fundamental decoherence source*: something which was *not* traceable to any leakage of quantum information from $A$, yet which somehow prevented $B$ from observing quantum interference between the $\left\vert 1\right\rangle \left\vert A_{1}\right\rangle $ and $\left\vert 2\right\rangle \left\vert A_{2}\right\rangle $ branches, when $B$ measured in the basis (\[basis\]).  If, on the other hand, $B$ *did* ultimately trace the entanglement between $A$ and $w$ to a leakage of information from $A$, then $w$ would have been revealed to have never contained freebits at all!  
For in that case, $w$ would merely be the result of unitary evolution coupling $A$ to its environment—so it could presumably be predicted by $B$, who could even verify its hypothesis by measuring the state $\left\vert \Phi^{\prime }\right\rangle $ in the basis$$\left\{ \frac{\left\vert 1\right\rangle \left\vert A_{1}\right\rangle \left\vert w_{1}\right\rangle +\left\vert 2\right\rangle \left\vert A_{2}\right\rangle \left\vert w_{2}\right\rangle }{\sqrt{2}},\frac{\left\vert 1\right\rangle \left\vert A_{1}\right\rangle \left\vert w_{1}\right\rangle -\left\vert 2\right\rangle \left\vert A_{2}\right\rangle \left\vert w_{2}\right\rangle }{\sqrt{2}}\right\} .$$ **Response.** As in Section \[MICROMACRO\]—where asking about the definition of microfacts and macrofacts led to a closely-related issue—my response to this important objection is to *bite the bullet*.  That is, I accept the existence of a deep incompatibility between the freebit picture and the physical feasibility of the Wigner’s-friend experiment.  To state it differently: *if* the freebit picture is correct, *and* the Wigner’s-friend experiment can be carried out, then I think we’re forced to conclude that—at least for the duration of the experiment—*the subject no longer has the capacity for Knightian freedom,* and is now a mechanistic, externally-characterized physical system similar to a large quantum computer. I realize that the position above sounds crazy, but it becomes less so once one starts thinking about what would *actually* be involved in performing a Wigner’s-friend experiment on a human subject.  Because of the immense couplings between a biological brain and its external environment (see Tegmark [@tegmark:qmbrain] for example), the experiment is likely impossible with any entity we would currently recognize as human.  Instead, as a first step (!), one would presumably need to solve the problem of *brain-uploading*: that is, transferring a human brain into a digital substrate.  
Only then could one even *attempt* the second part, of transferring the now-digitized brain onto a quantum computer, and preparing and measuring it in a superposition of mental states.  I submit that, while the resulting entity *might* be freely-willed, conscious, etc., it certainly wouldn’t be uncontroversially so, nor can we form any intuition by reasoning by analogy with ourselves (even if we were inclined to try). Notice in particular that, if the agent $A$ could be manipulated in superposition, then as a direct byproduct of those manipulations, $A$ would presumably undergo *the same mental processes over and over, forwards in time as well as backwards in time*.  For example, the obvious way for $B$ to measure $A$ in the basis (\[basis\]), would simply be for $B$ to uncompute whatever unitary transformation $U$ had placed $A$ in the superposed state $\left\vert \Phi\right\rangle $ in the first place.  Presumably, the process would then be repeated many times, as $B$ accumulated more statistical evidence for the quantum interference pattern.  So, during the uncomputation steps, would $A$ experience time running backwards?  Would the inverse map $U^{\dag}$ feel different than $U$?  Or would all the applications of $U$ and $U^{\dag}$ be experienced simultaneously, being functionally indistinguishable from one another?  I hope I’m not alone in feeling a sense of vertigo about these questions!  To me, it’s at least a plausible speculation that $A$ doesn’t experience *anything*, and that the reasons why it doesn’t are related to $B$’s very ability to manipulate $A$ in these ways. More broadly, the view I’ve taken here on superposed agents strikes me as almost a *consequence* of the view I took earlier, on agents whose mental states can be perfectly measured, copied, simulated, and predicted by other agents.  
For there’s a close connection between being able to *measure* the exact state $\left\vert S\right\rangle $ of a physical system, and being able to detect quantum interference in a superposition of the form$$\left\vert \psi\right\rangle =\frac{\left\vert S_{1}\right\rangle +\left\vert S_{2}\right\rangle }{\sqrt{2}},$$ consisting of two slight variants of $\left\vert S\right\rangle $.  If we know $\left\vert S\right\rangle $, then among other things, we can load a copy of $\left\vert S\right\rangle $ onto a quantum computer, and thereby prepare and measure a superposition like $\left\vert \psi\right\rangle $—provided, of course, that one counts the quantum computer’s *encodings* of $\left\vert S_{1}\right\rangle $ and $\left\vert S_{2}\right\rangle $ as just as good as the real thing.  Conversely, the ability to detect interference between $\left\vert S_{1}\right\rangle $ and $\left\vert S_{2}\right\rangle $ presupposes that we know, and can control, *all* the degrees of freedom that make them different: quantum mechanics tells us that, if any degrees of freedom are left unaccounted for, then we will simply see a probabilistic mixture of $\left\vert S_{1}\right\rangle $ and $\left\vert S_{2}\right\rangle $, not a superposition. But if this is so, one might ask, then *what makes humans any different?*  According to the most literal reading of quantum mechanics’ unitary evolution rule—which some call the Many-Worlds Interpretation—don’t we *all* exist in superpositions of enormous numbers of branches, and isn’t our inability to measure the interference between those branches merely a practical problem, caused by rapid decoherence?  
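The claim that unaccounted-for degrees of freedom destroy interference can be checked with a few lines of linear algebra. Here is a minimal numerical sketch (my own illustration of the standard decoherence calculation, not anything specific to the freebit picture): the two branches are modeled as one qubit and the environment as another, and tracing out an environment that is entangled with the branches turns the probability of the interference outcome from $1$ into $1/2$.

```python
import numpy as np

def reduced_density(psi, dim_keep, dim_env):
    """Trace the environment out of a pure state on the keep (x) env space."""
    m = psi.reshape(dim_keep, dim_env)
    return m @ m.conj().T

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
plus = (ket0 + ket1) / np.sqrt(2)   # the "interference" direction

# Case 1: environment in a product state with the branches.
# The reduced state is still the pure superposition, so a measurement
# in the +/- basis yields "+" with probability 1.
psi_product = np.kron(plus, ket0)
rho = reduced_density(psi_product, 2, 2)
p_plus_product = float(np.real(plus @ rho @ plus))    # 1.0

# Case 2: each branch entangled with an orthogonal environment state,
# (|0>|e0> + |1>|e1>)/sqrt(2).  Tracing out the environment leaves the
# maximally mixed state, and the interference signature vanishes.
psi_entangled = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)
rho = reduced_density(psi_entangled, 2, 2)
p_plus_entangled = float(np.real(plus @ rho @ plus))  # 0.5
```

In a Wigner’s-friend experiment, Case 2 is what $B$ would see if the freebits $w$ were secretly entangled with $A$: an apparent decoherence source not attributable to any leakage of quantum information.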
Here I reiterate the speculation put forward in Section \[MICROMACRO\]: that the decoherence of a state $\left\vert \psi\right\rangle $ should be considered fundamental and irreversible, precisely when $\left\vert \psi\right\rangle $ becomes entangled with degrees of freedom that are receding toward our deSitter horizon at the speed of light, and that can no longer be collected together even in principle.  That sort of decoherence could be avoided, at least in principle, by a fault-tolerant quantum computer, as in the Wigner’s-friend thought experiment above.  But it plausibly *can’t* be avoided by any entity that we would currently recognize as human. One could also ask: *if* the freebit picture were accepted, what would be the implications for the foundations of quantum mechanics?  For example, would the Many-Worlds Interpretation then have to be rejected?  Interestingly, I think the answer is no.  Since I haven’t suggested any change to the formal rules of quantum mechanics, *any* interpretations that accept those rules—including Many-Worlds, Bohmian mechanics, and various Bayesian/subjectivist interpretations—would in some sense remain on the table (to whatever extent they were on the table before!).  As far as we’re concerned in this essay, if one wants to believe that different branches of the wavefunction, in which one’s life followed a different course than what one observes, really exist, that’s fine; if one wants to deny the reality of those branches, that’s fine as well.  Indeed, if the freebit picture were correct, then *Nature would have conspired so that we had no hope, even in principle, of distinguishing the various interpretations by experiment*. Admittedly, one might think it’s obvious that the interpretations can’t be distinguished by experiment, with or without the freebit picture.  Isn’t that why they’re *called* interpretations?  
But as Deutsch [@deutsch:universal] already pointed out in 1985, scenarios like Wigner’s friend seriously challenge the idea that the interpretations are empirically equivalent.  For example, if one could perform a quantum interference experiment on *one’s own mental state*, then couldn’t one directly experience what it was like for the different components of the wavefunction describing that state to evolve unitarily?[^46]  And wouldn’t that, essentially, vindicate a Many-Worlds-like perspective, while ruling out the subjectivist views that refuse to countenance the reality of parallel copies of oneself?  By denying that the subject of a Wigner’s-friend experiment has the capacity for Knightian freedom, the freebit picture suggests that maybe there’s nothing that it’s like to *be* such a subject—and hence, that debates about quantum interpretation can freely continue forever, with not even the in-principle possibility of an empirical resolution. Comparison to Penrose’s Views\[PENROSE\] ======================================== Probably the most original thinker to have speculated about physics and mind in the last half-century has been Roger Penrose.  In his books, including *The Emperor’s New Mind* [@penrose] and *Shadows of the Mind* [@penrose:shadows],[^47] Penrose has advanced three related ideas, all of them extremely controversial: 1. That arguments related to Gödel’s Incompleteness Theorem imply that the physical action of the human brain cannot be algorithmic (i.e., that it must violate the Church-Turing Thesis). 2. That there must be an objective physical process that collapses quantum states and produces the definite world we experience, and that the best place to look for such a process is in the interface of quantum mechanics, general relativity, and cosmology (the specialness of the universe’s initial state providing the only known source of time-asymmetry in physics, not counting quantum measurement). 3. 
That objective collapse events, possibly taking place in the cytoskeletons of the brain’s neurons (and subsequently amplified by conventional brain activity), provide the best candidate for the source of noncomputability demanded by (1). An obvious question is how Penrose’s views relate to the ones discussed here.  Some people might see the freebit picture as Penrose lite.  For it embraces Penrose’s core belief in a relationship between the mysteries of mind and those of modern physics, and even follows Penrose in focusing on certain *aspects* of physics, such as the specialness of the initial state.  On the other hand, the account here rejects almost all of Penrose’s further speculations: for example, about noncomputable dynamics in quantum gravity and the special role of the cytoskeleton in exploiting those dynamics.[^48]  Let me now elaborate on seven differences between my account and Penrose’s. 1. **I make no attempt to explain consciousness.**  Indeed, that very goal seems misguided to me, at least if consciousness is meant in the phenomenal sense rather than the neuroscientists’ more restricted senses.[^49]  For as countless philosophers have pointed out over the years (see McGinn [@mcginn] for example), *all* scientific explanations seem equally compatible with a zombie world, which fulfills the right causal relations but where no one really experiences anything.  More concretely, even if Penrose were right that the human brain had super-Turing computational powers—and I see no reason to think he is—I’ve never understood how that would help with what Chalmers [@chalmers] calls the hard problem of consciousness.  For example, could a Turing machine equipped with an oracle for the halting problem perceive the redness of red any better than a Turing machine without such an oracle? Given how much Penrose says about consciousness, I find it strange that he says almost nothing about the related mystery of *free will*.  
My central claim, in this essay, is that there exists an empirical counterpart of free will (what I call Knightian freedom), whose investigation really *does* lead naturally to questions about physics and cosmology, in a way that I don’t know to happen for any of the usual empirical counterparts of consciousness. 2. **I make no appeal to the Platonic perceptual abilities of human mathematicians.**  Penrose’s arguments rely on human mathematicians’ supposed power to see the consistency of (for example) the Peano axioms of arithmetic (rather than simply *assuming* or *asserting* that consistency, as a computer program engaging in formal reasoning might do).  As far as I can tell, to whatever extent this power exists at all, it’s just a particularly abstruse type of *qualia* or *subjective experience*, as empirically inaccessible as any other type.  In other words, instead of talking about the consistency of Peano arithmetic, I believe Penrose might as well have fallen back on the standard arguments about how a robot could never really enjoy fresh strawberries, but at most *claim* to enjoy them. In both cases, the reply seems obvious: *how do you know* that the robot doesn’t really enjoy the strawberries that it claims to enjoy, or see the consistency of arithmetic that it claims to see?  And how do you know other people *do* enjoy or see those things?  In any case, none of the arguments in this essay turn on these sorts of considerations.  If any important difference is to be claimed between a digital computer and a human brain, then I insist that the difference correspond to something empirical: for example, that computer memories can be reliably measured and copied without disturbing them, while brain states quite possibly can’t.  The difference must *not* rely, even implicitly, on a question-begging appeal to the author’s or reader’s own subjective experience. 3. 
**I make no appeal to Gödel’s Incompleteness Theorem.**  Let me summarize Penrose’s (and earlier, Lucas’s [@lucas]) Gödelian argument for human specialness.  Consider any finitely describable machine $M$ for deciding the truth or falsehood of mathematical statements, which never errs but which might sometimes output “I don’t know.”  Then by using the code of $M$, it’s possible to construct a mathematical statement $S_{M}$—one example is a formal encoding of “$M$ will never affirm this statement”—that we humans, looking in from the outside, can clearly see is true, yet that $M$ itself can’t affirm without getting trapped in a contradiction. The difficulties with this argument have been explained at length elsewhere [@aar:qcsd; @dennett; @russellnorvig]; some standard replies to it are given in a footnote.[^50] Here, I’ll simply say that I think the Penrose-Lucas argument establishes *some* valid conclusion, but the conclusion is much weaker than what Penrose wants, and it can also be established much more straightforwardly, without Gödelian considerations.  The valid conclusion is that, *if you know the code of an AI,* then regardless of how intelligent the AI seems to be, you can unmask it as an automaton, blindly following instructions.  To do so, however, you don’t need to trap the AI in a self-referential paradox: it’s enough to verify that the AI’s responses are precisely the ones predicted (or probabilistically predicted) by the code that you possess!  Both with the Penrose-Lucas argument and with this simpler argument, it seems to me that the real issue is not whether the AI follows a program, but rather, whether it follows a program that’s *knowable by other physical agents*.  That’s why this essay focusses from the outset on the latter issue. 4. 
**I don’t suggest any barrier to a suitably-programmed digital computer passing the Turing Test.**  Of course, if the freebit picture were correct, then there *would* be a barrier to duplicating the mental state and predicting the responses of a *specific* human.  Even here, though, it’s possible that, through non-invasive measurements, one could learn enough to create a digital mockup of a given person that would fool that person’s closest friends and relatives, possibly for decades.  (For this purpose, it might not matter if the mockup’s responses eventually diverged badly from the original person’s!)  And such a mockup would certainly succeed at the weaker task of passing the Turing Test—i.e., of fooling interrogators into thinking it was human, at least until its code was revealed.  If these sorts of mockups *couldn’t* be built, then it would have to be for reasons well beyond anything explored in this essay. 5. **I don’t imagine anything particularly exotic about the biology of the brain.**  In *The Emperor’s New Mind* [@penrose], Penrose speculates that the brain might act as what today we would call an adiabatic quantum computer: a device that generates highly-entangled quantum states, and might be able to solve certain optimization problems faster than any classical algorithm.  (In this case, presumably, the entanglement would be *between neurons*.)  In *Shadows* [@penrose:shadows], Penrose goes further, presenting a proposal, due to himself and Stuart Hameroff, that ascribes a central role to *microtubules*, a component of the neuronal cytoskeleton.  In this proposal, the microtubules would basically be antennae sensitive to yet-to-be-discovered quantum-gravity effects.  Since Penrose *also* conjectures that a quantum theory of gravity would include Turing-uncomputable processes (see point 7 below), the microtubules would therefore let human beings surpass the capabilities of Turing machines. Unfortunately, subsequent research hasn’t been kind to these ideas.  
Calculations of decoherence rates leave almost no room for the possibility of quantum coherence being maintained in the hot, wet environment of the brain for anything like the timescales relevant to cognition, or for long-range entanglement between neurons (see Tegmark [@tegmark:qmbrain] for example).  As for microtubules, they are common structural elements in cells—not only in neurons—and no clear evidence has emerged that they are particularly sensitive to the quantum nature of spacetime.  And this is setting aside the question of the evolutionary pathways by which the quantum-gravitational antennae could have arisen. The freebit perspective requires none of this: at least from a biological perspective, its picture of the human brain is simply that of conventional neuroscience.  Namely, the human brain is a massively-parallel, highly-connected *classical* computing organ, whose design was shaped by millions of years of natural selection.  Neurons perform a role vaguely analogous to that of *gates* in an electronic circuit (though neurons are far more complicated in detail), while synaptic strengths serve as a readable and writable memory.  If we restrict to issues of principle, then perhaps the most salient *difference* between the brain and today’s electronic computers is that the brain is a digital/analog hybrid.  This means, for example, that we have no practical way to measure the brain’s exact computational state at a given time, copy the state, or restore the brain to a previous state; and it is not even obvious whether these things can be done in principle.  It also means that the brain’s detailed activity might be sensitive to microscopic fluctuations (for example, in the sodium-ion channels) that get chaotically amplified; this amplification might even occur over timescales relevant to human decision-making (say, $30$ seconds).  
Of course, if those fluctuations were quantum-mechanical in origin—and at a small enough scale, they *would* be—then they couldn’t be measured even in principle without altering them. From the standpoint of neuroscience, the last parts of the preceding paragraph are certainly not established, but neither does there seem to be any good evidence against them.  I regard them as plausible guesses, and hope future work will confirm or falsify them.  To the view above, the freebit picture adds only a single further speculation—a speculation that, moreover, I think *does not even encroach on neuroscience’s turf*.  This is simply that, if we consider the quantum states $\rho$ relevant to the microscopic fluctuations, then those states are subject to at least some Knightian uncertainty (i.e., they are freestates as defined in Appendix \[FREESTATES\]); and furthermore, at least some of the Knightian uncertainty could ultimately be traced, if we wished, back to our ignorance of the detailed microstate of the early universe.  This might or might not be true, but it seems to me that it’s not a question for neuroscience at all, but for physics and cosmology (see Section \[FALSIFY\]).  What the freebit picture needs from neuroscience, then, is extremely modest—certainly compared to what Penrose’s picture needs! 6. **I don’t propose an objective reduction process that would modify quantum mechanics.**  Penrose speculates that, when the components of a quantum state achieve a large enough energy difference that they induce appreciably different configurations of the spacetime metric (roughly, configurations that differ by one Planck length or more), new quantum-gravitational laws beyond unitary quantum mechanics should come into effect, and cause an objective reduction of the quantum state.  This hypothetical process would underlie what we perceive as a measurement or collapse.  
Penrose has given arguments that his reduction process, if it existed, would have escaped detection by current quantum interference experiments, but *could* conceivably be detected or ruled out by experiments in the foreseeable future [@mspb].  Penrose’s is far from the only objective reduction model on the table: for example, there’s a well-known earlier model due to Ghirardi, Rimini, and Weber (GRW) [@grw], but that one was purely phenomenological, rather than being tied to gravity or some other known part of physics. If an objective reduction process were ever discovered, then it would provide a ready-made distinction between microfacts and macrofacts (see Section \[MICROMACRO\]), of exactly the sort the freebit picture needs.  Despite this, I’m profoundly skeptical that any of the existing objective reduction models are close to the truth.  The reasons for my skepticism are, first, that the models seem too ugly and ad hoc (GRW’s more so than Penrose’s); and second, that the AdS/CFT correspondence now provides evidence that quantum mechanics can emerge unscathed even from the combination with gravity.  That’s why, in Sections \[MICROMACRO\] and \[MWI\], I speculated that the distinction between microfacts and macrofacts might ultimately be defined in terms of deSitter space cosmology, with a macrofact being any fact already irreversibly headed toward the deSitter horizon. 7. **I don’t propose that quantum gravity leads to Turing-uncomputable processes.** One of Penrose’s most striking claims is that the laws of physics should involve *uncomputability*: that is, transitions between physical states that cannot in principle be simulated by a Turing machine, even given unlimited time.  Penrose arrives at this conclusion via his Gödel argument (see point 3); he then faces the formidable challenge of where to *locate* the necessary uncomputability in anything plausibly related to physics.  
Note that this is *separate* from the challenge (discussed in point 5) of how to make the human brain sensitive to the uncomputable phenomena, supposing they exist!  In *Shadows* [@penrose:shadows], Penrose seems to admit that this is a weak link in his argument.  As evidence for uncomputability, the best he can offer is a theorem of Markov that the $4$-manifold homeomorphism problem is undecidable (indeed, equivalent to the halting problem) [@markov:4manif], and a speculation of Geroch and Hartle [@gerochhartle] that maybe that fact has something to do with quantum gravity, since some attempted formulations of quantum gravity involve sums over $4$-manifolds. Personally, I see no theoretical or empirical reason to think that the laws of physics should let us solve Turing-uncomputable problems—either with our brains, or with any other physical system.  Indeed, I would go further: in [@aar:np], I summarized the evidence that the laws of physics seem to conspire to prevent us from solving $\mathsf{NP}$-complete problems (like the Traveling Salesman Problem) in polynomial time.  But the $\mathsf{NP}$-complete problems, being solvable in merely exponential time, are child’s play compared to Turing-uncomputable problems like the halting problem!  For this reason, I regard it as a serious drawback of Penrose’s proposals that they demand uncomputability in the dynamical laws, and as an advantage of the freebit picture that it suggests nothing of the kind.  Admittedly, the freebit picture does require that there be no complete, computationally-simple description of the *initial conditions*.[^51]  But it seems to me that nothing in established physics should have led us to expect that such a description would exist *anyway*![^52]  The freebit picture is silent on whether detailed properties of the initial state can actually be *used* to solve otherwise-intractable computational problems, such as $\mathsf{NP}$-complete problems, in a reliable way.  
But the picture certainly gives no reason to *think* this is possible, and I see no evidence for its possibility from any other source. Application to Boltzmann Brains\[BOLTZMANN\] ============================================ In this section, I’ll explain how the freebit perspective, if adopted, seems to resolve the notorious Boltzmann brain problem of cosmology.  No doubt some people will feel that the cure is even worse than the disease!  But even if one thinks that, the mere fact of a connection between freebits and Boltzmann brains seems worth spelling out. First, what is the Boltzmann brain problem?  Suppose that—as now seems all but certain [@perlmutter]—our universe will *not* undergo a Big Crunch, but will simply continue to expand forever, its entropy increasing according to the Second Law.  Then eventually, after the last black holes have decayed into Hawking radiation, the universe will reach the state known as *thermal equilibrium*: basically an entropy-maximizing soup of low-energy photons, flying around in an otherwise cold and empty vacuum.  The difficulty is that, even in thermal equilibrium, there’s still a tiny but nonzero probability that *any given (finite) configuration* will arise randomly: for example, via a chance conglomeration of photons, which could give rise to other particles via virtual processes.  In general, we expect a configuration with total entropy $S$ to arise at a particular time and place with probability of order $\thicksim1/\exp\left( S\right) $.  But eternity being a long time, even such exponentially-unlikely fluctuations should not only occur, but occur *infinitely often*, for the same reason why all of Shakespeare’s plays presumably appear, in coded form, infinitely often in the decimal expansion of $\pi$.[^53] So in particular, we would eventually expect (say) beings physically identical to you, who’d survive just long enough to have whatever mental experiences you’re now having, then disappear back into the void.  
These hypothetical observers are known in the trade as *Boltzmann brains* (see [@dks]), after Ludwig Boltzmann, who speculated about related matters in the late $19^{th}$ century.  So, how do you know that *you* aren’t a Boltzmann brain? But the problem is worse.  Since in an eternal universe, you would have infinitely many Boltzmann-brain doppelgängers, any observer with your memories and experiences seems infinitely *more* likely to be a Boltzmann brain, than to have arisen via the normal processes of Darwinian evolution and so on starting from a Big Bang!  Silly as it sounds, this has been a major problem plaguing recent cosmological proposals, since they keep wanting to assign enormous probability measure to Boltzmann brains (see [@carroll:eternity]). But now suppose you believed the freebit picture, and also believed that possessing Knightian freedom is a necessary condition for counting as an observer.  Then I claim that the Boltzmann brain problem would immediately go away.  The reason is that, in the freebit picture, Knightian freedom *implies* a certain sort of correlation with the universe’s initial state at the Big Bang—so that lack of complete knowledge of the initial state corresponds to lack of complete predictability (even probabilistic predictability) of one’s actions by an outside observer.  But a Boltzmann brain wouldn’t have that sort of correlation with the initial state.  By the time thermal equilibrium is reached, the universe will (by definition) have forgotten all details of its initial state, and any freebits will have long ago been used up.  In other words, there’s no way to make a Boltzmann brain think one thought rather than another by toggling freebits.  So, on this account, Boltzmann brains wouldn’t be free, even during their brief moments of existence.  This, perhaps, invites the further speculation that there’s nothing that it’s like to be a Boltzmann brain. 
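The counting behind the Boltzmann-brain problem is simple enough to spell out in a few lines. Here is a minimal arithmetic sketch (my own illustration, with an illustratively tiny entropy; a brain-sized fluctuation would have $S$ closer to $10^{70}$): the per-epoch probability $\sim 1/\exp(S)$ is fantastically small, yet the expected number of occurrences grows linearly with the number of epochs, and so diverges in an eternal universe.

```python
import math

S = 100.0                 # entropy of the fluctuation; illustratively tiny
p = math.exp(-S)          # per-epoch chance of order 1/exp(S), as in the text

# Expected number of occurrences after N independent epochs is N * p:
# utterly negligible at first, but unbounded as N grows.
expected_early = 1e20 * p                    # still effectively zero
epochs_per_hit = 1.0 / p                     # ~2.7e43 epochs until one is expected
expected_late = (1e6 * epochs_per_hit) * p   # ~1e6 expected occurrences
```

Since eternity supplies unboundedly many epochs, the expected count diverges, which is all the paradox needs; the freebit response above denies not the occurrences themselves, but their status as observers.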
What Happens When We Run Out of Freebits?\[RUNOUT\]
---------------------------------------------------

The above discussion of Boltzmann brains leads to a more general observation about the freebit picture.  Suppose that

1. the freebit picture holds,

2. the observable universe has a finite extent (as it does, assuming a positive cosmological constant), and

3. the holographic principle (see Section \[INITIAL\]) holds.

Then the number of freebits accessible to any one observer must be finite—simply because the number of bits of *any* kind is then upper-bounded by the observable universe’s finite holographic entropy.  (For details, see Bousso [@bousso:vac] or Lloyd [@lloyd], both of whom estimate the observable universe’s information capacity as roughly $\thicksim10^{122}$ bits.)

But the nature of freebits is that they get permanently used up whenever they are amplified to macroscopic scale.  So under the stated assumptions, we conclude that only a finite number of free decisions can possibly be made, before the observable universe runs out of freebits!  In my view, this should not be too alarming.  After all, even *without* the notion of freebits, the Second Law of Thermodynamics (combined with the holographic principle and the positive cosmological constant) already told us that the observable universe can witness at most $\thicksim10^{122}$ interesting events, of any kind, before it settles into thermal equilibrium.  For more on the theme of freebits as a finite resource, see Appendix \[KOLMOG\].

Indexicality and Freebits\[INDEXFREE\]
======================================

The Boltzmann brain problem is just one example of what philosophers call an *indexical* puzzle: a puzzle involving the first-person facts of who, what, where, and when *you* are, which seems to persist even after all the third-person facts about the physical world have been specified.
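As an aside, the $\thicksim10^{122}$-bit estimate of Bousso and Lloyd quoted above can be reproduced with a back-of-the-envelope calculation. Here is a minimal sketch; the constants are standard rough values assumed for illustration, not numbers taken from either paper:

```python
import math

# Back-of-the-envelope estimate of the observable universe's holographic
# information capacity.  All constants are standard rough values (assumed
# here for illustration, not quoted from Bousso or Lloyd).
c = 3.0e8            # speed of light, m/s
H0 = 2.2e-18         # Hubble constant (~70 km/s/Mpc), in 1/s
l_planck = 1.6e-35   # Planck length, m

R = c / H0                        # de Sitter horizon radius, ~1.4e26 m
area = 4 * math.pi * R ** 2       # horizon area, m^2

# Bekenstein-Hawking: entropy = area / (4 Planck areas), in nats;
# divide by ln 2 to convert to bits.
bits = area / (4 * l_planck ** 2) / math.log(2)
print(f"~{bits:.1e} bits")        # on the order of 10^122
```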
Indexical puzzles, and the lack of consensus on how to handle them, underlie many notorious debates in science and philosophy—including the debates surrounding the anthropic principle and fine-tuning of cosmological parameters, the multiverse and string theory landscape, the Doomsday argument, and the Fermi paradox of where all the extraterrestrial civilizations are.  I won’t even try to summarize the vast literature on these problems (see Bostrom [@bostrom] for an engaging introduction).  Still, it might be helpful to go through a few examples, just to illustrate what we mean by indexical puzzles. - When doing Bayesian statistics, it’s common to use a *reference class*: roughly speaking, the set of observers from which you consider yourself to have been sampled.  For an uncontroversial example, suppose you want to estimate the probability that you have some genetic disease, in order to decide (say) whether it’s worth getting tested.  In reality, you either have the disease or you don’t.  Yet it seems perfectly unobjectionable to estimate the *probability* that you have the disease, by imagining that you were chosen randomly from among all people with the same race, sex, and so forth, then looking up the relevant statistics.  However, things quickly become puzzling when we ask how *large* a reference class can be invoked.  Can you consider yourself to have been sampled uniformly at random from the set of all humans who ever lived or ever will live?  If so, then why not also include early hominids, chimpanzees, dolphins, extraterrestrials, or sentient artificial intelligences?  Many would simply reply that you’re *not* a dolphin or any of those other things, so there’s no point worrying about the hypothetical possibility of having been one.  The problem is that you’re *also* not some other person of the same race and sex—you’re *you*—but for medical and actuarial purposes, we clearly *do* reason as if you could have been someone different.  
So why can your reference class include those other people but not dolphins? - Suppose you’re an astronomer who’s trying to use Bayesian statistics to estimate the probability of one cosmological model versus another one, conditioned on the latest data about the cosmic background radiation and so forth.  Of course, as discussed in Section \[KNIGHTIAN\], any such calculation requires a specification of *prior probabilities*.  The question is: should your prior include the assumption that, all else being equal, we’re twice as likely to find ourselves in a universe that’s twice as large (and thus presumably has twice as many civilizations, in expectation)?  If so, then how do we escape the absurd-seeming conclusion that we’re *certain* to live in an infinite universe, if such a universe is possible at all—since we expect there to be infinitely many more observers in an infinite universe than in a finite one?  Surely we can’t deduce the size of the universe without leaving our armchairs, like a medieval scholastic?  The trouble is that, as Bostrom [@bostrom] points out, *not* adjusting the prior probabilities for the expected number of observers leads to its own paradoxes.  As a fanciful example, suppose we’re trying to decide between two theories, which on physical grounds are equally likely.  Theory $A$ predicts the universe will contain a single civilization of two-legged creatures, while theory $B$ predicts the universe will contain a single civilization of two-legged creatures, *as well as* a trillion equally-advanced civilizations of nine-legged creatures.  Observing ourselves to be two-legged, can we conclude that theory $A$ is overwhelmingly more likely—since if theory $B$ were correct, then we would almost certainly have been pondering the question on nine legs?  A straightforward application of Bayes’ rule seems to imply that the answer should be yes—*unless* we perform the adjustment for the number of civilizations that led to the first paradox! 
- Pursuing thoughts like the above quickly leads to the notorious *Doomsday argument*.  According to that argument, the likelihood that human civilization will kill itself off in the near future is much larger than one would naïvely think—where naïvely means before taking indexical considerations into account.  The logic is simple: suppose human civilization will continue for billions of years longer, colonizing the galaxy and so forth.  In that case, our own position near the very beginning of human history would seem absurdly improbable—the more so when one takes into account that such a long-lived, spacefaring civilization would probably have a much larger population than exists today (just as *we* have a much larger population than existed hundreds of years ago).  If we’re Bayesians, and willing to speak at all about ourselves as drawn from an ensemble (as most people are, in the medical example), that presumably means we should revise downward the probability that the spacefaring scenario is correct, and revise upward the probability of scenarios that give us a more average position in human history.  But because of exponential population growth, one expects the latter to be heavily weighted toward scenarios where civilization kills itself off in the very near future.  Many commentators have tried to dismiss this argument as flat-out erroneous (see Leslie [@leslie] or Bostrom [@bostrom] for common objections to the argument and responses to them).[^54]  However, while the modern, Bayesian version of the Doomsday argument might indeed be wrong, it’s not wrong because of some trivial statistical oversight.  Rather, the argument might be wrong because it embodies an interestingly false way of thinking about indexical uncertainty. Perplexing though they might be, what do any of these indexical puzzles have to do with *our* subject, free will?  After all, presumably no one thinks that we have the free will to choose where and when we’re born!  
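The Bayesian arithmetic in the second and third bullets above can be made concrete in a few lines. The toy calculation below is my own formalization: the specific numbers, and the uniform “self-sampling” likelihoods, are illustrative assumptions rather than anything from the text.

```python
from fractions import Fraction

# --- Toy version of the two-theory example ---------------------------------
# Theory A: one civilization of two-legged creatures.
# Theory B: the same civilization plus a trillion nine-legged civilizations.
trillion = 10 ** 12
prior = Fraction(1, 2)                 # "on physical grounds, equally likely"

# Self-sampling: treat yourself as a uniformly random civilization within
# each theory.  Likelihood of observing "I am two-legged":
lik_A = Fraction(1)                    # the only civilization is two-legged
lik_B = Fraction(1, trillion + 1)      # one two-legged out of 10^12 + 1

post_A = prior * lik_A / (prior * lik_A + prior * lik_B)
print(float(post_A))                   # ~1: theory A seems overwhelmingly favored

# Adjusting each theory's prior for its number of civilizations (the same
# move that favored bigger universes) exactly cancels the effect:
w_A, w_B = prior * 1, prior * (trillion + 1)
post_A_adj = w_A * lik_A / (w_A * lik_A + w_B * lik_B)
print(float(post_A_adj))               # 0.5: back to the bare physical prior

# --- Toy version of the Doomsday argument's Bayesian step ------------------
# "Doom soon": 200 billion humans ever; "spacefaring": 200 trillion.
# Your birth rank: roughly the 100-billionth human (consistent with both).
N_soon, N_late = 200 * 10**9, 200 * 10**12
lik_soon = Fraction(1, N_soon)         # P(your exact rank | N humans) = 1/N
lik_late = Fraction(1, N_late)

post_soon = lik_soon / (lik_soon + lik_late)   # equal priors cancel
print(float(post_soon))                # ~0.999: "doom soon" dominates
```

Nothing here resolves the puzzles, of course; it just pins down exactly where the choice of reference class and observer-weighting enters the calculation.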
Yet, while I’ve never seen this connection spelled out before, it seems to me that indexical puzzles like those above *do* have some bearing on the free will debate.  For the indexical puzzles make it apparent that, even if we assume the laws of physics are completely mechanistic, there remain large aspects of our experience that those laws fail to determine, even probabilistically.  Nothing in the laws picks out one particular chunk of suitably organized matter from the immensity of time and space, and says, here, *this* chunk is you; its experiences are your experiences.  Nor does anything in the laws give us even a probability distribution over the possible such chunks.  Despite its obvious importance even for empirical questions, our uncertainty about who we are and who we could have been (i.e., our reference class) bears all the marks of Knightian uncertainty.  Yet once we’ve admitted indexical uncertainty into our worldview, it becomes less clear why we should reject the sort of uncertainty that the freebit picture needs!  If whether you find yourself born in $8^{th}$-century China, $21^{st}$-century California, or Planet Zorg is a variable subject to Knightian uncertainty, then why not what you’ll have for dessert tonight? More concretely, suppose that there are numerous planets nearly identical to Earth, down to the same books being written, people with the same names and same DNA being born, etc.  If the universe is spatially infinite—which cosmologists consider a serious possibility[^55]—then there’s no need to imagine this scenario: for simple probabilistic reasons, it’s almost certainly true!  Even if the universe is spatially finite, the probability of such a twin Earth would approach $1$ as the number of stars, galaxies, and so on went to infinity.  
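The claim that the probability of a twin Earth approaches $1$ as the number of star systems grows is just the usual independence calculation. A minimal sketch, where the per-system probability $q$ is an arbitrary stand-in rather than a physical estimate:

```python
import math

# If each of n star systems independently contains an Earth-identical
# configuration with some tiny fixed probability q > 0, then
#     P(at least one twin Earth) = 1 - (1 - q)^n  ->  1   as n -> infinity.
# The value of q below is an arbitrary stand-in, not a physical estimate.
q = 1e-30

for n in (10**30, 10**31, 10**32):
    p_twin = -math.expm1(n * math.log1p(-q))   # 1 - (1-q)^n, computed stably
    print(f"n = 1e{len(str(n)) - 1}: P(twin Earth) = {p_twin:.6f}")
    # -> the probability climbs toward 1 as n grows
```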
Naturally, we’d expect any two of these twin Earths to be separated by exponentially large distances—so that, because of the dark energy pushing the galaxies apart, we would *not* expect a given Earth-twin ever to be in communication with any of its counterparts. Assume for simplicity that there are at most two of these Earth-twins, call them $A$ and $B$.  (That assumption will have no effect on our conclusions.)  Let’s suppose that, because of (say) a chaotically amplified quantum fluctuation, these two Earths are about to diverge significantly in their histories for the first time.  Let’s further suppose that *you*—or rather, beings on $A$ and $B$ respectively who look, talk, and act like you—are the proximate cause of this divergence.  On Earth $A$, the quantum fluctuation triggers a chain of neural firing events in your brain that ultimately causes you to take a job in a new city.  On Earth $B$, a different quantum fluctuation triggers a chain of neural firings that causes you to stay where you are. We now ask: from the perspective of a superintelligence that knows everything above, what’s the total probability $p$ that you take the new job?  Is it simply $\frac{1}{2}$, since the two actual histories should be weighted equally?  What if Earth $B$ had a greater probability than Earth $A$ of having formed in the first place—or did under one cosmological theory but not another?  And why do we need to average over the two Earths at all?  Maybe you$_{A}$ is the real you, and taking the new job is a defining property of who you are, much as Shakespeare wouldn’t be Shakespeare had he not written his plays.  So maybe you$_{B}$ isn’t even part of your reference class: it’s just a faraway doppelgänger you’ll never meet, who looks and acts like you (at least up to a certain point in your life) but *isn’t* you.  So maybe $p=1$.  Then again, maybe you$_{B}$ is the real you and $p=0$.  
Ultimately, not even a superintelligence could calculate $p$ without knowing something about *what it means to be you,* a topic about which the laws of physics are understandably silent.

Now, someone who accepted the freebit picture would say that the superintelligence’s inability to calculate $p$ is no accident.  For whatever quantum fluctuation separated Earth $A$ from Earth $B$ could perfectly well have been a freebit.  In that case, before you made the decision, the right representation of your physical state would have been a Knightian combination of you$_{A}$ and you$_{B}$.  (See Appendix \[FREESTATES\] for details of how these Knightian combinations fit in with the ordinary density matrix formalism of quantum mechanics.)  After you make the decision, the ambiguity is resolved in favor of you$_{A}$ or you$_{B}$.  Of course, you$_{A}$ might then turn out to be a Knightian combination of two further entities, you$_{A_{1}}$ and you$_{A_{2}}$, and so on.

For me, the appeal of this view is that it cancels two philosophical mysteries against each other: free will and indexical uncertainty.  As I said in Section \[FWFREEDOM\], free will seems to me to *require* some source of Knightian uncertainty in the physical world.  Meanwhile, indexical uncertainty is a type of Knightian uncertainty that’s been considered troublesome and unwanted—though attempts to replace it with probabilistic uncertainty have led to no end of apparent paradoxes, from the Doomsday argument to Bostrom’s observer-counting paradoxes to the Boltzmann brain problem.  So it seems natural to try to fit the free-will peg into the indexical-uncertainty hole.

Is The Freebit Picture Falsifiable?\[FALSIFY\]
==============================================

An obvious question about the freebit picture is whether it leads to any new, falsifiable predictions.  At one level, I was careful not to commit myself to such predictions in this essay!
My goal was to clarify some conceptual issues about the physical predictability of the brain.  Whenever I ran up against an unanswered scientific question—for example, about the role of amplified quantum events in brain activity—I freely confessed my ignorance. On the other hand, it’s natural to ask: *are there empirical conditions that the universe has to satisfy, in order for the freebit perspective to have even a **chance** of being related to free will as most people understand the concept?* I submit that the answer to the above question is yes.  To start with the most straightforward predictions: first, it’s necessary that psychology will never become physics.  If human beings could be predicted as accurately as comets, then the freebit picture would be falsified.  For in such a world, it would have *turned out* that, whatever it is we called free will (or even the illusion of free will) in ordinary life, that property was not associated with any fundamental unpredictability of our choices. It’s also necessary that a quantum-gravitational description of the early universe will *not* reveal it to have a simply-describable pure or mixed state.  Or at least, it’s necessary that indexical uncertainty—that is, uncertainty about our own location in the universe or multiverse—will forever prevent us from reducing arbitrary questions about our own future to well-posed mathematical problems.  (Such a math problem might ask, for example, for the probability distribution $\mathcal{D}$ that results when the known evolution equations of physics are applied to the known initial state at the Big Bang, marginalized to the vicinity of the earth, and conditioned on some relevant subset of branches of the quantum-mechanical wavefunction in which we happen to find ourselves.) However, the above predictions have an unsatisfying, god-of-the-gaps character: they’re simply predictions that certain scientific problems will never be completely solved!  
Can’t we do better, and give *positive* predictions?  Perhaps surprisingly, I think we can. The first prediction was already discussed in Section \[CHAOS\].  In order for the freebit picture to work, *it’s necessary that quantum uncertainty—for example, in the opening and closing of sodium-ion channels—can not only get chaotically amplified by brain activity, but can do so surgically and on reasonable timescales.*  In other words, the elapsed time between (a) the amplification of a quantum event and (b) the neural firings influenced by that amplification, must not be so long that the idea of a connection between the two retreats into absurdity.  (Ten seconds would presumably be fine, whereas a year would presumably not be.)  Closely related to that requirement, the quantum event must not affect countless *other* classical features of the world, separately from its effects on the brain activity.  For if it did, then a prediction machine could in principle measure those other classical features to forecast the brain activity, with no need for potentially-invasive measurements on either the original quantum state or the brain itself. It’s tempting to compare the empirical situation for the freebit picture to that for supersymmetry in physics.  Both of these frameworks are very hard to falsify—since no matter what energy scale has been explored, or how far into the future neural events have been successfully predicted, a diehard proponent could always hold out for the superparticles or freebits making their effects known at the *next* higher energy, or the next longer timescale.  Yet despite this property, supersymmetry and freebits are both falsifiable in degrees.  In other words, if the superparticles can be chased up to a sufficiently high energy, then *even if present, they would no longer do most of the work they were originally invented to do*.  The same is true of freebits, if the time between amplification and decision is long enough. 
Moreover, there’s also a *second* empirical prediction of the freebit picture, one that doesn’t involve the notion of a reasonable timescale.  Recall, from Section \[FREEBITS\], the concept of a past macroscopic determinant (PMD): a set of classical facts (for example, the configuration of a laser) to the causal past of a quantum state $\rho$ that, if known, completely determine $\rho$.  Now consider an omniscient demon, who wants to influence your decision-making process by changing the quantum state of a single photon impinging on your brain.  Imagine that there are indeed photons that would serve the demon for this purpose.  However, now imagine that *all such photons can be grounded in PMDs*.  That is, imagine that the photons’ quantum states *cannot* be altered, maintaining a spacetime history consistent with the laws of physics, without *also* altering classical degrees of freedom in the photons’ causal past.  In that case, the freebit picture would once again fail.  For *if* a prediction machine had simply had the foresight to measure the PMDs, then (by assumption) it could also calculate the quantum states $\rho$, and therefore the probability of your reaching one decision rather than another.  Indeed, not only could the machine probabilistically predict your actions; it could even provide a complete quantum-mechanical account of where the probabilities came from.  Given such a machine, your choices would remain unpredictable only in the sense that a radioactive atom is unpredictable, a sense that doesn’t interest us (see Section \[KNIGHTIANPHYS\]).  The conclusion is that, for the freebit picture to work, it’s necessary that some of the relevant quantum states *can’t* be grounded in PMDs, but only traced back to the early universe (see Figure \[chainsfig\]). 
![](chains.eps)

Conclusions\[CONC\]
===================

At one level, all I did in this essay was to invoke what David Deutsch, in his book *The Beginning of Infinity* [@deutsch:infinity], called the momentous dichotomy:

> *Either a given technology is possible, or else there must be some reason (say, of physics or logic) why it isn’t possible.*

Granted, the above statement is a near-tautology.  But as Deutsch points out, the implications of applying the tautology consistently can be enormous.  One illustrative application, *slightly* less contentious than free will, involves my own field of quantum computing.  The idea there is to apply the principles of quantum mechanics to build a new type of computer, one that could solve certain problems (such as factoring integers) exponentially faster than we know how to solve them with any existing computer.  Quantum computing research is being avidly pursued around the world, but there are also a few vocal skeptics of such research.  Many (though not all) of the skeptics appear to subscribe to the following three positions:

1. A useful quantum computer is an almost self-evident absurdity: noise, decoherence, and so forth must conspire to prevent such a machine from working, just as the laws of physics always conspire to prevent perpetual-motion machines from working.

2. No addition or revision is needed to quantum mechanics: the physical framework that underlies quantum computing, and that describes the state of an isolated physical system as a vector in an exponentially-large Hilbert space (leading to an apparent exponential *slowdown* when simulating quantum mechanics with a conventional computer).

3. Reconciling positions 1 and 2 is not even a particularly interesting problem.  At any rate, the burden is on the quantum computing *proponents* to sort these matters out!  Skeptics can be satisfied that *something* must prevent quantum computers from working, and leave it at that.
What Deutsch’s dichotomy suggests is that such blasé incuriosity, in the face of a glaring conflict between ideas, is *itself* the absurd position.  Such a position only pretends to be the conservative one: secretly it is radical, in that it rejects the whole idea of science advancing by identifying apparent conflicts and then trying to resolve them. In this essay, I applied Deutsch’s momentous dichotomy to a different question: > Could there exist a machine, consistent with the laws of physics, that non-invasively cloned all the information in a particular human brain that was relevant to behavior—so that the human could emerge from the machine unharmed, but would thereafter be fully probabilistically predictable given his or her future sense-inputs, in much the same sense that a radioactive atom is probabilistically predictable? My central thesis is simply that *there is no safe, conservative answer to this question.*  Of course, one can debate what exactly the question means, and how we would know whether the supposed cloning machine had succeeded.  (See Appendix \[MEAN\] for my best attempt at formalizing the requirements.)  But I contend that philosophical analysis can only take us so far.  The question also has an empirical core that could turn out one way or another, depending on details of the brain’s physical organization that are not yet known.  In particular, does the brain possess what one could call a *clean digital abstraction layer*: that is, a set of macroscopic degrees of freedom that 1. encode everything relevant to memory and cognition, 2. can be accurately modeled as performing a classical digital computation, and 3. notice the microscopic, quantum-mechanical degrees of freedom at most as pure random-number sources, generating noise according to prescribed probability distributions? 
Or is such a clean separation between the macroscopic and microscopic levels unavailable—so that any attempt to clone a brain would either miss much of the cognitively-relevant information, or else violate the No-Cloning Theorem? In my opinion, *neither* answer to the question should make us wholly comfortable: if it does, then we haven’t sufficiently thought through the implications!  Suppose, on the one hand, that the brain-cloning device is possible.  Then we immediately confront countless paradoxes of personal identity like those discussed in Section \[UPLOADING\].  Would you feel comfortable being (painlessly) killed, provided that a perfect clone of your brain’s digital abstraction layer remained?  Would it matter if the cloned data was moved to a new biological body, or was only simulated electronically?  If the latter, what would count as an acceptable simulation?  Would it matter if the simulation was run backwards in time, or in heavily-encrypted form, or in only one branch of a quantum computation?  Would you literally expect to wake up as your clone?  What if two clones were created: would you expect to wake up as each one with 50% probability?  When applying Bayesian reasoning, should you, all other things equal, judge yourself twice as likely to wake up in a possible world with twice as many clones of yourself?  The point is that, in a world with the cloning device, these would no longer be metaphysical conundrums, but in some sense, just *straightforward empirical questions* about what you should expect to observe!  Yet even people who agreed on every possible third-person fact about the physical world and its contents, might answer such questions completely differently. To be clear: it seems legitimate to me, given current knowledge, to conjecture that there is no principled obstruction to perfect brain-cloning, and indeed that this essay was misguided even to speculate about such an obstruction. 
However, *if* one thinks that by taking the pro-cloning route, one can sidestep the need for any weird metaphysical commitments of one’s own, then I’d say one is mistaken. So suppose, on the other hand, that the perfect brain-cloning device is *not* possible.  Here, exactly like in the case of quantum computing, no truly inquisitive person will ever be satisfied by a bare *assertion* of the device’s impossibility, or by a listing of practical difficulties.  Instead, such a person will demand to know: what *principle* explains why perfect brain-cloning can’t succeed, not even a million years from now?  How do we reconcile its impossibility with everything we know about the mechanistic nature of the brain?  Indeed, how should we think about the laws of physics *in general*, so that the impossibility of perfect brain-cloning would no longer seem surprising or inexplicable? As soon as we try to answer these questions, I’ve argued that we’re driven, more-or-less inevitably, to the view that the brain’s detailed evolution would have to be buffeted around by chaotically-amplified Knightian surprises, which I called freebits.  Before their amplification, these freebits would need to live in quantum-mechanical degrees of freedom, since otherwise a cloning machine could (in principle) non-invasively copy them.  Furthermore, our ignorance about the freebits would ultimately need to be traceable back to ignorance about the microstate of the early universe—again because otherwise, cloning would become possible in principle, through vigilant monitoring of the freebits’ sources. Admittedly, all this sounds like a tall order!  But, strange though it sounds, I don’t see that any of it is ruled out by current scientific understanding—though conceivably it *could* be ruled out in the future.  
In any case, setting aside one’s personal beliefs, it seems worthwhile to understand that *this* is the picture one seems forced to, if one starts from the hypothesis that brain-cloning (with all its metaphysical difficulties) should be fundamentally impossible, then tries to make that hypothesis compatible with our knowledge of the laws of physics.

Reason and Mysticism\[MYSTICISM\]
---------------------------------

At this point, I imagine some readers might press me: but what do I *really* think?  Do I actually take seriously the notion of quantum-mechanical freebits from the early universe playing a role in human decisions?  The easiest response is that, in laying out my understanding of the various alternatives—yes, brain states might be perfectly clonable, but if we want to avoid the philosophical weirdness that such cloning would entail, then we’re led in such-and-such an interesting direction, etc.—I *already said* what I really think.  In pondering these riddles, I don’t have any sort of special intuition, for which the actual arguments and counterarguments that I can articulate serve as window-dressing.  The arguments exhaust my intuition.

I’ll observe, however, that even uncontroversial facts can be made to sound incredible when some of their consequences are spelled out; indeed, the spelling out of such consequences has always been a mainstay of popular science writing.  To give some common examples: everything you can see in the night sky was once compressed into a space smaller than an atom.  The entire story of your life, including the details of your death, is almost certainly encoded (in fact, infinitely many times) somewhere in the decimal expansion of $\pi$.  If Alois Hitler and Klara Pölzl had moved an inch differently while having intercourse, World War II would probably have been prevented.
When you lift a heavy bag of groceries, what you’re really feeling is the coupling of gravitons to a stress-energy tensor generated mostly by the rapid movements of gluons.  In each of these cases, the same point *could* be made more prosaically, and many would prefer that it was.  But when we state things vividly, at least we can’t be accused of trying to *hide* the full implications of our abstract claims, for fear of being laughed at were those implications understood. Thus, suppose I’d merely argued, in this essay, that it’s possible that humans will never become as predictable (even probabilistically predictable) as digital computers, because of chaotic amplification of unknowable microscopic events, our ignorance of which can be traced as far backward in time as one wishes.  In that case, some people would likely agree and others would likely disagree, with many averring (as I do) that the question remains open.  Hardly anyone, I think, would consider the speculation an absurdity or a gross affront to reason.  But if exactly the same idea is phrased in terms of a quantum pixie-dust left over from the Big Bang, which gets into our brains and gives us the capacity for free will—well then, *of course* it sounds crazy!  Yet the second phrasing is nothing more than a dramatic rendering of the worldview that the first phrasing implies. Perhaps some readers will accuse me of mysticism.  
To this, I can only reply that the view I flirt with in this essay seems like mysticism of an unusually tame sort: one that embraces the mechanistic and universal nature of the laws of physics, as they’ve played out for 13.7 billion years; that can accept even the collapse of the wavefunction as an effect brought about by ordinary unitary evolution; that’s consumed by doubts; that welcomes corrections and improvement; that declines to plumb the cosmos for self-help tips or moral strictures about our sex lives; and that sees science—not introspection, not ancient texts, not science redefined to mean something different, but *just science in the ordinary sense*—as our best hope for making progress on the ancient questions of existence. To any mystical readers, who want human beings to be as free as possible from the mechanistic chains of cause and effect, I say: *this picture represents the absolute maximum that I can see how to offer you, if I confine myself to speculations that I can imagine making contact with our current scientific understanding of the world.*  Perhaps it’s less than you want; on the other hand, it does seem like more than the usual compatibilist account offers!  To any rationalist readers, who cheer when consciousness, free will, or similarly woolly notions get steamrolled by the advance of science, I say: you can feel vindicated, if you like, that despite searching (almost literally) to the ends of the universe, I wasn’t able to offer the mystics anything more than I was!  And even what I *do* offer might be ruled out by future discoveries. Indeed, the freebit picture’s falsifiability is perhaps the single most important point about it.  Consider the following questions: On what timescales can microscopic fluctuations in biological systems get amplified, and change the probabilities of macroscopic outcomes?  What other side effects do those fluctuations have?  
Is the brain interestingly different in its noise sensitivity than the weather, or other complicated dynamical systems?  Can the brain’s microscopic fluctuations be fully understood probabilistically, or are they subject to Knightian uncertainty?  That is, can the fluctuations all be grounded in past macroscopic determinants, or are some ultimately cosmological in origin?  Can we have a complete theory of cosmological initial conditions?  Few things would make me happier than if progress on these questions led to the discovery that the freebit picture was wrong.  For then at least we would have learned something.

Acknowledgments
===============

I thank Yakir Aharonov, David Albert, Julian Barbour, Silas Barta, Wolfgang Beirl, Alex Byrne, Sean Carroll, David Chalmers, Alessandro Chiesa, Andy Drucker, Owain Evans, Andrew Hodges, Sabine Hossenfelder, Guy Kindler, Seth Lloyd, John Preskill, Huw Price, Haim Sompolinsky, Cristi Stoica, Jaan Tallinn, David Wallace, and others I’ve no doubt forgotten for helpful discussions about the subject of this essay; and especially Dana Moshkovitz Aaronson, David Aaronson, Steve Aaronson, Cris Moore, Jacopo Tagliabue, and Ronald de Wolf for their comments on earlier drafts.  Above all, I thank S. Barry Cooper and Andrew Hodges for commissioning this essay, and for their near-infinite patience humoring my delays with it.  It goes without saying that none of the people mentioned necessarily endorse anything I say here (indeed, some of them definitely don’t!).

Appendix: Defining Freedom\[MEAN\]
==================================

In this appendix, I’ll use the notion of Knightian uncertainty (see Section \[KNIGHTIANPHYS\]) to offer a possible mathematical formalization of freedom for use in free-will discussions.  Two caveats are immediately in order.  The first is that my formalization only tries to capture what I’ve called Knightian freedom—a strong sort of in-principle physical unpredictability—and not metaphysical free will.
For as discussed in Section \[FWFREEDOM\], I don’t see how *any* definition grounded in the physical universe could possibly capture the latter, to either the believers’ or the deniers’ satisfaction.  Also, as we’ll see, formalizing Knightian freedom is *already* a formidable challenge! The second caveat is that, by necessity, my definition will be in terms of more basic concepts, which I’ll need to assume as unanalyzed primitives.  Foremost among these is the concept of a *physical system*: something that occupies space; exchanges information (and matter, energy, etc.) with other physical systems; and crucially, retains an identity through time even as its internal state changes.  Examples of physical systems are black holes, the earth’s atmosphere, human bodies, and digital computers.  Without some concept like this, it seems to me that we can never specify *whose* freedom we’re talking about, or even which physical events we’re trying to predict. Yet as philosophers know well, the concept of physical system already has plenty of traps for the unwary.  As one illustration, should we say that a human body remains the same physical system after its death?  If so, then an extremely reliable method to predict a human subject immediately suggests itself: namely, first shoot the subject; then predict that the subject will continue to lie on the ground doing nothing! Now, it might be objected that this prediction method shouldn’t count, since it *changes the subject’s state* (to put it mildly), rather than just passively gathering information about the subject.  The trouble with that response is that putting the subject in an fMRI machine, interviewing her, or even just having her sign a consent form or walking past her on the street also changes her state!  
If we don’t allow *any* interventions that change the subject’s state from what it would have been otherwise, then prediction—at least with the science-fiction accuracy we’re imagining—seems hopeless, but in an uninteresting way.  So which interventions are allowed and which aren’t? I see no alternative but to take the *set of allowed interventions* as another unanalyzed primitive.  When formulating the prediction task (and hence, in this essay, when defining Knightian freedom), we simply declare that certain interventions—such as interviewing the subject, putting her in an fMRI machine, or perhaps even having nanorobots scan her brain state—are allowed; while other interventions, such as killing her, are not allowed. An important boundary case, much discussed by philosophers, is an intervention that would destroy each neuron of the subject’s brain one by one, replacing the neurons by microchips claimed to be functionally equivalent.  Is *that* allowed?  Note that such an operation could certainly make it easier to predict the subject—since from that point forward, the predictor would only have to worry about simulating the microchips, not the messy biological details of the original brain.  Here I’ll just observe that, if we like, we can disallow such drastic interventions, without thereby taking any position on the conundrum of what such siliconization would do to the subject’s conscious experience.  Instead we can simply say that, while the subject might indeed be perfectly predictable after the operation, that fact *doesn’t settle the question at hand*, which was about the subject’s predictability *before* the operation.  For a large part of what we wanted to know was to what extent the messy biological details *do* matter, and we can’t answer that question by defining it out of consideration. But one might object: if the messy biological details need to be left in place when trying to predict a brain, what *doesn’t* need to be left in place?  
After all, brains are not isolated physical systems: they constantly receive inputs from the sense organs, from hormones in the bloodstream, etc.  So when modeling a brain, do we also need to model the entire environment in which that brain is immersed—or at least, all aspects of the environment that might conceivably affect behavior?  If so, then prediction seems hopeless, but again, not for any interesting reasons: merely for the boring reason that we can’t possibly measure *all* relevant aspects of the subject’s environment, being embedded in the environment ourselves. Fortunately, I think there’s a way around this difficulty, at the cost of one more unanalyzed primitive.  Given a physical system $S$, denote by $I\left( S\right) $ the set of *screenable inputs* to $S$—by which I mean, the inputs to $S$ that we judge any would-be predictor of $S$ should also be provided with, in order to ensure a fair contest.  For example, if $S$ is a human brain, then $I\left( S\right) $ would probably include (finite-precision digital encodings of) the signals entering the brain through the optic, auditory, and other sensory systems, the levels of various hormones in the blood, and other measurable variables at the interface between the brain and its external environment.  On the other hand, $I\left( S\right) $ probably *wouldn’t* include, for example, the exact quantum state of every photon impinging on the brain.  For arguably, we have no idea how to screen off all those microscopic inputs, short of siliconization or some equally drastic intervention. Next, call a system $S$ *input-monitorable* if there exists an allowed intervention to $S$, the result of which is that, after the intervention, all signals in $I\left( S\right) $ get carbon-copied to the predictor’s computer at the same time as they enter $S$.  
For example, using some future technology, a brain might be input-monitored by installing microchips that scan all the electrical impulses in the optic and auditory nerves, the chemical concentrations in the blood-brain barrier, etc., and that faithfully transmit that information to a predictor in the next room via wireless link.  Crucially, and in contrast to siliconization, input-monitoring doesn’t strike me as raising any profound issues of consciousness or selfhood.  That is, it seems fairly clear that an input-monitored human would still be the same human, just hooked up to some funny devices!  Input-monitoring also differs from siliconization in that it seems much closer to practical realization. *The definition of freedom that I’ll suggest will only make sense for input-monitorable physical systems.*  If $S$ is not input-monitorable, then I’ll simply hold that the problem of predicting $S$’s behavior isn’t well-enough defined: $S$ is so intertwined with its environment that one can’t say where predicting $S$ ends and predicting its environment begins.  One consequence is that, in this framework, we can’t even *pose the question* of whether humans have Knightian freedom, unless we agree (at least provisionally) that humans are input-monitorable.  Fortunately, as already suggested, I don’t see any major scientific obstacles to supposing that humans *are* input-monitorable, and I even think input-monitoring could plausibly be achieved in $50$ or $100$ years. Admittedly, in discussing whether humans are input-monitorable, a lot depends on our choice of screenable inputs $I\left( S\right) $.  If $I\left( S\right) $ is small, then input-monitoring $S$ might be easy, but predicting $S$ after the monitoring is in place might be hard or impossible, simply because of the predictor’s ignorance about crucial features of $S$’s environment.  
By contrast, if $I\left( S\right) $ is large, then input-monitoring $S$ might be hard or impossible, but supposing $S$ were input-monitored, predicting $S$ might be easy.  Since our main interest is the inherent difficulty of prediction, our preference should always be for the largest $I\left( S\right) $ possible. So, suppose $S$ is input-monitorable, and suppose we’ve arranged things so that all the screenable inputs to $S$—what $S$ sees, what $S$ hears, etc.—are transmitted in real-time to the predictor.  We then face the question: what aspects of $S$ are we trying to predict, and what does it mean to predict those aspects? Our next primitive concept will be that of $S$’s *observable behaviors*.  For the earth’s atmosphere, observable behaviors might include snow and thunderstorms; for a human brain, they might include the signals sent out by the motor cortex, or even just a high-level description of which words will be spoken and which decisions taken.  Fortunately, it seems to me that, for any sufficiently complex system, the prediction problem is *not* terribly sensitive to which observable behaviors we focus on, provided those behaviors belong to a large universality class.  By analogy, in computability theory, it doesn’t matter whether we ask whether a given computer program will ever halt, or whether the program will ever return to its initial state, or whether it will ever print YES to the console, or some other question about the program’s future behavior.  For these problems are all *reducible* to each other: if we had a reliable method to predict whether a program would halt, then we could also predict whether the program would print YES to the console, by modifying the program so that it prints YES if and only if it halts.  In the same way, if we had a reliable method to predict a subject’s hand movements in arbitrary situations, I claim that we could also predict the subject’s speech.  
For arbitrary situations include those where we direct the subject to translate everything she says into sign language!  And thus, assuming we’ve built a hand-prediction algorithm that works in those situations, we must also have built (or had the ability to build) a speech-prediction algorithm as well. So suppose we fix some set $B$ of observable behaviors of $S$.  What should count as *predicting* the behaviors?  From the outset, we should admit that $S$’s behavior might be inherently probabilistic—as, for example, if it depended on amplified quantum events taking place inside $S$.  So we should be satisfied if we can predict $S$ in merely the same sense that physicists can predict a radioactive atom: namely, by giving a probability distribution over $S$’s possible future behaviors. Here difficulties arise, which are well-known in the fields of finance and weather forecasting.  How exactly do we test predictions that take the form of probability distributions, if the predictions apply to events that might not be repeatable?  Also, what’s to prevent someone from declaring success on the basis of absurdly conservative predictions: for example, 50/50 for every yes/no question?  Briefly, I’d say that *if* the predictor’s forecasts take the form of probability distributions, then to whatever extent those forecasts are unimpressive (50/50), *the burden is on the predictor* to convince skeptics that the forecasts nevertheless encoded everything that *could* be predicted about $S$ via allowed interventions.  That is, the predictor needs to rule out the hypothesis that the probabilities merely reflected ignorance about unmeasured but measurable variables.  In my view, this would ultimately require the predictor to give a causal account of $S$’s behavior, which showed explicitly how the observed outcome depended on quantum events—the only sort of events that we know to be probabilistic on physical grounds (see Section \[QMHV\]). 
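The standard of accountability sketched above—penalizing forecasters whose probabilities merely hide ignorance—can be made concrete with a *proper scoring rule*, a device from the forecasting literature rather than anything the essay itself commits to.  The toy sketch below (all numbers hypothetical) uses the logarithmic score: a forecaster who knows the true bias of a process earns a strictly better average score than the lazy 50/50 forecaster, which is exactly why conservative forecasts can't declare cheap victories under such a rule.

```python
import math
import random

def log_score(p_forecast, outcome):
    """Logarithmic score: log of the probability the forecaster
    assigned to the outcome that actually occurred (higher is better)."""
    p = p_forecast if outcome == 1 else 1.0 - p_forecast
    return math.log(p)

random.seed(0)
true_p = 0.8   # hypothetical bias of the process being predicted
outcomes = [1 if random.random() < true_p else 0 for _ in range(10_000)]

# The informed forecaster reports the true probability; the lazy one
# reports 50/50 for every trial.
informed = sum(log_score(true_p, b) for b in outcomes) / len(outcomes)
lazy = sum(log_score(0.5, b) for b in outcomes) / len(outcomes)

# Because the log score is "proper", the informed average beats the
# lazy one, so refusing to commit beyond 50/50 carries a real cost.
```

A usage note: properness means that reporting one's true belief maximizes expected score, so a predictor claiming its 50/50 forecasts were "all that could be known" is making a claim that better-informed rivals can empirically refute.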
But it’s not enough to let the predictions be probabilistic; we need to scale back our ambitions still further.  For even with a system $S$ as simple as a radioactive atom, there’s no hope of calculating the *exact* probability that (say) $S$ will decay within a certain time interval—if only because of the error bars in the physical constants that enter into that probability.  But intuitively, this lack of precision doesn’t make the atom any less mechanistic. Instead, it seems to me that we should call a probabilistic system $S$ mechanistic *if—and only if—the differences between our predicted probabilities for* $S$*’s behavior and the true probabilities can be made as small as desired by repeated experiments.* Yet we are still not done.  For what does the predictor $P$ *already know* about $S$, before $P$’s data-gathering process even starts?  If $P$ were initialized with a magical copy of $S$, then of course predicting $S$ would be trivial.[^56]  On the other hand, it also seems unreasonable not to tell $P$ *anything* about $S$: for example, if $P$ could accurately predict $S$, but only if given the hint that $S$ is a human being, that would still be rather impressive, and intuitively incompatible with $S$’s freedom.  So, as our final primitive concept, we assume a *reference class* $C$ of possible physical systems; $P$ is then told only that $S\in C$ and needs to succeed under that assumption.  For example, $C$ might be the class of all members of the species *Homo sapiens*, or even the class of all systems macroscopically identifiable as some *particular* Homo sapiens.[^57] This, finally, leads to my attempted definition of freedom.  Before offering it, let me stress that *nothing in the essay depends much on the details of the definition—and indeed, I’m more than willing to tinker with those details.*  So then what’s the point of *giving* a definition?  
One reason is to convince skeptics that the concept of Knightian freedom *can* be made precise, once one has a suitable framework with which to discuss these issues.  A second reason is to illustrate just how much little-examined complexity lurks in the commonsense notion of a physical system’s being predictable—and to show how non-obvious the questions of freedom and predictability actually become, once we start to unravel that complexity. > Let $S$ be any input-monitorable physical system drawn from the reference class $C$.  Suppose that, as the result of allowed interventions, $S$ is input-monitored by another physical system $P=P\left( C\right) $ (the predictor), starting at some time[^58] $0$.  Given times $0\leq t<u\leq\infty$, let $I_{t,u}$ encode all the information in the screenable inputs that $S$ receives between times $t$ and $u$, with $I_{t,\infty}$ denoting the information received from time $t$ onwards.  Likewise, let $B_{t,u}$ encode all the information in $S$’s observable behaviors between times $t$ and $u$.  (While this is not essential, we can assume that $I_{t,u}$ and $B_{t,u}$ both consist of finite sequences of bits whenever $u<\infty$.) > > Let $\mathcal{D}\left( B_{t,u}|I_{t,u}\right) $ be the true probability distribution[^59] over $B_{t,u}$ conditional on the inputs $I_{t,u}$, where “true” means the distribution that would be predicted by a godlike intelligence who knew the exact physical state of $S$ and its external environment at time $t$.  We assume that $\mathcal{D}\left( B_{t,u}|I_{t,u}\right) $ satisfies the causal property: that $\mathcal{D}\left( B_{t,v}|I_{t,u}\right) =\mathcal{D}\left( B_{t,v}|I_{t,v}\right) $ depends only on $I_{t,v}$ for all $v<u$. 
> > Suppose that, from time $0$ to $t$, the predictor $P$ has been monitoring the screenable inputs $I_{0,t}$ and observable behaviors $B_{0,t}$, and more generally, interacting with $S$ however it wants via allowed interventions (for example, submitting questions to $S$ by manipulating $I_{0,t}$, and observing the responses in $B_{0,t}$).  Then, at time $t$, we ask $P$ to output a description of a function $f$, which maps the future inputs $I_{t,\infty}$ to a distribution $\mathcal{E}\left( B_{t,\infty}|I_{t,\infty}\right) $ satisfying the causal property.  Here $\mathcal{E}\left( B_{t,\infty}|I_{t,\infty}\right) $ represents $P$’s best estimate for the distribution $\mathcal{D}\left( B_{t,\infty}|I_{t,\infty}\right) $.  Note that the description of $f$ might be difficult to unpack computationally—for example, it might consist of a complicated algorithm that outputs a description of $\mathcal{E}\left( B_{t,u}|I_{t,u}\right) $ given as input $u\in\left( t,\infty\right) $ and $I_{t,u}$.  All we require is that the description be *information-theoretically* complete, in the sense that one *could* extract $\mathcal{E}\left( B_{t,u}|I_{t,u}\right) $ from it given enough computation time. > > Given $\varepsilon,\delta>0$, we call $P$ a $\left( t,\varepsilon,\delta\right) $*-predictor* for the reference class $C$ if the following holds.  For all $S\in C$, with probability at least $1-\delta$ over any random inputs in $I_{t,u}$ (controlled neither by $S$ nor by $P$), we have$$\left\Vert \mathcal{E}\left( B_{t,\infty}|I_{t,\infty}\right) -\mathcal{D}\left( B_{t,\infty}|I_{t,\infty}\right) \right\Vert <\varepsilon$$ for the actual future inputs $I_{t,\infty}$ (not necessarily for every *possible* $I_{t,\infty}$).  
Here $\left\Vert \cdot\right\Vert $ denotes the variation distance.[^60] > > We call $C$ *mechanistic* if for all $\varepsilon,\delta>0$, there exists a $t=t_{\varepsilon,\delta}$ and a $P=P_{\varepsilon,\delta}$ such that $P$ is a $\left( t,\varepsilon,\delta\right) $-predictor for $C$.  We call $C$ *free* if $C$ is not mechanistic. Two important sanity checks are the following: - According to the above definition, classes $C$ of physical systems like thermostats, digital computers, and radioactive nuclei are indeed mechanistic (given reasonable sets of screenable inputs, allowed interventions, and observable behaviors).  For example, suppose that $C$ is the set of all possible configurations of a particular digital computer; the allowed interventions include reading the entire contents of the disk drives and memory and eavesdropping on all the input ports (all of which is known to be technologically doable without destroying the computer); and the observable behaviors include everything sent to the output ports.  In that case, even with no further interaction, the predictor can clearly emulate the computer arbitrarily far into the future.  Indeed, even if the computer $S\in C$ has an internal quantum-mechanical random number generator, the probability distribution $\mathcal{D}$ over *possible* future behaviors can still be approximated extremely well. - On the other hand, at least mathematically, one can construct classes of systems $C$ that are free.  Indeed, this is trivial to arrange, by simply restricting the screenable inputs so that $S$’s future behavior is determined by some input stream to which $P$ does not have access. Like many involved definitions in theoretical computer science, cryptography, economics, and other areas, my definition of freedom is merely an attempt to approximate an informal concept that one had prior to formalization.  And indeed, there are many changes one could contemplate to the definition.  
To give just a few examples, instead of requiring that $\mathcal{E}\left( B_{t,\infty}|I_{t,\infty}\right) $ approximate $\mathcal{D}\left( B_{t,\infty}|I_{t,\infty}\right) $ only for the actual future inputs $I_{t,\infty}$, one could demand that it do so for all *possible* $I_{t,\infty}$.  Or one could assume a distribution over the future $I_{t,\infty}$, and require success on *most* of them.  Or one could require success only for most $S\in C$, again assuming a distribution over $S$’s.  Or one could switch around the quantifiers—e.g., requiring a single predictor $P$ that achieves greater and greater prediction accuracy $\varepsilon>0$ the longer it continues.  Or one could drop the requirement that $P$ forecast all of $B_{t,\infty}$, requiring only that it forecast $B_{t,u}$ for some large but finite $u$.  It would be extremely interesting to develop the mathematical theory of these different sorts of prediction—something I reluctantly leave to future work.  Wouldn’t it be priceless if, after millennia of debate, the resolution of the question “are humans free?” turned out to be “yes” if you define ‘free’ with the $\varepsilon,\delta$ quantifiers inside, but “no” if you put the quantifiers outside? A central limitation of the definition, as it stands, is that it’s qualitative rather than quantitative, closer in spirit to computability theory than to complexity theory.  More concretely, the definition of *mechanistic* only requires that there *exist* a finite time $t$ after which the predictor succeeds; it puts no limit on the amount of time.  But this raises a problem: what if the predictor could succeed in learning to emulate a human subject, but only after observing the subject’s behavior for (say) $10^{100}$ years?  Does making the subject immortal, in order to give the predictor enough time, belong to the set of allowed interventions?  
Likewise, suppose that, after observing the subject’s behavior for $20$ years, the predictor becomes able to predict the subject’s future behavior probabilistically, but *only for the next $20$ years*, not indefinitely?  The definition doesn’t consider this sort of time-limited prediction, even though intuitively, it seems almost as hard to reconcile with free will as the unlimited kind.  On the other hand, the actual numbers matter: a predictor that needed $20$ years of data-gathering, in order to learn enough to predict the subject’s behavior for the $5$ seconds immediately afterward, would seem intuitively compatible with freedom.  In any case, in this essay I mostly ignore quantitative timing issues (except for brief discussions in Sections \[LIBET\] and \[FALSIFY\]), and imagine for simplicity that we have a predictor that after some finite time learns to predict the subject’s responses arbitrarily far into the future. Appendix: Prediction and Kolmogorov Complexity\[KOLMOG\] ======================================================== As mentioned in Section \[KNIGHTIAN\], some readers will take issue with the entire concept of Knightian uncertainty—that is, with uncertainty that can’t even be properly quantified using probabilities.  Among those readers, some might be content to assert that there exists a true, objective prior probability distribution $\mathcal{D}$ over all events in the physical world—and while we might not know any prescription to *calculate* $\mathcal{D}$ that different agents can agree on, we can be sure that agents are irrational to whatever extent their own priors deviate from $\mathcal{D}$.  However, more sophisticated readers might try to *derive* the existence of a roughly-universal prior, using ideas from *algorithmic information theory* (see Li and Vitányi [@livitanyi] for an excellent introduction).  In this appendix, I’d like to sketch how the latter argument would go and offer a response to it. 
Consider an infinite sequence of bits $b_{1},b_{2},b_{3},\ldots$, which might be generated randomly, or by some hidden computable pattern, or by some process with elements of both.  (For example, maybe the bits are uniformly random, except that every hundredth bit is the majority vote of the previous $99$ bits.)  We can imagine, if we like, that these bits represent a sequence of yes-or-no decisions made by a human being.  For each $n\geq1$, a superintelligent predictor is given $b_{1},\ldots,b_{n-1}$, and asked to predict $b_{n}$.  Then the idea of algorithmic statistics is to give a *single* rule, which can be proved to predict $b_{n}$ almost as well as any other computable rule, in the limit $n\rightarrow\infty$. Here’s how it works.  Choose any Turing-universal programming language $L$, which satisfies the technical condition of being *prefix-free*: that is, adding characters to the end of a valid program never yields another valid program.  Let $P$ be a program written in $L$, which runs for an infinite time and has access to an unlimited supply of random bits, and which generates an infinite sequence $B=\left( b_{1},b_{2},\ldots\right) $ according to some probability distribution $\mathcal{D}_{P}$.  Let $\left\vert P\right\vert $ be the number of bits in $P$.  Then for its initial guess as to the behavior of $B$, our superintelligent predictor will use the so-called *universal prior* $\mathcal{U}$, in which each distribution $\mathcal{D}_{P}$ appears with probability $2^{-\left\vert P\right\vert }/C$, for some normalizing constant $C=\sum_{P}2^{-\left\vert P\right\vert }\leq1$.  (The reason for the prefix-free condition was to ensure that the sum $\sum_{P}2^{-\left\vert P\right\vert }$ converges.)  
Then, as the bits $b_{1},b_{2},\ldots$ start appearing, the predictor repeatedly updates $\mathcal{U}$ using Bayes’ rule, so that its estimate for $\Pr\left[ b_{n}=1\right] $ is always$$\frac{\Pr_{\mathcal{U}}\left[ b_{1}\ldots b_{n-1}1\right] }{\Pr _{\mathcal{U}}\left[ b_{1}\ldots b_{n-1}\right] }.$$ Now suppose that the true distribution over $B$ is $\mathcal{D}_{Q}$, for some particular program $Q$.  Then I claim that, in the limit $n\rightarrow\infty$, a predictor that starts with $\mathcal{U}$ as its prior will do just as well as if it had started with $\mathcal{D}_{Q}$.  The proof is simple: by definition, $\mathcal{U}$ places a constant fraction of its probability mass on $\mathcal{D}_{Q}$ from the beginning (where the constant, $2^{-\left\vert Q\right\vert }/C$, admittedly depends on $\left\vert Q\right\vert $).  So for all $n$ and $b_{1}\ldots b_{n}$,$$\frac{\Pr_{\mathcal{U}}\left[ b_{1}\ldots b_{n}\right] }{\Pr_{\mathcal{D}_{Q}}\left[ b_{1}\ldots b_{n}\right] }=\frac{\Pr_{\mathcal{U}}\left[ b_{1}\right] \Pr_{\mathcal{U}}\left[ b_{2}|b_{1}\right] \cdots \Pr_{\mathcal{U}}\left[ b_{n}|b_{1}\ldots b_{n-1}\right] }{\Pr _{\mathcal{D}_{Q}}\left[ b_{1}\right] \Pr_{\mathcal{D}_{Q}}\left[ b_{2}|b_{1}\right] \cdots\Pr_{\mathcal{D}_{Q}}\left[ b_{n}|b_{1}\ldots b_{n-1}\right] }\geq2^{-\left\vert Q\right\vert }.$$ Hence$${\displaystyle\prod\limits_{n=1}^{\infty}} \frac{\Pr_{\mathcal{U}}\left[ b_{n}|b_{1}\ldots b_{n-1}\right] }{\Pr_{\mathcal{D}_{Q}}\left[ b_{n}|b_{1}\ldots b_{n-1}\right] }\geq2^{-\left\vert Q\right\vert }$$ as well.  But for all $\varepsilon>0$, this means that $\mathcal{U}$ can assign a probability to the correct value of $b_{n}$ less than $1-\varepsilon $ times the probability assigned by $\mathcal{D}_{Q}$, only for $O\left( \left\vert Q\right\vert /\varepsilon\right) $ values of $n$ or fewer. 
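The dominance argument above can be watched in miniature.  The sketch below replaces the uncomputable universal prior with a tiny hand-picked hypothesis class—three i.i.d. coins, with made-up “program lengths” $\left\vert Q\right\vert$ supplying the $2^{-\left\vert Q\right\vert }$ weights—so it is only an illustration of the Bayes-mixture mechanics, not of $\mathcal{U}$ itself.

```python
# Toy stand-in for the universal prior: a Bayes mixture over three
# "programs", each an i.i.d. coin with the given bias.  The assigned
# |Q| values (in bits) are invented for illustration.
hypotheses = {0.5: 1, 0.9: 4, 0.1: 4}          # bias -> assumed |Q|
weights = {q: 2.0 ** -l for q, l in hypotheses.items()}

def predict(bits):
    """Mixture's estimate of Pr[next bit = 1] after seeing `bits`,
    obtained by Bayes' rule over the weighted hypotheses."""
    post = dict(weights)
    for b in bits:
        for q in post:
            post[q] *= q if b == 1 else 1 - q
    z = sum(post.values())
    return sum(w * q for q, w in post.items()) / z

# Before any data, the mixture leans on the shortest "program" (bias 0.5).
p0 = predict([])

# Data actually drawn from the bias-0.9 hypothesis Q:
data = [1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1]
p1 = predict(data)
# After 16 bits the mixture's prediction is already close to Q's own 0.9:
# the 2^{-|Q|} prior handicap is paid only once, as in the proof above.
```

The same update, run on longer samples, makes the ratio bound in the text visible: the mixture's cumulative probability for the data never falls below $2^{-\left\vert Q\right\vert }$ times what $\mathcal{D}_{Q}$ itself assigns.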
Thus, an algorithmic Bayesian might argue that there are only two possibilities: either a physical system is predictable by the universal prior $\mathcal{U}$, or else—to whatever extent it isn’t—in any meaningful sense the system behaves randomly.  There’s no third possibility that we could identify with Knightian uncertainty or freebits. One response to this argument—perhaps the response Penrose would prefer—would be that we can easily defeat the so-called universal predictor $\mathcal{U}$, using a sequence of bits $b_{1},b_{2},\ldots$ that’s deterministic but noncomputable.  One way to construct such a sequence is to diagonalize against $\mathcal{U}$, defining$$b_{n}:=\left\{ \begin{array} [c]{cc}0 & \text{if }\Pr_{\mathcal{U}}\left[ b_{1}\ldots b_{n-1}1\right] >\Pr_{\mathcal{U}}\left[ b_{1}\ldots b_{n-1}0\right] ,\\ 1 & \text{otherwise}\end{array} \right.$$ for all $n\geq1$.  Alternatively, we could let $b_{n}$ be the $n^{th}$ binary digit of Chaitin’s constant $\Omega$ [@chaitin] (basically, the probability that a randomly generated computer program halts).[^61]  In either case, $\mathcal{U}$ will falsely judge the $b_{n}$’s to be random even in the limit $n\rightarrow\infty$.  Note that an even more powerful predictor $\mathcal{U}^{\prime}$, equipped with a suitable oracle, could predict either of these sequences perfectly.  But then we could construct new sequences $b_{1}^{\prime},b_{2}^{\prime},\ldots$ that were unpredictable even by $\mathcal{U}^{\prime}$, and so on. The response above is closely related to a notion called *sophistication* from algorithmic information theory (see [@antunesfortnow; @gtv; @livitanyi]).  Given a binary string $x$, recall that the *Kolmogorov complexity* $K\left( x\right) $ is the length of the shortest program, in some Turing-universal programming language, whose output (given a blank input) is $x$.  
To illustrate, the Kolmogorov complexity of the first $n$ bits of $\pi\approx11.00100100\ldots_{2}$ is small ($\log _{2}n+O\left( \log\log n\right) $), since one only has to provide $n$ (which takes $\log_{2}n$ bits), together with a program for computing $\pi$ to a given accuracy (which takes some small, fixed number of bits, independent of $n$).  By contrast, if $x$ is an $n$-bit string chosen uniformly at random, then $K\left( x\right) \approx n$ with overwhelming probability, simply by a counting argument.  Now, based on those two examples, it’s tempting to conjecture that every string is either highly-patterned or random: that is, either 1. $K\left( x\right) $ is small, or else 2. $K\left( x\right) $ is large, but only because of boring, random, patternless entropy in $x$. Yet the above conjecture, when suitably formalized, turns out to be false.  Given a set of strings $S\subseteq\left\{ 0,1\right\} ^{n}$, let $K\left( S\right) $ be the length of the shortest program that lists the elements of $S$.  Then given an $n$-bit string $x$ and a small parameter $c$, one can define the $c$*-sophistication* of $x$, or $\operatorname*{Soph}_{c}\left( x\right) $, to be the minimum of $K\left( S\right) $, over all sets $S\subseteq\left\{ 0,1\right\} ^{n}$ such that $x\in S$ and$$K\left( S\right) +\log_{2}\left\vert S\right\vert \leq K\left( x\right) +c.$$ Intuitively, the sophistication of $x$ is telling us, in a near-minimal program for $x$, how many bits of the program need to be interesting code rather than algorithmically random data.  Certainly $\operatorname*{Soph}_{c}\left( x\right) $ is well-defined and at most $K\left( x\right) $, since we can always just take $S$ to be the singleton set $\left\{ x\right\} $.  Because of this, highly-patterned strings are unsophisticated.  On the other hand, random strings are *also* unsophisticated, since for them, we can take $S$ to be the entire set $\left\{ 0,1\right\} ^{n}$.  
Nevertheless, it’s possible to prove [@antunesfortnow; @gtv] that there exist highly sophisticated strings: indeed, strings $x$ such that $\operatorname*{Soph}_{c}\left( x\right) \geq n-O\left( \log n\right) $.  These strings could thus be said to inhabit a third category between patterned and random.  Not surprisingly, the construction of sophisticated strings makes essential use of uncomputable processes. However, for reasons explained in Section \[PENROSE\], I’m exceedingly reluctant to postulate uncomputable powers in the laws of physics (such as an ability to generate the digits of $\Omega$).  Instead, I would say that, if there’s scope for freedom, then it lies in the fact that even when a sequence of bits $b_{1},b_{2},\ldots$ is computable, the universal predictor is only guaranteed to work in the limit $n\rightarrow\infty$.  Intuitively, once the predictor has figured out the program $Q$ generating the $b_{n}$’s, it can *then* predict future $b_{n}$’s as well as such prediction is possible.  However, the number of serious mistakes that the predictor makes before converging on the correct $Q$ could in general be as large as $Q$’s bit-length.  Worse yet, there’s no finite time after which the predictor can *know* that it’s converged on the correct $Q$.  Rather, in principle the predictor can always be surprised by a bit $b_{n}$ that diverges from the predictions of whatever hypothesis $Q$ it favored in the past, whereupon it needs to find a new hypothesis $Q^{\prime}$, and so on. Some readers might object that, in the real world, it’s reasonable to assume an upper bound on the number of bits needed to describe a given physical process (for example, a human brain).  In that case, the predictor would indeed have an absolute upper bound on $\left\vert Q\right\vert $, and hence on the number of times it would need to revise its hypothesis substantially. 
I agree that such bounds on $\left\vert Q\right\vert $ almost certainly exist—indeed, they must exist, if we accept the holographic principle from quantum gravity (see Section \[INITIAL\]).  For me, the issue is simply that the relevant bounds seem too large to be of any practical interest.  Suppose, for example, that we believed $10^{14}$ bits—or roughly one bit per synapse—sufficed to encode everything of interest about a particular human brain.  While that strikes me as an underestimate, it still works out to roughly $40,000$ bits per second, assuming an $80$-year lifespan.  In other words, it seems that a person of normal longevity would have more than enough bits to keep the universal predictor $\mathcal{U}$ on its toes! The above estimate leads to an amusing thought: *if* one lived forever, then perhaps one’s store of freedom would eventually get depleted, much like an $n$-bit computer program can surprise $\mathcal{U}$ at most $O\left( n\right) $ times.  (Arguably, this depletion happens to some extent over our actual lifetimes, as we age and become increasingly predictable and set in our ways.)  From this perspective, freedom could be considered merely a finite-$n$ effect—but this would be one case where the value of $n$ matters! Appendix: Knightian Quantum States\[FREESTATES\] ================================================ In Section \[NOCLONE\], I introduced the somewhat whimsically-named *freestate*: a representation of knowledge that combines probabilistic, quantum-mechanical, and Knightian uncertainty, thereby generalizing density matrices, which combine probabilistic and quantum uncertainty.  (The freebits referred to throughout the essay are then just $2$-level freestates.)  While there might be other ways to formalize the concept of freestates, in this appendix I’ll give a particular formalization that I prefer. A good starting point is to combine probabilistic and Knightian uncertainty, leaving aside quantum mechanics.  
For simplicity, consider a bit $b\in\left\{ 0,1\right\} $.  In the probabilistic case, we can specify our knowledge of $b$ with a single real number, $p=\Pr\left[ b=1\right] \in\left[ 0,1\right] $.  In the Knightian case, however, we might have a set of *possible* probabilities: for example,$$p\in\left\{ 0.1\right\} \cup\left[ 0.2,0.3\right] \cup\left( 0.4,0.5\right) . \tag{*}\label{star}$$ This seems rather complicated!  Fortunately, we can make several simplifications.  Firstly, since we don’t care about infinite precision, we might as well take all the probability intervals to be closed.  More importantly, I believe we should assume *convexity*: that is, if $p<q$ are both possible probabilities for some event $E$, then so is every intermediate probability $r\in\left[ p,q\right] $.  My argument is simply that Knightian uncertainty includes probabilistic uncertainty as a special case: if, for example, we have no idea whether the bit $b$ was generated by process $P$ or process $Q$, then for all we know, $b$ might *also* have been generated by choosing between $P$ and $Q$ with some arbitrary probabilities. Under the two rules above, the disjunction (\*) can be replaced by $p\in\left[ 0.1,0.5\right] $.  More generally, it’s easy to see that our states will always be nonempty, convex regions of the probability simplex: that is, nonempty sets $S$ of probability distributions that satisfy $\alpha \mathcal{D}_{1}+\left( 1-\alpha\right) \mathcal{D}_{2}\in S$ for all $\mathcal{D}_{1},\mathcal{D}_{2}\in S$ and all $\alpha\in\left[ 0,1\right] $.  Such a set $S$ can be used to calculate upper and lower bounds on the probability $\Pr\left[ E\right] $ for any event $E$.  Furthermore, there’s no redundancy in this description: if $S_{1}\neq S_{2}$, then it’s easy to see that there exists an event $E$ for which $S_{1}$ allows a value of $\Pr\left[ E\right] $ not allowed by $S_{2}$ or vice versa.
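The two simplification rules above (close the intervals, then take the convex hull) can be sketched in a few lines of Python.  This is only an illustrative sketch; the function name `collapse` is mine, not from the text:

```python
# Sketch: a set of possible probabilities for Pr[b = 1], written as a
# union of intervals, collapses under closure + convexity to the single
# closed interval [min, max].
def collapse(intervals):
    """Close each interval and take the convex hull of the union."""
    lo = min(a for a, _ in intervals)
    hi = max(b for _, b in intervals)
    return (lo, hi)

# The example (*) from the text: {0.1} ∪ [0.2, 0.3] ∪ (0.4, 0.5)
possible_p = [(0.1, 0.1), (0.2, 0.3), (0.4, 0.5)]
print(collapse(possible_p))  # (0.1, 0.5), i.e. p ∈ [0.1, 0.5]
```

The result $(0.1, 0.5)$ is exactly the interval $\left[ 0.1,0.5\right]$ obtained in the text.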
One might worry about the converse case: probabilistic uncertainty over different states of Knightian uncertainty.  However, I believe this case can be expanded out into Knightian uncertainty about probabilistic uncertainty, like so:$$\frac{\left( A\text{ OR }B\right) +\left( C\text{ OR }D\right) }{2}=\left( \frac{A+C}{2}\right) \text{ OR }\left( \frac{A+D}{2}\right) \text{ OR }\left( \frac{B+C}{2}\right) \text{ OR }\left( \frac{B+D}{2}\right) .$$ By induction, any hierarchy of probabilistic uncertainty about Knightian uncertainty about probabilistic uncertainty about... etc. can likewise be collapsed, by such a procedure, into simply a convex set of probability distributions. The quantum case, I think, follows exactly the same lines, except that now, instead of a convex set of probability distributions, we need to talk about a convex set of density matrices.  Formally, an $n$*-dimensional freestate* is a nonempty set $S$ of $n\times n$ density matrices such that $\alpha\rho+\left( 1-\alpha\right) \sigma\in S$ for all $\rho,\sigma\in S$ and all $\alpha\in\left[ 0,1\right] $.  Once again, there is no redundancy involved in specifying our knowledge about a quantum system in this way.  The argument is simply the following: for all nonempty convex sets $S_{1}\neq S_{2}$, there either exists a state $\rho\in S_{1}\setminus S_{2}$ or a state $\rho\in S_{2}\setminus S_{1}$.  Suppose the former without loss of generality.  Then by the convexity of $S_{2}$, it is easy to find a pure state $\left\vert \psi\right\rangle $ such that $\left\langle \psi |\rho|\psi\right\rangle \notin\left\{ \left\langle \psi|\sigma|\psi \right\rangle :\sigma\in S_{2}\right\} $. [^1]: MIT.  Email: aaronson@csail.mit.edu.   This material is based upon work supported by the National Science Foundation under Grant No. 0844626, as well as by an NSF STC grant, a TIBCO Chair, a Sloan Fellowship, and an Alan T. Waterman Award. 
[^2]: Invented by the mathematician John Conway in 1970, the Game of Life involves a large two-dimensional array of pixels, with each pixel either live or dead.  At each (discrete) time step, the pixels get updated via a deterministic rule: each live pixel dies if less than $2$ or more than $3$ of its $8$ neighbors were alive, and each dead pixel comes alive if exactly $3$ of its $8$ neighbors were alive.  ‘Life’ is famous for the complicated, unpredictable patterns that typically arise from a simple starting configuration and repeated application of the rules.  Conway (see [@levy]) has expressed certainty that, on a large enough Life board, living beings would arise, who would then start squabbling over territory and writing learned PhD theses!  Note that, with an *exponentially*-large Life board (and, say, a uniformly-random initial configuration), Conway’s claim is vacuously true, in the sense that one could find essentially any regularity one wanted just by chance.  But one assumes that Conway meant something stronger. [^3]: As Hodges (personal communication) points out, it’s interesting to contrast these remarks with a view Turing had expressed just a year earlier, in Computing Machinery and Intelligence [@turing:ai]: It is true that a discrete-state machine must be different from a continuous machine.  But if we adhere to the conditions of the imitation game, the interrogator will not be able to take any advantage of this difference.  Note that there’s no actual contradiction between this statement and the one about the uncertainty principle, especially if we distinguish (as I will) between simulating a *particular* brain and simulating *some* brain-like entity able to pass the Turing test.  However, I’m not aware of any place where Turing explicitly makes that distinction. [^4]: My own attempt to do so is in Appendix \[MEAN\]. 
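As an aside, the Game of Life update rule described above is simple enough to sketch directly.  This is a minimal implementation on a sparse set of live cells; the "blinker" example is mine, not from the text:

```python
from collections import Counter

# One synchronous step of the Life rule: a live cell survives with 2 or 3
# live neighbors, and a dead cell comes alive with exactly 3.
def life_step(live):
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {cell for cell, n in neighbor_counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "blinker": three cells in a row oscillate with period 2.
blinker = {(0, 1), (1, 1), (2, 1)}
print(life_step(blinker))                        # {(1, 0), (1, 1), (1, 2)}
print(life_step(life_step(blinker)) == blinker)  # True
```

Representing the board as a set of live cells, rather than a fixed array, lets the same code handle arbitrarily large (even unbounded) boards.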
[^5]: A perfect example of this phenomenon is provided by the countless people who claim that even if a computer program passed the Turing Test, it still wouldn’t be conscious—and then, without batting an eye, defend that claim using arguments that presuppose that the program *couldn’t* pass the Turing Test after all!  (Sure, the program might solve math problems, but it could never write love poetry, etc. etc.)  The temptation to hitch metaphysical claims to empirical ones, without even realizing the chasm one is crossing, seems incredibly strong. [^6]: For the general reader, other good background reading for this essay might include *From Eternity to Here* by Sean Carroll [@carroll:eternity], *The Beginning of Infinity* by David Deutsch [@deutsch:infinity], *The Emperor’s New Mind* by Roger Penrose [@penrose], or *Free Will as an Open Scientific Problem* by Mark Balaguer [@balaguer].  Obviously, none of these authors necessarily endorse everything I say (or vice versa)!  What the books have in common is simply that they explain one or more concepts invoked in this essay in much more detail than I do. [^7]: See  blogs.discovermagazine.com/cosmicvariance/2011/07/13/free-will-is-as-real-as-baseball/ [^8]: Assuming, of course, that we don’t *condition* on Alice’s knowledge—something that could change Bob’s probabilities even in the case of mere classical correlation between the particles. [^9]: This is not to say, of course, that the brain’s activity might not *amplify* such effects to the macroscopic, classical scale—a possibility that will certainly concern us later on. [^10]: I came up with this justification around 2002, and set it out in a blog post in 2006: see www.scottaaronson.com/blog/?p=30.  Later, I learned that Radford Neal [@neal] had independently proposed similar ideas. [^11]: In theoretical computer science, a problem belonging to a class $C$ is called $C$-*complete*, if solving the problem would suffice to solve any other problem in $C$. 
[^12]: In Democritus’s famous dialogue between the intellect and the senses, the intellect declares: By convention there is sweetness, by convention bitterness, by convention color, in reality only atoms and the void.  To which the senses reply: Foolish intellect! Do you seek to overthrow us, while it is from us that you take your evidence?  Your victory is your defeat. [^13]: By the laws of physics, I mean the currently-accepted laws—*unless* we’re explicitly tinkering with those laws to see what happens.  Of course, whenever known physics imposes an inconvenient limit (for example, no faster-than-light communication), the convention in most science-fiction writing is simply to stipulate that physicists of the future will discover some way around the limit (such as wormholes, tachyons, hyperdrive,  etc).  In this essay I take a different approach, trying to be as conservative as I can about fundamental physics. [^14]: The standard example is the *CHSH game* [@chsh].  Here Alice and Bob are given bits $x$ and $y$ respectively, which are independent and uniformly random.  Their goal is for Alice to output a bit $a$, and Bob a bit $b$, such that $a+b\left( \operatorname{mod}2\right) =xy$.  Alice and Bob can agree on a strategy in advance, but can’t communicate after receiving $x$ and $y$.  Classically, it’s easy to see that the best they can do is always to output $a=b=0$, in which case they win the game with probability $3/4$.  By contrast, if Alice and Bob own one qubit each of the entangled state $\frac{1}{\sqrt{2}}\left( \left\vert 00\right\rangle +\left\vert 11\right\rangle \right) $, then there exists a strategy by which they can win with probability $\cos^{2}\left( \pi/8\right) \approx0.85$.  That strategy has the following form: Alice measures her qubit in a way that depends on $x$ and outputs the result as $a$, while Bob measures his qubit in a way that depends on $y$ and outputs the result as $b$. 
[^15]: The reason why many people (including me) cringe at that sort of talk is the *no-communication theorem*, which explains why, despite Bell’s theorem, entanglement *can’t* be used to send actual messages faster than light.  (Indeed, if it could, then quantum mechanics would flat-out contradict special relativity.) The situation is this: *if* one wanted to violate the Bell inequality using classical physics, *then* one would need faster-than-light communication.  But that doesn’t imply that quantum mechanics’ violation of the same inequality should *also* be understood in terms of faster-than-light communication!  We’re really dealing with an intermediate case here—more than classical locality, but less than classical nonlocality—which I don’t think anyone even recognized as a logical possibility until quantum mechanics forced it on them. [^16]: Or the Free Whim Theorem, as Conway likes to suggest when people point out the irrelevance of human free will to the theorem. [^17]: This point was recently brought out by Fritz, in his paper Bell’s theorem without free will [@fritz].  Fritz replaces the so-called free will assumption of the Bell inequality—that is, the assumption that Alice and Bob get to choose which measurements to perform—by an assumption about the *independence* of separated physical devices. [^18]: The protocol of Vazirani and Vidick [@vaziranividick] needs only $O\left( \log n\right) $ seed bits to generate $n$ Einstein-certified output bits. [^19]: Some people might argue that Bohmian mechanics [@bohm], the interpretation of quantum mechanics that originally inspired Bell’s theorem, is also superdeterministic.  But Bohmian mechanics is empirically equivalent to standard quantum mechanics—from which fact it follows immediately that the determinism of Bohm’s theory is a formal construct that, whatever else one thinks about it, has no actual consequences for prediction.  
To put it differently: at least in its standard version, Bohmian mechanics buys its determinism via the mathematical device of pushing all the randomness back to the beginning of time.  It then accepts the nonlocality that such a tactic inevitably entails because of Bell’s theorem. [^20]: This is analogous to how, in computational complexity theory, there exists a program that uses (say) $n^{2.0001}$ time steps, and that simulates any $n^{2}$-step program provided to it as input.  The time hierarchy theorem, which is close to tight, only rules out the simulation of $n^{2}$-step programs using significantly *less* than $n^{2}$ time steps. [^21]: The largest cryonics organization is Alcor, www.alcor.org. [^22]: See, for example, lesswrong.com/lw/mb/lonely\_dissent/ [^23]: Soon et al. [@soonetal] (see the Supplementary Material, p. 14-15) argue that, by introducing a long delay between trials and other means, they were able to rule out the possibility that their prediction accuracy was due purely to carryover between successive trials.  They also found that the probability of a run of $N$ successive presses of the same button decreased exponentially with $N$, as would be expected if the choices were independently random.  However, it would be interesting for future research to compare fMRI-based prediction head-to-head against prediction using carefully designed machine learning algorithms that see only the sequence of previous button presses. [^24]: The probability of failure can be made arbitrarily small by the simple expedient of running the algorithm over and over and taking a majority vote. [^25]: For example, it shows up in the clear distinctions made between random and adversarial noise, between probabilistic and nondeterministic Turing machines, and between average-case and worst-case analysis of algorithms.
[^26]: In Appendix \[FREESTATES\], I briefly explain why I think convex sets provide the right representation of Knightian uncertainty, though this point doesn’t much matter for the rest of the essay. [^27]: E.g., that even if the CPA is wrong, we should assume it because economic theorizing would be too unconstrained without it, or because many interesting theorems need it as a hypothesis. [^28]: Here we assume for simplicity that we’re talking about pure states, not mixed states. [^29]: The word arbitrary is needed because, if we knew how $\left\vert \psi\right\rangle $ was prepared, then of course we could simply run the preparation procedure a second time.  The No-Cloning Theorem implies that, if we *don’t* already know a preparation procedure for $\left\vert \psi\right\rangle $, then we can’t learn one just by measuring $\left\vert \psi\right\rangle $.  (And conversely, the inability to learn a preparation procedure implies the No-Cloning Theorem.  If we could copy $\left\vert \psi\right\rangle $ perfectly, then we could keep repeating to make as many copies as we wanted, then use quantum state tomography on the copies to learn the amplitudes to arbitrary accuracy.) [^30]: Unfortunately, unlike quantum key distribution, quantum money is not yet practical, because of the difficulty of protecting quantum states against decoherence for long periods of time. [^31]: See, for example, www.daylightatheism.org/2006/04/on-free-will-iii.html [^32]: Sadly, I no longer have the reference. [^33]: For example, No-Cloning holds for classical probability distributions: there’s no procedure that takes an input bit $b$ that equals $1$ with unknown probability $p$, and produces two output bits that are both $1$ with probability $p$ independently.  
But this observation lacks the import of the quantum No-Cloning Theorem, because regardless of what one wanted to do with the bit $b$, one might as well have *measured* it immediately, thereby collapsing it to a deterministic bit—which *can* of course be copied. Also, seeking to clarify foundational issues in quantum mechanics, Spekkens [@spekkens] constructed an epistemic toy theory that’s purely classical, but where an analogue of the No-Cloning Theorem holds.  However, the toy theory involves magic boxes that we have no reason to think can be physically realized. [^34]: However, Sompolinsky then goes on to reject metaphysical free will as incompatible with a scientific worldview: if, he says, there were laws relevant to brain function beyond the known laws of physics and chemistry, then those laws would themselves be incorporated into science, leaving us back where we started.  I would agree if it weren’t for the logical possibility of Knightian laws, which explained to us why we couldn’t even predict the probability distributions for certain events. [^35]: Or one could ask: if we model the brain as a chaotic dynamical system, what’s the Lyapunov exponent? [^36]: Another obvious question is whether brains *differ* in any interesting way from other complicated dynamical systems like lava lamps or the Earth’s atmosphere, in terms of their response to microscopic fluctuations.  This question will be taken up in Sections \[WEATHER\] and \[GERBIL\]. [^37]: One also needs to interchange left with right and particles with antiparticles, but that doesn’t affect the substance of the argument. [^38]: Of course, if the physical laws were probabilistic, then we’d have a probability distribution over possible blocks.  This doesn’t change anything in the ensuing discussion. [^39]: As Einstein himself famously wrote to Michele Besso’s family in 1955: Now Besso has departed from this strange world a little ahead of me.  That means nothing.  
People like us, who believe in physics, know that the distinction between past, present and future is only a stubbornly persistent illusion. [^40]: Given a spacetime point $x$, the past lightcone of $x$ is the set of points from which $x$ can receive a signal, and the future lightcone of $x$ is the set of points to which it can send one. [^41]: For the same reason, we also have the rule that a microfact cannot cause two macrofacts to its future via disjoint causal pathways.  The only reason this rule wasn’t mentioned earlier is that it plays no role in eliminating cycles. [^42]: To be more precise here, one would presumably need to know the detailed *mapping* between the qubits of Hawking radiation and the degrees of freedom inside the hole, which in turn would require a quantum theory of gravity. [^43]: Here Stenger’s concern was not free will or human predictability, but rather ruling out the possibility (discussed by some theologians) that God could have arranged the Big Bang with foreknowledge about life on Earth. [^44]: As the simplest example, the boundary formulation makes it obvious that the total entropy in a region should be upper-bounded by its *surface area*, rather than its volume.  In the bulk formulation, that property is strange and unexpected. [^45]: Admittedly, the known examples involve isomorphisms between two theories with different numbers of spatial dimensions but both with a time dimension.  There don’t seem to be any nontrivial examples where the boundary theory lives on an initial spacelike or null hypersurface of the bulk theory.  (One could, of course, produce a trivial example, by simply defining the boundary theory to consist of the initial conditions of the bulk theory, with no time evolution!  By nontrivial, I mean something more interesting than that.) [^46]: A crucial caveat is that, after the interference experiment was over, one would retain no reliable memories or publishable records about what it was like!  
For the very fact of such an experiment implies that one’s memories are being created and destroyed at will.  Without the destruction of memories, we can’t get interference.  After the experiment is finished, one might have *something* in one’s memory, but what it is could have been probabilistically predicted even before the experiment began, and can in no way depend on what it was like in the middle.  Still, *at least while the experiment was underway*, maybe one would know which interpretation of quantum mechanics was correct! [^47]: A later book, *The Road to Reality* [@penrose:road], says little directly about mind, but is my favorite.  I think it makes Penrose’s strongest case for a gap in our understanding of quantum mechanics, thermodynamics, and cosmology that radical new ideas will be needed to fill. [^48]: Along another axis, though, some people might see the freebit picture as *more* radical, in that it suggests the impossibility of *any* non-tautological explanation for certain events and decisions, even an explanation invoking oracles for Turing-uncomputable problems. [^49]: Just like free will, the word consciousness has been the victim of ferocious verbal overloading, having been claimed for everything from that which disappears under anesthesia, to that which a subject can give verbal reports about, to the brain’s executive control system!  Worse, consciousness has the property that, even if one specifies *exactly* what one means by it, readers are nevertheless certain to judge anything one says against their own preferred meanings.  For this reason, just as I ultimately decided to talk about freedom (or Knightian freedom) rather than free will in this essay, so I’d much rather use less fraught terms for executive control, verbal reportability, and so on, and restrict the word consciousness to mean the otherwise-undefinable thing that people have tried to get at for centuries with the word ‘consciousness,’ supposing that thing exists. 
[^50]: Why is it permissible to assume that $M$ never errs, if no *human* mathematician (or even, arguably, the entire mathematical community) has ever achieved that standard of infallibility? Even if $M$ never *did* affirm $S_{M}$, or never erred more generally, how could we ever *know* that?  Indeed, much like with consciousness itself, even if one person had the mysterious Platonic ability to see $M$’s soundness, how could that person ever convince a skeptical third party? Finally, if we believed that the human brain was itself finitely describable, then why couldn’t we construct a similar mathematical statement (e.g., Penrose will never affirm this statement), which *Penrose* couldn’t affirm without contradicting himself, even though a different human, or indeed an AI program, could easily affirm it? [^51]: More precisely, the initial state, when encoded in some natural way as a binary string, must have non-negligibly large sophistication: see Appendix \[KOLMOG\]. [^52]: By contrast, when it comes to dynamical behavior, we have centuries of experience discovering laws that can indeed be simulated on a computer, given the initial conditions as input, and no experience discovering laws that can’t be so simulated. [^53]: This would follow from the conjecture, as yet unproved, that $\pi$ is a base-$10$ normal number: that is, that just like for a random sequence, every possible sequence of $k$ consecutive digits appears in $\pi$’s decimal expansion with asymptotic frequency $1/10^{k}$. [^54]: Many people point out that cavemen could have made exactly the same argument, and would have been wrong.  This is true but irrelevant: the whole point of the Doomsday argument is that *most* people who make it will be right! Another common way to escape the argument’s conclusion is to postulate the existence of large numbers of extraterrestrial civilizations, which are there regardless of what humans do or don’t do.  
If the extraterrestrials are included in our reference class, they can then swamp the effect of the number of future humans in the Bayesian calculation. [^55]: Astronomers can only see as far as light has reached since the Big Bang.  If a positive spatial curvature was ever detected on cosmological scales, it would strongly suggest that the universe wraps around—much like hypothetical ancients might have deduced that the earth was round by measuring the curvature of a small patch.  So far, though, except for local perturbations, the universe appears perfectly flat to within the limits of measurement, suggesting that it is either infinite or else extends far beyond our cosmological horizon.  On the other hand, it is logically possible that the universe could be *topologically* closed (and hence finite), despite having zero spatial curvature.  Also, assuming a positive cosmological constant, sufficiently far parts of the universe would be forever out of causal contact with us—leading to philosophical debate about whether those parts should figure into scientific explanations, or even be considered to exist. [^56]: I thank Ronald de Wolf for this observation. [^57]: We could also formulate a stronger notion of a universal predictor, which has to work for *any* physical system $S$ (or equivalently, whose reference class $C$ is the set of all physical systems).  My own guess is that, *if* there exists a predictor for sufficiently complex systems like human brains, then there also exists a universal predictor.  But I won’t attempt to argue for that here. [^58]: Here we don’t presuppose that time is absolute or continuous.  Indeed, all we need is that $S$ passes through a discrete series of instants, which can be ordered by increasing values of $t$. [^59]: Or probability measure over infinite sequences, in the case $u=\infty$. 
[^60]: Variation distance is a standard measure of distance between two probability distributions, and is defined by $\left\Vert \left\{ p_{x}\right\} -\left\{ q_{x}\right\} \right\Vert :=\frac{1}{2}\sum_{x}\left\vert p_{x}-q_{x}\right\vert $. [^61]: For completeness, let me prove that the universal predictor $\mathcal{U}$ fails to predict the digits of $\Omega=0.b_{1}b_{2}b_{3}\ldots $.  Recall that $\Omega$ is *algorithmically random*, in the sense that for all $n$, the shortest program to generate $b_{1}\ldots b_{n}$ has length $n-O\left( 1\right) $.  Now, suppose by contradiction that $\Pr_{\mathcal{U}}\left[ b_{1}\ldots b_{n}\right] \geq L/2^{n}$, where $L\gg n$.  Let $A_{n}$ be a program that dovetails over all programs $Q$, in order to generate better and better lower bounds on $\Pr_{\mathcal{U}}\left[ x\right] $ for all $n$-bit strings $x$ (converging to the correct probabilities in the infinite limit).  Then we can specify $b_{1}\ldots b_{n}$ by saying: when $A_{n}$ is run, $b_{1}\ldots b_{n}$ is the $j^{th}$ string $x\in\left\{ 0,1\right\} ^{n}$ such that $A_{n}$’s lower bound on $\Pr_{\mathcal{U}}\left[ x\right] $ exceeds $L/2^{n}$.  Since there are at most $2^{n}/L$ such strings $x$, this description requires at most $n-\log_{2}L+\log_{2}n+O\left( 1\right) $ bits.  Furthermore, it clearly gives us a procedure to generate $b_{1}\ldots b_{n}$.  But if $L\gg n$, then this contradicts the fact that $b_{1}\ldots b_{n}$ has description length $n-O\left( 1\right) $.  Therefore$$\Pr_{\mathcal{U}}\left[ b_{1}\ldots b_{n}\right] ={\displaystyle\prod\limits_{i=1}^{n}} \Pr_{\mathcal{U}}\left[ b_{i}|b_{1}\ldots b_{i-1}\right] =O\left( \frac {n}{2^{n}}\right) ,$$ and $\mathcal{U}$ hardly does better than chance.
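The variation-distance formula defined above is easy to sketch directly; the example distributions below are illustrative, not from the text:

```python
# Total variation distance between two distributions, each given as a
# dict mapping outcomes to probabilities over the same outcome set.
def variation_distance(p, q):
    return 0.5 * sum(abs(p[x] - q[x]) for x in p)

fair   = {'H': 0.5, 'T': 0.5}
biased = {'H': 0.8, 'T': 0.2}
print(variation_distance(fair, biased))  # 0.3
```

Equivalently, the variation distance is the largest possible gap $\left\vert \Pr_{p}\left[ E\right] -\Pr_{q}\left[ E\right] \right\vert $ over all events $E$, which is why it is the natural yardstick for the predictor's accuracy.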
---
abstract: 'In this paper, we consider the volume enclosed by the microcanonical ensemble in phase space as a statistical ensemble. This can be interpreted as an intermediate image between the microcanonical and the canonical pictures. By maintaining the ergodic hypothesis over this ensemble, that is, the equiprobability of all its accessible states, the equivalence of this ensemble in the thermodynamic limit with the microcanonical and the canonical ensembles is suggested by means of geometrical arguments. The Maxwellian and the Boltzmann-Gibbs distributions are obtained from this formalism. In the appendix, the derivation of the Boltzmann factor from a new [*microcanonical image*]{} of the canonical ensemble is also given.'
author:
- 'Ricardo López-Ruiz'
- 'Jaime Sañudo'
- 'Xavier Calbet'
title: 'On the equivalence of the microcanonical and the canonical ensembles: a geometrical approach'
---

The microcanonical and the canonical ensembles represent two clearly different physical situations in a statistical system [@huang]. The microcanonical ensemble is presented in the literature as modeling an isolated system that conserves its energy in time. The canonical ensemble models a system in contact with a heat reservoir of infinite energy, which allows the energy of the system to fluctuate while keeping its mean value constant in time. In the thermodynamic limit, and under certain assumptions on the entropy function [@ellis], both formalisms converge and give the same macroscopic statistical results [@huang; @ellis]. Here, we interpret the volume enclosed by the microcanonical ensemble in phase space as a statistical ensemble. It implies the existence of some kind of heat reservoir with an upper energy limit, so that the system can visit all the accessible states enclosed in that volume. The geometrical reason why this picture is equivalent to the microcanonical ensemble in the thermodynamic limit is discussed.
Thus, if the microcanonical ensemble is supposed to be represented by the equiprobability over the hypersurface on which the system evolves as a consequence of conserving an energy $E$ (as recently explained in Refs. ), then the volume-based ensemble can be interpreted as the equiprobability over the whole volume enclosed by that hypersurface. This means that, in the latter ensemble, the system can visit states with different energies, with an upper limit $E$ given by the energy defined in its equivalent microcanonical picture. As we have said, we can think that in this image the system is exchanging energy with a heat (or energy) reservoir containing a maximum energy $E$. This constraint is removed in the thermodynamic limit, when the number of degrees of freedom and the energy $E$ are supposed to become infinite. Let us observe that this infinite limit also establishes the equivalence between this ensemble and the canonical ensemble (assuming certain smoothness conditions on the entropy function [@ellis]), just because in this case the reservoir contains an infinite energy and then both pictures become identical. Hence, when the number of dimensions of the system increases infinitely, almost all the volume enclosed by that hypersurface is located in the vanishingly thin layer close to the hypersurface, and, in consequence, surface and volume tend to coincide. This is the reason why the microcanonical ensemble and the volume-based ensemble, and by extension the canonical ensemble, give the same results for well-behaved systems [@ellis] in the thermodynamic limit. We proceed to obtain different classical results from this volume-based statistical ensemble. We start by deriving (recalling) the Maxwellian (Gaussian) distribution from geometrical arguments over the volume of an $N$-sphere.
Following the same insight, we also explain the origin of the Boltzmann-Gibbs (exponential) distribution by means of the geometrical properties of the volume of an $N$-dimensional pyramid. We finish by claiming a possible general statistical result that follows from the properties of the volume enclosed by a one-parameter dependent family of hypersurfaces, in which $N$-spheres and $N$-dimensional pyramids are included. In the appendix, an alternative microcanonical image of the canonical ensemble is also given. Derivation of the Maxwellian distribution {#derivation-of-the-maxwellian-distribution .unnumbered} ========================================= Let us suppose a one-dimensional ideal gas of $N$ non-identical classical particles with masses $m_i$, with $i=1,\ldots,N$, and total maximum energy $E$. If particle $i$ has a momentum $m_iv_i$, we define its kinetic energy: $$p_i^2 \equiv {1 \over 2}m_iv_i^2, \label{eq-p_i}$$ where $p_i$ is the square root of the kinetic energy. If the total maximum energy is defined as $E \equiv R^2$, we have $$p_1^2+p_2^2+\cdots +p_{N-1}^2+p_N^2 \leq R^2. \label{eq-E}$$ We see that the system has accessible states with different energies, which are supplied by the heat reservoir. These states are all those enclosed within the volume of the $N$-sphere given by Eq. (\[eq-E\]). The formula for the volume $V_N(R)$ of an $N$-sphere of radius $R$ is $$V_N(R) = {\pi^{N\over 2}\over \Gamma({N\over 2}+1)}R^{N}, \label{eq-S_n}$$ where $\Gamma(\cdot)$ is the gamma function. If we suppose that each point inside the $N$-sphere is equiprobable, then the probability $f(p_i)dp_i$ of finding the particle $i$ with coordinate $p_i$ (energy $p_i^2$) is proportional to the volume formed by all the points in the $N$-sphere having the $i$th-coordinate equal to $p_i$. Our objective is to show that $f(p_i)$ is the Maxwellian distribution, with the normalization condition $$\int_{-R}^Rf(p_i)dp_i = 1.
\label{eq-p_n}$$ If the $i$th particle has coordinate $p_i$, the $(N-1)$ remaining particles share an energy less than the maximum energy $R^2-p_i^2$ on the $(N-1)$-sphere $$p_1^2+p_2^2 \cdots +p_{i-1}^2 + p_{i+1}^2 \cdots +p_N^2 \leq R^2-p_i^2, \label{eq-E1}$$ whose volume is $V_{N-1}(\sqrt{R^2-p_i^2})$. It can be easily proved that $$V_N(R) = \!\int_{-R}^{R}\!V_{N-1}(\sqrt{R^2-p_i^2})dp_i. \label{eq-theta1}$$ Hence, the volume of the $N$-sphere for which the $i$th coordinate is between $p_i$ and $p_i+dp_i$ is $V_{N-1}(\sqrt{R^2-p_i^2})dp_i$. We normalize it to satisfy Eq. (\[eq-p\_n\]), and obtain $$f(p_i) = {V_{N-1}(\sqrt{R^2-p_i^2})\over V_N(R)}, \label{eq-f_n}$$ whose final form, after some calculation, is $$f(p_i) = C_N R^{-1}\Big(1-{p_i^2\over R^2} \Big)^{N-1\over 2}, \label{eq-mm}$$ with $$C_N = {1\over\sqrt{\pi}}{\Gamma({N+2\over 2})\over \Gamma({N+1\over 2})}. \label{eq-cn}$$ For $N\gg 1$, Stirling’s approximation can be applied to Eq. (\[eq-cn\]), leading to $$\lim_{N\gg 1} C_N \simeq {1\over\sqrt{\pi}}\sqrt{N\over 2}. \label{eq-cc}$$ If we call $\epsilon$ the mean energy per particle, $E=R^2=N\epsilon$, then in the limit of large $N$ we have $$\lim_{N\gg 1}\left(1-{p_i^2\over R^2}\right)^{N-1\over 2} \simeq e^{-{p_i^2/2\epsilon}}. \label{eq-ee}$$ The factor $e^{-{p_i^2/2\epsilon}}$ is found when $N\gg 1$ but, even for small $N$, it can be a good approximation for particles with low energies. After substituting Eqs. (\[eq-cc\])–(\[eq-ee\]) into Eq. (\[eq-mm\]), we obtain the Maxwellian distribution in the asymptotic regime $N\rightarrow\infty$ (which also implies $E\rightarrow\infty$): $$f(p)dp = \sqrt{1\over 2\pi\epsilon}\,e^{-{p^2/2\epsilon}}dp, \label{eq-gauss}$$ where the index $i$ has been removed because the distribution is the same for each particle, and thus the velocity distribution can be obtained by averaging over all the particles. Depending on the physical situation, the mean energy per particle $\epsilon$ takes different expressions. 
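As a check, the exact marginal of Eq. (\[eq-f\_n\]) and its Gaussian limit Eq. (\[eq-gauss\]) can be compared numerically. The following is a minimal sketch (the function names are our own; the volumes are evaluated in log space to avoid floating-point overflow at large $N$):

```python
import math

def log_v_sphere(n, r):
    # log of the volume of an n-sphere of radius r, from Eq. (eq-S_n)
    return (n / 2.0) * math.log(math.pi) + n * math.log(r) - math.lgamma(n / 2.0 + 1)

def f_exact(p, n, r):
    # Exact marginal density of one coordinate, Eq. (eq-f_n):
    # V_{N-1}(sqrt(R^2 - p^2)) / V_N(R)
    return math.exp(log_v_sphere(n - 1, math.sqrt(r * r - p * p)) - log_v_sphere(n, r))

def f_gauss(p, eps):
    # Asymptotic Maxwellian, Eq. (eq-gauss), with mean energy per particle eps
    return math.exp(-p * p / (2.0 * eps)) / math.sqrt(2.0 * math.pi * eps)

eps = 1.0
for n in (10, 100, 1000):
    r = math.sqrt(n * eps)  # E = R^2 = N * eps
    print(n, f_exact(0.5, n, r), f_gauss(0.5, eps))
```

For $N=1000$ and $\epsilon=1$ the two densities at $p=0.5$ already agree to better than $10^{-3}$, illustrating the $N\rightarrow\infty$ convergence claimed above.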
For a one-dimensional gas in thermal equilibrium we can calculate the dependence of $\epsilon$ on the temperature, which, as in the microcanonical ensemble, can be calculated by differentiating the entropy with respect to the energy. The entropy can be written as $S=-kN\!\int_{-\infty} ^{\infty} f(p)\ln f(p)\,dp$, where $f(p)$ is given by Eq. (\[eq-gauss\]) and $k$ is the Boltzmann constant. If we recall that $\epsilon=E/N$, we obtain $$S(E)= {1\over 2}kN\ln\left({E\over N} \right) + {1\over 2}kN(\ln(2\pi)+1). \label{eq-s2}$$ The calculation of the temperature $T$ gives $$T^{-1}= \left({\partial S\over \partial E} \right)_N = {kN\over 2E} = {k\over 2\epsilon}.$$ Thus $\epsilon=kT/2$, consistent with the equipartition theorem. If $p^2$ is replaced by ${1\over 2}mv^2$, the Maxwellian distribution is a function of particle velocity, as it is usually given in the literature: $$g(v)dv = \sqrt{m\over 2\pi kT}\,e^{-{mv^2/2kT}}dv.$$ This shows that the geometrical image of the volume-based statistical ensemble allows us to recover the same result as that obtained from the microcanonical and canonical ensembles [@lopez2007-1; @klages]. Also, it confirms for this case the equivalence among all these ensembles in the thermodynamic limit. Derivation of the Boltzmann-Gibbs distribution {#derivation-of-the-boltzmann-gibbs-distribution .unnumbered} ============================================== Here we start by assuming $N$ agents, each one with coordinate $x_i$, $i=1,\ldots,N$, with $x_i\geq 0$ representing the wealth or money of the agent $i$, and a total available amount of money $E$: $$x_1+x_2+\cdots +x_{N-1}+x_N \leq E. \label{eq-e}$$ Under random evolution rules for the exchange of money among agents [@yakovenko1], let us suppose that this system evolves in the interior of the $N$-dimensional pyramid given by Eq. (\[eq-e\]). 
The role of the heat reservoir, which in this model supplies money instead of energy, could be played by the state or by the bank system in western societies. The formula for the volume $V_N(E)$ of an equilateral $N$-dimensional pyramid formed by $N+1$ vertices linked by $N$ perpendicular sides of length $E$ is $$V_N(E) = {E^N\over N!}. \label{eq-S_n1}$$ If we suppose that each point inside the $N$-dimensional pyramid is equiprobable, then the probability $f(x_i)dx_i$ of finding the agent $i$ with money $x_i$ is proportional to the volume formed by all the points in the $(N-1)$-dimensional pyramid having the $i$th-coordinate equal to $x_i$. Our objective is to show that $f(x_i)$ is the Boltzmann factor (or the Maxwell-Boltzmann distribution), with the normalization condition $$\int_{0}^Ef(x_i)dx_i = 1. \label{eq-p_n1}$$ If the $i$th agent has coordinate $x_i$, the $N-1$ remaining agents share, at most, the money $E-x_i$ on the $(N-1)$-dimensional pyramid $$x_1+x_2 \cdots +x_{i-1} + x_{i+1} \cdots +x_N\leq E-x_i, \label{eq-e1}$$ whose volume is $V_{N-1}(E-x_i)$. It can be easily proved that $$V_N(E) = \!\int_{0}^{E}\!V_{N-1}(E-x_i) {dx_i }. \label{eq-theta11}$$ Hence, the volume of the $N$-dimensional pyramid for which the $i$th coordinate is between $x_i$ and $x_i+dx_i$ is $V_{N-1}(E-x_i)dx_i$. We normalize it to satisfy Eq. (\[eq-p\_n1\]), and obtain $$f(x_i) = {V_{N-1}(E-x_i)\over V_N(E)}, \label{eq-f_n1}$$ whose final form, after some calculation, is $$f(x_i) = NE^{-1}\Big(1-{x_i\over E} \Big)^{N-1}. \label{eq-mm1}$$ If we call $\epsilon$ the mean wealth per agent, $E=N\epsilon$, then in the limit of large $N$ we have $$\lim_{N\gg 1}\left(1-{x_i\over E}\right)^{N-1} \simeq e^{-{x_i/\epsilon}}. \label{eq-ee1}$$ The Boltzmann factor $e^{-{x_i/\epsilon}}$ is found when $N\gg 1$ but, even for small $N$, it can be a good approximation for agents with low wealth. After substituting Eq. (\[eq-ee1\]) into Eq. 
(\[eq-mm1\]), we obtain the Maxwell-Boltzmann distribution in the asymptotic regime $N\rightarrow\infty$ (which also implies $E\rightarrow\infty$): $$f(x)dx = {1\over \epsilon}\,e^{-{x/\epsilon}}dx, \label{eq-gauss11}$$ where the index $i$ has been removed because the distribution is the same for each agent, and thus the wealth distribution can be obtained by averaging over all the agents. This distribution has been found to fit the real distribution of incomes in western societies [@yakovenko1]. Depending on the physical situation, the mean wealth per agent $\epsilon$ takes different expressions and interpretations. For instance, making a thermodynamic analogy, we can calculate the dependence of $\epsilon$ on the temperature, which, as in the microcanonical ensemble, can be obtained in this case by differentiating the entropy with respect to the total wealth. The entropy can be written as $S=-kN\!\int_{0} ^{\infty} f(x)\ln f(x)\,dx$, where $f(x)$ is given by Eq. (\[eq-gauss11\]) and $k$ is the Boltzmann constant. If we recall that $\epsilon=E/N$, we obtain $$S(E)= kN\ln\left({E\over N} \right) + kN.$$ The calculation of the temperature $T$ gives $$T^{-1}= \left({\partial S\over \partial E} \right)_N = {kN\over E} = {k\over \epsilon}.$$ Thus $\epsilon=kT$, and the Boltzmann-Gibbs distribution is obtained as it is usually given in the literature: $$f(x)dx = {1\over kT}\,e^{-x/kT}dx.$$ This shows that the geometrical image of the volume-based statistical ensemble allows us to recover the same result as that obtained from the microcanonical and canonical ensembles [@lopez2007-2]. Also, it confirms for this case the equivalence among all these ensembles in the thermodynamic limit. General derivation of the asymptotic distribution: an open problem {#general-derivation-of-the-asymptotic-distribution-an-open-problem .unnumbered} ================================================================== Now the problem is stated in a general way. Let $b$ be a real constant. 
If we have a set of positive variables $(x_1,x_2,\ldots,x_N)$ verifying $$x_1^b+x_2^b+\cdots +x_{N-1}^b+x_N^b \leq E \label{eq-Ek}$$ with an adequate mechanism assuring the equiprobability of all the possible states $(x_1,x_2,\ldots,x_N)$ inside the volume given by expression (\[eq-Ek\]), will we have for the generic variable $x$ the distribution $$f(x)dx \sim \epsilon^{-1/b}\,e^{-{x^b/b\epsilon}}dx, \label{eq-gaussn}$$ when we average over the ensemble in the limit $N\rightarrow\infty$? Let us suppose that the answer to this last question is affirmative (as will probably be shown in a forthcoming paper). If we define $$c_b=\left[\!\int_{0} ^{\infty} e^{-y^b/b}\,dy\right]^{-1}, \label{eq-cb1}$$ then expression (\[eq-gaussn\]), redefined as $$f(x)dx = c_b \epsilon^{-1/b}\,e^{-{x^b/b\epsilon}}dx, \label{eq-gaussn1}$$ is normalized, i.e., $\!\int_{0} ^{\infty} f(x)dx=1$. Following the thermodynamic analogy made in the cases $b=1,2$, we can calculate the dependence of $\epsilon$ on the temperature by differentiating the entropy with respect to the energy. The entropy can be written as $S=-kN\!\int_{0} ^{\infty} f(x)\ln f(x)\,dx$, where $f(x)$ is given by Eq. (\[eq-gaussn1\]) and $k$ is the Boltzmann constant. If we recall that $\epsilon=E/N$, we obtain $$S(E)= {kN\over b}\ln\left({E\over N} \right) + {kN\over b}(1-b\ln c_b),$$ where it has been used that $\epsilon=\langle x^b\rangle=\!\int_{0} ^{\infty} x^bf(x)dx$. Let us recall at this point that, for the case $b=2$, the limits used in the normalization integral of $f(x)$ in expression (\[eq-p\_n\]), and, therefore, in the calculation of $S(E)$, are from $-\infty$ to $\infty$ instead of from $0$ to $\infty$ as used here. This does not introduce any change in the final result, only redefines the constant $c_{b=2}$, which now is $\sqrt{2\over \pi}$ instead of the factor $1\over\sqrt{2\pi}$ from expression (\[eq-gauss\]). 
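These special values of the normalization constant of Eq. (\[eq-cb1\]) are easy to verify numerically. A minimal sketch (midpoint-rule quadrature; the function name is our own) recovering $c_1=1$ and $c_2=\sqrt{2/\pi}$:

```python
import math

def c_b(b, lim=40.0, steps=400_000):
    # c_b = 1 / int_0^inf exp(-y^b / b) dy, Eq. (eq-cb1),
    # approximated on [0, lim] by the midpoint rule
    h = lim / steps
    integral = sum(math.exp(-((i + 0.5) * h) ** b / b) * h for i in range(steps))
    return 1.0 / integral

print(c_b(1))                          # Boltzmann-Gibbs case: c_1 = 1
print(c_b(2), math.sqrt(2 / math.pi))  # Maxwellian case: c_2 = sqrt(2/pi)
```

The cutoff `lim=40` is harmless here since the integrand decays at least as fast as $e^{-y}$ for $b\geq 1$.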
The calculation of the temperature $T$ gives $$T^{-1}= \left({\partial S\over \partial E} \right)_N = {kN\over bE} = {k\over b\epsilon}.$$ Thus $\epsilon=kT/b$, a result that recovers the theorem of equipartition of energy for the quadratic case $b=2$. The distribution for all $b$ is finally obtained: $$f(x)dx = c_b\left({b\over kT}\right)^{1/b}\,e^{-x^b/kT}dx.$$ This shows that the geometrical image of the volume-based statistical ensemble allows us to recover the same result as that obtained from the microcanonical and canonical ensembles [@lopez2007-1; @lopez2007-2]. Also, it confirms for this case the equivalence among all these ensembles in the thermodynamic limit. APPENDIX: {#appendix .unnumbered} ========== In this paper we are interested in alternative views of the different statistical ensembles. Here we give an image of the canonical ensemble different from its usual presentation in the literature. Let us suppose that a system with mean energy $\bar E$, and in thermal equilibrium with a heat reservoir, is observed during a very long period $\tau$ of time. Let $E_i$ be the energy of the system at time $i$. Then we have: $$E_1+E_2+\cdots +E_{\tau-1}+E_{\tau} = \tau\cdot\bar E. \label{eq-eee}$$ If we repeat this process of observation a huge number (toward infinity) of times, the different vectors of measurements, $(E_1,E_2,\ldots,E_{\tau-1},E_{\tau})$, with $0\leq E_i\leq \tau\cdot\bar E$, will end up covering equiprobably the whole surface of the $\tau$-dimensional hyperplane given by Eq. (\[eq-eee\]). If the limit $\tau\rightarrow\infty$ is now taken, the asymptotic probability $p(E)$ of finding the system with an energy $E$ (where the index $i$ has been removed), $$p(E)\; \sim \;\; e^{-E/\bar E}, \label{eq-e22}$$ is found by means of the geometrical arguments exposed in Ref. . Making a thermodynamic analogy, the temperature $T$ can also be calculated; one obtains $$\bar E = kT. 
\label{eq-e222}$$ The [*stamp*]{} of the canonical ensemble, namely, the Boltzmann factor, $$p(E)\;\sim\;\; e^{-E/kT}, \label{eq-e223}$$ is finally recovered from this new image of the canonical ensemble. [10]{} K. Huang, “Statistical Mechanics,” John Wiley and Sons, New York (1987);\ A. Munster, “Statistical Thermodynamics,” Volume I, Springer-Verlag, Berlin (1969). R.B. Griffiths, “Microcanonical ensemble in quantum statistical mechanics,” J. Math. Phys. 6, 1447-1461 (1965);\ T. Dauxois, P. Holdsworth, and S. Ruffo, “Violation of ensemble equivalence in the antiferromagnetic mean-field XY model,” Eur. Phys. J. B 16, 659-667 (2000);\ R.S. Ellis, K. Haven, and B. Turkington, “Large deviations principles and complete equivalence and nonequivalence results for pure and mixed ensembles,” J. Stat. Phys. 101, 999-1064 (2000);\ M. Costeniuc, R.S. Ellis, H. Touchette, and B. Turkington, “The Generalized Canonical Ensemble and its universal equivalence with the Microcanonical Ensemble,” J. Stat. Phys. 119, 1283-1329 (2005);\ R. Toral, “Ensemble equivalence for non-Boltzmannian distributions,” Physica A 365, 85-90 (2006). R. Lopez-Ruiz and X. Calbet, “Derivation of the Maxwellian distribution from the microcanonical ensemble,” Am. J. Phys. [**75**]{}, 752-753 (2007). K. Rateitschak, R. Klages, and G. Nicolis, “Thermostating by deterministic scattering: the periodic Lorentz gas,” J. Stat. Phys. 99, 1339-1364 (2000), see Appendix A. R. Lopez-Ruiz, J. Sañudo, and X. Calbet, “Geometrical derivation of the Boltzmann factor,” arXiv:0707.4081 \[nlin.CD\] (2007). A. Dragulescu and V. M. Yakovenko, “Statistical mechanics of money,” Eur. Phys. J. [**B 17**]{}, 723–729 (2000); “Evidence for the exponential distribution of income in the USA,” Eur. Phys. J. B [**20**]{}, 585–589 (2001).
--- abstract: 'A simple solvable toy model exhibiting effective restoration of chiral symmetry in excited hadrons is constructed. A salient feature is that while physics of the low-lying states is crucially determined by the spontaneous breaking of chiral symmetry, in the high-lying states the effects of chiral symmetry breaking represent only a small correction. Asymptotically the states approach the regime where their properties are determined by the underlying unbroken chiral symmetry.' author: - 'Thomas D. Cohen${}^1$ and Leonid Ya. Glozman${}^2$' title: 'A simple toy model for effective restoration of chiral symmetry in excited hadrons.' --- One striking feature of the hadron spectrum is the presence of nearly degenerate resonances of opposite parity relatively high in the excitation spectrum. It has been argued that this phenomenon might be a manifestation of an “effective restoration of chiral symmetry” [@G1; @CG1; @CG11; @G2; @G22; @G3; @G4; @G44]. The central idea underlying this notion is the possibility that the coupling of these highly excited states to the dynamics driving spontaneous chiral symmetry breaking could get progressively weaker for progressively massive hadrons, leading to the excited states being quite insensitive to the effects of spontaneous chiral symmetry breaking and hence acting qualitatively very much as if they were in the Wigner-Weyl mode. In such a situation the resonances would be expected to form approximate linearly realized chiral multiplets whose members were nearly degenerate. Such a scenario naturally predicts multiplets of states with opposite parities. That the linear realization of the chiral symmetry could be relevant to the low-lying hadrons, where, however, the chiral symmetry is strongly broken, has been explored in the context of a certain class of models [@Jido; @Jido1] and in heavy-light quark physics [@HQ; @HQ1; @HQ11; @HQ111]. 
It is important to precisely characterize what is implied under effective restoration in [@G1; @CG1; @CG11; @G2; @G22; @G3; @G4; @G44], because sometimes it is erroneously interpreted in the sense that the highly-excited hadrons are in the Wigner-Weyl mode. The mode of symmetry is defined only by the properties of the vacuum. If symmetry is spontaneously broken in the vacuum, then it is the Nambu-Goldstone mode and the [*whole*]{} spectrum of excitations on top of the vacuum is in the Nambu-Goldstone mode. However, it may happen that the role of the chiral symmetry breaking condensates of the vacuum becomes progressively irrelevant in excited states. This means that the chiral symmetry breaking effects (dynamics) become less and less important in the highly excited hadrons and asymptotically the states approach the regime where their properties are determined by the underlying unbroken chiral symmetry (i.e. by the symmetry in the Wigner-Weyl mode). Although the basic idea has been discussed in the past, it is useful to construct a simple model which illustrates clearly the basic idea. Such a model has the virtue of demonstrating explicitly that the idea is consistent with the underlying concepts of chiral symmetry and spontaneous chiral symmetry breaking. This may be of use in preventing confusion about the physical content of “effective chiral restoration”. We will then use the behavior in this model to clarify the physical meaning of the given phenomenon. Consider as an example the following model which, while having no particular physical significance, illustrates quite clearly the physical content of effective chiral restoration of the type discussed in refs. [@G1; @CG1; @CG11; @G2; @G22; @G3; @G4; @G44]. The model contains an infinite number of $\pi$ and $\sigma$ mesons. In this respect the model mimics large $N_c$ QCD. We denote the $j^{\rm th}$ pion ($\sigma$ meson) $\pi_j$ ($\sigma_j$). 
These fields enter the Lagrangian in a chirally invariant way as members of $\left ( \frac{1}{2},\frac{1}{2} \right ) $ chiral multiplets: $$\begin{aligned} \left [V^a,\pi^b_j \right] = i \,\epsilon^{a b c} \, \pi^{c}_j &{}& \left [ V^a,\sigma_j \right ] = 0 \nonumber\\ \left[A^a,\pi^b_j \right] = i \delta_{a b} \, \sigma_j \; &{}& \left[A^a,\sigma_j \right] = i \pi^a_j\end{aligned}$$ where $V^a$ ($A^a$) represent the generators of vector (axial) rotations. To simplify the analysis by reducing the number of possible couplings, in addition to chiral symmetry the model has an infinite number of discrete symmetries: it is invariant under $\sigma_j\rightarrow -\sigma_j$ for all $j$. The discrete symmetries ensure that each type of field always enters the Lagrangian in even powers. The Lagrangian is given by $$\begin{aligned} {\cal L} &=& \sum_j \frac{1}{2} \left ( \partial^\mu \sigma_j \, \partial_\mu \sigma_j \, + \partial^\mu \vec{\pi}_j \cdot \partial_\mu \vec{\pi}_j \right ) - \frac{m_o^2}{2} \left ( \alpha (\sigma_1^2 + \vec{\pi}_1 \cdot \vec{\pi}_1 ) + \frac{g}{2 m_o^2} (\sigma_1^2 + \vec{\pi}_1 \cdot \vec{\pi}_1 )^2 \right ) \nonumber \\ & - & \frac{m_o^2}{2} \sum_{j=2}^\infty \, \left( j^2 (\sigma_j^2 \, + \vec{\pi_j} \cdot \vec{\pi_j}) \, + \, \frac{g}{j \, m_o^2 \,} \left ( (\sigma_1 \sigma_j + \vec{\pi}_1\cdot \vec{\pi}_j)^2 \, + \, (\sigma_1^2 + \vec{\pi}_1\cdot \vec{\pi}_1) \, ( \sigma_j^2 + \vec{\pi}_j\cdot \vec{\pi}_j) \right )\right) + g V_4\label{L}\end{aligned}$$ where $m_o$ has dimensions of mass, $\alpha$ and $g$ are dimensionless constants, and $V_4$ is some function of the fields $\sigma_j, \vec{\pi_j}; ~ j>1$ consistent with the symmetries, with terms quartic in the fields. The model is chosen so that $j=1$ fields play a special role in the chiral broken phase: $\sigma_1$ (and no other fields) acquires a vacuum expectation value and the excitation associated with $\pi_1$ becomes massless. 
The parameter $\alpha$ controls spontaneous symmetry breaking; $\alpha>0$ yields the Wigner-Weyl mode while $\alpha<0$ yields the Nambu-Goldstone mode. As will be seen below, the analysis is independent of the particular form picked for $V_4$. The interaction terms in the model of Eq. (\[L\]) are controlled by the parameter $g$. For $g \ll 1$, the theory is weakly coupled and hence can be treated classically. The Lagrangian is parameterized in such a way that the dependence of the mass spectrum on $g$ only arises through loop contributions, which can be neglected in the weak-coupling limit. Note, however, that even in the weakly coupled limit the interaction terms play an essential role when $\alpha < 0$, since they determine the amount of chiral symmetry breaking. We consider here the weak coupling limit of the theory, which is analytically tractable—indeed trivial—but is a perfectly legitimate chiral theory. If one imposes isospin invariance then it is easy to see that the minimum of the potential is given by: $$\begin{aligned} \langle \sigma_j \rangle & =& 0 \; \; \; \; {\rm for} \, \, \alpha > 0 \nonumber \\ \langle \sigma_j \rangle & = & \pm \delta_{j 1} \, m_o \, \sqrt{\frac{-\alpha}{g}} \; \; \; \; {\rm for} \, \, \alpha \le 0 \; .\end{aligned}$$ By expanding quadratically around the minimum of the potential one can find the mass spectrum. $${\rm for}\, \, \alpha > 0 \; \left \{ \begin{array}{l} m^2_{\pi_1} = \alpha m_o^2 \\ \\ m^2_{\sigma_1} = \alpha m_o^2 \\ \\ m^2_{\pi_j} = j^2 m_o^2 \; \; (j \ge 2) \\ \\ m^2_{\sigma_j} = j^2 m_o^2\; \; (j \ge 2) \end{array} \right . 
\nonumber$$ $${\rm for} \, \, \alpha \le 0 \; \left \{ \begin{array}{l} m^2_{\pi_1} = 0 \\ \\ m^2_{\sigma_1} = - 2 \alpha m_o^2 \\ \\ m^2_{\pi_j} = j^2 m_o^2+ \frac{2 g\langle \sigma_1 \rangle^2 }{j} \\\; \; \; \; \; =\left( j^2 - \frac{2 \alpha}{j} \right ) m_o^2\; \; (j \ge 2) \\ \\ m^2_{\sigma_j} = j^2 m_o^2 + \frac{4 g \langle \sigma_1 \rangle^2 }{j} \\\; \; \; \; \; = \left ( j^2 - \frac{4 \alpha}{j} \right) m_o^2 \; \; (j \ge 2) \end{array} \right .$$ This procedure is legitimate since we are working in the weak coupling limit and quantum (loop) corrections are suppressed. The spectrum is shown in Fig. \[toy\]. ![The mass spectrum of the model in Eq. (\[L\]) in units of the mass parameter $m_o$. The solid lines correspond to pions while the dotted lines correspond to $\sigma$-mesons. \[toy\]](toy.eps) It is apparent from Fig. \[toy\] that this model in the spontaneously broken phase exhibits the phenomenon of effective chiral restoration. Consider, for example, the region near $\alpha=-1$. While the lowest-lying states have no hint of a chiral multiplet structure, as one goes higher in the spectrum the states fall into nearly degenerate multiplets which to increasingly good approximation look like pions and $\sigma$-mesons in linearly realized $\left ( \frac{1}{2},\frac{1}{2} \right )$ representations. Suppose that there is a “no-go theorem” forbidding effective chiral restoration in excited hadrons and the existence of approximate parity doublets in the Nambu-Goldstone phase cannot be a manifestation of chiral symmetry. If this were correct then the appearance of approximate parity doublets in the Nambu-Goldstone phase of this model must be unrelated to chiral symmetry. However, this is obviously not the case: the near degeneracy of the parity doublets reflects the underlying chiral symmetry of the model. 
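The approach to degeneracy can be made quantitative with a few lines of code. The following is a minimal sketch (the function name is our own; units $m_o=1$), using the mass formulas $m^2_{\pi_j}=j^2 m_o^2+2g\langle\sigma_1\rangle^2/j$ and $m^2_{\sigma_j}=j^2 m_o^2+4g\langle\sigma_1\rangle^2/j$ together with $g\langle\sigma_1\rangle^2=-\alpha m_o^2$:

```python
import math

def pair_masses(alpha, j):
    # Masses of the (pi_j, sigma_j) pair in the broken phase (alpha <= 0),
    # in units m_o = 1; the order parameter enters via g <sigma_1>^2 = -alpha
    g_s2 = -alpha
    m_pi = math.sqrt(j ** 2 + 2.0 * g_s2 / j)
    m_sigma = math.sqrt(j ** 2 + 4.0 * g_s2 / j)
    return m_pi, m_sigma

# Near alpha = -1 the splitting inside a multiplet falls off like 1/j^2
for j in (2, 5, 10, 50):
    m_pi, m_sigma = pair_masses(-1.0, j)
    print(j, m_pi, m_sigma, m_sigma - m_pi)
```

At $\alpha=-1$ the splitting within a $(\pi_j,\sigma_j)$ pair drops from about $0.21$ at $j=2$ to about $4\times 10^{-4}$ at $j=50$, behaving asymptotically as $g\langle\sigma_1\rangle^2/j^2$ in units of $m_o$.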
The key point is that high-lying states are very insensitive to the chiral symmetry breaking order parameter $\sqrt{g} \langle \sigma_1 \rangle$: $$\begin{aligned} \frac{\partial m_{\sigma_j}}{\partial \left(\sqrt{g} \langle \sigma_1 \rangle \right) } & = & \frac{4 m_o \sqrt{-\alpha}}{ j m_{\sigma_j}} \rightarrow \frac{4 \sqrt{-\alpha} }{ j^2 } \nonumber \\ \frac{\partial m_{\pi_j}}{\partial \left ( \sqrt{g} \langle \sigma_1 \rangle \right ) } & = & \frac{2 m_o \sqrt{-\alpha}}{ j m_{\pi_j}} \rightarrow \frac{2 \sqrt{-\alpha}}{ j^2 }\end{aligned}$$ where the arrow indicates the asymptotic behavior at large $j$. Clearly, at large $j$ the masses become increasingly insensitive to the chiral order parameter and the spectrum approximates a Wigner-Weyl mode spectrum increasingly accurately. The scenario in which high mass resonances become increasingly insensitive to the chiral order parameter and hence form into approximate chiral multiplets in the spectrum is precisely what was meant by effective chiral restoration in refs. [@G1; @CG1; @CG11; @G2; @G22; @G3; @G4; @G44]. Thus, since there is at least one chirally invariant model in which this happens, there cannot be any “no-go theorem” for effective chiral restoration. One might legitimately argue that the model in Eq. (\[L\]) is highly artificial. The model was designed—with malice aforethought—to ensure effective chiral restoration. This was done in part by fixing the term in the Lagrangian which couples to the chiral order parameter to scale like $1/j$ and hence become weak for large $j$. This is, of course, true. However, the point of the exercise was simply to demonstrate that there is a solvable model that exhibits effective chiral restoration and this model helps to clarify a physical meaning of this phenomenon. However artificial the model, it is adequate for this purpose. The model above was formulated in terms of the fields that transform linearly under the chiral group. 
Then a question arises whether this model illustrates some generic behavior or whether it is specific to the linear realization of the chiral symmetry. Indeed, in the Nambu-Goldstone mode one can always make a field redefinition to the standard nonlinear realization [@C; @C1] in which fields of opposite parity are decoupled, [*i.e.*]{}, do not transform into each other under chiral transformations. In this case the act of making an axial rotation does not transform a field into its chiral partner but instead creates a massless Goldstone boson (pion) from the vacuum. However, such a redefinition is unphysical and the Lagrangian of Eq. (\[L\]) rewritten in terms of these new fields cannot alter the spectrum. This reflects a general situation that the field redefinitions themselves cannot modify any physical content of the theory. In this context it is useful to recall that the physics is not in the fields, but in the states, which appear once one applies the fields to the vacuum. The physics of these states is controlled only by the microscopic theory. Clearly the chiral symmetry can impose no constraints on these decoupled fields in the spontaneously broken phase. However this does not rule out effective chiral restoration. Generically, in the Nambu-Goldstone mode, the properties of states are sensitive to the dynamics of chiral symmetry breaking—this is why the states generically do not form [*exact*]{} chiral multiplets. If, however, there exist states in the spectrum which for some reason are insensitive or weakly sensitive to the dynamics of chiral symmetry breaking, there will be approximate relations between the properties of the levels dictated by the underlying chiral symmetry (in the limit of complete insensitivity the states must look identical to exact linear chiral multiplets). 
The reason for this is simple: one can reverse the transformation from the standard nonlinear realization and rewrite things in terms of a set of linearly realized fields; the theory in terms of these fields is just as physical as the standard nonlinear realization. These fields correspond to states which would be related to each other by chiral symmetry were it not for the interactions with the chiral order parameter and with the Goldstone bosons. As these interactions become small, approximate chiral multiplets emerge. This is explicitly seen in our toy model since the high-lying states do decouple from the vacuum condensate and from the Goldstone bosons. Indeed, it was argued in refs. [@G3; @N] that the high-lying hadrons do decouple from the Goldstone bosons once they decouple from the quark condensate. Thus, the issue is a quantitative one and not merely a qualitative one. In the Nambu-Goldstone mode the states should be sensitive to the dynamics of chiral symmetry breaking. The question is how much. As demonstrated above, it is certainly possible in some models for high lying states to have very small coupling to the dynamics of symmetry breaking and, hence, to very good approximation to fall into chiral multiplets. The conjecture of effective chiral restoration in hadronic physics is that QCD behaves in the same manner as this simple model. As noted above the model in Eq. (\[L\]) is highly artificial. However, it does illustrate many of the salient points relevant to the issue. Firstly, it illustrates the most important point: while the physics of the low-lying states is crucially determined by the spontaneous breaking of chiral symmetry, in the high-lying states the effects of chiral symmetry breaking represent only a small correction. Secondly, it shows the gradual nature of the conjectured effect. The effect is never absolute but always approximate; for any given strength of the coupling $\alpha$ it becomes increasingly accurate as one goes up in the spectrum. 
Thirdly, it makes very clear that the key issue is the coupling of the state to the dynamics responsible for spontaneous chiral symmetry breaking—in this case the coupling to $\sqrt{g} \langle \sigma_1 \rangle$ which plays the role of the chiral order parameter. Although we have constructed a simple tractable toy model explicitly exhibiting the effective restoration of chiral symmetry in excited hadrons, the essential question of whether the parity partners seen in excited hadrons are due to effective chiral restoration remains open. As discussed in [@CG1; @G4; @G44], there are general arguments leading to the expectation that as one goes to asymptotically high mass states the sensitivity to the chiral condensate decreases. This in turn leads to the expectation of effective chiral restoration in the hadron spectrum, [*provided*]{} that discernible hadrons still exist in the regime where the sensitivity to the chiral condensate becomes negligible. However, at present there is no reliable theoretical tool which allows one to answer the question of whether discernible hadronic resonances persist high enough in the spectrum to reach the regime of effective chiral restoration or whether the spectrum essentially melts into the QCD continuum before this point. There is an interesting theoretical limit—the large $N_c$ limit—in which the meson spectrum is infinite and remains discrete and the phenomenon of effective chiral restoration ought to occur. Some discussions of the rate of symmetry restoration can be found in [@SHIFMAN; @GOLTERMAN]. The results of the solvable model of the ’t Hooft type in 3+1 dimensions are presented in [@W]. While the large $N_c$ argument is interesting theoretically and shows how the phenomenon may come about, it does not provide a compelling theoretical argument for the $N_c=3$ world. Similarly, the present empirical evidence is not compelling but we take it to be suggestive. There is an empirical way to verify the idea, however. 
If the parity doubling observed at present is indeed due to chiral restoration, then some of the missing states in the chiral multiplets with approximately known masses should be experimentally found. This point is a legitimate subject for discussion. Correspondence with R. Jaffe, D. Pirjol, A. Scardicchio, M. Shifman and A. Vainstein is acknowledged. LYaG acknowledges the support from the Austrian Science Fund, projects P16823-N08 and P19168-N16. TDC acknowledges the support of the United States Department of Energy. [99]{} L. Ya. Glozman, Phys. Lett. B [**475**]{}, 329 (2000). T. D. Cohen and L. Ya. Glozman, Phys. Rev. D [**65**]{}, 016006 (2002). T. D. Cohen and L. Ya. Glozman, Int. J. Mod. Phys. A [**17**]{}, 1327 (2002). L. Ya. Glozman, Phys. Lett. B [**539**]{}, 257 (2002). L. Ya. Glozman, Phys. Lett. B [**587**]{}, 69 (2004). L. Ya. Glozman, Phys. Lett. B [**541**]{}, 115 (2002). L. Ya. Glozman, Int. J. Mod. Phys. A [**21**]{}, 475 (2006). L. Ya. Glozman, A. V. Nefediev, J.E.F.T. Ribeiro, Phys. Rev. [**D 72**]{}, 094002 (2005). D. Jido, T. Hatsuda, T. Kunihiro, Phys. Rev. Lett. [**84**]{}, 3252 (2000). D. Jido, M. Oka, A. Hosaka, Progr. Theor. Phys. [**106**]{}, 873 (2001). M. A. Nowak, M. Rho and I. Zahed, Phys. Rev. [**D 48**]{}, 4370 (1993). M. A. Nowak, M. Rho and I. Zahed, Acta Phys. Polon. B [**35**]{}, 2377 (2004). W. A. Bardeen and C. T. Hill, Phys. Rev. [**D49**]{}, 409 (1994). W. A. Bardeen, E. J. Eichten and C. T. Hill, Phys. Rev. [**D49**]{}, 409 (1994). S. R. Coleman, J. Wess and B. Zumino, Phys. Rev. [**177**]{}, 2239 (1969). C. G. Callan, S. R. Coleman, J. Wess and B. Zumino, Phys. Rev. [**177**]{}, 2247 (1969). M. Shifman, hep-ph/0507246. O. Cata, M. Golterman, S. Peris, hep-ph/0602194. R. F. Wagenbrunn, L. Ya. Glozman, hep-ph/0605247. L. Ya. Glozman, A. V. Nefediev, Phys. Rev. [**D 73**]{}, 074018 (2006).
--- abstract: '[The satellite-borne experiment  has been used to make new measurements of cosmic ray H and He isotopes. The isotopic composition was measured between 100 and 600 MeV/n for hydrogen and between 100 and 900 MeV/n for helium isotopes over the 23^rd^ solar minimum from July 2006 to December 2007. The energy spectra of these components carry fundamental information regarding the propagation of cosmic rays in the Galaxy, providing constraints competitive with those obtained from other secondary-to-primary measurements such as B/C. ]{}' author: - | O. Adriani$^{1,2}$, G. C. Barbarino$^{3,4}$, G. A. Bazilevskaya$^{5}$, R. Bellotti$^{6,7}$, M. Boezio$^{8}$,\ E. A. Bogomolov$^{9}$, M. Bongi$^{1,2}$, V. Bonvicini$^{8}$, S. Borisov$^{10,11,12}$, S. Bottai$^{2}$,\ A. Bruno$^{6,7}$, F. Cafagna$^{7}$, D. Campana$^{4}$, R. Carbone$^{8}$, P. Carlson$^{13}$,\ M. Casolino$^{10,14}$, G. Castellini$^{15}$, I. A. Danilchenko$^{12}$, M. P. De Pascale$^{10,11,\dagger}$, C. De Santis$^{11}$,\ N. De Simone$^{11}$, V. Di Felice$^{11}$, V. Formato$^{8,16}$, A. M. Galper$^{12}$, A. V. Karelin$^{12}$,\ S. V. Koldashov$^{12}$, S. Koldobskiy$^{12}$, S. Y. Krutkov$^{9}$, A. N. Kvashnin$^{5}$, A. Leonov$^{12}$,\ V. Malakhov$^{12}$, L. Marcelli$^{11}$, A. G. Mayorov$^{12}$, W. Menn$^{17}$, V. V. Mikhailov$^{12}$,\ E. Mocchiutti$^{8}$, A. Monaco$^{6,7}$, N. Mori$^{2}$, N. Nikonov$^{9,10,11}$, G. Osteria$^{4}$, F. Palma$^{10,11}$,\ P. Papini$^{2}$, M. Pearce$^{13}$, P. Picozza$^{10,11}$, C. Pizzolotto$^{8,18,19}$, M. Ricci$^{20}$, S. B. Ricciarini$^{15}$,\ L. Rossetto$^{13}$, R. Sarkar$^{8}$, M. Simon$^{17}$, R. Sparvoli$^{10,11}$, P. Spillantini$^{1,2}$,\ Y. I. Stozhkov$^{5}$, A. Vacchi$^{8}$, E. Vannuccini$^{2}$, G. Vasilyev$^{9}$, S. A. Voronov$^{12}$,\ Y. T. Yurkin$^{12}$, J. Wu$^{13,*}$, G. Zampa$^{8}$, N. Zampa$^{8}$, V. G.
Zverev$^{12}$,\ title: Measurement of the isotopic composition of hydrogen and helium nuclei in cosmic rays with the  experiment --- Introduction ============ Hydrogen and helium isotopes in cosmic rays are generally believed to be of secondary origin, resulting from the nuclear interactions of primary cosmic-ray protons and [^4^He]{} with the interstellar medium, mainly through spallation of primary [^4^He]{} nuclei or through the reaction $p+p \rightarrow \text{ }^2\text{H}+\pi^+$. These isotopes can be used to study and constrain parameters in propagation models for galactic cosmic rays (GCRs) . $^2$H and $^3$He are the most abundant secondary isotopes in galactic cosmic rays and have peculiar features: $^2$H is the only secondary species (apart from antiprotons) that can also be produced in proton-proton interactions and $^3$He is the only secondary fragment with an $A/Z$ significantly different from two ($A$ and $Z$ being respectively the mass and charge number). The importance of light isotopes has been recognized for about 40 years, since the first measurements became available [@1975ICRC....1..319G; @1975ApJ...202..265G; @1976ApJ...206..616M; @1978ApJ...221.1110L]. Measurements require very good mass resolution, a challenge for instruments deployed in space. With the exception of the results from AMS-01 [@2011ApJ...736..105A_red] most of the measurements were performed using stratospheric balloons [@2002ApJ...564..244W; @1998ApJ...496..490R_red; @1995ICRC....2..630W; @1991ApJ...380..230W; @1993ApJ...413..268B_red], where the residual atmosphere above the instrument caused a non-negligible background of secondary particles. The atmospheric background estimation is subject to large uncertainties (e.g. the limited knowledge of isotope production cross sections). Due to these limitations, experimental errors are generally very large and the focus of measurements therefore shifted to other secondary species, like boron or sub-iron nuclei [@1998ApJ...509..212S].
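As a kinematic aside on the $p+p \rightarrow \text{ }^2\text{H}+\pi^+$ production channel mentioned above, the beam kinetic-energy threshold for a target proton at rest follows from invariant-mass conservation. The sketch below is not from the paper; it assumes approximate standard particle masses and a generic helper function.

```python
import math

# Particle masses in MeV/c^2 (approximate standard values; an assumption here)
M_P, M_D, M_PI = 938.272, 1875.613, 139.570

def threshold_kinetic_energy(m_projectile, m_target, final_masses):
    """Beam kinetic energy at which the invariant mass squared s reaches
    (sum of final-state masses)^2, for a fixed-target collision."""
    s_min = sum(final_masses) ** 2
    # s = m_proj^2 + m_targ^2 + 2*m_targ*E_beam  =>  solve for E_beam
    e_beam = (s_min - m_projectile**2 - m_target**2) / (2.0 * m_target)
    return e_beam - m_projectile  # kinetic energy of the beam proton

t_th = threshold_kinetic_energy(M_P, M_P, [M_D, M_PI])
print(f"p p -> d pi+ threshold: {t_th:.0f} MeV beam kinetic energy")
```

The result, a few hundred MeV, is consistent with deuteron production being abundant at the energies primary cosmic-ray protons carry.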
The light-isotope quartet offers an independent and unique way to address the issue of “universality in GCR propagation”, which can be crucial when analysing antiproton and positron spectra to search for possible primary signals (see e.g. @1989AdSpR...9..145S). [The  experiment has been observing GCRs over the 23^rd^ solar minimum since July 2006 at an altitude ranging from $\sim 350$ km to $\sim 600$ km on board the Russian Resurs-DK1 satellite, which executes a quasi-polar orbit. The low-Earth orbit allows  to perform measurements in an environment free from the background induced by atmospheric cosmic rays. ]{} The results presented here are based on the data set collected by  between July 2006 and December 2007. From about $10^9$ triggered events, accumulated during a total acquisition time of 528 days, 5,378,795 hydrogen nuclei were selected in the energy interval between 100 and 600 MeV/n and 1,749,964 helium nuclei between 100 and 900 MeV/n. The data presented here replace and complete the preliminary results presented in [@2011ASTRA...7..465C]. A more complete evaluation of the selection efficiencies and contamination due to an improved simulation resulted in better reconstructed [^2^H]{} and [^3^He]{} spectra and [^3^He]{}/[^4^He]{} ratio. The  apparatus ============== The  spectrometer was designed and built to study the antimatter component of cosmic rays from tens of MeV up to hundreds of GeV and with a significant increase in statistics with respect to previous experiments. To reach this goal the apparatus was optimized for the study of $Z=1$ particles and to reach a high level of electron-proton discrimination. The core of the instrument (Fig. \[im:pamela\]) is a permanent magnet with an almost uniform magnetic field inside the magnetic cavity which houses six planes of double-sided silicon microstrip detectors to measure the trajectory of incoming particles.
The spatial resolution is $\sim3$ $\mu$m in the bending view (also referred to as the $x$-view) and $\sim11$ $\mu$m in the non-bending view (also referred to as the $y$-view). The main task of the magnetic spectrometer is to measure the particle rigidity $\rho = pc/Ze$ ($p$ and $Ze$ being respectively the particle momentum and charge, and $c$ the speed of light) and ionization energy losses ($dE/dx$). The Time-of-Flight (ToF) system comprises three double layers of plastic scintillator paddles (S1, S2, and S3, as shown in Fig. \[im:pamela\]) with the first two placed above and the third immediately below the magnetic spectrometer. The ToF system provides 12 independent measurements of the particle velocity, $\beta = v/c$, combining the time of passage information with the track length derived from the magnetic spectrometer. By measuring the particle velocity the ToF system discriminates between down-going particles and up-going splash albedo particles, thus enabling the spectrometer to establish the sign of the particle charge. The ToF system also provides 6 independent $dE/dx$ measurements, one for each scintillator plane. A silicon-tungsten electromagnetic sampling calorimeter made of 44 single-sided silicon microstrip detectors interleaved with 22 plates of tungsten absorber (for a total of 16.3 $X_0$) mounted below the spectrometer is used for hadron/lepton separation, with a shower tail catcher scintillator (S4). A neutron detector at the bottom of the apparatus helps to increase this separation. The anticoincidence (AC) system comprises 4 scintillators surrounding the magnet (CAS), one surrounding the cavity entrance (CAT) and 4 scintillators surrounding the volume between S1 and S2 (CARD). The system is used to reject events where the presence of secondary particles generates a false trigger or the primary particle suffers an inelastic interaction. The readout electronics, the interfaces with the CPU and all primary and secondary power supplies are housed around the detectors.
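To illustrate how the spectrometer and ToF measurements combine, the particle mass can be reconstructed from rigidity and velocity by inverting $\beta = (1 + m^2/(Z\rho)^2)^{-1/2}$. The track length and flight time below are purely illustrative placeholders, not flight data or actual detector dimensions.

```python
import math

C = 0.299792458  # speed of light in m/ns

def beta_from_tof(path_m, dt_ns):
    """Velocity beta = v/c from the time of flight and the track
    length derived from the spectrometer."""
    return path_m / (dt_ns * C)

def mass_gev(rigidity_gv, z, beta):
    """Invert beta = (1 + m^2/(Z rho)^2)^(-1/2) to recover the mass."""
    return z * rigidity_gv * math.sqrt(1.0 / beta**2 - 1.0)

# Illustrative Z=1 track of 0.9 GV crossing ~0.78 m in ~3.8 ns:
b = beta_from_tof(0.78, 3.8)
print(round(b, 3), round(mass_gev(0.9, 1, b), 2))  # a proton-like mass
```

The reconstructed mass comes out near the nucleon mass, which is exactly the handle used for the isotope separation described later.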
The apparatus is enclosed in a pressurized container attached to the side of the Resurs-DK1 satellite. The total weight of  is 470 kg while the power consumption is 355 W. A more detailed description of the instruments and the data handling can be found in [@Pi07]. Data analysis ============= Event selection --------------- The month of December 2006 was discarded to avoid possible biases from the solar particle events that took place during the 13^th^ and 14^th^ of December. [The event selections adopted are similar to those used in previous works on the high energy proton and helium fluxes [@2011Sci...332...69A_red] and on the time dependence of the low energy proton flux [@2013ApJ...765...91A]. ]{} ### Event quality selections In order to ensure a reliable event reconstruction a set of basic criteria was developed. The aim of these requirements was to select positively charged particles with a precise measurement of the absolute value of the particle rigidity and velocity. Furthermore, events with more than one track, likely to be products of hadronic interactions occurring in the top part of the apparatus, were rejected. Events were selected requiring: - A single track fitted within the spectrometer fiducial volume where the reconstructed track is at least 1.5 mm away from the magnet walls. - A positive value for the reconstructed track curvature. - Selected tracks must have at least 4 hits on the $x$-view and at least 3 hits on the $y$-view to ensure a good rigidity reconstruction. - A maximum of one hit paddle in the two top planes of the ToF system. - The hit paddles in S1 and S2 must match the extrapolated trajectory from the spectrometer. - A positive value for the measured time of flight. This selection ensures that the particle enters  from above. - For the selection of the hydrogen sample no activity in the CARD and CAT scintillators of the anticoincidence system is required. 
The anticoincidence selections on the hydrogen sample were necessary since most secondary particles that entered the  fiducial acceptance were by-products of hadronic interactions taking place in the aluminium dome or in the S1 and S2 scintillators. Such particles were generally accompanied by other secondary particles which hit the anticoincidence detectors. For the selection of the helium sample there were no anticoincidence requirements since contamination by secondary helium coming from heavier nuclei spallation is negligible. ### Galactic particle selection {#sec:galactic} The Resurs-DK1 satellite orbital information was used to estimate the local geomagnetic cutoff, $G$, in the Störmer approximation [@shea] using the IGRF magnetic field model along the orbit. The maximum zenith angle for events entering the  acceptance was 24 degrees with a mean value of 10 degrees. To select the primary (galactic) cosmic ray component, particles were binned by requiring that $\rho_m > k \cdot G$, where $\rho_m$ is the lowest edge of the rigidity interval and $k=1.3$ is a safety factor required to remove any directionality effects due to the Earth’s penumbral regions. [Galactic particles losing energy while crossing the detector may be rejected by this selection. This effect is accounted for using Monte Carlo simulations. ]{} ### Charge selection {#sec:chargesel} Particle charge identification relies on the ionization measurements provided by the magnetic spectrometer. Depending on the number of hit planes there can be up to 12 $dE/dx$ measurements. The arithmetic mean of those measurements is shown in Fig. \[im:dedx\]. A rigidity dependent selection on the mean $dE/dx$ from the spectrometer is used to select $Z=1$ or $Z=2$ candidates and is depicted by the solid lines in Fig. \[im:dedx\].
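Returning to the galactic-particle selection of Sec. \[sec:galactic\], the cut amounts to a simple per-bin predicate; a minimal sketch (with illustrative numbers, not flight data) might look like:

```python
def is_galactic(rho_lower_edge_gv, cutoff_gv, k=1.3):
    """Keep a particle only if the lowest edge of its rigidity bin
    exceeds the local Stoermer cutoff times the safety factor k."""
    return rho_lower_edge_gv > k * cutoff_gv

# Illustrative pairs of (lowest bin-edge rigidity in GV, local cutoff in GV)
events = [(1.7, 1.1), (0.9, 1.1), (2.5, 0.4)]
selected = [e for e in events if is_galactic(*e)]
print(selected)  # the 0.9 GV bin fails since 0.9 < 1.3 * 1.1
```

The safety factor $k$ widens the rejected region so that trajectories in the Earth's penumbra are excluded regardless of arrival direction.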
The residual contamination of $Z=2$ particles in the $Z=1$ sample was studied by selecting helium events using the $dE/dx$ information from the ToF system, and then applying the $Z=1$ selection previously described. The fraction of misidentified helium events was found to be less than $10^{-4}$. Since the [^2^H]{}/[^4^He]{} ratio is roughly 0.15, the resulting contamination in the [^2^H]{} sample from misidentified [^4^He]{} was estimated to be less than $10^{-3}$. Similarly, the $Z=1$ residual contamination as well as the contamination by heavier nuclei in the $Z=2$ sample was estimated to be negligible. Isotope separation ------------------ The selection criteria described in the previous section provided clean samples of $Z = 1$ and $Z=2$ particles. [In each sample an isotopic separation at fixed rigidity is possible by reconstructing $\beta$, where $$\beta = \left( 1 + \frac{m^2}{Z^2 \rho^2} \right)^{-1/2}$$ as shown in Fig. \[im:beta\_r\] for events in the ($\beta$, $\rho$) plane. ]{} Isotope separation as well as the determination of isotope fluxes was performed in intervals of kinetic energy per nucleon. Since the magnetic spectrometer measures the rigidity of particles, this implies different rigidity intervals according to the isotope under study. For example, Fig. \[im:beta\_fit\_h\] shows the 1/$\beta$ distributions used to select [^1^H]{} (top panel) and [^2^H]{} (bottom panel) in the kinetic energy interval 0.329 - 0.361 GeV/n corresponding to 0.85 - 0.9 GV for [^1^H]{} and 1.7 - 1.8 GV for [^2^H]{}. Particle counts were subsequently extracted from a Gaussian fit to the 1/$\beta$ distribution in each rigidity range, as shown by the solid lines in Fig. \[im:beta\_fit\_h\]. Separation between [^3^He]{} and [^4^He]{} was obtained in a similar way. Fig.
\[im:beta\_fit\_he\] shows the 1/$\beta$ distributions used to select [^3^He]{} (bottom panel) and [^4^He]{} (top panel) in the kinetic energy interval 0.312 - 0.350 GeV/n corresponding to 1.24 - 1.32 GV for [^3^He]{} and 1.65 - 1.76 GV for [^4^He]{}. It should be noted that, because of the large proton background, an additional selection, based on the lowest energy release among the 12 measurements provided by the tracking system (often referred to as the [*truncated mean*]{}), was used to produce the 1/$\beta$ distributions in the [^2^H]{} case. Fig. \[im:trk\_lowest\] shows this quantity for $Z = 1$ particles. The solid line indicates the condition on the minimum energy release used for the selection. The selected numbers of hydrogen and helium events are summarized in the second and third columns of Tables \[table:events\_h1\] and \[table:events\_he\]. Flux determination ------------------ The procedure described in the previous section was used to estimate the number of [^1^H]{} and [^2^H]{} events in the $Z=1$ sample and the number of [^3^He]{} and [^4^He]{} events in the $Z=2$ sample. To derive each isotope flux the number of selected events had to be corrected for the selection efficiencies, particle losses, contamination and energy losses. These corrections were obtained using a Monte Carlo simulation of the  apparatus based on the `GEANT4` code [@Geant4] and from the flight data. [The simulation contains an accurate representation of the geometry and performance of the  detectors. The measured noise of each silicon plane of the spectrometer and its performance variations over the duration of the measurement were accounted for. The simulation code was validated by comparing the distributions of several significant variables with those obtained from real data. ]{} Hadronic interactions for all the isotopes under study were handled via the `QGSP_BIC_HP` physics list.
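Stepping back to the isotope separation described above: the correspondence between kinetic-energy-per-nucleon bins and the isotope-dependent rigidity bins follows from relativistic kinematics. The sketch below is illustrative (not the collaboration's code); it uses an average nucleon mass of about 0.938 GeV, so it reproduces the quoted bin edges only approximately.

```python
import math

M_N = 0.938  # average nucleon mass in GeV (an approximation)

def rigidity(kin_per_nucleon, a, z, m_nucleon=M_N):
    """Rigidity in GV of a nucleus with mass number a and charge z
    at a given kinetic energy per nucleon (GeV/n)."""
    p_per_nucleon = math.sqrt(kin_per_nucleon * (kin_per_nucleon + 2 * m_nucleon))
    return a * p_per_nucleon / z

def beta(rho, a, z, m_nucleon=M_N):
    """Velocity from beta = (1 + m^2/(Z^2 rho^2))^(-1/2), m = a*m_nucleon."""
    m = a * m_nucleon
    return 1.0 / math.sqrt(1.0 + (m / (z * rho)) ** 2)

# 0.329 GeV/n corresponds to ~0.85 GV for 1H but ~1.7 GV for 2H:
print(round(rigidity(0.329, a=1, z=1), 2))  # ~0.85
print(round(rigidity(0.329, a=2, z=1), 2))  # ~1.70
# at fixed rigidity the lighter isotope is faster, enabling separation:
print(beta(1.7, a=1, z=1) > beta(1.7, a=2, z=1))
```

This is why, at the same kinetic energy per nucleon, each isotope must be fitted in its own rigidity window.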
The following corrections to the number of selected events were applied: - [*Selection efficiencies:*]{} The redundant information provided by  allowed most of the selection efficiencies to be estimated directly from flight data. For example the efficiency of the charge selections was evaluated on a sample of events selected with the ToF $dE/dx$ measurements in the same way as described in section \[sec:chargesel\] for the charge misidentification study. The efficiency of the tracking system was however obtained from the Monte Carlo simulation. The decrease in efficiency from 2006 to 2007 is due to the failure of some of the front-end chips in the tracking system. This situation was included in the Monte Carlo simulation, as discussed in [@2013ApJ...765...91A]. The efficiencies of the various selections are reported in Table \[table:eff\]. - [*Hadronic interactions:*]{} Helium and hydrogen nuclei may be lost due to hadronic interactions in the 2 mm thick aluminium pressurized container and the top scintillator detectors. The correction to the flux due to this effect was included in the  geometrical factor as follows: $$G(E) = \left[1 - b(E)\right] G_F$$ where $G(E)$ is the effective geometrical factor used for the flux determination, $b(E)$ is a correction factor which accounts for the effect of inelastic scattering, $G_F$ is the nominal geometrical factor which is almost constant above 1 GV and slowly decreases by $\sim2\%$ to lower energies, where the particle trajectory in the magnetic field is no longer straight. The requirement on the fiducial volume corresponds to a geometrical factor $G_F = 19.9$ cm^2^ sr above 1 GV. The correction factor $b(E)$ is different for each isotope and has been derived from the Monte Carlo simulation, being $\simeq 6\%$ for protons, $\simeq 10\%$ for deuterium, and $\simeq 13\%$ for both helium isotopes. The nominal geometrical factor and the effective geometrical factor for each isotope are shown in Fig. \[im:geofa\]. 
- [*Contamination:*]{} The contribution to [^2^H]{} from [^4^He]{} inelastic scattering was evaluated from the simulation (Fig. \[im:contamination\]) and subtracted from the raw [^2^H]{} counts (see column 4 of Table \[table:events\_h1\]). The contamination in the [^3^He]{} sample from [^4^He]{} fragmentation was also evaluated and it was estimated to be less than $1\%$. This was included in the systematic uncertainty of the measurement. - [*Energy loss and resolution:*]{} The finite resolution of the magnetic spectrometer and particle slowdown due to ionization energy losses results in a distortion of the particle spectra. A Bayesian unfolding procedure, described in [@dagostini], was used to derive the number of events at the top of the payload (see [@2011Sci...332...69A_red]). The flux was then calculated as follows: $$ \Phi_{\text{ToP}} (E) = \frac{N_\text{ToP}(E)}{T G(E) \Delta E}$$ where $N_\text{ToP}(E)$ is the unfolded particle count for energy $E$, also corrected for all the selection efficiencies (see Tables \[table:events\_h1\] and \[table:events\_he\], rightmost two columns), $\Delta E$ is the energy bin width, and $G(E)$ is the effective geometrical factor. The live time, $T$, as evaluated by the trigger system, depends on the orbital selection as described in section \[sec:galactic\] (e.g. see [@bru08]). The live and dead time are cross-checked with the total acquisition time measured by the on-board CPU to remove possible systematic effects in the time counting. Systematic uncertainties ------------------------ The possible sources of systematic uncertainties considered in this analysis are listed below and are also included in Tables \[tab:hydrogen\] and \[tab:helium\] and in Figs. \[im:fig\_data\_h\], \[im:fig\_data\_he\] and \[im:ratios\]. 
- [*Quality of the $1/\beta$ fit:*]{} The quality of the Gaussian fit procedure was tested using the truncated mean of the energy deposited in the electromagnetic calorimeter to select pure samples of [^1^H]{} and [^2^H]{} from non-interacting events. The two samples were then merged to form a control sample for the fitting algorithm. The number of reconstructed events from the Gaussian fit was found to agree with the number of events selected with the calorimeter, so no systematic uncertainty was assigned to this procedure. - [*Selection efficiencies:*]{} The estimation of the selection efficiencies is affected by a statistical error due to the finite size of the sample used for the efficiency evaluation. This error was considered and propagated as a systematic uncertainty. For the efficiency of the ToF and AC selections this uncertainty is $0.21\%$ at low energy (120 MeV/n) and drops to $0.14\%$ at high energy (600-900 MeV/n). For the tracker selections the uncertainty is $0.3\%$ at low energy increasing to $0.4\%$ at high energy. - [*Galactic particle selection:*]{} The correction for particles lost due to this selection has an uncertainty, due to the size of the Monte Carlo sample, which decreases from $6\%$ to $0.07\%$ as the energy increases from 120 MeV/n to 900 MeV/n. - [*Contamination subtraction:*]{} The subtraction of the contamination results in a systematic uncertainty on the [^2^H]{} flux of $1.9\%$ at low energy dropping below $0.1\%$ at 300 MeV/n due to the finite size of the Monte Carlo sample. To test the validity of the Monte Carlo simulation the ^3^H component, identified as the additional cluster of events at low $\beta$ in the hydrogen sample visible in Fig. \[im:beta\_fit\_h\], was used. The ^3^H events are created by [^4^He]{} spallation in the top part of the apparatus since no tritium of galactic origin should survive propagation to Earth. 
The observed number of ^3^H events was used to test that the Monte Carlo simulation correctly inferred the number of [^2^H]{} events coming from [^4^He]{} fragmentation. For example, for the 2006 data-set in the rigidity range between $1.7$ GV and $1.8$ GV the flight data sample contains $136 \pm 17$ tritium events, while $110 \pm 15$ events are expected according to the Monte Carlo simulation (Fig. \[im:beta\_fit\_h\]). Simulation and flight data were in agreement within a 10% tolerance. This discrepancy was treated as an additional systematic uncertainty on the estimated number of contamination events. The 10% systematic uncertainty on the [^2^H]{} contamination translates into an additional 1% uncertainty on the number of reconstructed [^2^H]{} events. - [*Geometrical factor:*]{} The uncertainty on the effective geometrical factor as estimated from the Monte Carlo simulation is almost independent of energy and amounts to $0.18\%$. - [*Unfolding procedure:*]{} As discussed in [@2011Sci...332...69A_red] two possible systematic effects have been studied regarding the unfolding procedure: the uncertainty associated with the simulated smearing matrix and the intrinsic accuracy of the procedure. The former was constrained by checking for compatibility between measured and simulated spatial residuals and was found to be negligible. The latter was estimated by folding and unfolding a known spectral shape with the spectrometer response and was found to be 2%, independent of energy. Results ======= Figures \[im:fig\_data\_h\] and \[im:fig\_data\_he\] show hydrogen and helium isotope fluxes (top) and the ratios of the fluxes (bottom). Results are also reported in Tables \[tab:hydrogen\] and \[tab:helium\]. Fig.
\[im:ratios\] shows the [^2^H]{}/[^4^He]{} ratio as a function of kinetic energy per nucleon, compared to previous measurements [@2011ApJ...736..105A_red; @2002ApJ...564..244W; @1998ApJ...496..490R_red; @1995ICRC....2..630W; @1991ApJ...380..230W; @1993ApJ...413..268B_red]. It is worth noting that flux ratios (in particular the [^3^He]{}/[^4^He]{} ratio), if modulated using the force-field approximation [@1968ApJ...154.1011G], show very little dependence on solar activity and can therefore be used to discriminate between various propagation models of GCR in the Galaxy. The  results are the most precise to date. Considering the relatively large spread in the existing data, results agree with previous measurements, in particular with BESS results for $^2$H and IMAX results for $^3$He. Previous measurements are affected by large uncertainties and, for $^3$He where more measurements are available, there is a large spread between data. All the measurements displayed in Figures \[im:fig\_data\_h\], \[im:fig\_data\_he\], and \[im:ratios\], except AMS-01, are from balloon-borne experiments and are affected by a non-negligible background of atmospheric secondary particles. A high precision measurement of the H and He isotope quartet abundances represents a significant step forward in modelling the origin and propagation of GCRs. The constraints on diffusion-model parameters set by the quartet ($^1$H, $^2$H, $^3$He, and $^4$He) were recently revisited . It was found that the constraints on the parameters were competitive with those obtained from the B/C flux ratio analysis and available data supported the universality of GCR propagation in the Galaxy. The tightest constraint was obtained when the He flux was included in the fit. This is because at energies of a few GeV about 10$\%$ of He is from fragmentation of heavier nuclei, which is a non-negligible amount given the precision (1$\%$ statistical) of the  He data [@2011Sci...332...69A_red]. , O., [*et al. *]{}2011. 
, [**332**]{}, 69 - Supplementary Online Material. Adriani, O., [*et al.*]{} 2013, , 765, 91 Agostinelli, S. et al., 2003, Nucl. Instrum. Meth. A, 506, 250 AMS Collaboration, Aguilar, M., Alcaraz, J., et al. 2002, , 366, 331 , M., [*et al. *]{}2011. , [**736**]{}, 105. , J. J., [*et al. *]{}1993. , [**413**]{}, 268. , A., Cosmic ray antiprotons measured in the pamela experiment, Ph.D. thesis, University of Bari, Bari, Italy, http://pamela.roma2.infn.it/ (2008). Casolino, M., [*et al.*]{} 2011, Astrophysics and Space Sciences Transactions, 7, 465 , B., [Derome]{}, L., [Maurin]{}, D., & [Putze]{}, A. 2012. , [**539**]{}, A88. 1995\. , [**A362**]{}, 487. Garcia-Munoz, M., Mason, G. M., & Simpson, J. A. 1975, International Cosmic Ray Conference, 1, 319 Garcia-Munoz, M., Mason, G. M., & Simpson, J. A. 1975, , 202, 265 , L. J., & [Axford]{}, W. I. 1968. , [**154**]{}, 1011. Leech, H. W., & O’ Gallagher, J. J. 1978, , 221, 1110 , S., & [Maus]{}, S. 2005. , [**57**]{}, 1135. Mewaldt, R. A., Stone, E. C., & Vogt, R. E. 1976, , 206, 616 , P., [*et al. *]{}2007. , [**27**]{}, 296. , V. S., [*et al. *]{}2006. , [**642**]{}, 902. , O., [*et al. *]{}1998. , [**496**]{}, 490. , M. A., [*et al. *]{}1987. , [**48**]{}, 200–205. Strong, A. W., & Moskalenko, I. V. 1998, , 509, 212 Strong, A. W., Moskalenko, I. V., & Ptuskin, V. S. 2007, Annual Review of Nuclear and Particle Science, 57, 285 , S. A. 1989. , [**9**]{}, 145. Tomassetti, N. 2012, , 342, 131 , A. E., [*et al. *]{}2011. . , J. Z., [Seo]{}, E. S., [*et al. *]{}. 2002. , [**564**]{}, 244. , W. R., [*et al. *]{}1991. , [**380**]{}, 230. , J. P., [*et al. *]{}1995. International Cosmic Ray Conference, 2, 630.
--- abstract: 'Several observables of unbound nucleons which are to some extent sensitive to the medium modifications of nucleon-nucleon elastic cross sections in neutron-rich intermediate energy heavy ion collisions are investigated. The splitting effect of neutron and proton effective masses on cross sections is discussed. It is found that the transverse flow as a function of rapidity, the $Q_{zz}$ as a function of momentum, and the ratio of halfwidths of the transverse to that of the longitudinal rapidity distribution $R_{t/l}$ are very sensitive to the medium modifications of the cross sections. The transverse momentum distribution of correlation functions of two nucleons does not yield information on the in-medium cross section.' address: | 1) Frankfurt Institute for Advanced Studies (FIAS), Johann Wolfgang Goethe-Universität, Max-von-Laue-Str. 1, D-60438 Frankfurt am Main, Germany\ 2) China Institute of Atomic Energy, P.O. Box 275 (18), Beijing 102413, P.R. China\ 3) Institut für Theoretische Physik, Johann Wolfgang Goethe-Universität, Max-von-Laue-Str. 1, D-60438 Frankfurt am Main, Germany\ author: - 'Qingfeng Li$\, ^{1}$[^1] , Zhuxia Li$\, ^{2}$ , Sven Soff$\, ^{3}$, Marcus Bleicher$\, ^{3}$, and Horst Stöcker$\, ^{1,3}$' title: 'Medium modifications of the nucleon-nucleon elastic cross section in neutron-rich intermediate energy HICs' --- Introduction ============ The isospin dependence of the in-medium nucleon-nucleon (NN) interaction in dense neutron-rich nuclear matter attracts more and more interest with the development of upcoming experiments at the Rare Isotope Accelerator (RIA) laboratory (USA) and at the new international accelerator Facility for Antiproton and Ion Research (FAIR) at the Gesellschaft für Schwerionenforschung (GSI, Germany). Within transport theory, both the mean field and the in-medium two-body scattering cross sections ultimately have the same origin and can be derived from the same NN interaction [@Han94; @Bass98].
In [@Han94] the in-medium NN elastic scattering cross sections were studied based on the quantum hadrodynamics (QHD) model and the Skyrme interaction with the closed-time Green’s function technique, without considering the isospin dependence of the cross sections. The symmetry potential energy in the mean field part has been explored and found to be very important for the understanding of many problems in intermediate energy nuclear physics as well as in astrophysics (see, for example, [@BaoAnBook01; @baranRP; @Li:1997px]). Naturally, the next step is to ask about the isospin dependence of the in-medium NN cross section and how to probe it practically. The isospin dependence of the in-medium NN elastic cross sections was studied based on the extended QHD model in which $\rho$ [@Li:2000sh] as well as $\delta$ [@Li:2003vd] mesons are included. In [@Li:2003vd] it was found that $\sigma_{nn}^*$ is smaller than $\sigma_{pp}^*$, since the effective neutron mass in neutron-rich matter is smaller than that of the proton once the contribution of the $\delta$ meson is considered [@Liu:2001iz]. Recently, the neutron-proton effective-mass splitting has been widely studied. There exist two distinct definitions of the effective mass: the Dirac mass $m_D^*$ and the nonrelativistic (NR) effective mass $m_{NR}^*$ [@DiToro:2005ac; @vanDalen:2005ns]. They actually have completely different origins. It is further found that the neutron’s Dirac mass is always smaller than the proton’s in a neutron-rich medium.
For the NR mass the situation is more complicated: van Dalen et al. [@vanDalen:2005ns] showed that within relativistic mean field theory (RMF) the NR mass has the same behavior as the Dirac mass, but when the NR mass is calculated with the Dirac-Brueckner-Hartree-Fock (DBHF) theory [@vanDalen:2005ns; @Sammarruca:2005ch; @Sammarruca:2005tk] or the nonrelativistic Brueckner-Hartree-Fock (BHF) theory [@Zuo:2001bd; @Zuo:2005hw], the nonrelativistic neutron mass becomes larger than that of the proton in a neutron-rich medium. When the NR mass is calculated with Skyrme interactions at the mean field level, whether the neutron effective mass is larger or smaller than the proton’s depends on the version of the Skyrme interaction used; for instance, the neutron effective mass is larger than the proton’s for $SKM^{*}$, while for SLyb it is just the opposite [@DiToro:2005ac]. Recently, sensitive probes of the medium modification of NN elastic cross sections in neutron-rich heavy ion collisions (HICs) at intermediate energies were studied by B. A. Li et al. [@Li:2005ib; @Li:2005jy], in which the NR mass splitting was used. Again, it is seen that the effective cross sections are influenced in opposite ways by the different definitions of the effective nucleon mass: based on the effective Lagrangian of density dependent relativistic hadron theory, our calculations [@Li:2003vd] give the trend (in a neutron-rich nuclear medium) $\sigma_{nn}^*< \sigma_{pp}^*$, while, based on the DBHF model [@Sammarruca:2005ch; @Sammarruca:2005tk], the trend is the opposite: $\sigma_{nn}^*> \sigma_{pp}^*$. For the effective $np$ elastic cross section $\sigma_{np}^*$, the two approaches give similar results. As a default in this work, we refer to the splitting $\sigma_{nn}^*<\sigma_{pp}^*$ as the “Dirac” case and to $\sigma_{nn}^*>\sigma_{pp}^*$ as the “NR” case (supposing a neutron-rich medium).
How does the difference between the in-medium NN cross sections resulting from the different splitting effects (NR and Dirac cases) influence the dynamics of HICs at intermediate energies? What (more) observables are sensitive to the medium modification of NN elastic cross sections? In this paper we would like to extend this topic with more observables. The newly updated UrQMD transport model, adapted especially for simulating intermediate energy HICs [@Li:2005zz; @Li:2005kq; @Li:2005gf], is adopted for the calculations in this work. A soft equation of state (EoS) with a symmetry potential energy depending linearly on the nuclear density is adopted [@Li:2005gf]. The neutron-rich reactions $^{96}{\rm Zr}+^{96}{\rm Zr}$ and $^{78}{\rm Ni}+^{96}{\rm Zr}$ at a beam energy $E_b=100A\,{\rm MeV}$ and for reduced impact parameters $b/b_0=0$ and $0.5$ are chosen, where $b_0=R_{proj}+R_{targ}$ is the maximum impact parameter of the colliding system. For each reaction $2\cdot 10^5$ events are calculated. In this work, we suppose that the in-medium cross sections can be factorized as the product of a medium correction factor $F(u,\alpha,p)$ and the free NN elastic scattering cross sections $\sigma^{\rm free}$, based on present results of theoretical calculations in Refs. , which reads $$\sigma^*=F(u,\alpha,p) \sigma^{\rm free}. \label{ftot}$$ The medium correction factor $F$ depends, in general, on the reduced nuclear density $u=\rho/\rho_0$, the isospin asymmetry $\alpha=(\rho_n-\rho_p)/\rho$ and the momentum. Here $\rho$, $\rho_n$, and $\rho_p$ are the total, neutron and proton densities, respectively, and $\rho_0$ represents the normal nuclear density.
In order to study the various effects we consider three different cases: (1) “NoMed”, $F=1$, i.e., the cross sections in free space are used; (2) “PartMed”, $F=F_\alpha\cdot F_u$, i.e., the isospin-scalar density effect ($F_u$) and the isospin-vector splitting effect ($F_\alpha$) on the NN elastic cross sections are considered; and (3) “FullMed”, $F=F_\alpha^{\rm p}\cdot F_u^{\rm p}$, i.e., momentum constraints are imposed on top of the case “PartMed”, and the density dependence of the splitting effect is also considered. The factor $F_u$ is expressed as $$F_u=\frac{1}{3}+\frac{2}{3}\exp[-u/0.54568], \label{fr}$$ which is similar to the density dependence of the scaling factor used in [@Li:2005jy]. From Eq. (\[fr\]) the decrease of the cross sections as a function of nuclear density is clear; for example, $F_{u=2}=0.35$. It was also seen in our previous work [@Li:2003vd] that the density dependence of the neutron-neutron (or proton-proton) and neutron-proton elastic cross sections differs when the isovector $\rho$-meson contribution is considered. This effect is not considered here, in order to observe more clearly the probable splitting effect, which originates from the isovector $\delta$-meson contribution from the point of view of the extended QHD theory. To model the splitting effect in the in-medium nucleon-nucleon elastic cross sections, we use $$F_\alpha=1+\tau_{ij}\eta A(u)\alpha. \label{fd}$$ For $\tau_{ij}$ in Eq. (\[fd\]), $\tau_{ij}=-1$ when $i=j=n$, $\tau_{ij}=+1$ when $i=j=p$, and $\tau_{ij}=0$ when $i \neq j$. The values $\eta = +1$ and $-1$ represent the Dirac- and NR-type splittings, respectively. $A(u)$ represents the density dependence of the splitting effect $F_\alpha$, which differs between the Dirac and NR cases and is expressed as $$A(u)=\left\{ \begin{array}{l} 0.85 \hspace{1.5cm} {\rm "PartMed-NR"} \\ \frac{0.85}{1+3.25 u} \hspace{1cm} {\rm "FullMed-NR"} \end{array} \right. ,\label{fau1}$$ and $$A(u)=\left\{ \begin{array}{l} 0 \hspace{1.5cm} {\rm "PartMed-Dirac"} \\ 0.25 u \hspace{1cm} {\rm "FullMed-Dirac"} \end{array} \right. ,\label{fau2}$$ respectively. The different density dependences of $A(u)$ shown in Eqs. (\[fau1\]) and (\[fau2\]) originate from the different density dependence of the splitting effect on the cross sections in different theories [@Li:2003vd; @Sammarruca:2005ch]: in [@Sammarruca:2005ch] the sensitivity of the splitting of the neutron-neutron and proton-proton elastic cross sections to the isospin asymmetry was seen to be weaker at larger densities, while in our previous work based on the extended QHD model [@Li:2003vd] an increasing density dependence of the splitting effect was found, due to a larger neutron-proton effective-mass splitting at higher densities. The different behavior of the density dependence of the splitting effect on the cross sections is interesting and deserves further investigation. The $F_\alpha^{\rm p}$ and $F_u^{\rm p}$ factors in the case “FullMed” are expressed in one formula, $$F_{\alpha,u}^{\rm p}=\left\{ \begin{array}{l} 1 \hspace{3.3cm} p_{NN}>1 {\rm GeV}/c \\ \frac{F_{\alpha,u} -1}{1+(p_{NN}/0.425)^5}+1 \hspace{1cm} p_{NN}<1 {\rm GeV}/c \end{array} \right. , \label{fdpup}$$ with $p_{NN}$ being the relative momentum in the NN center-of-mass system. It is seen that the density and splitting effects on the NN elastic cross section (in Eqs. (\[fr\])-(\[fau2\])) disappear at high momenta, $p_{NN}>1\, {\rm GeV}/c$, which was implied in [@Sammarruca:2005ch; @Li:2005jy] for the NR case. For comparison, this study adopts the same momentum constraint for the Dirac splitting case as for the NR case. In addition, the temperature effect on the nucleon-nucleon cross sections should, in principle, be considered; unfortunately, the theoretical predictions of this effect are not very robust. 
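For concreteness, the correction factors of Eqs. (\[fr\])-(\[fdpup\]) can be collected into a single routine. The sketch below is a Python illustration written for this discussion (the function names are ours, and it is not part of the UrQMD implementation); it evaluates the total factor $F$ of Eq. (\[ftot\]) for a given pair type, reduced density, asymmetry, and relative momentum.

```python
import math

def f_u(u):
    # Isoscalar density suppression factor, Eq. (fr); f_u(2) is about 0.35
    return 1.0/3.0 + (2.0/3.0)*math.exp(-u/0.54568)

def a_u(u, case):
    # Density dependence of the splitting effect, Eqs. (fau1)-(fau2)
    return {"PartMed-NR":    0.85,
            "FullMed-NR":    0.85/(1.0 + 3.25*u),
            "PartMed-Dirac": 0.0,
            "FullMed-Dirac": 0.25*u}[case]

def f_alpha(u, alpha, pair, case):
    # Isovector splitting factor, Eq. (fd): F_alpha = 1 + tau*eta*A(u)*alpha
    tau = {"nn": -1.0, "pp": +1.0, "np": 0.0}[pair]
    eta = +1.0 if case.endswith("Dirac") else -1.0
    return 1.0 + tau*eta*a_u(u, case)*alpha

def momentum_cut(f, p_nn):
    # Momentum constraint, Eq. (fdpup): the correction vanishes above 1 GeV/c
    if p_nn > 1.0:
        return 1.0
    return (f - 1.0)/(1.0 + (p_nn/0.425)**5) + 1.0

def medium_factor(u, alpha, p_nn, pair, case="FullMed-NR"):
    # Total factor F multiplying the free cross section, Eq. (ftot)
    if case == "NoMed":
        return 1.0
    fu, fa = f_u(u), f_alpha(u, alpha, pair, case)
    if case.startswith("FullMed"):
        return momentum_cut(fa, p_nn)*momentum_cut(fu, p_nn)
    return fa*fu   # "PartMed": no momentum constraint
```

With this convention the Dirac case gives $F$ smaller for $nn$ than for $pp$ pairs in a neutron-rich medium, and the NR case gives the opposite ordering, as stated above.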
Concerning the temperature dependence: within a thermodynamic Green’s function approach with nonrelativistic propagators and the ladder approximation for the thermodynamic $T$ matrix, a sharp resonance structure of the in-medium cross section appears at low temperatures, and this cusp-like behavior becomes more distinct at smaller nuclear densities [@Alm:1994db]. In contrast, based on the extended QHD (QHD-II) model and introducing temperature-dependent distribution functions for fermions and anti-fermions, we found that the effective nucleon-nucleon elastic cross section increases only slowly with temperature [@Li:2003vd]. Therefore, we do not consider the temperature effect on the nucleon-nucleon cross section in this work. A conventional phase-space coalescence model [@Kru85] is used to construct the clusters, in which nucleons with relative momenta smaller than $P_0$ and relative distances smaller than $R_0$ are considered to belong to one cluster. In this work, $P_0$ and $R_0$ are chosen to be $0.3\,{\rm GeV}/c$ and $3.5$ fm, the same values as in our previous works. The freeze-out time is taken to be $150\ {\rm fm}/c$. Fig. \[figr1\] shows the rapidity ($Y^{(0)}=y_{c.m.}/y_{beam}$) distribution of unbound neutrons and protons for $^{96}$Zr+$^{96}$Zr reactions (initial $\alpha\simeq 0.167$) at beam energy $E_b=100A$ MeV and reduced impact parameter $b/b_0=0.5$. In the left part, the calculations for the three cases (“NoMed”, “PartMed-NR”, and “FullMed-NR”) are compared for unbound protons and neutrons. It is seen that the yields of nucleons at midrapidity are indeed influenced by the medium modification of the NN elastic cross sections. This is understandable, since these nucleons are emitted mainly from the high-density region, where the NN cross sections are strongly reduced. Due to the reduction of the cross sections, fewer nucleons are emitted in the cases “PartMed” and “FullMed”. 
Slightly more nucleons are emitted in the case “FullMed-NR” than in the case “PartMed-NR”, because the medium correction factor $F_{u}^p$ increases with momentum, as shown in Eq. (\[fdpup\]). Due to the neutron-rich environment, the rapidity distribution of neutrons is more strongly influenced by the medium modification of the two-body cross section than that of protons. Therefore, in Fig. \[figr1\] (right), the different splitting effects on the cross sections (Dirac and NR) are shown only for unbound neutrons. It is found that in general the splitting effect on the rapidity distribution of neutrons is small and can only be detected in the case “PartMed”, while it is almost negligible in the case “FullMed”. This is because the splitting effect decreases strongly with increasing momentum (see Eq. (\[fdpup\])). In the “PartMed” cases and at midrapidities, the yields of unbound nucleons with Dirac splitting are slightly smaller than those with NR splitting, due to the further reduction of $\sigma^*_{nn}$ in the Dirac case compared to the NR case in the nuclear medium. ![Rapidity distributions of unbound neutrons and protons for Zr+Zr reactions at $E_b=100A$ MeV and $b/b_0=0.5$. The left plot compares the results of three cases (“NoMed”, “PartMed-NR”, and “FullMed-NR”) for protons and neutrons. The right plot shows the NR and Dirac splitting effects on neutrons for the constraints “PartMed” and “FullMed” (see context).[]{data-label="figr1"}](figr1.EPS){width="60.00000%"} Transverse flow was proposed as a sensitive probe of the in-medium NN elastic cross section [@Gyu82etc]. Here we show the in-plane transverse flow of nucleons as a function of rapidity in Fig. \[figr2\]. The same trend as in Fig. \[figr1\] with respect to the medium correction of the cross sections is found. In Fig. \[figr2\] (left), the case “NoMed” produces the largest positive flow for unbound nucleons, while the case “PartMed” shows the smallest flow and “FullMed” lies in between. 
Due to the Coulomb potential acting on protons, the positive transverse flow of protons is always higher than that of neutrons. It is known that above the balance energy the repulsive nucleon-nucleon scattering effect gains increasing importance, which leads to a positive flow parameter. Similarly, Fig. \[figr2\] (right) shows that the unbound neutron flows calculated with the different mass-splitting effects (NR and Dirac) differ only in the “PartMed” case, while there is no difference in the “FullMed” case. The strong effect of the medium-modified cross sections on the dynamics of HICs, together with the small difference between the NR and Dirac mass splittings, is easy to understand, because the splitting effect due to the isovector part is rather small compared to the isoscalar density effects. This finding is consistent with Ref. [@Li:2005jy] with respect to the splitting effect in the NN cross section shown in Figs. \[figr1\] and \[figr2\]. This feature of the transverse flow is very important for measuring the medium modification of cross sections experimentally. However, on the one hand, the uncertainties in the isoscalar part of the mean-field potentials also have a strong effect on the transverse flow, while on the other hand, it will be difficult to uncover the probable splitting of the neutron-neutron and proton-proton effective cross sections in the isospin-asymmetric nuclear medium. Thus, one needs further observables for testing the in-medium cross sections in order to obtain unambiguous conclusions. ![Transverse flow distribution of unbound neutrons and protons as a function of rapidity for Zr+Zr reactions at $E_b=100A$ MeV and $b/b_0=0.5$. In the left plot, the results of three cases (“NoMed”, “PartMed-NR”, and “FullMed-NR”) are compared for protons and neutrons. 
The right plot shows the NR and Dirac splitting effects on neutron flow for “PartMed” and “FullMed”.[]{data-label="figr2"}](figr2.EPS){width="60.00000%"} The momentum quadrupole $Q_{zz}$ ($=\langle 2p_z^2-p_x^2-p_y^2 \rangle$), which is usually used to measure the stopping power, has also been extensively studied as a good messenger of the medium modifications of the elastic NN cross section. Its major advantage is its weak dependence on the uncertainties in the symmetry energy (see, for example, Refs. [@Liu:2001uc; @Li:2002ag; @Chen:2003wp]). Fig. \[figr3\] shows the momentum distributions of the average $Q_{zz}$ of neutrons and protons calculated for the two cases “NoMed” and “FullMed” (considering the NR and Dirac splitting effects) for $^{78}$Ni+$^{96}$Zr reactions (initial $\alpha\simeq0.218$) at $b=0$ fm and $E_b=100A$ MeV. At first glance, we see that the result for the case “NoMed” is negative over the whole momentum region, while the results for the case “FullMed” with the different splitting effects are positive at low momenta $p<0.5\,{\rm GeV}/c$ and become negative again at larger momenta. From the definition of $Q_{zz}$ it is easy to see that positive $Q_{zz}$ values mean incomplete stopping or nuclear transparency, while negative $Q_{zz}$ values mean transverse expansion or collectivity. Hence, the result with free cross sections shows strong collectivity over the whole momentum region (in line with the transverse in-plane flow, see Fig. \[figr2\]), while the results with medium modifications (i.e., with reduced cross sections) show larger transparency. The change of sign of $Q_{zz}$ (after integrating over the whole momentum space) from negative to positive was also seen in [@Chen:2003wp] and should be easily measurable in experiments. From Fig. \[figr3\] we also find that the $Q_{zz}$ of neutrons at moderate momenta ($\sim 0.15 - 0.45\,{\rm GeV}/c$) is always larger than that of protons, while this difference disappears at larger momenta. 
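The sign convention of the quadrupole is easily made explicit. The following minimal Python sketch (illustrative only, not the analysis code used here) computes the event-averaged $Q_{zz}=\langle 2p_z^2-p_x^2-p_y^2\rangle$ from a list of nucleon momenta:

```python
def qzz(momenta):
    # Average momentum quadrupole <2*pz^2 - px^2 - py^2>;
    # positive -> incomplete stopping (transparency),
    # negative -> transverse expansion (collectivity)
    vals = [2.0*pz*pz - px*px - py*py for (px, py, pz) in momenta]
    return sum(vals)/len(vals)
```

A purely longitudinal momentum distribution gives $Q_{zz}>0$ (transparency), while a purely transverse one gives $Q_{zz}<0$, matching the interpretation above.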
In our previous work [@Li:2005zz] we found that the average collision number of neutrons is always smaller than that of protons in neutron-rich intermediate-energy HICs, which implies that neutrons experience a larger transparency than protons. Furthermore, the $Q_{zz}$ of neutrons with the Dirac splitting effect shows a larger transparency than that with the NR one, especially at moderate momenta $\sim 0.35\,{\rm GeV}/c$, due to the smaller $\sigma_{nn}^*$ in a neutron-rich system. In the analysis of Ref. [@Li:2005ib] it was shown that $Q_{zz}$ is sensitive only to the magnitude of the cross section but not to the isospin dependence of the NN cross sections. In this work we further find that the difference between the two medium corrections due to the different mass splittings is rather small, because the difference in magnitude of the cross sections caused by the NR and Dirac mass splittings studied here is small compared to the cross section itself. ![Momentum distribution of $Q_{zz}$ of neutrons and protons separately. Two cases “NoMed” and “FullMed” (with NR or Dirac splitting effect) are chosen for $^{78}$Ni+$^{96}$Zr reactions at $b=0$ fm and $E_b=100A$ MeV.[]{data-label="figr3"}](figr3.EPS){width="60.00000%"} In order to measure the degree of stopping, the ratio of the variance of the transverse (x-axis) to that of the longitudinal (z-axis) rapidity distribution, called vartl, was recently proposed and measured [@Reisdorf:2004wg]. Alternatively and similarly, here we define $R_{t/l}=\Gamma_{dN/dY_x}/\Gamma_{dN/dY_z}$, where $\Gamma$ denotes the halfwidth of the rapidity distribution. In a thermally non-equilibrated system, the $R_{t/l}$ value departs from unity: super-stopping leads to $R_{t/l} > 1$, while large transparency leads to $R_{t/l} < 1$. Fig. \[figr4\] shows the calculated $R_{t/l}$ values of neutrons and protons with the different medium modifications of the cross sections. 
One clearly observes a large effect (more than $20\%$) of the medium modification of the NN cross section on $R_{t/l}$ (from the case “NoMed” to “FullMed”), but the difference between the results calculated with “FullMed-NR” and “FullMed-Dirac” is still very small; here the effect is only about $2\%$. ![The $R_{t/l}$ values of unbound neutrons and protons with different medium modifications on cross sections (“NoMed”, “FullMed-NR”, and “FullMed-Dirac”). The reaction $^{78}$Ni+$^{96}$Zr at $b=0$ fm and $E_b=100A$ MeV is chosen (see context).[]{data-label="figr4"}](figr4.EPS){width="60.00000%"} Recently, Chen et al [@Chen:2003wp] found that the two-nucleon correlation functions in neutron-rich intermediate-energy HICs are sensitive to the density dependence of the nuclear symmetry energy. Soon afterwards, the isospin effects were investigated experimentally in the $E_b=61A$ MeV $^{36}$Ar+$^{112,124}$Sn reactions by Ghetti et al [@Ghetti:2003pv], and it appears that the two-particle correlation functions are indeed a useful probe of the isospin dependence of the nuclear EoS. It was also pointed out that the two-body correlation is insensitive to both the isoscalar part of the EoS and the in-medium cross sections [@Chen:2003wp]. In this work we elaborate on the sensitivity of the two-particle correlation function to the in-medium NN cross section. To calculate the unbound NN correlation functions, we adopt the Koonin-Pratt method. The program Correlation After Burner (CRAB) (version 3.0) is used, which is based on the formula: $$C({\bf P},{\bf q}) = \frac {\int d^4x_1 d^4x_2 g(x_1,{\bf P}/2) g(x_2,{\bf P}/2) |\phi({\bf q}, {\bf r})|^2} {{\int d^4x_1 g(x_1,{\bf P}/2)} {\int d^4x_2 g(x_2,{\bf P}/2)}}. \label{cpq}$$ Here $g(x,{\bf P}/2)$ is the probability of emitting a particle with momentum ${\bf P}/2$ from the space-time point $x = ({\bf r}, t)$, and $\phi({\bf q}, {\bf r})$ is the relative two-particle wave function, with ${\bf r}$ being the relative position. 
${\bf P}={\bf p}_1+{\bf p}_2$ and ${\bf q}=({\bf p}_1-{\bf p}_2)/2$ are the total and relative momenta of the particle pair, respectively. According to previous studies (see, for example, [@Bauer:1993wq; @Ghetti:2000ab; @Chen:2003wp]), the effect of the nuclear medium on the neutron-neutron and neutron-proton correlation functions is more pronounced at smaller relative momentum ($q$) and larger total momentum ($P$) than at larger $q$ or smaller $P$, while the effect on the proton-proton correlation function is more pronounced at $q\sim 20\,{\rm MeV}/c$ and larger $P$. In Fig. \[figr5\] we show the transverse-momentum $P_T$ ($=\sqrt{(p_{1x}+p_{2x})^2+(p_{1y}+p_{2y})^2}$) dependence of the correlation functions ($C(q,P_T)$) for neutron-neutron (top) and proton-neutron (bottom) pairs within the relative-momentum bin $q=0\sim 2.5\,{\rm MeV}/c$, and for proton-proton (middle) pairs within the bin $q=20\sim 22.5\, {\rm MeV}/c$. For better precision, $10^9$ neutron-neutron and proton-neutron pairs and $10^8$ proton-proton pairs are analyzed for $P_T<500\, {\rm MeV}/c$ and $q < 50\, {\rm MeV}/c$. From Fig. \[figr5\] one sees that with increasing $P_T$ the two nucleons exhibit an obviously enhanced correlation, due to the short average spatial separation at the emission time. In the transverse-momentum region studied here, the results for the case “NoMed” are always lower than for the other cases. The observed increase of the correlation function with the reduction of the NN elastic cross sections was also seen in [@Bauer:1993wq]. Only a small effect of the medium correction of the two-body cross sections on the two-particle correlation is seen, while the difference between the results with NR and Dirac splittings is negligible. Thus, the two-particle correlation function remains an ideal observable for probing the density dependence of the nuclear symmetry energy. 
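The structure of Eq. (\[cpq\]) can be illustrated with a deliberately simplified one-dimensional toy model in Python (the instantaneous factorized Gaussian source and the pair wave function $|\phi|^2=1+\cos(2qr)$ below are our illustrative assumptions, not the CRAB input): for a factorized source the denominator cancels, and $C(q)$ reduces to an average of $|\phi(q,r)|^2$ over pairs of emission points.

```python
import itertools, math, random

def correlation(points, phi2, q):
    # Toy version of Eq. (cpq) for a factorized, instantaneous 1-D source:
    # C(q) = < |phi(q, r_i - r_j)|^2 > over all emission-point pairs
    vals = [phi2(q, ri - rj) for ri, rj in itertools.combinations(points, 2)]
    return sum(vals)/len(vals)

def phi2_boson(q, r):
    # Symmetrized plane-wave pair: |phi|^2 = 1 + cos(2*q*r)
    return 1.0 + math.cos(2.0*q*r)

# Gaussian source of RMS size 3 (arbitrary illustrative value)
rng = random.Random(3)
src = [rng.gauss(0.0, 3.0) for _ in range(400)]
```

In this toy model $C(0)=2$ (full bosonic enhancement) and $C(q)\to 1$ when $q$ is large compared to the inverse source size, the standard behavior that the full Koonin-Pratt calculation reproduces with realistic wave functions.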
![The n-n (top), p-p (middle) and p-n (bottom) correlation functions as a function of total transverse momentum $P_T$ and for fixed relative momentum $q$ without and with medium modifications (NR and Dirac splittings) of the nucleon-nucleon elastic cross sections. $^{78}$Ni+$^{96}$Zr reactions at $b=0$ fm and $E_b=100A$ MeV are chosen.[]{data-label="figr5"}](figr5.EPS){width="60.00000%"} In summary, we have investigated several observables related to unbound nucleons that are, to some extent, sensitive to the medium modifications of nucleon-nucleon elastic cross sections in neutron-rich intermediate-energy heavy ion collisions. The effect of the neutron-proton effective-mass splitting on the cross sections has been discussed. Although many of the suggested observables, such as the well-known rapidity and momentum distributions of the nucleon yields, are sensitive to the medium modifications of the cross sections, they also suffer from the uncertainties in the mean-field part, making the conclusions somewhat ambiguous. The transverse flow as a function of rapidity and, especially, the $Q_{zz}$ as a function of momentum and the ratio of the halfwidths of the transverse to the longitudinal rapidity distribution, $R_{t/l}$, are found to be highly sensitive probes of the medium modifications of the cross sections. The transverse-momentum distributions of the two-nucleon correlation functions are insensitive to the cross sections. The difference between the in-medium cross sections modified by the NR and Dirac effective neutron- and proton-mass splittings was shown to be very small for all these observables, suggesting that more sensitive observables should be sought to explore this effect. Acknowledgments {#acknowledgments .unnumbered} =============== We would like to thank Scott Pratt for the use of the CRAB program and acknowledge support by the Frankfurt Center for Scientific Computing (CSC). Q. Li thanks the Alexander von Humboldt-Stiftung for a fellowship. 
This work is partly supported by the National Natural Science Foundation of China under Grant No. 10235030 and the Major State Basic Research Development Program of China under Contract No. G20000774, as well as by GSI, BMBF, DFG, and Volkswagenstiftung. [99]{} Yinlu Han, G.J. Mao, Z. Li, and Y. Zhuo, Phys. Rev. C [**50**]{}, 961 (1994)\ S. A. Bass [*et al.*]{}, Prog. Part. Nucl. Phys. [**41**]{}, 255 (1998)\ , edited by Bao-An Li and W. Udo Schroeder, NOVA Science Publishers, Inc., New York, 2001\ V. Baran, M. Colonna, V. Greco, M. DiToro, Phys. Rep. [**410**]{}, 235 (2005)\ B. A. Li, C. M. Ko and W. Bauer, Int. J. Mod. Phys. E [**7**]{}, 147 (1998) Q. Li, Z. Li and G. Mao, Phys. Rev. C [**62**]{}, 014606 (2000) Q. Li, Z. Li and E. Zhao, Phys. Rev. C [**69**]{}, 017601 (2004) B. Liu, V. Greco, V. Baran, M. Colonna and M. Di Toro, Phys. Rev. C [**65**]{}, 045201 (2002) M. Di Toro, M. Colonna and J. Rizzo, arXiv:nucl-th/0505013. E. N. E. van Dalen, C. Fuchs and A. Faessler, Phys. Rev. Lett.  [**95**]{}, 022302 (2005) F. Sammarruca, arXiv:nucl-th/0506081. F. Sammarruca and P. Krastev, arXiv:nucl-th/0509011. W. Zuo, I. Bombaci and U. Lombardo, Phys. Rev. C [**60**]{}, 024605 (1999) W. Zuo, L. G. Cao, B. A. Li, U. Lombardo and C. W. Shen, Phys. Rev. C [**72**]{}, 014005 (2005) B. A. Li, P. Danielewicz and W. G. Lynch, Phys. Rev. C [**71**]{}, 054603 (2005) B. A. Li and L. W. Chen, arXiv:nucl-th/0508024. Q. Li, Z. Li, S. Soff, R. K. Gupta, M. Bleicher and H. Stöcker, J. Phys. G: Nucl. Part. Phys. [**31**]{}, 1359 (2005) Q. Li, Z. Li, S. Soff, M. Bleicher and H. Stöcker, Phys. Rev. C [**72**]{}, 034613 (2005) Q. Li, Z. Li, S. Soff, M. Bleicher and H. Stöcker, J. Phys. G: Nucl. Part. Phys. [**32**]{}, 151 (2006) T. Alm, G. Ropke and M. Schmidt, Phys. Rev. C [**50**]{}, 31 (1994) H. Kruse, B.V. Jacak, J.J. Molitoris, G.D. Westfall, H. Stöcker, Phys. Rev. C [**31**]{}, 1770 (1985)\ M. Gyulassy, K.A. Fraenkel, and H. Stöcker, Phys. Lett. [**110B**]{}, 185 (1982); G. 
Peilert, H. Stöcker, and W. Greiner, Phys. Rev. C [**39**]{}, 1402 (1989); for reviews, see: H. Stöcker and W. Greiner, Phys. Rep. [**137**]{}, 277 (1986); S. Bass, M. Gyulassy, and H. Stöcker, J. Phys. G: Nucl. Part. Phys. [**25**]{}, R21 (1999)\ J. Y. Liu, W. J. Guo, S. J. Wang, W. Zuo, Q. Zhao and Y. F. Yang, Phys. Rev. Lett.  [**86**]{}, 975 (2001) Q. Li and Z. Li, Chin. Phys. Lett.  [**19**]{}, 321 (2002) L. W. Chen, V. Greco, C. M. Ko and B. A. Li, Phys. Rev. C [**68**]{}, 014605 (2003) W. Reisdorf [*et al.*]{} \[FOPI Collaboration\], Phys. Rev. Lett.  [**92**]{}, 232301 (2004) R. Ghetti [*et al.*]{}, Phys. Rev. C [**69**]{}, 031605 (2004) S. E. Koonin, Phys. Lett. B [**70**]{} (1977) 43. S. Pratt, Phys. Rev. Lett.  [**53**]{} (1984) 1219. S. Pratt, Phys. Rev. D [**33**]{} (1986) 72. S. Pratt [*et al.*]{}, Nucl. Phys. A [**566**]{} (1994) 103C. R. Ghetti [*et al.*]{}, Nucl. Phys. A [**674**]{} (2000) 277. W. Bauer, C. K. Gelbke and S. Pratt, Ann. Rev. Nucl. Part. Sci.  [**42**]{} 77 (1992). [^1]: Fellow of the Alexander von Humboldt Foundation.
--- abstract: 'Numerical upper and lower bounds to the information rate transferred through the additive white Gaussian noise channel affected by discrete-time multiplicative autoregressive moving-average (ARMA) phase noise are proposed in the paper. The state space of the ARMA model being multidimensional, the problem cannot be approached by the conventional trellis-based methods that assume a first-order model for phase noise and quantization of the phase space, because the number of states of the trellis would be enormous. The proposed lower and upper bounds are based on particle filtering and Kalman filtering. Simulation results show that the upper and lower bounds are so close to each other that we can claim to have numerically computed the actual information rate of the multiplicative ARMA phase noise channel, at least in the cases studied in the paper. Moreover, the lower bound, which is virtually capacity-achieving, is obtained by demodulation of the incoming signal based on a Kalman filter aided by past data. Thus we can claim to have found the virtually optimal demodulator for the multiplicative phase noise channel, at least for the cases considered in the paper.' author: - - title: '[Tight Upper and Lower Bounds to the Information Rate of the Phase Noise Channel]{}' --- Introduction ============ Multiplicative phase noise is a major source of impairment in radio and optical channels. The presence of phase noise in radio channels has been well known and studied for a long time, since phase noise is introduced by the local oscillators used in up-conversion and down-conversion, while multiplicative phase noise has recently become a hot topic in the context of coherent optical transmission. Recent studies about the phase noise that arises in optical channels and about its effects in coherent optics can be found in [@essiambre; @mag]. Several methods have been proposed in the literature to combat the detrimental effects of phase noise. 
Among these methods we cite the iterative demodulation and decoding techniques of [@shamai; @colavolpe; @barbieri], the insertion of pilot symbols [@spalv], and staged demodulation and decoding [@staged]. The capacity of the additive white Gaussian noise (AWGN) channel affected by multiplicative phase noise with white power spectral density is studied in [@essiambre],[@goebel; @hou], while Wiener’s phase noise is considered in [@barl; @barl2; @furboni; @dauwels]. Analytical bounds on the capacity of phase noise channels at high signal-to-noise ratio are given in [@lapidoth]. Despite the quantity and quality of the literature available, we find room for new results by considering the channel impaired by autoregressive moving-average (ARMA) multiplicative phase noise, a phase noise model that is much more realistic than Wiener’s phase noise and/or white phase noise in many cases of practical interest. The ARMA model makes it possible to shape the power spectral density of the phase noise by acting on the order and on the parameters of the model. Working out the capacity of a channel affected by a general multiplicative ARMA phase noise process is a challenging problem, because - [the state space is not finite and is multidimensional, therefore the problem cannot be approached by techniques like those used for white and Wiener phase noise,]{} - [the observation is a nonlinear function of the state.]{} The only paper we are aware of that studies the capacity of the channel affected by ARMA phase noise is [@dauwels], where the method of particle filtering (see [@particle] for a tutorial on particle filtering) is adopted to work out an approximation to the constrained channel capacity, the constrained capacity being the information rate transferred through the channel with a fixed source. The new results presented in this paper are tight numerical upper and lower bounds to the constrained capacity of the AWGN ARMA phase noise channel. 
First-Order Markov Channels with Continuous State ================================================= Let $u_{i}^{k}$ indicate the column vector $ (u_k, u_{k-1}, \ldots, u_i)^T$, $i\le k$, where $u_i^k$ is empty for $i>k$, the superscript $^T$ denotes transposition, and $u_i^k \in {\cal U}_i^k$. Also, let $U$ indicate a possibly non-stationary process, $U=(U_0,U_1, \cdots)$, whose generic realization is the sequence $(u_0, u_1, \cdots).$ When ${\cal U}_i^k$ is a continuous set, $p(u_i^k)$ is used to indicate the multivariate probability density function, while when ${\cal U}_i^k$ is a discrete set $p(u_i^k)$ indicates the multivariate probability mass function, and $|{\cal U}_i|$ denotes the number of elements in ${\cal U}_i$. Consider a first-order Markov channel. The Markovian state process $S$ is characterized by the joint probability $$\label{markovstate} p(s_0^{n}) = p(s_{0})\prod_{k=1}^n p(s_k|s_{k-1}).$$ A channel without feedback that is memoryless given the state is characterized by the state transition probability $p(s_k|s_{k-1})$ and by the conditional distribution $$\begin{aligned} p(y_1^n|x_1^n,s_1^n)&=\prod_{k=1}^np(y_k|x_k,s_k) , \label{markovchannel}\end{aligned}$$ where $Y$ is the channel output process and $X$ is the channel input process, which we assume to be discrete. Equation (\[markovchannel\]) says that the channel output process is memoryless given the source and the state. Drawing from the parlance of carrier recovery, the channel transition probability $p(y_k|x_k,s_k)$, which is conditioned on channel’s input, is hereafter called [*data-aided*]{} channel transition probability. We assume that the source is memoryless and independent of the state, that is $$\begin{aligned} p(x_1^n|s_1^n)&=\prod_{k=1}^np(x_k). \label{markovsource}\end{aligned}$$ Putting together (\[markovchannel\]) and (\[markovsource\]) one finds that the joint source and channel model is memoryless given the state: $$\begin{aligned} p(y_1^n,x_1^n|s_1^n)&=\prod_{k=1}^np(y_k,x_k|s_k). 
\label{markovjoint}\end{aligned}$$ Using (\[markovjoint\]) one finds that channel’s output is memoryless given the state: $$\begin{aligned} p(y_1^n|s_1^n) &=\sum_{x_1^n \in {\cal X}_1^n} p(y_1^n,x_1^n|s_1^n) = \sum_{x_1^n \in {\cal X}_1^n} \prod_{k=1}^np(y_k,x_k|s_k) \nonumber \\ & = \prod_{k=1}^n \sum_{x_k \in {\cal X}_k} p(y_k,x_k|s_k) = \prod_{k=1}^{n} p(y_k|s_k). \label{memorylesschannel}\end{aligned}$$ Drawing again from the parlance of carrier recovery, the channel transition probability $p(y_k|s_k)$, which is not aware of channel’s input, is hereafter called [*blind*]{} channel transition probability. From eqs. (\[markovstate\]) and (\[memorylesschannel\]), after straightforward manipulations one gets $$\label{markov} p(s_k|s_{k-1},y_1^{k-1}) = p(s_k|s_{k-1}), \ \ k=1,2, \cdots, n.$$ Also, by (\[markovjoint\]) and (\[memorylesschannel\]) one finds that the source is memoryless given the state and channel’s output: $$\begin{aligned} p(x_1^n|y_1^n,s_1^n) = \prod_{k=1}^{n} p(x_k|y_k,s_k). \label{memorylesssource}\end{aligned}$$ Bayesian Tracking ================= Any measurement process $Y$ that is memoryless given the state can be cast in the general framework of the state-space approach for modelling dynamic systems, which is defined by the state transition equation $$s_{k}=f_{k}(s_{k-1},v_{k-1}), \ \ k=1,2, \cdots, n,\label{statetran}$$ and by the measurement equation $$y_k=h_k(s_k,n_k), \ \ k=1,2, \cdots, n,\label{measurement}$$ where $f_k(\cdot)$ and $h_k(\cdot)$ are possibly non-linear and time-varying known functions of their arguments, $v_k$ is the process noise vector, and $n_k$ is the measurement noise vector, which is assumed to be independent of $v_k$. The state-space approach fits the Markov channel, taking the output channel process $Y$ as the measurement process both in the blind and in the data-aided case. 
In the blind case, the measurement equation is a time-invariant function of the state, and the measurement noise is the joint effect of channel noise and input process. The blind case is described by the memoryless probability $p(y_k|s_k)$ appearing in the product (\[memorylesschannel\]). In the data-aided case the measurement noise is only the channel noise, and the input process is embedded in the known non-linear and time-varying $h_k(\cdot)$. In this case the measurement probability is $p(y_k|x_k,s_k)$. A powerful tool in the analysis of dynamical systems is the so-called [*Bayesian tracking*]{}. Let the Markovian state be continuous. One can track the hidden state by a two-step recursion that, for $k=1,2,\cdots,n,$ reads $$p(s_k|y_1^{k-1}) = \int_{{\cal S}}p(s_k|s_{k-1})p(s_{k-1}|y_1^{k-1}) d s_{k-1}, \label{predict}$$ $$p(s_k|y_1^{k}) =\frac{p(s_k|y_1^{k-1})p(y_k|s_k)}{p(y_k|y_1^{k-1})}, \ \ \ \label{update}$$ where $p(s_k|y_1^{k-1})$ is the [*predictive*]{} distribution, $p(s_k|y_1^k)$ is the [*posterior*]{} distribution, and the denominator of (\[update\]) is a normalization factor such that the left-hand side is a probability. The normalization factor can be computed by the Chapman-Kolmogorov equation $$p(y_k|y_1^{k-1}) =\int_{{\cal S}_k} p(s_k|y_1^{k-1})p(y_k|s_k) d s_{k}. \label{ck}$$ The state transition probability $p(s_k|s_{k-1})$ appears in (\[predict\]) in place of $p(s_k|s_{k-1},y_1^{k-1})$ thanks to (\[markov\]). Thanks to (\[memorylesschannel\]), $p(y_k|s_k)$ can be used in place of $p(y_k|s_k,y_1^{k-1})$ in (\[update\]). When the dynamic system is a linear system with Gaussian noises, Bayesian tracking is performed by the Kalman filter. When the model is not tractable, one can resort to particle filtering techniques to work out an approximation to the wanted distribution. The probabilities worked out by Bayesian tracking can be used to evaluate entropy rates by Monte Carlo integration as, for instance, in [@dauwels; @barl2]. 
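As an illustration of the recursion (\[predict\])-(\[update\]) and of the particle-filtering approximation just mentioned, the following Python sketch implements a generic bootstrap particle filter for a scalar state-space model (the toy random-walk model used to exercise it is ours, not the phase noise channel of this paper):

```python
import math, random

def bootstrap_filter(ys, prior_sample, trans_sample, lik, n_p=400):
    # Bootstrap particle filter: Monte Carlo version of the predictive
    # step (predict) and the Bayes update (update); the running sum of
    # log(total weight / n_p) estimates log p(y_k | y_1^{k-1}), i.e. the
    # normalization factor given by the Chapman-Kolmogorov equation (ck).
    parts = [prior_sample() for _ in range(n_p)]
    means, log_ev = [], 0.0
    for y in ys:
        parts = [trans_sample(s) for s in parts]   # predictive step
        w = [lik(y, s) for s in parts]             # likelihood p(y_k | s_k)
        tot = sum(w)
        log_ev += math.log(tot/n_p)                # evidence accumulation
        w = [wi/tot for wi in w]
        means.append(sum(wi*si for wi, si in zip(w, parts)))
        parts = random.choices(parts, weights=w, k=n_p)   # resampling
    return means, log_ev
```

For a linear-Gaussian model the posterior means produced this way converge (in the number of particles) to those of the Kalman filter mentioned above; for intractable models the same code applies unchanged, which is the appeal of the method.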
When the result of Bayesian tracking is an approximation $q(u_1^k)$ to the desired probability $p(u_1^k)$, then, by the Kullback-Leibler inequality, the approximation can be used to get an upper bound on the desired entropy rate $$\overline{h}(U)= -\lim_{k \rightarrow \infty} \frac{1}{k}E_p \left\{\log_2 q(u_1^{k})\right\} \geq h(U),\label{klbound}$$ where the operator $E_p$ denotes expectation with respect to the probability $p(\cdot)$. The ARMA Phase Noise Channel ============================ The $k$-th output of the channel is $$y_k=x_ke^{j \phi_k}+w_k, \ \ \ k=1,2,\cdots,n\label{envelope}$$ where $j$ is the imaginary unit, $Y$ is the complex channel output process, $X$ is the complex channel input modulation process, made of i.i.d. random variables with zero mean and unit variance, $W$ is the complex AWGN process with zero mean and variance $\mbox{SNR}^{-1}$, and $\Phi$ is the phase noise process, which is assumed to be independent of $X$ and $W$. Specifically, process $\Phi$ is modelled as the 1-causal accumulation modulo $2 \pi$ of frequency noise, that is $$\phi_{k}= [\lambda_{k-1}+\phi_{k-1}]_{\bmod{2\pi}}, \label{nco}$$ where the frequency noise process $\Lambda$ is given by the $z$-transform $$\begin{aligned} \sum_{k=-\infty}^{\infty}\lambda_kz^{-k}=H(z) \cdot \left( \sum_{k=-\infty}^{\infty}v_kz^{-k} \right)\nonumber\end{aligned}$$ where $V$ is a white Gaussian noise process with zero mean and variance $\gamma^2$, and $$\begin{aligned} H(z)=\frac{\prod_{k=1}^N(1-\beta_{k} z^{-1})}{\prod_{k=1}^N( 1-\alpha_{k} z^{-1})}=\frac{1+\sum_{k=1}^N b_{k} z^{-k}}{ 1-\sum_{k=1}^N a_{k} z^{-k}}, \label{innovation}\end{aligned}$$ where $|\alpha_{k}|<1, \ |\beta_{k}| \leq 1, \ N \geq 0$, and it is understood that $H(z)=1$ for $N=0$, leading to the special case of the random phase walk, where $\lambda_k=v_k$. $H(z)$ is the transfer function of a filter realized by a shift register with feedback taps $a_1^N$ and forward taps $b_1^N$.
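The generation of the phase noise process can be sketched as follows, with the IIR recursion implementing $H(z)$ in direct form and the wrapped phase accumulated as in (\[nco\]). The first-order taps, QPSK input, and noise levels are illustrative assumptions for the sketch (note that $b_1=-\beta_1$ in the numerator convention of (\[innovation\])):

```python
import cmath
import math
import random

random.seed(1)

def arma_frequency_noise(n, a, b, gamma):
    """Direct-form IIR realization of eq. (innovation):
    lambda_k = v_k + sum_i b_i v_{k-i} + sum_i a_i lambda_{k-i}."""
    v, lam = [], []
    for k in range(n):
        vk = random.gauss(0.0, gamma)
        lk = vk
        lk += sum(bi * v[k - 1 - i] for i, bi in enumerate(b) if k - 1 - i >= 0)
        lk += sum(ai * lam[k - 1 - i] for i, ai in enumerate(a) if k - 1 - i >= 0)
        v.append(vk)
        lam.append(lk)
    return lam

def phase_noise_channel(n, a, b, gamma, snr):
    lam = arma_frequency_noise(n, a, b, gamma)
    phi = [0.0]
    for k in range(1, n):
        phi.append((phi[k - 1] + lam[k - 1]) % (2.0 * math.pi))   # eq. (nco)
    y = []
    for k in range(n):
        # Unit-energy QPSK input and circular AWGN with variance 1/SNR.
        xk = cmath.exp(1j * (math.pi / 4.0 + (math.pi / 2.0) * random.randrange(4)))
        wk = complex(random.gauss(0.0, math.sqrt(0.5 / snr)),
                     random.gauss(0.0, math.sqrt(0.5 / snr)))
        y.append(xk * cmath.exp(1j * phi[k]) + wk)                # eq. (envelope)
    return phi, y

# N = 1 example with illustrative taps (the simulations below use N = 2).
phi, y = phase_noise_channel(1000, a=[0.9999], b=[-0.9937], gamma=0.01, snr=100.0)
```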
Let $\omega_{k-N}^{k-1}$ be the content of the shift register at the $k$-th channel use, that is $$\sum_{k=-\infty}^{\infty}\omega_kz^{-k}= \frac{\sum_{k=-\infty}^{\infty}v_kz^{-k}}{1-\sum_{k=1}^N a_{k} z^{-k}}.$$ The state at time $k$ is the $(N+1)$-dimensional column vector $$s_{k}=(\phi_k,(\omega_{k-N}^{k-1})^T)^T. \label{state}$$ Let us introduce the state transition matrix $$\begin{aligned} F= \left[\begin{array}{ccc} 1 & \multicolumn{2}{c}{(a_1^N+b_1^N)^T} \\ 0 & \multicolumn{2}{c}{(a_1^N)^T} \\ 0_{N-1} & I_{N-1} & 0_{N-1} \end{array}\right] \nonumber,\end{aligned}$$ where $I_N$ is the identity matrix of size $N \times N$ and $0_N$ is a column vector of $N$ zeros. The state transition equation is $$s_{k+1}=Fs_k+(v_k,v_k,0_{N-1}^T)^T +(2 m \pi,0_{N}^T)^T,$$ where $m$ is such that $\phi_{k+1}$ lies in the interval $[0,2\pi)$, thus making the state transition equation non-linear. Given $s_{k}$, for $N=0$ the state transition to $s_{k+1}$ is ambiguous by $2 n \pi$, while for $N \geq 1$, due to the presence of $\omega_{k}$ in $s_{k+1}$, the state transition is not ambiguous. Although not necessary, in the following we assume $N\geq 1$, referring the reader to [@barl2] for the state transition probability with $N=0$. For $N\geq 1$ the state transition probability is an $(N+1)$-dimensional Gaussian distribution. Note that, given $s_k$, $N$ of the $(N+1)$ entries of $s_{k+1}$ are known, the only free random variable being $v_k$, hence the covariance matrix of the state transition probability has unit rank.
Specifically, $$\begin{aligned} \label{statetransition} p(s_{k+1}|s_k) = g_{N+1} (Fs_k+(2m\pi,0_N^T)^T,\Sigma_v; s_{k+1}),\end{aligned}$$ where $g_N(\mu, \Sigma; x)$ is an $N$-dimensional Gaussian distribution over the space spanned by $x$ with mean vector $\mu$ and covariance matrix $\Sigma$, $$\begin{aligned} \Sigma_v= \left[\begin{array}{ccc} \gamma^2 & \gamma^2 & 0_{N-1}^T \\ \gamma^2 & \gamma^2 & 0_{N-1}^T \\ 0_{N-1} & 0_{N-1} & 0_{(N-1)\times(N-1)} \end{array}\right] \label{qmatrix},\end{aligned}$$ where $0_{N \times M}$ is an all-zero $N \times M$ matrix, and $$\label{moduloconstraint} 2 m \pi= \phi_{k+1}- \phi_k - \omega_k - \sum_{i=1}^N b_i \omega_{k-i}.$$ The measurement at time $k$ is the $y_k$ given by (\[envelope\]). The data-aided channel transition probability is $$p(y_k|x_k,s_k)= g_c(x_ke^{j \phi_k}, \mbox{SNR}^{-1};y_k), \label{channelprob}$$ where $g_c(\mu,\sigma^2;t)$ indicates a circularly symmetric Gaussian probability density function over the complex plane spanned by $t$ with mean $\mu$ and two-dimensional variance $\sigma^2$. The joint source and channel probability is $$p(y_k,x_k|s_k)=p(x_k) g_c(x_ke^{j \phi_k}, \mbox{SNR}^{-1};y_k). \label{sourcechanneltransition}$$ From the above probability one can compute the blind channel transition probability by (\[memorylesschannel\]). Upper Bound =========== Let $h(U)$ denote the entropy rate of process $U$. Extract $h(Y|X)$ from $$h(Y|X)-h(Y|X,S)=h(S|X)-h(S|X,Y),$$ to write $$I(X;Y)=h(Y)-h(Y|X,S)-h(S)+h(S|X,Y),\label{i1}$$ where, by the independence between $X$ and the state process $S$, $h(S)$ has been substituted for $h(S|X)$. The upper bound that we propose is $$\label{full} \overline{h}(Y)-h(Y|X,S)-h(S)+ \overline{h}(S|X,Y) \geq I(X;Y),$$ where $\overline{A}$ indicates an upper bound on $A$. The entropy rates $h(Y|X,S)$ and $h(S)$ are those of the white Gaussian processes $W$ and $V$, respectively.
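Since, given $s_k$, the transition (\[statetransition\]) is Gaussian with mean $Fs_k$ (up to the $2m\pi$ shift) and covariance $\Sigma_v$, propagating a Gaussian belief through the state model amounts to the familiar mean and covariance updates $F\mu$ and $F\Sigma F^T+\Sigma_v$. A small sketch for $N=1$, with illustrative taps and noise level (not the values used in the simulations):

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

# N = 1: the state is (phi_k, omega_{k-1}).  Illustrative numbers only.
a1, b1, gamma2 = 0.9, -0.5, 0.001
F = [[1.0, a1 + b1],
     [0.0, a1]]
Sigma_v = [[gamma2, gamma2],
           [gamma2, gamma2]]          # unit-rank process noise, eq. (qmatrix)

mu = [[0.5], [0.1]]                   # current Gaussian belief
Sigma = [[0.04, 0.01], [0.01, 0.02]]

mu_pred = matmul(F, mu)                               # F mu_k
FSFt = matmul(matmul(F, Sigma), transpose(F))         # F Sigma_k F^T
Sigma_pred = [[FSFt[i][j] + Sigma_v[i][j] for j in range(2)] for i in range(2)]
```

The predicted covariance stays symmetric, and adding the unit-rank $\Sigma_v$ keeps it full rank, which is what makes the predictive Gaussian well defined.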
The upper bound $\overline{h}(Y) \geq h(Y)$ can be obtained by approximating the conditional probability $p(y_k|y_1^{k-1})$ by the normalization factor of blind Bayesian tracking performed by a particle filter, as in [@dauwels]. The new contribution of the present paper is the upper bound $\overline{h}(S|X,Y)$, which is worked out as follows. Invoking the chain rule, the Markovian property (\[markov\]), and the Shannon-McMillan-Breiman theorem, one can evaluate the entropy rate by computer simulation as $$h(S|X,Y)= \lim_{n\rightarrow\infty} \frac{1}{n}\sum_{k=1}^n - \log_2 p(s_k|x_1^k,y_1^k,s_{k+1}), \label{SMB2}$$ where $(x_1^n,y_1^n,s_1^{n+1})$ is a realization of the joint process $(X,Y,S)$. Unfortunately, the actual $p(s_{k}|x_{1}^{k},y_1^k,s_{k+1})$ of (\[SMB2\]) is not tractable. We propose to approximate it as $$q(s_{k}|x_{1}^{k},y_1^k,s_{k+1})=\frac{p(s_{k+1}|s_{k})q(s_{k}|x_1^{k},y_1^k) }{ \int_{\cal S}p(s_{k+1}|s_{k})q(s_{k}|x_1^{k},y_1^k) d s_{k}},\label{app}$$ with $$\begin{aligned} q(s_k| x_1^k,y_1^k) = \sum_{l=-\infty}^{\infty} g_{N+1}(\mu_k, \Sigma_k; \phi_k+ 2 l \pi, \omega_{k-N}^{k-1}), \label{postkalman}\end{aligned}$$ which, thanks to (\[klbound\]), yields the upper bound $$\overline{h}(S|X,Y)= \lim_{n\rightarrow\infty} \frac{1}{n}\sum_{k=1}^n - \log_2 q(s_k|x_1^k,y_1^k,s_{k+1}). \nonumber$$ The denominator of (\[app\]) can be treated by moving the sum (\[postkalman\]) outside the integral, and observing that the integral is the convolution between two Gaussian distributions, leading to a closed-form computation as in the predictive step of the Kalman filter [@simon Sec.
3.3]: $$\begin{aligned} & \int_{\cal S}p(s_{k+1}|s_{k})q(s_{k}|x_1^{k},y_1^k)\ d s_{k}= \nonumber \\ &\hspace{-0.15cm} \sum_{l=-\infty}^{\infty} g_{N+1}(F \mu_{k},F\Sigma_{k}F^T \hspace{-0.15cm}+ \Sigma_v; \phi_{k+1}+ 2 l \pi, \omega_{k-N+1}^{k}).\label{predictcovkalman}\end{aligned}$$ The parameters $\mu_k$ and $\Sigma_k$ appearing in equations (\[postkalman\]) and (\[predictcovkalman\]) can be worked out by a linearized Kalman filter [@simon Sec. 13.2]. As will be shown by the simulation results, a tighter bound can be obtained by taking for $\mu_k$ and $\Sigma_k$ a sample estimate, where the sample is the set of posterior particles of a particle filter. Note that the integral in the denominator of (\[app\]) is a normalization factor such that the left-hand side of (\[app\]) is a probability. As a consequence, it cannot be evaluated from the predictive particles of the particle filter, because the predictive particles would provide only an approximation to the desired integral, and using an approximation in the denominator is not sufficient to guarantee that the ratio in (\[app\]) is a probability. Also, it is worth pointing out that, while in [@dauwels] the phase in the state model is unwrapped, here it is the evaluation of $\overline{h}(S|X,Y)$, which is not performed in [@dauwels], that forces us to define the state by the wrapped phase (\[nco\]). As a matter of fact, phase ambiguities of $2n\pi$ are inherently present in the measurement, therefore cycle slips of the Bayesian tracking algorithm would lead to catastrophic errors of $2 n \pi$ between the actual unwrapped phase and the distribution of the unwrapped phase recovered by the tracking algorithm. Lower Bound =========== Assume a discrete input alphabet.
The lower bound that we propose is $H(X)-\overline{H}(X|Y) \leq I(X;Y)$, where, by the same arguments leading to (\[SMB2\]) and by the Kullback-Leibler inequality (\[klbound\]), one evaluates the upper bound on the conditional entropy rate as $$\begin{aligned} \overline{H}(X|Y) = \lim_{n \rightarrow \infty} \frac{1}{n} \sum_{k=1}^n - \log_2 q(x_k|x_{1}^{k-1},y_1^{n}). \label{lb}\end{aligned}$$ The upper bound can be based on demodulation, that is, on the probability $$\begin{aligned} \label{saturation}p(x_{k}| x_1^{k-1}, y_1^{n})= \int_{ {\cal S}} p(s_k,x_{k}|x_1^{k-1}, y_1^n) ds_k,\end{aligned}$$ where the probability inside the integral can be written as $$\begin{aligned} \label{twoterms}p(s_k, x_{k}|x_1^{k-1}, y_1^n) &= p(s_k| x_1^{k-1}, y_1^n)p(x_{k}|s_k,x_1^{k-1}, y_1^n) \nonumber \\ & =p(s_k|x_1^{k-1},y_1^n)p(x_{k}|s_k, y_k),\end{aligned}$$ where the second equality comes from (\[memorylesssource\]). In what follows, the first factor in (\[twoterms\]) is approximated by $p(s_k| x_1^{k-1}, y_1^{k})$. We point out that the proposed approximation is likely to be tight, because the conditioning on $y_{k+1}^n$ gives only a weak, non-data-aided contribution to the desired probability. The proposed approximation leads to $$\begin{aligned} \label{saturation2} & q(x_{k}| x_1^{k-1}, y_1^{n})= \int_{ {\cal S}} q(s_k|x_1^{k-1}, y_1^{k})p(x_k|s_k,y_k) ds_k \nonumber \\ & =\int_{ {\cal S}} \frac{q(s_k|x_1^{k-1}, y_1^{k-1})p(y_k|s_k)} {p(y_k|x_1^{k-1},y_1^{k-1})}\frac{p(y_k,x_k|s_k)}{p(y_k|s_k)} ds_k \nonumber \\ & \propto \int_{ {\cal S}} q(s_k|x_1^{k-1}, y_1^{k-1})p(y_k,x_k|s_k) ds_k,\end{aligned}$$ which, after normalization, can be used in (\[lb\]) to get the desired bound. The first factor inside the integral (\[saturation2\]) is the predictive probability of Bayesian tracking, while the second factor is a memoryless term that comes from the channel model (\[sourcechanneltransition\]).
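The Monte Carlo evaluation in (\[lb\]) — averaging $-\log_2 q(\cdot)$ along a simulated realization — can be illustrated on a channel where the demodulation probability is known in closed form. The binary symmetric channel below is purely an illustrative assumption (the phase noise channel requires the Bayesian machinery above): with the exact $p(x_k|y_k)$ in place of $q$, the sample average converges to the binary entropy $h_2(\epsilon)$.

```python
import math
import random

random.seed(2)

def h2(p):
    """Binary entropy function in bits."""
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

eps, n = 0.1, 50000
acc = 0.0
for _ in range(n):
    x = random.randrange(2)
    y = x ^ (random.random() < eps)     # BSC: flip the bit with probability eps
    q = 1.0 - eps if x == y else eps    # exact demodulation probability p(x|y)
    acc += -math.log2(q)
H_X_given_Y = acc / n                   # sample average of -log2 q(x_k | y_k)
```

When $q$ is only an approximation, the same average still upper-bounds the true conditional entropy rate, by (\[klbound\]).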
Simulation Results ================== The frequency noise used in the simulations is obtained by filtering white Gaussian noise through the transfer function $$H(z)=\frac{(1-\beta_1 z^{-1})(1-\beta_2z^{-1})}{1-\alpha_1 z^{-1}}. \label{smmodel}$$ Special cases of (\[smmodel\]) are obtained with $\beta_1=\alpha_1$ and $\beta_2=1$, leading to white phase noise, and with $\beta_1=0, \ \beta_2=1, \ \alpha_1=1$, leading to Wiener phase noise. Model (\[smmodel\]) is proposed in [@SM] as an approximation to the phase noise spectrum of real-world microwave local oscillators, and it has been used with $\alpha_1 = 0.9999$, $\beta_1=0.9937$, $\beta_2=0.7286$ to get the simulation results that are hereafter presented. The lower bound is computed by adopting as the Bayesian tracking method the linearized predictive Kalman filter, as in [@oekalman] and [@indo], while for the upper bound we use both the Kalman filter and the particle filter. Figure \[4\] reports the results for 4-ary quadrature-amplitude modulation (QAM), while Fig. \[16\] reports the results for 16-QAM, in both cases with two values of $\gamma$. The two figures show that the particle filter greatly improves the upper bound over the Kalman filter, especially for large $\gamma$. In contrast, the lower bound based on the predictive Kalman filter is so tight that there is no need to use a particle filter for demodulation, even for large values of $\gamma$. We have observed that the Kalman filter often produces a covariance $\Sigma_k$ with a determinant that is much lower than that obtained by the particle filter. What happens is that the folded Gaussian distribution (\[postkalman\]) is sampled at the state visited by the simulation, and, when this state is far from the mean vector, the Gaussian is sampled on its tails. In this event, the poor estimation of the covariance leads to dramatically large errors in the evaluation of the differential entropy rate $h(S|X,Y)$.
Conversely, the entropy rate $\overline{H}(X|Y)$ that appears in the lower bound is based on the integral of the aforementioned Gaussian distribution, hence it is less sensitive to errors in the estimated covariance. ![Upper and lower bounds to the information rate of 4-QAM. []{data-label="4"}](UpperLower_20_40_Model_TCAS08_QPSK_1.eps){width=".95\columnwidth"} ![Upper and lower bounds to the information rate of 16-QAM. []{data-label="16"}](UpperLower_20_40_Model_TCAS08_16QAM.eps){width=".95\columnwidth"} Conclusion ========== We have presented upper and lower bounds to the constrained information rate transferred through the multiplicative phase noise channel with ARMA phase noise. From the results it appears that the upper and lower bounds are so close to each other that we can claim to have computed the actual information rate, at least for the second-order ARMA phase noise studied in the simulation. An important experimental result presented in the paper is that demodulation based on a predictive linearized Kalman filter aided by past data is virtually capacity achieving, at least in the examples studied in the paper. This is not surprising in view of the result obtained in [@forney] for the intersymbol interference (ISI) channel, which says that predictive filtering aided by past data (in the case of the ISI channel, the predictive decision-feedback equalizer) virtually leads to channel capacity. A practical means of replacing past data with the decisions coming from a capacity-achieving code is the interleaving scheme originally proposed by Eyuboglu in [@eyuboglu] for the ISI channel. Extensions of this principle to other channels can be found, for instance, in [@collins]. The computational complexity of demodulation via the Kalman filter can be lowered by using a time-invariant filter, as described in [@bridging]. [99]{} R.-J. Essiambre, G. Kramer, P. J. Winzer, G. J. Foschini, and B. Goebel, “Capacity limits of optical fiber networks,” *J. Lightw. Technol.*, vol. 28, no.
4, pp. 662–701, Feb. 15, 2010. M. Magarini, A. Spalvieri, F. Vacondio, M. Bertolini, M. Pepe, and G. Gavioli, “Empirical modeling and simulation of phase noise in long-haul coherent optical systems,” [*Optics Express*]{}, vol. 19, no. 23, pp. 22455-22461, Nov. 7, 2011. M. Peleg, S. Shamai (Shitz), and S. Galan, “Iterative decoding for coded noncoherent MPSK communications over phase-noisy AWGN channel,” [*Proc. IEE Commun.,*]{} vol. 147, pp. 87-95, Apr. 2000. G. Colavolpe, A. Barbieri, and G. Caire, “Algorithms for iterative decoding in the presence of strong phase noise,” [*IEEE Journal on Selected Areas in Commun.,*]{} vol. 23, no. 9, pp. 1748-1757, Sept. 2005. A. Barbieri and G. Colavolpe, “Soft-output decoding of rotationally invariant codes over channels with phase noise,” [*IEEE Trans. Commun.,*]{} vol. 55, no. 11, pp. 2125-2133, Nov. 2007. A. Spalvieri and L. Barletta, “Pilot-aided carrier recovery in the presence of phase noise,” [*IEEE Trans. Commun.*]{}, vol. 59, no. 7, pp. 1966-1974, July 2011. L. Barletta, M. Magarini, and A. Spalvieri, “Staged demodulation and decoding,” [*Optics Express*]{}, vol. 20, no. 21, pp. 23728–23734, Oct. 8, 2012. B. Goebel, R.-J. Essiambre, G. Kramer, P. J. Winzer, and N. Hanik, “Calculation of mutual information for partially coherent Gaussian channels with application to fiber optics,” [*IEEE Trans. Inf. Theory*]{}, vol. 57, no. 9, pp. 5720-5736, Sept. 2011. P. Hou, B. J. Belzer, and T. R. Fischer, “Shaping gain of the partially coherent additive white Gaussian noise channel,” [*IEEE Commun. Letters*]{}, vol. 6, no. 5, pp. 175-177, May 2002. J. Dauwels and H.-A. Loeliger, “Computation of information rates by particle methods,” [*IEEE Trans. Inf. Theory*]{}, vol. 54, no. 1, pp. 406-409, Jan. 2008. L. Barletta, M. Magarini, and A. Spalvieri, “Estimate of information rates of discrete-time first-order Markov phase noise channels,” [*IEEE Photon. Technol. Lett.*]{}, vol. 23, no. 21, pp. 1582–1584, Nov. 1, 2011. A.
Barbieri and G. Colavolpe, “On the information rate and repeat-accumulate code design for phase noise channels,” [*IEEE Trans. Commun.,*]{} vol. 59, no. 12, pp. 3223-3228, Dec. 2011. L. Barletta, M. Magarini, and A. Spalvieri, “The information rate transferred through the discrete-time Wiener’s phase noise channel,” *IEEE J. Lightw. Technol.*, vol. 30, no. 10, pp. 1480-1486, May 15, 2012. A. Lapidoth, “On phase noise channels at high SNR,” [*Inf. Theory Workshop,*]{} Oct. 2002. M. S. Arulampalam, S. Maskell, N. Gordon, and T. Clapp, “A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking,” *IEEE Trans. on Signal Proc.,* vol. 50, no. 2, pp. 174-188, Feb. 2002. D. Simon, *Optimal State Estimation.* New York: Wiley, 2006. A. Spalvieri and M. Magarini, “Wiener’s analysis of the discrete-time phase-locked loop with loop delay,” *IEEE Trans. Circuits and Systems II: Express Briefs,* vol. 55, no. 6, pp. 596-600, June 2008. L. Barletta, M. Magarini, and A. Spalvieri, “A new lower bound below the information rate of Wiener phase noise channel based on Kalman carrier recovery,” [*Optics Express*]{}, vol. 20, no. 23, pp. 25471-25477, Nov. 5, 2012. L. Barletta, M. Magarini, and A. Spalvieri, “New lower bound below the information rate of phase noise channel based on Kalman carrier recovery,” [*International Journal on Electrical Engineering and Informatics*]{}, vol. 4, no. 4, pp. 597-607, Dec. 2012. J. M. Cioffi, G. P. Dudevoir, M. V. Eyuboglu, and G. D. Forney, “MMSE decision-feedback equalizers and coding—part I: equalization results,” *IEEE Trans. Commun.,* vol. 43, no. 10, pp. 2582-2594, Oct. 1995. M. V. Eyuboglu, “Detection of coded modulation signals on linear severely distorted channels using decision-feedback noise prediction and interleaving,” *IEEE Trans. Commun.,* vol. 36, no. 4, pp. 401-409, Apr. 1988. T. Li and O. M. Collins, “A successive decoding strategy for channels with memory,” *IEEE Trans. Inf. Theory,* vol. 53, no. 2, pp.
628–646, Feb. 2007. L. Barletta, M. Magarini, and A. Spalvieri, “Bridging the gap between Kalman filter and Wiener filter in carrier phase tracking,” [*IEEE Photon. Technol. Lett.*]{}, vol. 25, no. 11, pp. 1035–1038, Jun. 1, 2013.
--- abstract: 'We conduct a comprehensive theoretical and numerical investigation of the pollution of pristine gas in turbulent flows, designed to provide useful new tools for modeling the evolution of the first generation of stars. The properties of such Population III (Pop III) stars are thought to be very different than those of later stellar generations, because cooling is dramatically different in gas with a metallicity below a critical value $Z_{\rm c},$ which lies between $\sim 10^{-6}$ and $\sim 10^{-3} Z_\odot$. The critical value is much smaller than the typical overall average metallicity, $\left< Z \right>$, and therefore the mixing efficiency of the pristine gas in the interstellar medium plays a crucial role in determining the transition from Pop III to normal star formation. The small critical value, $Z_{\rm c}$, corresponds to the far left tail of the probability distribution function (PDF) of the metal abundance. Based on closure models for the PDF formulation of turbulent mixing, we derive evolution equations for the fraction of gas, $P$, lying below $Z_{\rm c},$ in statistically-homogeneous compressible turbulence. Our simulation data shows that the evolution of the pristine fraction $P$ can be well approximated by a generalized “self-convolution” model, which predicts that $\dot P = - \frac{n}{\tau_{\rm con}} P (1-P^{1/n}),$ where $n$ is a measure of the locality of the mixing or PDF convolution events and the convolution timescale $\tau_{\rm con}$ is determined by the rate at which turbulence stretches the pollutants. Carrying out a suite of numerical simulations with turbulent Mach numbers ranging from $M = 0.9$ to $6.2,$ we are able to provide accurate fits to $n$ and $\tau_{\rm con}$ as a function of $M$, $Z_{\rm c}/\left<Z\right>,$ and the length scale, $L_{\rm p}$, at which pollutants are added to the flow. For pristine fractions above $P= 0.9,$ mixing occurs only in the regions surrounding blobs of pollutants, such that $n=1$. 
For smaller values of $P,$ $n$ is larger as the mixing process becomes more global. We show how these results can be used to construct one-zone models for the evolution of Pop III stars in a single high-redshift galaxy, as well as subgrid models for tracking the evolution of the first stars in large cosmological numerical simulations.' author: - 'Liubin Pan, Evan Scannapieco, & John Scalo' title: Modeling the Pollution of Pristine Gas in the Early Universe --- Introduction ============ All stable elements heavier than lithium were forged in stars. Big bang nucleosynthesis produced helium efficiently, but it was halted by the expansion of the universe before it could go much further (Walker et al. 1991). On the other hand, all stars observed to date have substantial mass fractions of carbon, silicon, iron and other elements that are the products of the final stages of stellar evolution. In fact, even the most pristine stars observed (Cayrel et al. 2004; Frebel et al. 2008; Caffau et al. 2011) have been polluted by this material. The earliest stellar generation, referred to as Population III (Pop III), is missing. While this absence could be due to the formation of an extremely small number of metal-free stars, detailed theoretical studies suggest that it is more likely that these stars were too massive to survive to the present day (Scannapieco 2006; Brook 2007). In fact, the absence of heavy elements drastically decreases the cooling rates in collapsing star-forming gas, such that primordial gas clouds would have been much less susceptible to fragmentation, perhaps forming $10^3$–$10^4$ solar mass star-forming clumps (Hutchins 1976; Abel, Bryan, & Norman 2000; Bromm, Coppi, & Larson 2002; Bromm & Loeb 2003).
Furthermore, the strong accretion rates onto the central protostellar cores in such pristine clumps cannot be arrested by radiation pressure, bipolar outflows, or rotation (Ferrara 2001), meaning that these regions may have formed stars with masses hundreds of times greater than that of the sun. On the other hand, recent work suggests that physical processes such as enhanced HD cooling in shock-compressed primordial gas (Johnson & Bromm 2006), a lack of magnetic fields in primordial turbulent clouds (Padoan 2007), photoionization of turbulent primordial clouds (Clark 2011a), fragmentation of the protostellar accretion disks (Stacy 2010, Clark et al. 2011b), early termination of accretion (McKee & Tan 2008, Hosokawa 2011), and gravitational torques (Greif 2012) may have led to primordial stars with masses $\approx 10 M_\odot$ in single or perhaps binary systems (Turk, Abel, & O’Shea 2009). Even at these comparatively low masses, direct detections of primordial stars would be possible only through observations of the high-redshift universe, relying on what is likely to have been an extended transition between metal-free and pre-enriched (Population II/I) star formation (Scannapieco 2003; Jimenez & Haiman 2006; Trenti & Stiavelli 2007; 2009; Maio 2010), as well as the unusual observable signatures of metal-free stars. During their lifetimes, for example, the lack of heavy elements in Pop III stars drastically reduces their opacities, resulting in much higher surface temperatures and strong ultraviolet spectroscopic features that would distinguish them from current stellar populations (Schaerer 2002; Nagao 2008). Alternatively, if they were extremely massive, Pop III stars could be detectable as they ended their lives as tremendously powerful pair-production supernovae (Bond 1984; Heger & Woosley 2002; Scannapieco 2005; Whalen 2012).
Furthermore, the reliance on H$_2$ + HD cooling in primordial gas may have effects beyond the masses of stars, such as affecting the phase structure of the interstellar medium and the star formation rate (Norman & Spaans 1997). Whatever the detection method, when and where metal-free gas condensed into stars is a question of fundamental importance in planning searches for this remarkable early generation of stars. On cosmological scales, the key issue is the time it takes for heavy elements to propagate from one galaxy to another. As shown in Scannapieco et al. (2003), the distances between these oases of early star formation are so vast that for several hundred million years the universe was divided into two regions: one in which galaxies formed out of material that was already polluted with heavy elements well above the minimum “critical mass fraction," $Z_{\rm c},$ at which normal stellar evolution occurs (Schneider et al. 2003; Bromm & Loeb 2003; Omukai et al. 2005), and one in which galaxies were formed from initially pristine material. The evolution of initially pristine galaxies is especially interesting, as it depends on two important theoretical issues. The first of these is the uncertain value of $Z_{\rm c},$ which is expected to lie in the range from $\sim 10^{-8}$ to $\sim 10^{-5}$ (or from $10^{-6}$ to $10^{-3}$ times solar metallicity), depending on whether the cooling is dominated by dust grains (Omukai et al. 2005) or by the fine structure lines of carbon and oxygen (Bromm & Loeb 2003). The second important issue is the rate at which the gas within the galaxy can be polluted above this critical value by the turbulent mixing of heavy elements (Pan & Scalo 2007). Within a given galaxy, the key quantity to characterize the transition is the fraction, $P(Z_{\rm c}, t)$, of the interstellar gas with metal concentration $Z$ below $Z_{\rm c}$ as a function of time.
The temporal behavior of this fraction depends not only on the rate at which new sources of metals are released into the interstellar gas, but, more importantly, on the transport and mixing of the metals in the interstellar gas. For example, a high mixing efficiency would result in a rapid decrease in $P(Z_{\rm c}, t)$, and hence a sharp transition as the average concentration exceeds the threshold $Z_{\rm c}$. On the other hand, a low mixing efficiency would lead to a gradual transition. The interstellar medium is known to be turbulent and highly compressible, and the turbulent motions are likely to be supersonic. Therefore, understanding mixing in supersonic turbulence is crucial to understanding the evolution of primordial gas in early galaxies. The evolution of the pristine fraction was considered by Oey (2000, 2003) in the context of the sequential enrichment model, which, however, does not correctly capture the physics of mixing in interstellar turbulence. In Pan, Scannapieco, & Scalo (2012; hereafter PSS), we developed a theoretical approach to model the evolution of the pristine fraction in statistically homogeneous turbulence. The starting point of our theoretical model was the PDF method for turbulent mixing, since the pristine fraction $P(Z_{\rm c}, t)$ corresponds to the far left tail of the metallicity PDF. The PDF equation for passive scalars cannot be solved exactly, and we adopted several closure models from the literature and derived predictions for the evolution of the pristine fraction. Using numerical simulations, we showed that a class of PDF closure models, called self-convolution models, provided successful fitting functions to the evolution of $P(Z_{\rm c}, t)$ for a limited range of flow Mach numbers and pollution properties.
These models are based on the physical picture of turbulence stretching pollutants and causing a cascade of concentration structures toward small scales (Pan & Scalo 2007), a picture that is generally valid in turbulent flows at all Mach numbers (Pan & Scannapieco 2010; hereafter PS10). Mixing occurs as the scale of the structures becomes sufficiently small for molecular diffusivity to operate efficiently, and the homogenization between neighboring structures can be described as a convolution of the concentration PDF. As discussed in more detail below, the models depend on two major parameters: $ \tau_{\rm con},$ which sets the characteristic timescale for convolution of the metal abundance PDF through turbulent stretching of concentration structures, and $n$, which quantifies the degree of spatial locality of the PDF convolution process. Here we use a suite of numerical simulations to expand these results and show that the generalized self-convolution model provides good fits for all the turbulence and pollutant conditions relevant for primordial star formation. Note that, besides affecting the temperature directly, H$_2$ + HD cooling without heavy elements makes neutral primordial gas less compressible (Scalo & Biswas 2002; Spaans & Silk 2005), emphasizing the importance of studying the Mach number dependence of the pollution processes. By comparing this model to simulations with Mach numbers $M=0.9,$ 2.1, 3.5, and 6.2, in which pollutants are added at different length scales, and four different initial values of $P,$ we are able to obtain detailed fits of $ \tau_{\rm con}$ and $n$ over all Mach numbers, initial pollution fractions, and pollution scales of interest.
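The self-convolution equation $\dot P = - \frac{n}{\tau_{\rm con}} P (1-P^{1/n})$ can be integrated in closed form: the substitution $u=P^{1/n}$ reduces it to the logistic equation $\dot u = -u(1-u)/\tau_{\rm con}$. The sketch below (parameter values are illustrative, not fitted values from the simulations) cross-checks a Runge-Kutta integration against this closed form:

```python
import math

def p_exact(t, n, tau, p0):
    """Closed form via u = P^(1/n), which obeys the logistic du/dt = -u(1-u)/tau."""
    u0 = p0 ** (1.0 / n)
    u = u0 / (u0 + (1.0 - u0) * math.exp(t / tau))
    return u ** n

def p_rk4(t_end, n, tau, p0, dt=0.01):
    """RK4 integration of dP/dt = -(n/tau) P (1 - P^(1/n))."""
    f = lambda p: -(n / tau) * p * (1.0 - p ** (1.0 / n))
    p, t = p0, 0.0
    while t < t_end - 1e-12:
        k1 = f(p)
        k2 = f(p + 0.5 * dt * k1)
        k3 = f(p + 0.5 * dt * k2)
        k4 = f(p + dt * k3)
        p += dt * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
        t += dt
    return p

# Illustrative parameters: n = 2, tau_con = 1 (in code time units), P(0) = 0.99.
p_num = p_rk4(5.0, n=2, tau=1.0, p0=0.99)
```

The closed form also makes the qualitative behavior transparent: the decay of $P$ is slow while $P$ is near unity and steepens once the convolution process becomes global.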
We tabulate and obtain empirical fits to our results, and show how they can be used both to make simple one-zone estimates of the evolution of $P$ within a single high-redshift galaxy and to construct a subgrid model for numerical simulations, which tracks the evolution of the primordial fraction below the resolution scale. The model is expected to improve the prediction for the evolution of the primordial gas fraction in early galaxies, and should also be applicable to any physical problem in which the unresolved, unmixed fraction needs to be tracked throughout a simulation. A realistic simulation for the pollution of the pristine gas in early galaxies needs to properly specify the driving mechanism of the interstellar turbulence. A variety of physical processes may contribute to turbulent motions in the interstellar medium. Turbulence can be produced at galactic scales, e.g., by gas infall from the halo or by merger events during the assembly of the galaxy. Supernova explosions by the first stars also contribute to the turbulent energy. In the present work, we adopt a solenoidal driving force in the simulations, which may be a good approximation if the interstellar turbulence is mainly driven by large-scale motions or instabilities associated with galaxy formation, mergers or interactions. On the other hand, if the primary source for turbulent energy is stellar winds and supernova explosions, the driving force may be compressive rather than solenoidal. In that case, several questions need to be addressed by future studies. First, as shown by Federrath et al. (2010), a compressively driven supersonic turbulent flow shows significantly different statistics from the solenoidal case. The amplitude of the density fluctuations is much larger due to stronger compressions, and the intermittency of the velocity field is significantly higher. We will speculate how these features may qualitatively affect the parameters in our convolution models.
A quantitative understanding may be obtained by a suite of simulations using driving forces at different degrees of compressibility. Second, if supernovae are the primary energy source of turbulence, the injection of heavy elements is highly correlated with the turbulent driving force. This is not accounted for in our simulations either. These issues are complicated and require a systematic investigation. This is, however, beyond the scope of the current paper, which is essentially an initial step that provides the theoretical framework and useful subgrid methodology for the modeling of the pristine gas pollution in early galaxies. The structure of this work is as follows. In §2 we introduce the PDF formulation for turbulent mixing, and discuss the fundamental mixing physics. In §3, we describe the self-convolution closure models, and show how they can be used to predict the evolution of the pristine fraction in statistically homogeneous turbulence. In §4 we describe our suite of numerical simulations, and in §5 we use them to test and constrain the self-convolution models over the full range of turbulence and pollution conditions necessary to model primordial star formation. Having fixed the parameters in the theoretical models with our simulation results, we show how they can then be applied to one-zone models of high-redshift galaxies in §6, and in §7 we show how they can be used to construct subgrid models for the unresolved primordial fraction in large numerical simulations. A summary of our conclusions is given in §8. The PDF Formulation for Turbulent Mixing ======================================== The mixing of heavy elements in the interstellar medium can be studied by tracking the evolution of the concentration field, $C({\bf x}, t)$, defined as the ratio of the local density of these elements to the total gas density.
The concentration field obeys the advection-diffusion equation, $$\frac{\partial C} {\partial t}+ v_i \frac{\partial C}{\partial x_i} = \frac{1}{\rho} \frac{\partial}{\partial x_i} \left(\rho \gamma \frac{\partial C}{\partial x_i}\right) + S({\bf x},t), \label{advection}$$ where $\rho({\bf x}, t)$ and ${\bf v}({\bf x}, t)$ denote the density and velocity fields in interstellar turbulence, $\gamma$ is the molecular diffusivity, and the term $S({\bf x},t)$ represents continuing sources of heavy elements or pollutants. The concentration field, $C({\bf x}, t)$, could also represent the local abundance of a specific element, but here we are interested in the mass fraction of all metals at a given location. The pristine mass fraction in a flow corresponds to the low tail of the probability distribution function (PDF) of the pollutant concentration, so we adopt a PDF approach for turbulent mixing. This approach was first established for the turbulent velocity field (Monin 1967; Lundgren 1967) and was later extended to mixing of passive or reactive species in turbulent flows (Ievlev 1973; Dopazo and O’Brien 1974; Pope 1976; O’Brien 1980; Pope 1985; Kollmann 1990; Dopazo et al. 1997). It has been particularly successful in the field of reacting turbulent flows (e.g., Haworth 2010). However, most of the work on the PDF modeling of turbulent mixing has been dedicated to incompressible or weakly compressible turbulence. In order to apply the method to the interstellar media of galaxies, where strong density fluctuations exist, PSS generalized the PDF formulation to mixing in highly compressible turbulent flows at large Mach numbers, emphasizing the importance of using a density-weighting scheme.
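As a concrete illustration of the advection-diffusion equation, the sketch below evolves a one-dimensional problem with an explicit finite-difference scheme, assuming (for simplicity, and unlike the compressible case treated in this paper) a uniform density, a constant velocity, no source term, and a periodic domain; all parameter values are arbitrary choices for illustration.

```python
import numpy as np

# Minimal 1D sketch of the advection-diffusion equation (assumed setup:
# uniform density, constant velocity v, molecular diffusivity gamma,
# no source term, periodic domain; parameter values are arbitrary).
N, L = 256, 1.0
dx = L / N
x = np.arange(N) * dx
v, gamma = 0.5, 1.0e-3
dt = 0.2 * min(dx / abs(v), dx**2 / (2.0 * gamma))   # explicit stability limit

C = np.where(np.abs(x - 0.5) < 0.05, 1.0, 0.0)       # top-hat pollutant patch
mass0 = C.sum() * dx

for _ in range(4000):
    adv = -v * (C - np.roll(C, 1)) / dx                               # upwind advection
    dif = gamma * (np.roll(C, -1) - 2.0 * C + np.roll(C, 1)) / dx**2  # diffusion
    C = C + dt * (adv + dif)

assert abs(C.sum() * dx - mass0) < 1e-8   # both terms conserve total pollutant mass
assert C.max() < 1.0                      # only diffusion smooths the concentration peak
```

Note that advection alone merely translates and redistributes the patch without changing the mass fraction at each concentration level, which anticipates the discussion of the advection term in §2.1.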
As in PSS, here we use a statistical ensemble to define a density-weighted concentration PDF, $\Phi(Z; {\bf x}, t) \equiv \langle \rho \phi(Z; {\bf x}, t) \rangle/ \langle \rho({\bf x}, t) \rangle$, where $\phi(Z; {\bf x}, t) \equiv \delta(Z-C({\bf x}, t) )$ is the fine-grained PDF in a single realization, $\langle \cdot \cdot \cdot\rangle$ denotes the ensemble average, and $Z$ is the sampling variable. The probability distribution defined here at a given position and time is an average over many independent realizations in the statistical ensemble. The ensemble average density, $\langle \rho({\bf x}, t) \rangle$, is in general a function of ${\bf x}$ and $t$. In PSS, $\langle \rho({\bf x}, t) \rangle$ was implicitly assumed to be constant. Below we consider the more general case that accounts for the spatial and temporal variations of $\langle \rho({\bf x}, t) \rangle$. An important motivation for using a density-weighting factor is that, when studying mixing of primordial gas in early galaxies, it is appropriate to consider the mass fraction, rather than the volume fraction, of the interstellar gas with $Z \le Z_{\rm c}$. An equation for $\Phi(Z; {\bf x}, t)$ can be derived using the advection-diffusion equation (\[advection\]), for $C({\bf x}, t),$ and the continuity equation, for $\rho({\bf x}, t)$. Applying the same method as in Appendix A of PSS and accounting for the spatial and temporal dependence of $\langle \rho \rangle$, we find, $$\begin{gathered} \frac {\partial (\langle \rho \rangle \Phi)}{\partial t} + \frac{\partial} {\partial x_i} \left( \langle \rho \rangle \Phi {\langle v_i|C=Z \rangle}_{\rho} \right) = \hspace {5cm}\notag \\ \hspace {2cm}- \frac {\partial}{\partial Z} \left( \langle \rho \rangle \Phi {\left \langle \frac{1}{\rho} \frac{\partial} {\partial x_i} \left(\rho \gamma \frac{\partial C}{\partial x_i} \right) \left\vert \vphantom{\frac{1}{1}} \right.
C=Z \right \rangle}_{\rho} \right) \notag \\ \hspace {-0.6cm}-\frac {\partial}{\partial Z} \left( \langle \rho \rangle \Phi {\langle S|C=Z\rangle}_{\rho} \right), \label{pdfeq}\end{gathered}$$ where $\langle ...|C=Z \rangle_{\rho}$ denotes the conditional ensemble average with density weighting. For any physical quantity, $A({\bf x}, t)$, this ensemble average is defined as, $$\langle A|C=Z \rangle_{\rho} \equiv \frac{\langle \rho A|C=Z \rangle }{ \langle \rho|C=Z \rangle}, \label{densweightconditionalave}$$ where the conditional average $\langle ...|C=Z \rangle$ without density-weighting is evaluated by selecting and counting only those realizations satisfying the constraint that the concentration $C({\bf x}, t)$ at ${\bf x}$ and $t$ is equal to $Z$. Setting $\langle \rho({\bf x}, t) \rangle$ to be constant reduces eq. (\[pdfeq\]) to eq. (2.2) in PSS. Derivations of analogous PDF equations for passive and reacting scalar turbulence in the incompressible case can be found in, e.g., Pope (2000) and Fox (2003). The PDF equation is essentially a Liouville equation for the conservation of the concentration probability in phase space. In analogy to the Liouville equation in kinetic theory, the concentration field corresponds to the particle momentum and the advection-diffusion equation corresponds to the particle equation of motion (PSS). Although the PDF equation is derived in the context of a statistical ensemble, it is also useful for the study of scalar statistics in a single realization, i.e., the real flow. If the flow and the scalar statistics are spatially homogeneous, $\langle \rho ({\bf x},t)\rangle$ and $\Phi(Z; {\bf x},t)$ are independent of ${\bf x}$, and the ergodic theorem indicates that the statistics over an ensemble is equivalent to that over the spatial domain of a single realization.
This means that $\Phi(Z; {\bf x},t)$ is equal to the PDF, $\Phi(Z; t) = \int_V \rho \delta (Z-C({\bf x}, t)) d^3x / \int_V \rho d^3x$, computed from the density and concentration fluctuations over the entire volume, $V$, of the real flow domain. At galactic length scales, the assumption of statistical homogeneity is likely invalid. For example, coherent mean flows, such as galactic rotation, infall or outflow, may exist at large scales in the interstellar medium. Also, a large-scale metallicity gradient may develop if the star formation rate has a radial dependence, and the metallicity statistics can vary substantially from region to region. However, the ensemble-defined PDF $\Phi (Z; {\bf x},t)$ may still be used to study this spatial dependence if it is understood as corresponding to the concentration PDF for local fluctuations in a region of considerable size in the real flow. If the size is selected to be large enough to allow sufficient statistics, but small enough for local statistical homogeneity to be restored (i.e., considerably smaller than the characteristic scale for the mean flow or mean concentration gradient), then the ergodic theorem will apply locally, and $\Phi(Z; {\bf x},t)$ will represent the PDF in the region around ${\bf x}$. In fact, in an attempt to build a subgrid model for the pollution of pristine interstellar gas, a concentration PDF characterizing the fluctuations in local regions is defined by applying a spatial filter to the real flow (see §7 and Appendix A), and the equation derived for the filtered PDF is identical to eq. (\[pdfeq\]) for $\Phi (Z; {\bf x},t)$ over an ensemble. This confirms the equivalence of $\Phi (Z; {\bf x},t)$ to the local concentration PDF in the real flow and the applicability of eq. (\[pdfeq\]) to the study of mixing in a statistically inhomogeneous setting. The last three terms in eq. (\[pdfeq\]) correspond to turbulent advection, molecular diffusivity and source terms in the advection-diffusion equation.
We give a brief general discussion for each term below, and refer the interested reader to PSS for more details.

The Advection Term
------------------

The second term in eq. (\[pdfeq\]) is the advection term. As it takes a divergence form, it conserves the global PDF with density-weighting, i.e., the integral of $\langle \rho \rangle \Phi(Z; {\bf x}, t)$ over the entire flow domain. This reflects a fundamental aspect of mixing physics: the turbulent velocity field does not homogenize at all by itself. Intuitively, a velocity field moves, stretches and redistributes the concentration field, but it does not change the [*mass*]{} fraction of the fluid elements at a given concentration level, and thus does not truly mix. On the other hand, without density-weighting, the advection term in the PDF equation would not be a divergence term if the flow is compressible, but would instead be a term representing the effect of expansions and compressions on the volume fraction of fluid elements at a given concentration. This effect is different from mixing, and makes the PDF modeling more complicated. Therefore, in addition to the practical reasons mentioned above, adopting a density-weighting scheme is also strongly motivated on a theoretical basis. The advection term vanishes and need not be considered in a flow that is statistically homogeneous. In the case of statistical inhomogeneity, the term corresponds to the transport of the concentration PDF by the velocity field: Turbulent advection causes changes in the local PDF as it moves the fluid elements around. If one is interested in the concentration fluctuations in a local region, this transport effect must be accounted for carefully. A similar advection term exists in the equation for the filtered PDF of the local concentration fluctuations, which is used to build a subgrid model for the pollution of pristine gas in early galaxies in §7.
In that case, a proper treatment of the advection term is essential, as it is responsible for the flux of pristine mass fraction into or out of a computation cell due to the velocity field (see §7). However, an exact treatment of the advection term is impossible due to the usual difficulty in turbulence theory known as the closure problem (see, e.g., Pope 2000). A similar problem exists for the diffusivity term, which is discussed in more detail in §2.2. We will adopt the commonly-used eddy-diffusivity approximation to model the advection term in §7.

The Diffusivity Term
--------------------

As shown in Pan & Scalo (2007) and PSS, the molecular diffusivity term in eq. (\[pdfeq\]), i.e., the first term on the right hand side, is the only term responsible for the homogenization of the concentration fluctuations. The term can be rewritten as (see Appendix A), $$\begin{gathered} - \frac {\partial}{\partial Z} \left( \langle \rho \rangle \Phi {\left \langle \frac{1}{\rho} \frac{\partial}{\partial x_i} \left(\rho \gamma \frac{\partial C} {\partial x_i} \right) \left\vert \vphantom{\frac{1}{1}} \right. C=Z \right\rangle}_{\rho} \right) = \frac{\partial}{\partial x_i} \left\langle \rho \gamma \frac{\partial \phi}{ \partial x_i} \right\rangle \notag\\ \hspace{1.5cm}- \frac {\partial^2}{\partial Z^2} \left( \langle \rho \rangle \Phi \left\langle \gamma \left(\frac{\partial C} {\partial x_i} \right)^2 \left\vert \vphantom{\frac{1}{1}} \right. C=Z\right \rangle_{\rho} \right), \label{ensemblediffusivityterm}\end{gathered}$$ where the first term on the right hand side is a spatial diffusion of the fine-grained concentration PDF (defined in §2) by the molecular diffusivity. As a divergence term, it conserves the global PDF and does not contribute to true homogenization. The last term can be thought of as an anti-diffusion process in concentration space, as the coefficient is negative definite. It continuously narrows the PDF toward the mean value.
Unfortunately, this diffusivity term does not have an exact or closed form. As it involves concentration gradients, it is nonlocal and dependent on the two-point concentration PDF. Deriving an equation for the two-point PDF gives rise to terms that require knowledge of three-point statistics, and so on, leading to a chain of multipoint PDF equations similar to the BBGKY hierarchy in kinetic theory (Lundgren 1967; Dopazo and O’Brien 1974). Assumptions must be made to truncate the hierarchy to obtain a closed set of equations. This is the so-called closure problem. In PSS, we considered a number of closure models and showed that a class of models based on the convolution of the concentration PDF is particularly successful in fitting the simulation results for the pollution of pristine material in turbulent flows. One of these convolution models for closure of the diffusivity term was used in our earlier modeling of the primordial fraction (Pan & Scalo 2007). We will summarize these models in §3. Although the diffusivity term lacks an apparent dependence on the velocity field, an efficient homogenization of the concentration field does rely on the existence of turbulent motions. The action of the diffusivity term is very slow at large length scales where the pollutants are injected, because the molecular diffusivity $\gamma$ is usually tiny in most natural environments including the interstellar medium. For example, Pan & Scalo (2007) estimated $\gamma$ for the Galaxy’s ISM, weighted by residence time in different phases of the neutral gas, to be about $10^{20}$ cm$^{2}$ s$^{-1}$, with a corresponding diffusivity scale of $\simeq 0.06(L_{100}/v_{10})^{1/2}$ pc, where $L_{100}$ is the galactic turbulence integral scale in units of 100 pc, and $v_{10}$ is the turbulent rms velocity in units of 10 km s$^{-1}$. This is the scale to which turbulence must stretch the pollutants in order for mixing to occur.
Therefore, mixing by molecular diffusivity is negligible in the absence of a velocity field. A turbulent velocity can act as a catalyst and significantly accelerate the mixing process. This implicit role of turbulence in scalar homogenization is through the dependence of the diffusivity term on the concentration gradients (see eq. (\[pdfeq\])). By continuously stretching the pollutants, turbulence produces structures at smaller and smaller scales, resulting in an enormous increase in the concentration gradients. Once the structures reach a small scale called the diffusion scale, where molecular diffusivity operates faster than turbulent stretching, they are homogenized efficiently by the diffusivity term. This suggests that the mixing timescale is essentially determined by turbulent stretching, even though the velocity itself does not truly mix. It is the cooperation of molecular diffusivity and turbulent motions that gives a significant mixing efficiency.

The Source Term
---------------

The last term in eq. (\[pdfeq\]) is the source term, corresponding to the injection of new pollutants into the turbulent flow. In general, pollutants are any source materials with a composition pattern different from that in the existing flow. Thus for mixing of heavy elements in the interstellar media of galaxies, the source term would include both ejecta from supernovae and stellar winds and, if it exists, infall of low-metallicity or primordial gas. To evaluate the source term, it is actually not necessary to compute its conditional average form in eq. (\[pdfeq\]). Instead, it can be estimated by directly considering the rate at which the pollutants are injected and how they affect the concentration PDF in the flow. For example, assuming that the supernova ejecta are nearly pure metals, the source term for the supernova contribution would take a delta function form at $Z=1$ with a coefficient depending on the supernova rate, ejecta mass, etc. (see §6 and Pan and Scalo 2007).
On the other hand, a primordial infall would give a delta function at $Z=0$. Therefore, the effect of continuous sources of primordial gas and new metals is to force spikes in the concentration PDF at small and large concentration values, respectively.

Modeling the Diffusivity Term
=============================

The primary goal of this study is to investigate how the diffusivity term in the PDF equation, representing the homogenization by molecular diffusivity catalyzed by turbulent motions, reduces the fraction of pristine material in a turbulent flow. To understand the fundamental physics, we consider an idealized problem: mixing of decaying scalars (i.e., $S({\bf x}, t)=0$) in statistically stationary and homogeneous turbulence. The initial scalar field is also assumed to be statistically homogeneous. Clearly, this idealized problem is much simpler than a realistic galactic environment. However, the simplified setting is extremely useful for understanding the underlying physics. As discussed in §2, under the assumption of statistical homogeneity, the advection term vanishes and the PDF, $\Phi(Z; {\bf x}, t)$, is independent of ${\bf x}$ and is equivalent to that computed from the spatial fluctuations in a single realization. With these simplifications, the PDF equation becomes, $$\frac {\partial \Phi (Z; t)}{\partial t} = - \frac {\partial^2}{\partial Z^2} \left(\Phi \left\langle \gamma \left(\frac{\partial C} {\partial x_i} \right)^2 \left\vert \vphantom{\frac{1}{1}} \right. C=Z\right \rangle_{\rho} \right) \label{simplepdfeq}$$ where we used eq. (\[ensemblediffusivityterm\]). The diffusivity term is the only term in the simplified PDF equation. In analogy with the mixing of primordial gas in early galaxies, we set the initial condition of the decaying scalar to be bimodal, consisting of pure pollutants ($Z=1$) and completely unpolluted flow ($Z=0$).
This corresponds to a double-delta function form for the initial concentration PDF, $$\Phi(Z; 0) = P_0 \delta(Z) + H_0 \delta(Z-1) \label{initialpdf}$$ where $P_0$ and $H_0$ are the initial probabilities/fractions of pristine gas and pollutants, respectively, and $P_0 + H_0 = 1$ from normalization. Before introducing closure models for the diffusivity term, we discuss the evolution of the concentration variance, which helps reveal the general physics of turbulent mixing. In terms of the density-weighted PDF, the average concentration with density weighting is written as $\langle Z \rangle \equiv \int Z \Phi(Z; t) dZ $, which is equal to $\langle {\rho} C \rangle/ \langle {\rho} \rangle$. Similarly, the density-weighted variance is expressed as $\langle (\delta Z)^2 \rangle \equiv \int (Z- \langle Z \rangle)^2 \Phi(Z; t) dZ $, which is equivalent to $\langle \rho (\delta C)^2 \rangle/ \langle \rho \rangle$ with $\delta C = C - \langle \rho C \rangle/\langle \rho \rangle$ being the fluctuating part of the concentration field. Taking the second-order moment of eq. (\[simplepdfeq\]) yields $\partial_t \langle {\rho} (\delta C)^2 \rangle = - 2 \langle {\rho} \gamma ( \partial_i C) ^2 \rangle$, which can also be derived directly from the advection-diffusion equation using the assumption of statistical homogeneity (see PS10). We therefore have, $$\frac{d \langle (\delta Z)^2 \rangle}{dt} = - \frac{\langle (\delta Z)^2 \rangle}{\tau_{\rm m}}. \label{ensemblevariance}$$ The mixing timescale, $\tau_{\rm m}$, is the ratio of the concentration variance to its dissipation rate, $$\tau_{\rm m} = \langle {\rho} (\delta C)^2 \rangle/(2 \langle {\rho} \gamma (\partial_i C)^2 \rangle).$$ Clearly, $\tau_{\rm m}$ is the timescale for the variance decay, and thus also characterizes the rate at which the diffusivity term reduces the PDF width.
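For illustration, the density-weighted PDF and the moments just defined can be estimated from snapshot data by histogramming the concentration field with the density as weight. The sketch below uses synthetic (purely hypothetical) density and concentration arrays and checks that the moments computed from the binned PDF agree with the direct ratios $\langle \rho C \rangle/\langle \rho \rangle$ and $\langle \rho (\delta C)^2 \rangle/\langle \rho \rangle$ up to binning error.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic single-realization snapshot (assumed, for illustration only):
# a lognormal-like density field and an independent concentration field.
rho = rng.lognormal(mean=0.0, sigma=1.0, size=200_000)
C = rng.beta(0.5, 2.0, size=200_000)

# Density-weighted concentration PDF on a Z grid (histogram estimator).
edges = np.linspace(0.0, 1.0, 201)
hist, _ = np.histogram(C, bins=edges, weights=rho)
Phi = hist / (rho.sum() * np.diff(edges))       # normalized so that ∫ Phi dZ = 1
Zmid = 0.5 * (edges[:-1] + edges[1:])
dZ = np.diff(edges)

Zbar_pdf = np.sum(Zmid * Phi * dZ)              # <Z> from the binned PDF
Zbar_direct = np.sum(rho * C) / rho.sum()       # <rho C>/<rho>
var_pdf = np.sum((Zmid - Zbar_pdf) ** 2 * Phi * dZ)
var_direct = np.sum(rho * (C - Zbar_direct) ** 2) / rho.sum()

assert abs(np.sum(Phi * dZ) - 1.0) < 1e-9       # normalization
assert abs(Zbar_pdf - Zbar_direct) < 1e-3       # agreement up to binning error
assert abs(var_pdf - var_direct) < 1e-3
```

Since both estimates use the same samples, the residual difference comes only from assigning each sample to its bin midpoint, not from statistical noise.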
As discussed in §2, the mixing timescale, $\tau_{\rm m}$, depends on the rate of turbulent stretching, which produces concentration structures at small scales and feeds molecular diffusivity with large concentration gradients. In the classical phenomenology for turbulent mixing, the continuous production of small-scale structures is described as a cascade, in the sense that the process proceeds progressively faster toward smaller scales. The picture is similar to the cascade of kinetic energy. It predicts that the mixing timescale is determined mainly by the eddy turnover time at the scale where the pollutants are injected, but is insensitive to the small diffusion scale, where the molecular diffusivity acts to homogenize. The prediction has been confirmed by PS10 using simulated supersonic turbulent flows with a solenoidal driving force. They found that, at all Mach numbers explored, the mixing time was close to the eddy turnover time at the pollutant injection scale, suggesting that the cascade picture, originally proposed for mixing in incompressible flows, is valid also for highly compressible turbulence. PS10 also found that compressible modes in solenoidally-driven supersonic turbulence do not make a significant contribution to the cascade of concentration structures to small scales. Compressible modes consist of both expansions and compressions. As passive scalars simply follow the flow motions, the compression events would decrease the length scale of the concentration structures, or equivalently increase the concentration gradients. This contributes to enhancing the mixing rate. On the other hand, the expansion events would cause the mixing process to slow down. The two opposite effects tend to counteract each other. However, they do not exactly cancel out, and the effect of compressions appears to win slightly.
Using the density fluctuations as a measure for the strength of the compression events, PS10 found that, in solenoidally driven flows, the net contribution of compressible modes to the enhancement of the concentration gradients is much smaller than that of the solenoidal modes (see §5 of PS10). A limitation in the effect of compressible modes on mixing is that the squeezing effect by compressions is not continuous due to the gas pressure. It is likely that a compressed region would expand before being squeezed by a second compression event. Such a limitation does not exist in the stretching by solenoidal modes, which operates continuously and is not limited by the gas pressure. The stretching effect by incompressible modes appears to be the primary “mixer” in the simulated flows with solenoidal driving at all Mach numbers. As a consequence, a useful measure for the mixing efficiency would be the fraction of energy contained in solenoidal modes in the inertial range of the flow, which is responsible for the cascade of passive scalars toward the diffusion scale. A statistical analysis of the simulated velocity fields by PS10 showed that the solenoidal energy fraction in the inertial range decreases with $M$ for $M\lsim 3$ and then saturates at an equipartition value of 2/3 at $M\gsim 3$. This provides a satisfactory explanation for the behavior of the mixing timescale normalized to the flow dynamical time as a function of $M$. The normalized mixing timescale increases with $M$ for $M\lsim 3$ and saturates at larger $M$. This finding supports our argument above that compressible modes are less efficient at enhancing mixing in solenoidally driven flows, and, with a larger fraction of compressible energy in the inertial range, the mixing is slower. It remains to be checked if the normalized timescale as a function of $M$ has the same behavior in supersonic turbulence with completely compressive driving.
First, quantitatively, the energy fraction of solenoidal or compressible modes at inertial-range scales as a function of $M$ in fully-developed flows with compressive driving may be different from the solenoidal case. Second, as shown in Federrath et al. (2010), at the same $M$, the density fluctuations in a compressively driven flow are much stronger. This implies that the net effect of compressible modes on amplifying the concentration gradients would be more efficient than in the solenoidal driving case. If the contribution from compressible modes to the mixing efficiency in highly supersonic compressively driven flows is comparable to or even greater than that of the solenoidal modes, the behavior of $\tau_{\rm m}$ as a function of $M$ may be qualitatively different, with the normalized timescale decreasing with $M$ at sufficiently high $M$. On the other hand, if compressible modes in supersonic flows with compressive driving are still less efficient at enhancing mixing than solenoidal modes, one may expect a similar behavior for the normalized mixing timescale. We will investigate these possibilities in a future work.

Self-convolution PDF Models
---------------------------

A variety of closure models have been developed for the diffusivity term in the PDF equation. PSS considered several existing models from the literature, including the mapping closure model (Chen et al. 1989), based on an approximation for the exact but unclosed form of the diffusivity term, and a class of models, referred to as self-convolution models by PSS, based largely on a physical picture of the turbulent mixing process (Curl 1963, Dopazo 1979, Janicka et al. 1979, Venaille and Sommeria 2007, Villermaux and Duplat 2003, Duplat and Villermaux 2008). One of the self-convolution models was used in the initial study of pollution of pristine gas by Pan and Scalo (2007).
By a detailed comparison with numerical simulations of turbulent mixing in two compressible flows at Mach 0.9 and 6.2, PSS showed that the convolution models provide both clear physical insights and successful fitting functions for the decay of the pristine mass fraction. Here we give a brief introduction to the convolution models, and refer the interested reader to PSS for details. There is compelling evidence that the dominant scalar structures at small scales are 2D sheets or edges (e.g., Pan and Scannapieco 2011 and references therein). The rate at which the scalar sheets are produced is determined mainly by the turbulent stretching rate at large length scales. With time, the sheets become thinner, and once the thickness of the sheets is sufficiently small for molecular diffusivity to efficiently operate, the neighboring sheets are homogenized, leading to a reduction in the PDF width. The physical picture outlined above can be approximately described by an integral equation for the concentration PDF, $$\begin{gathered} \frac{\partial \Phi(Z; t)}{\partial t} = s(t) \bigg\{\int\limits_{0}^{1} \Phi(Z_{1};t) \int\limits_{0}^{1} \Phi(Z_{2}; t) \times \hspace {5cm}\notag \\ \hspace {2cm} \delta \left( Z-\frac{Z_{1} +Z_{2}}{2} \right) dZ_{1} dZ_{2} - \Phi(Z; t) \bigg\}, \label{curl}\end{gathered}$$ where $Z_1$ and $Z_2$ denote the concentrations in two nearby sheets prior to the mixing by molecular diffusivity, and the delta function in the integrand arises from the assumption that a perfect homogenization occurs instantaneously once two scalar sheets are sufficiently stretched for molecular diffusivity to take effect. Here $s(t)$ is the turbulent stretching rate that controls the rate at which the PDF convolution proceeds. The last term in eq. (\[curl\]) is the “destruction” of the previous PDF due to the mixing event. Using the properties of delta functions, eq.
(\[curl\]) can be written as $\partial_t \Phi (Z; t) = s(t) [2\int\limits_{0}^{1} \Phi(Z'; t) \Phi(2Z -Z'; t) dZ' - \Phi(Z; t)]$, which shows that turbulent mixing is essentially assumed to be a self-convolution process. For a reason to be clarified soon, eq. (\[curl\]) was referred to as the discrete convolution model in PSS. It was first proposed by Curl (1963) in a study of droplet interactions in a two-liquid system, and was later extended to model mixing in turbulent flows (Dopazo 1979, Janicka et al. 1979). Several variants and generalizations of eq. (\[curl\]) have been proposed to address the problems of the model for turbulent mixing. One problem with Curl’s model is that, for a double-delta initial PDF (eq. (\[initialpdf\])), it produces unphysical spikes in between the initial delta functions. In order to avoid this, Dopazo (1979) and Janicka et al. (1979) suggested replacing the delta function in eq. (\[curl\]) by a general function, $J(Z; Z_1, Z_2)$, that is smooth in between $Z_1$ and $Z_2$. PSS showed that, with this modification, the model gives essentially the same prediction for the evolution of the pristine mass fraction. Another weakness of the convolution PDF models with double integral equations is that, for mixing in incompressible flows, they substantially overestimate the PDF tails at late times (Kollmann 1990). However, the model offers an insightful picture for the mixing of pristine gas and provides useful fitting functions for the pristine fraction decay in certain physical regimes. More recently, Venaille and Sommeria (2007) developed a “continuous” version of the self-convolution model, based on an extension of the Curl (1963) model in Laplace space. We first define the Laplace transform, ${\Psi}(\zeta; t)$, of the concentration PDF as $\Psi (\zeta; t) = \int_0^{\infty} \Phi(Z; t) \exp(-Z \zeta) dZ$. Using the convolution theorem, the Laplace transform of eq.
(\[curl\]) reads, $$\frac{\partial \Psi (\zeta; t)}{\partial t} = s (t) \left[\Psi (\zeta/2; t)^2 - \Psi (\zeta; t)\right]. \label{convolution}$$ Rewriting eq. (\[convolution\]) in a difference form, we have $\Psi (\zeta; t+ \delta t) = \epsilon \Psi (\zeta/2; t)^2 +(1-\epsilon) \Psi (\zeta; t)$ where $\epsilon = s(t) \delta t$ with $\delta t$ an infinitesimal time step. The difference equation has the following interpretation: during a timestep $\delta t$, mixing occurs in an infinitesimal fraction, $\epsilon$, of the flow, and in this fraction of the flow the scalar PDF undergoes a convolution. This suggests that in Curl’s model the PDF convolution occurs locally in space. Also note that, whenever a mixing event occurs, it appears as a single complete convolution in the model, and in this sense the convolution process is “discrete". The continuous convolution model essentially assumes that the convolution occurs everywhere in the flow at any given time, but in an infinitesimal time the number of convolutions is infinitesimal and equal to $\epsilon$ (Duplat and Villermaux 2008). The assumption can be represented by $\Psi (\zeta; t+ \delta t) = \Psi (\zeta/(1+\epsilon); t)^{(1+\epsilon)}$. The Taylor expansion of this equation gives $\Psi (\zeta/(1+\epsilon); t)^{(1+\epsilon)} \simeq \Psi (\zeta; t) + \epsilon [\Psi(\zeta; t) \ln (\Psi(\zeta; t) ) -\zeta \partial \Psi(\zeta; t)/\partial \zeta]$. Taking the limit $\delta t \to 0$, we obtain, $$\frac{\partial \Psi (\zeta; t)}{\partial t} = s(t) \left[ \Psi \ln (\Psi) - \zeta \frac {\partial \Psi}{\partial \zeta} \right]. \label{cconvolution}$$ The equation was first derived by Venaille and Sommeria (2007), who showed that the predicted PDF evolves toward Gaussian in the long time limit. In the continuous version, the PDF convolution occurs globally in space. The model prediction has been tested against experimental results by Venaille and Sommeria (2008). 
Similar to Curl’s model, the continuous model cannot be applied to predict the evolution of the entire PDF right at the beginning if the initial PDF is a double-delta function (Venaille and Sommeria 2007). Fortunately, for the problem of pristine gas pollution, the model provides a useful prediction that works immediately from the initial time (PSS). A more general extension of the self-convolution model in Laplace space was given in Duplat and Villermaux (2008), $$\frac{\partial \Psi (\zeta; t)}{\partial t} = s(t) n \left[ \Psi \left(\frac{\zeta}{1+1/n}; t\right)^{(1+1/n)} - \Psi (\zeta; t) \right]. \label{nconvolution}$$ With $n=1$ and in the limit $n \to \infty$, the equation becomes eq. (\[convolution\]) for Curl’s original model and eq. (\[cconvolution\]) for the model of Venaille and Sommeria (2007), respectively. In deriving eq. (\[nconvolution\]), it was assumed that a fraction, $n \epsilon$, of the flow experiences mixing/convolution events during a time interval $\delta t$, and the number of convolutions in this fraction of the flow is $1/n$. From the discussion above for Curl’s model and its continuous version, $n$ characterizes the degree of spatial locality of the PDF convolution. Larger values of $n$ correspond to more “global” convolutions in physical space, and the parameter $n$ may be a function of time in general. Eq. (\[nconvolution\]) was referred to as the generalized convolution model in PSS, where we found that with increasing $n$ the tails of the predicted PDFs become narrower. For example, the discrete model with $n=1$ predicts exceedingly fat PDF tails, while in the continuous model ($n \to \infty$) the PDF approaches a Gaussian at late times. In other words, more “global” PDF convolutions produce narrower PDF tails.
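The discrete ($n=1$) model admits a simple particle Monte Carlo realization, which makes its physical content transparent: at a rate set by $s(t)$, random pairs of notional fluid elements are replaced by their common mean. A minimal sketch (with assumed values for $s$, the time step, and the initial pollutant fraction $H_0$) shows that this pairwise self-convolution conserves the mean concentration while the exactly pristine fraction decays; indeed, since a pristine element survives a mixing event only if its partner is also pristine, eq. (\[curl\]) implies $dP/dt = s(P^2 - P)$ for the pristine delta-function weight.

```python
import numpy as np

rng = np.random.default_rng(1)

# Particle Monte Carlo sketch of Curl's discrete (n = 1) convolution model.
# All parameter values below are assumed for illustration.
N = 100_000
H0 = 0.3                        # initial pollutant fraction
Z = np.zeros(N)
Z[: int(H0 * N)] = 1.0          # double-delta initial PDF: Z = 1 or Z = 0

s, dt, nsteps = 1.0, 0.05, 100
for _ in range(nsteps):
    k = int(s * dt * N / 2) * 2            # even number of particles to mix
    idx = rng.choice(N, size=k, replace=False)
    a, b = idx[: k // 2], idx[k // 2 :]
    m = 0.5 * (Z[a] + Z[b])
    Z[a] = Z[b] = m                        # pairwise "complete mixing" event

P = np.mean(Z == 0.0)           # surviving exactly-pristine mass fraction

assert abs(Z.mean() - H0) < 1e-12   # mixing conserves the mean concentration
assert 0.001 < P < 0.1              # pristine fraction has decayed but persists
```

Because the convolution here is a strictly local, complete mixing event, this realization also makes clear why the $n=1$ model produces the fat PDF tails noted above.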
Finally, we point out that self-convolution models were not originally intended for mixing in highly compressible flows and they do not directly account for how compressible modes and the density fluctuations in supersonic turbulence may affect the concentration PDF. The diffusivity term in the PDF equation (see eq. (\[simplepdfeq\])) has a dependence on the density field, suggesting that the flow compressibility may have potentially important effects on the PDF evolution. To our knowledge, the effect of compressibility has not been investigated in existing PDF models for turbulent mixing. Here we take the following approach: We compare the predictions of the convolution models for the primordial fraction against simulation results, and examine whether, by adjusting their parameters, they can be applied to study the pristine gas pollution in supersonic turbulence. Indeed, we find that, by varying the parameter $n$, the self-convolution models give satisfactory predictions for the pollution of pristine gas in turbulent flows at different degrees of compressibility. Nevertheless, new closure models are strongly motivated to directly and explicitly address the effects of shocks and flow compressibility on the scalar PDF in supersonic turbulence.

Mass Fraction of Pristine Gas
-----------------------------

The pristine fraction, defined as the mass fraction of the interstellar gas with metallicity smaller than the critical value, $Z_{\rm c}$, can be evaluated from the concentration PDF by $P( Z_{\rm c}, t) = \int\limits_0^{Z_{\rm c}} \Phi(Z', t) dZ'$. The fraction can be calculated easily if the PDF evolution is known. The threshold metallicity, $Z_{\rm c}$, for the transition to Pop II star formation is small but finite, in the range from $10^{-8}$ to $10^{-5}$ by mass (see Bromm & Yoshida 2011, Schneider et al. 2012 and references therein).
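Operationally, the pristine fraction is the CDF of the concentration PDF evaluated at the threshold. A minimal illustration (a sketch, assuming equal-mass fluid elements; the function name is ours):

```python
def pristine_fraction(z_samples, z_crit):
    """P(Z_c, t) = integral of the concentration PDF from 0 to Z_c,
    estimated here as the mass fraction of equal-mass samples whose
    metallicity lies below the critical value."""
    return sum(1 for z in z_samples if z < z_crit) / len(z_samples)

# toy field: 70% exactly metal-free gas, 30% polluted to Z = 1e-4
field = [0.0] * 7 + [1e-4] * 3
frac_low = pristine_fraction(field, 1e-8)   # threshold below the pollution level
frac_high = pristine_fraction(field, 1e-3)  # threshold above it
```

For any threshold below the pollution level the estimate returns the exactly metal-free fraction, 0.7 here; once the threshold exceeds the pollution level the whole field counts as pristine.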
We also consider the fraction, $P(t)$, in the limit of an infinitesimal threshold, i.e., $ P(t) = \lim_{ Z_{\rm c} \to 0} P( Z_{\rm c}, t)$, which corresponds to the mass fraction of exactly metal-free gas. Clearly, the fraction $P(t)$ is zero unless the concentration PDF, $\Phi(Z; t)$, has a delta function component at $Z=0$. Equations of $P(t)$ can be exactly derived from the self-convolution models in §3.1. There is a subtle issue about the decay of the exactly metal-free fraction, $P(t)$, and the pristine fraction, $P( Z_{\rm c}, t)$, with a finite threshold. PSS pointed out that the nonlocal nature of the Laplacian operator in the molecular diffusivity term leads to an essentially instantaneous decrease of $P(t)$. Physically, a tiny but finite fraction of the pollutant atoms can have extremely fast thermal speed, corresponding to the high tail of the Maxwellian distribution, and may reach and pollute the pristine gas at large distances in a short time. Even though the degree of pollution by these atoms at large distances is negligibly tiny, they do reduce the mass of gas that is [*exactly*]{} metal-free, and this occurs at a timescale much shorter than the sound crossing time. Therefore, with the molecular diffusivity alone, $P(t)$ would decrease to zero almost instantaneously, regardless of the amplitude of the molecular diffusivity $\gamma$. On the other hand, it takes a finite time for the molecular diffusivity to enrich the entire flow up to a finite threshold $Z_{\rm c}$. In fact, with a small but finite threshold, say $Z_{\rm c} \simeq 10^{-8}$, the decay of $P(Z_{\rm c}, t)$ by molecular diffusivity alone is very slow, because $\gamma$, $\simeq 10^{20}$ cm$^{2}$ s$^{-1}$ in the Galactic neutral ISM (see Pan and Scalo 2007), is far too small for diffusion alone to enrich the flow over interstellar distances within any relevant timescale. An efficient mixing rate relies on the presence of turbulent motions. An ideal model for the pollution of pristine gas should accurately capture both the rapid decay of $P(t)$ and the evolution behavior of $P(Z_{\rm c}, t)$.
However, none of the models considered in PSS satisfy both constraints. For example, the mapping closure model by Chen et al. (1989) does predict an instantaneous decay of $P(t)$, but a comparison with simulation results shows that its prediction for $P(Z_{\rm c}, t)$ is poor in general, especially in highly supersonic flows. On the other hand, the convolution PDF models introduced in §3.1 do not reduce $P(t)$ to zero immediately; instead, the delta function component at $Z=0$ remains finite at any finite time. This is inconsistent with the expectation of an instantaneous reduction of $P(t)$, and the reason is that the Laplacian operator in the molecular diffusivity term was not directly incorporated in these models. Despite this inconsistency, PSS found a very interesting result: the evolution equations of $P(t)$ derived from the convolution models provide excellent fitting functions for the simulation results for the decay of the pristine fraction $P(Z_{\rm c}, t)$ with a small but finite threshold. Here we take the same approach as PSS and carry out the more systematic parameter study required to accurately span the range of astrophysical environments of interest. We will use the $P(t)$ equations from the convolution models to fit the simulation results for $P( Z_{\rm c}, t)$ with different thresholds, $Z_{\rm c}$, for scalars with different initial conditions evolving in a number of turbulent flows. This systematic procedure gives best-fit parameters in the convolution models as functions of $Z_{\rm c}$, the initial pollutant conditions and the flow Mach number. The numerically-tested $P(t)$ equations with the best-fit parameters then provide a new tool to model the pollution of the primordial gas and the transition from Pop III to Pop II star formation in early galaxies. We derive the equations of $P(t)$ from the convolution models using the PDF equations in Laplace space.
Since the delta function at $Z=0$ persists in these models, we decompose the concentration PDF into two terms, $$\Phi(Z;t) = P(t) \delta (Z) + \Phi_{\rm e}(Z; t), \label{decomp}$$ where $P(t)$ is the fraction of exactly metal-free gas, and $\Phi_{\rm e} (Z; t)$ is the concentration PDF in the enriched part of the flow, which satisfies the condition $\lim_{Z\to 0} \int_0^Z \Phi_{\rm e} (Z';t) dZ' =0$. The Laplace transform of eq. (\[decomp\]) gives, $$\Psi(\zeta; t) = P(t) + \Psi_{\rm e}(\zeta; t), \label{lapdecomp}$$ where $\Psi_{\rm e}(\zeta; t)$ is the Laplace transform of $\Phi_{\rm e}(Z;t)$. From the condition $\lim_{Z\to 0} \int_0^Z \Phi_{\rm e}(Z';t) dZ' =0$, we have $\Psi_{\rm e}(\zeta; t) \to 0$ in the limit $\zeta \to +\infty$. Inserting eq. (\[lapdecomp\]) into the PDF equation (\[nconvolution\]) for the generalized convolution model and taking the limit $\zeta \to +\infty$ yields: $$\frac{dP}{dt} = -\frac{n}{\tau_{\rm con}} P(1-P^{1/n}). \label{pfnconv}$$ For later convenience, we have replaced the turbulent stretching rate, $s$, by a “convolution" timescale $\tau_{\rm con} \equiv s(t)^{-1}$. Setting $n=1$ in eq. (\[pfnconv\]), we obtain the equation of $P(t)$ for Curl’s model, $$\frac{dP(t)}{dt}= -\frac{1}{\tau_{\rm con}}P(1-P), \label{pfcurl}$$ which was first given in Pan & Scalo (2007). An alternative derivation of this equation from the PDF equation in the double integral form is presented in PSS. From eq. (\[pfcurl\]), we see an interesting and simple physical picture for the pollution of the pristine gas by turbulent mixing: the primordial fraction is reduced when the fluid elements that are exactly metal-free and the rest of the flow, which has been polluted by sources or previous mixing events, are brought close enough together by turbulent stretching for the molecular diffusivity to homogenize them. Taking $n \to \infty$, eq.
(\[pfnconv\]) becomes, $$\frac{d P(t)}{dt} = \frac{P \ln(P)}{\tau_{\rm con}}, \label{pfcconv}$$ which is the prediction of the continuous convolution model of Venaille and Sommeria (2007) for the pristine fraction evolution. Assuming both $n$ and $\tau_{\rm con}$ are constant with time, equation (\[pfnconv\]) has an analytic solution, $$P(t) = \frac{P_0}{\left[P_0^{1/n} + (1-P_0^{1/n} ) \exp\left( t /\tau_{\rm con} \right) \right]^n}, \label{pfnconvsolution}$$ where $P_0$ is the initial pristine fraction. This equation becomes $$P(t) = \frac{P_0}{P_0 + (1-P_0) \exp \left( \frac{t}{\tau_{\rm con}} \right)}, \label{pfintegralsolution}$$ for Curl’s “discrete" model with $n=1$ and $$P(t) = P_0^{\exp(t/\tau_{\rm con})}, \label{pfcconvsolution}$$ for the Venaille and Sommeria (2007) model with $n \to \infty$. These convolution models predict that the pollution of primordial gas in turbulent flows proceeds at a timescale $\tau_{\rm con} \simeq s^{-1}$, which is essentially the timescale of turbulent stretching at large scales, as anticipated by the cascade picture for turbulent mixing. Also note that the pollution timescale is essentially independent of the molecular diffusivity $\gamma$. Again, this is because the mixing rate is largely controlled by how fast the velocity field produces and feeds fine structures to the molecular diffusivity, but is insensitive to the diffusion scale at which the molecular diffusivity operates.

Numerical Simulations
=====================

To calibrate $n$ and $\tau_{\rm con}$ as functions of the flow and pollutant properties, we carried out numerical simulations of mixing in compressible turbulence using the FLASH code (version 3.2), a multidimensional hydrodynamic code (Fryxell et al. 2000) that solves the Riemann problem on a Cartesian grid using a directionally-split Piecewise-Parabolic Method (Colella & Woodward 1984; Colella & Glaz 1985; Fryxell, Müller, & Arnett 1989).
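Before describing the simulations further, the closed-form solutions above are easy to sanity-check numerically. The sketch below (our own consistency check, not part of the paper's analysis; function names are ours) integrates eq. (\[pfnconv\]) with a fourth-order Runge-Kutta scheme, assuming constant $n$ and $\tau_{\rm con}$, and compares it with eq. (\[pfnconvsolution\]) and with the $n=1$ and $n \to \infty$ limits, eqs. (\[pfintegralsolution\]) and (\[pfcconvsolution\]).

```python
import math

def p_analytic(t, p0, n, tau):
    """Closed-form solution of dP/dt = -(n/tau) * P * (1 - P**(1/n))."""
    return p0 / (p0 ** (1.0 / n) + (1.0 - p0 ** (1.0 / n)) * math.exp(t / tau)) ** n

def p_numeric(t_end, p0, n, tau, steps=20000):
    """Direct RK4 integration of the same ODE, for comparison."""
    f = lambda p: -(n / tau) * p * (1.0 - p ** (1.0 / n))
    p, h = p0, t_end / steps
    for _ in range(steps):
        k1 = f(p)
        k2 = f(p + 0.5 * h * k1)
        k3 = f(p + 0.5 * h * k2)
        k4 = f(p + h * k3)
        p += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    return p

p0, tau, t = 0.9, 1.0, 2.0
# n = 1 reduces to Curl's "discrete" form; large n approaches the
# continuous-model solution P0**exp(t/tau)
curl = p0 / (p0 + (1 - p0) * math.exp(t / tau))
cont = p0 ** math.exp(t / tau)
```

The finite-$n$ solution agrees with the direct integration to machine-level accuracy, and the $n=1$ and large-$n$ evaluations reproduce the two limiting formulas, as expected.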
The hydrodynamic equations were evolved in a periodic box of unit size with 512$^3$ grid points. Simulation runs at a lower resolution (256$^3$) were also conducted to check the potential effect of numerical diffusion. An isothermal equation of state with unit sound speed was adopted in all our simulations. The turbulent flows were driven and maintained at a steady state by a large-scale solenoidal external force, which was set to be a Gaussian stochastic vector that decorrelates exponentially with a timescale equal to a quarter of the sound crossing time. The driving force was generated in Fourier space, and it included all independent modes with wave numbers in the range from $2\pi$ to $6\pi$. Each mode was given the same amount of power. We defined a characteristic driving length scale $L_{\rm f} \equiv \int \frac{2 \pi} {k} \mathcal{P}_{\rm f}(k)d{\bf k}/\int \mathcal{P}_{\rm f}(k)d{\bf k}$, where $\mathcal{P}_{\rm f}(k)$ is the power spectrum of the driving force. Calculating $L_{\rm f}$ from our forcing spectrum, we found $L_{\rm f} = 0.46$ in units in which the box size is unity. By adjusting the amplitude of the driving force, we simulated four flows with different (density-weighted) rms velocities, $v_{\rm rms}$. For each flow, we defined a dynamical timescale, $\tau_{\rm dyn} \equiv L_{\rm f}/v_{\rm rms}$, and all the simulation runs lasted for about 5 $\tau_{\rm dyn}$. We computed the mean rms velocity by a temporal average after each flow reached a steady state, and the rms Mach numbers, $M$, i.e., the ratio of the rms velocity to the sound speed, in the four flows were $M=0.9$, 2.1, 3.5, and 6.2, respectively. The simulation setup for the turbulent velocity field is the same as in PS10 and PSS, to which we refer the interested reader for details. To study turbulent mixing, we evolved a number of decaying scalar fields in the four simulated flows.
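The quoted value $L_{\rm f} = 0.46$ can be recovered directly from the definition. Assuming the equal-power modes uniformly fill spherical shells in 3D Fourier space (our assumption; the volume element then contributes a $k^2$ weight), the ratio of integrals reduces to $L_{\rm f} = 2\pi \int k \, dk / \int k^2 \, dk$ over the forced band, which evaluates to $6/13 \simeq 0.46$:

```python
import math

# L_f = ∫ (2π/k) P_f(k) d^3k / ∫ P_f(k) d^3k with flat P_f over the forced
# band; the 3D volume element supplies a k^2 shell factor, so the ratio
# collapses to 2π ∫ k dk / ∫ k^2 dk between k_min and k_max.
k_min, k_max = 2 * math.pi, 6 * math.pi
num = 2 * math.pi * (k_max ** 2 - k_min ** 2) / 2.0
den = (k_max ** 3 - k_min ** 3) / 3.0
L_f = num / den  # = 6/13 ≈ 0.46 in units of the box size
```

The $\pi^3$ factors cancel, so the result is independent of the overall wavenumber normalization and matches the value quoted in the text to the stated precision.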
In each flow, we solved the advection equations of all scalar fields starting at the same time after the flow had already become fully developed and statistically stationary. The initial condition of the scalar fields was taken to be bimodal, consisting of pure pollutants and completely unpolluted material only. Such a bimodal field was obtained by setting the pollutant concentration, $C$, to unity in selected regions, representing pure pollutants, and to zero in the rest of the simulation box, corresponding to the unpolluted flow. The rate at which the pollution of the pristine material proceeds in our simulations depends not only on the flow properties but also on the initial configuration of the pollutants. Two parameters in the initial condition are of particular interest. The first one is the initial pollutant fraction, $H_0$, defined as the ratio of the heavy element mass to the total flow mass in the simulation box. The fraction is related to the initial primordial fraction, $P_0$, by $P_0 + H_0 =1$. Obviously, with more pollutants in the flow, i.e., a larger value of $H_0$, one would expect a faster pollution of the pristine gas. The mixing/pollution timescale also depends on how the pollutants are spatially distributed in the flow. For illustration, let us consider two different distribution patterns for the same amount of pollutants. In the first pattern, the pollutants are released in the form of a single blob, while in the second the pollutants are divided into many blobs of similar sizes evenly distributed in the flow. Intuitively, the pollution process would be considerably faster in the latter case. In that case, the pollution injection scale, $L_{\rm p}$, which is essentially the average distance between the pollutant locations, is smaller, and the mixing timescale should be shorter since it is determined by the eddy turnover time at $L_{\rm p}$ (PS10). 
We thus expect that a smaller $L_{\rm p}$ would result in a faster decay of the pristine mass fraction, and we will quantitatively examine the dependence of the pollution of the pristine flow on this parameter. In the context of the mixing of heavy elements in the interstellar media of early galaxies, the pollutant fraction, $H_0$, is related to the number or the rate of the supernova events, and the pollutant injection scale corresponds to the average distance between the explosion locations. In order to conduct a systematic study of the parameter dependence of $P$, we included in each simulated flow a total of 20 scalar fields with different initial conditions. Table 1 summarizes the fields, which are divided into 5 categories based on the geometry and the spatial distribution of the pollutants. For categories [1]{} and [2]{}, the initial pollutant configuration is a single blob located right at the center of the simulation box, and the geometrical shape of the blob was set to be a cube and a spherical ball, respectively. Clearly, for a single pollutant blob, the pollutant separation and hence the injection length scale, $L_{\rm p}$, is given by the box size. Considering that the flow driving scale $L_{\rm f}$ in our simulations is 0.46 of the box size, we have $L_{\rm p} \simeq 2 L_{\rm f}$ for categories [1]{}  and [2]{}. In the other three categories, the pollutants are divided into identical spherical blobs, equally spaced in the simulation box. The number of blobs is 8, 64 and 512, respectively, for category [3]{}, [4]{}, and [5]{}. For scalar fields in these three categories, $L_{\rm p}$ corresponds to $\frac{1}{2}$, $\frac{1}{4}$ and $\frac{1}{8}$ box size (or equivalently $\simeq 1, 0.5$ and $0.25 L_{\rm f}$), respectively. For reference, the scale at which the energy cascade and hence the inertial range starts in our simulated flows is about $\frac{1}{4}$ box size. 
The injection scale ($\frac{1}{8}$ box size) of category [5]{} scalar fields is well within the inertial range. There are four scalar fields in each category, which differ in the initial pollutant mass fraction, $H_0$. The four scalar fields in category [1]{}, named [1]{}A, [1]{}B, [1]{}C, and [1]{}D, have $H_0 = 0.5$, 0.1, 0.01, and 0.001, respectively. Scalar fields in other categories are named in the same way. These exact $H_0$ values were achieved by tuning the size of the pollutant blob(s). In category [1]{}, the length of the pollutant cube is set to 0.79, 0.47, 0.22 and 0.1 in units of the box size, or 1.7, 1.0, 0.48 and 0.22 in units of the driving scale $L_{\rm f}$, for scalars A, B, C and D, respectively, in the Mach 0.9 flow. For the four scalars in category [2]{}, the radius, $r_{\rm p}$, of the spherical ball is 0.49, 0.29, 0.14, and 0.063 box size. In units of $L_{\rm f}$, $r_{\rm p} = 1.1$, 0.63, 0.30, and 0.14 $L_{\rm f}$, respectively. The radius of each pollutant ball for the corresponding scalar in categories [3]{}, [4]{}, and [5]{} is smaller by a factor of 2, 4, and 8, respectively, than the $r_{\rm p}$ value for category [2]{}. This is because the numbers of balls in those categories are larger than that in category [2]{} by factors of $8$, $64$ and $512$. The radii $r_{\rm p}$ given here are the values used in the $M=0.9$ flow. At larger $M$, $r_{\rm p}$ for each corresponding scalar is slightly different. Due to significant density fluctuations in flows at higher $M$, using pollutant blobs of the same size at the same locations leads to different values of $H_0$. We thus tuned the pollutant size to guarantee the initial pollutant [*mass*]{} fraction, $H_0$, is exactly 0.5, 0.1, 0.01, and $0.001$ for the scalars in each flow. We also made an attempt to investigate a smaller value ($10^{-4}$) of $H_0$, which would also be interesting for mixing in the interstellar medium of early galaxies. 
However, a tiny $H_0$ corresponds to a small pollutant size, and due to the limited numerical resolution, the pollutant size for $H_0 = 10^{-4}$ is too close to the resolution scale of our simulations. In that case, numerical diffusion took effect and significantly polluted the surrounding flow from the beginning, leading to a different evolution behavior for the pristine fraction at early times than the other cases with $H_0 \ge 10^{-3}$. In the interstellar medium, the pollutant size is essentially a supernova remnant stall diameter, $\simeq 150$ pc, with little dependence on parameters (see Thornton et al. 1998, Hanayama & Tomisaka 2006). This is expected to lie within the inertial range of interstellar turbulence. Therefore, the real homogenization of fresh metals from supernovae by molecular diffusivity must wait for turbulent stretching to bring the concentration structures to the diffusion scale, which is tiny in comparison to the remnant size. It is thus appropriate to consider pollutants with initial sizes significantly larger than the diffusion/resolution scale of the turbulent flow, and in the present work we do not explore scalar cases with $H_0 \le 10^{-4}$. We point out that the first three scalar fields in category [1]{}, i.e., [1]{}A, [1]{}B, and [1]{}C, in the Mach 0.9 and 6.2 flows have been studied in detail in PSS. In this paper, we perform a more systematic study covering a much larger parameter space. Neither the viscous term in the hydrodynamic equations nor the diffusivity term in the advection-diffusion equation was explicitly included in our code. Therefore, both kinetic energy dissipation and scalar homogenization (or dissipation) occur through numerical diffusion in our simulations. The diffusion scale where the scalar homogenization occurs is close to the resolution scale, and so is the energy dissipation or Kolmogorov scale.
To examine whether our results for primordial gas mixing depend on the amplitude of numerical diffusion, we also performed simulations at the resolution of $256^3$, and the results at the two resolutions are compared in §5.4.5. Otherwise, unless explicitly stated, the results reported below are from the $512^3$ simulations. Results ======= The Concentration Field ----------------------- ![image](f1.eps){width="2\columnwidth"} In Fig. 1, we show the concentration on a slice (on the $y$-$z$ plane at $x =0.25$) of the simulation grid for three scalar fields (rows) at three different times (columns). The concentration is plotted on a logarithmic scale with the white color representing concentration levels below $10^{-8}$. The top three panels, from left to right, correspond to case [3]{}B in the Mach 0.9 flow at $t=0.11, 0.68$, and $1.06 \tau_{\rm dyn}$, respectively. At $t=0$, four spherical pollutant blobs lie on the $x=0.25$ plane. With time, turbulence stretches and spreads out the pollutants, and structures at small scales are continuously produced. In particular, we observe prominent “cliff" structures with sharp concentration gradients. These sheet-like structures are typical of passive scalars in incompressible turbulence (e.g., Watanabe and Gotoh 2004). As the length scale of the scalar structures reaches the (numerical) diffusion scale, mixing occurs between the pollutants/polluted flow and the pristine regions. The mixing process reduces the volume fraction of the pristine flow (white regions), and at $1.06\, \tau_{\rm dyn}$ almost the entire flow is polluted. The density-weighted concentration variance decreases from the initial value of 0.09 to 0.086, 0.038, and 0.015, respectively, for the three snapshots from the left to the right. The three panels in the second row show snapshots of the same scalar case ([3]{}B), but in the Mach 6.2 flow, at $t=0.11$ (left), 0.76 (mid), and 1.41 $\tau_{\rm dyn}$ (right), respectively. 
Comparing with the top panels, we see that, even at later times (in units of $\tau_{\rm dyn}$), the surviving pristine volume is larger than in the $M=0.9$ flow, suggesting that the pollution of the pristine gas is slower in turbulent flows at higher $M$. The concentration field appears to be smoother than in the $M=0.9$ flow. As explained in detail in PS10, this is because in highly supersonic turbulence the visual impression of the scalar field is dominated by expansion events, which occupy most of the volume of the flow domain. Since a passive scalar simply follows the flow velocity, an expanding region tends to produce coherent and smooth structures at large scales. We note that the scalar in the supersonic flow has a smoother appearance also at small scales, and the likely reason is that the code used in our simulations applies a larger effective numerical diffusion to stabilize stronger shocks in flows with larger $M$. Although compressible modes play a key role in shaping the large-scale geometry of the scalar field, the primary mixing agent is still stretching by solenoidal modes even in our simulated flows at very high $M$ (PS10). The concentration variances at the three snapshots shown here are 0.078, 0.023, and 0.006, respectively. Note that, even though the scalar variances in the right two panels are smaller than in the corresponding snapshots in the $M=0.9$ flow, the remaining pristine fraction in the $M=6.2$ flow appears to be larger. This is because in turbulent flows with higher $M$ the scalar PDF tails are broader, leading to a larger pristine fraction at the same concentration variance. A more detailed discussion on this issue is given in §5.2. The bottom three panels in Fig. 1 plot the evolution of the scalar field [5]{}B in the Mach 6.2 flow, which also has $H_0 = 0.1$. Unlike case [3]{}B shown in the top and central panels, this field initially consists of 512 small blobs, and has a smaller injection scale, $L_{\rm p}$.
At $t=0$, 64 blobs lie on the slice shown here. At early times, some blobs appear to be small dots or filaments because they are being advected out of the selected slice. The three panels correspond to $t = 0.11$, 0.33, and 0.76 $\tau_{\rm dyn}$. The mixing/pollution process proceeds much faster than for case [3]{}B in the same flow. One reason is that, for a smaller pollutant size, turbulent stretching of the pollutant is faster, and thus mixing of each individual blob with the surrounding flow is more efficient. Also, since the separation between the pollutant blobs is small, the regions polluted/mixed by the individual pollutant blobs start to overlap quickly, resulting in a much faster erasure of the pristine flow material. As a reference, the scalar variances at the three snapshots from the left to the right are 0.07, 0.038, and 0.011, respectively.

The PDF Evolution
-----------------

![image](f2.eps){width="2\columnwidth"}

In this subsection, we discuss simulation results for the evolution of the concentration PDF. Fig. (\[pdf\]) plots the PDF as a function of time for four scalar fields. For all scalars, the heights of the two spikes at $Z=0$ and $Z=1$ decrease at early times, and mixing causes a probability flux toward the central part, which gradually fills the concentration space between the two spikes. Both spikes are eventually removed, and for an initial PDF with positive skewness, or $P_0 > H_0$, the left spike lasts longer than the right one. At later times, a central peak forms around the mean concentration, and the PDF becomes unimodal. After that, the PDF continuously narrows toward the mean value, a process described in §2.2 as anti-diffusion in concentration space. In PSS, we tested the predictions of various models for the PDF evolution against simulation data for scalar case [1]{}B in Mach 0.9 and 6.2 flows. The initial condition of this scalar is a single cubic pollutant with $H_0 =0.1$.
It was found to be very challenging for PDF models to accurately predict the scalar PDF tails, especially for scalar fields in highly supersonic flows. Here we do not attempt to obtain successful model fits to the measured PDFs, as the main goal of this work is to understand the evolution of the pristine fraction, rather than the full details of the entire PDF. However, including a model prediction for the PDF evolution in our figure is useful, because it provides a guideline to compare the fatness of the PDF tails for different scalar fields in different flows. For this purpose, we consider the beta distribution function as a PDF model for passive scalar mixing, which has been shown to provide a good approximation for the PDF shape of decaying scalars with a double-delta initial condition in incompressible turbulence (e.g., Girimaji 1991). The beta distribution function is defined as, $$\Phi_{\beta} (Z) = \frac{\Gamma(\beta_1 +\beta_2)}{\Gamma(\beta_1) \Gamma(\beta_2)} Z^ {\beta_1-1}(1-Z)^{\beta_2-1} \label{beta}$$ where $\Gamma$ is the Gamma function. To compare the beta distribution with the simulation results, one can determine the two parameters, $\beta_1$ and $\beta_2$, in eq. (\[beta\]) by equating the mean and variance of the beta PDF to those measured from the simulation data. For each measured PDF (data points) shown in Fig. (\[pdf\]), we plot a beta distribution (line), where the beta parameters are fixed using the concentration mean and variance at the corresponding time. The left top panel in Fig. (\[pdf\]) shows the result for case [3]{}B in the Mach 0.9 flow. The initial condition of this case is eight equally-spaced spherical blobs with $L_{\rm p}$ equal to 1/2 box size. The total pollutant fraction, $H_0$, of this scalar field is 0.1. The PDFs are measured at five different times as indicated in the legend. For this scalar, the fitting quality of the beta distribution functions is generally good except at far tails. 
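The parameter matching described above is the standard method of moments for the beta distribution: with $\mu$ and $\sigma^2$ the measured mean and variance, $\beta_1 + \beta_2 = \mu(1-\mu)/\sigma^2 - 1$, and $\beta_1$, $\beta_2$ follow by splitting that sum in proportion to $\mu$ and $1-\mu$. A minimal sketch (function name ours):

```python
def beta_params_from_moments(mean, var):
    """Method-of-moments fit of eq. (beta): invert
    mean = b1/(b1+b2) and var = b1*b2 / ((b1+b2)**2 * (b1+b2+1)).
    Requires var < mean*(1-mean), i.e. less variance than a double delta."""
    nu = mean * (1.0 - mean) / var - 1.0  # nu = b1 + b2
    return mean * nu, (1.0 - mean) * nu

# e.g. a scalar with mean concentration 0.1 and variance 0.01
b1, b2 = beta_params_from_moments(0.1, 0.01)
# round-trip check: recover the input moments from (b1, b2)
m = b1 / (b1 + b2)
v = b1 * b2 / ((b1 + b2) ** 2 * (b1 + b2 + 1.0))
```

Note the constraint in the docstring: the double-delta initial condition saturates the variance bound $\mu(1-\mu)$, so the moment-matched beta fit only becomes meaningful once mixing has begun to reduce the variance.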
In PSS, we showed that, at later evolution times, the PDF of scalar [1]{}B in the $M=0.9$ flow is well fit by a Gamma distribution, as predicted by the PDF model of Villermaux and Duplat (2003). The initial condition of scalar [1]{}B is a single cubic pollutant, and it has an $L_{\rm p}$ twice as large as that of [3]{}B shown here. The performance of the Villermaux and Duplat (2003) model is less satisfactory for scalars with smaller $L_{\rm p}$, e.g., it significantly underestimates the PDF tails for the scalars in Fig. (\[pdf\]). That model is also invalid at the early evolution stage (see PSS). On the other hand, the beta distribution does provide acceptable fits to the measured PDFs at early times, as seen in Fig. (\[pdf\]). The top right panel shows the PDF of the same scalar field in the Mach 6.2 flow. Using the beta distributions as a reference, we see that at late times, the left PDF tails are broader than in the Mach 0.9 case. The bottom two panels plot the results for case [5]{}B in the same two flows. This scalar field also has $H_0= 0.1$, but the injection scale, $L_{\rm p}$, is significantly smaller, $\simeq \frac{1}{8}$ box size (see Table 1). A comparison of the time series indicated by the legends in the top and bottom panels shows that the PDF variance decays much faster for scalar fields with smaller injection scale (see §5.3), but at similar values of the variance, the PDF tails are broader for scalar fields with smaller $L_{\rm p}$. These observations are consistent with the findings of PS10, who studied the dependence of the PDF shape on $M$ and $L_{\rm p}$ in detail, and found that the PDF tails become broader with increasing $M$ or decreasing $L_{\rm p}$. The physical origin of this behavior is probably related to the phenomenon of turbulent intermittency, i.e., the existence of strong non-Gaussian velocity structures at small scales.
Supporting this interpretation is the fact that the degree of non-Gaussianity of the velocity field increases as $M$ increases or as $L_{\rm p}$ decreases, which coincides with the trend of the PDF tails of passive scalars. As discussed in §3.1, the convolution PDF models with smaller $n$ predict fatter tails, meaning that $n$ would decrease with increasing $M$ or decreasing $L_{\rm p}$ if one attempts to fit the measured PDFs with the predictions of the convolution models. Extending the intermittency argument here to mixing in supersonic flows with totally compressive driving, we expect that, at the same $M$, the passive scalar PDF would have fatter tails than in our flows with solenoidal driving. This is because the compressively driven supersonic flows are significantly more intermittent (Federrath et al. 2010). We point out that the fatness of the PDF tails as a function of $M$ and $L_{\rm p}$ for decaying scalars in the current study is less clear-cut than for the forced scalars examined in PS10. The general trend is sometimes not clearly obeyed in our simulations here, especially for scalar fields in high Mach-number flows ($M=3.5$ and 6.2). For example, as seen in Fig. (\[pdf\]), it appears that the PDF tail of scalar [5]{}B in the Mach 6.2 flow is less broad than in the Mach 0.9 flow. A possible reason is that the measurement of the PDFs of decaying scalars is less precise than in the case of forced scalars, for which the PDFs can be computed by averaging over many snapshots. Simulations with higher resolutions may help us establish a robust trend for the PDF tail of decaying scalars in turbulent flows at large $M$, as they provide better statistics and better resolution of complexities, such as strong density fluctuations, in highly supersonic turbulence. Finally, we stress that the pristine fraction corresponds to the probability contained in the far left tail of the PDF, at $Z$ below the threshold range $10^{-8}$ to $10^{-5}$, which is beyond the range of $Z$ values shown in Fig.
(\[pdf\]). Nevertheless, the PDF tails shown in Fig. (\[pdf\]) can be used to infer the trends of the pristine fraction with varying $M$ and $L_{\rm p}$. Since the PDF tails broaden with increasing $M$ and decreasing $L_{\rm p}$, we would expect that, at the same concentration variance, the pristine fraction contained in the PDF would be higher for a scalar field evolving in a flow with a higher Mach number or a smaller pollutant injection scale. In fact, the dependence of the PDF tails on $M$ and $L_{\rm p}$ induces interesting effects on the pristine fraction as a function of time, which will be discussed in detail in §5.4.

The Variance Decay
------------------

![The density-weighted concentration variance as a function of time. Top panel: scalar field [3]{}B in four simulated flows with $M= 0.9$, 2.1, 3.5 and 6.2. The scalar field has a total pollutant fraction, $H_0$, of 0.1 and the injection length scale, $L_{\rm p}$, is $\simeq 1/2$ box size. Middle panel: B scalar fields ($H_0 =0.1$) in the $M =6.2$ flow. The five curves correspond to the five categories in Table 1 with different pollutant shapes and injection length scales. Bottom panel: scalar fields from category [3]{} in the $M =6.2$ flow. Each curve is for a different value of the initial pollutant fraction, and the variance of each scalar field is normalized to its initial value.[]{data-label="var"}](f3.eps){width="1\columnwidth"}

Fig. (\[var\]) plots the variance decay of a number of scalar fields. The top panel shows the results for scalar fields [3]{}B in the four simulated flows at different $M$. For these scalar fields, $L_{\rm p} \simeq L_{\rm f}$, $H_0 =0.1$, and the initial variance is 0.09. The variance decay first slows down with increasing Mach number, then becomes slightly faster as $M$ increases from $3.5$ to 6.2. The same behavior has been found in PS10, where a physical explanation was given (see also §3). In our simulated flows, compressible modes are inefficient in producing small-scale structures.
Therefore, the mixing efficiency decreases as the fraction of kinetic energy contained in compressible modes at inertial-range scales increases. At $M\gsim 3$, this fraction saturates at an equipartition value of 1/3, and the mixing timescale becomes essentially constant. The slightly faster mixing as $M$ increases to $6.2$ is because of the effect of strong compression in our simulated flow with $M =6.2$. Due to the limited numerical resolution, the strongest compression events in this flow can directly squeeze the scalar structures to the diffusion scale and provide some contribution to the mixing efficiency. As discussed in §3, the effect of compressible modes on mixing could be stronger in a highly supersonic flow with compressive driving, and the behavior of the normalized mixing timescale as a function of $M$ in compressively driven flows will be studied in a future work. In Fig. (\[var\]), we see that the variance decrease is approximately exponential. The mixing timescale $\tau_{\rm m}$ is measured to be 0.45, 0.48, 0.58, 0.57 $\tau_{\rm dyn}$ for $M=0.9$, 2.1, 3.5, and 6.2, respectively. The 20% increase in $\tau_{\rm m}$ as $M$ goes from 0.9 to 6.2 is consistent with the results of PS10. The middle panel of Fig. (\[var\]) shows the variance of five B scalars in the $M=6.2$ flow. Each case is from one of the five categories listed in Table 1, and they all have $H_0 =0.1$. The curves for scalar fields [1]{}B and [2]{}B are very close to each other. The initial pollutant distributions of these two scalar fields are a single cube and spherical ball, respectively, and the similarity of their variance decay suggests that the mixing timescale is essentially independent of the geometrical shape of the pollutants. On the other hand, the mixing timescale decreases steadily as the average pollutant separation becomes smaller. 
The injection scale, $L_{\rm p}$, for scalar fields [3]{}B, [4]{}B, and [5]{}B is $\frac{1}{2}$, $\frac{1}{4}$, and $\frac{1}{8}$ box size, respectively. We attempted to measure $\tau_{\rm m}$ by fitting the five curves with exponentials in the time interval from $0$ to $\simeq 2 \tau_{\rm dyn}$, which is the time range of primary interest for the pristine gas pollution (see §5.4). The measured values of $\tau_{\rm m}$ are 0.72, 0.71, 0.57, 0.48, and 0.34 $\tau_{\rm dyn}$, respectively, for the five curves from top to bottom. The mixing timescale is determined by the eddy turnover time at the pollutant injection scale, and thus decreases with decreasing $L_{\rm p}$. This physical picture also provides an explanation for the scale dependence of the mixing timescale found by de Avillez and Mac Low (2002) in a suite of numerical simulations of mixing in supernova-driven interstellar turbulence. In the bottom panel, we plot the variance decay of four scalar fields from category [3]{} in the Mach 6.2 flow. Different curves correspond to different values of $H_{0}$. Unlike the top two panels, here we normalize the concentration variance to its initial value, $\langle \delta Z^2(0) \rangle,$ which makes it easier to compare the variance decay timescale of different scalars. For a double-delta PDF (eq. (\[initialpdf\])), the initial variance $\langle \delta Z^2(0) \rangle$ is equal to $P_0 H_0 = P_0 (1-P_0) = H_0(1-H_0)$. For $H_0 \le 0.5$, $\langle \delta Z^2(0) \rangle$ decreases with decreasing $H_0$. This suggests that, for scalar PDFs close to a double-delta shape, the variance is not a good indicator of the pristine fraction, as the smaller variance for scalar fields with smaller $H_0$ in the bottom panel of Fig. (\[var\]) actually corresponds to a larger pristine fraction.
While a better indicator would be the variance normalized to the average concentration squared, which measures the rms of the fluctuations relative to the mean, the variance plot normalized to the initial value nevertheless provides useful information for the timescale of the mixing process. With decreasing $H_0$, the radius of each individual pollutant blob becomes smaller, decreasing from about 0.5 box size ($H_0=0.5$) to only 0.06 box size ($H_0=10^{-3}$). The top two curves for $H_0 =0.5$ and 0.1 are close to each other, and the reason is that, for these two scalar fields, both the pollutant size and the pollutant separation are close to the flow driving scale, $L_{\rm f}$, and the scale (1/4 box size) at which the inertial range of the flow starts. The mixing timescales for these two scalar cases are thus given by the turnover time of large eddies of similar sizes. The situation is different for the remaining two scalar fields. For $H_0 \le 0.01$, the size of each individual blob is significantly smaller than $L_{\rm f}$. It is also smaller than the average separation, $L_{\rm p}$ ($\simeq L_{\rm f}$), between the pollutant blobs. In this case, the mixing process around each blob is not synchronized with that over the entire flow. This divides the variance evolution into two phases. The early phase occurs faster and is controlled by the turbulent stretching rate at smaller scales (the pollutant size). This explains the faster variance decay for smaller $H_0$ at early times. After each blob is stretched, spread and mixed to a size close to the average pollutant separation, the mixing process starts to proceed at a single pace, and the timescale is determined by the turnover time of eddies of size $L_{\rm p}$. As seen in the bottom panel of Fig. (\[var\]), the variance decay is exponential for all cases at late times with essentially the same timescale ($0.6 \tau_{\rm dyn}$).
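Since the late-time variance decay is close to exponential, the mixing timescale can be extracted with a simple log-linear fit. The sketch below is illustrative only, not the analysis pipeline used for the simulations; the function name and the synthetic decay data are our own:

```python
import numpy as np

def measure_mixing_timescale(t, var):
    """Estimate tau_m assuming <dZ^2>(t) = <dZ^2>(0) * exp(-t / tau_m):
    fit a straight line to log(variance) versus time and invert the slope."""
    slope, _intercept = np.polyfit(t, np.log(var), 1)
    return -1.0 / slope

# Synthetic decay with tau_m = 0.57 (times in units of tau_dyn)
t = np.linspace(0.0, 2.0, 50)
var = 0.09 * np.exp(-t / 0.57)
print(round(measure_mixing_timescale(t, var), 2))  # 0.57
```

In practice one would restrict the fit to the late-time interval where the decay has settled onto a single exponential, as done above for the interval $0$ to $\simeq 2\,\tau_{\rm dyn}$.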
The existence of two phases for scalar fields with small $H_0$ also leaves a signature in the evolution of the pristine gas fraction. ![image](f4.eps){width="2\columnwidth"} ![image](f5.eps){width="2\columnwidth"} The Pristine Fraction --------------------- ### General Results We now present results for the decay of the pristine mass fraction in our simulated flows. In PSS, we have shown results for three scalar fields, [1]{}A, [1]{}B and [1]{}C from category [1]{} (with $H_0 =0.5$, 0.1 and 0.01, respectively; see Table 1), evolved in two flows with $M=0.9$ and $M=6.2$. The pollutant injection scale $L_{\rm p}$ of those fields was the box size, or about twice the flow driving scale $L_{\rm f}$. In this section, we consider scalar fields in category [3]{} in the $M=0.9$ and 6.2 flows as primary examples. The injection scale of these fields is smaller, with $L_{\rm p} \simeq L_{\rm f}$. In the subsequent subsections, we will discuss in detail the dependence of the pristine fraction decay on various parameters. Fig. (\[Mach0.9Four\]) shows the mass fraction $P({10^{-8}}, t)$ of the flow with concentration level below $10^{-8}$ for scalar fields [3]{}A, [3]{}B, [3]{}C, and [3]{}D in the $M=0.9$ flow. The data points are simulation results, and the lines are fitting functions based on the predictions of the self-convolution PDF models discussed in §3.2. The left and right panels are the same figure on linear-linear and linear-log scales, respectively. The linear-linear scale shows the early evolution more clearly, while with a linear-log plot one can see the late-time behavior better. The initial pollutant fraction, $H_0$, of the four cases in this figure ranges from 0.5 ([3]{}A) to $10^{-3}$ ([3]{}D). As shown in PSS, the prediction, eq. (\[pfnconvsolution\]), of the self-convolution models can successfully fit the simulation results for scalar fields with $H_0 \ge 0.1$. The fitting lines in Fig.
(\[Mach0.9Four\]) for the two cases with $H_0 =0.5$ and $0.1$ are the predictions of the convolution models with $n=10$. The initial pristine fraction $P_0$ in eq. (\[pfnconvsolution\]) is set to 0.5 and 0.9, and the timescale $\tau_{\rm con}$ is taken to be $0.27$ and $0.25 \tau_{\rm dyn}$, respectively. Both the linear-linear and the linear-log plots show that the model prediction matches the simulation data well, suggesting that the pollution process in turbulent flows may be adequately described as a self-convolution process. If $H_0$ is smaller than $\simeq 0.1$, the evolution of the pristine fraction is more complicated, and one cannot satisfactorily fit the entire evolution of $P(Z_{\rm c}, t)$ with the convolution model, eq. (\[pfnconvsolution\]), by properly choosing the parameters $n$ and $\tau_{\rm con}$. In this case, the pollution process shows different behaviors at early and late evolution phases. A two-phase behavior for scalar fields with small $H_0$ was actually seen earlier in the scalar variance decay (see the bottom panel of Fig. (\[var\]) for scalars [3]{}C and [3]{}D in the Mach 6.2 flow). For these cases, only a small fraction of the flow material, near the pollutant blobs, experiences PDF convolution at early times, because the amount of pollutants available for mixing is limited. This suggests that the convolution of the concentration PDF is local in space in the early phase, and, based on the physical discussion in §3.1, the mixing process in this phase would be better described by a “discrete" version of the convolution model (with $n$=1). Consistent with this picture, we find that the pristine fraction in the early phase is in good agreement with the prediction, eq. (\[pfintegralsolution\]), of the “discrete" convolution model, or equivalently eq. (\[pfnconvsolution\]) with $n$=1. With time, more and more flow is polluted, and the mixed flow material then acts as sources for further pollution. 
The PDF convolution would thus become more global in space and hence more continuous in Laplace space, leading to an increase in $n$. As described in §3.1, $n$ essentially corresponds to the degree of spatial locality for the PDF convolution. Recognizing the different mixing behaviors at early and late times, we attempted to apply a two-phase fitting procedure for scalar fields with $H_0\le 0.01$ (see PSS). For a two-phase fit, we need to determine the transition time at which the two behaviors connect. Since the generalized convolution model with a single phase provides perfect fits to scalar fields with $H_0 \ge 0.1$, one may expect that the second phase with a more global PDF convolution starts when the pristine fraction, $P(Z_{\rm c}, t)$, decreases to 0.9. We thus first tried to obtain a fitting function that connects the two phases at the time $t_{0.9}$ when $P(Z_{\rm c}, t) = 0.9$. The results are shown as dashed lines in Fig. (\[Mach0.9Four\]). In these lines, the early phases are fit by the “discrete" model, eq. (\[pfintegralsolution\]), with $\tau_{\rm con} = 0.17 \tau_{\rm dyn}$ for both case [3]{}C ($H_0= 0.01$) and case [3]{}D ($H_0=0.001$). Once $P(Z_{\rm c}, t)$ decreases to 0.9, we use the generalized model prediction, $P(Z_{\rm c}, t) = 0.9/[0.9^{1/n} + (1-0.9^{1/n}) \exp((t-t_{0.9})/\tau_{\rm con})]^n$ (cf. eq. (\[pfnconvsolution\])) with $n=10$. The timescale $\tau_{\rm con}$ for the late phase is set to $0.23$ and $0.25 \tau_{\rm dyn}$ for case [3]{}C and case [3]{}D, respectively. The fitting values adopted for $n$ and $\tau_{\rm con}$ in the late phase are close to those used for the scalar fields with $H_0 \ge 0.1$. This means that, once the polluted fraction becomes larger than $\simeq 0.1$, the pristine fraction decays similarly to the $H_0 \gsim 0.1$ fields. The fitting quality of the dashed lines appears to be acceptable.
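For concreteness, the model curve quoted above can be evaluated directly; shifting the time origin so that $P(0)=P_0$ gives the single-phase form used for the $H_0 \ge 0.1$ fits. The helper name below is our own; setting $n=1$ recovers the “discrete" model, eq. (\[pfintegralsolution\]):

```python
import numpy as np

def p_nconv(t, p0, n, tau_con):
    """Generalized self-convolution prediction for the pristine fraction:
    P(t) = P0 / [P0^(1/n) + (1 - P0^(1/n)) * exp(t / tau_con)]^n.
    n = 1 gives the 'discrete' convolution model."""
    a = p0 ** (1.0 / n)
    return p0 / (a + (1.0 - a) * np.exp(np.asarray(t, dtype=float) / tau_con)) ** n

# At t = 0 the bracket equals 1, so the model returns the initial fraction P0
print(round(float(p_nconv(0.0, 0.9, 10, 0.25)), 3))  # 0.9
```

The denominator grows exponentially with $t$, so $P(t)$ decays monotonically from $P_0$ toward zero, with $n$ controlling the shape of the curve and $\tau_{\rm con}$ its overall timescale.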
To distinguish the two convolution timescales in the early and late phases, we denote them as $\tau_{\rm con1}$ and $\tau_{\rm con2}$, respectively. We will also use $\tau_{\rm con2}$ to denote the convolution timescale for scalar fields with $H_0 \ge 0.1$ because the decay of the pristine fraction for those fields is similar to the later-phase evolution of the $H_0 \le 0.01$ cases. We find that one can obtain better fits for scalar fields with $H_0 \le 0.01$ by connecting the two phases at later times. As shown in PSS, for these fields the “discrete" model well matches the simulation data in an extended time range until $P(Z_{\rm c}, t)$ drops to 0.2-0.3. This allows us to connect the early and late behaviors at a time significantly larger than $t_{0.9}$. It turns out that the fitting quality is actually significantly improved if we start to use the generalized model with $n=10$ at times when $P(Z_{\rm c}, t)$ is smaller than $\simeq0.7$. The solid lines in Fig.  (\[Mach0.9Four\]) for cases [3]{}C and [3]{}D show the fitting functions that connect the “discrete" model and the later phase at $t_{0.5}$ when $P(Z_{\rm c}, t)$ decreases to 0.5. In the fitting curves, $\tau_{\rm con1}$ for the “discrete" phase is set to $0.18 \tau_{\rm dyn}$ for both case [3]{}C and case [3]{}D. Starting from $t_{\rm 0.5}$, we use the generalized model $P(Z_{\rm c}, t) = 0.5/[0.5^{1/n} + (1-0.5^{1/n}) \exp((t-t_{0.5})/\tau_{\rm con})]^n$ with $n=10$. The timescale $\tau_{\rm con2} $ is set to $0.25$ and $0.27 \tau_{\rm dyn}$ for case [3]{}C and case [3]{}D, respectively. From Fig.  (\[Mach0.9Four\]), the two-phase fitting lines connecting at $t_{0.5}$ agree with the data considerably better than the dashed lines that connect at $t_{0.9}$. Our choice here to connect the two phases at $t_{\rm 0.5}$ is somewhat arbitrary because there is an extended time range where both the “discrete" model and the $n=10$ model can match the simulation data (PSS). 
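The two-phase fits described above can be stitched together so that the curve is continuous at the hand-over point. The assembly below is our own sketch, not the fitting code used for the figures; the parameter values are those quoted above for case [3]{}C in the $M=0.9$ flow ($\tau_{\rm con1}=0.18\,\tau_{\rm dyn}$, then $n=10$ with $\tau_{\rm con2}=0.25\,\tau_{\rm dyn}$, joined at $t_{0.5}$):

```python
import numpy as np

def p_model(t, p0, n, tau):
    # generalized self-convolution prediction with P(0) = p0
    a = p0 ** (1.0 / n)
    return p0 / (a + (1.0 - a) * np.exp(np.asarray(t, dtype=float) / tau)) ** n

def p_two_phase(t, p0, tau1, n2, tau2, p_switch=0.5):
    """'Discrete' (n = 1) phase until P drops to p_switch, then the
    generalized model restarted from p_switch with index n2.  Both
    branches equal p_switch at the switch time, so the curve is continuous."""
    t = np.asarray(t, dtype=float)
    # invert the n = 1 model to find when it reaches p_switch
    t_switch = tau1 * np.log((p0 / p_switch - p0) / (1.0 - p0))
    return np.where(t < t_switch,
                    p_model(t, p0, 1, tau1),
                    p_model(t - t_switch, p_switch, n2, tau2))

# Case 3C (H0 = 0.01, so P0 = 0.99) in the M = 0.9 flow, times in tau_dyn
print(round(float(p_two_phase(0.0, 0.99, 0.18, 10, 0.25)), 3))  # 0.99
```

Connecting at $t_{0.9}$ instead amounts to choosing `p_switch=0.9`, which is how the dashed lines in the figures are constructed.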
In fact, combining the two models at any time with $0.2 \lsim P(Z_{\rm c}, t) \lsim 0.7$ would give fitting curves of similar quality. The parameter $n$ adopted in both the dashed lines and the solid lines is 10, i.e., the same as used for the scalar fields with $H_0 \ge 0.1$. This is also the case for the convolution timescale $\tau_{\rm con2}$ in the second phase. The values of $\tau_{\rm con2}$ used in the solid lines almost coincide with those adopted in the fitting lines for scalar fields with $H_0 \ge 0.1$, and in the dashed lines the $\tau_{\rm con2}$ values are only slightly smaller (by $\lsim 10\%$). For the early phase, the adopted values for the timescale $\tau_{\rm con1}$ in the solid and dashed lines are very close too. Our result that connecting the two phases at $t_{0.5}$ yields better fits than at $t_{0.9}$ seems to suggest that, for scalar fields with $H_0 \lsim 0.01$, the pollution process does not make an immediate transition from the “discrete" to the generalized convolution model with larger $n$, when the pristine fraction decreases to ${0.9}$. The transition tends to occur later. Considering that the generalized model with a single phase works perfectly for scalar fields with $H_0 \gsim 0.1$, this implies that the time at which the generalized convolution phase starts is not simply controlled by the value of the pristine or polluted fraction: It appears to have some dependence on whether the initial pristine fraction is larger or smaller than $\simeq 0.9$. This does not cause any problems in a practical application if the exact value of the initial pollutant fraction, $H_0$, is known. One can use the generalized convolution model with a single phase if $H_0 \ge 0.1$, or adopt a two-phase model connecting at, say, $t_{0.5}$ if $H_0 \lsim 0.1$. However, there is some complication when applying this procedure to the subgrid model we will construct in §7 for large-eddy simulations for the pollution of primordial gas in early galaxies. 
For example, if at a given time the pristine fraction in a computational cell is, say, between 0.9 and 0.5, then the choice of using the “discrete" model or the generalized model at that moment depends on whether the pristine fraction in that cell was larger or smaller than 0.9 when it was first polluted. This would make the implementation of our subgrid model complicated, as it requires keeping some information on the pollution history in each cell. We advocate simply using the generalized convolution model for any cells with a pristine fraction smaller than 0.9, as it gives acceptable, if not perfect, fits to our simulation data for $H_0 \le 0.01$ scalars at any time after $t_{0.9}$. In the following subsections, we will only consider fitting functions that connect at $t_{0.5}$ for scalar fields with $H_0 \le 0.01$, as they are in better agreement with the simulation data. We will tabulate the fitting parameters obtained from such fits in §5.4.6. If in a particular application connecting the early and later phases at $t_{0.9}$ is preferred over $t_{0.5}$, our tabulated parameters would still be applicable, as the best-fit parameters used in the fitting curves that connect at $t_{0.9}$ and $t_{0.5}$ are very close. In Fig. (\[Mach6.2Four\]), we show the simulation results and the fitting curves for scalar fields from the same category ([3]{}), but in the Mach 6.2 flow. For the fields with $H_0 =0.5$ and $0.1$, the data points are fit by the convolution model, eq. (\[pfnconvsolution\]), with $n=3$. The timescale $\tau_{\rm con2}$ for the two cases is set to $0.30$ and $0.31 \tau_{\rm dyn}$, respectively. Similar to the $M=0.9$ case, two-phase models connecting at $t_{0.9}$ (dashed lines) and $t_{0.5}$ (solid lines) are used for the remaining two cases with $H_0 \le 0.01$.
For the dashed lines, the early phase is fit by the “discrete" convolution model with $\tau_{\rm con1} = 0.22\tau_{\rm dyn}$ for scalar field [3]{}C and $\tau_{\rm con1} = 0.24\tau_{\rm dyn}$ for case [3]{}D, and for the late phase we used the $n=3$ convolution model with $\tau_{\rm con2} = 0.31 \tau_{\rm dyn}$ and $\tau_{\rm con2} = 0.33 \tau_{\rm dyn}$ for the two cases, respectively. For the solid lines that connect at $t_{0.5}$, the fitting parameters for the “discrete" phase are $\tau_{\rm con1} = 0.23\tau_{\rm dyn}$ and $\tau_{\rm con1} = 0.25\tau_{\rm dyn}$ for cases [3]{}C and [3]{}D, respectively, and the late evolution stage is fit with $n=3$ and $\tau_{\rm con2} = 0.34 \tau_{\rm dyn}$ for both cases. Again, the fitting quality is better with a connection at $P(Z_{\rm c}, t) =0.5$. In all cases, the fitting parameters, $n$ and $\tau_{\rm con2}$, adopted for the scalar fields with $H_0 \ge 0.1$ and for the late phases of the $H_0\le 0.01$ fields are very close, suggesting a universal decay behavior of the pristine fraction once the polluted fraction exceeds $\simeq 0.3$. We find that, for scalar fields with $H_0 \le 0.01$, it is more difficult to fit the early phases as $H_0$ decreases, and the fitting quality becomes poorer with decreasing $H_0$ (see Figs. (\[Mach6.2Four\]) and (\[Mach0.9Four\])). The pollutant size is smaller for smaller values of $H_0,$ and this may cause some complications for the prediction of the pristine fraction. For example, as $H_0$ decreases to 0.001, the blob diameter is only about 30 computational cells, which is close to the scale where the flow inertial range ends. The first effect is that, with time, the size of the polluted region around each pollutant blob increases, and the turbulent stretching timescale in the polluted regions may increase with time. This is not accounted for in the convolution models since the convolution timescale is set to be constant.
Another effect arises from the fact that the turbulent stretching rate has larger spatial variations at smaller scales. The turbulent eddy “seen" by a small pollutant blob may have a stretching rate different from the average value at the pollutant size. The increase in the amplitude of the stretching rate fluctuations with decreasing length scale indicates that the turbulent intensity around smaller blobs is more “random". In the early phase, the flow mass polluted by a single blob is expected to increase exponentially with the stretching rate. Therefore, using an average stretching rate for all the pollutant blobs may not give a precise prediction. The overall pollution rate depends on the turbulent stretching rates “seen" by all the pollutants at early times. The blobs encountering more intense eddies provide a larger contribution to the pollution process, and vice versa. The effect is further amplified by the phenomenon of intermittency: the PDF of the stretching intensity exhibits fatter tails toward smaller scales. Therefore, small blobs have a larger chance of encountering extreme stretching events. Clearly, the effect of intermittency makes it more difficult to predict the pristine fraction for scalar fields with smaller $H_0$. We point out that the parameters, $n$ and $\tau_{\rm con2}$, that fit a given scalar field with $H_0 \ge 0.1$, or the late phase of an $H_0 < 0.1$ case, are not unique. In fact, a (small) range of parameter pairs ($n$, $\tau_{\rm con2}$) can give acceptable fits to the simulation data. For example, if a somewhat smaller (larger) $n$ is used, one could also have a similar fit with a correspondingly smaller (larger) value of $\tau_{\rm con2}$. When obtaining the best-fit parameters, we attempted to select a single value of $n$ that provides good fits to all scalar fields in each category. With the chosen $n$, we then determined the best-fit value of $\tau_{\rm con2}$ for each scalar field in the category.
As discussed above, the timescale turns out to be similar for all cases in a given category. ![image](f6.eps){width="2.\columnwidth"} A comparison of Figs. (\[Mach0.9Four\]) and (\[Mach6.2Four\]) shows that, when the time is normalized to the flow dynamical timescale, the pristine fraction in the Mach 6.2 flow survives for significantly longer than in the Mach 0.9 case (see also PSS). We discuss this Mach number dependence in the following subsection. ### Dependence on the Flow Mach Number In Fig. (\[Machdependence\]), we show the evolution of the pristine fraction for scalar fields [3]{}B (left) and [3]{}C (right) in our four simulated flows. As observed earlier, with $t$ normalized to the flow dynamical time, $\tau_{\rm dyn}$, the decrease of the pristine fraction becomes slower with increasing Mach number. In the fitting lines for case [3]{}B (left panel), the parameter pair ($n$, $\tau_{\rm con2}$) is set to (10, 0.26$\tau_{\rm dyn}$), (6, 0.29 $\tau_{\rm dyn}$), (5, 0.31$\tau_{\rm dyn}$), and (3, 0.31$\tau_{\rm dyn}$) for the four flows with $M=0.9$, 2.1, 3.5, and 6.2, respectively. Again, we see that $\tau_{\rm con2}$ first increases with $M$ and then saturates for $M\gsim 3$. This trend is similar to that of the variance decay timescale, $\tau_{\rm m}$, as a function of $M$ (see the top panel of Fig. (\[var\])). As explained in §5.3, $\tau_{\rm m}$ increases as the energy fraction in compressible modes increases and then becomes roughly constant when the compressible energy fraction saturates at $M \gsim 3$. The same reasoning also applies here for the trend of $\tau_{\rm con2}$ with $M$. Similar to the discussion in §5.3 on $\tau_{\rm m}$, the convolution timescale may have a different behavior with $M$ in compressively driven flows. In §5.2, we showed that, at a similar concentration variance, the PDF tail becomes broader as $M$ increases, most likely because of the increase in turbulent intermittency.
Because the pristine fraction corresponds to the far left tail of the concentration PDF, this effect also slows the pristine gas pollution at larger $M$. Another effect of the broadening of the PDF tail with increasing $M$ is that it changes the shape of the pristine fraction vs. time curve, as seen in Fig. (\[Machdependence\]). To fit the pristine fraction in flows at different $M$, we varied the parameter, $n$, in the self-convolution model, which controls the shape of the fitting function, eq. (\[pfnconvsolution\]). The best-fit value of $n$ decreases from 10 to 3 as $M$ increases from 0.9 to 6.2. This decrease of $n$ is expected from the fact that the self-convolution model with smaller $n$ would predict broader PDF tails (see §3.1). The convolution model was originally proposed for mixing in incompressible turbulence, where $n$ was a free parameter without a clear connection to the mixing physics. Our finding that the convolution model with properly chosen $n$ can accurately describe the pristine fraction evolution in compressible turbulent flows motivates a physical interpretation of the parameter. A possible intuitive reason for why $n$ decreases with the flow Mach number is that $n$ reflects the degree of spatial locality of the PDF convolution, with more local mixing events implying a smaller $n$. In highly supersonic turbulence, the majority of the flow mass resides in a small fraction of the volume, i.e., in the dense postshock regions. Therefore, mixing of the pollutants into and within local regions of high densities is crucial for the final homogenization. The dense postshock regions are persistent with a lifetime on the order of the flow dynamical time, and the timescale for the homogenization between different postshock regions is expected to be on the same order. This suggests that the presence of local dense regions may suppress the possibility of a global PDF convolution.
In that case, as the flow Mach number increases, the convolution would become more local, leading to a decrease in $n$. Based on this argument, we speculate that, in compressively driven flows at similar $M$, the parameter $n$ would be smaller than in our simulated flows. The convolution in a compressively driven flow is expected to be more local because the density fluctuations are stronger (e.g., Federrath et al. 2010). ![image](f7.eps){width="2\columnwidth"} The right panel of Fig. (\[Machdependence\]) plots the results for case [3]{}C with $H_0 =0.01$ in the four simulated flows. Again, the decrease of the pristine fraction is slower in flows with larger $M$. As discussed earlier, a two-phase fitting scenario is needed for scalar fields with $H_0 \le 0.01$. Using the discrete convolution model to fit the early-phase evolution, we find that the timescale $\tau_{\rm con1}$ is 0.17, 0.19, 0.22 and 0.23 $\tau_{\rm dyn}$ for $M=0.9$, 2.1, 3.5 and 6.2, respectively. The two phases are connected at $t_{0.5}$. Note that the timescale $\tau_{\rm con1}$ also increases with $M$ at first and then saturates for $M\gsim 3$. For the late phase of scalar case [3]{}C, we adopted the same values of $n$ (i.e., 10, 6, 5, and 3) for the flows as used for case [3]{}B. To match the simulation data, $\tau_{\rm con2}$ in the late phase is set to 0.25, 0.30, 0.34 and 0.34 $\tau_{\rm dyn}$ for $M=0.9$, 2.1, 3.5 and 6.2, respectively. Again, these numbers are close to the best-fit values for case [3]{}B shown in the left panel. In summary, we found that the pollution of the pristine gas is slower in flows at higher $M$. Two reasons are responsible for this behavior. First, the mixing (or variance decay) timescale $\tau_{\rm m}$ becomes larger as $M$ increases. Second, at the same concentration variance, the left PDF tail broadens with $M$, and this corresponds to a larger pristine fraction.
### Dependence on the Pollutant Injection Length Scale Next, we study the dependence of the pristine fraction evolution on the initial spatial configuration of the pollutants, i.e., on how the pollutants are released into the flow. Each category in Table 1 represents a different pollutant shape or distribution at the initial time. In Fig. (\[lengthscale\]), we compare the simulation results for scalar fields from different categories in the Mach 6.2 flow. The left panel shows five B fields with $H_0= 0.1$, and the right panel is for C cases with $H_0 =0.01$. The initial condition for the scalar fields in categories [1]{} and [2]{} is a single pollutant cube and a single spherical blob, respectively, and the pristine fraction evolution for scalar fields in these two categories is almost the same, suggesting that the geometric shape of the pollutant blob does not affect the pollution rate. On the other hand, the pollution process has a sensitive dependence on the injection length scale, $L_{\rm p}$. For scalar fields in the first two categories ([1]{} and [2]{}), $L_{\rm p}$ is about equal to the box size, or twice the flow driving scale, $L_{\rm f}$. For categories [3]{}, [4]{} and [5]{}, $L_{\rm p} \simeq L_{\rm f}$, $L_{\rm f}/2$ and $L_{\rm f}/4$, respectively. As anticipated in §4, the decay of the pristine fraction becomes progressively faster with decreasing $L_{\rm p}$. The four lines in each panel of Fig. (\[lengthscale\]) are fitting functions based on the self-convolution models. Since the data points almost coincide for scalar fields in categories [1]{} and [2]{}, a single fitting curve (the line on the right) works for both cases. The other three fitting lines, from the right to the left, correspond to scalar fields in categories [3]{}, [4]{}, and [5]{}, respectively.
The fitting parameters used for the B fields in the left panel are $n=5, 3, 2,$ and 1, and $\tau_{\rm con2} = 0.42, 0.32, 0.19$ and $0.11\tau_{\rm dyn}$, respectively, for the four lines from the right to the left. The timescale $\tau_{\rm con2}$ decreases by $\simeq 20 \%$ as $L_{\rm p}$ changes from $2L_{\rm f}$ to $L_{\rm f}$, and, as $L_{\rm p}$ decreases further below $L_{\rm f}$, the decrease of $\tau_{\rm con2}$ is faster, dropping by $\simeq 40\%$ for each factor of 2 in $L_{\rm p}$. This trend is similar to the dependence of the variance decay timescale, $\tau_{\rm m}$, on $L_{\rm p},$ which is controlled by the eddy turnover time at $\simeq L_{\rm p}$ and decreases with decreasing $L_{\rm p}$. This also explains the faster pollution of the pristine gas if the pollutants are injected at smaller scales. Recalling that $\tau_{\rm m}$ was measured to be $0.72, 0.57, 0.48$ and 0.34 $\tau_{\rm dyn}$ for the same B fields in the Mach 6.2 flow with $L_{\rm p} \simeq 2, 1, 0.5$ and 0.25$L_{\rm f}$, respectively (see the middle panel of Fig. \[var\]), we see that $\tau_{\rm con2}$ has a more sensitive dependence on $L_{\rm p}$ than $\tau_{\rm m}$. A possible reason for this is that the exposure of the pollutants to the pristine flow may be an important factor for the pollution efficiency, and, with decreasing $L_{\rm p}$, the number of pollutant blobs increases rapidly, leading to enhanced pollutant exposure. The trend that $n$ becomes smaller for scalars injected at smaller scales corresponds to the broadening of the PDF tails with decreasing $L_{\rm p}$ found in §5.2. The PDF tails become broader because the flow structures “seen" by the scalars with smaller $L_{\rm p}$ are more intermittent, and thus the dependence of $n$ on $L_{\rm p}$ is related to the higher degree of turbulent intermittency at smaller scales. If the turbulent flow is driven compressively, the decrease of $n$ with decreasing $L_{\rm p}$ may be faster due to stronger intermittency of the flow.
Note that broadening of the PDF tails makes the pristine fraction larger, but this effect is minor in comparison to the faster decrease of the pristine fraction caused by the smaller mixing timescale at smaller $L_{\rm p}$. The decrease of $n$ with decreasing $L_{\rm p}$ may also be understood from a more intuitive argument. For scalar fields with a small $L_{\rm p}$, each pollutant is stretched by a local velocity structure, and the mixing of each pollutant blob with the surrounding flow proceeds largely independently at early times. The pollution process is almost complete when the regions mixed by the individual pollutant blobs start to overlap (see bottom panels of Fig. 1), meaning that the PDF convolution occurs locally and independently in different regions of size $L_{\rm p}$ during most of the mixing process. As the injection scale decreases, the PDF convolution becomes more local, leading to a smaller value of $n$, which corresponds to a higher degree of spatial locality in the PDF convolution (see §3.1). For the C fields shown in the right panel, a two-phase scenario connecting at $t_{0.5}$ is used to obtain the fitting lines. In the early phase, the timescale, $\tau_{\rm con1}$, in the discrete convolution model is taken to be 0.30, 0.24, 0.17, and 0.1 $\tau_{\rm dyn}$, respectively, for the four fitting lines from right to left. The dependence of $\tau_{\rm con1}$ on $L_{\rm p}$ is similar to that of $\tau_{\rm con2}$ for the B cases. It first decreases by $\simeq 20\%$ as $L_{\rm p}$ decreases to $L_{\rm f}$, and then decreases faster, by $\simeq 30-40\%$, as $L_{\rm p}$ decreases further by each factor of 2. For the late phase, we adopted the same values (5, 3, 2 and 1) of $n$ as for the corresponding B cases shown in the left panel, and $\tau_{\rm con2}$ is set to $0.43, 0.34$, 0.22, and 0.12 $\tau_{\rm dyn}$ for scalar fields with $L_{\rm p} \simeq 2$, 1, 0.5 and $0.25 L_{\rm f}$, respectively.
Again, these values of $\tau_{\rm con2}$ are close to those used in the fitting lines for the corresponding B cases. It is interesting to note that, for case [5]{}C, $n=1$ is adopted in both the early and late phases, although the timescales $\tau_{\rm con1}= 0.1 \tau_{\rm dyn}$ and $\tau_{\rm con2}= 0.12 \tau_{\rm dyn}$ are slightly different. We also examined the $L_{\rm p}$ dependence for all the other scalar fields, including those in the other three flows. We found similar trends for the parameters $n$, $\tau_{\rm con1}$ and $\tau_{\rm con2}$ with varying pollutant injection scale. The results are tabulated and further discussed in §5.4.6.

![image](f8.eps){width="2\columnwidth"}

### Dependence on the Threshold Metallicity

When presenting simulation results in earlier subsections, we set the threshold metallicity to $Z_{\rm c} = 10^{-8}$ as a representative value, but, as discussed in the Introduction, the threshold value for the transition to normal star formation is uncertain. We therefore need to study the dependence of the pristine fraction $P(Z_{\rm c}, t)$ on $Z_{\rm c}$. In Fig. (\[Threshdependence\]), we plot $P(Z_{\rm c}, t)$ at different threshold values for scalar [3]{}C in the Mach 6.2 flow. The two panels show the same figure on linear-linear and linear-log scales, respectively. We consider the scalar case C as an example, with which we can examine the $Z_{\rm c}$ dependence of both convolution timescales, $\tau_{\rm con1}$ and $\tau_{\rm con2}$, for the early and late phases, respectively. The filled circles in Fig. (\[Threshdependence\]) correspond to the fraction of exactly pristine flow material with $Z=0$. This fraction decreases to zero almost instantaneously, an effect caused by numerical diffusion. During each timestep, any computational cell adjacent to one that contains pollutants or has been polluted by earlier mixing events will obtain a finite, but often extremely small, concentration. 
This means that the exactly-pristine flow material would be completely lost in a number of steps $\approx L_{\rm p}/\Delta$, with $L_{\rm p}$ and $\Delta$ the average pollutant separation and the computational cell size, respectively. The timestep in our simulation is approximately given by $\Delta/v_{\rm max}$, where $v_{\rm max}$ is the maximum flow velocity at a given time. Therefore, the survival time of exactly-pristine gas is $L_{\rm p}/v_{\rm max}$, which is much smaller than the flow dynamical time $L_{\rm f}/v_{\rm rms}$ because $v_{\rm max} \gg v_{\rm rms}$. The almost immediate removal of exactly metal-free gas by numerical diffusion is analogous to the expectation in §3.2 that the molecular diffusivity alone tends to reduce the exactly-pristine fraction $P(t)$ to zero instantaneously (see also PSS), although the numerical diffusion in our simulation probably has a different form and amplitude than the realistic molecular diffusivity. The open symbols in Fig. (\[Threshdependence\]) show simulation data for finite, and more realistic, threshold values in the range from $10^{-9}$ to $10^{-5}$. For $Z_{\rm c}$ in this range, the simulation data for $P(Z_{\rm c}, t)$ can be fit by the self-convolution models. The fitting lines in Fig. (\[Threshdependence\]) are obtained using a two-phase scheme which combines the early and late behaviors at $t_{\rm 0.5}$. Fitting the early phase with the discrete convolution model, we find that the dependence of the timescale $\tau_{\rm con1}$ on $Z_{\rm c}$ is very weak, with $\tau_{\rm con1} =$ 0.226, 0.233, 0.244, 0.253 and 0.271 $\tau_{\rm dyn}$ for the five threshold values increasing from $10^{-9}$ to $10^{-5}$. If we express the dependence as a power law, $\tau_{\rm con1} \propto Z_{\rm c}^{a_1}$, the exponent, $a_{1}$, would be very small, $\simeq 0.015$. Note that the increase seems to be faster (by about $7\%$) as the threshold increases from $10^{-6}$ to $10^{-5}$. 
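The survival-time estimate above is simply the product of the step count and the timestep; a quick sanity check with hypothetical numbers (the values below are illustrative and not taken from the runs):

```python
# Exactly-pristine gas is eroded by numerical diffusion in roughly
# L_p/Delta timesteps of size Delta/v_max each, i.e. in a time
# t_survive = L_p/v_max.  All values below are illustrative only.
L_p, Delta = 0.25, 1.0 / 512.0     # injection scale, cell size (box units)
L_f, v_rms, v_max = 0.5, 1.0, 6.0  # hypothetical flow parameters

n_steps = L_p / Delta              # number of timesteps to erode
t_survive = n_steps * (Delta / v_max)
tau_dyn = L_f / v_rms              # flow dynamical time

# t_survive equals L_p/v_max, well below tau_dyn because v_max >> v_rms.
```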
For the late phase, we fixed the parameter $n$ at 3 (the same as used before for this scalar), and adjusted the timescale $\tau_{\rm con2}$ to match the data points for different threshold values. The best-fit values for $\tau_{\rm con2}$ were found to be 0.33, 0.344, 0.355, 0.375 and 0.39 $\tau_{\rm dyn}$ for $Z_{\rm c} = 10^{-9}, 10^{-8}, 10^{-7}, 10^{-6}$, and $10^{-5}$, respectively. On average, $\tau_{\rm con2}$ increases by 4% as $Z_{\rm c}$ increases by each factor of 10, and the dependence can be roughly written as $\tau_{\rm con2} \propto Z_{\rm c}^{a_2}$ with $a_2 = 0.02$. A similar $Z_{\rm c}$ dependence of the convolution timescales was also found for other cases with different $L_{\rm p}$ and in flows at different $M$. There is also a general trend that the increase of the convolution timescale with the threshold becomes faster as $Z_{\rm c}$ increases to the highest value, $10^{-5}$, considered in our study. The weak power-law dependence of the convolution timescales, $\tau_{\rm con1}$ and $\tau_{\rm con2}$, on $Z_{\rm c}$ may extend to a range of threshold values below $10^{-9}$, although as $Z_{\rm c} \to 0$, the numerical diffusion would eventually act directly to reduce $P(Z_{\rm c}, t)$, and the scaling of the convolution timescale with $Z_{\rm c}$ given earlier would fail. The $Z_{\rm c} \to 0$ limit may not be of practical interest, as the critical metallicity is likely to be higher than $10^{-9}$ by mass. In the other limit, with increasing $Z_{\rm c}$, the weak power-law scaling will also break down eventually. As pointed out above, the increase of the convolution timescales is already faster as $Z_{\rm c}$ increases to $10^{-5}$. In fact, if $Z_{\rm c}$ approaches the average concentration $\langle Z \rangle$ (0.01 for the scalar field shown in Fig. \[Threshdependence\]), eq. (\[pfnconvsolution\]), which was derived in the limit $Z_{\rm c} \to 0$, will become invalid. 
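The quoted exponents can be recovered directly from the tabulated timescales by a log-log fit. The sketch below uses the $\tau_{\rm con1}$ values listed above; an unweighted least-squares fit over the full range gives a slope $\simeq 0.02$, slightly above the quoted $0.015$ because the rise steepens toward $Z_{\rm c} = 10^{-5}$:

```python
import numpy as np

# tau_con1 (in units of tau_dyn) for Z_c = 1e-9 ... 1e-5, as listed above.
Zc = np.array([1e-9, 1e-8, 1e-7, 1e-6, 1e-5])
tau1 = np.array([0.226, 0.233, 0.244, 0.253, 0.271])

# Least-squares slope in log-log space: tau_con1 ∝ Zc**a1.
a1 = np.polyfit(np.log10(Zc), np.log10(tau1), 1)[0]
```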
For illustration, let us consider the case in which $Z_{\rm c}$ is exactly equal to $\langle Z \rangle$. In this case, the fraction $P(Z_{\rm c},t)$ would not decrease to zero in the long-time limit; instead it would approach 1/2. A more extreme example is that, if $Z_{\rm c}$ is larger than $\langle Z \rangle$, $P(Z_{\rm c}, t)$ would first decrease when the pollutants mix with a small amount of the flow material, and then increase and finally approach unity when the flow is completely homogenized. This situation may occur at very early times in the history of a galaxy, before heavy elements produced by supernova explosions increased the average metallicity to above the critical threshold. However, there is the possibility that metals from the explosion of a single massive Pop III star could make $\langle Z \rangle >Z_{\rm c}$ in a small high-redshift galaxy (Frebel et al. 2009). Even if the average metallicity in the entire interstellar medium is larger than $Z_{\rm c}$, there may exist local regions where the average metallicity is smaller than the threshold. One would need to deal with this situation in a subgrid model for large-scale simulations of primordial gas pollution in an early galaxy (see §7). In this work, we do not examine the evolution of $P(Z_{\rm c}, t)$ for $Z_{\rm c}$ close to or even larger than $\langle Z \rangle$; we defer this to later work. In the case of $\langle Z \rangle< Z_{\rm c}$, a good approximation is perhaps to assume that $P(Z_{\rm c}, t)$ is a constant $\approx 1.$ Considering that $P(Z_{\rm c}, t)$ would show qualitatively different behaviors as the ratio, $Z_{\rm c}/\langle Z \rangle$, gets close to unity, it appears appropriate to take it as a function of $Z_{\rm c}/\langle Z \rangle$, instead of the absolute value of $Z_{\rm c}$. Another motivation is that, at a given ratio $Z_{\rm c}/\langle Z \rangle$, $P(Z_{\rm c}, t)$ samples a concentration range at a similar distance to the central part of the PDF. 
Thus, in §5.4.6, we tabulate the fitting parameters for the evolution of $P(Z_{\rm c}, t)$ with $Z_{\rm c}/\langle Z \rangle = 10^{-7}$ as functions of the flow Mach number and the pollutant injection scale. For other values of $Z_{\rm c}/\langle Z \rangle$, the timescales, $\tau_{\rm con1}$ and $\tau_{\rm con2}$, can be inferred using the weak power-law scaling given earlier, as long as $Z_{\rm c}/\langle Z \rangle \lsim 10^{-3}$.

![image](f9.eps){width="2\columnwidth"}

### Dependence on the Numerical Resolution

Finally, we examined the effect of numerical resolution. As discussed in §3.2, the timescale for the pollution of the pristine gas to a significant concentration level is mainly determined by the rate at which turbulence stretches the pollutants, and is independent of the amplitude of the molecular or numerical diffusion if it is sufficiently small to allow a scale separation between the pollutant injection scale and the diffusion scale. To verify this expectation, we carried out $256^3$ simulations and compared them with the results from $512^3$ runs. The scale separation mentioned above exists at both resolutions, but the separation is limited for the $256^3$ runs. We drive the flows in exactly the same pattern in the $256^3$ and $512^3$ runs. However, due to the “chaotic" nature of turbulence, the developed turbulent velocity fields at given locations in the two runs are different. This means that, when released to the simulated flows at different resolutions, the pollutant blobs might encounter completely different velocity structures. In this sense, the comparison of our simulation results at two resolutions is somewhat different from the usual convergence check. In Fig. 
(\[resolutiondependence\]), we plot $P(Z_{\rm c}, t)$ with $Z_{\rm c} =10^{-8}$ for scalar fields in category [3]{} from the $256^3$ (open symbols) and $512^3$ (filled symbols) runs, with $M=0.9$ and $M=6.2.$ The filled data points and the solid fitting curves for the $512^3$ runs were already presented in Figs. (\[Mach0.9Four\]) and (\[Mach6.2Four\]). Here the early and late phases of cases ([3]{}C) and ([3]{}D) are connected at $t_{0.5}$. The fitting curves for the $256^3$ data are obtained with the same fitting scenario as in the $512^3$ case. In both the Mach 0.9 and 6.2 flows, the pristine fraction for the two scalar fields with $H_0=0.5$ ([3]{}A) and $H_0=0.1$ ([3]{}B) is smaller in the $256^3$ runs. In fact, the fraction becomes smaller than in the $512^3$ runs almost immediately after the pollutants are released into the flows. This can be explained by considering the action of numerical diffusion on the initial concentration field. Since our initial concentration field consists of pure pollutants ($C=1$) and exactly pristine gas ($C=0$), there exist sharp edges between the pollutant blobs and the pristine flow. Numerical diffusion may operate on the large concentration gradient at the edges, and a fraction of the flow material surrounding the pollutants would be polluted immediately. This results in an instantaneous drop in the pristine fraction. In the $512^3$ runs, the effect was found to be weak, and the initial drop was slight. The drop is significantly larger in the $256^3$ simulations due to the larger numerical diffusion, leading to smaller pristine fractions for scalars [3]{}A and [3]{}B in the $256^3$ runs than in the $512^3$ cases. Recognizing this initial-drop effect, we adjusted the initial pristine fraction, $P_0$, to smaller values when fitting the 256$^3$ data. For scalar fields [3]{}A and [3]{}B, we used the same values of $n$ (i.e., $n=10$ and 3 for $M=0.9$ and 6.2, respectively) as in the corresponding $512^3$ cases. 
With the adjusted values of $P_0$, the best-fit timescales $\tau_{\rm con2}$ for these two cases in the $256^3$ runs differ slightly, only by $\lsim 2\%$, from those used to fit the $512^3$ data. This is the case for both the Mach 0.9 and Mach 6.2 flows. Therefore, except for the initial drop, the numerical resolution does not affect how the pristine fraction evolves for the two fields with $H_0 \ge 0.1$, and one may claim a numerical convergence of the convolution timescale. Note that, in realistic interstellar turbulence, the effect of the initial drop would be minimal because the molecular diffusivity is much smaller than the numerical diffusion in our simulations. Also, the sharp pollutant-flow edges in the simulations are artificial and may not exist in reality. The dependence on numerical resolution is more complicated for cases [3]{}C and [3]{}D with $H_0 = 0.01$ and $H_0 = 0.001,$ respectively. As seen in Fig. (\[resolutiondependence\]), the pristine fraction decay in the $256^3$ runs can be either faster or slower than in the $512^3$ simulations. The stronger initial drop in the $256^3$ runs still exists in the early evolution phases. However, unlike the scalar fields with $H_0 \ge 0.1$, it is not the dominant effect. The velocity field at a given location in the $256^3$ and $512^3$ simulations is different (see above), and thus the same pollutant blob may encounter very different velocity structures in the two runs. As discussed earlier, for cases with small $H_0$, the pollutant size is small, and the turbulent stretching rate around the blobs would show larger variations. Therefore, the stretching rate in the eddy across a small pollutant blob may deviate significantly from the mean value at that scale. 
Since the flow mass polluted by an individual blob scales nonlinearly with the local stretching rate, the overall pollution rate for scalar fields with tiny $H_0$ cannot be predicted by an average stretching rate; instead, it depends on the distribution of the stretching rates over all the pollutant blobs. This is different from the case of $H_0 \gsim 0.1$ fields with large pollutant sizes, where the amplitude of the stretching rate fluctuations is smaller and the stretching rate for each blob is similar and approximately equal to the average value. Thus, the pollution for scalar cases with small $H_0$ ($\lsim 0.01$) is more sensitive to the details of the stretching rates encountered by all the blobs. If the overall stretching rates in the eddies “seen" by the pollutant blobs in the $256^3$ run are relatively higher, the pollution would proceed relatively faster than in the $512^3$ run, and vice versa. It appears that the origin of the observed difference at late times is stochastic and has nothing to do with the numerical diffusion/resolution. The above picture also suggests that the difference may become larger as $H_0$ decreases further below 0.001. When fitting the early phases of cases [3]{}C and [3]{}D in the $256^3$ flows, we decreased $P_0$ to account for the initial drop. In the $M=0.9$ flow, $\tau_{\rm con1}$ for the early phases of these two cases are close to those used to fit the $512^3$ data, and the difference is at the level of $\lsim 5 \%$. The best-fit timescales $\tau_{\rm con2}$ for the late phases of the two scalars are larger (by $\simeq 10\%$) than for the $512^3$ data. In the $M=6.2$ flow, the fitting parameters for scalar [3]{}C in the $256^3$ run are the same as in the corresponding 512$^3$ case, once the initial drop is accounted for. For case [3]{}D in the $M=6.2$ flow, the data points from the $256^3$ and $512^3$ runs almost coincide, and we do not give a separate fit to the $256^3$ data. 
It appears that the resolution dependence of the best-fit parameters is quite weak. In summary, we find that the larger numerical diffusion in the $256^3$ simulations causes a larger initial drop in the pristine fraction. This effect is successfully accounted for by adjusting the value of $P_0$ in our model fits to the simulation results. The effect is expected to be negligibly weak in the interstellar gas, where the molecular diffusivity is tiny. For scalar fields with small $H_0 \le 0.01$, the timescales to fit the simulation data differ by $\lsim 10\%$ at the two resolutions, and numerical convergence may be claimed. The origin of the “random" dependence of the pristine fraction on the resolution for these fields is related to the larger fluctuations of the turbulent stretching rate at smaller scales, and suggests that a precise prediction of the pristine fraction in the case of tiny $H_0$ may require knowledge of the detailed eddy conditions at the initial pollutant locations. We finally point out that numerical convergence would not exist at all if the resolution did not allow a separation between the pollutant injection and the diffusion length scales.

### Summary

We summarize our simulation results in Tables 2, 3, and 4. The parameters listed in the tables are obtained by fitting the fraction $P(10^{-7} \langle Z \rangle, t)$ from simulation data for scalar fields from different categories and in different flows. Here, for each scalar, the threshold $Z_{\rm c}$ is set to $10^{-7}\langle Z \rangle$. The average concentration $\langle Z \rangle$ is equal to the initial pollutant fraction $H_0$ for a double-delta initial condition, eq. (\[initialpdf\]). For scalar fields A, B, C and D, $H_0 =0.5, 0.1, 0.01,$ and 0.001, and we choose $Z_{\rm c}$ to be $5\times 10^{-8}$, $10^{-8}$, $10^{-9}$ and $10^{-10}$, respectively. The choice of a fixed ratio $Z_{\rm c}/\langle Z \rangle$ is more convenient for practical applications. 
The timescales in Tables 2 and 4 are slightly different from those used in the figures in previous subsections, where (except in §5.4.4) the threshold was fixed at $Z_{\rm c} =10^{-8},$ for all values of $H_0$. The numbers in these two tables are in units of the flow dynamical time, $\tau_{\rm dyn}$. The first columns of Tables 2, 3 and 4 show results for scalar fields with the injection scale $L_{\rm p}$ close to the box size or $\simeq 2 L_{\rm f}$. These parameters are measured from scalar cases in category [1]{}. Measuring the parameters using category [2]{} fields with the same $L_{\rm p}$ would give essentially the same results. Table 2 lists the timescale, $\tau_{\rm con1},$ for the early phase of scalar fields with $H_{0} \le 0.01$. In this phase, the pristine fraction evolution is fit by the “discrete" convolution model with $n=1$. For scalar cases in each category ($L_{\rm p}$) and each flow ($M$), we measured $\tau_{\rm con1}$ for the early phases of fields C (with $H_0 = 0.01$) and D (with $H_0=0.001$), and the number given in Table 2 is the average of the measured values for these two fields. As found in §5.4.2, at a given injection scale, $\tau_{\rm con1}$ first increases as $M$ increases from 0.9 to 2-3, and then saturates for larger $M$. The overall increase in $\tau_{\rm con1}$ is about 20% for $M$ in the range from 0.9 to 6.2. This is in general agreement with the trend of the mixing timescale $\tau_{\rm m}$ with $M$ found in PS10. At a given Mach number, $\tau_{\rm con1}$ decreases with decreasing injection length scale $L_{\rm p}$. As $L_{\rm p}$ decreases from $2 L_{\rm f}$ to $L_{\rm f}$, $\tau_{\rm con1}$ is smaller by $\sim 25\%$. The decrease is faster for smaller $L_{\rm p}$, and a further decrease of $L_{\rm p}$ by each factor of 2 reduces $\tau_{\rm con1}$ by $\simeq 35\%$. 
If we express the $L_{\rm p}$ dependence of $\tau_{\rm con1}$ roughly as a power law for $L_{\rm p} \lsim L_{\rm f}$, we have $\tau_{\rm con1} \propto L_{\rm p}^{0.62}$. Tables 3 and 4 give the parameters $n$ and $\tau_{\rm con2}$ as functions of $M$ and $L_{\rm p}$. These are measured for the pristine fraction evolution of scalar fields with $H_0 \ge 0.1$ or the late-time behavior of scalars with smaller $H_0$. For a given category ($L_{\rm p}$) and a given flow ($M$), we choose a single value of $n$, with which the self-convolution model prediction can simultaneously match the simulation data for the two scalar cases with $H_0 \ge 0.1$ and for the late phases of the other two cases with $H_0 \le 0.01$. In Table 3, the parameter $n$ is taken to be $\infty$ for scalar fields with $L_{\rm p} \simeq 2 L_{\rm f}$ in the Mach 0.9 flow, which corresponds to the continuous convolution model (eq. \[pfcconvsolution\]). PSS showed that the continuous model can be used to obtain successful fits to category [1]{} scalars in the $M=0.9$ flow. We find that, for a given $\tau_{\rm con2}$, the pristine fraction predicted by the convolution model barely changes with increasing $n$ once $n$ exceeds $\sim 20$. This means that replacing $\infty$ in Table 3 by any number larger than $20$ would also work for category [1]{} (or [2]{}) fields in the Mach 0.9 flow. From Table 3, we see that $n$ decreases with increasing Mach number and decreasing $L_{\rm p}$. This is due to the higher degree of flow intermittency at larger $M$ (Pan & Scannapieco 2011) or smaller $L_{\rm p}$, which causes broader concentration PDF tails. As described previously, a smaller $n$ indicates a more local PDF convolution. After fixing the parameter $n$ for each $L_{\rm p}$ and $M$, we measure the timescale, $\tau_{\rm con2}$, for scalar cases A and B and the late phases of cases C and D. The measured values for the four cases are not exactly the same, but show slight variations. 
The variations are stronger at larger $M$ or smaller $L_{\rm p}$. We found that the amplitude of the variations is smaller when using a fixed $Z_{\rm c}/\langle Z \rangle$ ratio rather than a fixed threshold $Z_{\rm c}$. This also justifies taking the pristine fraction as a function of $Z_{\rm c}/\langle Z \rangle$. The numbers given in Table 4 are the averages of the best-fit values for the four scalar cases in each category and each flow. The dependence of $\tau_{\rm con2}$ on $M$ and $L_{\rm p}$ is very similar to that of $\tau_{\rm con1}$ shown in Table 2. Again, it increases by about 20% as $M$ increases from $0.9$ to $2-3$, and then stays constant at larger $M$. Like $\tau_{\rm con1}$, the decrease of $\tau_{\rm con2}$ with decreasing $L_{\rm p}$ also appears to be faster at smaller $L_{\rm p}$. It is reduced by 25%, 35%, and 40%, respectively, as $L_{\rm p}$ decreases by each factor of 2 from $2L_{\rm f}$ to $L_{\rm f}/4$. Roughly, $\tau_{\rm con2}$ scales with the injection scale as $\tau_{\rm con2} \propto L_{\rm p}^{0.65}$ for $L_{\rm p} \lsim L_{\rm f}$. We point out that, when measuring the model parameters from all the scalar fields with $H_{0} \le 0.01$, we connected the early and late phases at the time $t_{0.5}$ when the pristine fraction decreases to 0.5. However, as discussed in §5.4.1, one can still use the parameters given in Tables 2, 3, and 4 if a connection at an earlier time, $t_{0.9}$, is preferred in a particular application. Tables 2, 3 and 4 can be used for practical applications. One may first fix the three parameters, $\tau_{\rm con1}$, $n$ and $\tau_{\rm con2}$, by interpolating the tabulated values according to the flow Mach number, $M$, and the pollutant injection scale, $L_{\rm p}$. 
For interpolation purposes, one can replace $n \to \infty$ by, say, $n=20$ for the case with $M=0.9$ and $L_{\rm p} = 2 L_{\rm f}.$ For subsonic flows with $M < 0.9$, we expect the parameters to be very close to those measured here for the $M=0.9$ flow. As shown in PS10 and Pan and Scannapieco (2011), the velocity structures at all orders in the Mach 0.9 flow are essentially the same as in incompressible turbulence (corresponding to the limit $M \to 0$). In the other limit of large $M$, the timescales would not change with $M$ for $M\gsim 6$, since they already saturate at $M=2-3$. The parameter $n$ may keep decreasing as $M$ increases above 6.2, and in that case one may obtain $n$ by extrapolation, with the expectation that $n$ has a minimum value of 1, corresponding to the highest degree of spatial locality in the PDF convolution. For the dependence of the timescales on $L_{\rm p}$, we can use the approximate power-law scalings given above for $L_{\rm p} \lsim L_{\rm f}$. Next, depending on the initial pollutant fraction $H_0$, one may decide whether to start with an early phase using the discrete convolution model. For different values of the ratio $Z_{\rm c}/\langle Z \rangle$, $n$ does not change, and the timescales $\tau_{\rm con1}$ and $\tau_{\rm con2}$ may be obtained from the weak power-law scaling with $Z_{\rm c}$ given in §5.4.4. The scaling applies for $Z_{\rm c}/\langle Z \rangle \lsim 10^{-3}$. For convenience, we have computed fits to $\tau_{\rm con1}$, $n$ and $\tau_{\rm con2},$ which can be used in place of interpolating the tabulated values. 
Because the regime in which $L_{\rm p} \leq L_{\rm f}$ is the most important one for most astrophysical systems, we have focused on this case when computing our $L_{\rm p}$ dependence, and furthermore, because of the statistical noise in our measurements, we have taken an average scaling of $L_{\rm p}^{0.63}$ for both $\tau_{\rm con1}$ and $\tau_{\rm con2}.$ Imposing a strict floor of $n \geq 1$ and the $Z_{\rm c}$ scaling measured above, we find $$\begin{gathered} \tau_{\rm con1} = \left[0.225 - 0.055 \exp(-M^{3/2}/4) \right] \left(\frac{L_{\rm p}}{L_{\rm f}} \right)^{0.63} \times \notag\\ \left(\frac{Z_{\rm c}}{10^{-7} \langle Z \rangle} \right)^{0.015}, \notag \\ \tau_{\rm con2} = \left[0.335 - 0.095 \exp(-M^2/4) \right] \left(\frac{L_{\rm p}}{L_{\rm f}} \right)^{0.63}\hspace{.3cm} \times \notag \\ \left(\frac{Z_{\rm c}}{10^{-7} \langle Z \rangle} \right)^{0.02}, \notag\\ n = 1 + 11 \, \exp(-M/3.5) \left(\frac{L_{\rm p}}{L_{\rm f}}\right)^{1.3}, \hspace{1.7cm} \label{eq:taunfit}\end{gathered}$$ which provides good fits for all Mach numbers and pollution properties, as long as $Z_{\rm c}/\langle Z \rangle \lsim 10^{-3}$ and $L_{\rm p} \leq L_{\rm f}.$ We finally point out that the parameters may have a dependence on how the turbulent flow is driven. For example, in a compressively driven flow at the same Mach number, $n$ may be smaller than measured from our simulations (see §5.4.2).

Application to the Pollution of Primordial Gas in Early Galaxies
================================================================

The Global Pristine Fraction
----------------------------

In previous sections, we have focused on understanding the fundamental physics of the pollution of pristine flow material by turbulent mixing. We now describe how our results can be applied to investigate the pollution of primordial gas in the interstellar media of high-redshift galaxies. 
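For applications like those discussed in this section, the fits of eq. (\[eq:taunfit\]) can be transcribed into a short routine; a minimal sketch, with timescales returned in units of $\tau_{\rm dyn}$:

```python
import numpy as np

def convolution_parameters(M, Lp_over_Lf, Zc_over_Zmean=1e-7):
    """Fits of eq. (taunfit); valid for Zc/<Z> <~ 1e-3 and Lp <= Lf.
    Returns (tau_con1, tau_con2, n), timescales in units of tau_dyn."""
    zfac = Zc_over_Zmean / 1e-7
    lfac = Lp_over_Lf**0.63
    tau_con1 = (0.225 - 0.055 * np.exp(-M**1.5 / 4.0)) * lfac * zfac**0.015
    tau_con2 = (0.335 - 0.095 * np.exp(-M**2 / 4.0)) * lfac * zfac**0.02
    n = 1.0 + 11.0 * np.exp(-M / 3.5) * Lp_over_Lf**1.3
    return tau_con1, tau_con2, n
```

For example, $M=6.2$ and $L_{\rm p} = L_{\rm f}$ give $\tau_{\rm con1}\approx 0.22$, $\tau_{\rm con2}\approx 0.34$ and $n\approx 2.9$, in line with the values used for the Mach 6.2 fits above.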
In this section, we discuss using our results to obtain a qualitative estimate of the pollution timescale in early galaxies, similar to the formalism of Tinsley (1980) in which the evolution within a galaxy is reduced to a few general parameters. A more accurate approach based on large-eddy simulations and subgrid modeling will be presented in the next section. To study the mixing of heavy elements in interstellar turbulence, we need to specify the source term in the PDF equation (\[pdfeq\]), which can be evaluated by considering how the pollutants, including fresh metals from supernova explosions and low-metallicity or pristine infall gas, affect the metallicity PDF (see §2.2). If the supernova rate per unit volume in a given region of a galaxy is $\dot{n}_{\rm SN}({\bf x}, t)$, the source term due to new metals from supernovae would be ${\dot{n}_{\rm SN} m_{\rm ej}} \left[\delta(Z-Z_{\rm ej}) -\Phi(Z; {\bf x}, t)\right] $ where it is assumed that, on average, each supernova produces an ejecta mass of $m_{\rm ej}$ with a mass fraction of metals $Z_{\rm ej}$, and that Rayleigh-Taylor and Kelvin-Helmholtz instabilities arising during the explosion mix the fresh metals with the envelope material. In reality, the source term by supernovae may have a finite width instead of being a delta function, because the ejecta mass and the heavy element yield vary with the mass of the progenitor star. One can refine the form of the source term by using nucleosynthesis results for the ejecta mass and metal yield as functions of the progenitor mass (Maeder 1992; Woosley & Weaver 1995; Heger & Woosley 2002) and accounting for the initial stellar mass function. The $-\Phi$ term corresponds to the replacement of the existing PDF in a fraction of the interstellar gas by $\delta(Z-Z_{\rm ej})$ due to the release of new metals from supernovae, and it guarantees the source term conserves the total probability. 
During the formation of an early galaxy, there may exist an infall of primordial gas that continuously flows from the halo into the galaxy. This provides another source term, $\dot{m}_{\rm I} [\delta (Z) -\Phi(Z; {\bf x}, t)]$, where $\dot{m}_{\rm I} ({\bf x}, t)$ denotes the local infall rate. The infall rate should be taken to be zero except at the boundary, where the pristine gas enters the galaxy. Again, the $-\Phi$ term ensures the conservation of the total probability. Clearly, new metals from supernovae and the pristine infall gas force spikes at high and low concentration levels in the PDF, respectively. We define a global pristine fraction as $P_{\rm g} (Z_{\rm c}, t) = \int_0^{Z_{\rm c}} dZ \int_{V} dx^3 \langle \rho({\bf x}, t) \rangle \Phi(Z; {\bf x}, t)/M_{\rm g}$, where $V$ is the total volume of the galaxy and $M_{\rm g} = \int_{V} \langle \rho \rangle dx^3$ is the total mass of the interstellar gas. An equation for $P_{\rm g}$ can be derived by performing a double integration of eq. (\[pdfeq\]) over space and concentration. The advection term vanishes when integrated over space, and the double integral of the supernova source term gives $- \frac{\dot{N}_{\rm SN} m_{\rm ej} }{M_{\rm g}} P_{\rm g}(Z_{\rm c}, t)$, where $\dot{N}_{\rm SN}$ is the total supernova rate in the galaxy. Clearly, the contribution from supernovae is always negative. On the other hand, the infall of primordial gas contributes a positive term $\frac {\dot{M}_{\rm I} }{M_{\rm g}} [1 -P_{\rm g}(Z_{\rm c}, t )]$, where $ {\dot{M}_{\rm I} }$ is the global infall rate. Using the self-convolution model for the diffusivity term in the $P_{\rm g}$ equation, we obtain $$\frac{dP_{\rm g}}{dt} = -\frac{n}{\tau_{\rm con}} P_{\rm g}(1-P_{\rm g}^{1/n})- \frac{\dot{N}_{\rm SN}m_{\rm ej}}{M_{\rm g}} P_{\rm g} + \frac {\dot{M}_{\rm I}} {M_{\rm g}} (1-P_{\rm g}). \label{eq:pfgalaxy}$$ A similar equation with $n=1$ and without the infall term was first given in Pan and Scalo (2007). 
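Eq. (\[eq:pfgalaxy\]) is a single ODE and is straightforward to integrate numerically. The sketch below uses forward-Euler stepping with illustrative, hypothetical rates for the supernova and infall terms (all rates in units of $\tau_{\rm dyn}^{-1}$, times in $\tau_{\rm dyn}$); with persistent infall, $P_{\rm g}$ relaxes to a quasi-steady value set by the competition of the mixing, supernova, and infall terms:

```python
def evolve_global_pristine_fraction(P0, n, tau_con, r_sn, r_in,
                                    t_end, dt=1e-3):
    """Forward-Euler integration of eq. (pfgalaxy).

    r_sn = dot(N_SN) * m_ej / M_g : supernova sink rate,
    r_in = dot(M_I) / M_g         : infall source rate.
    Rates and times are in units of 1/tau_dyn and tau_dyn."""
    P = P0
    for _ in range(int(round(t_end / dt))):
        dPdt = (-(n / tau_con) * P * (1.0 - P**(1.0 / n))
                - r_sn * P + r_in * (1.0 - P))
        P += dt * dPdt
    return P

# Hypothetical rates, for illustration only: moderate supernova
# enrichment (r_sn = 0.1) balanced against pristine infall (r_in = 0.05).
P_late = evolve_global_pristine_fraction(
    P0=0.999, n=3.0, tau_con=0.3, r_sn=0.1, r_in=0.05, t_end=20.0)
```

With these illustrative rates the solution settles to a small nonzero quasi-steady pristine fraction, as expected from the balance between the infall source and the mixing and supernova sinks.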
When writing eq. (\[eq:pfgalaxy\]), we have made an implicit assumption of statistical homogeneity, which may break down for several reasons. First, the prediction of the self-convolution model for the pristine fraction evolution is tested and verified only in statistically homogeneous flows, and it may not be valid for a system with large-scale inhomogeneities. Second, eq. (\[eq:pfgalaxy\]) adopts single values for the parameters $n$ and $\tau_{\rm con}$, equivalent to assuming similar turbulence conditions everywhere in the interstellar gas. Finally, the first (mixing) term on the r.h.s. is nonlinear in $P_{\rm g}$, and this nonlinearity would affect the prediction accuracy if, for example, the star formation and hence the metallicity have a large-scale gradient. The parameters in eq. (\[eq:pfgalaxy\]) should be viewed as effective averages over the turbulence and metallicity conditions of the entire galaxy. These considerations suggest that the solution of eq. (\[eq:pfgalaxy\]) only provides a rough estimate for the global pristine fraction, which can be improved by accounting for realistic complexities. Nevertheless, the equation is a useful guideline for the study of primordial gas pollution in early galaxies. The turbulence conditions in the interstellar media of early galaxies are essentially unknown, and thus the parameters in eq. (\[eq:pfgalaxy\]) cannot be estimated with certainty. Here we will make various assumptions for the turbulence parameters, and discuss how the pollution of the pristine gas proceeds under different conditions. Future observations will help constrain the parameter space and give a clearer picture of the mixing process in high-redshift galaxies. 
The Pollution Timescale
-----------------------

A crucial parameter for mixing in the interstellar gas is the driving length scale of the interstellar turbulence, $L_{\rm f}.$ If the turbulence is driven at the largest scales, e.g., by the collapse of the baryonic matter into the potential well of the dark matter halo, then $L_{\rm f}$ is close to the size of the galaxy, $L_{\rm G}.$ In this case, the primary energy source for turbulence is the gravitational energy, and the driving force of the interstellar turbulence is associated with the source term in eq. (\[eq:pfgalaxy\]) for the pristine infall. The driving scale may also remain close to $L_{\rm G}$ at late times if the infall from the halo is persistent during the galaxy evolution. On the other hand, if the primary energy source for interstellar turbulence is the explosion energy of supernovae, then $L_{\rm f}$ is likely on the order of the typical size of supernova remnants, $L_{\rm SNR}.$ In general, we expect $L_{\rm SNR} \lsim L_{\rm f} \lsim L_{\rm G},$ and, depending on how $L_{\rm f}$ compares with $L_{\rm G}$, the pollution will proceed in qualitatively different ways. We first consider the case where the turbulent driving scale, $L_{\rm f}$, is close to the galaxy size, $L_{\rm G}$. With $L_{\rm f} \simeq L_{\rm G}$, we may roughly think of the entire interstellar medium as corresponding to our simulation box, and the dynamical time, $\tau_{\rm dyn}$, may be calculated by dividing $L_{\rm G}$ by the rms turbulent velocity. As new metals from supernovae are released to the interstellar turbulence, the supernova source term in eq. (\[eq:pfgalaxy\]) reduces the pristine fraction, $P_{\rm g}$. We can start applying the self-convolution model in eq. (\[eq:pfgalaxy\]) to calculate $P_{\rm g}$, once the average metallicity exceeds the threshold value, $Z_{\rm c}$. The parameters $n$ and $\tau_{\rm con}$ can be estimated based on our simulation results tabulated in §5.4.6. 
With more supernovae exploding, the pollution process would become faster due to the increased amount of pollutants. Also, the average pollutant separation and hence the injection scale, $L_{\rm p}$, will decrease with the total number of supernovae, $N_{\rm SN}(t).$ Assuming a random supernova distribution, $L_{\rm p}$ scales like $N_{\rm SN}^{-1/3}$. The convolution timescale $\tau_{\rm con}$ would thus decrease with time, according to the power-law scaling of $\tau_{\rm con}$ with $L_{\rm p}$, resulting in a faster pollution rate. A subtle and minor effect is that the increase of the average metallicity reduces the threshold-to-average ratio, $Z_{\rm c}/\langle Z \rangle$, leading to a slight additional decrease in $\tau_{\rm con}$. This effect may be accounted for using the $Z_{\rm c}$-dependence of $\tau_{\rm con}$ given in §5.4.4. If the infall of pristine gas is persistent, the infall term in eq. (\[eq:pfgalaxy\]) provides a continuous source for the pristine fraction, and there may exist a quasi-steady state for $P_{\rm g}$ (see Pan and Scalo 2007), which is controlled by three timescales: the convolution timescale, $\tau_{\rm con}$, the timescale for supernova sources, $M_{\rm g}/(\dot{N}_{\rm SN} m_{\rm ej})$, and the timescale for the mass accretion by infall, $M_{\rm g}/\dot{M}_{\rm I}$. The estimate of $P_{\rm g}$ is more complicated if the driving scale, $L_{\rm f}$, is much smaller than $L_{\rm G}$. If $L_{\rm f} \ll L_{\rm G}$, the correlation length scale of the turbulent velocity field is much smaller than the size of the entire interstellar medium, and one may view the interstellar medium as a collection of many “independent" turbulent regions of size $\sim L_{\rm f}$. The pollution process in each region would be similar to that in our simulation box, with a timescale determined by the local stretching/convolution timescale, $\sim \tau_{\rm dyn}$ ($\equiv L_{\rm f}/v_{\rm rms}$).
However, the pollution in the entire interstellar medium may not be simply described by a self-convolution model or eq. (\[eq:pfgalaxy\]) with a local convolution timescale. This is because the situation in individual regions of size $L_{\rm f}$ may be completely different. For example, the regions that had supernova explosions at early times may have already been significantly polluted, while the pollution process may not yet have started in the regions that had not experienced supernovae or received any heavy elements. Thus the mixing/pollution timescale over the entire galaxy may depend on the large-scale turbulent transport of pollutants between the “independent" regions. Assuming a random walk model for turbulent transport at scales $\gg L_{\rm f}$, the transport timescale at the galactic scale may be roughly estimated as $\tau_{\rm trans} \equiv L_{\rm G}^2/(L_{\rm f} v_{\rm rms})$, which is much larger than the local stretching timescale $L_{\rm f}/v_{\rm rms}$ and the timescale $L_{\rm G}/v_{\rm rms}$. If $L_{\rm f} \ll L_{\rm G}$, another timescale of interest is $\tau_{\rm SN}$, defined as the time needed for the average separation between the supernova remnant locations to decrease below $\simeq L_{\rm f}$. In other words, $\tau_{\rm SN}$ represents the time for supernovae to populate the interstellar medium at a level of about one per region of size $L_{\rm f}$. If the supernovae are randomly distributed, $\tau_{\rm SN}$ can be estimated from $N_{\rm SN}(\tau_{\rm SN}) \simeq (L_{\rm G}/L_{\rm f})^3$, where $N_{\rm SN}(t)$ is the total number of supernovae that have exploded before time $t$. At $t \ll \tau_{\rm SN}$, only a small number of supernovae have occurred, and the supernova sources would be statistically inhomogeneous at the scale $L_{\rm f}$. In that case, eq. (\[eq:pfgalaxy\]) is not directly applicable as it implicitly assumes statistical homogeneity (see above).
Thus, the pristine fraction evolution in the $L_{\rm f} \ll L_{\rm G}$ case depends on a comparison of three timescales, $\tau_{\rm dyn}$, $\tau_{\rm SN}$ and $\tau_{\rm trans}$. From their definitions, $\tau_{\rm dyn} \equiv L_{\rm f}/v_{\rm rms} \ll \tau_{\rm trans} \equiv L_{\rm G}^2/(L_{\rm f} v_{\rm rms}),$ and the magnitude of $\tau_{\rm SN}$ relative to these two timescales is crucial for how the pollution proceeds. If the star formation or supernova rate is so high that $\tau_{\rm SN} \ll \tau_{\rm dyn}$, the supernovae fill the interstellar medium quickly, and their spatial distribution would appear more or less homogeneous at the scale $L_{\rm f}$ before each region of size $L_{\rm f}$ is significantly polluted. This suggests that the pollution in all the “independent" regions would proceed at roughly the same pace, and the pristine fraction evolution in each region may approximately reflect the global pristine fraction. Therefore, at $t \gsim \tau_{\rm SN}$, one may apply eq. (\[eq:pfgalaxy\]) to estimate the global pristine fraction using $n$ and $\tau_{\rm con}$ corresponding to the physical conditions at the scale $L_{\rm f}$. In this case, the timescale for the decay of $P_{\rm g}$ would be $\simeq \tau_{\rm dyn}$. If $\tau_{\rm dyn} \ll \tau_{\rm SN} \ll \tau_{\rm trans}$, the mixing of fresh metals from a supernova with the surrounding region of size $L_{\rm f}$ is fast, with a relatively short timescale ($\sim \tau_{\rm dyn}$), and the interstellar medium would be completely polluted once, on average, each region of size $L_{\rm f}$ in the galaxy has had one supernova explosion. This is expected to occur at time $t \simeq \tau_{\rm SN}$, and thus the pollution timescale in the entire galaxy is on the order of $\tau_{\rm SN}$.
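The regime selection described above can be summarized as a simple decision rule. The sketch below is ours (the names and return convention are illustrative); it encodes $\tau_{\rm dyn} = L_{\rm f}/v_{\rm rms}$ and $\tau_{\rm trans} = L_{\rm G}^2/(L_{\rm f} v_{\rm rms})$ as defined in the text:

```python
def pollution_regime(L_f, L_G, v_rms, tau_SN):
    """Classify how pollution proceeds when L_f << L_G, by comparing
    tau_SN with the local eddy time and the galaxy-scale transport time.
    Returns a regime label and the estimated pollution timescale."""
    tau_dyn = L_f / v_rms                 # local stretching/convolution time
    tau_trans = L_G ** 2 / (L_f * v_rms)  # random-walk transport over L_G
    if tau_SN < tau_dyn:
        # supernovae populate all L_f-sized regions before any is mixed
        return "homogeneous", tau_dyn
    elif tau_SN < tau_trans:
        # local mixing is fast; waiting for supernovae sets the pace
        return "SN-populated", tau_SN
    else:
        # large-scale transport delivers metals before supernovae do
        return "transport-limited", tau_trans
```

For instance, with $L_{\rm f}/L_{\rm G} = 0.1$ the transport time exceeds the eddy time by a factor $(L_{\rm G}/L_{\rm f})^2 = 100$, so a wide range of supernova rates falls into the intermediate, supernova-limited regime.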
Finally, if the star formation rate is very low and $\tau_{\rm SN}$ is significantly larger than $\tau_{\rm trans}$, then the turbulent transport at large scales ($\gg L_{\rm f}$) plays a crucial role in the pollution process. The delivery of heavy elements by the large-scale transport provides the entire galaxy with pollutants before the metal deposit by supernova events covers most of the interstellar medium. The pollution in the galaxy would be completed at $\tau_{\rm trans}$. In this case, modeling the effect of the large-scale turbulent transport is essential. One interesting limiting case is when the interstellar turbulence is completely driven by supernova explosions, and turbulent motions are weak outside the influence radius $L_{\rm SNR}$ of each supernova. In that case, we have $L_{\rm f} \simeq L_{\rm SNR}$ in regions affected by supernovae, and the transport of metals in between supernova locations would be slow with a large timescale $\tau_{\rm trans}$. From the discussion above, the pollution timescale would be determined by the maximum of the two timescales, $\tau_{\rm dyn}$ and $\tau_{\rm SN}$. We note that a quantitatively accurate prediction for this case may need to carefully account for the correlation between the metal injection and the turbulence driving force. While eq. (\[eq:pfgalaxy\]) provides a rough estimate for the pollution of primordial gas in early galaxies, perhaps the best tool for a quantitative prediction is a large-scale numerical simulation that can include complexities such as large-scale velocity and metallicity inhomogeneities of the interstellar medium, and the effect of large-scale transport. So far we have ignored the advection term in the PDF equation (\[pdfeq\]), which is responsible for the transport of the local PDF by the velocity field. The transport effect on the primordial gas pollution is substantial under certain circumstances, as seen earlier in the pollution timescale estimate. 
In the next section, we will establish a formulation for large-scale simulations of the pristine gas pollution in early galaxies. In the context of large-eddy simulations, the advection term corresponds to the local PDF or pristine fraction exchange between neighboring computational cells due to both the large-scale velocity and the subgrid turbulent motions. Modeling the advection term in the PDF equation is crucial in these simulations, and we will adopt a commonly-used subgrid closure for the transport effect by subgrid turbulent motions.

Large-Eddy Simulations and Subgrid Modeling
===========================================

The complexities present in a realistic high-redshift galaxy can only be dealt with in detail through direct numerical simulations of the pollution of pristine gas in the interstellar medium. However, it is prohibitively expensive for such simulations to resolve the scale at which homogenization by molecular diffusivity occurs in interstellar turbulence. Limited resolution implies significant numerical diffusion, which causes artificial mixing, erasing any metallicity fluctuations that would exist below the size of a computational cell. In fact, due to the vast range of scales existing in the problem, resolving any inertial-range scales at all is extremely challenging (Scannapieco & Bruggen 2010). This results in an underestimate of the degree of metallicity fluctuations/inhomogeneity in the interstellar gas. Nevertheless, large-scale simulations can still provide useful estimates for the low-order metallicity statistics, such as the metallicity variance, if they manage to resolve a small portion of the inertial range, since the majority of the scalar fluctuation power is at large scales. On the other hand, the problem is much more severe for the pollution of the primordial gas, which corresponds to high-order statistics of the metallicity fluctuations.
Since the threshold metallicity $Z_{\rm c}$ for the transition to Pop II star formation is tiny, a computational cell would essentially lose all the pristine gas once it is polluted by even a small amount of heavy elements. A significant underestimate in the pristine mass fraction is therefore expected in simulations that do not resolve a considerable inertial range. Here we propose to approach the problem using large-eddy simulations (LES) that keep track of the concentration fluctuations at subgrid scales. In such simulations, the flow at large scales is directly computed, while the effects of turbulent motions at subgrid scales are modeled. The existence of scale invariance in the inertial range of turbulent flows is crucial for subgrid modeling (Meneveau & Katz 2000), which justifies using the resolved flow structures to infer the feedback effect of small-scale fluctuations. In this section, we outline an LES scenario for simulating the pollution of pristine gas in early galaxies. In §7.1, we first derive the LES equations for the interstellar turbulent flow and introduce subgrid models to close the equations, taking the so-called one-equation subgrid model (e.g., Lilly 1966), which evolves the turbulent kinetic energy at subgrid scales, as an illustrative example. In §7.2, we develop an LES formulation for the pristine fraction based on an equation for the local concentration PDF filtered at the resolution scale, using the self-convolution PDF models. The model parameters can be determined with the simulation results summarized in §5.4.6. By retaining the subgrid concentration fluctuations, the model provides a remedy for the over-pollution caused by numerical diffusion, and is expected to significantly improve the predictive power of large-scale simulations for the primordial gas pollution in high-redshift galaxies.
Subgrid Modeling of the Interstellar Turbulent Flow
---------------------------------------------------

We start by introducing the basic filtering procedure used to derive the governing flow equations at resolved scales in LES. The procedure employs a low-pass filter function, $G ({\bf x}-{\bf x}')$, which eliminates fluctuations below the resolution scale of the simulation grid, $\Delta$. Examples of the filtering function are a window function of width ${\Delta}$ or a Gaussian function with variance $\Delta^2$. For any flow variable, $A({\bf x}, t)$, the filtered quantity $\overline{A}({\bf x}, t)$ is defined as, $$\overline{A}({\bf x}, t) = \int_V A({\bf x}', t) G ({\bf x}-{\bf x}') d {x'}^3, \label{filter}$$ and it represents the variable at the resolved scales. From eq. (\[filter\]), we have, $$\overline{\left(\frac {\partial A} {\partial t}\right)} = \frac {\partial \overline{A} } {\partial t}, \hspace{5mm} \overline{\left(\frac {\partial A} {\partial x_i}\right)} = \frac {\partial \overline{A}} {\partial x_i} \label{filteredvariables}$$ where integration by parts is used to obtain the second equality. For compressible flows, it is more convenient to use the Favre filtering (e.g., Speziale et al. 1988, Moin et al. 1991, Erlebacher et al. 1992), defined as, $$\widetilde{A}({\bf x}, t) = \frac { \overline{\rho A}}{\overline{\rho}}, \label{Favrefilter}$$ where a density-weighted factor is included.
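As a minimal sketch of the filtering operation in eq. (\[filter\]), assuming a Gaussian kernel and a periodic one-dimensional field so that the convolution can be evaluated spectrally:

```python
import numpy as np

def gaussian_filter_1d(field, delta, dx):
    """Low-pass filter a periodic 1-D field with a Gaussian kernel of
    width delta, using the convolution theorem (FFT)."""
    k = 2.0 * np.pi * np.fft.fftfreq(field.size, d=dx)
    G_hat = np.exp(-0.5 * (k * delta) ** 2)  # Fourier image of the kernel
    return np.real(np.fft.ifft(np.fft.fft(field) * G_hat))
```

A mode of wavenumber $k$ is damped by $\exp(-k^2\Delta^2/2)$, so structure below the filter scale is removed while large scales pass through essentially unchanged; the same spectral argument shows that the filter commutes with spatial derivatives, as in eq. (\[filteredvariables\]).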
Applying the filtering procedure to the continuity and momentum equations gives, $$\frac{ \partial \overline{\rho}} {\partial t} + \frac{\partial}{\partial x_i} (\overline{\rho} \hspace{0.5mm} \widetilde{v}_i) = 0, \label{FiilteredContinuity}$$ and, $$\frac{ \partial (\overline{\rho} \hspace{0.5mm} \widetilde{v_i}) } {\partial t} + \frac {\partial ( \overline{\rho} \hspace{0.5mm} \widetilde{v_i} \hspace{0.5mm} \widetilde{v_j} ) } {\partial x_j} = - \frac {\partial (\overline{\rho} \hspace{0.5mm} \tau_{ij})} {\partial x_j} - \frac {\partial \overline{p} } {\partial x_i} + \frac {\partial \overline{\sigma}_{ij} } {\partial x_j} + \overline{\rho} \hspace{0.5mm} {\widetilde f_i}, \label{FilteredMomentum}$$ where $\tau_{ij}$, called the subgrid-scale stress tensor, is defined as $$\tau_{ij} = \widetilde{v_i v_j} - \widetilde{v_i} \hspace{0.5mm} \widetilde{v_j}. \label{SGSstress}$$ This tensor cannot be evaluated exactly because of the closure problem, and developing an adequate model for it is essential for large-eddy simulations. The filtered pressure can be written as $\overline {p} = \overline{\rho} R \widetilde{T} $, where $T$ is the gas temperature, and $R = k_{\rm B}/(\mu_{\rm H} m_{\rm H})$ is the specific gas constant with $k_{\rm B}$, $\mu_{\rm H}$ and $m_{\rm H}$ the Boltzmann constant, the molecular weight and the atomic mass unit, respectively. The viscous stress tensor, $\sigma_{ij}$, in eq. (\[FilteredMomentum\]) is given by $\sigma_{ij} = 2 \rho \nu (S_{ij} - \frac{1}{3} \delta_{ij} S_{kk})$ where $\nu$ is the kinematic viscosity and $S_{ij} = \frac{1}{2}(\partial_i v_j + \partial_j v_i)$ is the rate of strain tensor. We approximate the filtered viscous stress by $\overline{\sigma}_{ij} = 2 \overline{\rho} \nu (\widetilde{S}_{ij} - \frac{1}{3} \widetilde{S}_{kk} \delta_{ij} )$, where $\widetilde{S}_{ij} \equiv \frac{1}{2}(\partial_i \widetilde{v}_j + \partial_j \widetilde{v_i})$ is the strain tensor at the resolution scale.
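The closure problem can be made concrete by computing $\tau$ explicitly when the full field is known. The sketch below, for a one-dimensional periodic field and a top-hat filter (both our choices for illustration), evaluates the Favre-filtered subgrid stress of eq. (\[SGSstress\]); since it is a density-weighted variance under a positive kernel, it is non-negative:

```python
import numpy as np

def box_filter(f, width):
    """Periodic top-hat (moving-average) filter of the given width in cells."""
    kernel = np.ones(width) / width
    fp = np.concatenate([f[-width:], f, f[:width]])  # periodic padding
    return np.convolve(fp, kernel, mode="same")[width:width + f.size]

def subgrid_stress(rho, v, filt):
    """tau = tilde(v v) - tilde(v)^2 for a 1-D field, as in eq. (SGSstress),
    with tilde the Favre (density-weighted) filter built from filt."""
    rho_bar = filt(rho)
    v_tilde = filt(rho * v) / rho_bar       # Favre-filtered velocity
    vv_tilde = filt(rho * v * v) / rho_bar  # Favre filter of v*v
    return vv_tilde - v_tilde ** 2
```

In an actual LES only the filtered fields are available, so $\widetilde{v_i v_j}$ cannot be formed and $\tau_{ij}$ must be modeled; this sketch is only a diagnostic one can apply to fully resolved data.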
For interstellar turbulence, various sources contribute to the driving force, $f_i$, in the momentum equation, including, e.g., gravity and the acceleration by supernovae. To evaluate the pressure term in eq. (\[FilteredMomentum\]), one needs to consider the filtered energy or temperature equation, which reads (e.g., Garnier et al. 2009), $$\begin{gathered} C_{\rm V}\frac{ \partial (\overline{\rho} \hspace{0.5mm} \widetilde{T}) } {\partial t} + C_{\rm V} \frac {\partial ( \overline{\rho} \hspace{0.5mm} \widetilde{T} \hspace{0.5mm} \widetilde{v_i} ) } {\partial x_i} = - \overline{\rho} R \widetilde{T} \hspace{0.5mm} \frac{\partial \widetilde{v}_i}{\partial x_i} + \widetilde{S}_{ij} \hspace{0.5mm} \overline{\sigma}_{ij} + \overline{\rho} \hspace{0.5mm} \widetilde{\Gamma} - \overline{\rho} \hspace{0.5mm} \widetilde{\Lambda} \hspace{2.5cm} \notag\\ \hspace{2.8cm} - \left(\overline{p \frac {\partial v_i} {\partial x_i} } - \overline{p} \hspace{0.5mm} \frac{\partial \widetilde{v}_i}{\partial x_i}\right) + \left (\overline{S_{ij} \sigma_{ij}} - \widetilde{S}_{ij} \hspace{0.5mm} \overline{\sigma}_{ij} \right) \hspace{2cm} \notag\\ \hspace{1.4cm} + \frac {\partial} {\partial x_i} \left(\overline {\kappa} \frac{\partial \widetilde{T}} {\partial x_i} \right) - C_{\rm V} \frac {\partial \left( \overline{\rho} q_i \right) } {\partial x_i} , \label{FilteredTemperature} \end{gathered}$$ where $C_{\rm V}$ is the heat capacity of the flow material, equal to $3 R/ 2$ for a monoatomic gas. The last two terms, $\Gamma$ and $\Lambda,$ on the first line are the heating rate by external sources and the cooling rate by radiation, respectively. The two terms in the second line of equation (\[FilteredTemperature\]) correspond to the effects of $pdV$ work and heating by viscous dissipation at subgrid scales, which we model later in this section. 
The first term on the third line represents thermal conduction with $\kappa$ the thermal conductivity, where we assumed $\overline{\kappa \hspace{0.5mm} \partial_i T} \simeq \overline{\kappa} \hspace{0.5mm} \partial_i \widetilde{T}$. The last term in eq. (\[FilteredTemperature\]) is the heat transport by subgrid turbulent motions, and the temperature flux $q_i$ is defined as, $$q_i= \widetilde{T v_i} - \widetilde{T} \hspace{0.5mm} \widetilde{v}_i,$$ which will be modeled later. A variety of subgrid models have been developed to approximate $\tau_{ij}$ (see reviews by Lesieur and Metais (1996) and Meneveau and Katz (2000) for LES of incompressible flows). A major class of models adopts an eddy-viscosity assumption, relating the deviatoric part of $\tau_{ij}$ to the resolved strain tensor, $$\tau_{ij} - \frac{1}{3} \tau_{kk} \delta_{ij} = - 2 \nu_{\rm t} \left(\widetilde{S}_{ij} - \frac{1}{3} \widetilde{S}_{kk} \delta_{ij}\right), \label{eddyviscosity}$$ where the eddy viscosity, $\nu_{\rm t},$ is usually constructed as the product of a length scale ($\simeq \Delta$) and a velocity scale characteristic of the subgrid turbulent motions. In an LES for interstellar turbulence, $\nu_{\rm t}$ is typically much larger than the kinematic viscosity $\nu$, and the viscous stress term in eq. (\[FilteredMomentum\]) may be neglected. For incompressible flows, $\widetilde{S}_{kk}$ in eq. (\[eddyviscosity\]) vanishes because $\partial_i \widetilde{v}_i =0$, and the isotropic part, $\frac{1}{3} \tau_{kk} \delta_{ij}$, of the subgrid stress can be absorbed in the pressure term. Therefore, one obtains a complete subgrid model for LES of incompressible flows by setting $\tau_{ij} = -2 \nu_{\rm t} \widetilde{S}_{ij}$. On the other hand, in compressible flows, the isotropic part must be modeled explicitly. This part behaves like a pressure term, and is sometimes named “turbulent pressure".
Note that $\tau_{kk} = \widetilde{v_k v_k} - \widetilde{v}_k \widetilde{v}_k = 2 K$, with $K$ the turbulent kinetic energy per unit mass at subgrid scales. Similar to the eddy-viscosity model for the subgrid stress, one may adopt an eddy-diffusivity assumption for the temperature flux, $q_i$, caused by subgrid turbulent motions, $$q_i = - {\alpha_{\rm t}} \frac{\partial \widetilde{T}} {\partial x_i}, \label{TFlux}$$ where the “eddy conductivity" $\alpha_{\rm t}$ is of the same order as $\nu_{\rm t}$, and is usually parameterized by a subgrid Prandtl number, $\alpha_{\rm t} = \nu_{\rm t}/Pr_{\rm t},$ where $Pr_{\rm t}$ is typically taken to be $\simeq 0.7$ (e.g., Edison 1985, Erlebacher et al. 1992, Jaberi et al. 1999). Eddy-viscosity models differ in how $\nu_{\rm t}$ is evaluated. In the Smagorinsky (1963) model, $\nu_{\rm t}$ is calculated by $(C_{\rm s} \Delta)^2 |\widetilde{S}|$ with $|\widetilde{S}| = (2\widetilde{S}_{ij} \widetilde{S}_{ij})^{1/2}$, which essentially assumes the amplitude of the subgrid velocity fluctuations goes like $\propto \Delta |\widetilde{S}|$. The Smagorinsky model has also been used in the LES of compressible flows (e.g., Moin et al. 1991, Erlebacher et al. 1992, Vreman et al. 1997). For compressible flows, Yoshizawa (1986) proposed to set $\tau_{kk} \equiv 2K =2 C_{\rm I} \Delta^2 |\widetilde{S}|^2$ for the isotropic part of the subgrid stress, which appears to underestimate the subgrid kinetic energy (Park & Mahesh 2007). A variant of the eddy-viscosity model is the so-called one-equation model, where an equation for the subgrid kinetic energy, $K$, is derived, modeled and solved (e.g., Lilly 1966, Schumann 1975, Moeng 1984, Ghosal et al. 1995, Menon & Kim 1996; for one-equation models of compressible flows, see, e.g., Schmidt et al. 2006, Park & Mahesh 2007, Genin & Menon 2010, and Chai & Mahesh 2012). Using the solved subgrid kinetic energy, the eddy-viscosity is then estimated by, $$\nu_{\rm t}= C_{\nu} \Delta \sqrt{2K}. 
\label{nut}$$ In this paper, we will consider the one-equation model primarily as an example to illustrate the construction of an LES for the pollution of primordial gas in interstellar turbulence. For a compressible flow, the subgrid kinetic energy equation is given by, $$\begin{gathered} {\displaystyle \frac{ \partial (\overline{\rho} \hspace{0.5mm} K) } {\partial t} + \frac {\partial ( \overline{\rho} K \widetilde{v_i} ) } {\partial x_i} = - \overline{\rho} \widetilde{S}_{ij} \tau_{ij} + \overline{\rho} \left(\widetilde{v_i f_i} - \widetilde{v}_i \hspace{0.5mm} \widetilde{f}_i \right)} \hspace{2cm} \notag \\ \hspace{0.5cm} + \left( \overline{p \frac {\partial v_i}{\partial x_i}} - \overline{p} \frac{ \partial \widetilde{v}_i} { \partial x_i} \right) - \left( \overline{S_{ij} \sigma_{ij}} - \widetilde{S}_{ij} \overline{\sigma}_{ij} \right) \notag \\ \hspace{1.4cm} + \frac{\partial}{\partial x_i} \Bigl[ \overline{\rho} \hspace{0.5mm} \widetilde{v}_j \tau_{ij} + (\overline{ v_j \sigma_{ij} } - \widetilde{v}_j \hspace{0.5mm} \overline{\sigma}_{ij} ) \hspace{2.2cm} \notag \\ \hspace{1.8cm} - \frac{1}{2} \overline{\rho} \hspace{0.5mm} ( \widetilde{v_jv_j v_i} - \widetilde{v_j v_j} \hspace{0.5mm} \widetilde{v}_i) - \left(\overline{p v_i} - \overline{p} \hspace{0.5mm} \tilde{v}_i \right) \Bigr] , \label{subgridkinetic}\end{gathered}$$ where the first term on the r.h.s.  represents the production of subgrid kinetic energy by the cascade from resolved scales. The two terms on the second line appeared earlier in the filtered temperature equation, corresponding to the $pdV$ work (or pressure-dilation) and the viscous dissipation at subgrid scales. The pressure-dilation term is sometimes neglected for weakly compressible flows because it is difficult to model (e.g., Moin et al. 1991, Erlebacher et al. 1992), but in highly compressible flows the $pdV$ work is not negligible and needs to be accounted for. 
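The two eddy-viscosity prescriptions discussed above can be written down directly. The coefficient values below are typical literature choices quoted only for illustration, not values fixed by the text:

```python
import numpy as np

def nu_smagorinsky(delta, S_norm, C_s=0.17):
    """Smagorinsky (1963) eddy viscosity: nu_t = (C_s * Delta)^2 * |S~|,
    with |S~| = (2 S~_ij S~_ij)^{1/2}.  C_s is illustrative."""
    return (C_s * delta) ** 2 * S_norm

def nu_one_equation(delta, K, C_nu=0.05):
    """One-equation eddy viscosity, eq. (nut): nu_t = C_nu * Delta * sqrt(2K),
    with K the subgrid kinetic energy per unit mass.  C_nu is illustrative."""
    return C_nu * delta * np.sqrt(2.0 * K)
```

The one-equation form uses the evolved subgrid energy $K$ as its velocity scale, whereas the Smagorinsky form infers the velocity scale from the resolved strain; the two coincide only when $K$ tracks $\Delta^2 |\widetilde{S}|^2$.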
Using direct numerical simulations of supersonic turbulence, PS10 showed that, despite its reversible nature, the $pdV$ work tends to convert kinetic energy to thermal energy, and thus acts as a significant kinetic energy sink in addition to the viscous dissipation. Based on their results, one can model the pressure-dilation and viscous dissipation terms together as $C_{\rm diss} \overline{\rho} K/ \tau_{\rm sdyn} = C_{\rm diss} \sqrt{2}\overline{\rho}K^{3/2}/\Delta $, where the subgrid dynamical time, $\tau_{\rm sdyn}$, is assumed to be $\Delta/\sqrt{2K}$. Here we have implicitly assumed that the filter size lies in the inertial range of the real flow and also assumed that the flow “driving" length at subgrid scales is $\simeq \Delta$. This may not be true if, for example, supernova explosions are the main energy source for turbulence and the resolution scale is significantly larger than the size of supernova remnants. We do not consider this complexity here, as the one-equation subgrid model is used largely for illustrative purposes. The dimensionless parameter $C_{\rm diss}$ is expected to be a function of the subgrid Mach number, $M_{\rm s} \equiv {(2K/ R \widetilde{T})}^{1/2}$, and the dependence of $C_{\rm diss}$ on $M_{\rm s}$ may be determined using the simulation results of PS10. The last term on the first line of eq. (\[subgridkinetic\]) corresponds to the addition of kinetic energy at subgrid scales by the driving force, $f_i$. If the characteristic length scale of $f_i$ is much larger than the filter size, $f_i \simeq \widetilde{f}_i$, and $(\widetilde{v_i f_i} - \widetilde{v}_i \hspace{0.5mm} \widetilde{f}_i)$ would be negligible, meaning that the driving force stores kinetic energy mainly at the resolved scales. On the other hand, a significant fraction of energy input from supernova explosions may be deposited primarily as subgrid kinetic energy, if the simulation does not resolve the typical size of supernova remnants (Scannapieco & Bruggen 2010).
In that case, $\widetilde{f}_i \simeq 0$ for isotropically expanding supernova remnants, and the subgrid kinetic energy input can be estimated as the product of the supernova explosion energy and the local supernova rate per unit volume. The transport (or flux) terms in the last two lines of eq. (\[subgridkinetic\]) are usually grouped and modeled together as a diffusion of the subgrid kinetic energy (Lilly 1966, Schumann 1975, Moeng 1984, Ghosal et al. 1995, Schmidt et al. 2006, Genin & Menon 2010, see, however, Chai & Mahesh 2012 for separate treatment of each individual term). Here we adopt an eddy-diffusion assumption for the first three flux terms and approximate them together by $(\nu + \nu_{\rm k}) \partial_i K $, where $ \nu_{\rm k} = C_{\rm k} \Delta \sqrt{2K} $. The parameter $C_{\rm k}$ is sometimes set to be equal to $C_{\rm \nu}$ in eq. (\[nut\]) for the subgrid stress tensor (e.g., Kim & Menon 1999). In general, they may be different and need to be treated separately (e.g., Schmidt et al. 2006). The last flux term in eq. (\[subgridkinetic\]) can be written as $-\overline{\rho} R (\widetilde{Tv}_i -\widetilde{T} \hspace{0.5mm}\widetilde{v}_i) = -\overline{\rho} R q_i$ where the temperature flux, $q_i$, by subgrid motions is modeled by eq. (\[TFlux\]). 
With these assumptions, we have (see Genin & Menon 2010), $$\begin{gathered} \frac{ \partial (\overline{\rho} \hspace{0.5mm} K) } {\partial t} + \frac {\partial ( \overline{\rho} K \widetilde{v_i} ) } {\partial x_i} = - \overline{\rho} \widetilde{S}_{ij} \tau_{ij} - C_{\rm diss} \frac {\sqrt{2} \overline{\rho}K^{3/2} } {\Delta} \hspace{2.3cm} \notag \\ \hspace{2.3cm} + \frac{\partial}{\partial x_i} \left[ \overline{\rho} (\nu + \nu_{\rm k} ) \frac{\partial K}{\partial x_i} + \overline{\rho} R \frac{\nu_{\rm t}}{Pr_{\rm t}} \left( \frac{\partial \widetilde{T} } {\partial x_i} \right) \right] \notag\\ \hspace{-1.1cm} + \overline{\rho} \left(\widetilde{v_i f_i} - \widetilde{v}_i \hspace{0.5mm} \widetilde{f}_i \right), \label{subgridkineticmodel}\end{gathered}$$ which is in a closed form and can be evolved to obtain the subgrid turbulent energy. We next consider the filtered temperature equation (\[FilteredTemperature\]). We use the eddy-diffusivity model, eq. (\[TFlux\]), for the temperature flux, $q_i$, in the last term of eq. (\[FilteredTemperature\]), and model the pressure-dilation and viscous dissipation terms as in eq. (\[subgridkineticmodel\]). 
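A useful sanity check on eq. (\[subgridkineticmodel\]) is the local production–dissipation balance: dropping the transport, forcing, and advection terms and using the eddy-viscosity closure with trace terms ignored, the production $2 C_{\nu} \Delta \sqrt{2K}\, |\widetilde{S}|^2$ balances the dissipation $C_{\rm diss}\sqrt{2}\,K^{3/2}/\Delta$ at $K = (2C_{\nu}/C_{\rm diss})\,\Delta^2 |\widetilde{S}|^2$. This equilibrium is our simplification, not a result stated in the text; the sketch below verifies the algebra with illustrative coefficients:

```python
import numpy as np

def equilibrium_K(delta, S_norm, C_nu=0.05, C_diss=0.5):
    """Subgrid kinetic energy at which production balances dissipation
    in eq. (subgridkineticmodel), ignoring transport and forcing:
        2 C_nu Delta sqrt(2K) S^2 = C_diss sqrt(2) K^{3/2} / Delta
    =>  K = (2 C_nu / C_diss) * (Delta * S)^2   (sketch only)."""
    return (2.0 * C_nu / C_diss) * (delta * S_norm) ** 2
```

Note that this recovers the Smagorinsky-like scaling $K \propto \Delta^2 |\widetilde{S}|^2$, consistent with the one-equation model reducing to an algebraic model when the $K$ equation is in local equilibrium.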
With these assumptions, we obtain, $$\begin{gathered} C_{\rm V}\frac{ \partial (\overline{\rho} \hspace{0.5mm} \widetilde{T}) } {\partial t} + C_{\rm V} \frac {\partial ( \overline{\rho} \hspace{0.5mm} \widetilde{T} \hspace{0.5mm} \widetilde{v_i} ) } {\partial x_i} = - \overline{\rho} R \widetilde{T} \hspace{0.5mm} \frac{\partial \widetilde{v}_i}{\partial x_i} + \widetilde{S}_{ij} \hspace{0.5mm} \overline{\sigma}_{ij} \hspace{2cm} \notag \\ \hspace{2.3cm} + \frac {\partial} {\partial x_i} \left(\overline {\kappa} \frac{\partial \widetilde{T}} {\partial x_i} \right) + C_{\rm V} \frac {\partial } {\partial x_i} \left(\overline{\rho} \frac{\nu_{\rm t} } {Pr_{\rm t}} \frac{\partial \widetilde{T}} {\partial x_i}\right) \notag \\\hspace{0.8cm} + C_{\rm diss} \frac {\sqrt{2} \overline{\rho} K^{3/2} } {\Delta} + \overline{\rho} \hspace{0.5mm} \widetilde{\Gamma} - \overline{\rho} \hspace{0.5mm} \widetilde{\Lambda} , \label{FilteredTemperaturemodel} \end{gathered}$$ where the thermal conductivity term can be neglected if $\overline{\kappa} \ll \overline{\rho} C_{\rm V} \alpha_{\rm t}$. An alternative approach to obtain $\widetilde{T}$ is to model and evolve the equation of the filtered total energy, $\widetilde{E}$($\equiv \frac{1}{2} \widetilde{v_i} \widetilde{v_i} + K + C_{\rm V} \widetilde{T} $), per unit mass (e.g., Vreman et al. 1997, Kosovic et al. 2002, Schmidt et al. 2006, Park & Mahesh 2007, Genin & Menon 2010, Scannapieco & Bruggen 2010, Chai & Mahesh 2012). To solve the LES and $K$ equations, one needs to determine four parameters, $C_{\rm \nu}$, $Pr_{\rm t}$, $C_{\rm diss}$, and $C_{\rm k}$. Traditionally, these are assumed to be positive constants and specified [*a priori*]{} and then tuned by testing against experiments or numerical simulations. 
However, this approach fails to fully account for the flow-dependence of these parameters, and it does not allow backscatter of subgrid kinetic energy to the resolved scales, which does occur in some local regions of a turbulent flow (Piomelli et al. 1991). These limitations motivated a dynamic procedure for subgrid modeling, in which the model coefficients are computed in a localized and adaptive way using the flow structures at resolved scales and the assumption of scale invariance (e.g., Germano et al. 1991; Moin et al. 1991; Germano 1992; Lilly 1992; Ghosal et al. 1995; Kim & Menon 1999; Schmidt et al. 2006; Park & Mahesh 2007; Genin & Menon 2010; Chai & Mahesh 2012). Here we have restricted our attention to eddy-viscosity models and focused particularly on the one-equation model. The interested reader is referred to, e.g., Vreman et al. (1997), Meneveau & Katz (2000), and De Stefano et al. (2008) for non-eddy-viscosity subgrid models and their dynamic versions. Two-equation subgrid models have also been developed, which, in addition to the subgrid kinetic energy, evolve another subgrid quantity, such as the dissipation rate (e.g., Gallerano et al. 2005) or the characteristic length scale of subgrid turbulent motions (e.g., Fang & Menon 2006; Dimonte & Tipton 2006; Scannapieco & Bruggen 2010).

Subgrid Model for Turbulent Mixing and the Pollution of Pristine Gas
--------------------------------------------------------------------

In this subsection, we construct a subgrid model for the pollution of primordial gas in early galaxies. We first consider the equation for the filtered concentration field, which provides a general illustration of subgrid modeling of turbulent mixing.
Applying the filtering procedure to the advection-diffusion equation (\[advection\]) gives, $$\frac{ \partial (\overline{\rho} \hspace{0.5mm} \widetilde{C} )} {\partial t} + \frac {\partial (\overline{\rho} \hspace{0.5mm} \widetilde{v}_i \widetilde{C})}{\partial x_i} = - \frac{\partial (\overline{\rho} g_i )}{\partial x_i} + \frac{\partial}{\partial x_i} \left( \overline{\rho \gamma \frac{\partial C}{\partial x_i}} \right) + \overline{\rho} \widetilde{S}, \label{FiilteredConcentration}$$ where $g_i = \widetilde{v_iC} - \widetilde{v_i}\widetilde{C} $ is the concentration flux caused by subgrid turbulent motions. The equation is similar to the temperature equation (\[FilteredTemperature\]) except for the pressure-dilation and viscous dissipation terms. In analogy to the subgrid temperature flux, $q_{i}$, one may adopt an eddy-diffusivity assumption for the concentration flux, $g_i = - \gamma_{\rm t} \partial_i \widetilde{C}$, yielding, $$\frac{ \partial (\overline{\rho} \hspace{0.5mm} \widetilde{C})} {\partial t} + \frac {\partial (\overline{\rho} \hspace{0.5mm} \widetilde{v}_i \widetilde{C})}{\partial x_i} = \frac{\partial}{\partial x_i} \left( \overline{\rho} (\gamma+ \gamma_{\rm t} ) \frac {\partial \widetilde{C}}{\partial x_i} \right) + \overline{\rho} \widetilde{S}, \label{FilteredConcentrationAdvection}$$ where we also assumed $\overline{\rho \gamma \partial_i C} \simeq \overline{\rho} \gamma \partial_i \widetilde{C}$. The eddy diffusivity, $\gamma_{\rm t}$, is of the same order as the eddy viscosity, and the subgrid Schmidt number $Sc_{\rm t}$($\equiv \nu_{\rm t}/\gamma_{\rm t}$) is sometimes set to be the same as the subgrid Prandtl number $Sc_{\rm t} = Pr_{\rm t} \approx 0.7$ (e.g., Jaberi et al. 1999). Somewhat smaller values, $Sc_{\rm t} \simeq 0.3-0.4$, have also been proposed (e.g., Pitsch & Steiner 2000 and Jimenez et al. 2001). 
$Sc_{\rm t}$ can also be computed from the local flow structures using the dynamic procedure discussed above (see e.g., Moin et al. 1991, Pierce & Moin 1998). For the LES of interstellar turbulence, $\gamma_{\rm t}$ is expected to be much larger than the molecular diffusivity $\gamma$. Similar to the subgrid kinetic energy, we can derive an equation for the subgrid concentration variance, defined as $\widetilde{(\delta C)^2} = \widetilde{C^2} - (\widetilde{C})^2$. The equation reads, $$\begin{gathered} \frac{ \partial \left(\overline{\rho} \hspace{0.5mm} \widetilde{(\delta C)^2}\right)} {\partial t} + \frac {\partial \left(\overline{\rho} \hspace{0.5mm} \widetilde{v}_i \hspace{0.5mm} \widetilde{ (\delta C)^2 }\right)}{\partial x_i} = - 2 \overline{\rho} \hspace{0.5mm} g_i \hspace{0.5mm} \frac{\partial \widetilde{C} }{\partial x_i} + \frac{\partial}{\partial x_i} \bigg\{ 2 \overline{\rho} \hspace{0.5mm} \widetilde{C} \hspace{0.5mm} g_i \hspace{3cm} \notag\\ \hspace{1.6cm} + \left( \overline{\rho \gamma \frac {\partial C^2} {\partial x_i}} - 2 \widetilde{C} \hspace{0.5mm} \overline{ \rho \gamma \frac {\partial C} {\partial x_i} }\right) - \overline{\rho} \left(\widetilde{C^2 v_i} - \widetilde{C^2} \hspace{0.5mm} \widetilde{v}_i\right) \bigg\}\notag \\ \hspace{-0.8cm} - 2 \left[\overline{ \rho \gamma \left(\frac{\partial C}{\partial x_i} \right)^2 } - \overline{\rho \gamma \left( \frac{\partial C} {\partial x_i}\right)} \frac{\partial \widetilde{C} }{\partial x_i} \right] \notag\\ \hspace{-3.5cm} + 2 \overline{\rho} ( \widetilde{SC} - \widetilde{S} \hspace{0.5mm} \widetilde{C}) , \label{FilteredConcentrationVariance} \end{gathered}$$ which is in close analogy to eq. (\[subgridkinetic\]). The first term, $- 2 \overline{\rho} g_i \partial_i \widetilde{C}$, on the r.h.s. represents the production of the concentration variance by the scalar cascade from the resolved scales. The term on the third line corresponds to the subgrid scalar dissipation by molecular diffusivity. 
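To see what a relaxation closure of this dissipation term implies: if the subgrid scalar dissipation is closed as $\overline{\rho}\,\widetilde{(\delta C)^2}/\tau_{\rm sm}$, then in the absence of production and transport the subgrid variance decays exponentially on the mixing timescale. A minimal numerical sketch (all values illustrative):

```python
import numpy as np

# If the subgrid scalar dissipation is closed as rho * Var / tau_sm, then
# with production and transport switched off, d(Var)/dt = -Var / tau_sm and
# the subgrid variance decays exponentially on the mixing timescale tau_sm.
# Numbers here are illustrative assumptions.

def variance_decay(var0, tau_sm, t):
    """Analytic solution of d(Var)/dt = -Var / tau_sm."""
    return var0 * np.exp(-t / tau_sm)

# Cross-check against a forward-Euler integration of the same ODE.
tau_sm, var0, dt, nsteps = 2.0, 1.0, 1e-4, 20000
var = var0
for _ in range(nsteps):
    var += -var / tau_sm * dt
analytic = variance_decay(var0, tau_sm, dt * nsteps)   # t = 1 mixing time
```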
We model the dissipation as $\overline{\rho} \widetilde{ (\delta C)^2}/\tau_{\rm sm}$ (see eq. (\[ensemblevariance\])), where the subgrid mixing timescale $\tau_{\rm sm}$ is expected to scale with the subgrid dynamical time, $\tau_{\rm sdyn} \equiv \Delta/\sqrt{2K}$. Parametrizing $\tau_{\rm sm}$ with respect to $\tau_{\rm sdyn}$, we set the dissipation term to $C_{\rm m} \overline{\rho} \sqrt{2K} \widetilde{ (\delta C)^2}/\Delta$, where $C_{\rm m} = \tau_{\rm sdyn}/\tau_{\rm sm}$. The parameter $C_{\rm m}$ depends on the local subgrid Mach number, $M_{\rm s} = (2K/RT)^{1/2}$, and also on the subgrid length scale $L_{\rm sp}$ at which the pollutants are injected, and it can be calibrated using the simulation results of PS10, who tabulated the mixing timescale of passive scalars forced at different length scales in turbulent flows at a range of Mach numbers. If the pollutants are forced at large scales, so that the subgrid fluctuations are contributed primarily by the cascade from resolved scales, it is appropriate to set $L_{\rm sp} = \Delta$. However, $L_{\rm sp}$ could be smaller than the resolution scale, $\Delta$, if, for example, multiple supernovae explode in a single computational cell (Scannapieco & Bruggen 2010). Finally, we model the transport (flux) terms (the last term in the first line and all the terms in the second line) together as a diffusion term, $\partial_i \left( (\gamma+ \gamma_{\rm t2}) \overline \rho \partial_i \widetilde{(\delta C)^2}\right)$. The eddy diffusivity, $\gamma_{\rm t2}$, here is likely to be close to $\gamma_{\rm t}$ in the $\widetilde{C}$ equation, although it is not clear if they are exactly equal (see below). These assumptions result in a closed variance equation (cf. Jimenez et al.
2001), $$\begin{gathered} \frac{ \partial (\overline{\rho} \hspace{0.5mm} \widetilde{(\delta C)^2})} {\partial t} + \frac {\partial (\overline{\rho} \hspace{0.5mm} \widetilde{v}_i \widetilde{ (\delta C)^2 })}{\partial x_i} = 2 \overline{\rho} \gamma_{\rm t} \left( \frac{\partial \widetilde{C} }{\partial x_i} \right)^2 \notag \hspace{2.cm} \\ \vspace{2mm}\hspace{1.5cm} - C_{\rm m} \overline{ \rho} \frac{\sqrt{2K} \widetilde{(\delta C)^2}}{\Delta} + \frac{\partial}{\partial x_i} \left[ \overline{\rho}(\gamma + \gamma_{\rm t2}) \frac {\partial \widetilde{ (\delta C)^2 }}{\partial x_i} \right] \notag \\ +2 \overline{\rho} ( \widetilde{SC} - \widetilde{S} \widetilde{C}), \hspace{2.9cm} \label{ModeledConcentrationVariance} \end{gathered}$$ which illustrates the basic picture for modeling the subgrid concentration fluctuations, and provides a useful guideline for formulating a subgrid model for the pollution of pristine gas. A dynamic procedure for the subgrid scalar variance and dissipation was developed in Pierce & Moin (1998). The subgrid model we construct for the pollution of pristine gas is based on the PDF formulation in the context of LES. Applying a Favre filter, eq.
(\[Favrefilter\]), to the fine-grained concentration PDF $\phi = \delta (Z-C({\bf x},t))$, we define a density-weighted PDF at the resolution scale, $$\widetilde {\phi}(Z; {\bf x},t) = \frac{\overline {\rho \phi(Z; {\bf x},t)}} {\overline{\rho}}.$$ An exact equation for the filtered PDF, $\widetilde{\phi}$, is derived in Appendix A, $$\begin{gathered} \frac {\partial (\overline {\rho} \hspace{0.5mm}\widetilde {\phi})} {\partial t} + \frac{\partial}{\partial x_i} \left( \overline{\rho} \hspace{0.5mm} \widetilde{\phi} \hspace{0.5mm}\overline{[v_i|C=Z]}_{\rho} \right) = \frac{\partial}{\partial x_i} \left( \overline{\rho \gamma \frac{\partial \phi}{\partial x_i}} \right) \hspace {3cm}\notag \\ \hspace{2.2cm} - \frac{\partial^2}{\partial Z^2} \left(\overline{\rho} \hspace{0.5mm} \widetilde{\phi} \overline{\left[\gamma \left(\frac{\partial C}{\partial x_i} \right)^2 \left\vert \vphantom{\frac{1}{1}} \right. C=Z \right]}_\rho \right)\notag \\ \hspace{.3cm} - \frac{\partial}{\partial Z} \left( \overline{\rho} \hspace{0.5mm}\widetilde{\phi} \hspace{0.5mm} \overline{ [S|C=Z]} _{\rho} \right), \label{rhofilteredeqtext}\end{gathered}$$ where $\overline{[\cdot \cdot \cdot|C=Z]}_\rho$ denotes density-weighted filtering conditioned on the local concentration value. The definition of the conditional filtering is given in Appendix A. Eq. (\[rhofilteredeqtext\]) is essentially identical to eqs. (\[pdfeq\]) and (\[ensemblediffusivityterm\]) for the ensemble-defined PDF, $\Phi$. This implies that, first, the same closure problem exists for the advection and diffusivity terms in eq. (\[rhofilteredeqtext\]), and, second, the PDF closure models in the ensemble context can be applied to the filtered PDF equation. Although our primary goal is not to solve the equation for the entire filtered PDF, we give an outline for modeling the PDF equation, which is helpful for understanding our LES approach for the pristine fraction. 
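To make the density-weighted PDF definition concrete, the sketch below builds $\widetilde{\phi}(Z)$ from synthetic density and concentration samples in a single filter volume as a $\rho$-weighted histogram, and checks that it is normalized and that its first moment recovers the Favre-filtered concentration. The sample distributions are illustrative assumptions.

```python
import numpy as np

# The density-weighted (Favre) filtered PDF within one filter volume is the
# rho-weighted histogram of the concentration samples, normalized by the
# total density. The synthetic sample distributions are illustrative.

rng = np.random.default_rng(0)
rho = 1.0 + 0.5 * rng.random(100000)     # density samples in the cell
C = rng.beta(2.0, 5.0, size=100000)      # concentration samples in (0, 1)

bins = np.linspace(0.0, 1.0, 201)
dZ = bins[1] - bins[0]
weights = rho / (rho.sum() * dZ)         # so that int phi_tilde dZ = 1
phi_tilde, _ = np.histogram(C, bins=bins, weights=weights)
Zmid = 0.5 * (bins[1:] + bins[:-1])

norm = phi_tilde.sum() * dZ              # normalization of the filtered PDF
first_moment = (Zmid * phi_tilde).sum() * dZ
C_favre = (rho * C).sum() / rho.sum()    # Favre-filtered concentration
# first_moment reproduces C_favre up to the bin width.
```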
We first consider the advection term, which is responsible for the transport of the PDF between different regions by the turbulent velocity. We write it in two terms, $\widetilde{\phi} \hspace{0.5mm}\overline{[v_i|C=Z]}_{\rho} = \widetilde{\phi} \hspace{0.5mm} \widetilde{v}_i + \widetilde{\phi} \left( \overline{[v_i|C=Z]}_{\rho} - \widetilde{v}_i\right)$, and then modeling the second term with an eddy-diffusivity assumption gives $\widetilde{\phi} \hspace{0.5mm}\overline{[v_i|C=Z]}_{\rho} = \widetilde{\phi} \hspace{0.5mm} \widetilde{v_i} - \gamma_{\rm t \phi} { \partial_i \widetilde{\phi}} $ where $\gamma_{\rm t \phi}$ is the eddy-diffusivity for the PDF flux by the subgrid motions (see, e.g., Gao & O’Brien (1993), Colucci et al. (1998), Jaberi et al. (1999)). The filtered PDF equation then becomes, $$\begin{gathered} \frac {\partial (\overline {\rho} \hspace{0.5mm}\widetilde {\phi})} {\partial t} + \frac{\partial}{\partial x_i} \left( \overline{\rho} \hspace{0.5mm} \widetilde{\phi} \hspace{0.5mm} \widetilde{v}_i \right) = \frac{\partial}{\partial x_i} \left( \overline{\rho} ( \gamma_{\rm t \phi} + \gamma) \frac{\partial \widetilde{\phi} }{\partial x_i} \right) \hspace{3cm}\notag\\ \hspace{2cm} - \frac{\partial^2}{\partial Z^2} \left(\overline{\rho} \hspace{0.5mm} \widetilde{\phi} \overline{\left[\gamma \left(\frac{\partial C}{\partial x_i} \right)^2 \left\vert \vphantom{\frac{1}{1}} \right. C=Z \right]}_\rho \right) \notag\\ \hspace{0.1cm} - \frac{\partial}{\partial Z} \left( \overline{\rho} \hspace{0.5mm}\widetilde{\phi} \hspace{0.5mm} \overline{ [S \vert C=Z]} _{\rho} \right), \label{rhofilteredeqadvection}\end{gathered}$$ where we also assumed that $\overline{\rho \gamma \partial_i \phi} \simeq \overline{\rho} \gamma \partial_i \widetilde{\phi}$ (see, e.g., Jaberi et al. 1999). Taking the first-order moment of eq. (\[rhofilteredeqadvection\]) gives an equation for the filtered concentration $\widetilde{C}$, which is the same as eq. 
(\[FilteredConcentrationAdvection\]) except that $\gamma_{\rm t}$ is replaced by $\gamma_{\rm t \phi}$. This suggests that $\gamma_{\rm t \phi} \simeq \gamma_{\rm t}$ (Gao & O’Brien 1993). Also, using the second-order moment of eq. (\[rhofilteredeqadvection\]), we can derive an equation for the subgrid variance, $\widetilde{(\delta C)^2}$, which is the same as eq. (\[ModeledConcentrationVariance\]) except that $\gamma_{\rm t \phi}$ replaces both $\gamma_{\rm t}$ and $\gamma_{\rm t2}.$ This indicates that the eddy diffusivities for the concentration mean ($\gamma_{\rm t}$) and variance ($\gamma_{\rm t2}$) are automatically set to be equal if one models the advection term in the PDF equation with an eddy-diffusivity closure. The term on the second line of eq. (\[rhofilteredeqadvection\]) represents homogenization by molecular diffusivity, and can be modeled using established PDF closure approximations for turbulent mixing, such as those discussed in §3.1. In §6.1, we derived an expression for the source term in the ensemble PDF equation. Using the same method, we estimate the source term in the last line of eq. (\[rhofilteredeqadvection\]) for the filtered PDF. The source term for new metals from supernovae is $\overline{\dot{n}}_{\rm SN} m_{\rm ej} \left(\delta(Z-Z_{\rm ej}) -\widetilde{\phi}\right)$, where $\overline{\dot{n}}_{\rm SN}({\bf x}, t)$ is the filtered number rate of supernova explosions per unit volume, and ejecta from each supernova is assumed to have the same mass $m_{\rm ej},$ with metallicity $Z_{\rm ej}$. Again the $-\widetilde{\phi}$ term ensures the conservation of the total probability. With the supernova source term in the filtered PDF equation, it is straightforward to calculate the source terms in the filtered concentration and variance equations (\[FilteredConcentrationAdvection\] and \[ModeledConcentrationVariance\]). 
We find that the source terms are $\overline{\dot{n}}_{\rm SN} m_{\rm ej} (Z_{\rm ej} -\widetilde{C})$ and $\overline{\dot{n}}_{\rm SN} m_{\rm ej} [(Z_{\rm ej} -\widetilde{C})^2-\widetilde{\delta C^2}]$ in the $\widetilde{C}$ and $\widetilde{\delta C^2}$ equations, respectively. If a continuous infall of pristine gas from the halo exists during the formation and evolution of a galaxy, one can maintain a mass flux at the boundary of the simulation box and set $\widetilde{\phi} = \delta(Z) $ as the boundary condition for the (filtered) concentration PDF. We finally consider modeling the pollution of the primordial gas in an LES. Clearly, the fine-grained pristine fraction, $P(Z_{\rm c}; {\bf x},t)$, at a given point is an integral of the fine-grained PDF, ${\phi}(Z; {\bf x},t)$, from $Z=0$ to the threshold, $Z_{\rm c}$, and, similarly, the filtered pristine fraction, $\widetilde{P}$, at the resolution scale is given by, $$\widetilde{P} (Z_{\rm c}; {\bf x},t)= \int_{0}^{Z_{\rm c}} \widetilde{\phi} (Z; {\bf x},t) dZ. \label{filteredpristine}$$ We can therefore derive an equation for $\widetilde{P}$ by integrating the filtered PDF equation (\[rhofilteredeqtext\]) from $0$ to $Z_{\rm c}$. Performing such an integration for the advection term in eq. (\[rhofilteredeqtext\]) yields $\widetilde{P v_i}$, which corresponds to the flux of the pristine fraction into and out of a computational cell due to the transport/advection of the turbulent velocity (see §2.1). The term can be rewritten as $\widetilde{P} \hspace{0.5mm} \widetilde{v}_i + (\widetilde{P v_i} - \widetilde{P} \hspace{0.5mm} \widetilde{v}_i)$ where the term in the brackets is the pristine fraction flux caused by the subgrid turbulent motions. We model this subgrid flux with an eddy-diffusion assumption, $$\widetilde{P v_i} - \widetilde{P} \widetilde{v}_i = -\gamma_{\rm P} \partial_i \widetilde{P},$$ where $\gamma_{\rm P}$ is the eddy diffusivity for the pristine fraction. 
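Eq. (\[filteredpristine\]) is a straightforward quadrature. The sketch below integrates an assumed exponential PDF (purely illustrative; for it $\widetilde{P}(Z_{\rm c}) = 1 - e^{-Z_{\rm c}}$ exactly) up to a small threshold:

```python
import numpy as np

# Sketch of eq. (filteredpristine): the filtered pristine fraction is the
# integral of the filtered concentration PDF from Z = 0 to the threshold
# Z_c. The exponential PDF used here is an illustrative assumption.

def pristine_fraction(phi, Z, Z_c):
    """Integrate a tabulated PDF phi(Z) from 0 to Z_c (trapezoid rule)."""
    mask = Z <= Z_c
    Zm, pm = Z[mask], phi[mask]
    return float(np.sum(0.5 * (pm[1:] + pm[:-1]) * np.diff(Zm)))

Z = np.linspace(0.0, 50.0, 200001)
phi = np.exp(-Z)                            # normalized on [0, infinity)
P_small = pristine_fraction(phi, Z, 0.01)   # close to Z_c for small Z_c
P_all = pristine_fraction(phi, Z, 50.0)     # total probability, ~= 1
```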
We use the self-convolution models (§3.2) for the effect of the diffusivity term on the pristine fraction. Integrating the supernova source term from 0 to $Z_{\rm c}$ gives $- \overline{\dot{n}}_{\rm SN} m_{\rm ej} \widetilde{P}$. With these models and assumptions, we obtain, $$\begin{gathered} \frac {\partial (\overline {\rho} \hspace{0.5mm}\widetilde {P})} {\partial t} + \frac{\partial}{\partial x_i} \left( \overline{\rho} \hspace{0.5mm} \widetilde{P} \hspace{0.5mm}\widetilde{v}_i \right) = \frac{\partial}{\partial x_i} \left( \overline{\rho} (\gamma + \gamma_{\rm P}) \frac{\partial \widetilde{P}}{\partial x_i} \right) \hspace{2cm} \notag\\ \hspace{2cm} - \frac{n_{\rm s}}{\tau_{\rm scon}} \widetilde{P}\left(1- \widetilde{P}^{1/n_{\rm s}} \right) - \overline{\dot{n}}_{\rm SN} m_{\rm ej} \widetilde{P} , \label{filteredprimordial}\end{gathered}$$ where we also assumed $\overline{\rho \gamma \partial_i P} \simeq \overline{\rho} \gamma \partial_i \widetilde{P}$, and $n_{\rm s}$ and $\tau_{\rm scon}$ correspond to the parameters, $n$ and $\tau_{\rm con}$, in the self-convolution models discussed in §3. As $\gamma$ is likely much smaller than $\gamma_{\rm P}$, the pristine fraction flux due to the molecular diffusivity can be neglected. The choice of $n_{\rm s}$ and $\tau_{\rm scon}$ according to the turbulence and pollutant conditions at subgrid scales will be described and discussed below. If a pristine mass flux is enforced at the boundary of the simulation box to imitate the infall of primordial gas, one should set $\widetilde{P} =1$ as a boundary condition. A comparison of eq. (\[filteredprimordial\]) with eq. (\[ModeledConcentrationVariance\]) shows that both equations have transport or flux terms, a mixing/homogenization term by the molecular diffusivity, and a source term. A similar analogy also exists with the equation for the subgrid kinetic energy, eq. (\[subgridkineticmodel\]). There is, however, an interesting difference.
The concentration variance equation has a term that tends to increase the subgrid variance, representing the scalar cascade from the resolved scales. On the other hand, there is no such production term in the $\widetilde{P}$ equation, because no mechanism exists in the mixing process that can produce pristine gas at subgrid scales. To derive the equation for the filtered pristine fraction, we could also have started from eq. (\[rhofilteredeqadvection\]) for the filtered PDF, where the advection term is already modeled by an eddy-diffusion assumption. In that case, we would have found that the primordial flux due to subgrid turbulent motions is given by $-\gamma_{\rm t \phi} \partial_i \widetilde{P}$. This suggests that, when applying an eddy-diffusivity closure to the advection term in the filtered PDF equation, it is implicitly assumed that the three eddy diffusivities, $\gamma_{\rm t}$, $\gamma_{\rm t2}$, and $\gamma_{\rm P}$, respectively for the mean, the variance, and the pristine fraction, are the same and all equal to $\gamma_{\rm t \phi}$. The quantitative accuracy of this assumption is not clear, although all the eddy diffusivities are expected to be of the same order. A simple estimate for $\gamma_{\rm P}$ is to scale it with the eddy viscosity as $\gamma_{\rm P} = \nu_{\rm t}/Sc_{\rm P}$, with the Schmidt number $Sc_{\rm P}$ for the pristine fraction in the range $0.3-0.7$, as in the case of $\gamma_{\rm t}$ discussed earlier (see text below eq. (\[FilteredConcentrationAdvection\])). To implement eq. (\[filteredprimordial\]) in an LES for the pollution of primordial gas in early galaxies, we now only need to specify the two parameters, $n_{\rm s}$ and $\tau_{\rm scon}$, from the self-convolution model. When using the convolution model in eq.
(\[filteredprimordial\]), we have implicitly assumed that statistical homogeneity is restored at the resolution scale, $\Delta$, because the applicability of the model is tested and confirmed only in statistically homogeneous turbulent flows. With this assumption, $n_{\rm s}$ and $\tau_{\rm scon}$ can be determined using our simulation results. These parameters are functions of the flow Mach number and the pollutant injection scale relative to the flow driving scale. The subgrid Mach number, $M_{\rm s}$, can be easily computed by $(2K/R\widetilde{T})^{1/2}$ in the one-equation model, where $K$ and $\widetilde{T}$ are, respectively, the subgrid kinetic energy and the filtered gas temperature. The subgrid source injection scale, $L_{\rm sp}$, in a computational cell would be close to the cell size, $\Delta$, if the pollutant source was transported into the cell by advection, or if only one supernova exploded in the cell. In that case, it is appropriate to set $L_{\rm sp} = \Delta$. On the other hand, if multiple supernova explosions occurred in a single cell, then $L_{\rm sp}$ would roughly scale as the number of supernovae to the $-1/3$ power, assuming a random distribution. We assume the subgrid flow “driving” scale, $L_{\rm sf}$, is roughly given by the cell size, $\Delta$. With $M_{\rm s}$ and the ratio $L_{\rm sp}/L_{\rm sf}$, one can fix the parameters, $n_{\rm s}$ and $\tau_{\rm scon}$, by interpolating Tables 2, 3 and 4 in §5.4.6, or using the fits given in eqs. (\[eq:taunfit\]). The timescales $\tau_{\rm con1}$ and $\tau_{\rm con2}$ given in Tables 2 and 4 are normalized to the flow dynamical time; therefore, the values for $\tau_{\rm scon}$ are in units of the subgrid dynamical time, $\tau_{\rm sdyn}$ ($\equiv \Delta/\sqrt{2K}$). We point out that there is an uncertainty in the applicability of our tabulated parameters to computational cells with supernova explosions.
In these cells, the effective driving is likely better described by a purely compressive force rather than a solenoidal one. As discussed earlier, this may affect the parameters in the convolution model. Future simulations are needed to investigate the potential dependence of the parameters on the compressibility of the driving force. The convolution timescale also has a dependence on the threshold metallicity, $Z_{\rm c}$, relative to the mean concentration (see §5.4.4). Thus, to determine $\tau_{\rm scon}$, we need to compute the ratio, $Z_{\rm c}/\widetilde{C}$, of the threshold to the mean, $\widetilde{C}$, in a cell. For that purpose, it is necessary to solve the filtered concentration equation (\[FilteredConcentrationAdvection\]) to keep track of $\widetilde{C}$ in all computational cells. As mentioned earlier, the source term in this equation is given by $\overline{\dot{n}}_{\rm SN} m_{\rm ej}(Z_{\rm ej} -\widetilde{C})$. Based on our results in §5.4.4, for small values of $Z_{\rm c}/\widetilde{C}$ ($\lsim 10^{-3}$), one can use a weak power-law scaling (see §5.4.4) to rescale the convolution timescales listed in Tables 2 and 4. However, it is possible that $\widetilde{C}$ in a computational cell is close to or even smaller than $Z_{\rm c}$. As discussed in §5.4.4, in the extreme case with $\widetilde{C} \lsim Z_{\rm c}$, the evolution of $\widetilde{P}$ in a cell would be qualitatively different from the prediction of the self-convolution models. A careful treatment is thus needed for cells with $\widetilde{C}$ close to or smaller than $Z_{\rm c}$. How the fraction, $\widetilde{P}$, evolves in this situation is not explored in the current work, and we defer it to a future paper. In the subgrid model outlined above for the pristine fraction, we adopted a simple approach to fix the model parameters, prescribing them based on our simulation results and previous work on LES.
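The recipe above for the per-cell subgrid inputs can be summarized in two small helpers: the subgrid Mach number from $K$ and $\widetilde{T}$, and the injection scale with the $N_{\rm SN}^{-1/3}$ scaling for multiple supernovae in a cell. All numerical values (including $R = 1$) are illustrative assumptions.

```python
import numpy as np

# Helpers for the subgrid-model inputs: the subgrid Mach number from the
# subgrid kinetic energy K and filtered temperature T, and the pollutant
# injection scale with the N_SN**(-1/3) scaling for multiple supernovae in
# one cell. All numerical values (including R = 1) are illustrative.

def subgrid_mach(K, T, R=1.0):
    """M_s = (2K / (R*T))**0.5 in the one-equation model."""
    return np.sqrt(2.0 * K / (R * T))

def injection_scale(Delta, N_SN):
    """Subgrid injection scale L_sp, capped at the cell size Delta."""
    if N_SN <= 1:
        return Delta
    return Delta * N_SN ** (-1.0 / 3.0)

M_s = subgrid_mach(K=2.0, T=1.0)             # = 2.0
L_one = injection_scale(Delta=1.0, N_SN=1)   # one supernova: L_sp = Delta
L_eight = injection_scale(Delta=1.0, N_SN=8) # eight supernovae: Delta / 2
```

With $M_{\rm s}$ and $L_{\rm sp}/L_{\rm sf}$ in hand, the model parameters would then be read off the tabulated results or the fits, as described above.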
An interesting question is whether the parameters can be determined dynamically using the resolved local flow and scalar structures. It seems highly uncertain whether the dynamic procedure is applicable at all to the problem of how the pristine gas is polluted in a turbulent flow. As mentioned earlier, the validity of the dynamical procedure relies on the existence of scale invariance. This can be justified, e.g., for the cascade of kinetic energy or the concentration variance, based on Kolmogorov’s similarity theory of turbulence. However, unlike kinetic energy or the scalar variance, which are 2nd-order statistical measures, the pristine fraction corresponds to the extreme PDF tail, and it is unknown whether scale invariance exists for such a high-order quantity. Exploring the possibility of developing a dynamic subgrid model for the pristine gas fraction would be an interesting topic for a future study. Conclusions =========== The shift from Population III to normal star formation is a global transition of the universe that is dependent on mixing on scales smaller than a parsec (Pan & Scalo 2007). This means that numerical simulations of this process will only be possible if we first develop a deep understanding of the fundamental physics of how the pristine material is polluted in turbulent flows. In an earlier paper (PSS), we developed a theoretical approach to modeling this process based on the PDF method for passive scalar mixing in statistically homogeneous turbulence, and we explored the evolution of the pristine fraction, $P(Z_{\rm c}, t)$, defined as the mass fraction of the flow with pollutant concentration below a tiny threshold $Z_{\rm c}$. Then we used numerical simulations to show that a class of PDF models, called self-convolution models, provide successful fitting functions to the solution of $P(Z_{\rm c}, t),$ which corresponds to the far left tail of the concentration PDF. 
The convolution models are based on the physical picture of turbulence stretching pollutants and causing a cascade of concentration structures toward small scales. Mixing then occurs as the scale of the structures becomes sufficiently small for molecular diffusivity to operate efficiently, and the homogenization between neighboring structures corresponds to a convolution of the concentration PDF. The picture suggests that the mixing/pollution timescale is determined by the turbulent stretching rate at the scale where the pollutants are injected, and the main result of PSS was the prediction for the pristine fraction evolution, i.e., eq. (\[pfnconv\]), by the generalized self-convolution model. For convenience, we repeat eq. (\[pfnconv\]) here, $$\frac{dP(Z_{\rm c}, t)}{dt} = - \frac{n}{\tau_{\rm con}} P(1-P^{1/n}), \label{pfnconvconc}$$ where $\tau_{\rm con}$ is the timescale for the PDF convolution, and the parameter $n$ is interpreted as an indicator of the degree of spatial locality of the PDF convolution process. A smaller $n$ corresponds to more local convolution and broader PDF tails. In the present work, we briefly reviewed the formulation of PSS, and conducted a systematic numerical study of the turbulent pollution process, exploring an extended parameter space. We simulated four statistically homogeneous turbulent flows with rms Mach number $M$ ranging from $0.9$ to $6.2$. In each flow, we evolved 20 decaying scalars with different initial pollutant fractions, $H_0$, and different pollutant injection scales, $L_{\rm s}$. The simulation data further confirmed the validity of the convolution model and allowed us to measure the model parameters, $n$ and $\tau_{\rm con}$, in eq. (\[pfnconvconc\]) over a wide range of turbulence and pollutant conditions. Consistent with PSS, we find that, if the initial pollutant fraction $H_0 \gsim 0.1$, the simulation results for the pristine fraction can be well fit by the convolution model prediction, eq. 
(\[pfnconvconc\]), with properly chosen parameters. Eq. (\[pfnconvconc\]) is solved by $$P(Z_{\rm c}, t) = \frac{P_0}{\left[P_0^{1/n} + (1-P_0^{1/n} ) \exp\left( t /\tau_{\rm con2} \right) \right]^n}, \label{pfnconvsolutionconc}$$ where $P_0$ is the initial pristine fraction, and we have denoted the convolution timescale as $\tau_{\rm con2}$ for these scalar fields. Using eq. (\[pfnconvsolutionconc\]) to fit the simulation data yielded best-fit parameters $n$ and $\tau_{\rm con2}$. On the other hand, if $H_0 \lsim 0.1$, the evolution of $P(Z_{\rm c}, t)$ shows different behaviors at early and late times. In the early phase, the PDF convolution occurs locally in space due to the limited amount of pollutants, and the pristine fraction evolution follows the prediction of the “discrete” convolution model with $n=1$, i.e., $$P(Z_{\rm c}, t) = \frac{P_0}{P_0 + (1-P_0) \exp\left( t /\tau_{\rm con1} \right)}, \label{pfintegralsolutionconc}$$ where the convolution timescale for the early phase is denoted as $\tau_{\rm con1}$. Once a significant fraction (0.2-0.3) of the flow is polluted, the pristine fraction evolves in the same way as the scalar fields with $H_{0} \gsim 0.1$. We therefore denote the convolution timescale as $\tau_{\rm con2}$ for both scalars with $H_0 \gsim 0.1$ and the late phases of $H_{0} \lsim 0.1$ scalars (see §5.4). A successful two-phase fitting scenario was adopted for scalar fields with $H_{0} \lsim 0.1$, which connects eqs. (\[pfintegralsolutionconc\]) and (\[pfnconvsolutionconc\]) for early and late times. We examined the dependence of the model parameters on the flow Mach number, $M$. We found that the convolution timescales, $\tau_{\rm con1}$ and $\tau_{\rm con2}$, normalized to the flow dynamical time increase by $\simeq 20\%$ as $M$ goes from 0.9 to 2.1 and then saturate at $M \gsim 2$. This is similar to the behavior of the variance decay timescale, $\tau_{\rm m}$, as a function of $M$.
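As a numerical consistency check, the closed-form solution (\[pfnconvsolutionconc\]) can be compared against a direct integration of the convolution-model ODE (\[pfnconvconc\]); the parameter values below are illustrative.

```python
import numpy as np

# Consistency check: the closed-form solution satisfies the
# convolution-model ODE dP/dt = -(n / tau) * P * (1 - P**(1/n)), P(0) = P0.
# Parameter values are illustrative.

def P_analytic(t, P0, n, tau):
    u0 = P0 ** (1.0 / n)
    return P0 / (u0 + (1.0 - u0) * np.exp(t / tau)) ** n

def P_numeric(t_end, P0, n, tau, nsteps=200000):
    """Forward-Euler integration of the ODE up to t_end."""
    dt = t_end / nsteps
    P = P0
    for _ in range(nsteps):
        P += -(n / tau) * P * (1.0 - P ** (1.0 / n)) * dt
    return P

P0, n, tau, t_end = 0.99, 3.0, 1.0, 5.0
Pa = P_analytic(t_end, P0, n, tau)
Pn = P_numeric(t_end, P0, n, tau)
# Pa and Pn agree closely; at t = 0 the analytic form returns P0.
```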
For $H_0 \gsim 0.1$ scalars or the late phase of scalars with $H_0 \lsim 0.1$, the parameter $n$ decreases with increasing $M$, indicating that the PDF convolution proceeds more locally in supersonic turbulence with larger $M$. The decrease of $n$ is related to broader concentration PDF tails at higher $M$, corresponding to a larger pristine fraction at the same concentration variance. The pristine fraction evolution also depends on the pollutant injection scale $L_{\rm s}$. As $L_{\rm s}$ decreases, the pollution of the pristine gas is faster and the timescales, $\tau_{\rm con1}$ and $\tau_{\rm con2}$, decrease. This is expected as the mixing timescale scales with the eddy turnover time at $L_{\rm s}$. For scalars with $H_0 \gsim 0.1$ or the late phase of $H_0 \lsim 0.1$ scalars, the parameter $n$ becomes smaller as $L_{\rm s}$ decreases because, intuitively, the convolution is more local if the pollutants are injected at smaller scales. The dependence of the model parameters, $n$, $\tau_{\rm con1}$ and $\tau_{\rm con2}$, on the turbulence and pollutant properties is summarized in Tables 2, 3, and 4, and for convenience we have fit these results with simple functions as $$\begin{gathered} \tau_{\rm con1} = \left[0.225 - 0.055 \exp(-M^{3/2}/4) \right] \left(\frac{L_{\rm p}}{L_{\rm f}} \right)^{0.63} \times \notag\\ \left(\frac{Z_{\rm c}}{10^{-7} \langle Z \rangle} \right)^{0.015}, \notag \\ \tau_{\rm con2} = \left[0.335 - 0.095 \exp(-M^2/4) \right] \left(\frac{L_{\rm p}}{L_{\rm f}} \right)^{0.63}\hspace{.3cm} \times \notag \\ \left(\frac{Z_{\rm c}}{10^{-7} \langle Z \rangle} \right)^{0.02}, \notag\\ n = 1 + 11 \, \exp(-M/3.5) \left(\frac{L_{\rm p}}{L_{\rm f}}\right)^{1.3}, \hspace{1.7cm}\end{gathered}$$ which are applicable for all Mach numbers and pollution properties, as long as $L_{\rm p} \leq L_{\rm f}.$ Note that unlike eqs. (\[pfnconvconc\]) and (\[pfnconvsolutionconc\]), these fits are for convenience only and not based on an underlying physical picture.
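For reference, these fits can be transcribed directly; in the sketch below the threshold argument is expressed as the ratio $Z_{\rm c}/\langle Z \rangle$ and normalized by $10^{-7}$ as in the formulas, and the example values are illustrative.

```python
import numpy as np

# Direct transcription of the fitting functions above. Arguments: Mach
# number M, scale ratio L_ratio = L_p / L_f (<= 1), and the threshold ratio
# Zc_ratio = Z_c / <Z>, normalized by 1e-7 as in the formulas.

def tau_con1(M, L_ratio, Zc_ratio=1e-7):
    return ((0.225 - 0.055 * np.exp(-M ** 1.5 / 4.0))
            * L_ratio ** 0.63 * (Zc_ratio / 1e-7) ** 0.015)

def tau_con2(M, L_ratio, Zc_ratio=1e-7):
    return ((0.335 - 0.095 * np.exp(-M ** 2 / 4.0))
            * L_ratio ** 0.63 * (Zc_ratio / 1e-7) ** 0.02)

def n_fit(M, L_ratio):
    return 1.0 + 11.0 * np.exp(-M / 3.5) * L_ratio ** 1.3

# Example: M = 2, pollutants injected at the driving scale, Z_c = 1e-7 <Z>.
t1 = tau_con1(2.0, 1.0)   # ~0.20 flow dynamical times
t2 = tau_con2(2.0, 1.0)   # ~0.30 flow dynamical times
nn = n_fit(2.0, 1.0)      # ~7.2
```

The transcription reproduces the trends discussed above: $n$ decreases with $M$, and the timescales decrease with the injection-to-driving scale ratio.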
We showed that the model is valid for $Z_{\rm c} \lsim 10^{-3} \langle Z \rangle$, where $\tau_{\rm con1}$ and $\tau_{\rm con2}$ only have a weak dependence on $Z_{\rm c}$ (§5.4.4). If $Z_{\rm c}$ is close to or larger than $\langle Z \rangle$, the model is no longer applicable, and we defer a study of this situation to a later work. We also tested the convergence of the model parameters with the numerical resolution (§5.4.5). The parameters $n$ and $\tau_{\rm con}$ may have a dependence on the compressivity of the driving force, which will be systematically examined in a future work. To apply our model and simulation results to the mixing of heavy elements in the interstellar medium, we specified the source term in the concentration PDF equation, accounting for the effects of new metals from supernova explosions and the possible infall of pristine gas from the halo or the intergalactic medium. These two sources force spikes in the PDF at high and low concentration values, respectively. With the source term, we derived an equation (eq. (\[eq:pfgalaxy\])) for the global pristine fraction in early galaxies. A description for how to use the equation and our simulation results to estimate the primordial gas fraction was given in §6. We discussed the timescales of relevant processes that control how and how fast the pollution process proceeds. In particular, the spatial transport by turbulent motions over galactic scales may play an important role if the interstellar turbulence is driven at small scales, e.g., by supernova explosions. Numerical simulations accounting for the interstellar environment at galactic scales are a valuable tool to study the pollution of primordial gas from a less idealized point of view. In fact, recent efforts to track metal mixing in the context of the formation of protogalaxies have made significant improvements in tracking the spatial evolution of the metallicity, averaged over relatively large scales (e.g., Wise & Abel 2008). 
Greif (2009, 2010) employed a turbulent diffusion formalism to mimic mixing by smoothing over the SPH kernel, and a similar approach was used in the 10 Mpc SPH simulations by Maio (2010) and Campisi (2011), who assumed the initial metal pollution was spread over $\approx$ kpc scales by cluster winds. Another recent simulation by Ritter (2012) used a finite-difference code with adaptive mesh refinement coupled to Lagrangian tracer particles to keep track of the metals produced in an initially metal-free galaxy. Interestingly, they found that a cold supersonically turbulent core developed because of the fallback of metal-enhanced ejecta. However, because the resolution scale was much larger than the scales on which turbulence-enhanced molecular diffusivity operates (Pan & Scalo 2007), they could not resolve a sufficiently large range of scales to track the unmixed, primordial fraction. To overcome limitations such as these, we have developed a large-eddy simulation (LES) approach based on our model and simulation results. In LESs, the flow quantities at resolved scales are directly computed, while the feedback effect of subgrid turbulent motions is modeled. To overcome the over-pollution by numerical diffusion, a subgrid model was constructed to track the evolution of the concentration fluctuations below the resolution scale. Using the standard filtering procedure for the LES formulation, we derived an equation for the filtered concentration PDF representing metallicity fluctuations at subgrid scales, and discussed the treatment of each term in the equation. The core of our subgrid model is equation (\[filteredprimordial\]) for the filtered pristine fraction (i.e., the pristine fraction in each computational cell), which was derived from the filtered PDF equation. 
Again, we repeat it here for convenience: $$\begin{gathered} \frac {\partial (\overline {\rho} \hspace{0.5mm}\widetilde {P})} {\partial t} + \frac{\partial}{\partial x_i} \left( \overline{\rho} \hspace{0.5mm} \widetilde{P} \hspace{0.5mm}\widetilde{v}_i \right) = \frac{\partial}{\partial x_i} \left( \overline{\rho} (\gamma + \gamma_{\rm P}) \frac{\partial \widetilde{P}}{\partial x_i} \right) \hspace{3cm}\notag \\ \hspace{2cm} - \frac{n_{\rm s}}{\tau_{\rm scon}} \widetilde{P}\left(1- \widetilde{P}^{1/n_{\rm s}} \right) - \overline{\dot{n}}_{\rm SN} m_{\rm ej} \widetilde{P} , \end{gathered}$$ where $n_{\rm s}$ and $\tau_{\rm scon}$ are parameters for the subgrid pollution, corresponding to $n$ and $\tau$ in our convolution model, $\gamma_{\rm P}$ is the eddy diffusivity for the pristine fraction, $\overline{\dot{n}}_{\rm SN}({\bf x}, t)$ is the filtered number rate of supernova explosions per unit volume and each supernova is assumed to have an ejecta mass $m_{\rm ej}$. This equation adopts the commonly-used eddy-diffusivity model for the transport effect of subgrid turbulent motions, and employs the convolution model for the pollution of the pristine gas within each cell. The implementation of our subgrid model was illustrated in the context of a one-equation LES model for the interstellar turbulence, which evolves the kinetic energy of subgrid turbulent motions. Together with the resolved temperature field, the subgrid kinetic energy specifies the turbulence properties in each cell, which are needed to calculate the eddy-diffusivity in the transport term and to determine the parameters in the convolution model for the subgrid pollution. The convolution model parameters depend on the metal/supernova sources in each cell, and can be evaluated using our simulation results. 
The resulting physically-realistic model for the evolution of the unresolved primordial fraction serves as a prototype for future simulations aimed at interpreting many observations currently probing the nature of early galaxies. The continuing discovery of star-forming galaxies at $z \approx 7-10$ in broad-band photometric searches (as in the Hubble UDF12 survey; Ellis 2013), for example, suggests that observations of galaxies with significant primordial fractions should soon become available. The situation for galaxies selected on the basis of strong Ly$\alpha$ emission (Cowie & Hu 1998; Rhoads 2000) is even more promising, and it may only be a matter of time before several such galaxies are clearly identified as containing primordial stars recognizable by their large Ly$\alpha$ equivalent width and weak He II emission (Scannapieco 2003; Jimenez & Haiman 2006). In fact, a recent detailed analysis of deep Subaru images by Inoue (2011) strongly supports the interpretation that the mass fraction of stellar populations with extremely small metal abundances in $z \approx 3$ Lyman alpha emitters may be 1-10%, based on their very strong rest-frame Lyman continua. Based on similar diagnostics, Kashikawa et al. (2012) recently proposed a $z = 6.5$ Ly$\alpha$ emitter as a Pop III candidate, although enhanced Ly$\alpha$ emission from a clumpy, dusty medium (Neufeld 1991; Hansen & Oh 2006) cannot be ruled out conclusively in this case. If any of these galaxies are convincingly demonstrated to contain primordial stars, their evolution could only be simulated using an approach such as the one outlined here. Currently, a more direct constraint on the evolution of primordial gas is based on the absence of metal lines in absorption line systems in the intergalactic medium. Fumagalli (2011) used this approach to obtain upper limits of $Z< 10^{-6}$ by mass in two Lyman limit systems associated with quasars at $z \sim 3.1$ and 3.4.
Simcoe (2012) used the lack of metal lines in a $z \approx 7$ quasar spectrum that shows a large neutral hydrogen column density to obtain an upper limit of $Z \approx 10^{-5} -10^{-6},$ depending on whether the gas is bound in a galaxy or is diffuse intergalactic gas at that redshift. The implications of these measurements can only be fully explored through models such as ours, which capture the unresolved, unmixed fraction. Finally, at least four examples of Galactic stars with \[Fe/H\] $<$ -4.5 ($Z \lsim 10^{-6.5}$) are known (Christlieb et al. 2002; Frebel et al. 2008; Norris et al. 2007; Caffau et al. 2011), although only one is not enhanced in carbon. Recently, Yong et al. (2012) have shown convincingly that the Milky Way metallicity pdf is still decreasing smoothly down to at least \[Fe/H\] = -4.1 ($Z \approx 10^{-6}$), without the sudden cutoff claimed in earlier work. Once the rather severe selection effects are understood, these measurements could also be directly compared with our models, and even allow their two main parameters to be calibrated outside of numerical simulations. In fact, our proposed large-eddy simulation is expected to give reliable predictions for any physical problem, astrophysical or otherwise, in which the unresolved, low-concentration tail of the pdf needs to be tracked. Its numerical implementation and targeted application represent an extremely promising avenue for future studies. We acknowledge support from NASA under theory Grant No. NNX09AD106 and astrobiology institute grant 08-NAI5-0018 and from the National Science Foundation under grant AST 11-03608. All simulations were conducted at the Arizona State University Advanced Computing Center and the Texas Advanced Computing Center, using the FLASH code, a product of the DOE ASC/Alliances funded Center for Astrophysical Thermonuclear Flashes at the University of Chicago.
Filtered PDF Equation ===================== We formulate a PDF approach for large-eddy simulations of turbulent mixing. LESs based on the PDF method have been applied to study reacting turbulent flows (e.g., Gao & O’Brien 1993, Colucci et al. 1998, Jaberi et al. 1999, Pitsch 2006). We first derive an equation for the local fine-grained concentration PDF, and then apply the filtering procedure to obtain an exact equation for the filtered PDF at the resolution scale. The derivation is similar to that in Appendix A of PSS for the equation of the concentration PDF defined in a statistical ensemble. We start with the definition of the fine-grained concentration PDF as a delta function, $$\phi(Z;{\bf x},t)= \delta (Z-C({\bf x},t)), \label{volumefinepdf}$$ because the concentration field in a given turbulent flow is single-valued at a given position and time (PSS). Here $Z$ is the sampling variable. Since $\phi(Z;{\bf x},t)$ depends on $t$ only through the variable $Z-C({\bf x},t)$, the time derivative of $\phi(Z;{\bf x},t)$ can be written as, $$\frac {\partial \phi(Z;{\bf x},t)}{\partial t} = -\frac {\partial \phi(Z;{\bf x},t)} {\partial Z} \frac {\partial C({\bf x},t) } {\partial t}. \label{pdftimederivative}$$ Similarly, the spatial gradient of $\phi$ is given by $$\frac{\partial \phi(Z;{\bf x},t)}{\partial x_i} = -\frac {\partial \phi(Z;{\bf x},t)} {\partial Z} \frac{\partial C({\bf x},t)} {\partial x_i}. \label{pdfgradient}$$ Using eqs. (\[pdftimederivative\]) and (\[pdfgradient\]) and the advection-diffusion equation (\[advection\]), we have $$\frac {\partial \phi(Z;{\bf x},t)}{\partial t} + v_i \frac{\partial \phi(Z;{\bf x},t)} { \partial x_i} = - \frac{\partial}{\partial Z} \left[\phi(Z;{\bf x},t) \left(\frac{1}{\rho} \frac{\partial} {\partial x_i} \left(\rho \gamma \frac{\partial C}{\partial x_i}\right)+S\right) \right], \label{finegraineq}$$ where we used the fact that, except for $\phi(Z;{\bf x},t)$, all the quantities on the r.h.s. are independent of $Z$. Combining eq.
(\[finegraineq\]) with the continuity equation, we obtain $$\frac {\partial (\rho \phi)} {\partial t} + \frac{\partial (\rho \phi v_i )}{\partial x_i} = - \frac{\partial}{\partial Z} \left[ \rho \phi \left(\frac{1}{\rho} \frac{\partial} {\partial x_i} \left(\rho \gamma \frac{ \partial C } {\partial x_i} \right)+S\right)\right], \label{rhofinegraineq}$$ which was also derived in Appendix A of PSS. The diffusivity term in eq. (\[rhofinegraineq\]) can be rewritten as, $$- \frac{\partial}{\partial Z} \left[\rho \phi \left(\frac{1}{\rho} \frac{\partial}{\partial x_i} \left(\rho \gamma \frac{\partial C}{\partial x_i} \right)\right)\right] = \frac{\partial}{\partial x_i} \left(\rho \gamma \frac{\partial \phi}{\partial x_i} \right)- \frac{\partial^2}{\partial Z^2} \left(\rho \phi \gamma \left(\frac{\partial C}{\partial x_i}\right)^2 \right), \label{diffusivity}$$ where the first term on the r.h.s. is a spatial diffusion of the local PDF. Note that eq. (\[ensemblediffusivityterm\]) in §2.2 is derived by taking the ensemble average of this equation and using the definition and properties of the conditional ensemble average (see Appendix A of PSS). We next apply a filtering procedure to eq. (\[finegraineq\]). A convolution of $\phi(Z; {\bf x}, t)$ with the filter function, $G$, gives a filtered PDF, $\overline{\phi} (Z; {\bf x}, t) = \int_V \phi(Z; {\bf x}', t) G( {\bf x}- {\bf x}') dx'^3$, characterizing the concentration fluctuations within regions of the filter size (or resolution scale). For compressible turbulence, we define a filtered PDF with density weighting (Jaberi et al. 1999), $$\widetilde{\phi} (Z; {\bf x}, t) \equiv \frac { \overline{\rho \phi} } {\overline{\rho}}, \label{densweightedPDF}$$ which is a specific example of eq. (\[Favrefilter\]). Using eqs. (\[filteredvariables\]) and (\[densweightedPDF\]) in eq. 
(\[rhofinegraineq\]), we obtain the filtered PDF equation, $$\frac {\partial \left( \overline{\rho} \hspace{0.5mm} \widetilde{\phi} \right) } {\partial t} + \frac{\partial \left( \overline{ \rho v_i \phi } \right)}{\partial x_i} = - \frac{\partial}{\partial Z} \left( \overline { \phi \frac{\partial}{\partial x_i} \left(\rho \gamma \frac{ \partial C}{\partial x_i} \right)} \right) - \frac{\partial \left( \overline{\rho S \phi } \right)}{\partial Z} . \label{rhofiltered}$$ To write the equation in a more convenient form, we introduce conditional filtering based on the local concentration values. For any variable $A$ in the flow, we define a conditionally filtered quantity, $$\overline{A|C=Z} = \frac {\overline{A \phi (Z;{\bf x},t) } } {\overline{\phi}}. \label{conditionalvolume}$$ Since the fine-grained PDF, $\phi$, is a delta function, the definition is straightforward to understand: the conditionally filtered variable is the average over the set of points within a filter size satisfying $C ({\bf x}, t) = Z$. It is analogous to the conditional average defined in the context of a statistical ensemble (see §2 & PSS). We further introduce a density-weighted conditional filtering, $$\overline{[A|C=Z]}_{\rho} = \frac{ \overline{\rho A|C=Z} } {\overline{\rho|C=Z} } = \frac { \overline { \rho \phi A} } {\overline {\rho \phi} }, \label{conditionalmass}$$ where the last step follows from eq. (\[conditionalvolume\]). This definition is similar to eq. (\[densweightconditionalave\]) for the density-weighted conditional average over an ensemble. Combining eqs. (\[densweightedPDF\]) and (\[conditionalmass\]), we have $\overline{\rho \phi A } = \overline{\rho} \hspace{0.5mm} \widetilde{\phi} \hspace{0.5mm}\overline{[A|C=Z]}_\rho$. Applying this relation to the last three terms in eq.
(\[rhofiltered\]), we obtain, $$\frac {\partial (\overline {\rho} \hspace{0.5mm}\widetilde {\phi})} {\partial t} + \frac{\partial}{\partial x_i} \left( \overline{\rho} \hspace{0.5mm} \widetilde{\phi} \hspace{0.5mm}\overline{[v_i|C=Z]}_{\rho} \right) = - \frac{\partial}{\partial Z} \left( \overline{\rho} \hspace{0.5mm} \widetilde{\phi} \hspace{0.5mm} \overline { \left[\frac {1}{\rho} \frac{\partial} {\partial x_i} \left(\rho \gamma \frac{\partial C} {\partial x_i} \right) \left\vert \vphantom{\frac{1}{1}} \right. C=Z \right]}_{\rho} \right) - \frac{\partial}{\partial Z} \left( \overline{\rho} \hspace{0.5mm}\widetilde{\phi} \hspace{0.5mm} \overline{ [S|C=Z]} _{\rho} \right), \label{rhofilteredeq}$$ which is equivalent to eq. (22) in Jaberi et al. (1999). Using eq. (\[diffusivity\]) for the diffusivity term gives, $$\begin{array}{lll} {\displaystyle \frac {\partial (\overline {\rho} \hspace{0.5mm}\widetilde {\phi})} {\partial t} + \frac{\partial}{\partial x_i} \left( \overline{\rho} \hspace{0.5mm} \widetilde{\phi} \hspace{0.5mm}\overline{[v_i|C=Z]}_{\rho} \right) = \frac{\partial}{\partial x_i} \left( \overline{\rho \gamma \frac{\partial \phi}{\partial x_i}} \right) - \frac{\partial^2}{\partial Z^2} \left(\overline{\rho} \hspace{0.5mm} \widetilde{\phi} \overline{\left[\gamma \left(\frac{\partial C}{\partial x_i} \right)^2 \left\vert\vphantom{\frac{1}{1}}\right. C=Z \right]}_\rho \right)} \\ \hspace{5.5cm} {\displaystyle - \frac{\partial}{\partial Z} \left( \overline{\rho} \hspace{0.5mm}\widetilde{\phi} \hspace{0.5mm} \overline{ [S|C=Z]} _{\rho} \right)}. \end{array} \label{rhofilteredeqdiffusivity}$$ Note that eq. (\[rhofilteredeq\]) becomes identical to eq. (\[pdfeq\]) for the ensemble-defined PDF, if we replace $\overline{\rho}$, $\widetilde{\phi}$, and $\overline{[\cdot \cdot \cdot |C=Z]}_{\rho}$ by $\langle \rho \rangle$, $\Phi$, and $\langle \cdot \cdot \cdot |C=Z \rangle_{\rho}$, respectively. 
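As a consistency check, the diffusivity identity eq. (\[diffusivity\]) used in these equations can be verified directly from eq. (\[pdfgradient\]), together with the fact that $\rho$, $\gamma$, and $C$ are independent of the sampling variable $Z$ (a sketch, with summation over $i$ implied):

$$\begin{aligned}
\frac{\partial}{\partial x_i}\left(\rho \gamma \frac{\partial \phi}{\partial x_i}\right)
&= -\frac{\partial}{\partial x_i}\left(\rho \gamma \, \frac{\partial \phi}{\partial Z}\, \frac{\partial C}{\partial x_i}\right)
= -\frac{\partial \phi}{\partial Z}\, \frac{\partial}{\partial x_i}\left(\rho \gamma \frac{\partial C}{\partial x_i}\right)
+ \rho \gamma \left(\frac{\partial C}{\partial x_i}\right)^2 \frac{\partial^2 \phi}{\partial Z^2} \\
&= -\frac{\partial}{\partial Z} \left[\rho \phi \left(\frac{1}{\rho} \frac{\partial}{\partial x_i} \left(\rho \gamma \frac{\partial C}{\partial x_i} \right)\right)\right]
+ \frac{\partial^2}{\partial Z^2} \left(\rho \phi \gamma \left(\frac{\partial C}{\partial x_i}\right)^2 \right),
\end{aligned}$$

where the second equality uses $\partial(\partial \phi/\partial Z)/\partial x_i = -(\partial^2 \phi/\partial Z^2)\,\partial C/\partial x_i$, and the last line pulls the $Z$-independent factors inside the $Z$-derivatives. Rearranging reproduces eq. (\[diffusivity\]).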
The equivalence between the filtered PDF and the ensemble-defined PDF has been discussed in §2, based on the ergodic theorem and the assumption that statistical homogeneity is restored at the filter scale. Similar to the case of the ensemble PDF equation, the advection and diffusivity terms in eqs. (\[rhofilteredeq\]) and (\[rhofilteredeqdiffusivity\]) need to be modeled due to the closure problem. In §7, we adopted an eddy-diffusivity assumption for the advection term, and the self-convolution models discussed in §3 may be applied to approximate the diffusivity term. Using the convolution PDF models and the results of our simulations, a subgrid model is constructed in §7 to investigate the pollution of primordial gas in early galaxies. [99]{} Abel, T., Bryan, G., & Norman, M. 2000, ApJ, 540, 39 Bond, J. R., Arnett, W. D., & Carr, B. J. 1984, ApJ, 280, 825 Bromm, V., Coppi, P. S., & Larson, R. B. 2002, ApJ, 564, 23 Bromm, V., & Loeb, A. 2003, , 425, 812 Bromm, V. and Yoshida, N. 2011 ARAA, 49, 373. Brook, C. B., Kawata, D., Scannapieco, E., Martel, H., & Gibson, B. K. 2007, ,661, 10 Caffau, E. et al. 2011, Nature, 477, 67 Campisi, M. A., Maio, U., Salvaterra, R., & Ciardi, B. 2011, MNRAS, 416, 2760 Cayrel, R. et al. 2004, A&A, 416, 1117 Chai, X & Mahesh, K. 2012, J. Fluid Mech., 699, 385 Chen, H., Chen, S. & Kraichnan, R. H. 1989, Phys. Rev. Lett., 63, 2657 Christlieb, N., Bessell, M. S., Beers, T. C., et al. 2002, , 419, 904 Clark, P. C., Glover, S. C. O., Klessen, R. S., & Bromm, V. 2011a, ApJ, 727, 110 Clark, P. C. et al. 2011b, Science, 331, 1040 Colella, P. & Glaz, H. M. 1985, J. Comput. Phys., 59, 264 Colella, P. & Woodward, P. R. 1984, J. Comput. Phys., 54, 174 Colucci, P. J., Jaberi, F. A., Givi, P. & Pope, S. B. 1998, Phys. Fluids, 10, 499 Cowie, L. L., & Hu, E. M. 1998, AJ, 115, 1319 Curl, S. 1963, AIChE J., 9, 175 de Avillez, M. & Mac Low, M.-M. 2002, ApJ, 581, 1047 De Stefano, G., Vasilyev, O. V. & D. E. 2008, Phys. Fluids, 20, 045102 Dimonte, G. 
& Tipton, R. 2006, Phys. Fluids, 18, 085101 Dopazo, C. 1979, Phys. Fluids, 22, 20 Dopazo, C. & O’Brien, E. E. 1974, Acta Astronautica, 1, 1239 Dopazo, C., Valino, L. & Fueyo, N. 1997, Int. J. Mod. Phys. B, 11, 2975 Duplat, J. & Villermaux, E. 2008, J. Fluid Mech., 617, 51 Eidson, T. M. 1985, J. Fluid Mech., 158, 245 Ellis, R. S., McLure, R. J., Dunlop, J. S., et al. 2013, ApJL, 763, L7 Erlebacher, G., Hussaini, M. Y., Speziale, C. G. & Zang, T. A. 1992, J. Fluid Mech., 238, 155 Fang, Y. & Menon, S. 2006, AIAA paper 2006-116 Federrath, C., Roman-Duval, J., Klessen, R. S., Schmidt, W., and Mac Low, M.-M. 2010, A & A, 512, 81 Ferrara, A. 2001, in ASP Conf. Ser. 222, The Physics of Galaxy Formation, ed. M. Umemura & H. Susa (San Francisco: ASP), 301 Fox, R. O. 2003, Computational Models for Turbulent Reacting Flows (Cambridge Univ. Press: Cambridge, UK) Frebel, A. et al. 2008, ApJ, 684, 588 Frebel, A., Johnson, J., & Bromm, V. 2009, MNRAS, 392, L50 Fryxell, B., Muller, E. & Arnett, D. 1989, nuas.conf, 5, 100 Fryxell, B., Olson, K., Ricker, P., Timmes, F. X., Zingale, M., Lamb, D. Q., MacNeice, P., Rosner, R., Truran, J. W. & Tufo, H. 2000, ApJS, 131, 273 Fumagalli, M., O’Meara, J. M., & Prochaska, J. X. 2011, Science, 334, 1245 Gallerano, F., Pasero, E. & Cannata, G. 2005, Continuum Mech. Thermodyn., 17, 101 Gao, F. & O’Brien, E. E. 1993, Phys. Fluids A, 5, 1282 Garnier, E., Adams, N., & Sagaut, P. 2009, Large Eddy Simulation for Compressible Flows (Scientific Computation), 1st edn. Springer. Ghosal, S., Lund, T. S., Moin, P., & Akselvoll, K. 1995, J. Fluid Mech., 286, 229 Genin, F. & Menon, S. 2010, Journal of Turbulence, 11, 1 Germano, M. 1992, J. Fluid Mech., 238, 325 Germano, M., Piomelli, U., Moin, P., & Cabot, W. H. 1991, Phys. Fluids A, 3, 1760 Girimaji, S. S. 1991, Combustion Science and Technology, 78, 177 Greif, T. H., Glover, S. C. O., Bromm, V., & Klessen, R. S. 2009, MNRAS, 392, 1381 Greif, T. H., Glover, S. C. O., Bromm, V., & Klessen, R. S.
2010, ApJ, 716, 510 Greif, T. H. 2012, MNRAS, 424, 399 Hanayama, H., & Tomisaka, K. 2006, ApJ, 641, 905. Hansen, M., & Oh, S. P. 2006, , 367, 979 Haworth, D. C. 2010, Prog. Energy Combust. Sci., 36, 168 Hosokawa, T., Omukai, K., Yoshida, N., & Yorke, H. W. 2011, Science, 334, 1250 Heger, A., & Woosley, S. E. 2002, , 567, 532 Hutchins, J. B. 1976, ApJ, 241, 103 Ievlev, V. M. 1973, Dokl. Akad. Nauk SSSR, 208, 1044 Inoue, A. K., Kousai, K., Iwata, I. et al. 2011, MNRAS, 411, 2336. Jaberi, F. A., Colucci, P. J., James, S., Givi, P., & Pope, S. B. 1999, J. Fluid Mech., 401, 85 Janicka, J., Kolbe, W. & Kollmann, W. 1979, Journal of Non-Equilibrium Thermodynamics, 4, 47 Jimenez, C., Ducros, F., Cuenot, B., & Bedat, B. 2001, Phys. Fluids, 13, 1748 Jimenez, R., & Haiman, Z. 2006, Nature, 440, 501 Johnson, J. L., & Bromm, V. 2006, MNRAS, 366, 247 Kashikawa, N. et al. 2012, ApJ, 761, 85. Kim, W.-W., & Menon, S. 1999, Int. J. Numer. Meth. Fluids, 31, 983 Kollmann, W. 1990, Theoret. Comput. Fluid Dynamics, 1, 249 Kosovic, B., Pullin, D. & Samtaney, R. 2002, Phys. Fluids, 14, 1511 Lesieur, M. & Metais, O. 1996, Annu. Rev. Fluid. Mech., 28, 45 Lilly, D. K. 1966, NCAR Manuscript No. 123, National Center for Atmospheric Research, Boulder, CO. Lilly, D. K. 1992, Phys. Fluids A, 4, 633 Lundgren, T. S. 1967, Phys. Fluids, 12, 485 Maeder, A. 1992, , 264, 105 McKee, C. F., & Tan, J. C. 2008, ApJ, 681, 771 Maio, U., Ciardi, B., Dolag, K., Tornatore, L., & Khochfar, S. 2010, , 407, 1003 Meneveau, C. & Katz, J. 2000, Annu. Rev. Fluid Mech., 32, 1 Menon, S. & Kim, W.-W. 1996, AIAA Paper 96-0425 Moeng, C-H. 1984, J. Atmospheric Sciences, 41, 2052 Moin, P., Squires, K., Cabot, W., & Lee, S. 1991, Phys. Fluids A, 3, 2746 Monin, A. S. 1967, Dokl. Akad. Nauk SSSR, 177, 1036 Nagao, T. et al. 2008, ApJ, 680, 100 Neufeld, D. A. 1991, , 370, L85 Norman, C. A., & Spaans, M. 1997, ApJ, 480, 145. Norris, J. E., Christlieb, N., Korn, A. J., et al. 2007, ApJ, 670, 774. O’Brien, E. E.
1980, in Turbulent Reacting Flows, p. 185, eds. Libby, P. A. and Williams, F. A. (Springer-Verlag, New York) Oey, M. S. 2000, ApJ, 542, L25 Oey, M. S. 2003, MNRAS, 339, 849 Omukai, K., Tsuribe, T., Schneider, R., & Ferrara, A. 2005, ApJ, 626, 627 Padoan, P., Nordlund, A., Kritsuk, A. G., Norman, M. L., & Li, P. S. 2007, ApJ, 661, 972 Pan, L. & Scalo, J. 2007, ApJ, 654, 29 Pan, L. & Scannapieco, E. 2010, ApJ, 721, 1765 (PS10) Pan, L. & Scannapieco, E. 2011, Phys. Rev. E, 83, 045302(R) Pan, L., Scannapieco, E., & Scalo, J. 2012, J. Fluid Mech., 700, 459 (PSS) Park, N. & Mahesh, K. 2007, AIAA paper 2007-0722 Pierce, C. D. & Moin, P. 1998, Phys. Fluids, 10, 3041 Piomelli, U., Cabot, W. H., Moin, P., & Lee, S. 1991, Phys. Fluids A, 3, 1766 Pitsch, H. 2006, Annu. Rev. Fluid Mech., 38, 453 Pitsch, H. & Steiner, H. 2000, Phys. Fluids, 12, 2541 Pope, S. B. 1976, Combustion and Flame, 27, 299 Pope, S. B. 1985, Prog. Energy Combust. Sci., 11, 119 Pope, S. B. 2000, Turbulent flows. (Cambridge University Press) Rhoads, J. E., Malhotra, S., Dey, A., Stern, D., Spinrad, H., & Jannuzi, B. T. 2000, ApJ, 545, L85 Ritter, J. S., Safranek-Shrader, C., Gnat, O., Milosavljevic, M. & Bromm, V. 2012, ApJ, 761, 56 Scalo, J. & Biswas, A. 2002, MNRAS, 332, 769 Scannapieco, E., Schneider, R., & Ferrara, A. 2003, ApJ, 589, 35 Scannapieco, E., Madau, P., Woosley, S., Heger, A., & Ferrara, A. 2005, ApJ, 633, 1031 Scannapieco, E., Kawata, D., Brook, C. B., Schneider, R., Ferrara, A., & Gibson, B. K. 2006, ApJ, 653, 285 Scannapieco, E. & Bruggen, M. 2010, MNRAS, 405, 1634 Schaerer, D. 2002, A&A, 382, 28 Schmidt, W., Niemeyer, J. C., & Hillebrandt, W. 2006, A&A, 450, 265 Schneider, R., Ferrara, A., Salvaterra, R., Omukai, K., & Bromm, V. 2003, Nature, 442, 869 Schneider, R., Omukai, K., Bianchi, S., & Valiante, R. 2012, MNRAS, 419, 1566. Schumann, U. 1975, J. Computational Physics, 18, 376 Simcoe, R. A., Sullivan, P. W., Cooksey, K. L., et al. 2012, , 492, 79 Smagorinsky, J. 1963, Mon. Weather Rev.
91, 99 Spaans, M., & Silk, J. 2005, ApJ, 626, 644 Speziale, C. G., Hussaini, M. Y., Erlebacher, G., & Zang, T. A. 1988, Phys. Fluids, 31, 940 Stacy, A., Greif, T. H., & Bromm, V. 2010, MNRAS, 403, 45 Thornton, K., Gaudlitz, M., Janka, H.-Th., & Steinmetz, M. 1998, ApJ, 500, 95. Tinsley, B. M. 1980, , 5, 287 Trenti, M. & Stiavelli, M. 2007, ApJ, 667, 38 Trenti, M., & Stiavelli, M. 2009, , 694, 879 Turk, M. J., Abel, T., & O’Shea, B. 2009, Science, 325, 601 Venaille, A. & Sommeria, J. 2007, Phys. Fluids, 19, 028101 Venaille, A. & Sommeria, J. 2008, Phys. Rev. Lett., 100, 234506 Villermaux, E. & Duplat, J. 2003, Phys. Rev. Lett., 91, 184501 Vreman, B., Geurts, N., & Kuerten, H. 1997, J. Fluid Mech., 339, 357 Walker, T. P., Steigman, G., Kang, Ho-S., Schramm, D. M., & Olive, K. 1991, ApJ, 376, 51 Watanabe, T. and Gotoh, T., 2004, New J. Phys., 6, 40. Whalen, D. J., Even, W., Frey, L. H., et al. 2012, ApJ, submitted (arXiv:1211.4979) Wise, J. H. & Abel, T. 2008, ApJ, 685, 40 Woosley, S. E., & Weaver, T. A. 1995, , 101, 181 Yong, D., Norris, J. E., Bessell, M. S., Christlieb, N., Asplund, M., Beers, T. C., Barklem, P. S., Frebel, A. and Ryan, S. G. 2012, ApJ, 762, 27. Yoshizawa, A. 1986, Phys. Fluids, 29, 2152
--- abstract: 'We consider the class of optimization problems arising from computationally intensive $\ell_1$-regularized $M$-estimators, where the function or gradient values are very expensive to compute. A particular instance of interest is the $\ell_1$-regularized MLE for learning Conditional Random Fields (CRFs), which are a popular class of statistical models for varied structured prediction problems such as sequence labeling, alignment, and classification with label taxonomy. $\ell_1$-regularized MLEs for CRFs are particularly expensive to optimize since computing the gradient values requires an expensive inference step. In this work, we propose the use of a carefully constructed proximal quasi-Newton algorithm for such computationally intensive $M$-estimation problems, where we employ an aggressive active set selection technique. In a key contribution of the paper, we show that the proximal quasi-Newton method is provably *super-linearly convergent*, even in the absence of strong convexity, by leveraging a restricted variant of strong convexity. In our experiments, the proposed algorithm converges considerably faster than current state-of-the-art on the problems of sequence labeling and hierarchical classification.' author: - | Kai Zhong ^1^ Ian E.H. Yen ^2^ Inderjit S. Dhillon ^2^ Pradeep Ravikumar ^2^\ ^1^ Institute for Computational Engineering & Sciences ^2^ Department of Computer Science\ University of Texas at Austin\ `zhongkai@ices.utexas.edu`, `{ianyen,inderjit,pradeepr}@cs.utexas.edu`\ title: 'Proximal Quasi-Newton for Computationally Intensive $\ell_1$-regularized $M$-estimators' --- Introduction ============ $\ell_1$-regularized $M$-estimators have attracted considerable interest in recent years due to their ability to fit large-scale statistical models, where the underlying model parameters are sparse.
The optimization problem underlying these $\ell_1$-regularized $M$-estimators takes the form: $$\label{generalObj} \min_{{\boldsymbol{w}}} f({\boldsymbol{w}}) := \lambda \| {\boldsymbol{w}}\|_1 + \ell ({\boldsymbol{w}}),$$ where $\ell({\boldsymbol{w}})$ is a convex differentiable loss function. In this paper, we are particularly interested in the case where the function or gradient values are very expensive to compute; we refer to these functions as computationally intensive functions, or **CI** functions for short. A particular case of interest is $\ell_1$-regularized MLEs for Conditional Random Fields (CRFs), where computing the gradient requires an expensive inference step. There has been a line of recent work on computationally efficient methods for solving (\[generalObj\]), including [@lhac; @l1comparison; @glmnet; @l1sgd; @owlqn; @pqn]. It has now become well understood that it is key to leverage the sparsity of the optimal solution by maintaining sparse intermediate iterates [@lhac; @quic; @l1comparison]. Coordinate Descent (CD) based methods, like CDN [@l1comparison], maintain the sparsity of intermediate iterates by focusing on an active set of working variables. A caveat with such methods is that, for CI functions, each coordinate update typically requires a call to the inference oracle to evaluate the partial derivative of a single coordinate. One approach, adopted in [@bcd] to address this, is Blockwise Coordinate Descent, which updates a block of variables at a time while ignoring second-order effects; this, however, sacrifices the convergence guarantee. Newton-type methods have also attracted a surge of interest in recent years [@quic; @glmnet], but these require computing the exact Hessian or Hessian-vector product, which is very expensive for CI functions. This then suggests the use of quasi-Newton methods, popular instances of which include OWL-QN [@owlqn], which is adapted from $\ell_2$-regularized L-BFGS, as well as Projected Quasi-Newton (PQN) [@pqn].
A key caveat with OWL-QN and PQN, however, is that they do not exploit the sparsity of the underlying solution. In this paper, we consider the class of *Proximal Quasi-Newton* (Prox-QN) methods, which we argue seem particularly well-suited to such CI functions, for the following three reasons. Firstly, it requires gradient evaluations only once in each outer iteration. Secondly, it is a second-order method, which has asymptotic superlinear convergence. Thirdly, it can employ some active-set strategy to reduce the time complexity from $O(d)$ to $O(nnz)$, where $d$ is the number of parameters and $nnz$ is the number of non-zero parameters. While there has been some recent work on Prox-QN algorithms [@lhac; @newtontype], we carefully construct an implementation that is particularly suited to CI $\ell_1$-regularized $M$-estimators. We carefully maintain the sparsity of intermediate iterates, and at the same time reduce the gradient evaluation time. A key facet of our approach is our aggressive active set selection (which we also term a “shrinking strategy”) to reduce the number of active variables under consideration at any iteration, and correspondingly the number of evaluations of partial gradients in each iteration. Our strategy is particularly aggressive in that it runs over multiple epochs, and in each epoch, chooses the next working set as a subset of the current working set rather than the whole set; while at the end of an epoch, allows for other variables to come in. As a result, in most iterations, our aggressive shrinking strategy only requires the evaluation of partial gradients in the current working set. Moreover, we adapt the L-BFGS update to the shrinking procedure such that the update can be conducted *without any loss of accuracy* caused by aggressive shrinking. Finally, we store our data in a *feature-indexed* structure to combine data sparsity as well as iterate sparsity.
[@newtontype_theory] showed global convergence and asymptotic superlinear convergence for Prox-QN methods under the assumption that the loss function is *strongly convex*. However, this assumption is known to fail to hold in high-dimensional sampling settings, where the Hessian is typically rank-deficient, or indeed even in low-dimensional settings where there are redundant features. In a key contribution of the paper, we provide provable guarantees of asymptotic superlinear convergence for the Prox-QN method, even without assuming strong convexity, but under a restricted variant of strong convexity, termed Constant Nullspace Strong Convexity (CNSC), which is typically satisfied by standard $M$-estimators. To summarize, our contributions are twofold. (a) We present a carefully constructed proximal quasi-Newton method for computationally intensive (CI) $\ell_1$-regularized $M$-estimators, which we empirically show to outperform many state-of-the-art methods on CRF problems. (b) We provide the first proof of asymptotic superlinear convergence for Prox-QN methods without strong convexity, but under a restricted variant of strong convexity, satisfied by typical $M$-estimators, including the $\ell_1$-regularized CRF MLEs. Proximal Quasi-Newton Method ============================ A proximal quasi-Newton approach to solve $M$-estimators of the form (\[generalObj\]) proceeds by iteratively constructing a quadratic approximation of the objective function to find the quasi-Newton direction, and then conducting a line search procedure to obtain the next iterate.
Given a solution estimate ${\boldsymbol{w}}_t$ at iteration $t$, the proximal quasi-Newton method computes a descent direction by minimizing the following regularized quadratic model, $$\label{inner} {\boldsymbol{d}}_t = \text{arg }\min_\Delta {\boldsymbol{g}}^T_t \Delta + \frac 1 2 \Delta^T B_t \Delta + \lambda\|{\boldsymbol{w}}_t+\Delta\|_1$$ where ${\boldsymbol{g}}_t = {\boldsymbol{g}}({\boldsymbol{w}}_t)$ is the gradient of $\ell({\boldsymbol{w}}_t)$ and $B_t$ is an approximation to the Hessian of $\ell({\boldsymbol{w}})$. $B_t$ is usually constructed by the L-BFGS algorithm. This subproblem can be efficiently solved by a randomized coordinate descent algorithm, as shown in Section \[cd\_inner\]. The next iterate is obtained from the backtracking line search procedure, ${\boldsymbol{w}}_{t+1} = {\boldsymbol{w}}_t +\alpha_t {\boldsymbol{d}}_t$, where the step size $\alpha_t $ is tried over $\{ \beta^0,\beta^1,\beta^2,... \}$ until the Armijo rule is satisfied, $$f({\boldsymbol{w}}_t + \alpha_t {\boldsymbol{d}}_t) \leq f({\boldsymbol{w}}_t) + \alpha_t \sigma \Delta_t,$$ where $0<\beta <1$, $0<\sigma<1$ and $\Delta_t = {\boldsymbol{g}}_t^T {\boldsymbol{d}}_t + \lambda( \|{\boldsymbol{w}}_t+{\boldsymbol{d}}_t\|_1 - \|{\boldsymbol{w}}_t\|_1)$.
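The outer loop just described can be sketched compactly. In the snippet below (a minimal illustration, not the paper's implementation; `solve_inner`, the function names, and the default constants are our own), the inner solver is treated as a black box returning the direction ${\boldsymbol{d}}_t$:

```python
import numpy as np

def prox_quasi_newton(w0, loss, grad, solve_inner, lam,
                      beta=0.5, sigma=1e-3, max_iter=100, tol=1e-8):
    """Outer proximal quasi-Newton loop with Armijo backtracking line search.
    solve_inner(w, g) must return a direction d approximately minimizing
    g^T d + 0.5 d^T B d + lam * ||w + d||_1 for some metric B."""
    w = w0.copy()
    for _ in range(max_iter):
        g = grad(w)
        d = solve_inner(w, g)
        # Delta_t from the Armijo rule stated above
        Delta = g @ d + lam * (np.abs(w + d).sum() - np.abs(w).sum())
        if Delta > -tol:          # no sufficient descent left: stop
            break
        f_w = loss(w) + lam * np.abs(w).sum()
        alpha = 1.0
        while (loss(w + alpha * d) + lam * np.abs(w + alpha * d).sum()
               > f_w + alpha * sigma * Delta) and alpha > 1e-12:
            alpha *= beta         # backtrack: try beta^0, beta^1, beta^2, ...
        w = w + alpha * d
    return w
```

With $B_t = I$, `solve_inner` reduces to a plain proximal-gradient (soft-thresholding) step, which gives a convenient way to smoke-test the loop on a small quadratic loss.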
BFGS update formula ------------------- $B_t$ can be efficiently updated by the gradients of the previous iterations according to the BFGS update [@L-BFGS], $$\label{BFGS} B_t = B_{t-1} - \frac{B_{t-1}{\boldsymbol{s}}_{t-1} {\boldsymbol{s}}^T_{t-1} B_{t-1} }{ {\boldsymbol{s}}_{t-1}^T B_{t-1} {\boldsymbol{s}}_{t-1} } + \frac{{\boldsymbol{y}}_{t-1} {\boldsymbol{y}}^T_{t-1}} { {\boldsymbol{y}}_{t-1}^T {\boldsymbol{s}}_{t-1}}$$ where ${\boldsymbol{s}}_t = {\boldsymbol{w}}_{t+1} - {\boldsymbol{w}}_t$ and ${\boldsymbol{y}}_t = {\boldsymbol{g}}_{t+1} - {\boldsymbol{g}}_t$.\ We use the compact formula for $B_t$ [@L-BFGS], $$B_t = B_0 - Q R Q^T = B_0 - Q \hat Q,$$ where $$Q := \left[ \begin{array}{cc} B_0 S_t & Y_t \end{array} \right], \; R := \left[ \begin{array}{cc} S_t^T B_0 S_t & L_t \\ L_t^T & -D_t \end{array} \right]^{-1} , \hat Q := RQ^T$$ $$S_t = \left[ {\boldsymbol{s}}_0, {\boldsymbol{s}}_1, ..., {\boldsymbol{s}}_{t-1}\right],\; Y_t = \left[ {\boldsymbol{y}}_0, {\boldsymbol{y}}_1, ..., {\boldsymbol{y}}_{t-1}\right]$$ $$D_t = diag[{\boldsymbol{s}}_0^T {\boldsymbol{y}}_0,...,{\boldsymbol{s}}^T_{t-1} {\boldsymbol{y}}_{t-1}]\text{ and } (L_t)_{i,j} = \begin{cases} {\boldsymbol{s}}^T_{i-1} {\boldsymbol{y}}_{j-1} & \text{if } i > j \\ 0 &\text{otherwise} \end{cases}$$ In our practical implementation, we apply limited-memory BFGS. It uses only the information of the most recent $m$ gradients, so that $Q$ and $\hat Q$ have sizes of only $d \times 2m$ and $2m \times d$, respectively. $B_0$ is usually set as $ \gamma_t I$ for computing $B_t$, where $\gamma_t = {\boldsymbol{y}}_{t-1}^T {\boldsymbol{s}}_{t-1} / {\boldsymbol{s}}_{t-1}^T {\boldsymbol{s}}_{t-1}$ [@L-BFGS]. As will be discussed in Section \[implement\], $Q$ ($\hat Q$) is updated just on the rows (columns) corresponding to the working set, ${\mathcal{A}}$. The time complexity of the L-BFGS update is $O(m^2 |{\mathcal{A}}| + m^3) $.
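The compact representation can be exercised numerically. The sketch below (our own helper, with no memory limiting, no pair-dropping, and no safeguard against a singular middle matrix) builds $Q$ and $R$ from the stored pairs and applies $B_t$ to a vector. A standard sanity check is the secant condition $B_t\,{\boldsymbol{s}}_{t-1} = {\boldsymbol{y}}_{t-1}$, which the BFGS update satisfies by construction:

```python
import numpy as np

def compact_lbfgs_matvec(gamma, S, Y, d):
    """Compute B_t @ d via the compact representation B_t = gamma*I - Q R Q^T,
    with B_0 = gamma*I, Q = [gamma*S, Y], and R the inverse of the middle
    matrix assembled from S^T Y (illustrative sketch only)."""
    SY = S.T @ Y
    D = np.diag(np.diag(SY))          # D_t: diagonal of s_i^T y_i
    L = np.tril(SY, k=-1)             # L_t: strictly lower triangle of S^T Y
    Q = np.hstack([gamma * S, Y])     # [B_0 S_t, Y_t] with B_0 = gamma*I
    M = np.block([[gamma * (S.T @ S), L],
                  [L.T,               -D]])
    R = np.linalg.inv(M)
    return gamma * d - Q @ (R @ (Q.T @ d))
```

In practice one would keep only the last $m$ columns of $S$ and $Y$ and update $M^{-1}$ incrementally rather than calling `np.linalg.inv` each time; the dense inverse here is purely for clarity.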
Coordinate Descent for Inner Problem {#cd_inner} ------------------------------------ Randomized coordinate descent is carefully employed to solve the inner problem by Tang and Scheinberg [@lhac]. In the update for coordinate $j$, ${\boldsymbol{d}}\leftarrow {\boldsymbol{d}}+ z^* {\boldsymbol{e}}_j$, $z^*$ is obtained by solving the one-dimensional problem, $$z^* = \text{arg} \min_z \frac{1}{2} (B_t)_{jj} z^2 + (({\boldsymbol{g}}_t)_j + (B_t {\boldsymbol{d}})_j) z + \lambda | ({\boldsymbol{w}}_t)_j + d_j + z |$$ This one-dimensional problem has a closed-form solution, $z^* = -c +\mathcal{S} (c-b/a,\lambda/a)$, where $\mathcal{S}$ is the soft-threshold function and $a=(B_t)_{jj} $, $b=({\boldsymbol{g}}_t)_j + (B_t {\boldsymbol{d}})_j$ and $c= ({\boldsymbol{w}}_t)_j + d_j$. For $B_0 = \gamma_t I$, the diagonal of $B_t$ can be computed by $(B_t)_{jj} = \gamma_t - {\boldsymbol{q}}_j^T \hat {\boldsymbol{q}}_j$, where ${\boldsymbol{q}}_j^T$ is the j-th row of $Q$ and $\hat {\boldsymbol{q}}_j$ is the j-th column of $\hat Q$. The second term in $b$, $(B_t {\boldsymbol{d}})_j$, can be computed as, $$(B_t {\boldsymbol{d}})_j = \gamma_t d_j - {\boldsymbol{q}}_j^T \hat Q {\boldsymbol{d}}=\gamma_t d_j - {\boldsymbol{q}}_j^T \hat {\boldsymbol{d}},$$ where $\hat {\boldsymbol{d}}:= \hat Q {\boldsymbol{d}}$. Since $\hat {\boldsymbol{d}}$ has dimension only $2m$, it is fast to update $(B_t {\boldsymbol{d}})_j $ by ${\boldsymbol{q}}_j$ and $\hat {\boldsymbol{d}}$. In each inner iteration, only $d_j$ is updated, so we have the fast update of $\hat {\boldsymbol{d}}$, $\hat {\boldsymbol{d}}\leftarrow \hat {\boldsymbol{d}}+ \hat {\boldsymbol{q}}_j z^*$. Since we only update the coordinates in the working set, the above algorithm has computational complexity of only $O(m|{\mathcal{A}}| \times inner\_ iter)$, where $inner\_iter$ is the number of iterations used for solving the inner problem.
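The closed-form coordinate minimizer takes only a few lines. This sketch (our own helper names; the scalars $a$, $b$, $c$ are assumed precomputed by the caller as described above) solves the one-dimensional subproblem:

```python
import numpy as np

def soft_threshold(x, tau):
    """Soft-threshold operator S(x, tau) = sign(x) * max(|x| - tau, 0)."""
    return np.sign(x) * max(abs(x) - tau, 0.0)

def coordinate_minimizer(a, b, c, lam):
    """Minimize 0.5*a*z**2 + b*z + lam*|c + z| over z, where
    a = (B_t)_{jj}, b = (g_t)_j + (B_t d)_j, and c = (w_t)_j + d_j.
    Returns z* = -c + S(c - b/a, lam/a)."""
    return -c + soft_threshold(c - b / a, lam / a)
```

The formula follows by substituting $u = c + z$, which turns the subproblem into a standard scalar lasso step $\min_u \frac{a}{2}(u - (c - b/a))^2 + \lambda |u|$.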
Implementation {#implement}
--------------

In this section, we discuss several key implementation details used in our algorithm to speed up the optimization.

**Shrinking Strategy**\
In each iteration, we select an active or working subset $\mathcal{A}$ of the set of all variables: only the variables in this set are updated in the current iteration. The variables in the complementary set, also called the fixed set, are all zero and are not updated. The use of such a shrinking strategy reduces the per-iteration complexity from $O(d)$ to $O(|{\mathcal{A}}|)$. Specifically, we (a) update the gradients just on the working set, (b) update $Q$ (resp. $\hat Q$) just on the rows (resp. columns) corresponding to the working set, and (c) compute the latest entries in $D_t$, $\gamma_t$, $L_t$ and $S_t^TS_t$ using just the working set rather than the whole coordinate set. The key facet of our shrinking strategy, however, is that we shrink the active set aggressively: at the next iteration, we set the active set to be a subset of the previous active set, so that $\mathcal{A}_t \subset \mathcal{A}_{t-1} $. Such an aggressive strategy is not guaranteed to weed out only irrelevant variables. Accordingly, we proceed in epochs. In each epoch, we progressively shrink the active set as above until the iterates appear to converge; at that point, we allow all the “shrunk” variables to come back and start a new epoch. Such a strategy was also called an $\epsilon$-cooling strategy by Fan et al. [@liblinear], where the shrinking stopping criterion is loose at the beginning and becomes progressively stricter each time all the variables are brought back. For the L-BFGS update, when a new epoch starts, the memory of L-BFGS is cleared to prevent any loss of accuracy. Because the entire gradient over all coordinates is evaluated at the first iteration of each new epoch, those iterations account for a significant portion of the total computation time.
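The epoch-based shrinking idea can be illustrated on a toy lasso problem. The sketch below (our own illustration; it uses plain proximal gradient steps rather than the paper's Prox-QN inner solver) aggressively shrinks the active set within each epoch and brings all variables back at the start of the next epoch:

```python
import numpy as np

# Toy lasso: min_w 0.5*||Xw - y||^2 + lam*||w||_1, solved by proximal
# gradient with epoch-based aggressive shrinking (illustrative only).
rng = np.random.default_rng(5)
N, d, lam = 20, 10, 0.5
X = rng.standard_normal((N, d))
y = rng.standard_normal(N)
step = 1.0 / np.linalg.eigvalsh(X.T @ X).max()

def prox(u, r):  # soft-threshold: prox of r*||.||_1
    return np.sign(u) * np.maximum(np.abs(u) - r, 0.0)

def grad(w):
    return X.T @ (X @ w - y)

w = np.zeros(d)
for epoch in range(10):              # each epoch brings all variables back
    active = np.arange(d)
    for _ in range(500):
        g = grad(w)
        w_new = w.copy()
        w_new[active] = prox(w[active] - step * g[active], step * lam)
        # Aggressive shrinking: drop coordinates that are zero and unmoved,
        # so the next active set is a subset of the current one.
        keep = (np.abs(w_new - w) > 1e-12) | (w_new != 0)
        active = np.where(keep)[0]
        w = w_new
        if active.size == 0:         # epoch converged; restart with full set
            break

# Reference: plain proximal gradient without any shrinking.
w_ref = np.zeros(d)
for _ in range(5000):
    w_ref = prox(w_ref - step * grad(w_ref), step * lam)

assert np.allclose(w, w_ref, atol=1e-4)
```

Within an epoch the active set only shrinks; the epoch restarts are what restore correctness when a variable was dropped too eagerly.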
Fortunately, our experiments show that the number of epochs is typically between 3 and 5.

**Inexact inner problem solution**\
Like many other proximal methods, e.g. GLMNET and QUIC, we solve the inner problem inexactly. This reduces the time complexity of the inner problem dramatically. The amount of inexactness is set by a heuristic which aims to balance the computation time of the inner problem across outer iterations. The computation time of the inner problem is determined by the number of inner iterations and the size of the working set. Thus, we set the number of inner iterations to $ inner\_ iter = \min\{ max\_ inner, \lfloor d/|{\mathcal{A}}| \rfloor \}$, where $max\_inner = 10$ in our experiments.

**Data structure for both model sparsity and data sparsity**\
In our implementation we take two sparsity patterns into consideration: (a) model sparsity, which accounts for the fact that most parameters are equal to zero in the optimal solution; and (b) data sparsity, wherein most feature values of any particular instance are zero. We use a *feature-indexed* data structure to take advantage of both sparsity patterns. Computations involving the data would be time-consuming if they ranged over all instances, including entries that are zero, so we leverage the sparsity of the data by using vectors of pairs, where each pair holds an index and the corresponding value. Traditionally, each vector represents an instance and the indices in its pairs are feature indices. In our implementation, however, to take both model sparsity and data sparsity into account, we use an inverted data structure, where each vector represents one feature (*feature-indexed*) and the indices in its pairs are instance indices. This data structure facilitates the computation of the gradient for a particular feature, which involves only the instances related to that feature. We summarize these steps in the algorithm below; a detailed version is given in Appendix \[detail\_alg\].
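The feature-indexed layout can be sketched as follows. In this toy example (ours; a squared loss stands in for the CRF loss), the per-feature partial gradient touches only the instances that actually carry that feature:

```python
import numpy as np
from collections import defaultdict

# Toy sparse design matrix (most entries are zero).
rng = np.random.default_rng(2)
N, d = 6, 5
X = rng.standard_normal((N, d)) * (rng.random((N, d)) < 0.4)

# Feature-indexed (inverted) structure: feature j -> [(instance i, value)].
by_feature = defaultdict(list)
for i in range(N):
    for j in range(d):
        if X[i, j] != 0.0:
            by_feature[j].append((i, X[i, j]))

# Partial gradient of 0.5*||Xw - y||^2 w.r.t. features in the working set,
# visiting only the instances related to each feature.
y = rng.standard_normal(N)
w = rng.standard_normal(d)
residual = X @ w - y                 # maintained incrementally in practice
working_set = [0, 3]
grad = {j: sum(v * residual[i] for i, v in by_feature[j])
        for j in working_set}

for j in working_set:
    assert np.isclose(grad[j], X[:, j] @ residual)
```

With a truly sparse feature, the inner sum visits far fewer than $N$ instances, which is the point of the inverted layout.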
**Input:** Dataset $\{ {\boldsymbol{x}}^{(i)},{\boldsymbol{y}}^{(i)} \}_{i=1,2,...,N}$, termination criterion $\epsilon$, $\lambda$ and L-BFGS memory size $m$.
**Output:** ${\boldsymbol{w}}^*$ converging to $\text{arg min}_{{\boldsymbol{w}}} f({\boldsymbol{w}})$.

1. Initialize ${\boldsymbol{w}}\leftarrow \mathbf{0}$, ${\boldsymbol{g}}\leftarrow \partial \ell({\boldsymbol{w}}) / \partial {\boldsymbol{w}}$, working set ${\mathcal{A}}\leftarrow \{ 1,2,...,d \}$, and $S$, $Y$, $Q$, $\hat Q \leftarrow \emptyset$.
2. Shrink the working set. If the shrinking stopping criterion is met, take all the shrunken variables back into the working set, clear the memory of L-BFGS, update the shrinking stopping criterion and continue.
3. Solve the inner problem over the working set and obtain the new direction ${\boldsymbol{d}}$.
4. Conduct a line search based on the Armijo rule and obtain the new iterate ${\boldsymbol{w}}$.
5. Update ${\boldsymbol{g}}$, $\mathbf{s}$, ${\boldsymbol{y}}$, $S$, $Y$, $Q$, $\hat Q$ and related matrices over the working set.
6. Repeat steps 2-5 until the termination criterion $\epsilon$ is met.

Convergence Analysis
====================

In this section, we analyze the convergence behavior of the proximal quasi-Newton method in the super-linear convergence phase, where the unit step size is chosen. To simplify the analysis, we assume in this section that the inner problem is solved exactly and no shrinking strategy is employed. We also provide a global convergence proof for the Prox-QN method with the shrinking strategy in Appendix \[shrinking\_proof\]. In the current literature, the analysis of proximal Newton-type methods relies on the assumption of a *strongly convex* objective function to prove superlinear convergence [@newtontype]; otherwise, only a sublinear rate can be proved [@prox_newton_global]. However, our objective is not strongly convex when the dimension is very large or there are redundant features: in particular, the Hessian matrix $H({\boldsymbol{w}})$ of the smooth function $\ell({\boldsymbol{w}})$ is then not positive-definite.
We thus leverage a recently introduced restricted variant of strong convexity, termed Constant Nullspace Strong Convexity (CNSC) in [@cnsc], where the authors analyzed the behavior of proximal gradient and proximal Newton methods under this condition. The proximal *quasi-Newton* procedure in this paper requires a subtler analysis, but in a key contribution of the paper we are nonetheless able to show asymptotic superlinear convergence of the Prox-QN method under this restricted variant of strong convexity. A composite function is said to have Constant Nullspace Strong Convexity restricted to a space ${\mathcal{T}}$ (CNSC-${\mathcal{T}}$) if there is a constant vector space ${\mathcal{T}}$ such that $\ell({\boldsymbol{w}})$ depends only on $ {\mathbf{proj}}_{{\mathcal{T}}}({\boldsymbol{w}})$, i.e. $\ell({\boldsymbol{w}}) = \ell( {\mathbf{proj}}_{{\mathcal{T}}}({\boldsymbol{w}}) )$, and its Hessian satisfies $$\label{CNSC_row} \begin{aligned} &m \| {\boldsymbol{v}}\|^2 \leq {\boldsymbol{v}}^T H({\boldsymbol{w}}) {\boldsymbol{v}}\leq M \| {\boldsymbol{v}}\|^2, &\forall {\boldsymbol{v}}\in {\mathcal{T}}, \forall {\boldsymbol{w}}\in \mathbb{R}^d \end{aligned}$$ for some $M\geq m>0$, and $$\label{CNSC_null} \begin{aligned} &H({\boldsymbol{w}}){\boldsymbol{v}}= {\boldsymbol{0}}, &\forall {\boldsymbol{v}}\in {\mathcal{T}}^{\perp}, \forall {\boldsymbol{w}}\in \mathbb{R}^d , \end{aligned}$$ where ${\mathbf{proj}}_{{\mathcal{T}}}({\boldsymbol{w}})$ is the projection of ${\boldsymbol{w}}$ onto ${\mathcal{T}}$ and ${\mathcal{T}}^{\perp}$ is the orthogonal complement of ${\mathcal{T}}$. This is an algebraic condition that is satisfied by typical $M$-estimators considered in high-dimensional settings. In this paper, we will abuse notation and extend CNSC-${\mathcal{T}}$ to symmetric matrices: we say a symmetric matrix $H$ satisfies the CNSC-${\mathcal{T}}$ condition if $H$ satisfies \[CNSC\_row\] and \[CNSC\_null\].
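As a concrete instance of the definition, a rank-deficient least-squares loss satisfies CNSC-${\mathcal{T}}$ with ${\mathcal{T}}$ the row space of the design matrix. The sketch below (our own illustration, not from the paper) checks the defining properties numerically:

```python
import numpy as np

rng = np.random.default_rng(3)
N, d = 4, 7                      # d > N: the Hessian X^T X is rank-deficient
X = rng.standard_normal((N, d))
y = rng.standard_normal(N)

def loss(w):
    return 0.5 * np.sum((X @ w - y) ** 2)

# T = row space of X; U has orthonormal columns, proj_T(w) = U U^T w.
U = np.linalg.svd(X, full_matrices=False)[2].T    # shape (d, N)
H = X.T @ X                                       # constant Hessian here

w = rng.standard_normal(d)
w_proj = U @ (U.T @ w)
assert np.isclose(loss(w), loss(w_proj))   # l depends only on proj_T(w)

# The Hessian annihilates T^perp and is positive definite on T.
v_perp = w - w_proj
assert np.allclose(H @ v_perp, 0)
eigs = np.linalg.eigvalsh(U.T @ H @ U)
assert eigs.min() > 0
```

Here the constants $m$ and $M$ are the extreme eigenvalues of $U^T H U$, which exist because $X$ has full row rank on its row space.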
In the following theorems, we will denote an orthonormal basis of ${\mathcal{T}}$ by $U\in \rR ^{d\times \hat d}$, where $\hat d \leq d$ is the dimension of the space ${\mathcal{T}}$ and $U^T U = I$. The projection onto ${\mathcal{T}}$ can then be written as ${\mathbf{proj}}_{{\mathcal{T}}} ({\boldsymbol{w}}) = UU^T {\boldsymbol{w}}$. Assume $\nabla^2 \ell ({\boldsymbol{w}}) $ and $\nabla \ell({\boldsymbol{w}})$ are Lipschitz continuous. Let $B_t$ be the matrices generated by the BFGS update \[BFGS\]. Then if $ \ell ({\boldsymbol{w}}) $ and $B_t$ satisfy the CNSC-${\mathcal{T}}$ condition, the proximal quasi-Newton method has q-superlinear convergence: $$\|{\boldsymbol{z}}_{t+1}-{\boldsymbol{z}}^*\| \leq o\left(\|{\boldsymbol{z}}_t-{\boldsymbol{z}}^*\|\right) ,$$ where ${\boldsymbol{z}}_t= U^T{\boldsymbol{w}}_t $, ${\boldsymbol{z}}^*=U^T {\boldsymbol{w}}^*$ and ${\boldsymbol{w}}^*$ is an optimal solution of the original problem. The proof is given in Appendix \[proof\_theorem1\_2\]. We prove it by exploiting the CNSC-${\mathcal{T}}$ property. First, we rebuild our problem and algorithm on the reduced space ${\mathcal{Z}}= \{ {\boldsymbol{z}}\in \rR^{\hat d}| {\boldsymbol{z}}= U^T {\boldsymbol{w}}\}$, where the strong-convexity property holds. Then we prove the asymptotic superlinear convergence on ${\mathcal{Z}}$ following Theorem 3.7 in [@newtontype_theory]. \[F\_bound\_by\_z\] For Lipschitz continuous $\ell({\boldsymbol{w}})$, the sequence $\{{\boldsymbol{w}}_t\}$ produced by the proximal quasi-Newton method in the super-linear convergence phase satisfies $$\label{tmp4} f({\boldsymbol{w}}_t)-f({\boldsymbol{w}}^*) \leq L \|{\boldsymbol{z}}_t-{\boldsymbol{z}}^*\|,$$ where $L = L_{\ell} + \lambda \sqrt{d} $, $L_{\ell}$ is the Lipschitz constant of $\ell({\boldsymbol{w}})$, ${\boldsymbol{z}}_t=U^T {\boldsymbol{w}}_t$ and ${\boldsymbol{z}}^*=U^T {\boldsymbol{w}}^*$. The proof is also in Appendix \[proof\_theorem1\_2\].
It is proved by showing that both the smooth part and the non-differentiable part satisfy a modified Lipschitz continuity.

Application to Conditional Random Fields with $\ell_1$ Penalty
==============================================================

In CRF problems, we are interested in learning a conditional distribution of labels ${\boldsymbol{y}}\in {\mathcal{Y}}$ given an observation ${\boldsymbol{x}}\in{\mathcal{X}}$, where ${\boldsymbol{y}}$ has application-dependent structure such as a sequence, tree, or table in which label assignments have inter-dependency. The distribution is of the form $$P_{{\boldsymbol{w}}}({\boldsymbol{y}}|{\boldsymbol{x}}) = \frac{1}{Z_{{\boldsymbol{w}}}({\boldsymbol{x}})} \exp\left\{ \sum_{k=1}^{d} w_k f_k ( {\boldsymbol{y}},{\boldsymbol{x}}) \right\},$$ where the $f_k$ are the feature functions, the $w_k$ are the associated weights, $d$ is the number of feature functions and $Z_{{\boldsymbol{w}}} ({\boldsymbol{x}})$ is the partition function. Given a training data set $\{({\boldsymbol{x}}_i,{\boldsymbol{y}}_i)\}_{i=1}^N$, our goal is to find the optimal weights ${{\boldsymbol{w}}}$ such that the following $\ell_1$-regularized negative log-likelihood is minimized: $$\label{obj} \min_{{\boldsymbol{w}}} f({\boldsymbol{w}}) = \lambda \|{\boldsymbol{w}}\|_1-\sum_{i=1}^{N} \log{ P_{{\boldsymbol{w}}} ({\boldsymbol{y}}^{(i)} | {\boldsymbol{x}}^{(i)}) }.$$ Since $|{\mathcal{Y}}|$, the number of possible values ${\boldsymbol{y}}$ takes, can be exponentially large, the evaluation of $\ell({\boldsymbol{w}})$ and the gradient $\nabla \ell({\boldsymbol{w}})$ needs application-dependent oracles to carry out the summation over ${\mathcal{Y}}$. For example, in the *sequence labeling problem*, a dynamic programming oracle, the *forward-backward* algorithm, is usually employed to compute $\nabla \ell({\boldsymbol{w}})$. Such an oracle can be very expensive.
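For intuition about the oracle, the following toy sketch (ours; the unary and transition scores are random stand-ins for $\sum_k w_k f_k$) computes the partition function of a tiny chain model both by brute-force enumeration over ${\mathcal{Y}}$ and by an $O(TK^2)$ forward recursion:

```python
import numpy as np
from itertools import product

# Tiny chain model: K=2 labels, length T=3; Z by brute force vs. recursion.
rng = np.random.default_rng(4)
T, K = 3, 2
unary = rng.standard_normal((T, K))    # hypothetical per-position scores
trans = rng.standard_normal((K, K))    # hypothetical transition scores

def score(y):
    return (sum(unary[t, y[t]] for t in range(T))
            + sum(trans[y[t], y[t + 1]] for t in range(T - 1)))

# Brute force: O(K^T) terms.
Z = sum(np.exp(score(y)) for y in product(range(K), repeat=T))
probs = [np.exp(score(y)) / Z for y in product(range(K), repeat=T)]
assert np.isclose(sum(probs), 1.0)

# Forward recursion: O(T K^2), the dynamic-programming oracle's core step.
alpha = np.exp(unary[0])
for t in range(1, T):
    alpha = np.exp(unary[t]) * (np.exp(trans).T @ alpha)
assert np.isclose(alpha.sum(), Z)
```

The real forward-backward oracle additionally accumulates the expected feature counts needed for $\nabla \ell({\boldsymbol{w}})$, but the cost structure is the same.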
In the Prox-QN algorithm for the sequence labeling problem, the *forward-backward* algorithm takes $O(|Y|^2 NT \times exp)$ time, where $exp$ is the time for the expensive exponential computation, $T$ is the sequence length and $Y$ is the set of possible labels for a symbol in the sequence. Given the oracle output, the evaluation of the partial gradients over the working set ${\mathcal{A}}$ has time complexity $O(D_{nnz} |{\mathcal{A}}|T)$, where $D_{nnz}$ is the average number of instances related to a feature. Thus when $O(|Y|^2 NT \times exp + D_{nnz} |{\mathcal{A}}|T) > O(m^3+m^2 |{\mathcal{A}}|)$, the gradient evaluation time dominates. The following theorem shows that $\ell_1$-regularized CRF MLEs satisfy the CNSC-${\mathcal{T}}$ condition. With an $\ell_1$ penalty, the CRF loss function, $\ell({\boldsymbol{w}}) = -\sum_{i=1}^{N} \log{ P_{{\boldsymbol{w}}} ({\boldsymbol{y}}^{(i)} | {\boldsymbol{x}}^{(i)}) } $, satisfies the CNSC-${\mathcal{T}}$ condition with ${\mathcal{T}}= {\mathcal{N}}^{\perp}$, where ${\mathcal{N}}= \{ {\boldsymbol{v}}\in \rR^{d} | \Phi^T {\boldsymbol{v}}= 0 \}$ is a constant subspace of $\rR^d$ and $\Phi \in \rR^{d \times (N|{\mathcal{Y}}|)}$ is defined by $$\Phi_{kn} = f_k({\boldsymbol{y}}_l,{\boldsymbol{x}}^{(i)}) - E \left[ f_k({\boldsymbol{y}},{\boldsymbol{x}}^{(i)}) \right],$$ where $n = (i-1)|{\mathcal{Y}}| + l$, $l=1,2,...,|{\mathcal{Y}}|$ and $E$ is the expectation over the conditional probability $P_{{\boldsymbol{w}}} ({\boldsymbol{y}}| {\boldsymbol{x}}^{(i)})$. According to the definition of the CNSC-${\mathcal{T}}$ condition, $\ell_1$-regularized CRF MLEs do not satisfy the classical strong-convexity condition when ${\mathcal{N}}$ has non-zero members, which happens in the following two cases: (i) the exponential representation is not minimal [@graphical_model], i.e.
for any instance $i$ there exists a non-zero vector ${\boldsymbol{a}}$ and a constant $b_i$ such that $ \langle {\boldsymbol{a}},\phi({\boldsymbol{y}},{\boldsymbol{x}}^{(i)}) \rangle = b_i $, where $\phi({\boldsymbol{y}},{\boldsymbol{x}}^{(i)}) = [ f_1( {\boldsymbol{y}},{\boldsymbol{x}}^{(i)} ),f_2( {\boldsymbol{y}},{\boldsymbol{x}}^{(i)} ),...,f_d( {\boldsymbol{y}},{\boldsymbol{x}}^{(i)} ) ]^T$; (ii) $d > N|{\mathcal{Y}}|$, i.e., the number of feature functions is very large. The first case holds in many problems, such as the sequence labeling and hierarchical classification problems discussed in Section \[experiment\], and the second case holds in high-dimensional problems.

Related Methods
===============

Several methods have been proposed for solving $\ell_1$-regularized $M$-estimators of this form. In this section, we discuss them in relation to our method.

**Orthant-Wise Limited-memory Quasi-Newton** (OWL-QN), introduced by Andrew and Gao [@owlqn], extends L-BFGS to $\ell_1$-regularized problems. In each iteration, OWL-QN computes a generalized gradient called the *pseudo-gradient* to determine the orthant and the search direction, then does a line search and projects the new iterate back onto the orthant. Due to its fast convergence, it is implemented in many software packages, such as CRF++, CRFsuite and Wapiti. But OWL-QN does not take advantage of the model sparsity in the optimization procedure, and moreover Yu et al. [@owlflaw] have raised issues with its convergence proof.\
**Stochastic Gradient Descent** (SGD) uses the gradient of a single sample as the search direction in each iteration. The computation per iteration is therefore very fast, which leads to fast convergence at the beginning; however, the convergence becomes slower than that of second-order methods once the iterate is close to the optimal solution. Recently, an $\ell_1$-regularized SGD algorithm proposed by Tsuruoka et al. [@l1sgd] was claimed to converge faster than OWL-QN.
It incorporates $\ell_1$-regularization by using a cumulative $\ell_1$ penalty, which is close to the $\ell_1$ penalty the parameter would have received had it been updated by the true gradient. Tsuruoka et al. do consider data sparsity, i.e. for each instance, only the parameters related to the current instance are updated. But they too do not take model sparsity into account.\
**Coordinate Descent** (CD) and **Blockwise Coordinate Descent** (BCD) are popular methods for $\ell_1$-regularized problems. Each coordinate descent iteration solves a one-dimensional quadratic approximation of the objective function, which has a closed-form solution. This requires the second partial derivative with respect to the coordinate, but as discussed by Sokolovska et al., the exact second derivative in the CRF problem is intractable. They instead use an approximation of the second derivative, which can be computed efficiently by the same inference oracle queried for the gradient evaluation. However, pure CD is very expensive because each coordinate update requires calling the inference oracle for the instances related to the current coordinate. BCD alleviates this problem by grouping the parameters with the same ${\boldsymbol{x}}$ feature into a block: each block update then needs to call the inference oracle only once for the instances related to the current ${\boldsymbol{x}}$ feature. However, it cannot avoid the large number of inference oracle calls unless the data is so sparse that every instance appears in only very few blocks.\
**Proximal Newton method** has proven successful on problems of $\ell_1$-regularized logistic regression [@glmnet] and Sparse Inverse Covariance Estimation [@quic], where the Hessian-vector product can be cheaply re-evaluated for each coordinate update.
However, the Hessian-vector product for a computationally intensive (CI) function like the CRF loss requires querying the inference oracle no matter how many coordinates are updated at a time [@l2cg], which makes a coordinate update on the quadratic approximation as expensive as a coordinate update in the original problem. Our proximal quasi-Newton method avoids this problem by replacing the Hessian with a low-rank matrix from the BFGS update.

Numerical Experiments {#experiment}
=====================

We compare our approach, Prox-QN, with four other methods: Proximal Gradient (Prox-GD), OWL-QN [@owlqn], SGD [@l1sgd] and BCD [@bcd]. For OWL-QN, we directly use the OWL-QN optimizer developed by Andrew et al.[^1], where we set the memory size to $m=10$, the same as in Prox-QN. For SGD, we implement the algorithm proposed by Tsuruoka et al. [@l1sgd], and use the cumulative $\ell_1$ penalty with learning rate $\eta_k = \eta_0/(1+k/N)$, where $k$ is the SGD iteration index and $N$ is the number of samples. For BCD, we follow Sokolovska et al. [@bcd] but with three modifications. First, we add a line search procedure in each block update since we found it is required for convergence. Second, we apply the shrinking strategy discussed in Section \[implement\]. Third, when the second derivative for some coordinate is less than $10^{-10}$, we set it to $10^{-10}$, because otherwise the lack of $\ell_2$-regularization in our problem setting can lead to a very large new iterate. We evaluate the performance of the Prox-QN method on two problems, sequence labeling and hierarchical classification. In particular, we plot the relative objective difference $(f ({\boldsymbol{w}}_t)-f ({\boldsymbol{w}}^*))/f ({\boldsymbol{w}}^*) $ and the number of non-zero parameters (on a log scale) against time in seconds. More experimental results, for example the testing accuracy and the performance for different $\lambda$’s, are given in Appendix \[more\_exp\].
All the experiments are executed on a 2.8GHz Intel Xeon E5-2680 v2 Ivy Bridge processor with 1/4TB memory and Linux OS.

Sequence Labeling
-----------------

In sequence labeling problems, each instance $({\boldsymbol{x}},{\boldsymbol{y}}) = \left\{({\boldsymbol{x}}_t,y_t)\right\}_{t=1,2...,T}$ is a sequence of $T$ pairs of observations and the corresponding labels. Here we consider the optical character recognition (OCR) problem, which aims to recognize handwritten words. The dataset [^2] was preprocessed by Taskar et al. [@m3n] and was originally collected by Kassel [@ocr]; it contains 6877 words (instances). We randomly divide the dataset into two parts: a training part with 6216 words and a testing part with 661 words. The character label set $Y$ consists of the 26 English letters, and the observations are characters represented by images of 16 by 8 binary pixels, as shown in Figure \[seq:a\]. We use degree-2 pixel features as the raw features, which means all pixel pairs are considered. Therefore, the number of raw features is $J = 128\times 127 /2 +128 + 1$, including a bias. For degree-2 features, $x_{tj} = 1$ only when both pixels are $1$ and otherwise $x_{tj} = 0$, where $x_{tj}$ is the $j$-th raw feature of ${\boldsymbol{x}}_t$. For the feature functions, we use unigram feature functions ${\boldsymbol{1}}(y_t = y, x_{tj} = 1)$ and bigram feature functions ${\boldsymbol{1}}(y_{t} = y,y_{t+1} = y')$ with their associated weights, $\Theta_{y,j}$ and $\Lambda_{y,y'}$, respectively. So ${\boldsymbol{w}}= \{\Theta, \Lambda \}$ for $\Theta \in \rR ^{|Y|\times J}$ and $\Lambda \in \rR ^{|Y|\times |Y| }$, and the total number of parameters is $d = {|Y|}^2+|Y|\times J = 215,358$.
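The unigram and bigram feature-function scores can be collected into matrix inner products with $\Theta$ and $\Lambda$. The toy check below (our own illustration, with random weights and small dimensions standing in for the OCR setup) verifies that the two forms agree:

```python
import numpy as np

# Direct sum of feature scores vs. the inner-product form
# <Theta, sum_t e_{y_t} x_t^T> + <Lambda, sum_t e_{y_t} e_{y_{t+1}}^T>.
rng = np.random.default_rng(6)
T, Ylab, J = 4, 3, 5
Theta = rng.standard_normal((Ylab, J))     # unigram weights (random stand-in)
Lam = rng.standard_normal((Ylab, Ylab))    # bigram weights (random stand-in)
Xseq = (rng.random((T, J)) < 0.5).astype(float)
yseq = rng.integers(0, Ylab, size=T)

direct = (sum(Theta[yseq[t]] @ Xseq[t] for t in range(T))
          + sum(Lam[yseq[t], yseq[t + 1]] for t in range(T - 1)))

E = np.eye(Ylab)
inner = (np.sum(Theta * sum(np.outer(E[yseq[t]], Xseq[t]) for t in range(T)))
         + np.sum(Lam * sum(np.outer(E[yseq[t]], E[yseq[t + 1]])
                            for t in range(T - 1))))

assert np.isclose(direct, inner)
```

Exponentiating this score gives the potential function used below.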
Using the above feature functions, the potential function can be specified as $\tilde P_{{\boldsymbol{w}}}({\boldsymbol{y}},{\boldsymbol{x}}) = \exp\left\{\langle \Theta, \sum_{t=1}^{T} ({\boldsymbol{e}}_{y_t} {\boldsymbol{x}}^T_t) \rangle +\langle \Lambda, \sum_{t=1}^{T-1} ({\boldsymbol{e}}_{y_t} {\boldsymbol{e}}^T_{y_{t+1}}) \rangle \right\} $, where $\langle \cdot,\cdot \rangle$ is the sum of the element-wise products and ${\boldsymbol{e}}_y \in \rR^{|Y|} $ is a unit vector with 1 at the $y$-th entry and 0 at the other entries. The gradient and the inference oracle are given in Appendix \[seq\_gradient\]. In our experiment, $\lambda$ is set to $100$, which leads to a relatively high testing accuracy and an optimal solution with a relatively small number of non-zero parameters (see Appendix \[testing\_accuracy\]). The learning rate $\eta_0$ for SGD is tuned to $2\times 10^{-4}$ for best performance. In BCD, the unigram parameters are grouped into $J$ blocks according to the ${\boldsymbol{x}}$ features, while the bigram parameters are grouped into one block. Our proximal quasi-Newton method can be seen to be much faster than the other methods.

[Figure \[seqresult\]: results on the sequence labeling task.]

Hierarchical Classification
---------------------------

In hierarchical classification problems, we have a label taxonomy, where the classes are grouped into a tree as shown in Figure \[hier:a\]. Here $y \in {\mathcal{Y}}$ is one of the leaf nodes. If we have $K$ classes (nodes) in total and $J$ raw features, then the number of parameters is $d = K\times J$. Let $W\in \rR^{K\times J}$ denote the weights. The feature function corresponding to $W_{k,j}$ is $f_{k,j}(y,{\boldsymbol{x}}) = {\boldsymbol{1}}[k\in \text{Path}( y )] x_j $, where $k \in \text{Path}(y)$ means class $k$ is an ancestor of $y$ or $y$ itself.
The potential function is $\tilde P_W(y,{\boldsymbol{x}}) = \exp \left\{ \sum_{k\in \text{Path}(y)} {\boldsymbol{w}}^T_k {\boldsymbol{x}}\right\}$, where ${\boldsymbol{w}}^T_k$ is the weight vector of the $k$-th class, i.e. the $k$-th row of $W$. The gradient and the inference oracle are given in Appendix \[hier\_gradient\]. The dataset comes from Task 1 of the dry-run dataset of LSHTC1[^3]. It has 4,463 samples, each with $J$=51,033 raw features. The hierarchical tree has 2,388 classes, which include 1,139 leaf labels. Thus, the number of parameters is $d=$121,866,804. The feature values are scaled by the svm-scale program in the LIBSVM package. We set $\lambda = 1$ to achieve a relatively high testing accuracy and high sparsity of the optimal solution. The SGD initial learning rate is tuned to $\eta_0 = 10$ for best performance. In BCD, parameters are grouped into $J$ blocks according to the raw features.

[Figure \[hierresult\]: results on the hierarchical classification task.]

As Figures \[seq:b\], \[seq:c\] and Figures \[hier:b\], \[hier:c\] show, Prox-QN achieves much faster convergence and moreover obtains a sparse model in much less time.

Acknowledgement {#acknowledgement .unnumbered}
---------------

This research was supported by NSF grants CCF-1320746 and CCF-1117055. P.R. acknowledges the support of ARO via W911NF-12-1-0390 and NSF via IIS-1149803, IIS-1320894, IIS-1447574, and DMS-1264033. K.Z. acknowledges the support of the National Initiative for Modeling and Simulation fellowship.

[99]{}
I. E.H. Yen, C.-J. Hsieh, P. Ravikumar, and I. S. Dhillon. Constant Nullspace Strong Convexity and Fast Convergence of Proximal Methods under High-Dimensional Settings. In NIPS 2014.
X. Tang and K. Scheinberg. Efficiently Using Second Order Information in Large l1 Regularization Problems. arXiv:1303.6935, 2013.
J. D. Lee, Y. Sun, and M. A. Saunders. Proximal Newton-type methods for minimizing composite functions. In NIPS 2012.
M. Schmidt, E. Van Den Berg, M.P. Friedlander, and K. Murphy.
Optimizing costly functions with simple constraints: A limited-memory projected Quasi-Newton algorithm. In Int. Conf. Artif. Intell. Stat., 2009.
C.-J. Hsieh, M. A. Sustik, I. S. Dhillon, and P. Ravikumar. Sparse inverse covariance estimation using quadratic approximation. In NIPS 2011.
S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge Univ. Press, Cambridge, U.K., 2003.
P.-W. Wang and C.-J. Lin. Iteration Complexity of Feasible Descent Methods for Convex Optimization. Technical report, Department of Computer Science, National Taiwan University, Taipei, Taiwan, 2013.
G.-X. Yuan, K.-W. Chang, C.-J. Hsieh, and C.-J. Lin. A comparison of optimization methods and software for large-scale l1-regularized linear classification. Journal of Machine Learning Research (JMLR), 11:3183-3234, 2010.
A. Agarwal, S. Negahban, and M. Wainwright. Fast Global Convergence Rates of Gradient Methods for High-Dimensional Statistical Recovery. In NIPS 2010.
K. Hou, Z. Zhou, A. M.-S. So, and Z.-Q. Luo. On the linear convergence of the proximal gradient method for trace norm regularization. In NIPS 2014.
L. Xiao and T. Zhang. A proximal-gradient homotopy method for the l1-regularized least-squares problem. In ICML 2012.
P. Tseng and S. Yun. A coordinate gradient descent method for nonsmooth separable minimization. Math. Prog. B, 117, 2009.
G.-X. Yuan, C.-H. Ho, and C.-J. Lin. An improved GLMNET for l1-regularized logistic regression. JMLR, 13:1999-2030, 2012.
R.-E. Fan, K.-W. Chang, C.-J. Hsieh, X.-R. Wang, and C.-J. Lin. LIBLINEAR: A library for large linear classification. JMLR, 9:1871-1874, 2008.
A. J. Hoffman. On approximate solutions of systems of linear inequalities. Journal of Research of the National Bureau of Standards, 1952.
N. Sokolovska, T. Lavergne, O. Cappe, and F. Yvon. Efficient Learning of Sparse Conditional Random Fields for Supervised Sequence Labelling. arXiv:0909.1308, 2009.
Y. Tsuboi, Y. Unno, H. Kashima, and N. Okazaki.
Fast Newton-CG Method for Batch Learning of Conditional Random Fields. In Proceedings of the Twenty-Fifth AAAI Conference on Artificial Intelligence, 2011.
J. Nocedal and S. J. Wright. Numerical Optimization. Springer Series in Operations Research. Springer, New York, NY, USA, 2nd edition, 2006.
B. Taskar, C. Guestrin, and D. Koller. Max-margin Markov networks. In NIPS 2003.
R. Kassel. A Comparison of Approaches to On-line Handwritten Character Recognition. PhD thesis, MIT Spoken Language Systems Group, 1995.
Y. Tsuruoka, J. Tsujii, and S. Ananiadou. Stochastic gradient descent training for l1-regularized log-linear models with cumulative penalty. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 477-485, Suntec, Singapore, 2009.
J. Yu, S.V.N. Vishwanathan, S. Gunter, and N. N. Schraudolph. A Quasi-Newton approach to nonsmooth convex optimization problems in machine learning. JMLR, 11:1-57, 2010.
G. Andrew and J. Gao. Scalable training of $\ell_1$-regularized log-linear models. In ICML 2007.
J.E. Dennis and J.J. More. A characterization of superlinear convergence and its application to Quasi-Newton methods. Math. Comp., 28(126):549-560, 1974.
K. Scheinberg and X. Tang. Practical Inexact Proximal Quasi-Newton Method with Global Complexity Analysis. COR@L Technical Report, Lehigh University. arXiv:1311.6547, 2013.
J. D. Lee, Y. Sun, and M. A. Saunders. Proximal Newton-type methods for minimizing composite functions. arXiv:1206.1623, 2012.
M. J. Wainwright and M. I. Jordan. Graphical models, exponential families, and variational inference. Technical Report 649, Dept. Statistics, Univ. California, Berkeley.
2003.

Convergence Proof
=================

To exploit the CNSC-${\mathcal{T}}$ property, we first rebuild our problem and algorithm on the reduced space ${\mathcal{Z}}= \{ {\boldsymbol{z}}\in \rR^{\hat d}| {\boldsymbol{z}}= U^T {\boldsymbol{w}}\}$, where the strong-convexity property holds. Then we prove the asymptotic super-linear convergence on ${\mathcal{Z}}$ under the condition that the inner problem is solved exactly and no shrinking strategy is applied. Finally, we prove that the objective gap is bounded by the distance between the current iterate and the optimal solution. In Section \[shrinking\_proof\], we provide the global convergence proof when the shrinking strategy is applied.

Representing the problem in a reduced and compact space
-------------------------------------------------------

**Properties of the CNSC-${\mathcal{T}}$ condition**\
For $\ell({\boldsymbol{w}})$ satisfying the CNSC-${\mathcal{T}}$ condition, we have $\ell({\boldsymbol{w}}) = \ell({\mathbf{proj}}_{{\mathcal{T}}}({\boldsymbol{w}}))$. Define ${\boldsymbol{g}}$ to be the gradient of $\ell({\boldsymbol{w}})$ and $H$ to be its Hessian. As both ${\boldsymbol{g}}$ and $H$ lie in the ${\mathcal{T}}$ space, we have ${\boldsymbol{g}}({\boldsymbol{w}}) = UU^T {\boldsymbol{g}}({\mathbf{proj}}_{{\mathcal{T}}}({\boldsymbol{w}})) = {\boldsymbol{g}}({\mathbf{proj}}_{{\mathcal{T}}}({\boldsymbol{w}})) $ and $H({\boldsymbol{w}}) = UU^TH({\mathbf{proj}}_{{\mathcal{T}}}({\boldsymbol{w}}))UU^T = H({\mathbf{proj}}_{{\mathcal{T}}} ({\boldsymbol{w}}))$.

**Objective formulation in the reduced space**\
Define ${\hat \ell}({\boldsymbol{z}}) = \ell (U {\boldsymbol{z}})$.
Then if ${\boldsymbol{z}}= U^T {\boldsymbol{w}}$, we have ${\hat \ell}({\boldsymbol{z}}) = \ell({\boldsymbol{w}})$, ${\hat {\boldsymbol{g}}}({\boldsymbol{z}}) = U^T {\boldsymbol{g}}({\boldsymbol{w}})$ and ${\hat H}({\boldsymbol{z}}) = U^T H({\boldsymbol{w}}) U$, where ${\hat {\boldsymbol{g}}}({\boldsymbol{z}}) $ and ${\hat H}({\boldsymbol{z}})$ are the gradient and Hessian of ${\hat \ell}({\boldsymbol{z}})$ respectively. Now ${\hat H}$ is positive definite with smallest eigenvalue at least $m$. The objective can be reformulated in the reduced space as $$\label{TObj} \min_{{\boldsymbol{z}}} {\hat f}({\boldsymbol{z}}) = h({\boldsymbol{z}}) + {\hat \ell}({\boldsymbol{z}}),$$ where $$h({\boldsymbol{z}}) = \min_{U^T {\boldsymbol{w}}={\boldsymbol{z}}} \lambda \| {\boldsymbol{w}}\|_1.$$ We now prove that $h({\boldsymbol{z}})$ is a convex function, i.e., $$c h({\boldsymbol{z}}_1) + (1-c) h({\boldsymbol{z}}_2) \geq h(c{\boldsymbol{z}}_1+(1-c){\boldsymbol{z}}_2)$$ for any $0\leq c \leq 1$, ${\boldsymbol{z}}_1$ and ${\boldsymbol{z}}_2$.
Let $${\boldsymbol{w}}_1 = \underset{U^T {\boldsymbol{w}}= {\boldsymbol{z}}_1}{\text{argmin}} \lambda \|{\boldsymbol{w}}\|_1 \text{ and } {\boldsymbol{w}}_2 = \underset{U^T {\boldsymbol{w}}= {\boldsymbol{z}}_2}{\text{argmin}} \lambda \|{\boldsymbol{w}}\|_1.$$ Then, by the triangle inequality and the definition of $h$, $$\begin{aligned} c h({\boldsymbol{z}}_1) + (1-c) h({\boldsymbol{z}}_2) & = \lambda (c \|{\boldsymbol{w}}_1\|_1 + (1-c) \|{\boldsymbol{w}}_2\|_1 ) \\ &\geq \lambda (\|c {\boldsymbol{w}}_1 + (1-c) {\boldsymbol{w}}_2\|_1 ) \\ & \geq h(U^T ( c {\boldsymbol{w}}_1 + (1-c) {\boldsymbol{w}}_2 )) \\ & = h(c{\boldsymbol{z}}_1+(1-c){\boldsymbol{z}}_2).\end{aligned}$$ The optimal solution ${\boldsymbol{z}}^*$ of \[TObj\] is related to the optimal solution ${\boldsymbol{w}}^*$ of the original problem by $$\label{relationship} {\boldsymbol{w}}^* = \underset{U^T {\boldsymbol{w}}= {\boldsymbol{z}}^*}{\text{argmin}} \textit{ } \lambda \|{\boldsymbol{w}}\|_1 \text{ and } {\boldsymbol{z}}^* = U^T {\boldsymbol{w}}^*.$$

**Lipschitz continuity in the reduced space**\
Throughout the paper, we assume the Hessian of $\ell({\boldsymbol{w}})$ is Lipschitz continuous with constant $L_H$: $$\| H({\boldsymbol{w}}_2) ({\boldsymbol{w}}_1-{\boldsymbol{w}}_2) -({\boldsymbol{g}}({\boldsymbol{w}}_1) - {\boldsymbol{g}}({\boldsymbol{w}}_2) )\| \leq \frac{L_H}{2} \|{\boldsymbol{w}}_1-{\boldsymbol{w}}_2\|^2.$$ In the corresponding reduced space, the Lipschitz continuity also holds with the same constant: $$\label{newLipschitz} \| {\hat H}({\boldsymbol{z}}_2) ({\boldsymbol{z}}_1-{\boldsymbol{z}}_2) -({\hat {\boldsymbol{g}}}({\boldsymbol{z}}_1) - {\hat {\boldsymbol{g}}}({\boldsymbol{z}}_2) )\| \leq \frac{L_H}{2} \|{\boldsymbol{z}}_1-{\boldsymbol{z}}_2\|^2.$$

**BFGS update formula in the reduced space**\
If $B_0$ is in the ${\mathcal{T}}$ space, $B_t$ is also in the ${\mathcal{T}}$ space.
This can be shown by induction, reformulating the BFGS update as $$B_t = U {\hat B}_{t-1} U^T - \frac{U {\hat B}_{t-1} U^T s_{t-1} s^T_{t-1}U {\hat B}_{t-1} U^T }{ s_{t-1}^T U {\hat B}_{t-1} U^T s_{t-1} } + \frac{UU^T y_{t-1} y^T_{t-1}UU^T} { y_{t-1}^TU U^T s_{t-1}}$$ Thus $$\label{newBFGS} {\hat B}_t = {\hat B}_{t-1} - \frac{{\hat B}_{t-1} {\hat s}_{t-1} {\hat s}^T_{t-1} {\hat B}_{t-1} }{ {\hat s}_{t-1}^T {\hat B}_{t-1} {\hat s}_{t-1} } + \frac{{\hat y}_{t-1} {\hat y}^T_{t-1}} { {\hat y}_{t-1}^T {\hat s}_{t-1}}$$ where ${\hat s}= U^T s$, ${\hat y}= U^T y$ and $U \hat B_t U^T= B_t$. It can be proved that ${\hat B}_t$ generated by this update is positive definite provided ${\hat y}^T{\hat s}>0$ [@L-BFGS]. If we additionally assume $m \| {\boldsymbol{z}}\|^2 \leq {\boldsymbol{z}}^T {\hat B}_t {\boldsymbol{z}}\leq M \| {\boldsymbol{z}}\|^2$ for any ${\boldsymbol{z}}\in \rR^{\hat d}$, then $B_t$ satisfies the CNSC-${\mathcal{T}}$ condition. **Iterate in the reduced space**\ The potential new iterate ${\boldsymbol{w}}^{+}$ is $$\label{newIterate} {\boldsymbol{w}}^{+} = \underset{{\boldsymbol{v}}}{\text{argmin}} \textit{ } \lambda \| {\boldsymbol{v}}\|_1 + \frac{1}{2}({\boldsymbol{v}}-{\boldsymbol{w}}_t)^T B_t ({\boldsymbol{v}}-{\boldsymbol{w}}_t) + {\boldsymbol{g}}_t^T ({\boldsymbol{v}}-{\boldsymbol{w}}_t)$$ In the reduced space, the potential new iterate can be represented as $$\label{nextIter} {\boldsymbol{z}}^{+} = \underset{{\boldsymbol{x}}}{\text{argmin}} \textit{ } h({\boldsymbol{x}}) + \frac{1}{2}({\boldsymbol{x}}-{\boldsymbol{z}}_t)^T \hat B_t ({\boldsymbol{x}}-{\boldsymbol{z}}_t) + \hat {\boldsymbol{g}}_t^T ({\boldsymbol{x}}-{\boldsymbol{z}}_t)$$ ${\boldsymbol{z}}^+$ and ${\boldsymbol{w}}^+$ also satisfy the same relationship, i.e. 
$$\label{wt_zt} {\boldsymbol{w}}^+ = \underset{U^T {\boldsymbol{w}}= {\boldsymbol{z}}^+}{\text{argmin}} \textit{ } \|{\boldsymbol{w}}\|_1$$ In this paper, we consider the convergence phase when ${\boldsymbol{z}}_t$ is close enough to the optimum that the unit step size is always chosen, i.e. ${\boldsymbol{z}}_{t+1} = {\boldsymbol{z}}^+$ [@newtontype_theory]. Global Linear Convergence ------------------------- \[global\_linear\] For $\nabla {\hat \ell}({\boldsymbol{z}})$ satisfying Lipschitz continuity with a constant $L_g$ and $B_t$ satisfying CNSC-${\mathcal{T}}$, the sequence $\{{\boldsymbol{z}}_t\}_{t = 1}^ {\infty}$ produced by the Prox-QN method converges at least R-linearly. This theorem follows from Theorem 2 in [@nonsmooth], where the coordinate block $J_k$ is chosen to be the whole coordinate set. Assumption 2(a) in [@nonsmooth] is satisfied because of Theorem 4 C4 in [@nonsmooth], under the assumption that $\nabla {\hat \ell}({\boldsymbol{z}})$ is Lipschitz-continuous. The other conditions of Theorem 2 in [@nonsmooth] can be easily verified. Quadratic Convergence of the Proximal Newton Method and the Dennis-Moré Criterion --------------------------------------------------------------------------- \[thm\_QC\_proxNewton\] For $\ell ({\boldsymbol{w}})$ satisfying CNSC-${\mathcal{T}}$ with Lipschitz-continuous second derivative $H({\boldsymbol{w}})= \nabla^2 \ell ({\boldsymbol{w}})$, the sequence $\{{\boldsymbol{w}}_t\}$ produced by the proximal Newton method in the quadratic convergence phase satisfies $$\|{\boldsymbol{z}}_{t+1}-{\boldsymbol{z}}^*\| \leq \frac{L_H}{2m} \|{\boldsymbol{z}}_t-{\boldsymbol{z}}^*\|^2 ,$$ where ${\boldsymbol{z}}^* = U^T {\boldsymbol{w}}^*$, ${\boldsymbol{z}}_t = U^T {\boldsymbol{w}}_t$, ${\boldsymbol{w}}^*$ is the optimal solution and $L_H$ is the Lipschitz constant for $H({\boldsymbol{w}})$. 
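The earlier claim that a BFGS sequence started from $B_0 = U {\hat B}_0 U^T$ stays in the ${\mathcal{T}}$ space and agrees with the reduced update can be checked numerically. The following NumPy sketch is an illustration only: the dimensions, the random data and the construction of $U$ are assumptions made for the check, not part of the algorithm. It uses a $y$ lying in ${\mathcal{T}}$, as is guaranteed when $y$ is a difference of gradients of $\hat\ell$.

```python
import numpy as np

def bfgs_update(B, s, y):
    # One standard BFGS update of the Hessian approximation B.
    Bs = B @ s
    return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (y @ s)

rng = np.random.default_rng(0)
d, d_hat = 5, 3
# Orthonormal basis of the T space (illustrative choice).
U, _ = np.linalg.qr(rng.standard_normal((d, d_hat)))

B_hat = np.eye(d_hat)            # reduced-space start, so B0 = U B_hat U^T
B = U @ B_hat @ U.T
s = rng.standard_normal(d)       # a step w_t - w_{t-1}
y = 2.0 * (U @ (U.T @ s))        # gradient difference; it always lies in T

B_next = bfgs_update(B, s, y)
B_hat_next = bfgs_update(B_hat, U.T @ s, U.T @ y)

# The full-space update stays in the T space and matches the reduced update.
assert np.allclose(B_next, U @ B_hat_next @ U.T)
```

The assertion is exactly the identity proved by induction above: applying the BFGS formula to $B_t$ in the full space is the same as applying it to ${\hat B}_t$ in the reduced space and mapping back with $U$.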
\[dennis\_more\] If $B_0 = U{\hat B}_0 U^T$ satisfies the CNSC-${\mathcal{T}}$ condition, then ${\hat B}_t$ generated by the BFGS update above satisfies the Dennis-Moré criterion [@dennis], namely, $$\lim_{t\rightarrow\infty} \frac{\|({\hat B}_t-{\hat H}^*)({\boldsymbol{z}}_{t+1}-{\boldsymbol{z}}_t)\|}{\|{\boldsymbol{z}}_{t+1}-{\boldsymbol{z}}_t\|} = 0,$$ where $ {\hat H}^* = \nabla^2 {\hat \ell}({\boldsymbol{z}}^*)$ and ${\boldsymbol{z}}^*$ is the optimal solution of the reduced problem. The proof follows that of Theorem 6.6 in [@L-BFGS]; we verify that its conditions are satisfied here. First, the Lipschitz continuity of ${\hat H}({\boldsymbol{z}})$ is implied by the Lipschitz continuity of $H({\boldsymbol{w}})$: $$\begin{aligned} \| {\hat H}({\boldsymbol{z}}_1) - {\hat H}({\boldsymbol{z}}_2) \| &= \| U^T(H({\boldsymbol{w}}_1) - H({\boldsymbol{w}}_2))U \| \\ &\leq \| H({\boldsymbol{w}}_1) - H({\boldsymbol{w}}_2) \| \\ &=\| H(U {\boldsymbol{z}}_1) - H(U {\boldsymbol{z}}_2) \| \\ & \leq L_H\| {\boldsymbol{z}}_1 - {\boldsymbol{z}}_2 \|\end{aligned}$$ where the last inequality is from the Lipschitz continuity of $H({\boldsymbol{w}})$. The second condition, $\sum_{t =0}^{\infty} \|{\boldsymbol{z}}_t - {\boldsymbol{z}}^* \| < \infty$, is implied by the global linear convergence (Lemma \[global\_linear\]). Asymptotic Superlinear Convergence {#proof_theorem1_2} ---------------------------------- **Proof of Theorem 1** If $B_t$ satisfies the CNSC-${\mathcal{T}}$ condition, then ${\hat B}_t$ satisfies $m \| {\boldsymbol{z}}\|^2 \leq {\boldsymbol{z}}^T {\hat B}_t {\boldsymbol{z}}\leq M \| {\boldsymbol{z}}\|^2$ for any ${\boldsymbol{z}}\in \rR^{\hat d}$. The Lipschitz-continuous $H$ implies Lipschitz continuity of ${\hat H}$. Therefore, by applying the Prox-QN method in the reduced space, this theorem follows from Theorem 3.7 in [@newtontype_theory], Lemma \[dennis\_more\] and Lemma \[thm\_QC\_proxNewton\]. 
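To make the iteration concrete, here is a minimal sketch of one proximal (quasi-)Newton step in which the inner problem is solved by cyclic coordinate descent with soft-thresholding. The toy smooth loss, the numbers and the helper names are assumptions made for the sake of a verifiable example; with $B = H = I$ the step is exact, so the output can be compared with the closed-form solution.

```python
import numpy as np

def soft_threshold(x, tau):
    # S(x, tau) = sign(x) * max(|x| - tau, 0)
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def prox_newton_step(w, g, B, lam, n_sweeps=100):
    """Solve argmin_v lam*||v||_1 + 0.5 (v-w)^T B (v-w) + g^T (v-w)
    by cyclic coordinate descent on d = v - w."""
    d = np.zeros_like(w)
    for _ in range(n_sweeps):
        for j in range(len(w)):
            a = B[j, j]
            b = g[j] + B[j] @ d
            c = w[j] + d[j]
            z = -c + soft_threshold(c - b / a, lam / a)
            d[j] += z
    return w + d

# Toy smooth loss: l(w) = 0.5 ||w - t||^2, so g(w) = w - t and H = I.
t = np.array([2.0, -0.3, 1.0])
lam = 0.5
w = np.zeros(3)
w = prox_newton_step(w, w - t, np.eye(3), lam)

# With B = H = I the step is exact: w equals the soft-thresholded target.
assert np.allclose(w, soft_threshold(t, lam))
```

With a non-diagonal $B$ the coordinates couple and several sweeps are needed; the diagonal case here decouples after one sweep, which is what makes the analytic comparison possible.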
**Proof of Theorem 2** We prove this theorem by showing $| \ell({\boldsymbol{w}}_t) - \ell({\boldsymbol{w}}^*) | \leq L_{\ell} \| {\boldsymbol{z}}_t - {\boldsymbol{z}}^* \| $ and $ \|{\boldsymbol{w}}_t \|_1- \| {\boldsymbol{w}}^* \|_1 \leq \sqrt{d} \|{\boldsymbol{z}}_t - {\boldsymbol{z}}^*\|$. The first part is given by $$| \ell({\boldsymbol{w}}_t) - \ell({\boldsymbol{w}}^*) | = | \ell(UU^T {\boldsymbol{w}}_t) - \ell(UU^T{\boldsymbol{w}}^*) | \leq L_{\ell} \| UU^T ({\boldsymbol{w}}_t - {\boldsymbol{w}}^*) \| =L_{\ell} \| {\boldsymbol{z}}_t - {\boldsymbol{z}}^* \|$$ where the inequality comes from the Lipschitz continuity of $\ell({\boldsymbol{w}})$. In the super-linear convergence phase, the unit step size is chosen, so each iterate satisfies the relationship between ${\boldsymbol{w}}^+$ and ${\boldsymbol{z}}^+$ above. We have $\| {\boldsymbol{w}}_t \|_1 \leq \| UU^T {\boldsymbol{w}}_t + (I-UU^T) {\boldsymbol{w}}^* \|_1 $. Moreover, due to the Lipschitz continuity of the $\ell_1$ norm, namely $\|{\boldsymbol{w}}\|_1 - \|{\boldsymbol{v}}\|_1 \leq \sqrt{d} \|{\boldsymbol{w}}- {\boldsymbol{v}}\| $, we have $$\begin{aligned} \| UU^T {\boldsymbol{w}}_t + (I-UU^T) {\boldsymbol{w}}^* \|_1 & \leq \| {\boldsymbol{w}}^* \|_1 + \sqrt{d} \| UU^T {\boldsymbol{w}}_t - UU^T {\boldsymbol{w}}^* \| \\ &\leq \| {\boldsymbol{w}}^* \|_1 + \sqrt{d} \| {\boldsymbol{z}}_t - {\boldsymbol{z}}^* \|\end{aligned}$$ Global Convergence with Shrinking {#shrinking_proof} --------------------------------- In Theorem 1, we assume the shrinking strategy is not employed and the inner problem is solved exactly. In this subsection, we show that, assuming only that the inner problem is solved exactly, the Prox-QN method with shrinking will still globally converge to the optimum under the CNSC-${\mathcal{T}}$ condition. We first prove that with a sufficiently small step size, the Armijo rule is satisfied. 
\[armijo\_rule\] If the step size satisfies $$\alpha \leq \min{\{ 1, \frac{m}{L_1}(1-\sigma) \}},$$ then the Armijo rule is satisfied, i.e., $$f({\boldsymbol{w}}+\alpha {\boldsymbol{d}}) \leq f({\boldsymbol{w}}) + \alpha \sigma(\lambda \|{\boldsymbol{w}}+ {\boldsymbol{d}}\|_1 - \lambda \|{\boldsymbol{w}}\|_1 + {\boldsymbol{g}}^T {\boldsymbol{d}})$$ where $L_1$ is the Lipschitz constant of $\nabla \ell$. Let ${\boldsymbol{w}}^+ = {\boldsymbol{w}}+ \alpha {\boldsymbol{d}}$, $$\begin{aligned} f({\boldsymbol{w}}^+) - f({\boldsymbol{w}}) &= \ell({\boldsymbol{w}}^+) - \ell({\boldsymbol{w}}) + \lambda (\|{\boldsymbol{w}}^+\|_1 - \|{\boldsymbol{w}}\|_1) \\ & \leq \int_0^1 \nabla \ell({\boldsymbol{w}}+ s \alpha {\boldsymbol{d}})^T (\alpha {\boldsymbol{d}})ds + \alpha \lambda \|{\boldsymbol{w}}+ {\boldsymbol{d}}\|_1 +(1-\alpha ) \lambda \|{\boldsymbol{w}}\|_1 - \lambda \|{\boldsymbol{w}}\|_1 \\ & = \alpha (\nabla \ell({\boldsymbol{w}})^T {\boldsymbol{d}}+ \lambda \|{\boldsymbol{w}}+ {\boldsymbol{d}}\|_1 - \lambda \|{\boldsymbol{w}}\|_1) + \alpha \int_0^1{\boldsymbol{d}}^T (\nabla \ell({\boldsymbol{w}}+ s \alpha {\boldsymbol{d}}) - \nabla \ell({\boldsymbol{w}})) ds \\ & \leq \alpha (\nabla \ell({\boldsymbol{w}})^T {\boldsymbol{d}}+ \lambda \|{\boldsymbol{w}}+ {\boldsymbol{d}}\|_1 - \lambda \|{\boldsymbol{w}}\|_1) + \alpha \int_0^1 \|U^T {\boldsymbol{d}}\| \| \nabla \ell({\boldsymbol{w}}+ s \alpha {\boldsymbol{d}}) - \nabla \ell({\boldsymbol{w}})\| ds \end{aligned}$$ Because $$\| \nabla \ell({\boldsymbol{w}}+ s \alpha {\boldsymbol{d}}) - \nabla \ell({\boldsymbol{w}})\| = \| \nabla \ell(UU^T {\boldsymbol{w}}+ s \alpha UU^T {\boldsymbol{d}}) - \nabla \ell(UU^T {\boldsymbol{w}})\| \leq s \alpha L_1 \|U^T{\boldsymbol{d}}\|$$ we have $$\begin{aligned} f({\boldsymbol{w}}^+) - f({\boldsymbol{w}}) & \leq \alpha \left( (\nabla \ell({\boldsymbol{w}})^T {\boldsymbol{d}}+ \lambda \|{\boldsymbol{w}}+ {\boldsymbol{d}}\|_1 - \lambda \|{\boldsymbol{w}}\|_1) + \frac{L_1 \alpha}{2} \|U^T{\boldsymbol{d}}\|^2 
\right)\end{aligned}$$ For $\alpha \leq \min{\{ 1, \frac{m}{L_1}(1-\sigma) \}}$, $$\frac{L_1 \alpha}{2} \|U^T{\boldsymbol{d}}\|^2 \leq \frac{m}{2}(1-\sigma) \|U^T{\boldsymbol{d}}\|^2 \leq \frac{1-\sigma}{2} {\boldsymbol{d}}^T B {\boldsymbol{d}}$$ As ${\boldsymbol{d}}$ minimizes Eq. (2) in the main paper, we have $\frac{1}{2} {\boldsymbol{d}}^T B {\boldsymbol{d}}\leq - (\nabla \ell({\boldsymbol{w}})^T {\boldsymbol{d}}+ \lambda \|{\boldsymbol{w}}+ {\boldsymbol{d}}\|_1 - \lambda \|{\boldsymbol{w}}\|_1)$. So we obtain the sufficient descent condition, $$f({\boldsymbol{w}}^+) - f({\boldsymbol{w}}) \leq \alpha \sigma \left( \nabla \ell({\boldsymbol{w}})^T {\boldsymbol{d}}+ \lambda \|{\boldsymbol{w}}+ {\boldsymbol{d}}\|_1 - \lambda \|{\boldsymbol{w}}\|_1 \right)$$ Assume $\nabla^2 \ell ({\boldsymbol{w}}) $ and $\nabla \ell({\boldsymbol{w}})$ are Lipschitz continuous. Let $\{ B_t \}_{t=1,2,3...}$ be the matrices generated by the BFGS update. Then if $ \ell ({\boldsymbol{w}}) $ and $B_t$ satisfy the CNSC-${\mathcal{T}}$ condition and the inner problem is solved exactly, the proximal quasi-Newton method with shrinking converges globally. Our algorithm allows all the variables to re-enter the working set at the beginning of each epoch, and before it terminates all the variables must be checked. Thus as many epochs as needed are taken in the optimization procedure until the global stopping criterion is attained. Let $\{t_k\}_{k=0,1,2,\ldots}$ denote the iterations at which an epoch begins. In these iterations, all the variables are taken into consideration. As shown in Lemma \[armijo\_rule\], there exists some constant $\alpha_0$ such that $$f({\boldsymbol{w}}_{t_k+1}) - f({\boldsymbol{w}}_{t_k}) \leq \alpha_0 \sigma \left( \nabla \ell({\boldsymbol{w}}_{t_k})^T {\boldsymbol{d}}_{t_k} + \lambda \|{\boldsymbol{w}}_{t_k} + {\boldsymbol{d}}_{t_k} \|_1 - \lambda \|{\boldsymbol{w}}_{t_k} \|_1 \right)$$ Moreover, within each epoch the function value is non-increasing across the iterations, i.e. 
for any $k$, $f({\boldsymbol{w}}_{t_{k+1}}) \leq f({\boldsymbol{w}}_{t_{k}+1}) $. Thus, we have $$f({\boldsymbol{w}}_{t_K+1}) - f({\boldsymbol{w}}_{t_0}) \leq \sum_{k=0}^K \left( f({\boldsymbol{w}}_{t_k+1}) - f({\boldsymbol{w}}_{t_k}) \right) \leq - \frac{\alpha_0 \sigma}{2} \sum_{k=0}^K{\boldsymbol{d}}_{t_k}^T B_{t_k} {\boldsymbol{d}}_{t_k}$$ where the second inequality uses the bound $\frac{1}{2} {\boldsymbol{d}}^T B {\boldsymbol{d}}\leq - (\nabla \ell({\boldsymbol{w}})^T {\boldsymbol{d}}+ \lambda \|{\boldsymbol{w}}+ {\boldsymbol{d}}\|_1 - \lambda \|{\boldsymbol{w}}\|_1)$ from the proof of Lemma \[armijo\_rule\]. As $f({\boldsymbol{w}}_{t_K+1}) - f({\boldsymbol{w}}_{t_0}) > -\infty$, $\lim_{k \rightarrow \infty} {\boldsymbol{d}}_{t_k}^T B_{t_k} {\boldsymbol{d}}_{t_k} = 0$. Thus, by the CNSC-${\mathcal{T}}$ condition, $U^T {\boldsymbol{d}}_{t_k} \rightarrow {\boldsymbol{0}}$. That is, any limit point of $\{{\boldsymbol{d}}_{t_k}\}$ lies in ${\mathcal{T}}^{\perp}$. If ${\boldsymbol{d}}_{t} \in {\mathcal{T}}^{\perp}$, the line search procedure always picks the unit step size, and in the next iteration ${\boldsymbol{d}}_{t+1} = 0$. So when $U^T {\boldsymbol{d}}_{t_k} \rightarrow {\boldsymbol{0}}$, we also have ${\boldsymbol{d}}_{t_k} \rightarrow {\boldsymbol{0}}$. Therefore, ${\boldsymbol{w}}_{t_k}$ converges to the optimum according to Proposition 2.5 in [@newtontype_theory]. Algorithm Details {#detail_alg} ================= **Input:** observations $\{ {\boldsymbol{x}}^{(i)} \}_{i=1,2,...,N}$, labels $\{ {\boldsymbol{y}}^{(i)} \}_{i=1,2,...,N}$, termination criterion $\epsilon$, regularization parameter $\lambda$ and L-BFGS memory size $m$. **Output:** ${\boldsymbol{w}}^*$ converging to $\text{arg min}_{{\boldsymbol{w}}} f({\boldsymbol{w}}) $. Initialize $\gamma =1$, ${\boldsymbol{w}}\leftarrow \mathbf{0}$, ${\boldsymbol{g}}\leftarrow \partial \ell({\boldsymbol{w}}) / \partial {\boldsymbol{w}}$, working set ${\mathcal{A}}\leftarrow \{ 1,2,...d \}$, $\hat M \leftarrow \infty$, and $S$, $Y$, $Q$, $\hat Q \leftarrow \emptyset$. 
$\hat {\mathcal{A}}\leftarrow {\mathcal{A}}$, ${\mathcal{A}}\leftarrow \emptyset$, $M \leftarrow 0$. Calculate $\partial_j f$ by $$\label{subg} \partial_j f({\boldsymbol{w}}) = \begin{cases} g_j + \text{sgn}(w_j)\lambda & \text{if }w_j \neq 0 \\ \text{sgn}( g_j )\max\{ |g_j| - \lambda,0 \} & \text{if } w_j =0 \end{cases}$$ ${\mathcal{A}}\leftarrow {\mathcal{A}}\cup \{j\}$, $ M \leftarrow \max\{ M,|\partial_j f |\}$ $\hat M \leftarrow M$ return ${\boldsymbol{w}}$ ${\boldsymbol{g}}\leftarrow \partial \ell({\boldsymbol{w}}) / \partial {\boldsymbol{w}}$, ${\mathcal{A}}\leftarrow \{ 1,2,...d \}$ and $S$, $Y$, $Q$, $\hat Q \leftarrow \emptyset$ Update the shrinking stopping criterion and then continue ${\boldsymbol{d}}\leftarrow \mathbf{0}$, $\hat {\boldsymbol{d}}\leftarrow \mathbf{0}$ Compute $inner\_iter = \min\{ max\_inner, \lfloor \frac{d}{|{\mathcal{A}}|} \rfloor \}$ $B_{jj} = \gamma - {\boldsymbol{q}}^T_j\hat {\boldsymbol{q}}_j$, $(Bd)_j = \gamma d_j - {\boldsymbol{q}}_j^T \hat {\boldsymbol{d}}$ $a=(B_t)_{jj} $, $b=({\boldsymbol{g}}_t)_j + (B_t {\boldsymbol{d}})_j$ and $c= ({\boldsymbol{w}}_t)_j + d_j$ Compute $z$ according to $z = -c +\mathcal{S} (c-b/a,\lambda/a)$ $d_j \leftarrow d_j+z$, $\hat {\boldsymbol{d}}\leftarrow \hat {\boldsymbol{d}}+ z \hat {\boldsymbol{q}}_j$ break $g^{new}_j = \partial \ell({\boldsymbol{w}})/\partial w_j$, $y_j = g^{new}_j - g_j$, $s_j = \alpha d_j$, $g_j = g^{new}_j$ Update $S$, $Y$ and $Q$ just on the rows corresponding to ${\mathcal{A}}$. Update $\gamma$, $D$, $L$, $S^TS$, where the inner product between $\mathbf{s}$ and another vector is computed just over ${\mathcal{A}}$. Update $R$ and then update $\hat Q$ just on the columns corresponding to ${\mathcal{A}}$. 
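The working-set test in the algorithm above is driven by the minimum-norm subgradient of Equation \[subg\]. The following NumPy sketch (the toy loss and all numbers are invented for illustration) shows this quantity vanishing exactly at an $\ell_1$-regularized optimum, which is what makes $M = \max_j |\partial_j f|$ usable as a stopping criterion.

```python
import numpy as np

def minimum_norm_subgradient(w, g, lam):
    """Equation (subg): minimum-norm element of the subdifferential of
    f(w) = l(w) + lam * ||w||_1, given the smooth gradient g."""
    return np.where(w != 0,
                    g + np.sign(w) * lam,
                    np.sign(g) * np.maximum(np.abs(g) - lam, 0.0))

# Toy smooth loss l(w) = 0.5 ||w - t||^2; its l1-regularized minimizer
# is the soft-thresholded target.
t = np.array([2.0, -0.3, 1.0])
lam = 0.5
w_star = np.sign(t) * np.maximum(np.abs(t) - lam, 0.0)
g = w_star - t                    # gradient of the toy smooth loss at w_star

# Every component vanishes, so no coordinate enters the working set.
assert np.allclose(minimum_norm_subgradient(w_star, g, lam), 0.0)
```

At a non-optimal point at least one component is nonzero, so the corresponding coordinate is put back into ${\mathcal{A}}$ and $M$ stays above the termination threshold.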
Proof of Theorem 3 {#crfCNSC} ================== The Hessian of $\ell({\boldsymbol{w}})$ for CRF MLEs is $$\label{CRF_Hessian} H = \sum_{i=1}^{N} \left( E \left[ \phi({\boldsymbol{y}},{\boldsymbol{x}}^{(i)}) \phi({\boldsymbol{y}},{\boldsymbol{x}}^{(i)})^T \right]-E \left[ \phi({\boldsymbol{y}},{\boldsymbol{x}}^{(i)} )\right] E \left[ \phi({\boldsymbol{y}},{\boldsymbol{x}}^{(i)} )\right]^T \right),$$ where $\phi({\boldsymbol{y}},{\boldsymbol{x}}^{(i)}) = \left[ f_1( {\boldsymbol{y}},{\boldsymbol{x}}^{(i)} ),f_2( {\boldsymbol{y}},{\boldsymbol{x}}^{(i)} ),...,f_d( {\boldsymbol{y}},{\boldsymbol{x}}^{(i)} ) \right]^T $ and $E$ is the expectation over the conditional probability $P_{{\boldsymbol{w}}} ({\boldsymbol{y}}| {\boldsymbol{x}}^{(i)})$. We now reformulate this as $$H = \Phi D \Phi^T.$$ Here $D \in \rR^{ (N|{\mathcal{Y}}|) \times (N|{\mathcal{Y}}|)}$ is a diagonal matrix with diagonal elements $D_{nn} = P_{{\boldsymbol{w}}} ({\boldsymbol{y}}_l | {\boldsymbol{x}}^{(i)}) $, where $n = (i-1) |{{\mathcal{Y}}}| + l$ and $l = 1,2,..,|{\mathcal{Y}}|$. $\Phi$ is a $d \times (N|{\mathcal{Y}}|) $ matrix whose column $n$ is defined as $\Phi_n = \phi({\boldsymbol{y}}_l,{\boldsymbol{x}}^{(i)}) - E \left[ \phi({\boldsymbol{y}},{\boldsymbol{x}}^{(i)} )\right] $ for $n = (i-1)|{\mathcal{Y}}| + l$. The theorem holds for the following four reasons.\ **a. ${\mathcal{N}}$ is constant with respect to ${\boldsymbol{w}}$.**\ ${\mathcal{N}}$ can be characterized as $$\label{charN} {\mathcal{N}}= \{ {\boldsymbol{a}}\in \rR^{d} |\forall i, \exists \text{ some constant } b_i, \langle {\boldsymbol{a}},\phi({\boldsymbol{y}},{\boldsymbol{x}}^{(i)}) \rangle = b_i \text{ for } \forall {\boldsymbol{y}}\}$$ Thus ${\mathcal{N}}$ is independent of ${\boldsymbol{w}}$ and so is ${\mathcal{T}}$.\ **b. $\ell({\boldsymbol{w}})$ depends only on ${\boldsymbol{z}}= {\mathbf{proj}}_{{\mathcal{T}}}({\boldsymbol{w}})$.**\ Let ${\boldsymbol{w}}= {\boldsymbol{z}}+{\boldsymbol{u}}$. 
Then ${\boldsymbol{u}}\in {\mathcal{N}}$. $$\begin{aligned} P_{{\boldsymbol{w}}} ({\boldsymbol{y}}^{(i)} | {\boldsymbol{x}}^{(i)}) &= \frac{\exp\left\{ \langle {\boldsymbol{w}},\phi({\boldsymbol{y}}^{(i)},{\boldsymbol{x}}^{(i)}) \rangle \right\}}{\sum_{{\boldsymbol{y}}}\exp\left\{ \langle {\boldsymbol{w}},\phi({\boldsymbol{y}},{\boldsymbol{x}}^{(i)}) \rangle\right\}} \\ &= \frac{\exp\left\{ \langle {\boldsymbol{z}},\phi({\boldsymbol{y}}^{(i)},{\boldsymbol{x}}^{(i)}) \rangle \right\} \exp\left\{ \langle {\boldsymbol{u}},\phi({\boldsymbol{y}}^{(i)},{\boldsymbol{x}}^{(i)}) \rangle \right\}}{\sum_{{\boldsymbol{y}}}\exp\left\{ \langle {\boldsymbol{z}},\phi({\boldsymbol{y}},{\boldsymbol{x}}^{(i)}) \rangle \right\}\exp\left\{ \langle {\boldsymbol{u}},\phi({\boldsymbol{y}},{\boldsymbol{x}}^{(i)}) \rangle \right\}} \\ &= \frac{\exp\left\{ \langle {\boldsymbol{z}},\phi({\boldsymbol{y}}^{(i)},{\boldsymbol{x}}^{(i)}) \rangle \right\}}{\sum_{{\boldsymbol{y}}}\exp\left\{ \langle {\boldsymbol{z}},\phi({\boldsymbol{y}},{\boldsymbol{x}}^{(i)}) \rangle\right\}} \\\end{aligned}$$ The last equality comes from the characterization of ${\mathcal{N}}$ above.\ **c. The first property Eq. holds.**\ $D_{nn} \rightarrow 0$ only if $\|{\boldsymbol{w}}\|_1 \rightarrow \infty$, which is prohibited by the $\ell_1$ penalty. Thus there exists $m_p >0$ such that $D_{nn} \geq m_p$ for any $n$. Hence, the positive definiteness of $H$ is determined by $\Phi$.\ So we have for any ${\boldsymbol{v}}\in {\mathcal{T}}$, $$m_p \lambda_{min} (\Phi \Phi^T) \| {\boldsymbol{v}}\|^2 \leq m_p {\boldsymbol{v}}^T \Phi \Phi^T {\boldsymbol{v}}\leq {\boldsymbol{v}}^T H {\boldsymbol{v}}\leq {\boldsymbol{v}}^T \Phi \Phi^T {\boldsymbol{v}}\leq \lambda_{max} (\Phi \Phi^T) \| {\boldsymbol{v}}\|^2$$ where $\lambda_{min} (\Phi \Phi^T)$ is the minimum nonzero eigenvalue of $\Phi \Phi^T$ and $\lambda_{max} (\Phi \Phi^T)$ is the maximum eigenvalue of $\Phi \Phi^T$.\ **d. The second property Eq. 
holds.**\ This property follows directly from the definition of ${\mathcal{N}}$. Gradient evaluation in sequence labeling and hierarchical classification ======================================================================== The gradients for general CRF problems are given by $$\label{grad} \frac{\partial \ell({\boldsymbol{w}})}{\partial w_k} = \sum_{i=1}^{N} \left( \sum_{{\boldsymbol{y}}\in \mathcal{Y}} P_{{\boldsymbol{w}}} ({\boldsymbol{y}}| {\boldsymbol{x}}^{(i)}) f_k({\boldsymbol{y}},{\boldsymbol{x}}^{(i)}) - f_k({\boldsymbol{y}}^{(i)},{\boldsymbol{x}}^{(i)}) \right)$$ Sequence labeling {#seq_gradient} ----------------- The partial gradients of $\ell({\boldsymbol{w}})$ for the sequence labeling problem are $$\begin{aligned} \frac{\partial l(\Theta,\Lambda)}{\partial \Theta_{y,j}} &= \sum_{i=1}^{N} \sum_{t=1}^{T^{(i)}} \left( P_{{\boldsymbol{w}}}(y_t = y|{\boldsymbol{x}}^{(i)}) -{\boldsymbol{1}}\left[ y^{(i)}_t = y\right] \right) x^{(i)}_{tj} \label{seqgrad} \\ \frac{\partial l(\Theta,\Lambda)}{\partial \Lambda_{y,y'}} &= \sum_{i=1}^{N} \sum_{t=1}^{T^{(i)}-1} \left( P_{{\boldsymbol{w}}}(y_t = y, y_{t+1} = y'|{\boldsymbol{x}}^{(i)}) - {\boldsymbol{1}}\left[ y^{(i)}_t = y, y^{(i)}_{t+1} = y'\right] \right) \label{seqgrad2}\end{aligned}$$ The forward-backward algorithm is a popular inference oracle for evaluating the marginal probabilities in the two equations above. In our OCR model, the forward-backward algorithm is $$\begin{cases} \alpha_1(y) = \exp( \Theta^T_y {\boldsymbol{x}}_1) \\ \alpha_{t+1}(y) = \sum_{y'} \alpha_t(y') \exp(\Theta^T_y {\boldsymbol{x}}_{t+1} + \Lambda_{y',y}) \end{cases}$$ $$\begin{cases} \beta_T(y) = 1 \\ \beta_{t}(y') = \sum_{y} \beta_{t+1} (y) \exp( \Theta^T_y {\boldsymbol{x}}_{t+1} + \Lambda_{y',y}) \end{cases}$$ where $\Theta^T_y$ is the $y$-th row of the matrix $\Theta$. 
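The recursions above can be sketched in a few lines of NumPy. The tiny chain, the random scores and the dimensions below are made-up illustrations, not the paper's OCR data; the check at the end uses the marginal formula $P_{{\boldsymbol{w}}}(y_t = y \mid {\boldsymbol{x}}) = \alpha_t(y)\beta_t(y)/Z_{{\boldsymbol{w}}}({\boldsymbol{x}})$, so each position's marginals must sum to one.

```python
import numpy as np

# A tiny chain CRF: T positions, K labels, p input features.
# Theta[y] scores emissions, Lam[y', y] scores transitions.
rng = np.random.default_rng(1)
T, K, p = 4, 3, 2
X = rng.standard_normal((T, p))          # inputs x_1 .. x_T
Theta = rng.standard_normal((K, p))
Lam = rng.standard_normal((K, K))

alpha = np.zeros((T, K))
beta = np.zeros((T, K))
alpha[0] = np.exp(Theta @ X[0])
for t in range(1, T):
    # alpha_{t+1}(y) = sum_{y'} alpha_t(y') exp(Theta_y^T x_{t+1} + Lam_{y',y})
    alpha[t] = (alpha[t-1] @ np.exp(Lam)) * np.exp(Theta @ X[t])
beta[T-1] = 1.0
for t in range(T-2, -1, -1):
    # beta_t(y') = sum_y beta_{t+1}(y) exp(Theta_y^T x_{t+1} + Lam_{y',y})
    beta[t] = np.exp(Lam) @ (np.exp(Theta @ X[t+1]) * beta[t+1])
Z = alpha[T-1].sum()                     # normalization factor Z_w(x)

# Unary marginals P(y_t = y | x): each row must sum to one.
P = alpha * beta / Z
assert np.allclose(P.sum(axis=1), 1.0)
```

In practice the messages are kept in log space (or rescaled per position) to avoid overflow on long sequences; the direct form above is only safe for short toy chains.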
Then the marginal conditional probabilities are given by $$\begin{aligned} P_{{\boldsymbol{w}}} (y_t = y', y_{t+1} = y | {\boldsymbol{x}}) &= \frac{1}{ Z_{{\boldsymbol{w}}} ({\boldsymbol{x}}) }\alpha_t(y') \exp( \Theta^T_y {\boldsymbol{x}}_{t+1} + \Lambda_{y',y}) \beta_{t+1}(y) \\ P_{{\boldsymbol{w}}} (y_t = y | {\boldsymbol{x}}) &= \frac{1}{ Z_{{\boldsymbol{w}}} ({\boldsymbol{x}}) }\alpha_t(y) \beta_t(y),\end{aligned}$$ where the normalization factor $Z_{{\boldsymbol{w}}} ({\boldsymbol{x}})$ can be computed by $\sum_{y} \alpha_T(y)$. Hierarchical classification {#hier_gradient} --------------------------- The partial gradients of $\ell(W)$ for the hierarchical classification problem are $$\frac{\partial \ell(W)}{\partial W_{k,j}} = \sum_{i=1}^{N} \left( \sum_{y \in {\mathcal{Y}}} {\boldsymbol{1}}\left[ k \in \text{Path}(y)\right] P_W(y|{\boldsymbol{x}}^{(i)}) - {\boldsymbol{1}}\left[ k\in \text{Path}(y^{(i)}) \right] \right)x^{(i)}_{j}$$ They can be evaluated by the downward-upward algorithm. Let $\alpha(k)$ and $\beta(k)$ be the downward message and upward message respectively. $$\begin{cases} \alpha(root) = {\boldsymbol{w}}_{root}^T {\boldsymbol{x}}\\ \alpha(k) = \alpha \left(\text{parent}(k) \right) + {\boldsymbol{w}}_k^T {\boldsymbol{x}}\end{cases}$$ $$\begin{cases} \beta(k) = \exp(\alpha(k)) / \sum_{y \in {\mathcal{Y}}} \exp(\alpha(y)) & \text{if $k$ is a leaf node}\\ \beta(k) = \sum_{k' \in \text{children}(k)} \beta(k')& \text{if $k$ is a non-leaf node} \end{cases}$$ So we have $$\label{hiergrad} \frac{\partial \ell(W)}{\partial W_{k,j}} = \sum_{i=1}^{N} \left( \beta^{(i)}(k) - {\boldsymbol{1}}\left[ k\in \text{Path}(y^{(i)}) \right] \right)x^{(i)}_{j}$$ More Experimental Results {#more_exp} ========================= Performance on different values of $\lambda$ -------------------------------------------- $\lambda$ affects the sparsity of the intermediate iterates, and hence the speed of the algorithm. 
In particular, when $\lambda$ is larger, the intermediate iterates are sparser, and the corresponding iterations, thanks to the shrinking strategy, will be faster, and vice versa. The effect of $\lambda$ on the performance is shown in this section. ### Sequence Labeling {#sequence-labeling-1} \[seq\_result1\] \[seq\_result2\] ### Hierarchical Classification {#hierarchical-classification-1} \[hier\_result1\] \[hier\_result2\] Testing Accuracy {#testing_accuracy} ---------------- The testing accuracy for different $\lambda$’s on these two problems is in the following tables. The testing accuracy across training time is shown in Figure \[test\_accuracy\]. $\lambda$ 50 100 500 ------------------ ---------- ---------- ---------- Testing Accuracy 0.834928 0.736643 0.407895 nnz of optimum 2542 1544 223 : Per-character testing accuracy for OCR dataset[]{data-label="seq_test_rmse"} $\lambda$ 0.5 1 2 ------------------ ---------- ---------- ---------- Testing Accuracy 0.262648 0.249731 0.185684 nnz of optimum 28483 4301 1505 : Testing accuracy for LSHTC1 dataset[]{data-label="hier_test_rmse"} Relative objective difference vs. the number of passes over dataset -------------------------------------------------------------------- Figure \[fig\_num\_passes\] shows the performance measured by the number of passes (iterations) over the dataset. We do not include the plot for the BCD method, because a pass over the dataset for BCD depends on the sparsity pattern of the dataset, so it is hard to define such a pass fairly. In this experiment, the shrinking strategy is not applied. ![Relative objective difference vs. the number of passes over dataset for OCR dataset with $\lambda = 100$. 
[]{data-label="fig_num_passes"}](Sequence-Labelling-iter-100.pdf "fig:"){width="3in"} [^1]: http://research.microsoft.com/en-us/downloads/b1eb1016-1738-4bd5-83a9-370c9d498a03/ [^2]: http://www.seas.upenn.edu/~taskar/ocr/ [^3]: http://lshtc.iit.demokritos.gr/node/1
[**Quantifying the sparseness of simple geodesics**]{} [**on hyperbolic surfaces**]{} [**Peter Buser, Hugo Parlier**]{} [*Abstract.*]{} The goal of the article is to provide different explicit quantifications of the non-density of simple closed geodesics on hyperbolic surfaces. In particular, we show that within any embedded metric disk on a surface there lies a disk, whose radius depends only on the topology of the surface (and the size of the first embedded disk), which is disjoint from any simple closed geodesic. Introduction {#sec:introd} ============ The set of simple closed geodesics on finitely generated hyperbolic surfaces has many remarkable properties and is related to various aspects of geometric and dynamical properties of moduli spaces and mapping class groups. Among these properties is a result of Birman and Series which states that the larger set of simple complete geodesics is nowhere dense and has Hausdorff dimension 1, the same result holding true for geodesics with uniformly bounded self-intersection number [@Birman-Series]. This result, which might seem surprising at first in light of other phenomena, answered a question raised by Jorgensen [@Jorgensen], who first exhibited surfaces with non-dense sets of simple closed geodesics. It is easy to see that this is really a feature of negative curvature: for instance, simple complete geodesics on a flat torus leaving from a given point cover the entire surface and the closed ones are dense (even though they form a measure $0$ subset of the surface). Of course on a flat torus all closed geodesics are simple and similarly, on a hyperbolic surface, the set of [*all*]{} closed geodesics is not only dense in the surface but also dense in the unit tangent bundle. This is one of many instances of how simple geodesics are rare within the set of all closed geodesics. Another prime example concerns the growth of the number of curves on hyperbolic surfaces. 
By Huber’s asymptotic law [@Huber], the number of closed geodesics of length less than $L$ grows asymptotically like $\sfrac{e^L}{L}$. In contrast, Mirzakhani showed that simple geodesics have very different asymptotic growth as they grow polynomially in $L$ with leading term on the order of $L^{6g-6+n}$ where $g$ is the genus and $n$ the number of cusps [@Mirzakhani]. Her results show much more than just asymptotic growth and relate the growth function to the underlying moduli space in many ways. In particular the exact asymptotic behavior depends not only on the topology but also on the underlying geometry of the surface. That the number of simple curves is bounded above by a polynomial function of length was already one of the key arguments in the results of Birman and Series, and the correct rough asymptotic growth was first proved by Rivin [@Rivin1]. More generally, a polynomial upper bound holds for curves with bounded intersection number, and there has been a flurry of results showing asymptotic growth for such curves in more general contexts [@Chas; @Erlandsson-Souto; @Mirzakhani2; @Rivin2; @Sapir]. In this article we quantify the non-density of simple complete geodesics. The Birman-Series result tells us that given any hyperbolic surface, in any neighborhood of a point, there is a small disk entirely untouched by any simple complete geodesic. The type of questions we aim to answer are: How large can you take that disk to be? What is the size of the largest disk disjoint from all simple complete geodesics? The set of points of $S$ that lie on a simple complete geodesic will be denoted ${{\mathcal{BS}}}(S)$ or simply ${{\mathcal{BS}}}$ and we shall sometimes refer to this set as the Birman-Series set. Given a surface $S$, we can look at the radius of the largest disk in the complement of ${{\mathcal{BS}}}$. Given a moduli space ${{\mathcal M}}$, the size of this “maximal gap” is a function over ${{\mathcal M}}$ and in fact is continuous. 
Using this and a computation of what happens towards the boundary of moduli spaces, one can show that there is a positive lower bound for $G_S$ which only depends on the topology of the underlying surface (some of the details can be found in [@BuserParlier1]). In particular, this shows the existence of a constant $K_{g}>0$ such that any closed hyperbolic surface of genus $g$ has a gap of size $K_{g}$ (or similarly the existence of a constant $K_{g,n}>0$ such that any genus $g$ surface with $n$ cusps has a gap of size $K_{g,n}$). One might hope to find a universal lower bound on the size of gaps but in fact this is impossible. Take any $\varepsilon$-dense but finite set of closed geodesics on a closed surface. Then a theorem of Scott [@Scott] asserts that there is a finite cover where all lifts of the closed geodesics are simple. In the cover, the simple geodesics reproduce the $\varepsilon$-density. So as there are no universal positive lower bounds on $K_g$, one of our underlying goals is to quantify the constant $K_g$ in terms of $g$. Our first approach to this leads to a precise computation for surfaces in the thin part of moduli space following the natural thick-thin decomposition of moduli space with the [*systole*]{} function. The [*systole*]{} ${{\rm sys}}(S)$ of a finite type hyperbolic surface $S$ is the length of a non-trivial curve of minimal length (by non-trivial we mean not homotopically trivial and not peripheral to the boundary). Surfaces with systole at least $\varepsilon>0$ are said to lie in the $\varepsilon$-thick part of moduli space. A first step toward proving the above theorem is to show that (closed) surfaces with systole below a certain threshold have a gap of that same size. \[thm:thinsurfaces-i\] Let $a_g = \frac{1}{4\cdot (4 \pi (g-1))^2}$. If ${{\rm sys}}(S) \leq a_g$, then $C_S\geq a_g$. The next step is to deal with surfaces with systole length bounded below. 
To do so we show the following local result which essentially says that given an embedded disk on the surface, there is a quantifiable gap of a certain radius within that disk. The radius only depends on the topology of the underlying surface (and the size of the initial disk of course). \[thm:lq1-i\] Assume $s = \min\{\frac{1}{2}{{\rm sys}}(S),\frac{1}{3} \}$. Then for any $\rho \leq s$ and any disk $B_{\rho}$ of radius $\rho$ in $S$ there exists a point $p \in B_{\rho}$ such that $${\mathrm{dist}}(p,{{\mathcal{BS}}}) \geq \rho^2 e^{-M(g-1)},$$ where $M$ is an explicit constant that depends only on $s$. The constant $M$ can be taken to be $\frac{194}{s^2} \log(\frac{134}{s})$ and together with the result for thin surfaces, this provides a quantifiable lower bound on $K_g$. Using the same techniques it also holds for (sufficiently thick) surfaces with cusps. Our final goal is to establish a quantified local result which does not depend on systole length. It implies an explicit bound on the constant $K_{g,n}$ discussed previously. \[thm:lq3-i\] Let $S$ be a hyperbolic surface of genus $g$ with $n$ cusps and $B_{\rho}$ a disk of radius $\rho$ in the $\varepsilon$-thick part of $S$, where $0 < \rho < \varepsilon \leq \sfrac{1}{3}$. Then there exists a point $p \in B_{\rho}$ such that $${\mathrm{dist}}(p,{{\mathcal{BS}}}) \geq e^{-3^\kappa R},$$ where $\kappa = 3g-3+n$ and $R = 2 \log \frac{1}{\rho} + M(g-1+\sfrac{n}{2})$ with $M=\frac{195}{\varepsilon^2}\log \frac{134}{\varepsilon}$. Note that in the above, $B_{\rho}$ always lies in the $\rho$-thick part, so in particular the result holds with $\varepsilon = \rho$. Our local estimates without prior assumptions on systole length are weaker. This comes from our method in which we use a sort of classification of short curves: if they are sufficiently short with respect to the other short curves we treat them like cusps and if not we treat them as “short but not too short curves”. 
We end this introduction with some images of the Birman-Series set for genus $2$ surfaces illustrating its intricacy; although these geodesics are nowhere dense, the gaps are already quite small. [Figure \[Fig:Bolza\]: six families of simple closed geodesics on the Bolza surface, labelled $m=48,\; t=21+15\sqrt{2}$; $m=24,\; t=33+23\sqrt{2}$; $m=48,\; t=109+77\sqrt{2}$; $m=96,\; t=149+105\sqrt{2}$; $m=48,\; t=273+193\sqrt{2}$; $m=24,\; t=1991+1408\sqrt{2}$.] Figure \[Fig:Bolza\] shows a number of simple closed geodesics on the regular fundamental domain of the Bolza surface, the genus 2 Riemann surface with the maximal number of symmetries. Originally the idea was to show all the simple closed geodesics up to length roughly 15 on the same fundamental domain. However, even under optimal printing conditions, the fundamental domain came out evenly black. We have therefore split up the geodesics into families where all members of a family have the same length. Figure \[Fig:Bolza\] shows a few. In this figure, $m$ is the multiplicity, i.e. the number of geodesics in the family, and $t = \cosh(\sfrac{\ell(\gamma)}{2})$ is half the trace of the conjugacy class in the Fuchsian group that represents a closed geodesic of length $\ell(\gamma)$. Figure \[Fig:BolzaP\] shows roughly the first (ordered by length) three hundred simple closed geodesics on two other genus 2 surfaces. These surfaces were obtained by perturbing Fenchel-Nielsen length and twist parameters of the Bolza surface. The size of the largest gaps appears to grow. Although we have no real evidence other than these figures, we wonder if the Bolza surface might be the surface where the largest gaps are the smallest. [**Organization.**]{} The article is organized as follows. In Section 2 we prove Theorem \[thm:thinsurfaces-i\], denoted Theorem \[thm:thinsurfaces\] in the sequel.
Section 3 is dedicated to the proof of Theorem \[thm:lq1-i\], referred to later as Theorem \[thm:lq1\]. Thin surfaces and surfaces with cusps are treated in Section 4 where we prove Theorem \[thm:lq3-i\], relabelled as Theorem \[thm:lq3\]. The article is concluded by an appendix which contains two technical results somewhat different in nature from the rest of the article. [**Acknowledgements.**]{} We heartily thank Chris Judge, Manuel Racle, Klaus-Dieter Semmler and Caroline Series for enlightening conversations and their encouragement. Gaps on thin surfaces {#sec:gapthn} ===================== In this part we show the following result, where $a_g = \frac{1}{4\cdot (4 \pi (g-1))^2}$. \[thm:thinsurfaces\] If ${{\rm sys}}(S) \leq a_g$, then $C_S\geq a_g$. We shall show explicitly where on the surface a “forbidden disk” with the indicated radius may be found. For this we first construct a certain pair of pants on $S$. \[lem:lengthbound\]Let $\gamma$ be a simple closed geodesic on $S$ of length $\ell(\gamma) < \frac12$. Then there exists a pair of pants $Y \subset S$ with boundary geodesics $\gamma$, $\gamma_1$, $\gamma_2$ such that $$\label{eq:lengthbound}\cosh(\tfrac12 \ell(\gamma_i))< \frac{\sinh(\frac12 \ell(\gamma))}{\ell(\gamma)}\cdot 4 \pi (g-1), \quad i=1,2$$ Before giving the proof we note that the length bound of $\frac{1}{2}$ on $\gamma$ can be replaced by ${{\,\rm arcsinh}}(1)$ which is “optimal" for the argument we give. We adapt an argument from [@BuserBook Section 5.2]. Cut $S$ open along $\gamma$ into a bordered surface $\tilde{S}$ (consisting of either one or two connected components) with copies $\gamma'$, $\gamma''$ of $\gamma$ on the boundary. Take the component $S'$ of $\tilde{S}$ that has $\gamma'$ on the boundary and look at the sets $C(r) = \{ p \in S' \mid {\mathrm{dist}}(p,\gamma')\leq r \}$ for $r>0$. For small $r$ this set is an annulus and it does not intersect $\gamma''$. 
As we let $r$ grow, some first value $r_\gamma$ will be reached where one of these two properties ceases to hold. Now, the injectivity radius near $\gamma''$ is far too small to allow the $C(r)$ to come close to $\gamma''$ as long as $r< r_\gamma$. (Their boundaries are simple closed curves of geodesic curvature smaller than the curvature of a horocycle). Hence, it’s the annulus property that ceases to hold for $C(r_\gamma)$. The rest is exactly as in [@BuserBook Section 5.2]: There are two geodesic segments of length $r_\gamma$ emanating orthogonally from $\gamma'$ and meeting each other under the angle $\pi$ at their endpoints, thus forming a smooth geodesic arc $\eta$ of length $2r_\gamma$. The endpoints of $\eta$ on $\gamma'$ dissect $\gamma'$ into two arcs $c_1$, $c_2$. The closed curves $c_1 \eta$ and $\eta^{-1}c_2$ are freely homotopic to simple closed geodesics $\gamma_1$, $\gamma_2$ that together with $\gamma'$ form the boundary of a pair of pants $Y \subset S'$ and by [**formula [@BuserBook 2.3.4(i)]**]{} the lengths satisfy $\cosh(\tfrac12 \ell(\gamma_i)) = \sinh(r_\gamma)\sinh(\tfrac12 c_i)< \sinh(r_\gamma)\sinh(\tfrac12 \ell(\gamma))$ for $i=1,2$. Since the interior of $C(r_\gamma)$ is still an annulus we have ${{\rm area}}(C(r_\gamma)) = \ell(\gamma) \sinh(r_\gamma) < {{\rm area}}(S) = 4\pi(g-1)$ and the lemma follows. Throughout the paper we will be using hyperbolic trigonometry. As in the above proof, we will always refer to formula numbers from [@BuserBook]. We also need an extension of the usual collar lemma [@BuserBook chapter 4] that includes complete non closed simple geodesics. For any simple closed geodesic $\gamma$ on $S$ the *width* is the quantity $$\label{eq:width}w_\gamma = {{\,\rm arcsinh}}(1/\sinh(\tfrac12 \ell(\gamma)))$$The collar theorem states among other things that the *collar* ${{\mathcal C}}_\gamma = \{p \in S \mid {\mathrm{dist}}(p,\gamma) < w_\gamma \}$ is homeomorphic to an annulus. 
(In contrast to the above $C(r)$ the collars ${{\mathcal C}}_\gamma$ are defined as open sets.) The needed complement is the following. \[lem:collar\]Any complete simple geodesic intersecting ${{\mathcal C}}_\gamma$ either intersects $\gamma$ or converges to it. Let $\eta$ be a complete simple geodesic on $S$, closed or non closed, that neither intersects $\gamma$ nor converges to it. Let $p$ be a point on $\eta$ closest to $\gamma$. Let $a$ be a simple geodesic arc from $p$ to $\gamma$ orthogonal to both $\eta$ and $\gamma$ and of length $\ell(a) = {\mathrm{dist}}(\gamma, \eta) > 0$. In the universal cover of $S$ there are lifts $\tilde{\gamma}$ of $\gamma$ and $\tilde{\gamma_1}$, $\tilde{\gamma_2}$ of $\eta$, as in Figure \[Fig:ExplicitGap\] (which we use for two different purposes), together with lifts $a_1$, $a_2$ of $a$ from $\tilde{\gamma_1}$ and $\tilde{\gamma_2}$ to $\tilde{\gamma}$ whose endpoints on $\tilde{\gamma}$ are distance $\ell(\gamma)$ apart from each other. Since $\eta$ is simple its lifts $\tilde{\gamma_1}$ and $\tilde{\gamma_2}$ are disjoint; they may have a common endpoint at infinity, though. We thus have a, possibly degenerate, right-angled geodesic hexagon with three consecutive sides of lengths $\ell(a)$, $\ell(\gamma)$, $\ell(a)$. This hexagon splits into two isometric pentagons. Applying [**formula [@BuserBook 2.3.4(i)]**]{} to either of them we get $$\sinh(\ell(a)) \cdot \sinh(\frac12 \ell(\gamma)) \geq 1$$and hence, ${\mathrm{dist}}(\eta,\gamma) \geq w_\gamma$.
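The width function is easy to probe numerically. The following sketch (ours, not part of the argument) checks the identity $\sinh(w_\gamma)\sinh(\frac12\ell(\gamma)) = 1$ that underlies the pentagon estimate above, and illustrates that short geodesics have wide collars.

```python
import math

def collar_width(l):
    # w_gamma = arcsinh(1 / sinh(l/2)) as in [eq:width]
    return math.asinh(1.0 / math.sinh(l / 2.0))

# Defining identity behind the pentagon estimate: sinh(w) * sinh(l/2) = 1
for l in (0.01, 0.5, 1.0, 2.0):
    w = collar_width(l)
    assert abs(math.sinh(w) * math.sinh(l / 2.0) - 1.0) < 1e-9

# Short geodesics have wide collars: w ~ log(4 / l) as l -> 0
print(collar_width(0.01), math.log(4 / 0.01))  # both close to 5.99
```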
[Figure \[Fig:ExplicitGap\]: the hexagon $G$ with sides on $\tilde{\gamma}_1$, $\tilde{\gamma}$, $\tilde{\gamma}_2$ and on the perpendiculars $a_1$, $a_2$, $b$; points $B_0$, $B_1$, $B_2$, $B_3$, $M$, $p$; lines $g_1$, $g_2$, $f$, $h$; endpoints at infinity $\omega_1$, $\omega_2$, $\omega_3$; distances $w_\gamma$ and $\rho$.] Let $\gamma$ be a simple closed geodesic of length $\ell(\gamma) \leq a_g$, where we think of $a_g$ as being rather small; the bound for $a_g$ as in the theorem will show up towards the end of the proof. By Lemma \[lem:lengthbound\] there exist simple closed geodesics $\gamma_1$, $\gamma_2$ with lengths bounded as in \[eq:lengthbound\] that together with $\gamma$ form the boundary of a pair of pants $Y$. We shall find a forbidden disk in the half collar $C(w_{\gamma})$ in $Y$. $Y$ may be decomposed into two isometric right-angled geodesic hexagons by drawing the common perpendicular geodesic arcs $a_1$ from $\gamma_1$ to $\gamma$; $a_2$ from $\gamma$ to $\gamma_2$; and $b$ from $\gamma_2$ to $\gamma_1$. Figure \[Fig:ExplicitGap\] shows the lift $G$ of one of the hexagons in the universal covering ${{\mathbb H}}$ of $S$. (It need not be symmetric as drawn.) The lifts of the three perpendiculars are named after their originals, the lifts of the boundary geodesics are named, respectively, $\tilde{\gamma}_1$, $\tilde{\gamma}$, $\tilde{\gamma}_2$. We let $\omega_1$, $\omega_2$ be the endpoints at infinity of $\tilde{\gamma}$, labelled in such a way that $\omega_1$ and $a_2$ lie on different sides of $a_1$. We also consider the endpoint $\omega_3$ at infinity of $\tilde{\gamma}_2$ that is separated from $a_2$ by $b$. The dotted lines in Figure \[Fig:ExplicitGap\] are as follows.
The vertical lines $g_1$, $g_2$ are the perpendicular geodesics from $\omega_1$, $\omega_2$ to $b$ with respective endpoints $B_1$, $B_2$ on $b$. The horizontal line $h$ consists of all points in the hexagon that have distance $w_\gamma$ from $\tilde{\gamma}$. Line $f$ is the geodesic from $\omega_1$ to $\omega_3$. We will prove below that if $\gamma$ is as short as indicated, then the configuration of these lines is such that a shaded area $\Delta$ occurs, as drawn in the figure, consisting of points in the hexagon lying above $h$, to the right of $g_2$ and to the left of $f$. We will also estimate the size of $\Delta$. Anticipating this for a moment we prove that $\Delta$ is forbidden, i.e., no lift of a simple complete geodesic of $S$ in ${{\mathbb H}}$ intersects $\Delta$. Since $\Delta$ lies in a lift of ${{\mathcal C}}_\gamma$, Lemma \[lem:collar\] implies that a simple geodesic $\eta$ may have a lift passing through $\Delta$ only when $\eta$ intersects $\gamma$ or converges to it. Now, take any such $\eta$ and let $c \subset \eta$ be a connected component of $\eta \cap \textrm{interior}(Y)$ with one of its ends converging to $\gamma$ (either by converging to a point on $\gamma$ or by winding infinitely often around it). Let $\tilde{c}$ be a component of its lift in ${{\mathbb H}}$ that meets the part of $G$ above line $h$. We have to show that $\tilde{c} \cap \Delta = \emptyset$. Since $c$ has no self-intersections there are only the following three cases. *Case 1:* the other end of $c$ also converges to $\gamma$. Here we consider the geodesic symmetry $\sigma : {{\mathbb H}}\to {{\mathbb H}}$ with respect to the geodesic through $b$ and observe that $G \cup \sigma (G)$ is a lift of $Y$. Hence, one end of $\tilde{c} $ converges to $\tilde{\gamma}$ and the other to $\sigma(\tilde{\gamma})$. This implies that the part $\tilde{c} \cap G$ lies between $g_1$ and $g_2$ and cannot intersect $\Delta$. *Case 2:* the other end of $c$ converges to $\gamma_1$.
In this case the other end of $\tilde{c}$ converges to $\tilde{\gamma}_1$. This implies that, speaking with Figure \[Fig:ExplicitGap\], $\tilde{c}$ lies on the left hand side of $g_2$ while $\Delta$ lies on the right hand side. *Case 3:* the other end of $c$ converges to $\gamma_2$. By the same argument as before $\tilde{c}$ now lies on the right hand side of $f$ and cannot intersect $\Delta$ either. This completes the proof that $\Delta$ is forbidden. For the existence and the size of $\Delta$ we need a number of estimates beginning with the length of side $b = B_0B_3$. To that end, we abbreviate the right hand side of Inequality \[eq:lengthbound\] by $\cosh \lambda$. The inequality in Lemma \[lem:lengthbound\] then becomes $\frac12 \ell(\gamma_i) < \lambda$, $(i=1,2)$. By [**formula [@BuserBook 2.4.1(i)]**]{} we have the following (where from now on we omit $\ell$ in the formulas): $$\cosh(\frac12 \gamma) = \cosh(b)\sinh(\tfrac12 \gamma_1)\sinh(\tfrac12 \gamma_2) - \cosh(\tfrac12 \gamma_1)\cosh(\tfrac12 \gamma_2).$$Here the right hand side is $\geq 1$ and it increases when we replace $\frac{1}{2}\gamma_1$ and $\frac{1}{2}\gamma_2$ by $\lambda$. Using the identity $\cosh(b) \sinh^2(\lambda)-\cosh^2(\lambda)=2\sinh^2(\frac{1}{2}b) \sinh^2(\lambda)-1$, we get $$\label{eq:boundb}\sinh(\tfrac12 b) \sinh \lambda \geq 1.$$ The next estimate concerns the distance $\rho = {\mathrm{dist}}(b,h)$, where we note that $\rho + w_\gamma$ is the length of the common perpendicular of $b$ and $\tilde{\gamma}$. This perpendicular decomposes $G$ into two right-angled pentagons. Applying [**formula [@BuserBook 2.3.4(i)]**]{} to the one that has the bigger side on $\tilde{\gamma}$ we get $\sinh(\rho + w_\gamma) \cdot \sinh(\frac14 \gamma) \leq \cosh\lambda$.
Using that $e^\rho \sinh(w_\gamma) \leq \sinh(\rho + w_\gamma)$ and applying the definition of $w_\gamma$ we get the following, where $\tau = \sinh(\frac12 \gamma)/\sinh(\frac14 \gamma)$ is a factor close to 2: $$\label{eq:boundrho}e^\rho \leq \tau \cdot \cosh \lambda.$$We now assume that the labelling of the sides of $G$ has been set such that ${\mathrm{dist}}(B_1,B_3) > \frac12 b$. Let $M$ be the intersection point of $b$ and $f$, where $f = \omega_1 \omega_3$. Then $M$ is the midpoint of the ideal crossed geodesic quadrilateral $\omega_1 B_1 B_3 \omega_3$ and so we have an ideal right-angled triangle $\omega_1 B_1 M$ with side $$\label{eq:B1M}B_1M > \frac14 b.$$We estimate how far sides $g_1 = \omega_1 B_1$ and $\omega_1 M$ are apart from each other in the neighborhood of $h$. To this end, we take a point $p$ on $f$ and drop the perpendicular $pp_1$ to $g_1$, the position of $p$ being such that ${\mathrm{dist}}(B_1,p_1) = \rho + r$ with $0 \leq r \leq \frac15$. Applying [**formula [@BuserBook 2.2.2(iv)]**]{} to the ideal triangles $\omega_1 B_1 M$ and $\omega_1 p_1 p$ we get, using an obvious limit argument, $$\label{eq:distp}\frac{\tanh(p_1p)}{\tanh(B_1M)}= e^{-(\rho+r)}.$$Finally, we provide a similar estimate for $g_2$ and $g_1$. Applying the pentagon [**formula [@BuserBook 2.3.4.(i)]**]{} to half of the ideal quadrilateral $\omega_1B_1B_2\omega_2$ we have $$\label{eq:distBB}\sinh(\tfrac12 B_1B_2)\cdot \sinh(\rho + w_\gamma) = 1.$$Similarly to the preceding step we take a point $q$ on $g_2$ and drop the perpendicular $qq_1$ to $g_1$, the position of $q$ being such that ${\mathrm{dist}}(B_1,q_1) = \rho + r$ with $0 \leq r \leq \frac15$. By [**formula [@BuserBook 2.3.1(iv)]**]{} and using that $\tanh(2t) \leq 2 \sinh( t)$, for $t \geq 0$, we have $\tanh(q_1q) = \cosh(\rho + r)\cdot \tanh(B_1B_2) \leq 2 \cosh(\rho+r) \sinh(\frac12 B_1B_2)$. 
Hence, $$\label{eq:boundQQ}\tanh(q_1q) \leq 2 \frac{\cosh(\rho+r)}{\sinh(\rho+w_\gamma)}\leq 2\frac{\cosh(r)}{\sinh(w_\gamma)} = 2\cosh(r) \sinh(\tfrac12 \gamma).$$ By \[eq:boundQQ\] on the one hand and by \[eq:boundb\]–\[eq:distp\] on the other, approximating the small terms linearly and using that $r \leq \frac{1}{5}$ so that $\cosh(r) < 1.03$ and $e^{-r}> 0.818$, we get: $${\mathrm{dist}}(q_1,q) < 1.1 \,\ell(\gamma), \quad {\mathrm{dist}}(p_1,p) > \frac{0.8}{(4\pi(g-1))^2}\,.$$ It follows that if $\ell(\gamma) \leq a_g$ with $a_g$ as in the statement of the theorem, then in the vicinity of line $h$ the points on $g_2$ are closer to $g_1$ than $1.1 \ell(\gamma)$ and the points on $f$ are further away than $3.2 \ell(\gamma)$. Hence the shaded domain $\Delta$ shows up and one now easily sees that it contains a disk of radius $a_g$. Local quantification in terms of the systole {#sec:locqua} ============================================ In this section $S$ is a compact hyperbolic surface of genus $g$. We aim to show the following result. \[thm:lq1\] Assume $s = \min\{\frac{1}{2}{{\rm sys}}(S),\frac{1}{3} \}$. Then for any $\rho \leq s$ and any disk $B_{\rho}$ of radius $\rho$ in $S$ there exists a point $p \in B_{\rho}$ such that $${\mathrm{dist}}(p,{{\mathcal{BS}}}) \geq \rho^2 e^{-M(g-1)},$$ where $M$ is a constant that depends only on $s$. We shall get the explicit bound $M = \frac{194}{s^2} \log(\frac{134}{s})$. Observe that Theorem \[thm:lq1\] together with Theorem \[thm:thinsurfaces\] provides a computable lower bound on the minimum value of $C_S$. A Voronoi cell decomposition and its properties {#sec:Voronoi} ----------------------------------------------- We begin with a lemma that concerns *$\varepsilon$-nets* on $S$, by which we mean a set of points of pairwise distance at least $ \varepsilon$ and maximal for this property with respect to inclusion. The maximality of the set implies that the open balls of radius $\varepsilon$ around the points [*cover*]{} the surface.
Throughout we shall restrict ourselves to $\varepsilon \leq \frac{s}{2}$, where $s$ is the systole of $S$. \[lem:netcard\] For fixed $0<\varepsilon\leq \frac{s}{2}$ there exists an $\varepsilon$-net on $S$ consisting of points $\{p_i\}_{i=1}^N$ with $$N\leq \frac{2}{\cosh \sfrac{ \varepsilon }{2}- 1} (g-1).$$ To construct such a set begin with an arbitrary point and add successively new points at least $\varepsilon$ away from the preceding ones until this is no longer possible. The open balls of radius $\varepsilon/2$ around the resulting points $p_i$ are pairwise disjoint, embedded and of area $2\pi (\cosh \sfrac{ \varepsilon }{2} -1)$. The total area of the surface is $4\pi(g-1)$ thus the number of balls cannot be greater than $$\frac{4\pi(g-1)}{2\pi (\cosh \sfrac{ \varepsilon }{2} -1)}$$which proves the lemma. Given an $\varepsilon$-net one gets a cell decomposition of the surface given by the Voronoi cells associated to each point: each open cell $V_i$ is the set of points whose closest point in the net is $p_i$. The boundary of $V_i$ consists of points closest to more than one of the $p_k$s. By the maximality of the $\varepsilon$-net any boundary point of $V_i$ is closer to $p_i$ than $\varepsilon$. Since $\varepsilon \leq \frac{s}{2}$ it follows that $V_i$ is contained in an embedded disc. Thus, the $V_i$ are simple convex hyperbolic polygons. Since the vertices lie at distance $\leq \varepsilon$ from the center the sides of $V_i$ have lengths $\leq 2 \varepsilon$. \[rem:perturbation\] Note that up until now we haven’t given any restrictions on how we choose the points. By standard perturbation techniques, if the points are chosen “generically", there will be exactly three cells adjacent to any vertex. By this we mean that by using a standard measure on the choice of points (using the measure on the surface), the choices for which the vertices are [*not*]{} all adjacent to exactly three cells lie in a measure $0$ set.
In all that follows, we shall suppose that this is the case. Hence, there will always be a triangulation dual to our Voronoi cell decompositions. \[lem:vornum\] Each Voronoi cell has at most $v(\varepsilon)$ sides, where $v(\varepsilon)$ is the integer part of $$\pi / {{\,\rm arccot}}\left( \cosh({{\varepsilon}}) \cdot\left \{ \sqrt{1 + 2 \cosh{{{\varepsilon}}}} + \sqrt{2 + 2 \cosh{{{\varepsilon}}}} \right\}\right).$$ Consider the triangulation dual to the Voronoi cell decomposition. Each side of a triangle has length at least $\varepsilon$. Any of the triangles is contained in a ball of radius $\varepsilon$ (around the intersection point of the three Voronoi edges dual to the triangle). The number of sides of a Voronoi cell is the number of dual triangles that meet in its center. We now apply Lemma \[lem:MinimalAngle\] from the appendix. From the two previous results we deduce a bound on the number of edges found in our Voronoi cell decomposition. We shall, however, assume from now on that $\varepsilon$ is small. A restriction that has proved to be practical is $\varepsilon \leq \frac{1}{3}$. \[cor:edges\] Assuming that $\varepsilon \leq \min\{\frac{s}{2},\frac{1}{3} \}$ we have $v(\varepsilon) = 12$, and there are at most $$12\frac{g-1}{\cosh \sfrac{ \varepsilon }{2}- 1} < \left(\frac{97}{\varepsilon^2}-10\right)(g-1)$$edges in a cell decomposition obtained as above. The number of cells times the maximum number of edges per cell is an upper bound on twice the number of edges. Now $v(\varepsilon)$ from Lemma \[lem:vornum\] is monotone increasing and $v(0) = v(\frac{1}{3}) = 12$. This proves the result. In everything that follows in this section we shall suppose that we have a fixed Voronoi cell decomposition ${{\mathscr V}}$ like the one we have just constructed with $\varepsilon \leq \min\{\frac{s}{2},\frac{1}{3} \}$. 
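The claims $v(0)=v(\frac13)=12$ and the resulting edge count are easy to verify numerically. The following sketch (ours, purely illustrative; `v` implements the formula of Lemma \[lem:vornum\]) samples $v$ on $(0,\frac13]$ and evaluates the edge bound of Corollary \[cor:edges\].

```python
import math

def v(eps):
    # Integer part of pi / arccot( cosh(eps) * (sqrt(1+2cosh eps) + sqrt(2+2cosh eps)) ),
    # the maximal number of sides of a Voronoi cell (Lemma [lem:vornum])
    c = math.cosh(eps)
    x = c * (math.sqrt(1 + 2 * c) + math.sqrt(2 + 2 * c))
    return int(math.pi / math.atan(1.0 / x))   # arccot(x) = atan(1/x) for x > 0

# v == 12 on all of (0, 1/3], as used in Corollary [cor:edges]
assert all(v(k / 1000.0) == 12 for k in range(1, 334))

def edge_bound(g, eps):
    # (number of cells) * 12 / 2 edges at most, with the cell count of Lemma [lem:netcard]
    return 12 * (g - 1) / (math.cosh(eps / 2.0) - 1.0)

# Consistent with the closed form (97 / eps^2 - 10)(g - 1) of Corollary [cor:edges]:
# for g = 2 and eps = 1/3 the closed form gives 97 * 9 - 10 = 863
assert edge_bound(2, 1 / 3) < 863
```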
Scheme of proof {#sec:scheme} --------------- In this subsection we describe the strategy of our proof without any computations in order to motivate the estimates we perform afterwards. Given an embedded disk $D = B_{\rho}$ of radius $\rho>0$ on $S$, we look at the restriction of the Birman-Series set to $D$. This consists of a countable collection of geodesic segments between points on the boundary of $D$. To find quantifiable “empty space” between these segments we begin by introducing a constant $R>0$ that later will be adjusted. For each segment $c$ we look at a larger geodesic segment $\bar{c}$ obtained by extending $c$ outside of $D$. Specifically, we take the continuation of $c$ of length $R-2{{\varepsilon}}$ in both directions (measured from the midpoint of $c$). This larger “segment" is not necessarily embedded but as $c$ belongs to the Birman-Series set, it does not contain any transversal self-intersections. Either endpoint of $\bar{c}$ lies in some Voronoi cell of ${{\mathscr V}}$. (The two cells may coincide.) To “capture” $\bar{c}$ we associate to it what we shall call a [*model strand*]{} $m^c$ the precise construction of which shall follow in the next subsection. Among its properties we have the following. The endpoints of $m^c$ lie on vertices of the $1$-skeleton of ${{\mathscr V}}$ and $m^c$ is freely homotopic to $\bar{c}$, where one allows endpoints to move within the start and terminal Voronoi cells of $\bar{c}$. The important part of $m^c$ is its middle intersection with $D$, i.e., the connected component $m_c$ of $m^c \cap D$ that contains the midpoint of $m^c$. We call $m_c$ a *model strand in* $D$. The model strands shall allow us to quantify the arguments in [@Birman-Series] by showing that the number of all $m_c$ grows polynomially with $R$ while the distance between $c$ and $m_c$ has an upper bound that decays exponentially with $R$. 
As a result, for large enough $R$ there are “relatively few” model strands in $D$ and each segment $c$ lies in an “extremely small” tubular neighborhood of some of these. Outside these tubular neighborhoods we shall then have empty space whose size can be estimated in terms of $R$. The model strands will be obtained in two steps. First we associate to $\bar{c}$ a combinatorial path $P(\bar{c})$ on the $1$-skeleton of ${{\mathscr V}}$ that is simple in the combinatorial sense (see the next subsection). The number of such paths can be estimated using combinatorial arguments. In the second step we then homotope the path $P(\bar{c})$ into a geodesic arc $m^c$ keeping the endpoints fixed. It may turn out that $m^c$ has self-intersections, but $m_c$ is simple. The distance between $c$ and $m_c$ is estimated in Section \[sec:bounddist\]. Constructing combinatorial arcs and their properties {#sec.concom} ---------------------------------------------------- Here we describe a method for associating to any simple arc such as $\bar{c}$ a combinatorial path that lies in the $1$-skeleton of ${{\mathscr V}}$. Consider a finite oriented geodesic arc $a$ on $S$ which we assume to be simple in the sense that it has no transversal self-intersections. We also assume that $a$ is longer than $4 \varepsilon$ (in our applications $a$ will be much larger). Then $a$ begins in some Voronoi cell $V_0$, traverses a sequence of cells $\{V_i\}_{i = 1}^n$ and ends up in some cell $V_{n+1}$ (each cell may appear multiple times). We break up $a$ into smaller arcs $a_i$ where for $i = 0, \dots, n+1$ each $a_i$ is a connected component of $a \cap V_i$ and $a_i$ is connected to $a_{i+1}$ ($i \leq n$). Intersections that consist of a vertex only are ignored, that is we list the cells such that each $a_i$ has positive length. For the combinatorial path we proceed as follows. - $a_0$ and $a_{n+1}$ do not contribute to our combinatorial path.
- For $i = 1, \dots, n$, if $a_i$ intersects the interior of $V_i$ we associate to it the homotopic polygonal arc $P_i$ on the boundary $\partial V_i$ that has the same endpoints as $a_i$ and uses the minimal number of edges of $V_i$ (see Figure \[fig:PolygonPath\]). - If $a_i$ is a side of $V_i$ we set $P_i$ to be $a_i$. We add a refinement to item 2: if $V_i$ has an even number of sides, say $m_i$, and if $a_i$ connects a pair of opposite sides of $V_i$, then there are two choices for the path $P_i$, each using $\frac{1}{2}m_i + 1$ edges of $V_i$. To remove this ambiguity we choose, for any Voronoi cell $V$, a “separator point” $p_V$ (not necessarily the geometric center of $V$) in the interior of $V \setminus a$ and then require that in the above cases $P_i$ is the path homotopic to $a_i$ in the punctured disk $V_i \setminus \{ p_{V_i} \}$. The concatenation of these paths, $P_1 \cup \dots \cup P_n$, yields a connected path that lies entirely on the $1$-skeleton of ${{\mathscr V}}$, possibly with partial edges. We shall remove the latter by “shrinking” homotopies that take place on the edges of the $1$-skeleton as indicated in Figure \[fig:PartialElim\] so as to obtain a purely combinatorial connected sequence of edges $P(a)$ of the Voronoi polygons. Thus, the construction is in two steps, the first step consisting in “pushing the arcs to the boundary” as indicated in Figure \[fig:PushPath\], the second step being the shrinking away of the partial edges. The combinatorial path $P(a)$ obtained in this way is simple in that it does not contain any transversal self-intersections. To see this one may consider an arbitrarily thin tubular neighborhood $T$ around the $1$-skeleton and slightly modify the pushing and shrinking homotopies so as to obtain a genuine simple path $P'(a)$ in $T$ with the same combinatorics as $P(a)$. We define the *combinatorial length* of $P(a)$ to be the number of edges it contains.
The next lemma provides an upper bound on this length in terms of the original path length. \[lem:comblength\] A path $a$ as above of length $\ell$ passes through at most $$\frac{4}{\varepsilon}(\ell + 3\varepsilon)$$Voronoi cells. This multiplied by 6 is an upper bound for the combinatorial length of $P(a)$. We argue in the universal cover by considering a lift $\tilde{a}$ of $a$. Along $\tilde{a}$ we have the sequence of lifts $\tilde{V}_1, \dots, \tilde{V}_{n}$ of the Voronoi cells $V_1, \dots, V_{n}$ with the property that $\tilde{V}_i \cap \tilde{a}$ is a lift of $a_i$, $i=1, \dots, n$ (recall that $a_0$ and $a_{n+1}$ are not taken into account). The interiors of the lifted cells are pairwise disjoint and so are the balls of radius $\sfrac{\varepsilon}{2}$ around their centers $q_1, \dots, q_{n}$. Furthermore, these balls lie in a tubular neighborhood of radius $\sfrac{3\varepsilon}{2}$ around $\tilde{a}$. We now apply an area argument. The total area of the $\sfrac{\varepsilon}{2}$-balls around the points $\{q_i\}_{i=1}^n$ is $$n \cdot 2\pi (\cosh \sfrac{\varepsilon}{2}-1).$$The area of the $\sfrac{3\varepsilon}{2}$ neighborhood of $\tilde{a}$ is $$2\left(\ell \sinh\sfrac{3\varepsilon}{2} + \pi(\cosh \sfrac{3\varepsilon}{2}-1)\right).$$Area comparison and elementary simplification using that $\varepsilon \leq \frac{1}{3}$ now yields the first statement. The second statement follows from Lemma \[lem:vornum\] using that $v(\varepsilon) \leq v(\frac{1}{3}) =12$ and that for the crossing of any cell $V_i$ the combinatorial path $P(a)$ runs along at most half the edges of $V_i$. Although this is not needed in what follows we indicate a reverse inequality giving a lower bound on the number of Voronoi cells a path $a$ of length $\ell$ as in the lemma necessarily traverses. To see this consider a maximal set of points $2\varepsilon$ apart on the lift $\tilde{a}$. There are at least $\sfrac{\ell}{2 \varepsilon}$ such points.
As Voronoi cells have diameters less than $2 \varepsilon$ this means that $\tilde{a}$, respectively $a$, needs to traverse at least $$\frac{\ell}{2\varepsilon}-1$$Voronoi cells. There is also a lower bound on the combinatorial length $L$ of $P(a)$: using that the edges of a Voronoi cell are not longer than the diameter, the triangle inequality yields $2 \varepsilon (L+2) \geq \ell$ and thus $$L \geq \frac{\ell}{2\varepsilon}-2.$$ Counting combinatorial paths {#S.count} ---------------------------- We need to bound the number of combinatorial paths in terms of the lengths of the model strands. Using the lemma above, this can be obtained via the combinatorial lengths. For the following we denote by $E$ the number of edges of our Voronoi decomposition ${{\mathscr V}}$. \[lem:combcount\] There are at most $4L^2\binom{L+E}{L}$ combinatorial simple paths of length at most $L$. The model for the argument that follows is the following useful fact: the homotopy class of a (simple) multicurve lying on a triangulated surface is determined by its intersection numbers with each of the sides of the triangulation. The triangulation in our case is the triangulation dual to the Voronoi cells and the intersection numbers are exactly the numbers of times the path traverses a given edge. So if our path were closed it would be uniquely determined by the number of times it passes through each edge. [Figure \[fig:triangles\]: intersection numbers $a=3$, $b=5$, $c=4$ with the three sides of a triangle.] We now modify this for non closed paths. In order not to confuse path edges with the $E$ edges of the $1$-skeleton of the Voronoi decomposition, we shall call path edges *segments*. We begin by distributing a number $\leq L$ of segments (later to be concatenated) among the $E$ edges. By elementary combinatorics this is possible in $\binom{L+E}{L}$ different ways.
For ease of description we place them as distinct segments parallel to the corresponding edges in a thin tubular neighborhood of the $1$-skeleton of ${{\mathscr V}}$ as drawn in Figures \[fig:triangles\] and \[fig:trianglestwo\]. Next we select, for any such distribution, a pair of segments that shall play the role of the end segments of the path to be constructed and erase half of each of the two segments. This is possible in at most $4L^2$ different ways. Figure \[fig:trianglestwo\] shows an example with half a segment erased. At any vertex of the $1$-skeleton there are now a number of incoming segments and there is at most one way to paste these together at the endpoints such that the resulting arcs do not intersect each other (pairs of segments on the same edge are not allowed to be pasted together). Of course, not every distribution allows one to paste all segments at all vertices, and even if the pasting is possible the result may be disconnected; but every simple path of length $\leq L$ may be obtained by some distribution and, hence, there are at most $4L^2 \binom{L+E}{L}$ such paths. Bounding the distance between arcs and model arcs {#sec:bounddist} ------------------------------------------------- In what follows, we need to metrically compare the model arcs to the arcs they are intended to approximate. This is achieved via the following lemma about arcs in the hyperbolic plane. \[lem:width\] Fix $\delta>0$, $\rho>0$. In the hyperbolic plane, let $b,b'$ be two geodesic arcs of lengths $> 2(\rho + \delta)$ such that the two initial points and the two endpoints are at respective distances $\leq \delta$ from each other. Furthermore, consider a disk $D$ of radius $\rho$ centered at the midpoint of $b$. 
Then $D\cap b$ lies in an $r$-neighborhood of $b'$, where $$r\leq {{\,\rm arcsinh}}\left( \frac{\cosh \rho \sinh \delta}{\cosh \frac{\ell(b)}{2}}\right).$$ We search for an “extremal" $b'$ that shall allow us to compute the constants appearing in a worst case scenario. For this we consider the two disks of radius $\delta$ surrounding the endpoints of $b$. An extremal $b'$ must have its endpoints on the boundary of these disks and a moment’s reflection shows that the worst case scenario is given by the two geodesics tangent to the boundary of these disks that do not cross $b$ (see Figure \[fig:quad\]). [Figure \[fig:quad\]: the arcs $b$ and $b'$ at distance $d(b,b')$, the disk $D$ of radius $\rho$, the point $p$, and the distances $\delta$, $d_{\max}$, $\ell(b)/2$.] Let $b'$ be one of them. By symmetry, the distance path between $b$ and $b'$ reaches $b$ at its midpoint. One can now compute in the trirectangles as shown in the figure. The bigger trirectangle has sides of lengths $\delta$, $\frac{1}{2} \ell(b')$, $d(b,b')$, $\frac{1}{2} \ell(b)$. Using hyperbolic trigonometry ([**formula [@BuserBook 2.3.1(v)]**]{}) we obtain $$\sinh \delta = \sinh d(b,b') \cosh\frac{\ell(b)}{2}.$$ The least upper bound $d_{\max}$ for the distances to $b'$ of points on $D \cap b$ is reached for the point $p$ where $b$ intersects the boundary of $D$. It remains to show that for $r := d_{\max}$ the inequality as in the lemma is satisfied. We actually have equality: $p$ is the vertex of the smaller trirectangle with sides $d_{\max}$, $\rho'$, $d(b,b')$, $\rho$; using the same formula as before we obtain $$\sinh d_{\max} = \sinh d(b,b')\cosh\rho$$ and putting the two formulas together we get $$\sinh d_{\max} = \frac{\cosh \rho \sinh \delta}{\cosh \frac{\ell(b)}{2}}.$$ Estimates and finalizing the proof {#sec:finaliz} ---------------------------------- We first collect and simplify some of the earlier bounds.
[*Number of edges.*]{} In view of Corollary \[cor:edges\], we use the abbreviations $$\label{eq:defG} G :=\frac{97(g-1)}{\varepsilon^2}, \quad G' := G-10.$$ By the same corollary, ${{\mathscr V}}$ has at most $G'$ edges. [*Combinatorial length.*]{} As described in Section \[sec:scheme\] any arc $c$ in the disk $D = B_{\rho}$ that occurs as a connected component of $D \cap \gamma$ for some simple complete geodesic $\gamma$ on $S$ is extended to a larger arc $\bar{c}$ on $\gamma$. The extension goes in both directions, starting from the midpoint of $c$, each half of the extension having length $R - 2\varepsilon$. To $\bar{c}$ we associate the combinatorial path $P(\bar{c})$ as described in Section \[sec.concom\] (with $\bar{c}$ in the role of $a$). The extension $\bar{c}$ has length $\ell = 2R-4\varepsilon$ and by Lemma \[lem:comblength\] the combinatorial length of $P(\bar{c})$ is bounded above by $$\label{eq:defL} L_R:=\frac{48}{\varepsilon}R.$$ [*The number of model strands.*]{} By Lemma \[lem:combcount\] there are at most $$\label{eq:bdLGG} 4L^2\binom{L+G'}{L} \leq \frac{4L^2 (L+G')^{G'}}{G'!} \leq \frac{4G^2 (L+G)^G}{G!}$$ combinatorial paths $P(\bar{c})$ of combinatorial length $L$. For $L=L_R$ this is at the same time an upper bound for the number of model strands $m^c$, respectively the number of strands $m_c$ in $D$. A heuristic check shows that the area argument that will follow further down can only succeed if $$R > G.$$ We shall therefore work from now on under this hypothesis.
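The chain of inequalities in \[eq:bdLGG\] can be verified with exact integer arithmetic. In the sketch below, the parameters $g = 2$, $\varepsilon = \frac{1}{3}$, $R = 900$ are our own test choices (they make $G$, $G'$, $L$ integers and satisfy $R > G$); after cancelling the common factor and cross-multiplying by the factorials, both inequalities become comparisons of integers.

```python
from math import comb, factorial

# Sample parameters (our choice): g = 2, eps = 1/3 give G = 97*(g-1)/eps^2 = 873,
# G' = G - 10, and R = 900 > G gives L = L_R = 48*R/eps = 144*R.
G = 873
Gp = G - 10
L = 144 * 900

# 4L^2 * C(L+G', L) <= 4L^2 * (L+G')^G' / G'!
# (cancel 4L^2 and cross-multiply by G'!):
assert comb(L + Gp, L) * factorial(Gp) <= (L + Gp) ** Gp

# 4L^2 * (L+G')^G' / G'! <= 4G^2 * (L+G)^G / G!
# (cancel 4 and cross-multiply by G'! * G!):
assert L * L * (L + Gp) ** Gp * factorial(G) <= G * G * (L + G) ** G * factorial(Gp)
```

The second inequality uses $L \geq G$, which holds here since $L = \frac{48}{\varepsilon}R > R > G$.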
Using Stirling’s formula and the fact that $G\geq \frac{97}{\varepsilon^2}$ we then get the following bound for the number of model strands which has been tailored in view of its later application: $$\label{eq:defNR} N(R):= \frac{1}{10} \frac{m^G}{G^G}R^G \quad \text{with}\quad m = \frac{134}{\varepsilon}.$$ [*Distance between $c$ and $m_c$.*]{} By construction the initial points, respectively the endpoints, of $\bar{c}$ and its associated model strand $m^c$ lie in the same Voronoi cells, and their respective distances are smaller than $2 \varepsilon$. Furthermore, $\bar{c}$ has length $2R - 4\varepsilon$. By Lemma \[lem:width\], $c$ lies in an $r$-neighborhood of $m_c$ where, by elementary simplification and using that $\rho \leq \varepsilon \leq \frac{1}{3}$, $$\label{eq:tubular} r \leq \frac{\cosh \rho \sinh 2\varepsilon}{\cosh(R-2\varepsilon)} \leq 3 e^{-R} =: w_R.$$ [*Area argument.*]{} This is the heart of the proof. For subsets $A \subset D$ and $t >0$ we shall denote by $A^t$ the part of the $t$-neighborhood of $A$ that lies in $D$. For each model strand $m_c$ in $D$ the set $m_c^{2w_R}$ has area $$\label{eq:areamcwR} {{\rm area}}(m_c^{2w_R}) \leq 2 \ell(m_c)\sinh(2w_R)\leq 4\rho \sinh(2w_R) < 9\, \rho\, w_R.$$ Now let $\mathscr{M}$ be the union of all model strands in $D$. Then $\mathscr{M}^{2w_R}$ has area less than $9 \rho\, w_R $ times the number of model strands in $D$, while $D$ has area $2\pi( \cosh(\rho)-1) > \pi \rho^2$. On the other hand, ${{\mathcal{BS}}}\cap D$ is contained in $\mathscr{M}^{w_R}$. Hence, if we can determine $R$ such that the area bound for $\mathscr{M}^{2w_R}$ is smaller than $\pi \rho^2$, then $\mathscr{M}^{2w_R}$ does not cover $D$.
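Two elementary facts used in the area argument, namely $4\rho\sinh(2w_R) < 9\rho\, w_R$ in \[eq:areamcwR\] and ${{\rm area}}(D) = 2\pi(\cosh(\rho)-1) > \pi\rho^2$, can be checked numerically. The first needs $w_R$ small, which holds with a huge margin since $w_R = 3e^{-R}$ with $R > G \geq \frac{97}{\varepsilon^2} \geq 873$; the grids below are our own test choices.

```python
import math

# sinh(2w) < 2.25*w holds for small w; w = w_R = 3*exp(-R) is astronomically
# small here, and we test the much larger range 0 < w <= 0.1.
for i in range(1, 1001):
    w = 0.1 * i / 1000
    assert 4 * math.sinh(2 * w) < 9 * w

# area(D) = 2*pi*(cosh(rho) - 1) > pi*rho^2 for all rho > 0,
# since cosh(rho) - 1 = rho^2/2 + rho^4/24 + ...
for i in range(1, 1001):
    rho = i / 1000          # covers the relevant range rho <= eps <= 1/3
    assert 2 * math.pi * (math.cosh(rho) - 1) > math.pi * rho ** 2
```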
By \[eq:defNR\] we therefore get a point $p \in D$ at distance $\geq w_R$ from ${{\mathcal{BS}}}$ if we take $R$ to be a solution $>G$ to the equation $$\label{eq:theequation} \frac{1}{\rho}\frac{m^G}{G^G}R^G = e^R.$$ [*Estimating the solution and end of proof.*]{} We first state the following lemma which shall also be used in the next section. \[lem:solutionbound\] Let $\alpha$, $\gamma$ be positive constants, $\alpha > \frac{e^\gamma}{\gamma^\gamma}$. Then the equation $\alpha t^\gamma = e^t$ for $t>0$ has two solutions $t_1 < \gamma$, $t_2 > \gamma$ and $$\log(\alpha \gamma^\gamma) < t_2 < 2\log(\alpha \gamma^\gamma).$$ The equation is equivalent to $\gamma \alpha^{\frac{1}{\gamma}} \frac{t}{\gamma} = e^{\frac{t}{\gamma}}$. Substituting $\tau = \frac{t}{\gamma}$ we transform it into $$\label{eq:prooflemma} \frac{e^\tau}{\tau}=\beta$$ for $\tau>0$ with $\beta=\gamma \alpha^{\frac{1}{\gamma}}$. By the hypothesis on $\alpha$ we have $\beta >e$ and so \[eq:prooflemma\] has two solutions $\tau_1 < 1$ and $\tau_2 > 1$. For $\tau' = \log{\beta}$ we have $\tau' > 1$ and $\frac{e^{\tau'}}{\tau'}<\beta$. For $\tau''=2\log{\beta}$ we have $\tau'' < \beta$ and $\frac{e^{\tau''}}{\tau''}>\beta$. Hence, $\log{\beta} < \tau_2 < 2\log{\beta}$. Substituting back we get the claims of the lemma. In the case of equation \[eq:theequation\] we have $\gamma = G$ and $\alpha = \frac{1}{\rho} \frac{m^G}{G^G}$. By Lemma \[lem:solutionbound\] the larger of the two solutions to the equation has the bound $R \leq 2 \log(\frac{1}{\rho}) + 2G \log(m)$. We thus get the lower bound on the maximal distance to ${{\mathcal{BS}}}$ in $D$: $$\label{eq.final bound} w_R \geq 3\rho^2 e^{-2G \log{m}}.$$ For $\varepsilon = s$ this yields the bound as stated in Theorem \[thm:lq1\]. Surfaces with cusps and (very) small geodesics {#sec:smallcusps} ============================================== In this section we generalize the local quantification results of the previous sections to include surfaces with cusps and small geodesics.
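As an aside, Lemma \[lem:solutionbound\] above can be illustrated numerically: the larger root $t_2$ of $\alpha t^\gamma = e^t$ can be located by bisection precisely on the bracket the lemma provides. The function name and the sample values of $\alpha$, $\gamma$ (chosen to satisfy $\alpha > e^\gamma/\gamma^\gamma$) are our own.

```python
import math

def larger_root(alpha, gamma):
    """Larger solution t2 of alpha * t^gamma = e^t, found by bisection on
    f(t) = t - gamma*log(t) - log(alpha); f is negative at t = gamma
    (by the hypothesis on alpha) and positive at the claimed upper bound
    2*log(alpha * gamma^gamma)."""
    f = lambda t: t - gamma * math.log(t) - math.log(alpha)
    lo, hi = gamma, 2 * (math.log(alpha) + gamma * math.log(gamma))
    for _ in range(200):
        mid = (lo + hi) / 2
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

alpha, gamma = 1.0, 5.0                             # alpha > e^5/5^5 ~ 0.047
t2 = larger_root(alpha, gamma)
lower = math.log(alpha) + gamma * math.log(gamma)   # log(alpha * gamma^gamma)
assert lower < t2 < 2 * lower                       # the bracket of the lemma
assert abs(alpha * t2 ** gamma - math.exp(t2)) < 1e-6 * math.exp(t2)
```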
The process will be almost identical to the previous one, with a number of necessary changes that will be detailed. Again $\varepsilon$ is a fixed constant, $0 < \varepsilon \leq \frac{1}{3}$. Thick and thin decomposition {#sec:thickthin} ---------------------------- We consider this time a complete orientable finite area hyperbolic surface $S$ of genus $g$ with $n \geq 0$ cusps. A closed geodesic on $S$ shall be called *small* if its length is $\leq 2\varepsilon$. We let $\beta_1, \dots, \beta_h$ be the list of all small geodesics of $S$ arranged with decreasing lengths $$2\varepsilon \geq \ell(\beta_1) \geq \ell(\beta_2) \geq \dots \geq \ell(\beta_h).$$ The list may be void, but we shall assume that $n+h > 0$ to distinguish this section from the preceding ones. We make use of the *collar theorems*, e.g. Theorems 4.1.1, 4.1.6 and 4.4.6 in [@BuserBook], of which we recall the following. For the cusps we have the *cusp neighborhoods* $\mathscr{P}_i$, $i=1,\dots,n$, filled by the horocycles of lengths $< 2$. Each $\beta_k$ is simple and has a *collar neighborhood* ${{\mathscr C}}_k$, $k=1,\dots,h$, filled by the points at distance $< w(\beta_k)$ from $\beta_k$, where $\sinh(w(\beta_k)) \sinh(\frac{1}{2}\ell(\beta_k)) = 1$. The two boundary curves of ${{\mathscr C}}_k$ are parallel curves to $\beta_k$ (i.e. all points have the same distance to $\beta_k$) and their lengths satisfy $\ell(\beta_k) \cdot \cosh(w(\beta_k)) > 2$. Topologically the $\mathscr{P}_i$ are punctured discs, the ${{\mathscr C}}_k$ are annuli and all these neighborhoods are pairwise disjoint.
Finally we note that $$\label{eq:hleq3gn}h \leq 3g -3 + n.$$ For $k = 1, \dots, h$ we now choose $\omega_k < w(\beta_k)$ in such a way that on either side of $\beta_k$ the parallel curve at distance $\omega_k$ admits a quadruple of points at consecutive distances $\varepsilon$ as the points $p, p', p'', p'''$ shown in Figure \[fig:ThinHandle\]. The value of $\omega_k$ is given by $$\sinh(\sfrac{ \varepsilon }{2}) = \sinh(\frac{1}{8}\ell(\beta_k)) \cosh(\omega_k),$$which is [**formula [@BuserBook 2.3.1(v)]**]{} applied to the trirectangle with acute angle at $p'$ and consecutive sides of lengths $\frac{\varepsilon}{2}$, $\omega_k$, $\frac{1}{8} \ell(\beta_k)$, $\omega'_k$, where $\omega'_k$ is the distance between $\beta_k$ and the segment from $p$ to $p'$. From the formulas for $w(\beta_k)$ and $\omega_k$ we deduce by elementary computation that indeed $\omega_k < w(\beta_k)$. More accurately (using a numerical check for the upper bound), $$\label{eq:lowboundwk} {{\,\rm arccosh}}(2) < \omega_k < w(\beta_k) - \frac{1}{3}.$$ The same formula also yields the injectivity radius $r_p$ at $p$ (and $p'$, etc.). Indeed, the shortest geodesic loop at $p$ together with $\beta_k$ forms two identical trirectangles with consecutive sides $\omega_k, \frac{1}{2}\ell(\beta_k), \cdot \,, r_p$ and we have $\sinh(r_p) = \sinh(\frac{1}{2}\ell(\beta_k)) \cosh(\omega_k)$. Bringing the two formulas together we obtain $$\sinh(r_p) = \frac{\sinh(\frac{1}{2}\ell(\beta_k))}{\sinh(\frac{1}{8}\ell(\beta_k))}\sinh\big(\sfrac{ \varepsilon }{2}\big) > 4 \sinh\big(\sfrac{ \varepsilon }{2}\big).$$ Since $\varepsilon \leq \frac{1}{3}$ we have $r_p > 1.8 \varepsilon$.
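The numerical check mentioned for \[eq:lowboundwk\], together with the bound $r_p > 1.8\varepsilon$, can be reproduced as follows. The sketch sweeps $\ell(\beta_k)$ over $(0, 2\varepsilon]$ for a few values of $\varepsilon \leq \frac{1}{3}$; the grid itself is our own test choice.

```python
import math

for eps in [0.05, 0.1, 0.2, 1/3]:
    for i in range(1, 1001):
        ell = 2 * eps * i / 1000        # length of beta_k, 0 < ell <= 2*eps
        # collar width: sinh(w) * sinh(ell/2) = 1
        w = math.asinh(1 / math.sinh(ell / 2))
        # reduced width: sinh(eps/2) = sinh(ell/8) * cosh(omega)
        omega = math.acosh(math.sinh(eps / 2) / math.sinh(ell / 8))
        # injectivity radius: sinh(r_p) = (sinh(ell/2)/sinh(ell/8)) * sinh(eps/2)
        r_p = math.asinh(math.sinh(ell / 2) / math.sinh(ell / 8)
                         * math.sinh(eps / 2))
        assert math.acosh(2) < omega < w - 1/3
        assert r_p > 1.8 * eps
```

The lower bound ${{\,\rm arccosh}}(2) < \omega_k$ is tight: for $\ell(\beta_k) = 2\varepsilon$ one has $\cosh(\omega_k) = \sinh(\sfrac{\varepsilon}{2})/\sinh(\sfrac{\varepsilon}{4})$, which decreases to $2$ as $\varepsilon \to 0$.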
The geodesic segments of length $\varepsilon$ that connect the points $p,p',p'',p''',p$ form a simple curve homotopic to $\beta_k$ and there is another such curve on the other side of $\beta_k$. We choose it in such a way that the subset ${{\mathscr C}}'_k$ of ${{\mathscr C}}_k$ that lies between these two curves is symmetric with respect to $\beta_k$ as shown in Figure \[fig:ThinHandle\]. We call ${{\mathscr C}}'_k$ the *reduced collar*. In a similar manner we define the reduced cusp neighborhoods $\mathscr{P}'_i \subset \mathscr{P}_i$, $i=1,\dots,n$. For each $\mathscr{P}_i$ the vertices $p, p', p'', p'''$ on the boundary of $\mathscr{P}'_i $ lie on a horocycle. Since cusps may be viewed as limits of half collars we may apply the previous estimates taking the limit for $\ell(\beta_k) \to 0$. In particular the injectivity radius at the points $p$, $p'$, etc. has again the lower bound $r_p > 1.8 \varepsilon$. We call the union of the reduced cusps and collars the thin part of $S$ and the complement $$S' = S \setminus \big( {{\mathscr C}}'_1 \cup \dots \cup {{\mathscr C}}'_h \cup \mathscr{P}'_1 \cup \dots \cup \mathscr{P}'_n\big)$$the thick part. By the collar theorems and the lower bound on $r_p$ the injectivity radius at any point in $S'$ is larger than $\varepsilon$. The main result is the following. \[thm:lq2\] Let $S$ be as described and set $$\sigma = \ell(\beta_1)\cdots \ell(\beta_h)$$if $h\geq 1$ and $\sigma = 1$ if $h=0$. Then for any $\rho \leq \varepsilon$ and any disk $B_{\rho}$ of radius $\rho$ in the thick part of $S$ there exists a point $p \in B_{\rho}$ such that $${\mathrm{dist}}(p,{{\mathcal{BS}}}) \geq \rho^2 \sigma^2 e^{-M(2g-2+n)},$$ where $M$ is a constant that depends only on $\varepsilon$. The proof will be finalized in section \[sec:finaliz2\], where we shall get the explicit bound $M = \frac{97}{\varepsilon^2} \log(\frac{134}{\varepsilon})$. 
An $\varepsilon$-net in the thick part and Voronoi cells in $S$ {#sec:epsnetthick} --------------------------------------------------------------- We construct an $\varepsilon$-net in $S'$ beginning with the above vertices $p$, $p', \dots$, on the boundaries of the reduced collars and cusp neighborhoods. We call these points the “special points”. By the estimates in the preceding section and because collars and cusps are pairwise disjoint the special points have pairwise distances $\geq \varepsilon$. We now complete them into an $\varepsilon$-net on $S'$ by successively adding additional “ordinary points” on $S'$ at pairwise distances $\geq \varepsilon$ until this is no longer possible. For the properties of the resulting Voronoi cells we first prove the following. \[lem:specialpoints\]For $x \in S \setminus S'$ the distance to the special points of the $\varepsilon$-net is smaller than the distance to the ordinary points. Let $q$ be an ordinary point of the $\varepsilon$-net. The shortest connection from $x$ to $q$ intersects one of the boundary segments of length $\varepsilon$ in some point $y$. One of the endpoints of the segment, say $p$, satisfies ${\mathrm{dist}}(y,p) \leq \sfrac{ \varepsilon }{2}$. We now have ${\mathrm{dist}}(x,p) \leq {\mathrm{dist}}(x,y) + {\mathrm{dist}}(y,p) \leq {\mathrm{dist}}(x,y) + \sfrac{ \varepsilon }{2}$, where at least one of the inequalities is strict. On the other hand ${\mathrm{dist}}(x,q) = {\mathrm{dist}}(x,y) + {\mathrm{dist}}(y,q) \geq {\mathrm{dist}}(x,y) + {\mathrm{dist}}(p,q) - \sfrac{ \varepsilon }{2} \geq {\mathrm{dist}}(x,y)+ \sfrac{ \varepsilon }{2}$. Altogether ${\mathrm{dist}}(x,p) < {\mathrm{dist}}(x,q)$ which proves the claim. Our $\varepsilon$-net will now be considered as a distribution of points on $S$ and defines a decomposition of $S$ into Voronoi cells. We shall call “special Voronoi cells" those around special points and “ordinary Voronoi cells” those around ordinary points. 
The cell decomposition consisting of all special and all ordinary Voronoi cells is again denoted by ${{\mathscr V}}$. The following properties carry over from those in Section \[sec:Voronoi\]: \[lem:voronoisides\]Any cell of ${{\mathscr V}}$ contains an embedded disk of radius $\sfrac{ \varepsilon }{2}$ and has at most 12 sides. Lemma \[lem:specialpoints\] implies that the ordinary Voronoi cells are contained in $S'$ and have the same properties as those in Section \[sec:Voronoi\]. For the special cells we argue as follows. Consider, for instance, the special point $p$ on the boundary of the reduced collar ${{\mathscr C}}'_k$ as shown in Figure \[fig:ThinHandle\]. It follows from Lemma \[lem:specialpoints\] that the part in ${{\mathscr C}}'_k$ of the Voronoi cell at $p$ is the shaded polygon shown in the figure with two sides of length $\sfrac{ \varepsilon }{2}$ on the segments adjacent to $p$, then the two perpendiculars from these segments to $\beta_k$ and finally a side of length $\frac{1}{4} \ell(\beta_k)$ on $\beta_k$. The domain may be decomposed into two identical trirectangles with acute angle, say $\varphi$, at $p$. [**Formula [@BuserBook 2.3.1(iii)]**]{} yields $\cosh(\frac{1}{8}\ell(\beta_k))=\cosh(\sfrac{ \varepsilon }{2}) \sin \varphi$. Given that $\varepsilon \leq \frac{1}{3}$ and $\ell(\beta_k) \leq 2\varepsilon$ we deduce that the obtuse angle of the shaded domain at $p$ is $2 \varphi \geq 0.9 \pi$. Proceeding as in the proof of Lemma \[lem:vornum\] we now conclude that the Voronoi cell at $p$ has at most 9 sides. (From a combinatorial point of view, however, we count an additional degenerate side on $\beta_k$ since at the endpoints of the sides on $\beta_k$ there are four meeting cells.) Finally, with a glance at Figure \[fig:ThinHandle\] we see that the cell contains an embedded disk of radius $\sfrac{ \varepsilon }{2}$ centered at $p$.
When $p$ is on the boundary of a reduced cusp neighborhood then the result is the same except that the shaded domain is degenerate and one side is replaced by a point at infinity. Summing up we have the same conclusion as in Corollary \[cor:edges\], using that the area of $S$ is now $2\pi(2g-2+n)$. \[cor:edges2\] The number of sides of ${{\mathscr V}}$ is bounded above by $\left(\frac{97}{\varepsilon^2}-10\right)(g-1+\frac{n}{2})$. Traversing and terminal arcs {#sec:travend} ---------------------------- We now proceed as in the previous section to construct our model strands. As before, we consider an embedded disk $D$ of radius $\rho>0$ and a geodesic segment $c$ in $D$ belonging to the Birman-Series set. This time, however, $D$ is contained in the thick part $S'$. We take the continuation $\bar{c}$ of $c$ in both directions, of length $R-2\varepsilon$ on each side. The endpoints lie in Voronoi cells. The only difference with the previous sections is the way in which we deal with the special Voronoi cells. The intersections of $\bar{c}$ with the collars and cusps may be of two kinds: *traversing arcs* and *terminal arcs*. The precise definition and properties are as follows. *Traversing arcs.* Figure \[fig:TraversingArc\] shows half of a traversing arc $\mathcal{T}$ lifted to the universal covering of $S$. The horizontal line $\tilde{\beta}_k$ on the top is a lift of the geodesic $\beta_k$, the lower end is a lift of one of the boundary curves of ${{\mathscr C}}'_k$ consisting of geodesic segments of length $\varepsilon$. The dotted line is tangent to these and is a curve parallel to $\tilde{\beta}_k$ at distance $\omega'_k$. The vertices have distance $\omega_k$ to $\tilde{\beta}_k$. The meanings of $\omega_k$ and $\omega'_k$ are the same as in Section \[sec:thickthin\] (Figure \[fig:ThinHandle\]).
The projection (under the universal covering map) on $S$ of the strip between the dotted lines is the *dotted collar* ${{\mathscr C}}''_k \subset {{\mathscr C}}'_k \subset {{\mathscr C}}_k$ defined as $$\label{eq:dotted reduced}{{\mathscr C}}''_k = \{ x \in S \mid {\mathrm{dist}}(x,\beta_k) \leq \omega'_k \}.$$ The boundary curves of ${{\mathscr C}}''_k$ are tangent to the boundary curves of ${{\mathscr C}}'_k$. In a similar way we define (for later use) in each cusp neighborhood $\mathscr{P}_i$ the *dotted* cusp neighborhood $\mathscr{P}''_i \subset \mathscr{P}'_i \subset \mathscr{P}_i$ whose boundary curve is a horocycle tangent to the boundary curve of $\mathscr{P}'_i$. Note that the traversing arcs begin and end on the dotted lines, i.e., any traversing arc $\mathcal{T}$ is a connected component, for some $k = 1, \dots, h$, of $\bar{c} \cap {{\mathscr C}}''_k$ that has its endpoints on the two opposite boundary curves of ${{\mathscr C}}''_k$. The arc labelled $\lambda$ in Figure \[fig:TraversingArc\] is half of the lift of $\mathcal{T}$ going from the dotted line to $\tilde{\beta}_k$. The label also denotes the length of this arc. Projecting $\lambda$ orthogonally to $\tilde{\beta}_k$ we obtain the leg $\hat{\lambda}$ of a right-angled geodesic triangle with hypotenuse $\lambda$ whose other leg is $\omega'_k$. We now compare $\lambda$ with $\hat{\lambda}$.
[**Formula [@BuserBook 2.2.2(i)]**]{} applied to the aforementioned right-angled triangle yields $\cosh(\lambda) = \cosh(\hat{\lambda})\cosh(\omega'_k)$, and by [**formula [@BuserBook 2.3.1(iv)]**]{} applied to the trirectangle with sides $\frac{1}{2}\varepsilon, \omega'_k, \frac{1}{8}\ell(\beta_k), \omega_k$ we have $\cosh(\omega'_k)=\tanh(\sfrac{\varepsilon}{2})\coth(\frac{1}{8}\ell(\beta_k))$. Bringing this together and recalling that $\ell(\beta_k) \leq 2 \varepsilon$, $\varepsilon \leq 1/3$ we get $\cosh(\lambda) \geq \cosh(\hat{\lambda})\tanh(\sfrac{\varepsilon}{2})\coth(\sfrac{\varepsilon}{4}) \geq 1.98 \cosh(\hat{\lambda})$ and then, by an elementary estimate, $$\label{eq:lamlam}\lambda - \hat{\lambda} > 2/3.$$ The endpoint $A$ of $\lambda$ on the dotted line lies in a disk $\mathcal{U}$ of radius $\sfrac{\varepsilon}{2}$ around one of the vertices $\tilde{p}$. By the triangle inequality this implies, in turn, that *$\mathcal{U}$ is contained in the disk of radius $\varepsilon$ around $A$*. *Terminal arcs.* These are the connected components of the intersections of $\bar{c}$ with the dotted collars and cusp neighborhoods that have one endpoint on the boundary while the other lies in the interior. There are at most two such components. Since the infinite geodesic extension of $\bar{c}$ is a simple curve it follows that terminal arcs in a cusp neighborhood are “vertical”, that is, orthogonal to the horocycles. In the collars the situation is different.
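The elementary estimate behind \[eq:lamlam\] can be verified numerically: in the right-angled triangle one has $\cosh(\lambda) = \cosh(\hat{\lambda})\cosh(\omega'_k)$, and the gap $\lambda - \hat{\lambda}$ decreases towards $\log\cosh(\omega'_k) \geq \log(1.98) > 2/3$ as $\hat{\lambda}$ grows. The grid of values for $\hat{\lambda}$, $\varepsilon$ and $\ell(\beta_k)$ below is our own test choice; $\varepsilon = \frac{1}{3}$, $\ell(\beta_k) = 2\varepsilon$ is the worst case for the factor $\tanh(\sfrac{\varepsilon}{2})\coth(\sfrac{\varepsilon}{4})$.

```python
import math

# cosh(lambda) = cosh(lambda_hat) * cosh(omega'), with
# cosh(omega') = tanh(eps/2) * coth(ell/8) >= 1.98 for ell <= 2*eps.
for eps in [0.05, 1/3]:
    for j in range(1, 1001):
        ell = 2 * eps * j / 1000
        cosh_omega = math.tanh(eps / 2) / math.tanh(ell / 8)
        for lam_hat in [0.0, 0.5, 1.0, 2.0, 5.0, 10.0, 30.0]:
            lam = math.acosh(math.cosh(lam_hat) * cosh_omega)
            assert lam - lam_hat > 2 / 3
```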
To deal with the terminal arcs in ${{\mathscr C}}_k$ ($k = 1, \dots, h$) we supplement the $1$-skeleton of ${{\mathscr V}}$ by a set of *[terminal segments]{}* since the $\varepsilon$-net has no vertices in ${{\mathscr C}}''_k$. For the description we use again the universal covering. Figure \[fig:EndArc\] shows the part of the lift of ${{\mathscr C}}_k$ that lies between the dotted lines. The horizontal line in the middle is a lift $\tilde{\beta}_k$ of $\beta_k$. The vertical lines are geodesic arcs orthogonal to $\tilde{\beta}_k$ at successive distances $$\sigma_k = \frac{1}{4}\ell(\beta_k).$$ The shaded domain with vertices $\tilde{p}$, $\tilde{q}$ and sides of lengths $\sfrac{\varepsilon}{2}, \omega'_k, \sigma_k, \omega'_k, \sfrac{\varepsilon}{2}$ has the same meaning as in Figure \[fig:TraversingArc\] and is a lift of the shaded domain shown in Figure \[fig:ThinHandle\]. The terminal segments will be issued from the midpoints of the boundary segments of lengths $\varepsilon$. We describe these segments in the case of the midpoint $q$ at the boundary of ${{\mathscr C}}'_k$ whose lift $\tilde{q}$ in the universal covering is shown in Figure \[fig:EndArc\]. We shall assume that $R \geq 2 \omega'_k$ and add the simple modification for $R < 2 \omega'_k$ at the end.
The midpoints of the boundary segments on the dotted line on the top – we shall call them “white points” – have successive distances of some value $\varepsilon'$ slightly smaller than $\varepsilon$. We let $B$ be the white point on the top opposite to $\tilde{q}$ and then $C$ the white point to the right of $B$ whose distance $R'$ to $\tilde{q}$ is as close to $R$ as possible. Then $R -\varepsilon' \leq R' \leq R + \varepsilon'$. For the number $z_1$ of white points from $B$ to $C$, not counting $B$, we have the following, where $t$ is the length of the orthogonal projection of the segment of length $\sfrac{R'}{2}$ onto $\tilde{\beta}_k$ (Figure \[fig:EndArc\]), $$z_1 = \frac{2t}{\sigma_k} \leq \frac{R'}{\sigma_k} \leq \frac{R+\varepsilon}{\sigma_k}.$$ Some of the terminal segments will go from $\tilde{q}$ to these points, but we need further ones. To this end we look at the geodesic ray $\eta$ emanating from $\tilde{q}$ asymptotic to $\tilde{\beta}_k$ as shown in Figure \[fig:EndArc\]. Together with $\tilde{\beta}_k$ and the vertical line at $\tilde{q}$ it forms an ideal right-angled triangle with finite leg $\omega'_k$ and acute angle at $\tilde{q}$ which we write as a sum $\phi + \theta$, $\phi$ being the angle at $\tilde{q}$ of the right-angled triangle with hypotenuse $\sfrac{R'}{2}$ and legs $\omega'_k$, $t$.
By [**formula [@BuserBook 2.2.2(vi)]**]{} $$\cos(\phi) = \frac{\tanh(\omega'_k)}{\tanh(\sfrac{R'}{2})}, \quad \cos(\phi + \theta) = \tanh(\omega'_k).$$ Out of these relations we get, by elementary transformations, $$\sin(\theta) = \frac{\tanh(\omega'_k)}{\tanh(\sfrac{R'}{2})}\cdot \frac{1}{\cosh(\omega'_k)}\left\{1 - \sqrt{1- \frac{\cosh(\omega'_k)^2}{\cosh(\sfrac{R'}{2})^2}}\right\}.$$ Using that for $0 \leq x \leq 1$ we have $1-\sqrt{1-x} \leq x$ and that $\theta < \sfrac{\pi}{2} \sin(\theta)$ we further obtain $$\theta < \frac{\pi\sinh(\omega'_k)}{\sinh(R')}.$$ We now introduce additional white points on the arc that goes from $C$ to $D \in \eta$ on the circle with radius $R'$ and center $\tilde{q}$ as shown in Figure \[fig:EndArc\]. We position them in such a way that the successive arcs between them have lengths $\varepsilon$. Let $z_2$ be the number of these points, not counting $C$. Then $z_2 \leq \frac{\theta}{\varepsilon} \sinh(R')<\frac{\pi}{\varepsilon} \sinh(\omega_k')$. Using that $\cosh(\omega'_k) = \tanh(\sfrac{\varepsilon}{2})\coth(\sfrac{\sigma_k}{2}) \leq\frac{\varepsilon}{\sigma_k}$ ([**formula [@BuserBook 2.3.1(iv)]**]{} applied to the trirectangle with sides $\sfrac{\varepsilon}{2}, \omega'_k, \sfrac{\sigma_k}{2}, \omega_k$) we further get $$z_2 < \frac{\pi}{\sigma_k}.$$ Now $z_1+z_2$ is the number of white points to the right of $B$ (Figure \[fig:EndArc\]) and there is the same number of similar points on the left. Drawing the connecting geodesic arcs from $\tilde{q}$ to these points (including the arc from $\tilde{q}$ to $B$) and projecting them from the universal covering to $S$ we get the *terminal segments at* $q$. So far we have assumed that $R \geq 2\omega'_k$. If $R < 2 \omega'_k$, then $z_1 = 0$ and the number of white points on the circular arc of radius $R$ is smaller than or equal to the number of such points we would get on the arc of radius $2 \omega'_k$. For the latter we have already found the bound $\frac{\pi}{\sigma_k}$.
Hence again $z_2 < \frac{\pi}{\sigma_k}$. Summing up we have the following. \[lem:ModelEnd\]At any white point $q$ on the boundary of ${{\mathscr C}}''_k$ there are at most $\frac{2}{\sigma_k}(R+4)$ terminal segments. Finally, if $q$ is a white point on the boundary of a reduced cusp neighborhood $\mathscr{P}'_i$ then we need only one terminal segment, namely the geodesic arc of length $R$ at $q$ that is orthogonal to the horocycles. Constructing the model strands {#sec:modelstrands} ------------------------------ We begin by assigning to each collar ${{\mathscr C}}''_k$ a *winding number* $\tau_k$ with respect to $\bar{c}$ as follows. If ${{\mathscr C}}''_k$ contains no traversing arc we set $\tau_k = 0$. Otherwise we let $\mathcal{T}_k$ be the longest traversing arc in ${{\mathscr C}}''_k$, project it orthogonally to a parametrized curve $\hat{\mathcal{T}}_k$ on $\beta_k$ (that may go around $\beta_k$ many times) and set $$\label{eq:windingnumber}\tau_k = s_k \left[ \frac{\ell(\hat{\mathcal{T}}_k)}{\ell(\beta_k)} \right],$$where $[x]$ for $x \in {{\mathbb R}}$ denotes the largest integer $\leq x$ and $s_k \in \{-1,1\}$ is the orientation, i.e., $s_k = 1$ if $\hat{\mathcal{T}}_k$ winds around $\beta_k$ in the positive sense (with respect to a fixed orientation of $\beta_k$) and $s_k = -1$ otherwise. At some later point we shall simultaneously unwind all traversing arcs in ${{\mathscr C}}''_k$ by applying a Dehn twist $D_k$ of order $\tau_k$ along $\beta_k$ in the “unwinding direction”: if $\mathcal{T}$ traverses ${{\mathscr C}}''_k$ from, say $A$ to $A'$, then $D_k(\mathcal{T})$ is the geodesic arc from $A$ to $A'$ in ${{\mathscr C}}''_k$ that is homotopic (with fixed endpoints) to the curve $\mathcal{T}'$ that goes along $\mathcal{T}$ from $A$ to $\beta_k$, then $\vert\tau_k\vert$ times around $\beta_k$ in the opposite direction of $\hat{\mathcal{T}}_k$ and after that along $\mathcal{T}$ to $A'$. 
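The count in Lemma \[lem:ModelEnd\] follows from the bounds $z_1 \leq (R+\varepsilon)/\sigma_k$ and $z_2 < \pi/\sigma_k$: there are $2(z_1+z_2)$ segments to the white points on both sides plus the one to $B$, and $2(\varepsilon+\pi) + \sigma_k \leq 8$ since $\sigma_k \leq \sfrac{\varepsilon}{2} \leq \sfrac{1}{6}$. The sketch below checks this arithmetic over a grid of our own choosing.

```python
import math

eps = 1/3
for R in [0.5, 1, 10, 100, 1000]:
    for j in range(1, 1001):
        sigma = (eps / 2) * j / 1000     # sigma_k = l(beta_k)/4 <= eps/2
        z1 = (R + eps) / sigma           # white points between B and C (bound)
        z2 = math.pi / sigma             # white points on the circular arc (bound)
        total = 2 * (z1 + z2) + 1        # both sides, plus the segment to B
        assert total <= (2 / sigma) * (R + 4)
```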
It follows from this construction that the orthogonal projection of $D_k(\mathcal{T}_k)$ on $\beta_k$ has length $< \ell(\beta_k)$. Since the traversing arcs are pairwise disjoint we conclude that all of them have this property, i.e., *any $D_k(\mathcal{T})$ winds less than once around ${{\mathscr C}}''_k$*. The model strand for $\bar{c}$ is constructed in a similar fashion as in Section \[sec:locqua\], though with a modification for the traversing arcs and the terminal arcs. We proceed in three steps. First we split $\bar{c}$ into a product $$\bar{c} = c_0 c_1 c_2 c_3 \cdots c_{2J+1}c_{2J+2}$$($J\geq0$), where $c_0$ and $c_{2J+2}$ are either terminal arcs (Section \[sec:travend\]) or point curves depending on whether or not $\bar{c}$ begins, respectively ends with a terminal arc; the parts $c_1, c_3, \dots, c_{2J+1}$ are outside the dotted collars and dotted cusp neighborhoods; the parts $c_{2j}$, for $j=1,\dots,J$, comprise all the traversing arcs, each of them traversing some dotted collar ${{\mathscr C}}''_{k_j}$. Here we are using that a complete simple geodesic cannot enter and leave a collar on the same side (Lemma \[lem:collar\]) and that, similarly, it cannot enter and leave a cusp neighborhood. To deal with the windings separately we first “unwind” $\bar{c}$ and also leave out $c_0$ and $c_{2J+2}$ setting $$\breve{c} = c_1 \breve{c}_2 c_3\breve{c}_4c_5 \cdots c_{2J-1} \breve{c}_{2J} c_{2J+1},$$where $\breve{c}_{2j} = D_{k_j}(c_{2j})$, $j=1, \dots, J$. Now $\breve{c}$ traverses a succession of Voronoi cells and we associate to it a polygonal curve $P(\breve{c})$ by the same procedure as in Section \[sec.concom\] with a minor adaptation: if $c_0$ is a terminal arc, say $c_0 \subset {{\mathscr C}}''_{\ell}$, then $P(\breve{c})$ has its initial point on the boundary of ${{\mathscr C}}''_{\ell}$ (as though ${{\mathscr C}}''_{\ell}$ was the 0-th Voronoi cell that $\breve{c}$ goes along). This initial point is then one of the white points. 
Similarly, if $c_{2J+2}$ is a terminal arc in some ${{\mathscr C}}''_k$, then the endpoint of $P(\breve{c})$ is a white point on the boundary of ${{\mathscr C}}''_k$. The curves $\breve{c}$ and $P(\breve{c})$ have nearby endpoints: there is a geodesic arc $\breve{u}$ of length $\leq 2 \varepsilon$ leading from the initial point of $\breve{c}$ to the initial point of $P(\breve{c})$ and a geodesic arc $\breve{v}$ of length $\leq 2 \varepsilon$ leading from the endpoint of $P(\breve{c})$ to the endpoint of $\breve{c}$. The paths $\breve{c}$ and $\breve{u}P(\breve{c})\breve{v}$ are homotopic. In the next step we apply to $P(\breve{c})$ the reversed Dehn twists $D_k^{-1}$, $k=1,\dots,h$. The resulting curve ${\rule{-3pt}{0pt}\stackrel{ \frown}{\rule{0pt}{6pt}\smash P\rule{1pt}{0pt}}\rule{-4pt}{0pt}}(\breve{c}) = D_1^{-1} \cdots D_h^{-1}(P(\breve{c}))$ has the same endpoints as $P(\breve{c})$ and $\breve{u}{\rule{-3pt}{0pt}\stackrel{ \frown}{\rule{0pt}{6pt}\smash P\rule{1pt}{0pt}}\rule{-4pt}{0pt}}(\breve{c})\breve{v}$ is homotopic to the curve $c_1c_2c_3c_4\cdots c_{2J+1}$. In the final step we add the possible terminal segments. We describe this for $c_{2J+2}$, the procedure for $c_0$ being the same. If $c_{2J+2}$ is a point curve we set $\breve{c}_{2J+2} = c_{2J+2}$. Now assume that $c_{2J+2}$ is a terminal arc in some dotted collar ${{\mathscr C}}''_k$. 
Then the initial point of $c_{2J+2}$ lies within distance $\leq \sfrac{\varepsilon}{2}$ of the endpoint, say $q$, of ${\rule{-3pt}{0pt}\stackrel{ \frown}{\rule{0pt}{6pt}\smash P\rule{1pt}{0pt}}\rule{-4pt}{0pt}}(\breve{c})$ on the boundary of ${{\mathscr C}}''_k$ and we first extend $c_{2J+2}$ to a longer geodesic arc $c'_{2J+2}$ in such a way that if we lift it to the universal covering, then the other endpoint lies within distance $\leq \sfrac{\varepsilon}{2}$ of some white point, say $\tilde{r}$, on the arc $BCD$ as depicted in Figure \[fig:EndArc\] (or its symmetric image across $\tilde{q}B$), where the dotted line is $c'_{2J+2}$. The geodesic from $\tilde{q}$ to $\tilde{r}$ is the lift of a terminal segment and we let $\breve{c}_{2J+2}$ be this terminal segment. There is a connecting arc $v$ of length $\leq \sfrac{\varepsilon}{2}$ from the endpoint of $\breve{c}_{2J+2}$ to the endpoint of $c'_{2J+2}$, and the curves $(\breve{v})^{-1} \breve{c}_{2J+2} v$ and $c'_{2J+2}$ are homotopic. Finally, if $c_{2J+2}$ is a terminal arc in some dotted cusp neighborhood $\mathscr{P}''_i$, then we proceed similarly except that we take the extension $c'_{2J+2}$ to be of length $R$. In this latter case $\breve{c}_{2J+2}$ is a geodesic arc orthogonal to the horocycles. We now let $\bar{c}'$ be the extended geodesic arc $\bar{c}' = c_0' c_1 c_2 c_3 c_4 \cdots c_{2J+1} c'_{2J+2}$ (where $c'_0$ is defined in the same way as $c'_{2J+2}$) and define the model strand $m^c$ to be the geodesic arc in the homotopy class of $\breve{c}_0 {\rule{-3pt}{0pt}\stackrel{ \frown}{\rule{0pt}{6pt}\smash P\rule{1pt}{0pt}}\rule{-4pt}{0pt}}(\breve{c})\breve{c}_{2J+2}$. Then there is a connecting arc $u$ of length $\leq 2 \varepsilon$ from the initial point of $\bar{c}'$ to the initial point of $m^c$, and the already described connecting arc $v$ of length $\leq 2 \varepsilon$ from the endpoint of $m^c$ to the endpoint of $\bar{c}'$. 
By the aforementioned homotopies the curves $u \,m^c v$ and $\bar{c}'$ are homotopic. In the disk $D$ we have therefore the same estimate for the distance between $c$ and $m_c$ as in \[eq:tubular\] and so we get the following result for our model strands $m^c$ and their components $m_c$ in $D$: \[lem:distcmc\]Any component $c$ of the Birman-Series set in $D$ lies in a tubular neighborhood of radius $w_R := 3e^{-R}$ of the model strand $m_c$. Estimating the number of model strands {#sec:estmodelstrands} -------------------------------------- \[lem:edgetwist\]Let $\breve{L}$ be the number of path edges of $P(\breve{c})$ and $\tau_1, \dots, \tau_h$ the winding numbers of $\bar{c}$ at $\beta_1, \dots, \beta_h$ as in \[eq:windingnumber\], then $$\frac{\varepsilon}{24}\, \breve{L} + \sum_{k=1}^{h}\ell(\beta_k) \vert\tau_k\vert \leq \ell(\bar{c}) + 3 \varepsilon.$$ For $j=0,\dots,J$, any Voronoi cell (including the special ones) that is crossed by $c_{2j+1}$ has its center at distance $\leq \varepsilon$ from $c_{2j+1}$ (by the completeness of the $\varepsilon$-net in $S'$). By the same area argument as in the proof of Lemma \[lem:comblength\] it follows that $c_{2j+1}$ crosses Voronoi cells at most $\frac{4}{\varepsilon}(\ell(c_{2j+1})+3\varepsilon)$ times. Each $\breve{c}_{2j}$, for $j=1, \dots, J$, crosses at most 4 additional special Voronoi cells. This is so because $\breve{c}_{2j}$ winds at most once around the corresponding collar and its initial and final cells are on the accounts of $c_{2j-1}$ and $c_{2j+1}$, respectively.
Altogether $\breve{c}$ crosses at most $\frac{4}{\varepsilon}\sum_{j=0}^J(\ell(c_{2j+1})+3\varepsilon) + 4 J$ times some Voronoi cell and this multiplied by 6 is an upper bound of $\breve{L}$: $$\label{eq:ineqA}\breve{L} \leq \frac{24}{\varepsilon}\sum_{j=0}^J(\ell(c_{2j+1})+3\varepsilon) + 24 J.$$For any $k=1,\dots,h$ such that $\tau_k \neq 0$ there is a longest traversing arc $\mathcal{T}_k$ in ${{\mathscr C}}''_k$, which is one of the $c_{2j}$, say $\mathcal{T}_k = c_{2j(k)}$. Its orthogonal projection $\hat{c}_{2j(k)}$ on $\beta_k$ has length satisfying $ \ell(\beta_k) \vert \tau_k \vert \leq \ell(\hat{c}_{2j(k)}) < \ell(c_{2j(k)}) -\sfrac{4}{3}\leq \ell(c_{2j(k)})-4 \varepsilon$. Using that any other traversing arc has length $\ell(c_{2j}) > 4 \varepsilon$ we get $$\label{eq:ineqB}\sum_{k=1}^h \ell(\beta_k)\vert \tau_k \vert \leq \sum_{j=1}^{J}(\ell(c_{2j})-4\varepsilon)$$Combining the two inequalities the proof is complete. In the next lemma the constant $G$ stems from Corollary \[cor:edges2\]. In the case $n=0$ it is the same as in , Section \[sec:finaliz\]. \[lem:numbermc\]Set $G = \frac{97}{\varepsilon^2}(g-1+\frac{n}{2})$ and $$\mathcal{N}(R) = \frac{1}{10}\, \frac{m^G}{G^G}R^G \frac{1}{\ell(\beta_1)\cdots\ell(\beta_h)},$$with $m = \frac{134}{\varepsilon}$. Then for given $R > G$ there are at most $\mathcal{N}(R)$ model strands in $D$. We first count how many model strands arise from extensions $\bar{c}$ that have both endpoints outside the dotted collars. We shall say that these curves belong to the *first category*. For any of them the model strand $m^c$ is uniquely determined by the following three sets of data: the winding numbers $\tau_1, \dots, \tau_h$, the numbers $n_1, \dots, n_V$ of path edges of $P(\breve{c})$ on each of the $V$ edges of the cell decomposition ${{\mathscr V}}$ (which we assume enumerated from $1$ to $V$ in some way) and the selection of the initial and end path edge.
By Lemma \[lem:edgetwist\] we have, noting that $\breve{L} = n_1+\dots+n_V$, and $\ell(\bar{c}) \leq 2R - 4 \varepsilon$, $$\label{eq:ineqn1nV}\frac{\varepsilon}{24}(n_1+\dots+n_V) + \sum_{k=1}^{h}\ell(\beta_k) \vert\tau_k\vert \leq 2R.$$The number of model strands in the present case is thus bounded above by $4\breve{L}^2 2^h$ times the number of strings $(n_1, \dots, n_V, \vert \tau_1\vert, \dots, \vert \tau_h \vert)$ satisfying inequality . By Corollary \[cor:comblem\] this number is bounded above by $$\label{eq:numbstring}B := \frac{1}{(V+h)! \left(\frac{\varepsilon}{24}\right)^V \ell(\beta_1) \cdots \ell(\beta_h)}(2R + \gamma)^{V+h},$$ where $\gamma= \frac{\varepsilon}{24} V+ \ell(\beta_1) + \dots + \ell(\beta_h)$. Hence, we obtain the upper bound $4\breve{L}^2 2^h B$ for the number of model strands with endpoints outside the dotted collars. We also note using and Corollary \[cor:edges2\] that $h \leq 3(g-1+\frac{n}{2})$ and hence, $$\label{eq:boundgamma}V+h\leq G-5, \qquad \gamma < \frac{1}{50} R.$$We extend our first category of curves by allowing, in addition, the endpoints to lie in dotted collars that contain traversing arcs. For this extended category we have the previous bound multiplied by the number of possible choices of the terminal segments that are attached to ${\rule{-3pt}{0pt}\stackrel{ \frown}{\rule{0pt}{6pt}\smash P\rule{1pt}{0pt}}\rule{-4pt}{0pt}}(\breve{c})$. Any such segment must be homotopic with fixed endpoints to a curve that does not intersect ${\rule{-3pt}{0pt}\stackrel{ \frown}{\rule{0pt}{6pt}\smash P\rule{1pt}{0pt}}\rule{-4pt}{0pt}}(\breve{c})$ except at the endpoints. One of the latter is the attachment to ${\rule{-3pt}{0pt}\stackrel{ \frown}{\rule{0pt}{6pt}\smash P\rule{1pt}{0pt}}\rule{-4pt}{0pt}}(\breve{c})$, the other is among the four white points on the opposite boundary component of the corresponding dotted collar.
Hence, at either end of ${\rule{-3pt}{0pt}\stackrel{ \frown}{\rule{0pt}{6pt}\smash P\rule{1pt}{0pt}}\rule{-4pt}{0pt}}(\breve{c})$ there are at most 5 possible choices for the terminal segment. This yields the bound $100\breve{L}^2 2^h B$ for the number of curves in the extended first category. The *second category* consists of the cases where exactly one endpoint of $\bar{c}$ lies in a dotted collar that contains *no* traversing arcs or where both endpoints lie in the same dotted collar that contains no traversing arcs. We shall count the arising model strands for the cases for which this collar is ${{\mathscr C}}''_k$ and then take the sum for $k=1,\dots,h$. Now for $\bar{c}$ with initial point in ${{\mathscr C}}''_k$ (and possibly the end point also but without traversing arcs in ${{\mathscr C}}''_k$) the attachment point for the terminal segment that is glued to the beginning of ${\rule{-3pt}{0pt}\stackrel{ \frown}{\rule{0pt}{6pt}\smash P\rule{1pt}{0pt}}\rule{-4pt}{0pt}}(\breve{c})$ is uniquely determined by the sequence $(n_1,\dots,n_V)$, but the previous bound of 5 for the possible directions is now replaced by $\frac{8}{\ell(\beta_k)}(R+4)$ (Lemma \[lem:ModelEnd\]). At the same time, since $\tau_k = 0$, inequality is now replaced by $\frac{\varepsilon}{24}(n_1+\dots+n_V) + \sum_{j=1,\,j\neq k}^{h}\ell(\beta_j) \vert\tau_j\vert \leq 2R$, and Corollary \[cor:comblem\] yields the bound $$B' := \frac{\ell(\beta_k)}{(V+h-1)! \left(\frac{\varepsilon}{24}\right)^{V} \ell(\beta_1) \cdots \ell(\beta_{h})}(2R + \gamma)^{V+h-1}$$for the number of strings that satisfy it. If the second endpoint of $\bar{c}$ happens to lie in ${{\mathscr C}}''_k$ also, then both attachment points are determined by $(n_1, \dots, n_V)$ but for given direction of the terminal segment at the beginning there are at most 5 directions for the terminal segment at the end, owing to the fact that both terminal arcs of $\bar{c}$ in ${{\mathscr C}}''_k$ lie on the same simple geodesic.
It follows that for ${{\mathscr C}}''_k$ there are at most $10\breve{L} 2^{h-1} B' \frac{8}{\ell(\beta_k)}(R+4) \leq 40 \breve{L} 2^h (V+h) B$ possible cases, and summing up for $k = 1, \dots, h$ we get the upper bound $40 h \breve{L} 2^h (V+h) B$ for the number of model strands arising from curves in category 2. The *third* and final category consists of the cases where the two endpoints of $\bar{c}$ lie in distinct dotted collars that both contain no traversing arcs. A similar argument as before shows that there are at most $8h^2 2^h (V+h)(V+h-1)B$ model strands arising from this last category. Let now $N$ be the sum of the bounds for the three categories. By Lemma \[lem:edgetwist\] and since $\ell(\bar{c}) \leq 2R-4$ (Section \[sec:travend\]) we have $\breve{L} \leq \frac{48}{\varepsilon} R$ . Furthermore, by the hypothesis of the lemma, $G < R$ and therefore $V+h < R$. Hence, allowing rough estimates at this point, $$N \leq 100\left( \frac{48}{\varepsilon}\right)^2 R^2 \{1 + \frac{1}{10} h + \frac{1}{100}h^2 \} 2^h B \leq 100\left( \frac{48}{\varepsilon}\right)^2 R^2 3^h B.$$Applying to $B$ that $\gamma < \frac{1}{50}R$ and using that $x! \geq x^x e^{-x}$ for $x > 0$ we get $$B \leq \frac{1}{\ell(\beta_1)\cdots \ell(\beta_h)} \left(\frac{24}{\varepsilon}\right)^V\left(2+\frac{1}{50}\right)^{V+h}\left(\frac{e R}{V+h}\right)^{V+h}.$$Since the function $x \to (e R/x)^x$ is monotone increasing for $x \in [1, R]$ and, by , $V+h \leq G-2 < R$, the last factor has the bound $(eR/(V+h))^{V+h}\leq (eR/(G-2))^{G-2}$. The bound in the lemma now follows by elementary simplification using that $G>400$. 
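The two elementary facts used in this last step — the Stirling-type bound $x! \geq x^x e^{-x}$ and the monotonicity of $x \mapsto (eR/x)^x$ on $[1,R]$ — are easy to confirm numerically. A quick sketch in Python (the value of $R$ is an arbitrary test value):

```python
import math

# Stirling-type bound x! >= x^x e^{-x}, used above to estimate B
for x in range(1, 60):
    assert math.factorial(x) >= x**x * math.exp(-x)

# Monotonicity of f(x) = (e R / x)^x on [1, R]:
# (log f)'(x) = log(R/x) >= 0 for x <= R
R = 100.0
xs = [1.0 + i * (R - 1.0) / 1000 for i in range(1001)]
f = [(math.e * R / x) ** x for x in xs]
assert all(a <= b * (1 + 1e-12) for a, b in zip(f, f[1:]))
```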
Finalizing the proof {#sec:finaliz2} -------------------- Theorem \[thm:lq2\] is now proved by the same argument as in the proof of Theorem \[thm:lq1\]: the given disc $B_{\rho}$ has area $\geq \pi \rho^2$; for any model strand $m_c$ in $B_{\rho}$ the part $m_c^{2w_R}$ of the tubular neighborhood of radius $2w_R$ that lies in the disc has area $< 9\, \rho\, w_R$ (see ), where $w_R = 3e^{-R}$. By Lemma \[lem:numbermc\] there are at most $\mathcal{N}(R)$ such strands and by Lemma \[lem:distcmc\] the tubular neighborhoods of radius $w_R$ around them contain the intersection of the Birman-Series set with $B_{\rho}$. We thus get a point $p$ in $B_{\rho}$ at distance $\geq w_R$ from it if we set $R$ to be a solution $\geq G$ to the equation $$\frac{1}{\rho \, \sigma} \, \frac{m^G}{G^G} R^G = e^R.$$This is with $\rho$ replaced by $\rho \, \sigma$. By Lemma \[lem:solutionbound\] this solution satisfies $$\label{eq:boundRzero}R \leq 2 \log(\frac{1}{\rho\, \sigma}) + 2G \log(m)$$and, analogously to , we get the bound $w_R \geq 3\rho^2\sigma^2 e^{-2G \log{m}}$ with $G$ and $m$ as in Lemma \[lem:numbermc\]. This completes the proof. Systole independent bounds {#sec:sysindep} ========================== From a geometric point of view collars around very small geodesics are similar to pairs of cusps. This suggests that there should also exist a version of Theorem \[thm:lq1\] with bounds that are independent of the systole. Here we show that this is indeed possible. However, the constants arising from our approach become extremely small. The main idea is that if the width of a collar is sufficiently large with respect to $R$ then it cannot contain traversing arcs and, furthermore, the number of possible directions of terminal segments (c.f. Lemma \[lem:ModelEnd\]) at any white point on the boundary is just equal to 1 as in the case of a cusp. In Figure \[fig:WindingArc\] we calculate how much larger than $R$ the width must be.
The figure depicts again part of the universal covering of the surface $S$ and is in correspondence with the earlier Figure \[fig:EndArc\]. The distance from the white point $\tilde{q}$ to the lift $\tilde{\beta}_k$ of the small geodesic $\beta_k$ is equal to $R+H$ with $H$ to be determined and $\sigma_k = \sfrac{1}{4} \ell(\beta_k)$. The geodesic ray $\eta$ issued at $\tilde{q}$ is asymptotic to $\tilde{\beta}_k$ and forms an angle $\theta$ with the vertical geodesic from $\tilde{q}$ orthogonally to $\tilde{\beta}_k$. We now determine $H$ in such a way that $\eta$ contains the hypotenuse of a geodesic triangle $\tilde{q} CD$ with right angle at $C$ and small sides $R$ and $\varepsilon$. The latter is almost identical with the arc $CD$ from $C$ to $\eta$ on the circle of radius $R$ centered at $\tilde{q}$. Speaking in terms of Figure \[fig:EndArc\] we have $B=C$ and there is only one white point on the curve $BCD$ (and its symmetric image across $\tilde{q}C$) and thus only one choice for the direction of a terminal segment at $q$ in $S$. The necessary distance $H$ is determined by the following triangle formulas ([**formula [@BuserBook 2.2.2(iv)]**]{}) $$\label{eq:determineH}\sinh(R) = \tanh(\varepsilon) \cot(\theta), \quad \sinh(R+H) = \cot(\theta).$$By the formula for the grey shaded trirectangle already used prior to (with $\omega'_k = R+H$) we have $\cosh(R+H) = \tanh(\sfrac{\varepsilon}{2}) \coth(\sfrac{\sigma_k}{2}) $. For $H$ determined in this way, the length of $\beta_k$ is, with negligible error, equal to $16 \tanh(\sfrac{\varepsilon}{2})\tanh(\varepsilon) e^{-R}$. Thus, if we define $$\label{eq:betacondition}\mathcal{L}(R) = 4 \varepsilon^2 e^{-R}$$then for given $R$ any collar ${{\mathscr C}}''_k$ with $\ell(\beta_k) \leq \mathcal{L}(R)$ may be dealt with as though it were a pair of cusps.
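The relations above are easy to check numerically. The following sketch (with illustrative values $\varepsilon = \sfrac{1}{3}$, $R = 20$) solves for $H$ and the corresponding collar length and compares it with the asymptotic value $16 \tanh(\sfrac{\varepsilon}{2})\tanh(\varepsilon) e^{-R}$ quoted in the text:

```python
import math

eps, R = 1.0 / 3.0, 20.0   # illustrative values

# sinh(R) = tanh(eps) cot(theta), sinh(R+H) = cot(theta)
cot_theta = math.sinh(R) / math.tanh(eps)
RH = math.asinh(cot_theta)                 # R + H
H = RH - R

# trirectangle relation: cosh(R+H) = tanh(eps/2) coth(sigma_k/2), sigma_k = l(beta_k)/4
half_sigma = math.atanh(math.tanh(eps / 2) / math.cosh(RH))
ell_beta = 8.0 * half_sigma                # l(beta_k) = 4 sigma_k

approx = 16 * math.tanh(eps / 2) * math.tanh(eps) * math.exp(-R)
assert abs(ell_beta - approx) / approx < 1e-3

# L(R) = 4 eps^2 e^{-R} lies below this critical length, so collars with
# l(beta_k) <= L(R) have width >= R + H
assert 4 * eps**2 * math.exp(-R) < ell_beta
```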
We now start an iteration beginning by setting $$\ell_0 = 1, \quad R_0 = 2\log(\frac{1}{\rho}) + 2G \log(m),$$where $\rho$, $G$, $m$ are as in Theorem \[thm:lq2\] and Lemma \[lem:numbermc\]. The value for $R_0$ stems from and is the bound for $R$ in the proof of Theorem \[thm:lq2\] that holds if $h = 0$, i.e., in the case where $S$ has no small geodesics. By what we have said above, this bound is also valid if all small geodesics on $S$ have lengths $\leq \mathcal{L}(R_0)$, and so under this weaker hypothesis we still have the lower bound $3 e^{-R_0}$ for the largest distance to the Birman-Series set in the disc $B_{\rho}$. In the first iteration step we set $\ell_1 = \mathcal{L}(R_0)$ and set $R_1$ equal to the right hand side of for the special case $\sigma = \ell_1$. This is then the bound for $R$ in the proof of Theorem \[thm:lq2\] that holds if $h = 1$, and the same bound is valid in the more general case where all small geodesics different from $\beta_1$ have lengths $\leq \mathcal{L}(R_1)$. In this way we continue getting two sequences $\ell_0, \ell_1, \ell_2, \dots$ and $R_0, R_1, R_2, \dots$, with the iteration scheme $$\ell_k = \mathcal{L}(R_{k-1}) = 4 \varepsilon^2 e^{-R_{k-1}}, \qquad R_k = 2 \log{\frac{1}{\rho}} + 2 \log\frac{1}{\ell_1} + \cdots + 2 \log\frac{1}{\ell_k} + 2G\, \log(m),$$and at each step the result that some point in $B_{\rho}$ has distance $\geq w_{R_k} = 3 e^{-R_k}$ to the Birman-Series set provided that on $S$ all small geodesics different from $\beta_1, \dots, \beta_k$ are shorter than $\mathcal{L}(R_k)$.
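Since the lengths $\ell_k$ underflow extremely fast, the iteration is best carried out in logarithmic form. A sketch (the values of $G$ and $m$ below are small stand-ins for the much larger constants of Lemma \[lem:numbermc\]), which also checks that the scheme is equivalent to the recursion $R_k = 3R_{k-1} + 2\log\frac{1}{4\varepsilon^2}$:

```python
import math

def iterate_R(rho, eps, G, m, kmax):
    """l_k = L(R_{k-1}) = 4 eps^2 e^{-R_{k-1}},
    R_k = 2 log(1/rho) + 2 sum_{j<=k} log(1/l_j) + 2 G log(m)."""
    base = 2 * math.log(1 / rho) + 2 * G * math.log(m)
    Rs = [base]                                    # R_0
    acc = 0.0                                      # accumulates 2 sum log(1/l_j)
    for _ in range(kmax):
        log_inv_l = Rs[-1] - math.log(4 * eps**2)  # log(1/l_k), no underflow
        acc += 2 * log_inv_l
        Rs.append(base + acc)
    return Rs

rho, eps = 0.1, 1.0 / 3.0
Rs = iterate_R(rho, eps, G=2.0, m=3.0, kmax=6)     # toy G, m
c = math.log(1 / (4 * eps**2))
assert all(abs(Rs[k] - (3 * Rs[k - 1] + 2 * c)) < 1e-9 * Rs[k]
           for k in range(1, len(Rs)))
```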
Now the recursion for $R_k$ is equivalent to $R_k = 3R_{k-1} + 2 \log \frac {1}{4\varepsilon^2}$ and $R_k$ has the closed form $R_k = 3^k R_0 +(3^k-1) \log\frac{1}{4\varepsilon^2}.$ But the iteration stops at $k = 3g-3+n$ at the latest because this is the maximal possible number of small geodesics for $S$ of genus $g$ with $n$ cusps. Hence we have the following. \[thm:lq3\] Let $S$ be a hyperbolic surface of genus $g$ with $n$ cusps and $B_{\rho}$ a disk of radius $\rho$ in the $\varepsilon$-thick part of $S$, where $0 < \rho < \varepsilon \leq \sfrac{1}{3}$. Then there exists a point $p \in B_{\rho}$ such that $${\mathrm{dist}}(p,{{\mathcal{BS}}}) \geq e^{-3^\kappa R},$$ where $\kappa = 3g-3+n$ and $R = 2 \log \frac{1}{\rho} + M(g-1+\sfrac{n}{2})$ with $M=\frac{195}{\varepsilon^2}\log \frac{134}{\varepsilon}$. For the convenience of the reader we gather a number of properties of hyperbolic triangles inscribed in a circle that are certainly well known but not easily accessible in the literature. \[lem:MinimalAngle\]Consider, for given ${{\varepsilon}}> 0$, a hyperbolic triangle with sides of lengths $\geq {{\varepsilon}}$ inscribed in a circle of radius $\leq {{\varepsilon}}$. Then all angles are bounded from below by $\varphi_{{{\varepsilon}}}$, where $$\label{eq:XXXX}\cot{\frac{\varphi_{{{\varepsilon}}}}{2}} = \cosh({{\varepsilon}}) \cdot\left \{ \sqrt{1 + 2 \cosh{{{\varepsilon}}}} + \sqrt{2 + 2 \cosh{{{\varepsilon}}}} \right\}.$$ For the proof we shall show that the smallest possible angle is $\varphi_{{{\varepsilon}}}$ and is achieved in the case as shown on the left hand side in Figure \[Fig:InscribedTriangles\], where we have an isosceles triangle $ABC$ with base $AB$ of length ${{\varepsilon}}$ inscribed in a circle of center $O$ and radius ${{\varepsilon}}$. An intermediate step in the proof is the following property that does not hold in Euclidean geometry.
\[lem:Isosceles\]Let $ABC$ be an arbitrary hyperbolic triangle inscribed in some circle of radius $\rho > 0$. If we allow $C$ to move on the circle without crossing $A$ or $B$, then the angle $\gamma$ at $C$ becomes minimal if and only if ${\mathrm{dist}}(A,C) = {\mathrm{dist}}(B,C)$. We let $\zeta \in {}]0,2\pi{}[$ be the measure of the angular region between $OA$ and $OB$ into which $C$ is not allowed to enter. As a parameter for the position of $C$ we then take the angle $2t \in {}]0,2\pi - \zeta[{}$ from $OB$ to $OC$. The orthogonal line from $O$ to $BC$ decomposes the triangle $OBC$ into two isometric right-angled triangles $OMC$, $OMB$ with the oriented angle $\beta = \beta(t)$ at $C$, the orientation being from $CO$ to $CB$. In a similar way we let $2s \in {}]0,2\pi - \zeta[{}$ be the angle between $OC$ and $OA$. The orthogonal line from $O$ to $AC$ decomposes the triangle $OAC$ into two isometric right-angled triangles with oriented angle $\alpha = \alpha(s)$ at $C$, the orientation being from $CA$ to $CO$.
With these orientation conventions we have $$2t + 2s + \zeta = 2\pi.$$For the right-angled triangles the following formulas hold: $$\cosh \rho = \cot{\beta} \cot(t); \quad \cosh \rho = \cot{\alpha} \cot(s).$$Indeed, for the configuration as in Figure \[Fig:InscribedTriangles\], where $s,t,\alpha,\beta \in {}]0,\frac{\pi}{2}[$ these are instances of [**formula [@BuserBook 2.2.2(ii)]**]{}, and using that $\cot(\pi-s) = - \cot(s)$ we easily see that the formulas remain valid in the cases where $\alpha \leq 0$ or $\beta \leq0$. One also may check that in all cases the angle $\gamma$ of triangle $ABC$ at $C$ satisfies $$\gamma = \alpha + \beta.$$A straightforward computation involving trigonometric identities (including $\tan(s) + \tan(t) = \frac{\sin(s+t)}{\cos(s)\cos(t)}$) now yields the following formula for $\gamma$ as a function of $t \in {}]0,\pi - \frac{\zeta}{2}[$, $$\cot \gamma(t) = \cot \tfrac{\zeta}{2} \cdot \cosh \rho -\frac12 \frac{{\sinh}^2\rho}{\sin \tfrac{\zeta}{2}\cosh \rho}\left \{ \cos(2t + \tfrac{\zeta}{2} ) + \cos \tfrac{\zeta}{2} \right\}.$$Thus, $\gamma$ becomes minimal when $2t = \pi - \frac{\zeta}{2}$. By Lemma \[lem:Isosceles\] it suffices to look at triangles $ABC$ that are isosceles at $C$. If we move $A$, $B$ on the given circle towards each other until their distance is ${{\varepsilon}}$ the angle $\gamma$ decreases. If we then increase the height of the isosceles triangle keeping the base $AB$ fixed until the circumcircle reaches radius ${{\varepsilon}}$ the angle $\gamma$ decreases again. Hence, we arrive at the comparison triangle with the minimal angle $\varphi_{{{\varepsilon}}}$ as on the left hand side in Figure \[Fig:InscribedTriangles\]. To compute $\varphi_{{{\varepsilon}}}$ we begin with the angle $\psi_{{{\varepsilon}}}$ at $O$ of the equilateral triangle $OAB$.
By [**formula [@BuserBook 2.2.2(iii)]**]{} we have $\sinh \frac{{{\varepsilon}}}{2} = \sin \frac{\psi_{{{\varepsilon}}}}{2} \cdot \sinh{{{\varepsilon}}}$ or equivalently, using the half angle formulas for $\cot$ and $\sinh$, $$\label{eq:PsiEps}\cot \frac{\psi_{{{\varepsilon}}}}{4}= \sqrt{1+2\cosh {{\varepsilon}}} + \sqrt{2 + 2\cosh{{\varepsilon}}}.$$In the right-angled triangle $OMC$ the angles at $O$ and $C$ are $\frac{1}{2}\pi - \frac{1}{4}\psi_{{{\varepsilon}}}$ and $\frac{1}{2}\varphi_{{{\varepsilon}}}$, respectively. By the formula used earlier, $$\cosh {{\varepsilon}}= \cot \tfrac{\varphi_{{{\varepsilon}}}}{2} \cdot \cot(\tfrac{\pi}{2}-\tfrac{\psi_{{{\varepsilon}}}}{4})= \cot \tfrac{\varphi_{{{\varepsilon}}}}{2} \cdot \tan \tfrac{\psi_{{{\varepsilon}}}}{4},$$which implies the formula of the lemma. For completeness we add an upper bound on the angles of an inscribed triangle. The argument for this is easy: the largest angle is reached for the triangle $ABD$ inscribed in a circle of radius ${{\varepsilon}}$ with sides $AB$ and $BD$ of length ${{\varepsilon}}$. Triangles $OAB$ and $OBD$ are equilateral with the interior angles $\psi_{{{\varepsilon}}}$ and so we have \[lem:MaximalAngle\]Consider, for given ${{\varepsilon}}> 0$, a hyperbolic triangle with sides of lengths $\geq {{\varepsilon}}$ inscribed in a circle of radius $\leq {{\varepsilon}}$. Then all angles are bounded from above by $2\psi_{{{\varepsilon}}}$, where $\psi_{{{\varepsilon}}}$ is as in . [Appendix B:]{} [*A combinatorial lemma*]{} To distribute up to $L$ identical objects into $K$ distinct boxes is possible in $\binom{L+K}{L}$ different ways; a simple bound is $\frac{(L+K)^K}{K!}$. Here we prove a bound for the case of distributions in packets.
\[lem:comblem\]For any string of positive integers $\vec{s} = (s_1, \dots, s_K) \in ({{\mathbb N}}\setminus \{0\})^K$ and any $L \in {{\mathbb N}}\setminus \{0\}$ we denote by $\chi_{\vec{s}}(L)$ the number of ordered sequences $(n_1, \dots, n_K) \in {{\mathbb N}}^K$ satisfying $$n_1 s_1 + \dots + n_K s_K \leq L.$$Then $$\chi_{\vec{s}}(L) \leq \frac{1}{K! \, s_1 \cdots s_K}(L+s_1+\dots+s_K)^K.$$ By induction over $K$. For $K=1$ the inequality is clear. For the step from $K$ to $K+1$ we abbreviate $s_{K+1} = s$, $n_{K+1} = n$. The possible values for $n$ are $n = 0,1,2,\dots, \left[\frac{L}{s}\right]$ (largest integer $\leq \frac{L}{s}$). Now we observe that $$\chi_{(s_1, \dots, s_{K+1})}(L) = \sum_{n=0}^{\left[\frac{L}{s}\right]} \chi_{\vec{s}}(L-n \, s).$$By the induction hypothesis, for any $n = 0, \dots, \left[\frac{L}{s}\right]$ the number of sequences $n_1, \dots, n_K$ has the upper bound $$\chi_{\vec{s}}(L- n\, s) \leq \frac{1}{K! \, s_1 \cdots s_K} (L-n\, s + A)^K,$$where we have abbreviated $A= s_1 + \dots + s_K$. It remains to prove that $$\sum_{n=0}^{\left[\frac{L}{s}\right]} \frac{1}{K! s_1 \cdots s_K}(L-n\, s +A)^K \leq \frac{1}{(K+1)! \, s_1 \cdots s_K s}( L+ A +s)^{K+1}.$$Now $$\begin{aligned} \sum_{n=0}^{\left[\frac{L}{s}\right]}( L+ A - n\, s)^{K} &= \sum_{n=0}^{\left[\frac{L}{s}\right]} \int_{n-1}^n( L+ A - n\, s)^{K} dt \leq \sum_{n=0}^{\left[\frac{L}{s}\right]} \int_{n-1}^n( L+ A - t\, s)^{K} dt \\ &\leq \int_{-1}^{L/s}(L+A-t \, s)^K dt = \frac{1}{s(K+1)}\left( (L+A+s)^{K+1}-A^{K+1}\right)\\ &\leq \frac{1}{s(K+1)} (L+A+s)^{K+1} \end{aligned}$$and the above inequality follows.
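For small parameters the lemma is easy to check against direct enumeration; a brute-force sketch:

```python
import math
from itertools import product

def chi(s, L):
    """Number of (n_1,...,n_K) in N^K with n_1 s_1 + ... + n_K s_K <= L."""
    return sum(1 for n in product(*(range(L // si + 1) for si in s))
               if sum(ni * si for ni, si in zip(n, s)) <= L)

def bound(s, L):
    """The bound of the lemma: (L + s_1 + ... + s_K)^K / (K! s_1 ... s_K)."""
    K = len(s)
    return (L + sum(s)) ** K / (math.factorial(K) * math.prod(s))

for s in [(1,), (2, 3), (1, 2, 5), (3, 3, 4)]:
    for L in range(1, 16):
        assert chi(s, L) <= bound(s, L)
```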
There is also a real valued version of the lemma: \[cor:comblem\]For any string of positive real numbers $\vec{\rho} = (\rho_1, \dots, \rho_K) \in {{\mathbb R}}^K$ and any $\lambda>0$ we denote by $\chi_{\vec{\rho}}(\lambda)$ the number of ordered sequences $(n_1, \dots, n_K) \in {{\mathbb N}}^K$ satisfying $$n_1 \rho_1 + \dots + n_K \rho_K \leq \lambda.$$Then $$\chi_{\vec{\rho}}(\lambda) \leq \frac{1}{K! \, \rho_1 \cdots \rho_K}(\lambda+\rho_1+\dots+\rho_K)^K.$$ Take $t>0$ smaller than $\rho_1, \dots, \rho_K$ and set $L = \left[\frac{\lambda}{t}\right]+1$, $s_k = \left[ \frac{\rho_k}{t}\right]$, for $k = 1, \dots, K$. Any string satisfying $n_1 \rho_1 + \dots + n_K \rho_K \leq \lambda$ then also satisfies $n_1 s_1 + \dots + n_K s_K \leq L$. We may thus apply Lemma \[lem:comblem\] and the corollary is obtained in the limit as $t \to 0$. [1]{} Lipman Bers. An inequality for [R]{}iemann surfaces. In [*Differential geometry and complex analysis*]{}, pages 87–93. Springer, Berlin, 1985. Joan S. Birman and Caroline Series. Geodesics with bounded intersection number on surfaces are sparsely distributed. , 24(2):217–225, 1985. Peter Buser. Geometry and spectra of compact [R]{}iemann surfaces, volume 106 of [*Progress in Mathematics*]{}. Birkhäuser Boston Inc., Boston, MA, 1992. Peter Buser and Hugo Parlier, The distribution of simple closed geodesics on a Riemann surface, Complex analysis and its applications, OCAMI Stud., 2. Osaka Munic. Univ. Press, Osaka, 2007. Moira Chas, Non-abelian number theory and the structure of curves on surfaces, [arXiv:1608.02846]{}, Preprint, 2016. Viveka Erlandsson and Juan Souto, Counting curves in hyperbolic surfaces, [*Geom. Funct. Anal.*]{}, 26(3):729–777, 2016. Heinz Huber. Zur analytischen Theorie hyperbolischer Raumformen und Bewegungsgruppen. , 138: 1–26, 1959. Troels J[ø]{}rgensen. Simple geodesics on Riemann surfaces. , 86(1):120–122, 1982. Maryam Mirzakhani.
Growth of the number of simple closed geodesics on hyperbolic surfaces, , 168(1):97–125, 2008. Maryam Mirzakhani, Counting mapping class group orbits on hyperbolic surfaces, [arXiv:1601.03342]{}, Preprint, 2016. Igor Rivin, Simple curves on surfaces, , 87(1-3): 345–360, 2001. Igor Rivin, Geodesics with one self-intersection, and other stories, , 231(5): 2391–2412, 2012. Jenya Sapir, Bounds on the number of non-simple closed geodesics on a surface., , 24:7499–7545, 2016. Jenya Sapir, A Birman-Series type result for geodesics with infinitely many self-intersections, [arXiv:1609.00428]{}, Preprint, 2016. Peter Scott. Subgroups of surface groups are almost geometric. , 17(3):555–565, 1978. [*Addresses:*]{} Institute of Mathematics, EPFL, Switzerland\ [*Email:*]{} <peter.buser@epfl.ch>\ Department of Mathematics, University of Luxembourg, Luxembourg\ [*Email:*]{} <hugo.parlier@uni.lu>\
--- abstract: 'Density fluctuations resulting from spinodal decomposition in a non-equilibrium first-order chiral phase transition are explored. We show that such instabilities generate divergent fluctuations of conserved charges along the isothermal spinodal lines appearing in the coexistence region. Thus, divergent density fluctuations could be a signal not only for the critical end point but also for the first order phase transition expected in strongly interacting matter. We also compute the mean-field critical exponent at the spinodal lines. Our analysis is performed in the mean-field approximation to the NJL model formulated at finite temperature and density. However, our main conclusions are expected to be generic and model independent.' author: - 'C. Sasaki' - 'B. Friman' - 'K. Redlich' title: Density fluctuations in the presence of spinodal instabilities --- One of the central questions addressed in the context of QCD is the phase structure and the phase diagram of strongly interacting matter at finite temperature and baryon number density [@pd]. Based on calculations in effective models and on universality arguments one finds that the order of the transition from the hadronic to the quark gluon plasma phase depends on the number of quark flavors and on the value of the quark masses [@pisarski; @ef5; @ef1; @Klevansky; @ef2; @ef3; @our; @hatta; @fujii]. For physical values of the parameters one expects that the transition at high temperature and low net baryon number density is continuous. In the opposite limit of low temperature and large density the QCD phase transition is expected to be first order. This suggests that the phase diagram exhibits a critical end point (CEP), where the first order chiral transition of QCD terminates [@ef5; @stephanov]. 
Recent, though still preliminary, results obtained in first principle calculations of QCD, in lattice gauge theory (LGT), confirm the existence of such a point in the temperature $T$ and chemical potential $\mu_q$ plane [@LGT1; @LGT3; @LGT2]. The search for the CEP has recently attracted considerable attention. It is of particular interest to identify the position of the critical end point in the phase diagram and to study generic properties of thermodynamic quantities in its vicinity. The qualitative behavior of physical observables and their dependence on thermodynamic variables in the critical region can be studied in effective chiral models. However, a quantitative description of the thermodynamics near the phase transition can only be obtained theoretically by solving QCD [*ab initio*]{} in LGT or phenomenologically in the context of experimental studies of heavy ion collisions. To locate the phase transition line in the QCD phase diagram one needs observables that are sensitive probes of the critical structure. Modifications in the magnitude of fluctuations or the corresponding susceptibilities have been suggested as a possible signal for deconfinement and chiral symmetry restoration [@hatta; @stephanov]. In this context, fluctuations related to conserved charges are of particular interest [@fluct]. The fluctuations of baryon number and electric charge diverge at the critical end point while they are finite along the cross over and first order phase boundaries [@hatta; @our; @bj]. Consequently, singular fluctuations of baryon number and electric charge as well as a non monotonic behavior of these fluctuations as functions of the collision energy in heavy ion collisions have been proposed as possible signals for the QCD critical end point [@hatta; @stephanov; @our]. However, the finiteness of the fluctuations along the first order transition depends on the assumption that this transition appears in equilibrium.
A first order phase transition is intimately linked with the existence of a convex anomaly in the thermodynamic pressure [@ran], which can be uncovered only in non-equilibrium systems. There is an interval of energy density or baryon number density where the derivative of the pressure is positive, $\partial P/{\partial V}>0$. This anomalous behavior characterizes a region of instability in the ($T,n_q)$-plane, where $n_q$ is the net quark number density. This region is bounded by the spinodal lines, where the pressure derivative with respect to volume vanishes. The derivative taken at constant temperature and that taken at constant entropy define the isothermal and isentropic spinodal lines, respectively. The consequences of spinodal decomposition have been discussed in connection with the chiral/deconfinement phase transition in heavy ion collisions [@ran; @gavin; @rans; @ranhi; @heiselberg; @polony]. Furthermore, spinodal decomposition plays a crucial role in the description of the 1st order nuclear liquid-gas transition in low energy nuclear collisions [@ran; @heiselberg]. It has also been argued that in the region of phase coexistence, a phase separation can lead to an enhancement of baryon [@gavin] and strangeness fluctuations [@rans]. In this letter we consider the fluctuations of conserved charges along the spinodal lines, expected at finite net baryon density in the QCD phase diagram. We show that if the chiral phase transition is first order, then the fluctuations of the net densities of baryon number and electric charge diverge along the isothermal spinodal lines. Consequently, large fluctuations of these quantities may be a signal for a first order phase transition in the QCD medium. We also compute the mean-field critical exponent of the quark susceptibility at the spinodal lines and compare with that at the CEP. In our study of fluctuations across the first order chiral phase transition we adopt the Nambu–Jona-Lasinio (NJL) model. 
For two quark flavors and three colors the NJL Lagrangian reads [@nambu; @review]: $$\begin{aligned} \label{eq1} {\mathcal L} = \bar{\psi}( i{\ooalign{\hfil/\hfil\crcr$\partial$}} -m + \mu\gamma_0)\psi {}+ G_S \Bigl[ \bigl( \bar{\psi}\psi \bigl)^2 + \bigl( \bar{\psi}i\vec{\tau}\gamma_5\psi \bigl)^2 \Bigr]\,,\end{aligned}$$ where $m = \mbox{diag}(m_u, m_d)$ is the current quark mass, $\mu = \mbox{diag} (\mu_u, \mu_d)$ are the quark chemical potentials and $\vec{\tau}$ are Pauli matrices. The strength of the interactions between the constituent quarks is controlled by the coupling $G_S \Lambda^2 = 2.44$ with the three momentum cut-off $\Lambda=587.9$ MeV, introduced to regularize the ultraviolet divergences. The parameters are fixed to reproduce the pion decay constant in vacuum and the pion mass for $m_u=m_d=5.6$ MeV. In the mean field approximation the thermodynamics of the NJL model is, for an isospin symmetric system, given by the thermodynamic potential [@review]: $$\begin{aligned} \label{eq2} & \Omega (T,\mu;M)/V = \frac{(M-m)^2}{4G_S} {}- 12 \int\frac{d^3p}{(2\pi)^3} \Bigl[E(\vec{p}\,) \nonumber\\ & {}- T\ln ( 1-n^{(+)}(\vec{p},T,\mu) )\Bigr. {}-\Bigl. T\ln (1-n^{(-)}(\vec{p},T,\mu) \Bigr]\end{aligned}$$ where $M = m- 2G_S\langle \bar{\psi}\psi \rangle$ is the dynamical quark mass, $E(\vec{p}\,) = \sqrt{\vec{p}^{\,2} + M^2}$ is the quasiparticle energy and $n^{(\pm)}(\vec{p},T,\mu) = \Bigl( 1 + \exp\bigl[ (E(\vec{p}\,) \mp \mu)/T \bigr] \Bigr)^{-1}$ are the quark/antiquark distribution functions. 
The dynamical mass $M$ is obtained self-consistently from the stationarity condition ${\partial\Omega}/{\partial M} = 0$, which implies $$\begin{aligned} \label{eq3} M = m+24 G_S \int\frac{d^3 p}{(2\pi)^3} \frac{M}{E} \Bigl[ 1 - n^{(+)} - n^{(-)} \Bigr]\,.\end{aligned}$$ The relevant thermodynamic observables, the quark number density and the corresponding susceptibility, are given by $$\begin{aligned} \label{eq4} n_q &= -\frac{\partial \Omega}{\partial \mu}\,, \qquad\qquad \chi_{\mu\mu}=\frac{\partial n_q}{\partial\mu}\,.\end{aligned}$$ ![ The phase diagram of the NJL model in the $(T,n_q)$-plane. The filled point indicates the CEP. The full lines starting at the CEP represent the boundary of the coexistence region in equilibrium. The broken curves are the isothermal and the dotted ones the isentropic spinodal lines. ](phase2.eps){width="8.6cm"} The NJL model has a generic, QCD like, phase diagram. It exhibits a critical end point that separates the cross over from the first order chiral phase transition. The relevant part of the phase diagram in the $(T, n_q)$–plane is shown in Fig. 1. If the first order phase transition takes place in equilibrium, there is a coexistence region, which ends at the critical end point. However, in a non-equilibrium first order phase transition, the system supercools/superheats and, if driven sufficiently far from equilibrium, it becomes unstable due to the convex anomaly in the thermodynamic pressure. In other words, in the coexistence region there is a range of densities and temperatures, bounded by the spinodal lines, where the spatially uniform system is mechanically unstable. The location of the spinodal lines is determined by the conditions $$\label{eq5} \left( \frac{\partial P}{\partial V} \right)_T=0~~~~~{\rm and} ~~~~~~ \left( \frac{\partial P}{\partial V} \right)_S=0\,,$$ for the isothermal and isentropic spinodal lines, respectively. Both these lines are shown in Fig. 1.
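For orientation, the gap equation $(\ref{eq3})$ can be solved numerically with the parameter set given above. The sketch below (our own discretization choices — a trapezoidal momentum integration and a plain fixed-point iteration — not taken from the text) yields a vacuum constituent mass close to $400$ MeV, which essentially melts away at high temperature:

```python
import numpy as np

LAM = 587.9              # three-momentum cutoff [MeV]
G_S = 2.44 / LAM**2      # from G_S * Lambda^2 = 2.44
m0  = 5.6                # current quark mass [MeV]

def gap_rhs(M, T, mu):
    """Right-hand side of the gap equation for the dynamical mass M."""
    p = np.linspace(1e-6, LAM, 4000)
    E = np.sqrt(p * p + M * M)
    if T > 0:
        # Fermi functions via tanh to avoid overflow: 1/(1+e^x) = (1 - tanh(x/2))/2
        n_p = 0.5 * (1.0 - np.tanh((E - mu) / (2.0 * T)))
        n_m = 0.5 * (1.0 - np.tanh((E + mu) / (2.0 * T)))
    else:
        n_p = (E < mu).astype(float)
        n_m = np.zeros_like(E)
    f = p * p * (M / E) * (1.0 - n_p - n_m)       # angular integration already done
    integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(p)) / (2.0 * np.pi**2)
    return m0 + 24.0 * G_S * integral

def solve_gap(T, mu, M=400.0):
    """Fixed-point iteration M -> rhs(M) for the stationarity condition."""
    for _ in range(1000):
        M_new = gap_rhs(M, T, mu)
        if abs(M_new - M) < 1e-9:
            break
        M = M_new
    return M

M_vac = solve_gap(0.0, 0.0)    # vacuum constituent mass, close to 400 MeV
M_hot = solve_gap(300.0, 0.0)  # chirally restored phase: M near the current mass
```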
From the thermodynamic relation $$\begin{aligned} \label{eq6} & & \left( \frac{\partial P}{\partial V} \right)_T = \left( \frac{\partial P}{\partial V} \right)_S {}+ \frac{T}{C_V}\left[ \left( \frac{\partial P}{\partial T} \right)_V \right]^2\,,\end{aligned}$$ it is clear that the isentropic spinodal lines are located inside the instability region bounded by the isothermal spinodals and that the two sets of lines coincide at $T=0$. In the mean field approximation the isothermal instabilities disappear at the CEP, where the first order transition ends, while the isentropic spinodal curves join below the CEP. We note that the isentropic spinodal lines will probably be modified considerably when fluctuations are properly included. This is because the specific heat $C_V$ diverges at the CEP [@pd], while in the mean-field approximation it remains finite. It then follows from $(\ref{eq6})$ that both the isothermal and the isentropic pressure derivatives in $(\ref{eq5})$ vanish at the CEP. Consequently, in a more complete description, the isentropic spinodal lines are expected to move up in temperature and to also join at the CEP. Since the isothermal spinodal lines join at the CEP, as shown in Fig. 1, it is natural to explore how the charge fluctuations develop when going beyond the critical end point into the first order phase transition. In Fig. 2 we show the evolution of the net quark number fluctuations along a path of fixed $T=50$ MeV in the $(T,n_q)$–plane. When entering the coexistence region, there is a singularity in $\chi_{\mu\mu}$ that appears when crossing the isothermal spinodal lines, where the fluctuations diverge and the susceptibility changes sign. Between the spinodal lines, the susceptibility is negative. This implies an instability of the baryon number fluctuations when crossing the transition between the chirally symmetric and broken phases. 
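The identity (6) is standard thermodynamics and can be verified symbolically for any explicit equation of state; the following SymPy check uses a classical ideal gas with $C_V = Nk/(\gamma-1)$ purely as an illustration, not as an NJL result:

```python
import sympy as sp

N, k, T, V, gamma, C = sp.symbols('N k T V gamma C', positive=True)

P = N * k * T / V            # ideal-gas equation of state
C_V = N * k / (gamma - 1)    # heat capacity at constant volume

dPdV_T = sp.diff(P, V)       # (dP/dV)_T
dPdT_V = sp.diff(P, T)       # (dP/dT)_V

# Along an isentrope P*V**gamma = const, so (dP/dV)_S = -gamma*P/V:
dPdV_S = sp.diff(C * V**(-gamma), V).subs(C, P * V**gamma)

# Eq. (6): (dP/dV)_T = (dP/dV)_S + (T/C_V) * [(dP/dT)_V]^2
identity = sp.simplify(dPdV_T - (dPdV_S + (T / C_V) * dPdT_V**2))
```

The residual `identity` simplifies to zero, confirming the relation for this equation of state.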
![\[fig:Mu\] The net quark number density fluctuations $\chi_{\mu\mu}/\Lambda^2$ as a function of the quark number density $n_q$ across the first order phase transition. The susceptibility $\chi_{\mu\mu}$ was computed in the NJL model along the line of constant temperature $T=50$ MeV. ](chi_mumu2.eps){width="8.6cm"} ![The net quark number susceptibility $\chi_{\mu\mu}/\Lambda^2$ in the stable and metastable regions, as a function of the quark number density $n_q$ and temperature $T$. The full line corresponds to the critical temperature $T_{CEP}$, and the dashed lines to the temperatures 30, 40, 50, 60, 70, 75 and 80 MeV. The dotted line shows the projection of the isothermal spinodal onto the $(T,n_q)$ plane.](chiq2.eps){width="8.5cm" height="5.9cm"} An estimate of the growth rate for unstable density fluctuations, assuming a typical wave vector $q = 100-300$ MeV and using the results of [@heiselberg2] for the collisionless regime, yields a characteristic growth time $\tau\sim 1-10$ fm, in agreement with [@gavin:hep]. We note that the detailed calculations of ref. [@heiselberg2] show that the growth rate of the most unstable mode deep inside the spinodal region depends only weakly on the collision rate. Thus, our estimate is expected to be valid also in the intermediate and hydrodynamic regimes. Within one growth time an initial fluctuation grows by a factor $e$. Since this time is smaller than or on the order of the typical expansion time scale in relativistic heavy-ion collisions, we conclude that the spinodal instability may lead to observable fluctuations. A more quantitative estimate of the expected fluctuations in heavy-ion collisions would require a kinetic calculation, e.g. along the lines of ref. [@berdnikov]. The behavior of $\chi_{\mu\mu}$ seen in Fig.
2 is a direct consequence of the thermodynamic relations $$\begin{aligned} \label{eq7} & \left( \frac{\partial P}{\partial V} \right)_T = - \frac{n_q^2}{V}\frac{1}{\chi_{\mu\mu}}\,, \\ & \left( \frac{\partial P}{\partial V} \right)_S = - \frac{n_q^2}{V}\frac{\chi_{TT} - \frac{2 s}{n_q}\chi_{\mu T} {}+ \left( \frac{s}{n_q} \right)^2 \chi_{\mu\mu}} {\chi_{\mu\mu}\chi_{TT} - \chi_{\mu T}^2}\,,\label{eq8}\end{aligned}$$ which connect the pressure derivatives with the susceptibilities $\chi_{xy}=-\partial^2\Omega /\partial x\partial y$. Along the isothermal spinodal lines the pressure derivative in (\[eq7\]) vanishes. Thus, for non-vanishing density $n_q$, $\chi_{\mu\mu}$ must diverge to satisfy (\[eq7\]). Furthermore, since the pressure derivative ${\partial P}/{\partial V}|_T$ changes sign when crossing the spinodal line, there must be a corresponding sign change in $\chi_{\mu\mu}$, as seen in Fig. 2. Due to the linear relation between $\chi_{\mu\mu}$, the isovector susceptibility $\chi_I$ and the charge susceptibility $\chi_Q$ [@fn1], the charge fluctuations are also divergent at the isothermal spinodal line. Thus, in heavy-ion collisions, the baryon number and electric charge could show enhanced fluctuations, as a signal of spinodal decomposition. The spinodal phase separation can also lead to fluctuations in strangeness [@rans] and isospin densities [@fn2]. At the isentropic spinodal line the baryon number susceptibility is in general finite. This is also true for the other susceptibilities appearing in Eq. (\[eq8\]). The isentropic spinodal line (\[eq5\]) corresponds to a zero of the numerator in (\[eq8\]) and of the velocity of hydrodynamic sound waves. In the hydrodynamic limit the instability sets in at the isothermal spinodal, but the growth rate becomes large only at the isentropic spinodal line [@heiselberg2].
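Relation (7) can be verified in the same spirit; the SymPy sketch below uses the grand-canonical pressure of a classical Boltzmann gas, $P(T,\mu) = T\,e^{\mu/T}$ (in illustrative units with all prefactors dropped), which is unrelated to the NJL model but obeys the same general thermodynamics:

```python
import sympy as sp

T, mu, V, N = sp.symbols('T mu V N', positive=True)

P = T * sp.exp(mu / T)       # classical-gas pressure, illustrative units
n = sp.diff(P, mu)           # number density n = dP/dmu
chi = sp.diff(P, mu, 2)      # susceptibility chi_mumu = d^2 P/dmu^2

# At fixed particle number N, mu becomes a function of V through n = N/V:
mu_V = sp.solve(sp.Eq(n, N / V), mu)[0]

lhs = sp.diff(P.subs(mu, mu_V), V)          # (dP/dV)_T at fixed N
rhs = (-(n**2 / V) / chi).subs(mu, mu_V)    # right-hand side of Eq. (7)
```

Both sides reduce to $-NT/V^2$, so the divergence of $\chi_{\mu\mu}$ is indeed forced wherever $(\partial P/\partial V)_T$ passes through zero at finite $n_q$.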
In the case of an equilibrium first order phase transition, the density fluctuations do not diverge; in the coexistence region the susceptibility is a linear combination of the positive susceptibilities above and below the phase boundary. Thus, the fluctuations increase as one approaches the CEP along the first order transition and decrease again in the crossover region. This led to the prediction of a non-monotonic behavior of the fluctuations with increasing beam energy as a signal for the existence of a CEP [@stephanov; @sfr]. We stress that strictly speaking this is relevant only for the idealized situation where the first order phase transition takes place in equilibrium. In the more realistic non-equilibrium system one expects fluctuations in a larger region of the phase diagram, i.e., over a broader range of beam energies, due to the spinodal instabilities. The critical exponent at the isothermal spinodal line is found to be $\gamma=1/2$, with $\chi_{\mu\mu}\sim (\mu-\mu_c)^{-\gamma}$, while $\gamma=2/3$ at the CEP, in agreement with the mean-field results [@hatta; @bj]. Thus, the singularities at the two spinodal lines conspire to yield a somewhat stronger divergence as they join at the CEP. The exponents are renormalized by fluctuations, but the smooth evolution of the singularity from the spinodal lines to the CEP, illustrated in Fig. 3, is expected to be generic. We have shown that the net quark number fluctuations diverge at the isothermal spinodal lines of the first order chiral phase transition [@34]. As the system crosses this line, it becomes unstable with respect to spinodal decomposition. The unstable region is in principle reachable in non-equilibrium systems, created e.g. in heavy ion collisions. This means that large fluctuations of the density are expected not only at the second order critical end point but also at a non-equilibrium first order transition.
In fact, the signal from the first-order transition may be much stronger than that from the CEP. Model calculations suggest that the critical region of enhanced susceptibility around the CEP is fairly small [@hatta; @bj; @sfr], while it is large in the spinodal region, where the fluctuations appear due to the divergence of $\chi_{\mu\mu}$ and due to the mechanical instability of the system. We stress that there is a close relation between the singularities at the CEP and the spinodal lines. The properties of different susceptibilities were obtained within the NJL model in the mean field approximation. However, the singular properties of charge susceptibilities in the presence of spinodal instabilities are quite general. They appear due to the straightforward thermodynamic relation between the pressure derivatives and charge susceptibilities. Thus, although the NJL model is non-confining, the results presented in this paper are expected to be robust on a qualitative level. Acknowledgments {#acknowledgments .unnumbered} ================ We acknowledge stimulating discussions with F. Karsch, V. Koch, J. Randrup, M. Stephanov and V. Toneev. We thank M. Stephanov for an illuminating discussion of the singularities at the CEP. The work of B.F. and C.S. was supported in part by the Virtual Institute of the Helmholtz Association under the grant No. VH-VI-041. K.R. acknowledges partial support of the Gesellschaft für Schwerionenforschung (GSI), the Polish Ministry of National Education (MEN) and DFG under the Mercator program. [99]{} M. Stephanov, Acta Phys.Polon. B [**35**]{}, 2939 (2004); Prog. Theor. Phys. Suppl.  [**153**]{}, 139 (2004); Int. J. Mod. Phys. A [**20**]{}, 4387 (2005). R. D. Pisarski and F. Wilczek, Phys. Rev. D [**29**]{}, 338 (1984). J. Berges and K. Rajagopal, Nucl. Phys. B [**538**]{}, 215 (1999). A. Barducci [*et al.*]{}, Phys. Lett. B [**231**]{}, 463 (1989). S. P. Klevansky, Rev. Mod. Phys.  [**64**]{} (1992) 649. M. A. Stephanov, Phys. Rev. Lett.  
[**76**]{}, 4472 (1996). M. G. Alford, K. Rajagopal and F. Wilczek, Phys. Lett. B [**422**]{}, 247 (1998). K. Redlich, B. Friman and C. Sasaki, J. Phys. G [**32**]{}, S283 (2006). T. Kunihiro, Phys. Lett.  B [**271**]{}, 395 (1991); Y. Hatta and T. Ikeda, Phys. Rev. D [**67**]{}, 014028 (2003). H. Fujii, Phys. Rev. D [**67**]{}, 094018 (2003). H. Fujii and M. Ohtani, Phys. Rev. D [**70**]{}, 014016 (2004). M. A. Stephanov, K. Rajagopal and E. V. Shuryak, Phys. Rev. Lett.  [**81**]{}, 4816 (1998). C. R. Allton [*et al.*]{}, Phys. Rev. D [**66**]{}, 074507 (2002).\ C. R. Allton [*et al.*]{}, Phys. Rev. D [**71**]{}, 054508 (2005). R.V. Gavai and S. Gupta, Phys. Rev. D [**71**]{}, 114014 (2005). Z. Fodor and S. D. Katz, JHEP [**0203**]{}, 014 (2002); JHEP [**0404**]{}, 050 (2004). S. Jeon and V. Koch, in Quark Gluon Plasma 3. pp 430, [*Eds. R.C. Hwa and X. N. Wang, World Scientific Publishing, 2004*]{}. B. J. Schaefer and J. Wambach, Phys. Rev.  D [**75**]{}, 085015 (2007). P. Chomaz, M. Colonna and J. Randrup, Phys. Rept. [**389**]{}, 263 (2004). D. Bower and S. Gavin, Acta Phys. Hung. A15, 269, 1219-7580 (2002). J. Randrup, Acta Phys. Hung. A22, 69, 1219-7580 (2005). V. Koch, A. Majumder and J. Randrup, Phys. Rev. C [**72**]{}, 064903 (2005). J. Randrup, Phys. Rev. Lett. [**92**]{}, 122301 (2004). H. Heiselberg, C.J. Pethick and D.G. Ravenhall, Phys. Rev. Lett. [**61**]{}, 818 (1988). J. Polonyi, hep-th/0509078. Y. Nambu and G. Jona-Lasinio, Phys. Rev.  [**122**]{}, 345 (1961); Phys. Rev.  [**124**]{}, 246 (1961). For the recent reviews see eg.: M. Buballa, Phys. Rept.  [**407**]{}, 205 (2005). H. Heiselberg, C.J. Pethick and D.G. Ravenhall, Ann. Phys. [**223**]{}, 37 (1993). S. Gavin, arXiv:nucl-th/9908070. In an isospin symmetric system $\chi_Q=\chi_{\mu\mu}/36+\chi_I/4$; see [@sfr]. C. Sasaki, B. Friman and K. Redlich, Phys. Rev.  D [**75**]{}, 054026 (2007). B. Berdnikov and K. Rajagopal, Phys. Rev. D [**61**]{}, 105017 (2000). 
In heavy-ion collisions, the rapid expansion and the finite volume of the system suppress divergent fluctuations of conserved charges. Consequently, the results may change significantly when these effects are properly accounted for; see refs. [@pd; @berdnikov]. The divergence of $\chi_{\mu\mu}$ along the spinodal lines implies a corresponding divergence of the electric charge susceptibility $\chi_Q$ due to the relation [@fn1].
--- abstract: '[The paper overviews our recent work on the synthesis of metasurfaces and related concepts and applications. The synthesis is based on generalized sheet transition conditions (GSTCs) with a bianisotropic surface susceptibility tensor model of the metasurface structure. We first place metasurfaces in a proper historical context and describe the GSTC technique with some fundamental susceptibility tensor considerations. Upon this basis, we next provide an in-depth development of our susceptibility-GSTC synthesis technique. Finally, we present five recent metasurface concepts and applications, which cover the topics of birefringent transformations, bianisotropic refraction, light emission enhancement, remote spatial processing and nonlinear second-harmonic generation.]{}' author: - 'Karim Achouri and Christophe Caloz[^1]' bibliography: - 'NewLib.bib' title: 'Design, Concepts and Applications of Electromagnetic Metasurfaces' --- Introduction {#sec:intro} ============ Metamaterials reached a peak of interest in the first decade of the 21st century. Then, due to their fabrication complexity, bulkiness and weight, and their limitations in terms of losses, frequency range and scalability, they became less attractive, and were progressively superseded by their two-dimensional counterparts, the metasurfaces [@Holloway2009; @holloway2012overview; @Minovich2015; @Glybovski20161]. The idea of controlling electromagnetic waves with electromagnetically thin structures is clearly not a new concept. The first example is probably that of Lamb, who studied the reflection and transmission from an array of metallic strips, already back in 1897 [@lamb1897reflection]. Later, in the 1910s, Marconi used arrays of straight wires to realize polarization reflectors [@marconi1919reflector]. 
These first two-dimensional electromagnetic structures were later followed by a great diversity of systems that emerged mainly with the development of radar technology during World War II. Many of these systems date back to the 1960s. The Fresnel zone plate reflectors, illustrated in Fig. \[fig:FZPR\], were based on the concept of the Fresnel lens demonstrated almost 150 years earlier and used in radio transmitters [@1144988]. The frequency selective surfaces (FSS), illustrated in Fig. \[fig:FSS\], were developed as spatial filters [@MunkFSS; @6867682]. The reflectarray antennas [@huang2007reflectarray] were developed as the flat counterparts of parabolic reflectors, and were initially formed by short-ended waveguides [@1138112]. They were later progressively improved and the short-ended waveguides were replaced with microstrip printable scattering elements in the late 1970s [@malagisi1978microstrip; @montgomery1978microstrip], as shown in Fig. \[fig:Reflarray\]. The transmissive counterparts of the reflectarrays are the transmitarrays, which were used as array lens systems and date back to the 1960s [@1150235; @1448703; @1143726]. They were first implemented in the form of two interconnected planar arrays of dipole antennas, one for receiving and one for transmitting, where each antenna on the receiver side was connected via a delay line to an antenna on the transmit side, as depicted in Fig. \[fig:Arraylens\]. Through the 1990s, the transmitarrays evolved from interconnected antenna arrays to layered metallic structures that were essentially the functional extensions of FSS [@543779; @659416; @959839], with efficiency limited by the difficulty of controlling the transmission phase over a $2\pi$ range while maintaining a high enough amplitude. Finally, compact quasi-transparent transmitarrays or phase-shifting surfaces, able to cover a $2\pi$-phase range, were demonstrated in 2010 [@5395659].
The aforementioned Fresnel lenses, FSS, reflectarrays and transmitarrays are the precursors of today’s *metasurfaces*[^2]. From a general perspective, metasurfaces can be used to manipulate the polarization, the phase and the amplitude of electromagnetic fields. A rich diversity of metasurface applications has been reported in the literature to date and many more are expected to emerge. These applications are too numerous to be exhaustively cited. Some of the most significant ones are reported in [@1.4820810; @6704736; @6692975; @2367668; @PhysRevX.4.021026; @Huang2013345; @MOP:MOP29003; @4869127; @Kenanakis:12] (polarization transformations), [@li2014ultra; @4832785; @PhysRevB.91.115305; @Wen:14; @dincer2014polarization; @6748871; @6553200] (absorption) and [@capasso1; @yu2014flat; @6748002; @Shi:14; @Yang:14; @PhysRevLett.114.095503; @Lipworth:13; @Hunt18012013; @6933919; @4821357; @20060309; @6805160; @6705631; @6894567] (wavefront manipulations). More sophisticated metasurfaces, transforming both phase and polarization, have been realized recently. This includes metasurfaces producing beams possessing orbital angular momentum [@nye1974dislocations] or vortex waves [@karimi2014generating; @PhysRevApplied.2.044012; @Yi:14; @Veysi:15; @nl500658n; @doi:10.1021/nl402039y], holograms [@Hunt:14; @zheng2015metasurface] and stable beam traction [@PhysRevB.91.115408]. Additionally, nonreciprocal transformations [@PhysRevB.88.205405; @PhysRevE.89.053203; @PhysRevB.89.075109; @3615688; @6280630], nonlinear interactions [@1.4914343; @valev2014nonlinear; @lee2014giant], analog computing [@nl5047297; @Silva10012014] and spatial filtering [@ortiz2013spatial; @PhysRevB.90.125422; @Shen28032014] have also been reported. To deploy their full potential, metasurfaces must be designed efficiently. This requires a solid *model* that both simplifies the actual problem and provides insight into its physics.
Metasurfaces are best modeled, according to Huygens’ principle, as *surface polarization current sheets* via continuous (locally homogeneous) bianisotropic surface susceptibility tensorial functions. Inserting the corresponding surface polarization densities into Maxwell equations results in electromagnetic sheet transition conditions, which constitute the key equations to solve in the design of metasurfaces. The objective of this paper is twofold. First, it will present a general framework for the synthesis of the aforementioned metasurface surface susceptibility functions for arbitrary (amplitude, phase, polarization, propagation direction and waveform) specified fields. From this point, the physical structure (material and geometry of the scattering particles, substrate parameters and layer configuration, thickness and size) is tediously but straightforwardly determined, after discretization of the susceptibility functions, using scattering parameter mapping. The synthesis of metasurfaces has been the subject of much research in recent years [@achouri2014general; @salem2014metasurface; @Salem2013c; @nl5001746; @Pfeiffer2013a; @6648706; @PhysRevApplied.2.044011; @Wong2014360; @6905746; @6891256; @6477089; @452013]. Second, the paper will show how this synthesis framework provides a general perspective of the electromagnetic transformations achievable by metasurfaces, and present subsequent concepts and applications. Sheet Transition Conditions {#sec:BC} =========================== The general synthesis problem of a metasurface is represented in Fig. \[fig:MS\]. As mentioned in Sec. \[sec:intro\], the metasurface is modeled as an electromagnetic sheet (zero-thickness film)[^3]. In the most general case, a metasurface is made of an array of polarizable scattering particles that induce both electric and magnetic field discontinuities.
It is therefore necessary to express the discontinuities of these fields as functions of the electric and magnetic surface polarization densities (${\boldsymbol{P}}$ and ${\boldsymbol{M}}$). The rigorous boundary conditions that apply to such an interface have been originally derived by Idemen [@Idemen1973]. ![Metasurface synthesis problem. The metasurface to be synthesized lies in the $xy$-plane at $z=0$. The synthesis procedure consists in finding the susceptibility tensors characterizing the metasurface, ${\overline{\overline{\chi}}}({\boldsymbol{\rho}})$, in terms of specified arbitrary incident, ${\boldsymbol{\psi}}_\text{i}({\boldsymbol{r}})$, reflected, ${\boldsymbol{\psi}}_\text{r}({\boldsymbol{r}})$, and transmitted, ${\boldsymbol{\psi}}_\text{t}({\boldsymbol{r}})$, waves.[]{data-label="fig:MS"}](MSfig){width="0.8\linewidth"} For a metasurface lying in the $xy$-plane at $z=0$, these transition conditions follow from the idea that all the quantities in Maxwell equations can be expressed in the following form $$\label{eq:General_F} f(z)=\left \{ f(z) \right\}+\sum_{k=0}^{N}f_{k}\delta^{(k)}(z),$$ where the function $f(z)$ is discontinuous at $z=0$. The first term of the right-hand side of  is the *regular part* of $f$, which corresponds to the value of the function everywhere except at $z=0$, while the second term is the *singular part* of $f$, which is an expansion over the $k$-th derivatives of the Dirac delta distribution (corresponding to the discontinuity of $f$ and the $k$-th derivatives of $f$). Most often, the series in  may be truncated at $N=0$, so that only the discontinuities of the fields are taken into account while the discontinuities of the derivatives of the fields are neglected. 
With this truncation, the metasurface transition conditions, known as the generalized sheet transition conditions (GSTCs), are found as[^4] \[eq:BC\] $$\begin{aligned} {\boldsymbol{\hat{z}}}\times\Delta{\boldsymbol{H}} &=j\omega{\boldsymbol{P}}_{\parallel}-{\boldsymbol{\hat{z}}}\times\nabla_{\parallel}M_{z},\label{eq:CurlH}\\ \Delta{\boldsymbol{E}}\times{\boldsymbol{\hat{z}}} &=j\omega\mu_0 {\boldsymbol{M}}_{\parallel}-\nabla_{\parallel}\bigg(\frac{P_{z}}{\epsilon_0 }\bigg)\times{\boldsymbol{\hat{z}}},\label{eq:CurlE}\\ {\boldsymbol{\hat{z}}}\cdot\Delta{\boldsymbol{D}} &=-\nabla\cdot{\boldsymbol{P}}_{\parallel},\label{eq:divD}\\ {\boldsymbol{\hat{z}}}\cdot\Delta{\boldsymbol{B}} &=-\mu_0 \nabla\cdot{\boldsymbol{M}}_{\parallel},\label{eq:divB}\end{aligned}$$ where the terms on the left-hand sides of the equations correspond to the differences of the fields on both sides of the metasurface, which may be expressed as $$\label{eq:field_diff} \Delta \Psi_{u} ={\boldsymbol{\hat{u}}}\cdot\Delta{\boldsymbol{\Psi}}\Bigr|_{z=0^{-}}^{0^{+}} =\Psi_{u,\text{t}}-(\Psi_{u,\text{i}}+\Psi_{u,\text{r}}),\; u=x,y,z,$$ where ${\boldsymbol{\Psi}}$ represents any of the fields ${\boldsymbol{E}}$, ${\boldsymbol{H}}$, ${\boldsymbol{D}}$ or ${\boldsymbol{B}}$, and where the subscripts i, r, and t denote the incident, reflected and transmitted fields, and ${\boldsymbol{P}}$ and ${\boldsymbol{M}}$ are the electric and magnetic surface polarization densities, respectively. 
In the general case of a linear bianisotropic metasurface, these polarization densities are related to the acting (or local) fields, ${\boldsymbol{E}}_\text{act}$ and ${\boldsymbol{H}}_\text{act}$, by [@kong1986electromagnetic; @lindell1994electromagnetic] \[eq:pola\_dens\] $$\begin{aligned} {\boldsymbol{P}}&=\epsilon_0 N{\overline{\overline{\alpha}}}_{\text{ee}}\cdot{\boldsymbol{E}}_\text{act}+\frac{1}{c_0} N{\overline{\overline{\alpha}}}_{\text{em}}\cdot{\boldsymbol{H}}_\text{act},\\ {\boldsymbol{M}}&=N{\overline{\overline{\alpha}}}_{\text{mm}}\cdot{\boldsymbol{H}}_\text{act}+\frac{1}{\eta_0}N{\overline{\overline{\alpha}}}_{\text{me}}\cdot{\boldsymbol{E}}_\text{act},\end{aligned}$$ where the ${\overline{\overline{\alpha}}}_{\text{ab}}$ tensors represent the polarizabilities of a given scatterer, $N$ is the number of scatterers per unit area, $c_0$ is the speed of light in vacuum and $\eta_0$ is the vacuum impedance[^5]. This is a *microscopic* description of the metasurface response which requires an appropriate definition of the coupling between adjacent scattering particles. In this work, we use the concept of susceptibilities rather than the polarizabilities to provide a *macroscopic* description of the metasurface, which allows a direct connection with material parameters such as ${\overline{\overline{\epsilon}}}_r$ and ${\overline{\overline{\mu}}}_r$. To bring about the susceptibilities, relations  can be transformed by noting that the acting fields, at the position of a scattering particle, can be defined as the average total fields minus the field scattered by the considered particle [@kuester2003av], i.e. ${\boldsymbol{E}}_\text{act}={\boldsymbol{E}}_\text{av} - {\boldsymbol{E}}_\text{scat}^\text{part}$.
The contributions of the particle may be expressed by considering the particle as a combination of electric and magnetic dipoles contained within a small disk, and the field scattered from this disk can be related to ${\boldsymbol{P}}$ and ${\boldsymbol{M}}$ by taking into account the coupling with adjacent scattering particles. Therefore, the acting fields are functions of the average fields and the polarization densities. Upon substitution of this definition of the acting fields in , the expressions of the polarization densities become \[eq:pola\_dens\_2\] $$\begin{aligned} {\boldsymbol{P}}&=\epsilon_0 {\overline{\overline{\chi}}}_\text{ee}\cdot{\boldsymbol{E}}_\text{av}+\frac{1}{c_0}{\overline{\overline{\chi}}}_\text{em}\cdot{\boldsymbol{H}}_\text{av},\\ {\boldsymbol{M}}&={\overline{\overline{\chi}}}_\text{mm}\cdot{\boldsymbol{H}}_\text{av}+\frac{1}{\eta_0}{\overline{\overline{\chi}}}_\text{me}\cdot{\boldsymbol{E}}_\text{av},\end{aligned}$$ where the average fields are defined as $$\label{eq:field_av} \Psi_{u,\text{av}} ={\boldsymbol{\hat{u}}}\cdot{\boldsymbol{\Psi}}_\text{av} =\frac{\Psi_{u,\text{t}}+(\Psi_{u,\text{i}}+\Psi_{u,\text{r}})}{2}, \; u=x,y,z,$$ where ${\boldsymbol{\Psi}}$ corresponds to ${\boldsymbol{E}}$ or ${\boldsymbol{H}}$. Susceptibility Tensor Considerations ==================================== Before delving into the metasurface synthesis, it is important to examine the susceptibility tensors in  in the light of fundamental electromagnetic considerations pertaining to reciprocity, passivity and loss. 
The *reciprocity* conditions for a bianisotropic metasurface, resulting from the Lorentz theorem [@kong1986electromagnetic], read $$\label{eq:reciprocity} {\overline{\overline{\chi}}}_{\text{ee}}^{\text{T}} = {\overline{\overline{\chi}}}_{\text{ee}}, \quad {\overline{\overline{\chi}}}_{\text{mm}}^{\text{T}}={\overline{\overline{\chi}}}_{\text{mm}}, \quad {\overline{\overline{\chi}}}_{\text{me}}^{\text{T}}=-{\overline{\overline{\chi}}}_{\text{em}},$$ where the superscript $T$ denotes the matrix transpose operation[^6]. Adding the property of losslessness, resulting from the bianisotropic Poynting theorem [@kong1986electromagnetic], restricts  to $$\label{eq:lossless} {\overline{\overline{\chi}}}_{\text{ee}}^{\text{T}} = {\overline{\overline{\chi}}}_{\text{ee}}^\ast, \quad {\overline{\overline{\chi}}}_{\text{mm}}^{\text{T}}={\overline{\overline{\chi}}}_{\text{mm}}^*, \quad {\overline{\overline{\chi}}}_{\text{me}}^{\text{T}}={\overline{\overline{\chi}}}_{\text{em}}^*,$$ which characterize a simultaneously passive, lossless and reciprocal metasurface. The conditions  and  establish relations between different susceptibility components of the constitutive tensors. Therefore, requiring the metasurface to be reciprocal or reciprocal and lossless/gainless, as often practically desirable, reduces the number of independent susceptibility components [@achouri2014general; @Achouri2015c; @Achouri2015b], and hence reduces the diversity of achievable field transformations, as will be shown next. Metasurface Synthesis {#sec:Syn} ===================== General Concepts {#sec:GenConc} ---------------- We follow here the metasurface synthesis procedure[^7] introduced in [@achouri2014general], which seems to be the most general approach reported to date.
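These reciprocity and losslessness conditions are straightforward to test numerically for candidate tangential susceptibility tensors; the helper functions below are a small sketch (the example tensors are arbitrary illustrations, not taken from the paper):

```python
import numpy as np

def is_reciprocal(chi_ee, chi_mm, chi_em, chi_me, tol=1e-12):
    """Lorentz reciprocity: chi_ee, chi_mm symmetric and chi_me^T = -chi_em."""
    return (np.allclose(chi_ee.T, chi_ee, atol=tol) and
            np.allclose(chi_mm.T, chi_mm, atol=tol) and
            np.allclose(chi_me.T, -chi_em, atol=tol))

def is_lossless(chi_ee, chi_mm, chi_em, chi_me, tol=1e-12):
    """Lossless/gainless: chi_ee, chi_mm Hermitian and chi_me^T = chi_em^*."""
    return (np.allclose(chi_ee.T, chi_ee.conj(), atol=tol) and
            np.allclose(chi_mm.T, chi_mm.conj(), atol=tol) and
            np.allclose(chi_me.T, chi_em.conj(), atol=tol))

zero = np.zeros((2, 2), dtype=complex)
chi_ee = np.diag([1.2 + 0j, 0.8 + 0j])  # real diagonal: reciprocal and lossless
chi_mm = np.diag([0.5 + 0j, 0.5 + 0j])
# An imaginary diagonal entry keeps reciprocity but violates losslessness:
chi_ee_lossy = chi_ee + np.diag([0.3j, 0.0])
```

Such checks are useful as sanity tests on synthesized susceptibilities before attempting a physical realization.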
This procedure consists in solving the GSTC equations  to determine the unknown susceptibilities in  required for the metasurface to perform the electromagnetic transformation specified in terms of the incident, reflected and transmitted fields. Note that Eqs.  and  are redundant in the system , due to the absence of impressed sources, so that Eqs.  and  are sufficient to fully describe the metasurface and synthesize it. Consequently, only the transverse (tangential to the metasurface) components of the specified fields, explicitly apparent in  and in , are involved in the synthesis, even though these fields may generally include longitudinal (normal to the metasurface) components as well. According to the uniqueness theorem, the longitudinal components of the fields are automatically determined from the transverse fields. The GSTC equations  and  form a set of (inhomogeneous) coupled partial differential equations, due to the partial derivatives of the normal components of the polarization densities, $P_{z}$ and $M_{z}$. The resolution of the corresponding inverse problem is nontrivial and requires involved numerical processing. In contrast, if $P_{z}=M_{z}=0$, the differential system reduces to a simple algebraic system of equations, most conveniently admitting closed-form solutions for the synthesized susceptibilities. For this reason, we will focus on this case in this section, while a transformation example with nonzero normal susceptibilities will be discussed in Sec. \[sec:PzMz\]. Enforcing that $P_{z}=M_{z}=0$ may a priori seem to represent an important restriction, particularly, as we shall see, in the sense that it reduces the number of degrees of freedom of the metasurface. However, this is not a major restriction since a metasurface with normal polarization currents can generally be reduced to an equivalent metasurface with purely tangential polarization currents, according to Huygens’ theorem.
This restriction mostly affects the realization of the scattering particles, that are then forbidden to exhibit normal polarizations, which ultimately limits their practical implementation[^8]. Substituting the constitutive relations  into the GSTCs  and with $M_z=P_z=0$ leads to \[eq:InvProb\] $$\begin{aligned} {\boldsymbol{\hat{z}}}\times\Delta{\boldsymbol{H}} &=j\omega\epsilon_0{\overline{\overline{\chi}}}_\text{ee}\cdot{\boldsymbol{E}}_\text{av}+jk_0{\overline{\overline{\chi}}}_\text{em}\cdot{\boldsymbol{H}}_\text{av},\label{eq:diffH}\\ \Delta{\boldsymbol{E}}\times{\boldsymbol{\hat{z}}} &=j\omega\mu_0 {\overline{\overline{\chi}}}_\text{mm}\cdot{\boldsymbol{H}}_\text{av}+jk_0{\overline{\overline{\chi}}}_\text{me}\cdot{\boldsymbol{E}}_\text{av},\label{eq:diffE}\end{aligned}$$ where $k_0=\omega/c_0$ is the free-space wavenumber and where the susceptibility tensors only contain the tangential susceptibility components. This system can also be written in matrix form $$\label{eq:InvProbMatrix} \begin{pmatrix} \Delta H_y\\ \Delta H_x\\ \Delta E_y\\ \Delta E_x \end{pmatrix}= \begin{pmatrix} \widetilde{\chi}_\text{ee}^{xx} & \widetilde{\chi}_\text{ee}^{xy} & \widetilde{\chi}_\text{em}^{xx} & \widetilde{\chi}_\text{em}^{xy}\\ \widetilde{\chi}_\text{ee}^{yx} & \widetilde{\chi}_\text{ee}^{yy} & \widetilde{\chi}_\text{em}^{yx} & \widetilde{\chi}_\text{em}^{yy}\\ \widetilde{\chi}_\text{me}^{xx} & \widetilde{\chi}_\text{me}^{xy} & \widetilde{\chi}_\text{mm}^{xx} & \widetilde{\chi}_\text{mm}^{xy}\\ \widetilde{\chi}_\text{me}^{yx} & \widetilde{\chi}_\text{me}^{yy} & \widetilde{\chi}_\text{mm}^{yx} & \widetilde{\chi}_\text{mm}^{yy} \end{pmatrix} \cdot \begin{pmatrix} E_{x,\text{av}}\\ E_{y,\text{av}}\\ H_{x,\text{av}}\\ H_{y,\text{av}} \end{pmatrix},$$ where the tilde symbol indicates normalized susceptibilities, related to the non-normalized susceptibilities in  by $$\label{eq:conv} \begin{split} & \begin{pmatrix} \chi_\text{ee}^{xx} & \chi_\text{ee}^{xy} & \chi_\text{em}^{xx} 
& \chi_\text{em}^{xy}\\ \chi_\text{ee}^{yx} & \chi_\text{ee}^{yy} & \chi_\text{em}^{yx} & \chi_\text{em}^{yy}\\ \chi_\text{me}^{xx} & \chi_\text{me}^{xy} & \chi_\text{mm}^{xx} & \chi_\text{mm}^{xy}\\ \chi_\text{me}^{yx} & \chi_\text{me}^{yy} & \chi_\text{mm}^{yx} & \chi_\text{mm}^{yy} \end{pmatrix}\\ &\qquad =\begin{pmatrix} \frac{j}{\omega\epsilon_0}\widetilde{\chi}_\text{ee}^{xx} & \frac{j}{\omega\epsilon_0}\widetilde{\chi}_\text{ee}^{xy} & \frac{j}{k_0}\widetilde{\chi}_\text{em}^{xx} & \frac{j}{k_0}\widetilde{\chi}_\text{em}^{xy}\\ -\frac{j}{\omega\epsilon_0}\widetilde{\chi}_\text{ee}^{yx} & -\frac{j}{\omega\epsilon_0}\widetilde{\chi}_\text{ee}^{yy} & -\frac{j}{k_0}\widetilde{\chi}_\text{em}^{yx} & -\frac{j}{k_0}\widetilde{\chi}_\text{em}^{yy}\\ -\frac{j}{k_0}\widetilde{\chi}_\text{me}^{xx} & -\frac{j}{k_0}\widetilde{\chi}_\text{me}^{xy} & -\frac{j}{\omega\mu_0}\widetilde{\chi}_\text{mm}^{xx} & -\frac{j}{\omega\mu_0}\widetilde{\chi}_\text{mm}^{xy}\\ \frac{j}{k_0}\widetilde{\chi}_\text{me}^{yx} & \frac{j}{k_0}\widetilde{\chi}_\text{me}^{yy} & \frac{j}{\omega\mu_0}\widetilde{\chi}_\text{mm}^{yx} & \frac{j}{\omega\mu_0}\widetilde{\chi}_\text{mm}^{yy} \end{pmatrix}. \end{split}$$ The system  contains 4 equations for 16 unknown susceptibilities. It is therefore heavily under-determined and cannot be solved directly[^9]. This leaves us with two distinct resolution possibilities. The first possibility would be to reduce the number of susceptibilities from 16 to 4 in order to obtain a fully determined (full-rank) system. Since there exist many combinations of susceptibility quadruplets[^10], different sets can be chosen, each of them naturally corresponding to different field transformations. This approach thus requires an educated selection of the susceptibility quadruplet that is most likely to enable the specified operation, within existing constraints[^11].
These considerations immediately suggest that a second possibility would be to augment the number of field transformation specifications, i.e. allow the metasurface to perform more independent transformations, which may be of great practical interest in some applications. We would thus ultimately have three possibilities to resolve : a) reducing the number of independent unknowns, b) increasing the number of transformations, and c) a combination of a) and b). As we shall see in the forthcoming sections, the number ${\cal N}$ of physically or practically achievable transformations for a metasurface with $P$ susceptibility parameters, ${\cal N}(P)$, is not trivial; specifically, the relation ${\cal N}(P)=P/4$, which may be expected from a purely mathematical viewpoint, does not always hold! Four-Parameter Transformation {#sec:singletrans} ----------------------------- We now provide an example of the approach where the number of susceptibility parameters is reduced to 4, or $P=4$, so that the system  is full-rank. We thus have to select 4 susceptibility parameters and set all the others to zero in . We consider the simplest case of a monoanisotropic (8 parameters $\widetilde{\chi}_\text{em,me}^{uv}=0$, $u,v=x,y$) axial (4 parameters $\widetilde{\chi}_\text{ee,mm}^{uv}=0$ for $u\neq v$, $u,v=x,y$) metasurface, which is thus characterized by the four parameters $\widetilde{\chi}_\text{ee}^{xx}$, $\widetilde{\chi}_\text{ee}^{yy}$, $\widetilde{\chi}_\text{mm}^{xx}$ and $\widetilde{\chi}_\text{mm}^{yy}$, so that Eq.
reduces to the diagonal system $$\label{eq:birefsystem} \begin{pmatrix} \Delta H_y\\ \Delta H_x\\ \Delta E_y\\ \Delta E_x \end{pmatrix}= \begin{pmatrix} \widetilde{\chi}_\text{ee}^{xx} & 0 & 0 & 0 \\ 0 & \widetilde{\chi}_\text{ee}^{yy} & 0 & 0 \\ 0 & 0 & \widetilde{\chi}_\text{mm}^{xx} & 0 \\ 0 & 0 & 0 & \widetilde{\chi}_\text{mm}^{yy} \end{pmatrix} \cdot \begin{pmatrix} E_{x,\text{av}}\\ E_{y,\text{av}}\\ H_{x,\text{av}}\\ H_{y,\text{av}} \end{pmatrix}.$$ This metasurface is a *birefringent* structure [@saleh2007fundamentals], with decoupled $x$-polarized and $y$-polarized susceptibility pairs \[eq:chi\_diag\] $$\chi_{\text{ee}}^{xx}=\frac{j\Delta H_{y}}{\omega\epsilon_0 E_{x,\text{av}}}, \quad\chi_{\text{mm}}^{yy}=\frac{j\Delta E_{x}}{\omega\mu_0 H_{y,\text{av}}}\label{eq:chi_diag_Exx_Myy}$$ and $$\chi_{\text{ee}}^{yy}=\frac{-j\Delta H_{x}}{\omega\epsilon_0 E_{y,\text{av}}}, \quad\chi_{\text{mm}}^{xx}=\frac{-j\Delta E_{y}}{\omega\mu_0 H_{x,\text{av}}},\label{eq:chi_diag_Eyy_Mxx}$$ respectively[^12]. In these relations, according to  and , $\Delta H_y=H_{y,\text{t}}-(H_{y,\text{i}}+H_{y,\text{r}})$, $E_{x,\text{av}}=(E_{x,\text{t}}+E_{x,\text{i}}+E_{x,\text{r}})/2$, and so on. By synthesis, the metasurface with the susceptibilities  will exactly transform the specified incident field into the specified reflected and transmitted fields, in an arbitrary fashion, except for the constraint of reciprocity since the susceptibility tensor in  inherently satisfies . It should be noted that the example of , with 4 distinct susceptibility parameters, is a very particular case of a four-parameter transformation since the components in  and  are decoupled from each other, which is the origin of birefringence. Now, birefringence may be considered as a *pair* of distinct and independent transformations (one for $x$-polarization and one for $y$-polarization), i.e. ${\cal N}(4)=2>4/4$.
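For normal incidence in vacuum, the extraction above is straightforward to script. The following Python sketch (helper names are ours) uses the plane-wave relations $(H_x,H_y)=(\mp E_y,\pm E_x)/\eta_0$ for forward/backward waves to evaluate the diagonal relations from transverse fields specified at $z=0$:

```python
import numpy as np

ETA0 = 376.730313668  # free-space wave impedance (ohm)

def birefringent_chi(k0, Ei, Er, Et):
    """Diagonal susceptibilities from the transverse fields specified at
    z = 0 (Ei, Er, Et given as (Ex, Ey) pairs; normal incidence, vacuum
    on both sides)."""
    Ei, Er, Et = (np.asarray(v, dtype=complex) for v in (Ei, Er, Et))
    w_eps0, w_mu0 = k0 / ETA0, k0 * ETA0  # omega*eps0 and omega*mu0
    # tangential H = (+/-) z_hat x E / eta0 for forward/backward waves
    Hi = np.array([-Ei[1], Ei[0]]) / ETA0
    Hr = np.array([Er[1], -Er[0]]) / ETA0
    Ht = np.array([-Et[1], Et[0]]) / ETA0
    dH, dE = Ht - Hi - Hr, Et - Ei - Er
    E_av, H_av = (Ei + Er + Et) / 2, (Hi + Hr + Ht) / 2
    chi_ee_xx = 1j * dH[1] / (w_eps0 * E_av[0])
    chi_ee_yy = -1j * dH[0] / (w_eps0 * E_av[1])
    chi_mm_xx = -1j * dE[1] / (w_mu0 * H_av[0])
    chi_mm_yy = 1j * dE[0] / (w_mu0 * H_av[1])
    return chi_ee_xx, chi_ee_yy, chi_mm_xx, chi_mm_yy
```

Applied to the polarization-rotation specification considered next (incident polarization at $\pi/8$, transmission at $11\pi/24$, no reflection), this returns $\chi_\text{ee}^{xx}=\chi_\text{mm}^{yy}\approx-1.505j/k_0$, matching the value quoted below.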
Thus, the specification of 4 susceptibility parameters may lead to more than 1 transformation, which, by extension, already suggests that $P$ susceptibilities may lead to more than $P/4$ transformations, as announced in Sec. \[sec:GenConc\] and will be further discussed in Sec. \[sec:MultiTrans\]. So far, the fields have not been explicitly specified in the metasurface described by . Since the metasurface can perform arbitrary transformations under the constraint of reciprocity, it may for instance be used for polarization rotation, which will turn out to be a most instructive example here. Consider the reflectionless metasurface, depicted in Fig. \[fig:Faraday\_rot1\], which transforms the polarization of a normally incident plane wave. The fields corresponding to this transformation are \[eq:polrotinc\] $$\begin{aligned} {\boldsymbol{E}}_{\text{i}}(x,y)&={\boldsymbol{\hat{x}}}\cos(\pi/8)+{\boldsymbol{\hat{y}}}\sin(\pi/8),\\ {\boldsymbol{H}}_{\text{i}}(x,y)&=\frac{1}{\eta_0}\left[-{\boldsymbol{\hat{x}}}\sin(\pi/8)+{\boldsymbol{\hat{y}}}\cos(\pi/8)\right],\end{aligned}$$ $$\begin{aligned} {\boldsymbol{E}}_{\text{r}}(x,y)&=0,\\ {\boldsymbol{H}}_{\text{r}}(x,y)&=0,\end{aligned}$$ and \[eq:polrottrans\] $$\begin{aligned} {\boldsymbol{E}}_{\text{t}}(x,y)&={\boldsymbol{\hat{x}}}\cos(11\pi/24)+{\boldsymbol{\hat{y}}}\sin(11\pi/24),\\ {\boldsymbol{H}}_{\text{t}}(x,y)&=\frac{1}{\eta_0}\left[-{\boldsymbol{\hat{x}}}\sin(11\pi/24)+{\boldsymbol{\hat{y}}}\cos(11\pi/24)\right].\end{aligned}$$ Inserting these fields into  and , and substituting the result in  yields the susceptibilities \[eq:chi\_reciprocal\_rot\] $$\begin{aligned} \chi_{\text{ee}}^{xx}&=\chi_{\text{mm}}^{yy}=-\frac{1.5048}{k_0}j,\label{eq:chi_reciprocal_rot_a}\\ \chi_{\text{ee}}^{yy}&=\chi_{\text{mm}}^{xx}=\frac{0.88603}{k_0}j.\label{eq:chi_reciprocal_rot_b}\end{aligned}$$ ![Reflectionless polarization-rotating metasurface.
The metasurface rotates the polarization of a linearly polarized normally incident plane wave from the angle $\pi/8$ to the angle $11\pi/24$ with respect to the $x$-axis (rotation of $\pi/3$). The metasurface is surrounded on both sides by vacuum, i.e. $\eta_1 = \eta_2 = \eta_0$.[]{data-label="fig:Faraday_rot1"}](Faraday1){width="1\linewidth"} Note that in this example[^13], the aforementioned double transformation reduces to a single transformation, ${\cal N}(4)=1=4/4$, because the specified fields possess both $x$ and $y$ polarizations. The susceptibilities do not depend on position since the specified transformation, being purely normal, only rotates the polarization angle and does not affect the direction of wave propagation. The negative and positive imaginary natures of $\chi_{\text{ee}}^{xx}=\chi_{\text{mm}}^{yy}$ and $\chi_{\text{ee}}^{yy}=\chi_{\text{mm}}^{xx}$ in  correspond to absorption and gain, respectively. These features may be understood by noting, with the help of Fig. \[fig:Faraday\_rot1\], that polarization rotation is accomplished here by attenuation and amplification of $(E_{x,\text{i}},H_{y,\text{i}})$ and $(E_{y,\text{i}},H_{x,\text{i}})$, respectively. Moreover, this metasurface can rotate the polarization *only* by the angle $\pi/3$ when the incident wave is polarized at a $\pi/8$ angle[^14]. This example certainly represents an awkward approach to rotating the field polarization! A more reasonable approach is to consider a *gyrotropic* metasurface, where the only nonzero susceptibilities are $\chi_{\text{ee}}^{xy}$, $\chi_{\text{ee}}^{yx}$, $\chi_{\text{mm}}^{xy}$ and $\chi_{\text{mm}}^{yx}$. This corresponds to a different quadruplet of tensor parameters than in , which illustrates the aforementioned multiplicity of possible parameter set selection.
With these susceptibilities, the system  yields the following relations: \[eq:chi\_off\_diag\] $$\begin{aligned} \chi_{\text{ee}}^{xy}&=\frac{j\Delta H_{y}}{\omega\epsilon_0 E_{y,\text{av}}},\\ \chi_{\text{ee}}^{yx}&=\frac{-j\Delta H_{x}}{\omega\epsilon_0 E_{x,\text{av}}},\\ \chi_{\text{mm}}^{xy}&=\frac{-j\Delta E_{y}}{\omega\mu_0 H_{y,\text{av}}},\\ \chi_{\text{mm}}^{yx}&=\frac{j\Delta E_{x}}{\omega\mu_0 H_{x,\text{av}}},\end{aligned}$$ which, upon substitution of the fields in  to , become \[eq:pol\_rot\_imag\_chi\] $$\begin{aligned} \chi_{\text{ee}}^{xy}&=\chi_{\text{mm}}^{xy}=-\frac{1.1547}{k_0}j,\\ \chi_{\text{ee}}^{yx}&=\chi_{\text{mm}}^{yx}=\frac{1.1547}{k_0}j.\end{aligned}$$ Contrary to the susceptibilities in , those in  perform the specified $\pi/3$ polarization rotation *irrespective* of the initial polarization of the incident wave, due to the gyrotropic nature of the metasurface. These susceptibilities violate the reciprocity conditions in , and the metasurface is thus *nonreciprocal*, which is a necessary condition for polarization rotation with this choice of susceptibilities. Thus, the metasurface is a Faraday rotation surface, whose direction of polarization rotation is independent of the direction of wave propagation [@Kodera_APL_07_2011; @Sounas_APL_01_2011]. However, contrary to conventional Faraday rotators [@kong1986electromagnetic], this metasurface is also reflectionless due to the presence of both electric and magnetic gyrotropic susceptibility components (Huygens matching). The positive and negative imaginary susceptibilities indicate that the metasurface is simultaneously active and lossy, respectively. It is this combination of gain and loss that allows perfect rotation while keeping the overall design globally lossless. This design is naturally appropriate if Faraday rotation is required. However, it is not optimal in applications not requiring non-reciprocity, i.e.
reciprocal gyrotropy, where the required loss and gain would clearly represent a drawback. Reciprocal gyrotropy may be achieved using bianisotropic chirality, which involves the parameter set $\chi_{\text{em}}^{xx}, \chi_{\text{em}}^{yy}, \chi_{\text{me}}^{xx}$ and $\chi_{\text{me}}^{yy}$. Following the same synthesis procedure as before, we find \[eq:pol\_rot\_chiral\] $$\begin{aligned} \chi_{\text{em}}^{xx}&=\chi_{\text{em}}^{yy}=-\frac{2}{\sqrt{3}k_0}j,\\ \chi_{\text{me}}^{xx}&=\chi_{\text{me}}^{yy}=\frac{2}{\sqrt{3}k_0}j.\end{aligned}$$ The corresponding metasurface is readily verified to be reciprocal, passive and lossless, since the susceptibilities  satisfy the conditions . So, if the purpose of the metasurface is to simply perform polarization rotation in a given direction, without specification for the opposite direction, this design is the most appropriate of the three discussed, as it is passive, lossless and works for all incident polarizations. Note that the metasurfaces  and  both correspond to ${\cal N}(4)=1=4/4$. More-Than-Four-Parameter Transformation {#sec:MultiTrans} --------------------------------------- In the previous section, we have seen how the system  can be solved by reducing the number of susceptibilities to $P=4$ parameters so as to match the number of GSTC equations, and seen some of the resulting single-transformation (${\cal N}=1$, e.g. monoisotropic structure) or double-transformation (${\cal N}=2$, e.g. birefringence) metasurface possibilities. However, as mentioned in Sec. \[sec:GenConc\], the general system of equations , given its 16 degrees of freedom (16 susceptibility components), corresponds to a metasurface with the potential capability to perform *more transformations* than a metasurface with 4 parameters, or, more generally, fewer than 16 parameters, ${\cal N}(16)>{\cal N}(P<16)$.
In what follows, we will see how the system  can be solved for several *independent* transformations, which includes the possibility of differently processing waves incident from either side. To accommodate the additional degrees of freedom, a total of 4 wave transformations are considered, instead of only one as done in Sec. \[sec:singletrans\], so that  becomes a full-rank system. The equations corresponding to the system  may then be written in the compact form $$\begin{split} \label{eq:fullsystem} & \begin{pmatrix} \Delta H_{y1} & \Delta H_{y2} & \Delta H_{y3} & \Delta H_{y4} \\ \Delta H_{x1} & \Delta H_{x2} & \Delta H_{x3} & \Delta H_{x4} \\ \Delta E_{y1} & \Delta E_{y2} & \Delta E_{y3} & \Delta E_{y4} \\ \Delta E_{x1} & \Delta E_{x2} & \Delta E_{x3} & \Delta E_{x4} \end{pmatrix}=\\ &\qquad\qquad \begin{pmatrix} \widetilde{\chi}_\text{ee}^{xx} & \widetilde{\chi}_\text{ee}^{xy} & \widetilde{\chi}_\text{em}^{xx} & \widetilde{\chi}_\text{em}^{xy}\\ \widetilde{\chi}_\text{ee}^{yx} & \widetilde{\chi}_\text{ee}^{yy} & \widetilde{\chi}_\text{em}^{yx} & \widetilde{\chi}_\text{em}^{yy}\\ \widetilde{\chi}_\text{me}^{xx} & \widetilde{\chi}_\text{me}^{xy} & \widetilde{\chi}_\text{mm}^{xx} & \widetilde{\chi}_\text{mm}^{xy}\\ \widetilde{\chi}_\text{me}^{yx} & \widetilde{\chi}_\text{me}^{yy} & \widetilde{\chi}_\text{mm}^{yx} & \widetilde{\chi}_\text{mm}^{yy} \end{pmatrix}\\ &\qquad\qquad\qquad \cdot \begin{pmatrix} E_{x1,\text{av}} & E_{x2,\text{av}} & E_{x3,\text{av}} & E_{x4,\text{av}} \\ E_{y1,\text{av}} & E_{y2,\text{av}} & E_{y3,\text{av}} & E_{y4,\text{av}} \\ H_{x1,\text{av}} & H_{x2,\text{av}} & H_{x3,\text{av}} & H_{x4,\text{av}} \\ H_{y1,\text{av}} & H_{y2,\text{av}} & H_{y3,\text{av}} & H_{y4,\text{av}} \end{pmatrix}, \end{split}$$ where the subscripts 1, 2, 3 and 4 indicate the electromagnetic fields corresponding to 4 distinct and *independent* sets of waves[^15].
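Numerically, once the four field sets are specified, the 16 normalized susceptibilities follow from a single $4\times4$ matrix inversion. A minimal sketch (function name ours), including an explicit check that the specified transformations are indeed independent:

```python
import numpy as np

def solve_susceptibilities(Delta, Av):
    """Solve Delta = chi_tilde . Av for the 4x4 normalized susceptibility
    matrix. Columns of Delta hold (dHy, dHx, dEy, dEx) and columns of Av
    hold (Ex, Ey, Hx, Hy)_av for each of the 4 specified transformations."""
    Delta = np.asarray(Delta, dtype=complex)
    Av = np.asarray(Av, dtype=complex)
    # the 4 transformations must be linearly independent for Av to invert
    if np.linalg.cond(Av) > 1e8:
        raise ValueError("specified transformations are not independent")
    return Delta @ np.linalg.inv(Av)
```

A round-trip check (build Delta from a known susceptibility matrix, then recover it) confirms the solve.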
The susceptibilities can be obtained by matrix inversion conjointly with the normalization . The resulting susceptibilities will, in general, all be different from each other. This means that the corresponding metasurface is both active/lossy and nonreciprocal. Consider for example a metasurface with $P=8$ parameters. In such a case, the system  is under-determined, since it features 4 equations in 8 unknowns. This suggests the possibility of specifying more than one transformation, ${\cal N}>1$. Let us thus consider for instance a monoanisotropic (8-parameter) metasurface, and let us see whether such a metasurface can indeed perform 2 transformations. The corresponding system for 2 transformations reads $$\label{eq:T2transfo} \begin{split} & \begin{pmatrix} \Delta H_{y1} & \Delta H_{y2} \\ \Delta H_{x1} & \Delta H_{x2} \\ \Delta E_{y1} & \Delta E_{y2} \\ \Delta E_{x1} & \Delta E_{x2} \end{pmatrix}=\\ &\quad \begin{pmatrix} \widetilde{\chi}_\text{ee}^{xx} & \widetilde{\chi}_\text{ee}^{xy} & 0 & 0\\ \widetilde{\chi}_\text{ee}^{yx} & \widetilde{\chi}_\text{ee}^{yy} & 0 & 0\\ 0 & 0 & \widetilde{\chi}_\text{mm}^{xx} & \widetilde{\chi}_\text{mm}^{xy}\\ 0 & 0 & \widetilde{\chi}_\text{mm}^{yx} & \widetilde{\chi}_\text{mm}^{yy} \end{pmatrix}\cdot \begin{pmatrix} E_{x1,\text{av}} & E_{x2,\text{av}} \\ E_{y1,\text{av}} & E_{y2,\text{av}} \\ H_{x1,\text{av}} & H_{x2,\text{av}} \\ H_{y1,\text{av}} & H_{y2,\text{av}} \end{pmatrix}. \end{split}$$ This system , being full-rank, automatically admits a solution for the 8 susceptibilities, i.e. ${\cal N}=2$. The only question is whether this solution complies with practical design constraints. For instance, the electric and magnetic susceptibility submatrices are non-diagonal, and may therefore violate the reciprocity condition . If nonreciprocity is undesirable or unrealizable in a practical situation, then one would have to try another set of 8 parameters.
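Since the electric and magnetic blocks are decoupled in this system, the two-transformation solve splits into two independent $2\times2$ inversions; a sketch (our own helper, same row/column ordering conventions as the system above):

```python
import numpy as np

def solve_monoanisotropic(dH, dE, E_av, H_av):
    """Two-transformation solve of the decoupled 8-parameter system.
    dH[:, k] = (dHy, dHx), dE[:, k] = (dEy, dEx), E_av[:, k] = (Ex, Ey)_av
    and H_av[:, k] = (Hx, Hy)_av for transformation k = 1, 2.
    Returns the normalized 2x2 blocks (chi_ee, chi_mm)."""
    chi_ee = np.asarray(dH, complex) @ np.linalg.inv(np.asarray(E_av, complex))
    chi_mm = np.asarray(dE, complex) @ np.linalg.inv(np.asarray(H_av, complex))
    return chi_ee, chi_mm
```

As with the full 4-transformation system, the two specified field sets must be independent for the $2\times2$ average-field matrices to be invertible.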
If this 8-parameter metasurface performs only 2 transformations, then one may wonder what the difference is with the 4-parameter birefringent metasurface in , which can also provide 2 transformations with just 4 parameters. The difference is that the 2-transformation property of the metasurface in  is restricted to the case where the fields of the two transformations are orthogonally polarized[^16], whereas the 2-transformation property of the metasurface in  is completely general. As an illustration of the latter metasurface, consider the two transformations depicted in Figs. \[fig:doubletransfo\]. The first transformation, shown in Fig. \[fig:doubletransfo1\], consists in reflecting at $45^\circ$ a normally incident plane wave. The second transformation, shown in Fig. \[fig:doubletransfo2\], consists in fully absorbing an incident wave impinging on the metasurface under $45^\circ$. In both cases, the transmitted field is specified to be zero. The transverse components of the electric fields for the two transformations are, at $z=0$, given by \[eq:Especdbltrs\] $$\label{eq:Especdbltrs_1} {\boldsymbol{E}}_\text{i,1} = \frac{\sqrt{2}}{2}({\boldsymbol{\hat{x}}}+{\boldsymbol{\hat{y}}}),\; {\boldsymbol{E}}_\text{r,1} = \frac{\sqrt{2}}{2}(-\cos{\theta_\text{r}}{\boldsymbol{\hat{x}}}+{\boldsymbol{\hat{y}}})e^{-jk_xx},$$ $$\label{eq:Especdbltrs_2} {\boldsymbol{E}}_\text{i,2} = \frac{\sqrt{2}}{2}(\cos{\theta_\text{i}}{\boldsymbol{\hat{x}}}+{\boldsymbol{\hat{y}}})e^{-jk_xx},$$ respectively. The synthesis is then performed by inserting the electric fields , and the corresponding magnetic fields, into . The susceptibilities are then straightforwardly obtained by matrix inversion in . For the sake of conciseness, we do not give them here, but we point out that they include nonreciprocity, loss and gain, and complex spatial variations.
This double-transformation response is verified by full-wave simulation and the results are plotted in Figs. \[fig:COMSOLdlb\]. The two simulations in this figure have been realized in the commercial FEM software COMSOL, where the metasurface is implemented as a thin material slab of thickness $d = \lambda_0/100$[^17]. The simulation corresponding to the transformation of Fig. \[fig:doubletransfo1\] is shown in Fig. \[fig:COMSOLdlb1\], while the simulation corresponding to the transformation of Fig. \[fig:doubletransfo2\] is shown in Fig. \[fig:COMSOLdlb2\]. The simulated results are in agreement with the specification \[Eq. \], except for some scattering due to the non-zero thickness of the full-wave slab approximation. \ The example just presented, where both the transformations 1 and 2 include all the components of the fields, corresponds to ${\cal N}(8)=2=8/4$, i.e. ${\cal N }(P)=P/4$. However, in the same manner as the birefringent metasurface of , featuring ${\cal N}(4)=2>4/4$, i.e. specifically ${\cal N }(P)=P/2$, the metasurface in  may lead to ${\cal N }(P)>P/4$. This depends essentially on whether the specified transformations are composed of fields that are either: a) only $x$- *or* $y$-polarized, or b) both $x$- *and* $y$-polarized. The two transformations given by the fields  are both $x$- and $y$-polarized, which thus limits the number of transformations to ${\cal N }(P)=P/4$. If the transformation given by  were specified such that $E_\text{iy,2}=0$ (i.e. no polarization along $y$), then this would release degrees of freedom, and hence allow a triple transformation, i.e. ${\cal N}(8)=3>8/4$. In addition, if the first transformation, given by , also had transverse components of the electric field polarized only along $x$ or $y$, then we could achieve ${\cal N}(8)=4>8/4$ transformations. These considerations illustrate the necessity to perform educated selections in the metasurface synthesis procedure, as announced in Sec. .
Metasurface with Nonzero Normal Polarizations {#sec:PzMz} --------------------------------------------- So far, we have discarded the possibility of normal polarizations by enforcing $P_z = M_z = 0$ in . This is not only synthesis-wise convenient, since it suppresses the spatial derivatives in , but also typically justified by the fact that any electromagnetic field can be produced by purely tangential surface currents/polarizations according to the Huygens theorem. It was accordingly claimed in [@8072896] that these normal polarizations, and corresponding susceptibility components, do not bring about any additional degrees of freedom and can thus be completely ignored. It turns out that this claim is generally not true: in fact $P_z$ and $M_z$ provide extra degrees of freedom that allow a metasurface to perform a larger number of distinct operations for *different incident field configurations* and *at different times*. The Huygens theorem *exclusively* applies to a *single* (arbitrarily complex) combination of incident, reflected and transmitted waves. This means that any metasurface, possibly involving normal polarizations, that performs the specified operation for such a single combination of fields can be reduced to an equivalent metasurface with purely transverse polarizations. However, the Huygens theorem does not apply to the case of waves impinging on the metasurface at *different times*. Indeed, it is in this case impossible to superimpose the different incident waves to form a total incident field since they are not simultaneously illuminating the metasurface. Consequently, a purely tangential description of the metasurface is incomplete, and normal polarizations thus become necessary to perform the synthesis. In fact, the presence of these normal susceptibility components greatly increases the number of degrees of freedom since the susceptibility tensors are now $3\times 3$ matrices, instead of $2\times 2$ as in .
This means that, for the 4 relevant GSTC equations, we now have access to 36 unknown susceptibilities, instead of only 16, which increases the potential number of electromagnetic transformations from 4 to 9, provided that these transformations include fields that are independent of each other. The synthesis of metasurfaces with nonzero normal polarization densities may be performed following procedures similar to those already discussed. As before, one needs to balance the number of unknown susceptibilities with the number of available equations provided by the GSTCs. Depending on the specifications, this may become difficult since many transformations may be required to obtain a full-rank system. Additionally, if the specified transformations involve changing the direction of wave propagation, then the system  becomes a coupled system of partial differential equations in terms of the susceptibilities since the latter would now depend on position. This generally prevents the derivation of closed-form solutions of the susceptibilities, which should rather be obtained numerically. However, we will now provide an example of a synthesis problem where the susceptibilities are obtainable in closed form. More specifically, we discuss the synthesis and analysis of a reciprocal metasurface with controllable angle-dependent scattering [@Gordon2009; @DiFalco2011; @Radi2015]. To synthesize this metasurface, we consider the three *independent*[^18] transformations depicted in Fig. \[Fig:schem\]. ![Multiple scattering from a uniform bianisotropic reflectionless metasurface.[]{data-label="Fig:schem"}](schem2){width="0.55\columnwidth"} Specifying these three transformations allows one to achieve a relatively smooth control of the scattering response of the metasurface for any non-specified incidence angles. For simplicity, we specify that the metasurface does not change the direction of wave propagation, which implies that it is uniform, i.e.
its susceptibilities are not functions of position. Moreover, we specify that it is also reflectionless and only affects the transmission phase of p-polarized incident waves as a function of their incidence angle. To design this metasurface, we start from the full set of 36 susceptibility components. However, since all the waves interacting with the metasurface are p-polarized, most of these susceptibilities will not be excited by these fields and, thus, will not play a role in the electromagnetic transformations. Accordingly, the only susceptibilities that are excited by the fields are \[eq:susc\] $${\overline{\overline{\chi}}}_\text{ee}= \begin{pmatrix} {\chi}_\text{ee}^{xx} & 0 & {\chi}_\text{ee}^{xz} \\ 0 & 0 & 0 \\ {\chi}_\text{ee}^{zx} & 0 & {\chi}_\text{ee}^{zz} \end{pmatrix}, \quad {\overline{\overline{\chi}}}_\text{em}= \begin{pmatrix} 0 & {\chi}_\text{em}^{xy} & 0\\ 0 & 0 & 0\\ 0 & {\chi}_\text{em}^{zy} & 0\\ \end{pmatrix},$$ $${\overline{\overline{\chi}}}_\text{me}= \begin{pmatrix} 0 & 0 & 0 \\ {\chi}_\text{me}^{yx} & 0 & {\chi}_\text{me}^{yz} \\ 0 & 0 & 0 \end{pmatrix}, \quad {\overline{\overline{\chi}}}_\text{mm}= \begin{pmatrix} 0 & 0 & 0\\ 0 & {\chi}_\text{mm}^{yy} & 0\\ 0 & 0 & 0 \end{pmatrix},$$ where the susceptibilities not excited have been set to zero for simplicity. In order to satisfy the aforementioned specification of reciprocity, the conditions  must be satisfied. This implies that ${\chi}_\text{ee}^{xz} = {\chi}_\text{ee}^{zx}$, ${\chi}_\text{em}^{xy} = -{\chi}_\text{me}^{yx}$ and ${\chi}_\text{em}^{zy} = -{\chi}_\text{me}^{yz}$. As a consequence, the total number of independent susceptibility components in  reduces from 9 to 6.
Upon insertion of , the GSTCs in  and  become \[eq:sys\] $$\Delta H_y = -j\omega\epsilon_0(\chi_\text{ee}^{xx}E_{x,\text{av}} + \chi_\text{ee}^{xz}E_{z,\text{av}}) - jk_0\chi_\text{em}^{xy}H_{y,\text{av}},$$ $$\begin{split} \Delta E_x =& -j\omega\mu_0\chi_\text{mm}^{yy}H_{y,\text{av}}+jk_0(\chi_\text{em}^{xy}E_{x,\text{av}} + \chi_\text{em}^{zy}E_{z,\text{av}})\\ &-\chi_\text{ee}^{xz} \partial_x E_{x,\text{av}} -\chi_\text{ee}^{zz} \partial_x E_{z,\text{av}} - \eta_0 \chi_\text{em}^{zy} \partial_x H_{y,\text{av}}, \end{split}$$ where the spatial derivatives only apply to the fields and not to the susceptibilities since the latter are not functions of space due to the uniformity of the metasurface. The system  contains 2 equations in 6 unknown susceptibilities and is thus under-determined. In order to solve it, we apply the multiple transformation concept discussed in Sec. \[sec:MultiTrans\], which consists in specifying three independent sets of incident, reflected and transmitted waves. These fields can simply be defined by their respective reflection ($R$)[^19] and transmission ($T$) coefficients as well as their incidence angle ($\theta_\text{i}$). In our case, the metasurface exhibits a transmission phase shift, $\phi$, that is a function of the incidence angle, i.e. $T = e^{j\phi(\theta_\text{i})}$. Let us consider, for instance, that the 3 incident plane waves impinge on the metasurface at $\theta_{\text{i},1}=-45^\circ$, $\theta_{\text{i},2}=0^\circ$ and $\theta_{\text{i},3}=+45^\circ$, and are transmitted at $\theta_\text{t}=\theta_\text{i}$ with transmission coefficients $T_1 = e^{-j\alpha}$, $T_2 = 1$ and $T_3 = e^{j\alpha}$, where $\alpha$ is a given phase shift.
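These specifications can be cross-checked by solving the two relations numerically. In the sketch below (our own formulation), each relation is written for the three p-polarized plane-wave transformations, after dividing by the common factor $(T+1)/2$ and, for the first relation, also by $\eta_0$; the unknown $\chi_\text{em}^{zy}$ cancels identically for these phase-only, reflectionless specifications and is therefore omitted from the unknown vector:

```python
import numpy as np

def solve_normal_chi(alpha, k0=1.0):
    """Least-squares solution of the two GSTC relations for the three
    specified transformations (theta_i = -45, 0, +45 deg; T = exp(-j*a),
    1, exp(+j*a); R = 0). Unknowns, in order: chi_ee_xx, chi_ee_xz,
    chi_ee_zz, chi_em_xy, chi_mm_yy."""
    rows, rhs = [], []
    for th, T in [(-np.pi/4, np.exp(-1j*alpha)), (0.0, 1.0),
                  (np.pi/4, np.exp(1j*alpha))]:
        s, c = np.sin(th), np.cos(th)
        # first relation, normalized by eta0 and by (T + 1)/2
        rows.append([-1j*k0*c, 1j*k0*s, 0.0, -1j*k0, 0.0])
        rhs.append(2*(T - 1)/(T + 1))
        # second relation, normalized by (T + 1)/2; d/dx -> -j*k0*s
        rows.append([0.0, 1j*k0*s*c, -1j*k0*s**2, 1j*k0*c, -1j*k0])
        rhs.append(2*(T - 1)*c/(T + 1))
    x, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return x
```

For $\alpha=90^\circ$, this returns $\chi_\text{ee}^{xz}=2\sqrt{2}/k_0$ with all other retained components numerically zero, consistent with the closed-form result that follows.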
Solving relations  with these specifications yields the following nonzero susceptibilities: $$\label{eq:suscAPS} \chi_\text{ee}^{xz} = \chi_\text{ee}^{zx} = \frac{2\sqrt{2}}{k_0}\tan{\left(\frac{\alpha}{2}\right)}.$$ It can easily be verified that these susceptibilities satisfy the reciprocity, passivity and losslessness conditions . Since the susceptibilities  correspond to the only solution of the system  for our specifications, and since these susceptibilities correspond to the excitation of normal polarization densities, the normal polarizations are indeed useful and provide additional degrees of freedom. This proves the claim made in the first paragraph of this section that normal polarizations lead to metasurface functionalities that are unattainable without them. Now that the metasurface has been synthesized, we analyze its scattering response for all (including non-specified) incidence angles. For this purpose, we substitute the susceptibilities  into  and consider an incident wave, impinging on the metasurface at an angle $\theta_\text{i}$, being reflected and transmitted with unknown scattering parameters. The system  can then be solved to obtain these unknown scattering parameters for any value of $\theta_\text{i}$. In our case, the analysis is simple because the metasurface is uniform, which means that the reflected and transmitted waves obey Snell's laws. The resulting angle-dependent transmission coefficient is $$\label{eq:trans2} T(\theta_\text{i}) = -1 + \frac{2}{1-j\sqrt{2}\sin(\theta_\text{i})\tan{\left(\frac{\alpha}{2}\right)}},$$ while the reflection coefficient is $R(\theta_\text{i}) = 0$. In order to illustrate the angular behavior of the transmission coefficient in , it is plotted in Figs. \[Fig:example2\] for a specified phase shift of $\alpha = 90^\circ$. As expected, the transmission amplitude remains unity for all incidence angles while the transmission phase is asymmetric around broadside and covers about a $220^\circ$ phase range.
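The angular response is easy to explore numerically; a short sketch (plain NumPy, variable names ours):

```python
import numpy as np

def transmission(theta_i, alpha):
    """Angle-dependent transmission coefficient of the synthesized
    metasurface (theta_i in radians, alpha the specified phase shift)."""
    return -1 + 2 / (1 - 1j * np.sqrt(2) * np.sin(theta_i) * np.tan(alpha / 2))

theta = np.linspace(-np.pi / 2, np.pi / 2, 1801)
t = transmission(theta, np.pi / 2)  # alpha = 90 deg
# |T| = 1 at every angle; the phase spans 4*arctan(sqrt(2)) ~ 219 deg
```

The exact phase span, $4\arctan\sqrt{2}\approx 218.9^\circ$, is consistent with the roughly $220^\circ$ range quoted above, and the specified points are recovered, e.g. $T(-45^\circ)=e^{-j\,90^\circ}=-j$.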
\ Relations with Scattering Parameters {#sec:Scat} ------------------------------------ We have seen how a metasurface can be synthesized so as to obtain its susceptibilities in terms of specified fields. We shall now investigate how the synthesized susceptibilities may be related to the shape of the scattering particles that will constitute the metasurfaces to be realized. Here, we will only present the mathematical expressions that relate the susceptibilities to the scattering particles. The reader is referred to [@Achouri2015c; @nl5001746; @Pfeiffer2013a; @6648706; @PhysRevApplied.2.044011; @Wong2014360; @6905746; @6891256; @6477089; @452013] for more information on the practical realization of these structures. The conventional method to relate the scattering particle shape to equivalent susceptibilities (or material parameters) is based on homogenization techniques. In the case of metamaterials, these techniques may be used to relate homogenized material parameters to the scattering parameters of the scatterers. From a general perspective, a single isolated scatterer is not sufficient to describe a homogenized medium. Instead, we shall consider a periodic array of scatterers, which takes into account the interactions and coupling between adjacent scatterers, hence leading to a more accurate description of a “medium” compared to a single scatterer. The susceptibilities, which describe the macroscopic responses of a medium, are thus naturally well-suited to describe the homogenized material parameters of metasurfaces. It follows that the equivalent susceptibilities of a scattering particle may be related to the corresponding scattering parameters, conventionally obtained via full-wave simulations, of a periodic array made of an infinite repetition of that scattering particle [@GrbicLightBending; @asadchy2011simulation; @asadchy2014determining; @Pfeiffer2013a].
Because the periodic array of scatterers is uniform with subwavelength periodicity, the scattered fields obey Snell's laws. More specifically, if the incident wave propagates normally with respect to the array, then the reflected and transmitted waves also propagate normally. In most cases, the periodic array of scattering particles is excited with normally propagating waves. This allows one to obtain the 16 *tangential* susceptibility components in . However, it does not provide any information about the normal susceptibility components of the scattering particles. This is because, in the case of normally propagating waves, the normal susceptibilities do not induce any discontinuity of the fields, as explained in Sec. \[sec:GenConc\]. Nevertheless, this method allows one to match the tangential susceptibilities of the scattering particle to those found from the metasurface synthesis procedure, which precisely yields the ideal tangential susceptibility components. It is clear that the scattering particles may, in addition to their tangential susceptibilities, possess nonzero normal susceptibility components. In that case, the scattering response of the metasurface, when illuminated with obliquely propagating waves, will differ from the expected ideal behavior prescribed in the synthesis. Consequently, the homogenization technique only serves as an initial guess to describe the scattering behavior of the metasurface[^20]. We will now derive the explicit expressions relating the tangential susceptibilities to the scattering parameters in the general case of a fully bianisotropic uniform metasurface surrounded by different media and excited by normally incident plane waves.
Let us first write the system  in the following compact form: $$\label{eq:reducedSys} {\overline{\overline{\Delta}}} = \widetilde{{\overline{\overline{\chi}}}}\cdot {\overline{\overline{A}}}_v,$$ where the matrices ${\overline{\overline{\Delta}}}$, $\widetilde{{\overline{\overline{\chi}}}}$ and ${\overline{\overline{A}}}_v$ correspond to the field differences, the normalized susceptibilities and the field averages, respectively. In order to obtain the 16 tangential susceptibility components in , we will now define four transformations by specifying the fields on both sides of the metasurface. Let us consider that the metasurface is illuminated from the left with an $x$-polarized normally incident plane wave. The corresponding incident, reflected and transmitted electric fields read $$\label{eq:xPol} {\boldsymbol{E}}_{\text{i}}={\boldsymbol{\hat{x}}}, \quad {\boldsymbol{E}}_{\text{r}}=S_{11}^{xx}{\boldsymbol{\hat{x}}} + S_{11}^{yx}{\boldsymbol{\hat{y}}}, \quad {\boldsymbol{E}}_{\text{t}}=S_{21}^{xx}{\boldsymbol{\hat{x}}} + S_{21}^{yx}{\boldsymbol{\hat{y}}},$$ where the terms $S_{ab}^{uv}$, with $a, b = \{1,2\}$ and $u, v = \{x,y\}$, are the scattering parameters with ports 1 and 2 corresponding to the left and right sides of the metasurface, respectively, as shown in Fig. \[fig:UnitCellSim\]. ![Full-wave simulation setup for the scattering parameter technique leading to the metasurface physical structure from the metasurface model based on . The unit cell is surrounded by periodic boundary conditions (PBC) and excited from ports 1 and 2.[]{data-label="fig:UnitCellSim"}](UnitCellSim){width="1\linewidth"} The medium on the left of the metasurface has the intrinsic impedance $\eta_1$, while the medium on the right has the intrinsic impedance $\eta_2$. In addition to , three other cases have to be considered, i.e. $y$-polarized excitation incident from the left (port 1), and $x$- and $y$-polarized excitations incident from the right (port 2).
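This assembly of the four excitations can be scripted directly from the plane-wave relations, without yet writing the matrices explicitly. The Python sketch below (function names ours) builds the field-difference and field-average matrices column by column, with field differences always taken as right side minus left side, and then recovers $\widetilde{{\overline{\overline{\chi}}}}={\overline{\overline{\Delta}}}\cdot{\overline{\overline{A}}}_v^{-1}$:

```python
import numpy as np

def tangential_fields(e, direction, eta):
    """(Ex, Ey, Hx, Hy) at z = 0 for a plane wave with transverse electric
    field e = (Ex, Ey) propagating toward +z (direction=+1) or -z (-1)."""
    ex, ey = e
    return np.array([ex, ey, -direction * ey / eta, direction * ex / eta])

def extract_chi(S11, S12, S21, S22, eta1, eta2):
    """Normalized susceptibility matrix from the four normal-incidence
    scattering tests (x- and y-polarized excitation of ports 1 and 2)."""
    Delta = np.zeros((4, 4), complex)
    Av = np.zeros((4, 4), complex)
    for col, (port, pol) in enumerate([(1, 0), (1, 1), (2, 0), (2, 1)]):
        e = np.eye(2)[:, pol]
        if port == 1:  # incident + reflected on the left, transmitted right
            left = tangential_fields(e, +1, eta1) \
                 + tangential_fields(S11 @ e, -1, eta1)
            right = tangential_fields(S21 @ e, +1, eta2)
        else:          # incident + reflected on the right, transmitted left
            right = tangential_fields(e, -1, eta2) \
                  + tangential_fields(S22 @ e, +1, eta2)
            left = tangential_fields(S12 @ e, -1, eta1)
        d, a = right - left, (right + left) / 2
        Delta[:, col] = [d[3], d[2], d[1], d[0]]  # (dHy, dHx, dEy, dEx)
        Av[:, col] = a                            # (Ex, Ey, Hx, Hy)_av
    return Delta @ np.linalg.inv(Av)
```

As a sanity check, a fictitious fully transparent structure ($S_{11}=S_{22}=0$, $S_{21}=S_{12}=I$, $\eta_1=\eta_2$) yields identically zero susceptibilities, as expected for free space.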
Inserting these fields into , leads, after simplification, to the matrices ${\overline{\overline{\Delta}}}$ and ${\overline{\overline{A}}}_v$ given below, $$\label{eq:deltaMat} {\overline{\overline{\Delta}}}= \begin{pmatrix} -{\overline{\overline{N}}}_2/\eta_1 + {\overline{\overline{N}}}_2\cdot{\overline{\overline{S}}}_{11}/\eta_1 + {\overline{\overline{N}}}_2\cdot{\overline{\overline{S}}}_{21}/\eta_2 & -{\overline{\overline{N}}}_2/\eta_2 +{\overline{\overline{N}}}_2\cdot{\overline{\overline{S}}}_{12}/\eta_1 + {\overline{\overline{N}}}_2\cdot{\overline{\overline{S}}}_{22}/\eta_2 \\ -{\overline{\overline{N}}}_1\cdot{\overline{\overline{N}}}_2 - {\overline{\overline{N}}}_1\cdot{\overline{\overline{N}}}_2\cdot{\overline{\overline{S}}}_{11} + {\overline{\overline{N}}}_1\cdot{\overline{\overline{N}}}_2\cdot{\overline{\overline{S}}}_{21} & {\overline{\overline{N}}}_1\cdot{\overline{\overline{N}}}_2 - {\overline{\overline{N}}}_1\cdot{\overline{\overline{N}}}_2\cdot{\overline{\overline{S}}}_{12}+ {\overline{\overline{N}}}_1\cdot{\overline{\overline{N}}}_2\cdot{\overline{\overline{S}}}_{22} \end{pmatrix},$$ $$\label{eq:AvMat} {\overline{\overline{A}}}_v=\frac{1}{2} \begin{pmatrix} {\overline{\overline{I}}} + {\overline{\overline{S}}}_{11}+ {\overline{\overline{S}}}_{21} & {\overline{\overline{I}}} + {\overline{\overline{S}}}_{12}+ {\overline{\overline{S}}}_{22} \\ {\overline{\overline{N}}}_1/\eta_1 - {\overline{\overline{N}}}_1\cdot{\overline{\overline{S}}}_{11}/\eta_1 + {\overline{\overline{N}}}_1\cdot{\overline{\overline{S}}}_{21}/\eta_2 & -{\overline{\overline{N}}}_1/\eta_2 - {\overline{\overline{N}}}_1\cdot{\overline{\overline{S}}}_{12}/\eta_1 + {\overline{\overline{N}}}_1\cdot{\overline{\overline{S}}}_{22}/\eta_2 \end{pmatrix}.$$ where the matrices ${\overline{\overline{S}}}_{ab}$, ${\overline{\overline{I}}}$, ${\overline{\overline{N}}}_1$ and ${\overline{\overline{N}}}_2$ are defined by $$\begin{split} &{\overline{\overline{S}}}_{ab}= \begin{pmatrix} S_{ab}^{xx} & 
S_{ab}^{xy} \\ S_{ab}^{yx} & S_{ab}^{yy} \end{pmatrix},\qquad {\overline{\overline{I}}}= \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix},\\ &{\overline{\overline{N}}}_1= \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix},\qquad {\overline{\overline{N}}}_2= \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}. \end{split}$$ Now, the procedure to obtain the susceptibilities of a given scattering particle is as follows: firstly, the scattering particle is simulated with periodic boundary conditions and normal excitation. Secondly, the resulting scattering parameters obtained from the simulations are used to define the matrices in  and . Finally, the susceptibilities corresponding to the particle are obtained by matrix inversion of . Alternatively, it is possible to obtain the scattering parameters of a normally incident plane wave being scattered by a uniform metasurface with known susceptibilities. This can be achieved by solving  for the scattering parameters. This leads to the following matrix equation: $$\label{eq:reducedSysInv} {\overline{\overline{S}}} = {\overline{\overline{M}}}_1^{-1}\cdot{\overline{\overline{M}}}_2,$$ where the scattering parameter matrix, ${\overline{\overline{S}}}$, is defined as $$\label{eq:Smatrix} {\overline{\overline{S}}}= \begin{pmatrix} {\overline{\overline{S}}}_{11} & {\overline{\overline{S}}}_{12} \\ {\overline{\overline{S}}}_{21} & {\overline{\overline{S}}}_{22} \end{pmatrix},$$ and the matrices ${\overline{\overline{M}}}_1$ and ${\overline{\overline{M}}}_2$ are obtained from ,  and  by expressing the scattering parameters in terms of the normalized susceptibility tensors. The resulting matrices ${\overline{\overline{M}}}_1$ and ${\overline{\overline{M}}}_2$ are given below. 
$$\label{eq:Mat1} {\overline{\overline{M}}}_1= \begin{pmatrix} {\overline{\overline{N}}}_2/\eta_1 - \widetilde{{\overline{\overline{\chi}}}}_\text{ee}/2 + \widetilde{{\overline{\overline{\chi}}}}_\text{em}\cdot{\overline{\overline{N}}}_1/(2\eta_1) & {\overline{\overline{N}}}_2/\eta_2 - \widetilde{{\overline{\overline{\chi}}}}_\text{ee}/2 - \widetilde{{\overline{\overline{\chi}}}}_\text{em}\cdot{\overline{\overline{N}}}_1/(2\eta_2) \\ -{\overline{\overline{N}}}_1\cdot{\overline{\overline{N}}}_2 - \widetilde{{\overline{\overline{\chi}}}}_\text{me}/2 + \widetilde{{\overline{\overline{\chi}}}}_\text{mm}\cdot{\overline{\overline{N}}}_1/(2\eta_1) & {\overline{\overline{N}}}_1\cdot{\overline{\overline{N}}}_2 - \widetilde{{\overline{\overline{\chi}}}}_\text{me}/2 - \widetilde{{\overline{\overline{\chi}}}}_\text{mm}\cdot{\overline{\overline{N}}}_1/(2\eta_2) \end{pmatrix},$$ $$\label{eq:Mat2} {\overline{\overline{M}}}_2= \begin{pmatrix} \widetilde{{\overline{\overline{\chi}}}}_\text{ee}/2 + {\overline{\overline{N}}}_2/\eta_1+\widetilde{{\overline{\overline{\chi}}}}_\text{em}\cdot{\overline{\overline{N}}}_1/(2 \eta_1) & \widetilde{{\overline{\overline{\chi}}}}_\text{ee}/2 + {\overline{\overline{N}}}_2/\eta_2-\widetilde{{\overline{\overline{\chi}}}}_\text{em}\cdot{\overline{\overline{N}}}_1/(2 \eta_2) \\ \widetilde{{\overline{\overline{\chi}}}}_\text{me}/2 + {\overline{\overline{N}}}_1\cdot{\overline{\overline{N}}}_2+\widetilde{{\overline{\overline{\chi}}}}_\text{mm}\cdot{\overline{\overline{N}}}_1/(2 \eta_1) & \widetilde{{\overline{\overline{\chi}}}}_\text{me}/2 - {\overline{\overline{N}}}_1{\overline{\overline{N}}}_2-\widetilde{{\overline{\overline{\chi}}}}_\text{mm}\cdot{\overline{\overline{N}}}_1/(2 \eta_2) \end{pmatrix}.$$ Thus, the final metasurface physical structure is obtained by mapping the scattering parameters  obtained from the discretized synthesized susceptibilities by  via  and  to those obtained by full-wave simulating metasurface unit cells with tunable 
parameters, in an approximate periodic environment, as illustrated in Fig. \[fig:UnitCellSim\]. Concepts and Applications {#sec:ConApp} ========================= In the previous section, we have shown several metasurface examples as *illustrations* of the proposed synthesis technique. These examples did not necessarily correspond to practical designs but, in addition to illustrating the proposed synthesis technique, they set the stage for the development of useful and practical concepts and applications, which is the object of the present section. We shall present here five of our most recent works representing novel concepts and applications of metasurfaces. In the order of appearance, we present our work on birefringent transformations [@Achouri2015c; @Achouri2016e], bianisotropic refraction [@Lavigne2017], light emission enhancement [@Chen2016], remote spatial processing [@Achouri2016c] and nonlinear second-harmonic generation [@achouri2017mathematical]. The reader is also referred to our related works on nonreciprocal nongyrotropic isolators [@Taravati2016], dielectric metasurfaces for dispersion engineering [@Achouri2016d] and radiation pressure control [@achouri2017metasurface]. Birefringent Operations ----------------------- A direct application of the synthesis procedure discussed in Sec. \[sec:Syn\], and more specifically of the susceptibilities in , is the design of birefringent metasurfaces. These susceptibilities are split into two independent sets that allow one to individually control the scattering of s- and p-polarized waves. In particular, the manipulation of the respective transmission phases of these orthogonal waves allows several interesting operations. 
In [@Achouri2016e], we have used this approach to realize half-wave plates, which rotate the polarization of linearly polarized waves by $90^\circ$ or invert the handedness of circularly polarized waves, quarter-wave plates, which convert linear polarization into circular polarization, a polarization beam splitter, which spatially separates orthogonally polarized waves, and an orbital angular momentum generator, which generates topological charges that depend on the incident wave polarization. These operations are depicted in Fig. \[Fig:Blend\]. ![Birefringent metasurface transformations presented in [@Achouri2016e].[]{data-label="Fig:Blend"}](blend){width="1\columnwidth"} “Perfect” Refraction -------------------- Most refractive operations realized so far with a metasurface have been based on the concept of the generalized law of refraction [@capasso1], which requires the implementation of a phase gradient structure. However, such structures are plagued by undesired diffraction orders and are thus not fully efficient. It turns out that the fundamental reason for this efficiency limitation is the symmetric nature of simple refractive metasurfaces (such as the early one in [@capasso1]) with respect to the $z$-direction. This can be demonstrated by the following ad absurdum argument. Let us consider a passive metasurface surrounded by a given reciprocal[^21] medium and denote the two sides of the structure by the indices 1 and 2. Assume that this metasurface *perfectly* refracts (without reflection and spurious diffraction) a wave incident under the angle $\theta_1$ in side 1 to the angle $\theta_2$ in side 2, and assume, *ad absurdum*, that this metasurface is *symmetric* with respect to its normal. Since it is reciprocally perfectly refracting, it is perfectly matched for both propagation directions, 1 to 2 and 2 to 1. Consider first wave propagation from side 1 to side 2. 
Due to perfect matching, the wave experiences no reflection and, due to perfect refraction, it is fully transmitted to the angle $\theta_2$ in side 2. Consider now wave propagation in the opposite direction, along the reciprocal (or time-reversed) path. Now, the wave incident in side 2 has different tangential field components than that incident in side 1, assuming $\theta_2\neq\theta_1$, and, therefore, it will see a different impedance, which means that the metasurface is necessarily mismatched in the direction 2 to 1. But this is in contradiction with the assumption of perfect (reciprocal) refraction! Consequently, the symmetric metasurface does not produce perfect refraction. Part of the wave incident from side 2 is reflected back and therefore, by reciprocity, matching also did not actually exist in the direction 1 to 2, so all of the energy of the wave incident under $\theta_1$ in side 1 cannot completely refract into $\theta_2$; part of it has to be transmitted to other directions in side 2, which typically represents spurious diffraction orders assuming a periodic-gradient metasurface. These diffraction orders are consistently visible in reported simulations and experiments of symmetric metasurfaces intended to perform refraction. It was demonstrated in [@7506314; @Lavigne2017] that *bianisotropy* was the solution to realize perfect (reciprocal) refraction ($100\%$ power transmission efficiency from $\theta_1$ to $\theta_2$). In what follows, we summarize the main synthesis steps for such a metasurface. Let us consider the bianisotropic GSTCs relations in . For a refractive metasurface, rotation of polarization is not required and usually undesired. Therefore, the relevant nonzero susceptibility components reduce to the diagonal components of ${\overline{\overline{\chi}}}_\text{ee}$ and ${\overline{\overline{\chi}}}_\text{mm}$ and the off-diagonal components of ${\overline{\overline{\chi}}}_\text{em}$ and ${\overline{\overline{\chi}}}_\text{me}$. 
This corresponds to $4\times 2=8$ susceptibility parameters, leading, according to Sec. \[sec:MultiTrans\], to the double-transformation full-rank system $$\begin{split} \label{eq:Biani} & \begin{pmatrix} \Delta H_{y1} & \Delta H_{y2} \\ \Delta H_{x1} & \Delta H_{x2} \\ \Delta E_{y1} & \Delta E_{y2} \\ \Delta E_{x1} & \Delta E_{x2} \end{pmatrix}=\\ &\qquad \begin{pmatrix} \widetilde{\chi}_\text{ee}^{xx} & 0 & 0 & \widetilde{\chi}_\text{em}^{xy}\\ 0 & \widetilde{\chi}_\text{ee}^{yy} & \widetilde{\chi}_\text{em}^{yx} & 0\\ 0 & \widetilde{\chi}_\text{me}^{xy} & \widetilde{\chi}_\text{mm}^{xx} & 0\\ \widetilde{\chi}_\text{me}^{yx} & 0 & 0 & \widetilde{\chi}_\text{mm}^{yy} \end{pmatrix} \cdot \begin{pmatrix} E_{x1,\text{av}} & E_{x2,\text{av}} \\ E_{y1,\text{av}} & E_{y2,\text{av}} \\ H_{x1,\text{av}} & H_{x2,\text{av}} \\ H_{y1,\text{av}} & H_{y2,\text{av}} \end{pmatrix}, \end{split}$$ where we naturally specify the second transformation as the reciprocal of the first one. Assuming that the refraction takes place in the $xz$-plane and that the waves are all p-polarized, the system  reduces to $$\label{eq:Biani2} \begin{pmatrix} \Delta H_{y1} & \Delta H_{y2} \\ \Delta E_{x1} & \Delta E_{x2} \end{pmatrix}= \begin{pmatrix} \widetilde{\chi}_\text{ee}^{xx} & \widetilde{\chi}_\text{em}^{xy}\\ \widetilde{\chi}_\text{me}^{yx} & \widetilde{\chi}_\text{mm}^{yy} \end{pmatrix} \cdot \begin{pmatrix} E_{x1,\text{av}} & E_{x2,\text{av}} \\ H_{y1,\text{av}} & H_{y2,\text{av}} \end{pmatrix},$$ which strictly corresponds to a system with ${\cal N}(4)=2$, although the initial goal might have been to perform refraction in one propagation direction only. An illustration of the first and second transformations is presented in Figs. \[fig:BianiRef1\] and \[fig:BianiRef2\], respectively. Note that the subscripts i and t respectively refer to the incident and transmit sides of the metasurface rather than the incident and transmitted waves. 
The electromagnetic fields corresponding to the first transformation, on the incident and transmit sides of the metasurface and assuming that the media on both sides are vacuum, read \[eq:Field1\] $$\begin{aligned} E_{x1,\text{i}} &= \frac{k_{z,\text{i}}}{k_0}e^{-jk_{x,\text{i}}x},\qquad E_{x1,\text{t}} = A_\text{t}\frac{k_{z,\text{t}}}{k_0}e^{-jk_{x,\text{t}}x},\\ H_{y1,\text{i}} &= e^{-jk_{x,\text{i}}x}/\eta_0, \qquad H_{y1,\text{t}} =A_\text{t} e^{-jk_{x,\text{t}}x}/\eta_0,\end{aligned}$$ where $A_\text{t}$ is the amplitude of the wave on the transmit side. The fields corresponding to the second transformation read \[eq:Field2\] $$\begin{aligned} E_{x2,\text{i}} &= -\frac{k_{z,\text{i}}}{k_0}e^{jk_{x,\text{i}}x},\qquad E_{x2,\text{t}} = -A_\text{t}\frac{k_{z,\text{t}}}{k_0}e^{jk_{x,\text{t}}x},\\ H_{y2,\text{i}} &= e^{jk_{x,\text{i}}x}/\eta_0, \quad\qquad H_{y2,\text{t}} =A_\text{t} e^{jk_{x,\text{t}}x}/\eta_0.\end{aligned}$$ In order to ensure power conservation between the incident and transmitted waves, the amplitude of the transmitted wave must be $A_\text{t}=\sqrt{k_{z,\text{i}}/k_{z,\text{t}}}=\sqrt{\cos{\theta_\text{i}}/\cos{\theta_\text{t}}}$, as shown in [@Lavigne2017]. Under this condition, the metasurface susceptibilities, obtained by substituting  and  into  and considering the normalization , read \[eq:bianiChi\] $$\chi_\text{ee}^{xx} = \frac{4 \sin ( \alpha x)}{ \beta \cos ( \alpha x)+ \sqrt{\beta^2-\gamma^2}},$$ $$\chi_\text{mm}^{yy} = \frac{\beta^2-\gamma^2}{4 k_0^2 } \frac{4 \sin ( \alpha x)}{ \beta \cos ( \alpha x)+ \sqrt{\beta^2-\gamma^2}},$$ $$\chi_\text{em}^{xy} = -\chi_\text{me}^{yx}= \frac{2j}{k_0}\frac{ \gamma\cos ( \alpha x)}{ \beta \cos ( \alpha x)+ \sqrt{\beta^2-\gamma^2}},$$ where $\alpha = k_{x,\text{t}} - k_{x,\text{i}}$, $\beta = k_{z,\text{i}} + k_{z,\text{t}}$ and $\gamma = k_{z,\text{i}} - k_{z,\text{t}}$. 
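As a quick numerical sanity check of these closed-form expressions (a sketch only: the angles and the 3 cm operating wavelength below are arbitrary assumptions, not a design from the text), one can verify that $\chi_\text{ee}^{xx}$ and $\chi_\text{mm}^{yy}$ are real while $\chi_\text{em}^{xy}$ is purely imaginary, consistent with a lossless, reciprocal structure:

```python
import numpy as np

# Hypothetical refraction from theta_i = 20 deg to theta_t = -28 deg at an
# assumed free-space wavelength of 3 cm.
theta_i, theta_t = np.deg2rad(20.0), np.deg2rad(-28.0)
k0 = 2 * np.pi / 0.03

kx_i, kz_i = k0 * np.sin(theta_i), k0 * np.cos(theta_i)
kx_t, kz_t = k0 * np.sin(theta_t), k0 * np.cos(theta_t)
alpha, beta, gamma = kx_t - kx_i, kz_i + kz_t, kz_i - kz_t

# Power-conserving transmitted amplitude A_t = sqrt(kz_i / kz_t)
A_t = np.sqrt(kz_i / kz_t)

x = 0.005  # evaluation point along the metasurface (m)
denom = beta * np.cos(alpha * x) + np.sqrt(beta**2 - gamma**2)
chi_ee = 4 * np.sin(alpha * x) / denom
chi_mm = (beta**2 - gamma**2) / (4 * k0**2) * 4 * np.sin(alpha * x) / denom
chi_em = 2j / k0 * gamma * np.cos(alpha * x) / denom
chi_me = -chi_em

# chi_ee, chi_mm real; chi_em purely imaginary: lossless and reciprocal.
assert chi_em.real == 0.0
```

Note that $\beta^2-\gamma^2 = 4k_{z,\text{i}}k_{z,\text{t}} > 0$ for any pair of propagating waves, so the square root above is always real.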
It can be easily verified, using , that the bianisotropic refractive metasurface with the susceptibilities  corresponds to a reciprocal, passive and lossless structure, in addition to being immune to reflection and spurious diffraction, and is hence a perfectly refractive metasurface. To demonstrate the performance of the synthesis method, we have built two bianisotropic refractive metasurfaces [@Lavigne2017]. They respectively transform an incident wave impinging at $\theta_\text{i}=20^\circ$ into a transmitted wave refracted at $\theta_\text{t}=-28^\circ$, and a normally incident wave into a transmitted wave refracted at $\theta_\text{t}=-70^\circ$. The full-wave simulations corresponding to these transformations are respectively plotted in Figs. \[Fig:Demo1\] and \[Fig:Demo2\]. The simulated power transmission of these two structures is respectively $86.7\%$ and $83.2\%$. These efficiencies are mostly limited by the inherent dielectric and metallic losses of the scattering particles and, to a lesser extent, by the undesired diffraction orders due to the imperfection of these particles. A corresponding metasurface was demonstrated in [@Lavigne2017] with an efficiency (79 $\%$) that is around 4 $\%$ higher than the theoretical limit of a *lossless* monoanisotropic metasurface, hence unquestionably demonstrating the superiority of the bianisotropic design! Remote Spatial Processing ------------------------- Metasurface remote spatial processing, introduced in [@Achouri2016c], consists in controlling the transmission of a signal beam through a metasurface by remotely sending a control beam, which properly interferes with the signal beam. This interference is thus used to shape the metasurface transmission pattern by varying the phase and/or amplitude of the control beam. Figure \[fig:param\_prob\] presents an example of such remote spatial processing. Initially, the signal beam (in blue) in Fig. 
\[fig:param\_prob3\] is refracted by the metasurface according to some initial specification. When the control beam (in red) is next added to the signal beam on the metasurface, as in Fig. \[fig:param\_prob4\], it changes the overall radiation pattern of the metasurface. We have used this concept to implement remote spatial switch/modulators. The operation principle of such a modulator is presented in Fig. \[Fig:TMconcept\]. To avoid the collocation of the control and signal beam sources, the control beam impinges on the metasurface at an angle while the signal beam is normally incident. In order to independently control the transmission of both beams, they must be orthogonally polarized on the incident side of the metasurface. However, they must exhibit the same polarization on the transmit side so as to interfere. In [@Achouri2016c], we show that such a transformation can only be achieved using a bianisotropic metasurface, which must also be chiral so as to rotate the polarization of the control beam. On the transmit side, the two beams interfere and the corresponding amplitude thus depends on the phase difference between them. ![Coherent modulator metasurface. The signal and control beams are impinging on the metasurface at different angles to avoid collocation of their sources. The amplitude of the transmitted wave depends on the phase difference between the two beams by interference.[]{data-label="Fig:TMconcept"}](TM2){width="0.65\columnwidth"} The fabricated metasurface performing the operation depicted in Fig. \[Fig:TMconcept\] has been experimentally measured, and the corresponding results are plotted in Fig. \[Fig:BWnormal\] for an operating frequency of 16 GHz. ![Measured transmission coefficients for the metasurface in Fig. \[Fig:TMconcept\]. 
The blue curve is the transmission of the signal beam only, while the black and green curves are the destructive and constructive interferences of the signal and control beams, respectively.[]{data-label="Fig:BWnormal"}](BWNormal){width="0.75\columnwidth"} Light Emission Enhancement -------------------------- In the perspective of enhancing the efficiency of light-emitting diodes (LEDs), we have reported in [@Chen2016] a partially reflecting metasurface cavity (PRMC) increasing the emission of photon sources in layered semiconductor structures, using the susceptibility-GSTC technique presented in this paper. This PRMC simultaneously enhances the light extraction efficiency (LEE), spontaneous emission rate (SER) and far-field directivity of the photon source. The LEE is enhanced by forcing the emitted light to optimally refract/radiate perpendicularly to the device. Such refraction suppresses the wave trapping loss, represented in Fig. \[fig:led1\]. The requirement of total normal refraction, represented in Fig. \[fig:led2\], is excessively stringent, leading to susceptibilities with prohibitive spatial variations, and is not required in this application. A better strategy consists, as illustrated in Fig. \[fig:led3\], in allowing partial local reflection, and ultimately collecting the reflected part of the energy by Fabry-Perot resonance in the PRMC formed with a mirror plane at the bottom of the slab. The double-metasurface cavity, depicted in Fig. , is an even more sophisticated design, leading to dramatic LEE enhancement. \ The SER is enhanced by maximizing the confinement of coherent electromagnetic energy in the vicinity of the source and leveraging the Purcell effect, which is particularly well achieved in the double-metasurface PRMC (Fig. \[fig:led4\]). Finally, the far-field directivity is maximized as an optimization tradeoff for maximal overall power conversion ratio. Figure \[Fig:LEDsim\] shows full-wave simulated flux densities for the designs of Figs. 
\[fig:led1\] and \[fig:led4\], where the latter features LEE and SER enhancements by factors of 4.0 and 1.9, respectively, with half-power beam width of 22.5$^\circ$. ![Full-wave (COMSOL) simulated energy flux densities for a dipole emitter embedded in a GaN slab. (a) Configuration of Fig. \[fig:led1\]. (b) Configuration of Fig. \[fig:led4\]. Original images from [@Chen2016].[]{data-label="Fig:LEDsim"}](LEDsim){width="1\columnwidth"} The case of a real LED is more complex due to the incoherent and distributed emission of the quantum well emitters. Different metasurface strategies are currently being investigated to maximize the power conversion efficiency of a complete LED. Second-Order Nonlinearity ------------------------- So far, we have only discussed linear metasurfaces, i.e. metasurfaces whose polarization densities are linear functions of the electric and magnetic fields. Given the wealth of potential applications of nonlinear metasurfaces, it is highly desirable to develop tools for the design of such metasurfaces. Therefore, we extended our susceptibility-GSTC technique to the case of a second-order nonlinear metasurface in [@achouri2017mathematical]. In this case, the polarization densities can be written as \[eq:NLPol\] $$\begin{aligned} {\boldsymbol{P}} &= \epsilon_0{\overline{\overline{\chi}}}^{(1)}_\text{ee}\cdot{\boldsymbol{E}}_\text{av} + \epsilon_0{\overline{\overline{\chi}}}^{(2)}_\text{ee}:{\boldsymbol{E}}_\text{av}{\boldsymbol{E}}_\text{av},\\ {\boldsymbol{M}} &= {\overline{\overline{\chi}}}^{(1)}_\text{mm}\cdot{\boldsymbol{H}}_\text{av} + {\overline{\overline{\chi}}}^{(2)}_\text{mm}:{\boldsymbol{H}}_\text{av}{\boldsymbol{H}}_\text{av},\end{aligned}$$ where ${\overline{\overline{\chi}}}^{(1)}$ and ${\overline{\overline{\chi}}}^{(2)}$ are the linear and nonlinear (second-order) susceptibilities of the metasurface. For the sake of simplicity, we assume that these susceptibility tensors are scalar. 
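The essential effect of the quadratic terms in  can be illustrated numerically: a monochromatic drive at $\omega_0$ passed through $\chi^{(2)}E_\text{av}^2$ produces a spectral line at the second harmonic $2\omega_0$, plus a DC rectification term. A minimal sketch with hypothetical scalar susceptibility values:

```python
import numpy as np

# Hypothetical scalar susceptibilities and a normalized pump frequency
# (illustration only, not values from the text).
eps0, chi1, chi2 = 8.854e-12, 2.0, 0.5
f0 = 1.0
t = np.linspace(0.0, 8.0, 4096, endpoint=False)
E_av = np.cos(2 * np.pi * f0 * t)

# Second-order nonlinear electric polarization (scalar case of Eq. (NLPol)).
P = eps0 * (chi1 * E_av + chi2 * E_av**2)

spectrum = np.abs(np.fft.rfft(P))
freqs = np.fft.rfftfreq(t.size, d=t[1] - t[0])

# cos^2 = (1 + cos(4*pi*f0*t))/2: lines at DC, f0 and the second harmonic.
lines = freqs[spectrum > 0.1 * spectrum.max()]
assert np.allclose(np.sort(lines), [0.0, f0, 2 * f0])
```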
Being nonlinear, the metasurface will generate harmonics of the excitation frequency $\omega_0$. Consequently, we have to express the GSTCs in  in the time-domain to properly take into account the generation of these new frequencies. The relevant GSTCs are then, in the case of $x$-polarized waves, given by[^22] \[eq:TDgstc\] $$\begin{aligned} -\Delta H &= \epsilon_0 \chi^{(1)}_\text{ee} \frac{\partial}{\partial t} E_\text{av} + \epsilon_0 \chi^{(2)}_\text{ee} \frac{\partial}{\partial t} E^2_\text{av},\\ -\Delta E &= \mu_0 \chi^{(1)}_\text{mm} \frac{\partial}{\partial t} H_\text{av} + \mu_0 \chi^{(2)}_\text{mm} \frac{\partial}{\partial t} H^2_\text{av}\label{eq:TDgstc2},\end{aligned}$$ where $E$ and $H$ are, respectively, the $x$-component of the electric field and the $y$-component of the magnetic field. From these relations, we can either perform a synthesis, i.e. expressing the susceptibilities as functions of the fields, or an analysis, i.e. computing the fields scattered from a metasurface with known susceptibilities. Here, for the sake of brevity, we will not elaborate on the synthesis and analysis operations but shall rather present one of the main results obtained in [@achouri2017mathematical], which are the reflectionless conditions for the metasurface. The metasurface with susceptibilities  exhibits different reflectionless conditions for the two propagation directions since, due to the presence of the square of both the electric and magnetic fields, the relations  are asymmetric with respect to the $z$-direction. 
It follows that the reflectionless conditions for waves propagating in the forward (+$z$) direction are \[eq:RLCond\] $$\begin{aligned} \chi^{(1)}_\text{ee} = \chi^{(1)}_\text{mm},\\ \eta_0\chi^{(2)}_\text{ee} = \chi^{(2)}_\text{mm},\end{aligned}$$ while for backward (-$z$) propagation they are \[eq:RLCond2\] $$\begin{aligned} \chi^{(1)}_\text{ee} = \chi^{(1)}_\text{mm},\\ -\eta_0\chi^{(2)}_\text{ee} = \chi^{(2)}_\text{mm}.\end{aligned}$$ An important consequence of the fact that the metasurface cannot be matched from both sides is that its second-harmonic generation (SHG) is inherently nonreciprocal. Conclusions =========== We have presented an overview of electromagnetic metasurface designs, concepts and applications based on a bianisotropic surface susceptibility tensor model. This overview probably covers only a small fraction of the possibilities of this approach, which nevertheless already represents a solid foundation for future metasurface technology. Acknowledgment {#acknowledgment .unnumbered} ============== This work was accomplished in the framework of the Collaborative Research and Development Project CRDPJ 478303-14 of the Natural Sciences and Engineering Research Council of Canada (NSERC) in partnership with the company Metamaterial Technology Inc. [^1]: K. Achouri and C. Caloz are with the Department of Electrical Engineering, Polytechnique Montreal, Montreal, Quebec, Canada email: karim.achouri@polymtl.ca, christophe.caloz@polymtl.ca [^2]: So far, and throughout this paper, we essentially consider metasurfaces illuminated by waves incident on them under a non-zero angle with respect to their plane, i.e. space waves, which represent the metasurfaces leading to the main applications. However, metasurfaces may also be excited within their plane, i.e. by surface waves or leaky-waves, as in [@Imani2010104; @Grbic25042008; @6576211; @5498959; @6127895]. 
[^3]: This approximation is justified by the fact that a physical metasurface is electromagnetically very thin, so that it cannot support significant phase shifts and related effects, such as Fabry-Perot resonances. [^4]: Note that these relations can also be obtained following the more traditional technique of box integration, as demonstrated in [@albooyeh2016electromagnetic]. [^5]: Despite being indeed quite general, these relations are still restricted to *linear* and *time-invariant* metasurfaces. The synthesis of nonlinear metasurfaces has been approached using extended GSTCs in [@achouri2017mathematical]. [^6]: These conditions are identical to those for a bianisotropic medium [@kong1986electromagnetic; @lindell1994electromagnetic], except that the susceptibilities in  are surface instead of volume susceptibilities. [^7]: The *synthesis* procedure consists in determining the physical metasurface structure for specified fields. The inverse procedure is the *analysis*, which consists in determining the fields scattered by a given physical metasurface structure for a given incident field, and is generally coupled (typically iteratively) with the synthesis for the efficient design of a metasurface [@Vahab2017metasurface]. The overall design procedure thus consists of the combination of the synthesis and analysis operations. This paper focuses on the direct synthesis of the susceptibility functions, as this is the most important aspect for the understanding of the physical properties of metasurfaces, the elaboration of related concepts, and the development of resulting applications. [^8]: Moreover, in the particular case where all the specified waves are normal to the metasurface, the excitation of normal polarization densities does not induce any discontinuity in the fields. This is because the corresponding fields, and hence the related susceptibilities, are not functions of the $x$ and $y$ coordinates, so that the spatial derivatives of $P_z$ and $M_z$ in Eqs.  
and  are zero, i.e. do not induce any discontinuity in the fields across the metasurface. Thus, susceptibilities producing normal polarizations can be ignored, and only tangential susceptibility components must be considered, when the metasurface is synthesized for normal waves. [^9]: Even if it could be solved, this would probably result in an inefficient metasurface, as it would use more susceptibility terms than required to accomplish the specified task. [^10]: Mathematically, the number of combinations would be $16!/[(16-4)!4!]=1,820$, but only a subset of these combinations represent physically meaningful combinations. [^11]: For instance, the specification of a reciprocal transformation, corresponding to the metasurface properties in Eq. , would automatically preclude the selection of off-diagonal pairs for ${\overline{\overline{\chi}}}_{\text{ee,mm}}$. [^12]: If the two electric and the two magnetic susceptibilities in  are equal to each other ($\chi_{\text{ee}}^{xx} = \chi_{\text{ee}}^{yy}$ and $\chi_{\text{mm}}^{xx} = \chi_{\text{mm}}^{yy}$), the monoanisotropic metasurface in  reduces to the simplest possible case of a monoisotropic metasurface, and hence performs the same operation for $x$- and $y$-polarized waves. [^13]: Incidentally, the equality between the electric and magnetic susceptibilities results from the specification of zero reflection in addition to normal incidence. The reader may easily verify that in the presence of reflection, the equalities do not hold. [^14]: If, for instance, the incident wave were polarized along $x$ only, then only the susceptibilities in  would be excited and the resulting transmitted field would still be polarized along $x$, just with a reduced amplitude with respect to that of the incident wave due to the loss induced by these susceptibilities. [^15]: It is also possible to solve a system of equations that contains fewer than these 16 susceptibility components. 
In that case, fewer than 4 wave transformations should be specified so that the system remains fully determined. For instance, two independent wave transformations (possessing both $x$ and $y$ polarizations) could be solved with 8 susceptibilities. Similarly, 3 wave transformations could be solved with 12 susceptibilities. [^16]: For instance, if the fields of the first transformation are only $x$-polarized, while the fields of the second transformation are only $y$-polarized. [^17]: The synthesis technique yields the susceptibilities for an ideal zero-thickness metasurface. However, the metasurface sheet may be *approximated* by an electrically thin slab of thickness $d$ ($d\ll\lambda$) with volume susceptibility corresponding to a diluted version of the surface susceptibility, i.e. $\chi_\text{vol}=\chi/d$ [@achouri2014general]. [^18]: It is essential to understand that these three sets of incident and transmitted waves *cannot* be combined, by superposition, into a single incident and a single transmitted wave because these waves are not necessarily impinging on the metasurface at the same time. This means that the Huygens theorem cannot be used to find purely tangential equivalent surface currents corresponding to these fields. [^19]: Here $R=0$ since the metasurface is reflectionless by specification. [^20]: Note that it is possible to obtain all 36 susceptibility components of a scattering particle provided that the 4 GSTCs relations are solved for 9 independent sets of incident, reflected and transmitted waves. In practice, such an operation is particularly tedious and is thus generally avoided. [^21]: The quasi-totality of the refracting metasurfaces discussed in the literature so far have been reciprocal. The following argument does not hold for the nonreciprocal case, where perfect refraction could in principle be achieved by a symmetric metasurface structure. [^22]: In these expressions, the susceptibilities are dispersion-less. 
That is, $\chi(\omega_0)=\chi(2\omega_0)=\chi(3\omega_0)=...$, as discussed in [@achouri2017mathematical]; this is essentially equivalent to the conventional phase-matching condition in nonlinear optics.
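As a quick arithmetic check of the combination count quoted in footnote 10 above, the binomial coefficient can be computed directly; a minimal Python sketch (only the numbers from the footnote are used):

```python
from math import comb, factorial

# Footnote 10 counts the ways of choosing 4 of the 16 susceptibility components.
n_choose_k = factorial(16) // (factorial(16 - 4) * factorial(4))
print(n_choose_k)   # 1820
print(comb(16, 4))  # 1820
```

As the footnote notes, only a subset of these 1,820 subsets is physically meaningful.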
--- abstract: | In this paper, we study the mountain pass characterization of the second eigenvalue of the operator $-{\Delta}_p u -{\Delta}_{J,p}u$ and shape optimization problems related to these eigenvalues. **Key words:** nonlocal $p$-Laplacian, eigenvalue problem, Faber-Krahn inequality, nonlocal Hong-Krahn-Szego inequality. *2010 Mathematics Subject Classification: 35P30, 47J10, 49Q10.* author: - | [**Divya Goel[^1] and K. Sreenadh[^2]**]{}\ Department of Mathematics,\ Indian Institute of Technology Delhi,\ Hauz Khas, New Delhi-110016, India. title: 'On the Second Eigenvalue of Combination Between Local and Nonlocal $p$-Laplacian ' --- Introduction ============ Let ${\Omega}$ be an open and bounded domain in $\mathbb{R}^N$ with $C^{1,\alpha}$ boundary. In this article, we study the following eigenvalue problem $$(P_{\lambda})\; \left.\begin{array}{rllll} \mathcal{L}_{J,p}(u) ={\lambda}|u|^{p-2}u \text{ in } {\Omega}, \; u=0 \text{ in } \mathbb{R}^N\setminus {\Omega}, \end{array} \right.$$ where the operator $\mathcal{L}_{J,p}(u)$ is defined as $\mathcal{L}_{J,p}u:= -{\Delta}_p u -{\Delta}_{J,p}u $, ${\Delta}_p(u):= \text{div}(|{\nabla}u|^{p-2}{\nabla}u)$ is the usual $p$-Laplacian operator and the nonlocal $p$-Laplacian is given by $$\begin{aligned} {\Delta}_{J,p}u(x):= 2 {\displaystyle}\int_{\mathbb{R}^N} |u(x)-u(y)|^{p-2}(u(x)-u(y))J(x-y)~ dy, \quad 1< p< \infty .\end{aligned}$$ Here the kernel $J:\mathbb{R}^N {\rightarrow}\mathbb{R}$ is a radially symmetric, nonnegative continuous function with compact support, $J(0)>0$ and $\int_{\mathbb{R}^N} J(x)~ dx =1$. Recently, the study of nonlocal equations has fascinated many researchers. In particular, equations involving the fractional $p$-Laplacian operator have gained a lot of attention.
In [@pal], Lindgren and Lindqvist studied the eigenvalues of the following problem $$\label{fs16} \begin{aligned} -2 \int_{\mathbb{R}^N} \frac{|u(x)-u(y)|^{p-2}(u(x)-u(y))}{|x-y|^{N+sp}}~dy = {\lambda}|u(x)|^{p-2}u(x) \text{ in } {\Omega}, \; u=0 \text{ in } \mathbb{R}^N\setminus {\Omega}.\end{aligned}$$ Here they studied the eigenvalues, viscosity solutions and the limit case as $p {\rightarrow}\infty$. Later in [@second], Brasco and Parini studied the problem in an open bounded, possibly disconnected set ${\Omega}\subset \mathbb{R}^N$ for $1<p<\infty$. In that paper, the authors also discussed the regularity of the eigenfunctions of the fractional $p$-Laplacian and gave the mountain pass characterization of its second eigenvalue. Moreover, they proved the nonlocal Hong-Krahn-Szego inequality. We cite [@bisci; @goel1; @hardy; @hitchhker] and references therein for the work on equations involving the fractional $p$-Laplacian. For work on the second eigenvalue of the $p$-Laplacian we cite [@cfg; @sree1] and references therein.\ On the other hand, nonlocal equations involving the nonlocal $p$-Laplacian of zero order, that is, the problem $$\begin{aligned} \label{fs17} - \int_{\mathbb{R}^N} |u(x)-u(y)|^{p-2}(u(x)-u(y))J(x-y)~ dy= {\lambda}|u|^{p-2}u \end{aligned}$$ has been studied in [@an1; @an3]. In these papers it has been proved that the Rayleigh quotient corresponding to problem  is strictly positive. We refer to [@an1; @an2; @an3] and references therein for the work on equations involving the nonlocal $p$-Laplacian of zero order.\ Our work is inspired by that of Del Pezzo et al. ([@rossi1]), where the authors studied the eigenvalue problem for the operator $\mathcal{L}_{J,p}$ and proved the existence of an eigenfunction for the smallest eigenvalue. In particular, they proved the following result: \[fsthm4\] Assume $p\geq 2$.
There exists a sequence of eigenvalues $\{{\lambda}_k\}_{k \in \mathbb{N}}$ of the operator $\mathcal{L}_{J,p}$ such that ${\lambda}_k{\rightarrow}+\infty$. The first eigenvalue ${\lambda}_1({\Omega})$ is simple, isolated and its corresponding eigenfunctions have a constant sign. Moreover, ${\lambda}_1({\Omega})$ can be characterized by $$\begin{aligned} {\lambda}_1({\Omega}):= \inf_{u \in W_0^{1,p}({\Omega})}\bigg\{ \int_{{\Omega}}|{\nabla}u|^p~dx+ {\displaystyle}\int_{\mathbb{R}^N}\int_{\mathbb{R}^N} |u(x)-u(y)|^{p}J(x-y)~ dxdy :\int_{{\Omega}}|u|^p~dx=1 \bigg\}. \end{aligned}$$ Furthermore, every eigenfunction belongs to $C^{1,{\alpha}}(\overline{{\Omega}})$ for some ${\alpha}\in (0,1)$. We remark that, by using the discrete Picone identity as in [@hardy], one can show that ${\lambda}_1({\Omega})$ is simple and isolated, and that eigenfunctions corresponding to eigenvalues other than ${\lambda}_1({\Omega})$ change sign, for all $1<p<\infty$. The variational characterization of the second eigenvalue and sharp lower bounds on the first and second eigenvalues remained open questions. In the present paper, we prove the variational characterization of the second eigenvalue of the operator associated to the problem $(P_{\lambda})$. Also, we consider the following shape optimization problems $$\begin{aligned} \label{fs18} & \inf\{{\lambda}_1({\Omega}): |{\Omega}|=c \},\\ \label{fs19} & \inf \{{\lambda}_2({\Omega}): |{\Omega}|=c \},\end{aligned}$$ where $c$ is a positive number. For the optimization problem , we prove the Faber-Krahn inequality (see Theorem \[fsthm2\]), which says that “In the class of all domains with fixed volume, the ball has the smallest first eigenvalue." Corresponding to the optimization problem , we first prove a result for nodal domains (see Lemma \[fslem7\]) whose statement can be rephrased as “Restriction of an eigenfunction to a nodal domain is not an eigenfunction of this nodal domain." This lemma is due to the nonlocal nature of the operator.
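In the linear case $p=2$ these spectral features (${\lambda}_1$ simple with a sign-definite eigenfunction, higher eigenfunctions changing sign) can be observed numerically. The following 1-D sketch, in which the grid, kernel and quadrature are our own illustrative choices and not part of the paper, discretizes $\mathcal{L}_{J,2}$ on ${\Omega}=(0,1)$:

```python
import numpy as np

# Illustrative 1-D discretization of L u = -u'' - Delta_J u on Omega = (0, 1)
# for the linear case p = 2 (grid, kernel and quadrature are assumptions).
def J(r):
    # Epanechnikov kernel: continuous, compactly supported, J(0) > 0, integral 1.
    r = np.abs(r)
    return np.where(r < 1.0, 0.75 * (1.0 - r * r), 0.0)

n = 200                               # interior grid points
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)

# Local part: standard 3-point Dirichlet Laplacian.
A_loc = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

# Nonlocal part: (-Delta_J u)(x_i) = 2 [ u(x_i) * int J - int u(y) J(x_i - y) dy ],
# with u = 0 outside (0, 1) and int J = 1; midpoint rule for the second integral.
K = J(x[:, None] - x[None, :])
A_nl = 2.0 * (np.eye(n) - h * K)

lam, V = np.linalg.eigh(A_loc + A_nl)   # ascending eigenvalues
phi1, phi2 = V[:, 0], V[:, 1]

assert lam[0] < lam[1]                              # lambda_1 is simple
assert np.all(phi1 > 0) or np.all(phi1 < 0)         # first eigenfunction: constant sign
assert np.any(phi2 > 0) and np.any(phi2 < 0)        # second eigenfunction: changes sign
```

The symmetric matrix `A_loc + A_nl` plays the role of $\mathcal{L}_{J,2}$ with Dirichlet exterior condition; the Perron-Frobenius structure of the discretization reproduces the sign properties above.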
Next we prove the nonlocal Hong-Krahn-Szego inequality for the operator associated to problem $(P_{\lambda})$ (see Theorem \[fsthm3\]), which states that “In the class of all domains with fixed volume, the smallest second eigenvalue is obtained for the disjoint union of two balls." This implies that the shape optimization problem  does not admit a solution. Since the Rayleigh quotient corresponding to problem $(P_{\lambda})$ is not scale invariant, handling the combined effects of the $p$-Laplacian and the nonlocal $p$-Laplacian of zero order differs significantly from the individual cases. We now state our main results: \[fsthm1\] Let $1<p<\infty$ and ${\Omega}\subset \mathbb{R}^N$ be an open and bounded set. Then there exists a positive number ${\lambda}_2({\Omega})$ with the following properties: 1. ${\lambda}_2({\Omega})$ is an eigenvalue of the operator $\mathcal{L}_{J,p}$. 2. ${\lambda}_2({\Omega})> {\lambda}_1({\Omega})$. 3. If ${\lambda}> {\lambda}_1({\Omega})$ is an eigenvalue, then ${\lambda}\geq {\lambda}_2({\Omega})$. Furthermore, ${\lambda}_2({\Omega})$ has the following variational characterization $$\begin{aligned} {\lambda}_2({\Omega})= \inf_{{\gamma}\in \Gamma}\sup_{u\in {\gamma}}\left(\int_{{\Omega}}|{\nabla}u|^{p} ~ dx +\int_{\mathbb{R}^N}\int_{\mathbb{R}^N} |u(x)-u(y)|^p J(x-y)~dxdy\right), \end{aligned}$$ where $\Gamma =\{{\gamma}\in C([-1,1], \mathcal{M}): {\gamma}(-1)=-\phi_{1} \;\mbox{and}\; {\gamma}(1)=\phi_1\}$, $\phi_1$ is the normalized eigenfunction corresponding to ${\lambda}_1({\Omega})$ and $\mathcal{M}$ is defined in . \[fsthm2\] (Faber-Krahn inequality): Let $p\geq 2$, $c$ be a positive real number and $B$ be the ball of volume $c$. Then $$\begin{aligned} {\lambda}_1(B) = \inf\left\{ {\lambda}_1({\Omega}), \; {\Omega}\text{ open subset of } \mathbb{R}^N, \; |{\Omega}|=c \right\}. \end{aligned}$$ Next we state a theorem giving a sharp lower bound on ${\lambda}_2({\Omega})$.
\[fsthm3\](Nonlocal Hong-Krahn-Szego inequality) Let $p\geq 2$ and ${\Omega}\subset\mathbb{R}^N$ be an open bounded set. Assume $B$ is any ball of volume $|{\Omega}|/2$. Then $$\begin{aligned} \label{fs15} {\lambda}_2({\Omega})>{\lambda}_1(B). \end{aligned}$$ Moreover, equality is never attained in , but the estimate is sharp in the following sense: if $\{s_n\}$ and $\{t_n\}$ are two sequences in $\mathbb{R}^N$ such that ${\displaystyle}\lim_{n{\rightarrow}\infty}|s_n-t_n|= +\infty$ and ${\Omega}_n:= B_R(s_n)\cup B_R(t_n)$, then ${\displaystyle}\lim_{n{\rightarrow}\infty}{\lambda}_2({\Omega}_n)= {\lambda}_1(B_R) $. The paper is organized as follows: In Section 2 we give the variational framework and preliminary results. In Section 3 we give the proof of Theorem \[fsthm1\]. In Section 4 we give the sharp lower bounds on the first and second eigenvalues of the operator associated to problem $(P_{\lambda})$; in particular, we prove the Faber-Krahn inequality and the nonlocal Hong-Krahn-Szego inequality. In Section 5, we discuss the eigenvalue problem associated with the combination of the $p$-Laplacian and the fractional $p$-Laplacian. Variational Framework and Preliminary results ============================================= The energy functional $I: W^{1,p}_0 ({\Omega}) {\rightarrow}{\mathbb}R $ associated with problem $(P_{\lambda})$ is given by $$\begin{aligned} I(u)= \int_{{\Omega}}|{\nabla}u|^{p} ~ dx +\int_{\mathbb{R}^N}\int_{\mathbb{R}^N} |u(x)-u(y)|^p J(x-y)~dxdy -{\lambda}\int_{{\Omega}}|u|^p dx.\end{aligned}$$ Note that $I$ is well defined on $W^{1,p}_0 ({\Omega})$ by extending $u=0$ on $\mathbb{R}^N\setminus{\Omega}$. Moreover, a direct computation shows that $I\in C^{1}( W^{1,p}_0 ({\Omega}),{\mathbb}R)$ with $$\begin{aligned} \langle I^{\prime}(u),\phi \rangle = p\; \mathcal{H}_{J,p}(u,\phi) - {\lambda}p \int_{{\Omega}}|u|^{p-2}u \phi dx, \end{aligned}$$ for any $\phi\in W^{1,p}_0 ({\Omega})$.
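The formula for $\langle I^{\prime}(u),\phi \rangle$ can be obtained by differentiating each term of $I$ along $\phi$; as a sketch, for the nonlocal term one computes, with $w = u(x)-u(y)$ and $\psi = \phi(x)-\phi(y)$,

```latex
\frac{d}{d\varepsilon}\bigg|_{\varepsilon=0}
\int_{\mathbb{R}^N}\int_{\mathbb{R}^N} |w+\varepsilon\psi|^{p}\, J(x-y)~ dx\, dy
= p \int_{\mathbb{R}^N}\int_{\mathbb{R}^N} |w|^{p-2}\, w\, \psi\, J(x-y)~ dx\, dy .
```

The gradient term and the $L^p$ term are differentiated in the same way, and the three contributions assemble into $p\,\mathcal{H}_{J,p}(u,\phi) - {\lambda}p \int_{{\Omega}}|u|^{p-2}u \phi\, dx$.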
A function $u \in W^{1,p}_0({\Omega})$ is a solution of $(P_{\lambda})$ if $u$ satisfies the equation $$\begin{aligned} \mathcal{H}_{J,p}(u,\phi)= {\lambda}\int_{{\Omega}}|u|^{p-2}u \phi~ dx,\; \; \text{for all } \phi \in W^{1,p}_0({\Omega}), \end{aligned}$$ where $$\begin{aligned} \mathcal{H}_{J,p}(u,\phi):=& \int_{{\Omega}}|{\nabla}u|^{p-2}{\nabla}u \cdot {\nabla}\phi~ dx\\ & \quad + \int_{\mathbb{R}^N} \int_{\mathbb{R}^N} |u(x)-u(y)|^{p-2}(u(x)-u(y))(\phi(x)-\phi(y))J(x-y)~ dx dy. \end{aligned}$$ Also $\tilde{I}:= I|_{{\mathcal}M}$ is $C^1(W^{1,p}_0 ({\Omega}),{\mathbb}R)$, where ${\mathcal}M$ is defined as $$\begin{aligned} \label{fs6} {\mathcal}M :=\left\{u\in W^{1,p}_0 ({\Omega}):\; \; S(u):= \int_{{\Omega}}|u|^p=1\right\}.\end{aligned}$$ Hence, any critical point $u\in {\mathcal}M$ of $\tilde{I}$ is a nontrivial weak solution of the problem $(P_{\lambda})$. \[fsprop1\] [@AR] Let $Y$ be a Banach space, $F,G \in C^{1}(Y,{\mathbb}R)$, $M=\{u\in Y \;|\; G(u)=1\}$ and $u$, $v\in M$. Let ${\varepsilon}>0$ be such that $\|u-v\|>{\varepsilon}$ and $$\inf\{F(w): w\in M \;\mbox{and}\; \|w-u\|_{Y}={\varepsilon}\}>\max\{F(u),F(v)\}.$$ Assume that $F$ satisfies the Palais-Smale condition on $M$ and that $$\Gamma =\{{\gamma}\in C([-1,1], M): {\gamma}(-1)=u \;\mbox{and}\; {\gamma}(1)=v\}$$ is nonempty. Then ${\displaystyle}c=\inf_{{\gamma}\in \Gamma}\max_{w\in{\gamma}[-1,1]} F(w) >\max\{F(u),F(v)\}$ is a critical value of $F|_M$. Observe that $$\begin{aligned} \tilde{I}(u)= \int_{{\Omega}}|{\nabla}u|^{p} ~ dx +\int_{\mathbb{R}^N}\int_{\mathbb{R}^N} |u(x)-u(y)|^p J(x-y)~dxdy \geq {\lambda}_1({\Omega}) \int_{{\Omega}}|u|^p, \end{aligned}$$ for all $u \in W^{1,p}_0 ({\Omega})$. This implies that for any $u \in \mathcal{M}$ we have $\tilde{I}(u)\geq {\lambda}_1({\Omega})$. Since $\tilde{I}(\pm \phi_1)= {\lambda}_1({\Omega})$, we deduce that $\pm \phi_1$ are the two global minima of $\tilde{I}$, and in particular critical points of $\tilde{I}$. We will now find the third critical point via Proposition \[fsprop1\].
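Before doing so, let us record why critical points of $\tilde{I}$ on ${\mathcal}M$ are eigenfunctions (a standard Lagrange-multiplier sketch): if $u\in {\mathcal}M$ is a critical point of $\tilde{I}$, then $I^{\prime}(u)=\mu S^{\prime}(u)$ for some $\mu\in{\mathbb}R$, which, after absorbing ${\lambda}$ into $t={\lambda}+\mu$, reads

```latex
\mathcal{H}_{J,p}(u,\phi)= t\int_{\Omega}|u|^{p-2}u\,\phi~dx
\quad\text{for all } \phi\in W^{1,p}_0(\Omega),
\qquad\text{and, taking } \phi=u,\quad
t=\mathcal{H}_{J,p}(u,u)=\tilde{I}(u).
```

Thus $u$ is an eigenfunction of $\mathcal{L}_{J,p}$ with eigenvalue equal to its critical level $\tilde{I}(u)$.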
The norm of the derivative of the restriction $\tilde{I}$ of $I$ at $u\in {\mathcal}M$ is defined as $$\|\tilde{I}^{\prime}(u)\|_{*}=\inf\{\|I^{\prime}(u)- t S^{\prime}(u)\|_{*}: t\in {\mathbb}R\}.$$ \[fslem8\] $\tilde{I}$ satisfies the Palais-Smale condition on ${\mathcal}M$. Let $\{u_n\}_{n\in \mathbb{N}}$ be a sequence in ${\mathcal}M$ such that $\tilde{I}(u_n){\rightarrow}c$ and $\|\tilde{I}^{\prime}(u_n)\|_{*} {\rightarrow}0 $ for some $c \in \mathbb{R}$. As a consequence, there exists a sequence $\{t_n\}\subset {\mathbb}R$ such that for all $\phi \in W^{1,p}_0 ({\Omega}) $ and for some $C>0$, $$\begin{aligned} \label{fs2} |I(u_n)|\leq C \text{ and } \left| \mathcal{H}_{J,p}(u_n, \phi) - t_n \int_{{\Omega}} |u_{n}|^{p-2} u_{n} \phi ~dx \right|\leq {\varepsilon}_{n}\|\phi \|\end{aligned}$$ where ${\varepsilon}_n{\rightarrow}0$. From  and the Sobolev embedding, we obtain that $\{u_n\}$ is bounded in $W^{1,p}_0 ({\Omega})$. Hence, up to a subsequence, still denoted by $u_n$, there exists $u \in W^{1,p}_0 ({\Omega})$ such that $u_n\rightharpoonup u$ weakly in $W^{1,p}_0 ({\Omega})$. Moreover, $u_{n}{\rightarrow}u $ strongly in $L^{q}({\Omega})$ for all $1\leq q< p^*$ and $u_n {\rightarrow}u $ a.e in ${\Omega}$. Taking $\phi=u_n$ in , we get $$|t_n|\leq \int_{{\Omega}}| {\nabla}u_n|^{p} ~ dx + \int_{\mathbb{R}^N}\int_{\mathbb{R}^N}|u_n(x)-u_n(y)|^p J(x-y)~dxdy + {\varepsilon}_{n}\|u_n\|\leq C.$$ Thus $\{t_n\}$ is a bounded sequence, i.e., up to a subsequence, $t_n {\rightarrow}t $ as $n {\rightarrow}\infty$, for some $t \in \mathbb{R}$.\ **Claim :** $u_n{\rightarrow}u$ strongly in $W^{1,p}_0 ({\Omega})$. Since $u_n\rightharpoonup u$ weakly in $W^{1,p}_0 ({\Omega})$, we get $$\label{fs3} \begin{aligned} \mathcal{H}_{J,p}(u,u_n) {\rightarrow}\mathcal{H}_{J,p}(u,u) \text{ as } n\rightarrow \infty.
\end{aligned}$$ We use the following elementary inequality: for all $ a, b \in \mathbb{R}^{n}$, $$\begin{aligned} |a-b|^{r} \leq \left\{ \begin{array}{ll} C_{r}\left((|a|^{r-2}a-|b|^{r-2}b)(a-b)\right)^{\frac{r}{2}}\left(|a|^r+|b|^{r}\right)^{\frac{2-r}{2}} , & \text{ if }1< r < 2, \\ 2^{r-2}(|a|^{r-2}a-|b|^{r-2}b)(a-b), & \text{ if } r \geq 2 .\\ \end{array} \right. \end{aligned}$$ Combining this with the fact that ${\langle}\tilde{I^{\prime}}(u_n),(u_n-u){\rangle}= o({\varepsilon}_n)$ and , we deduce that $$\begin{aligned} \int_{{\Omega}}|{\nabla}(u_n-u)|^p ~ dx + \int_{\mathbb{R}^N}\int_{\mathbb{R}^N} |(u_n-u)(x)-(u_n-u)(y)|^{p} J(x-y) ~ dx dy {\longrightarrow}0\;\mbox{as}\; n{\rightarrow}\infty.\end{aligned}$$ Thus, $u_n$ converges strongly to $u$ in $W^{1,p}_0 ({\Omega})$. Define $$\begin{aligned} \label{fs7} {\lambda}_*={\displaystyle}\inf_{{\gamma}\in \Gamma}\max_{u\in{\gamma}[-1,1]} \tilde{I}(u), \end{aligned}$$ where $\Gamma =\{{\gamma}\in C([-1,1], \mathcal{M}): {\gamma}(-1)=-\phi_{1} \;\mbox{and}\; {\gamma}(1)=\phi_1\}$. Let ${\gamma}(t)= \frac{t\phi_1+(1-|t|)\phi}{\|t\phi_1+(1-|t|)\phi\|_{L^p}}$, where $\phi \not \in \mathbb{R} \phi_1$. This shows that $\Gamma$ is nonempty. Using Proposition \[fsprop1\], ${\lambda}_*$ is a critical value of $\tilde{I}$ and ${\lambda}_*>{\lambda}_1({\Omega})$. \[fsprop2\] Let $A$ and $B$ be two bounded open sets in $\mathbb{R}^N$ with $A \subsetneq \; B$ and $B$ connected; then ${\lambda}_1(A)> {\lambda}_1(B)$. By the definition of ${\lambda}_1(A)$, ${\lambda}_1(A)\geq {\lambda}_1(B)$. Suppose, if possible, that ${\lambda}_1(A)={\lambda}_1(B)$, and let $\phi_A$ be a normalized eigenfunction for ${\lambda}_1(A)$; then $\phi_A=0 $ on $\mathbb{R}^N\setminus A$.
Therefore, $$\begin{aligned} \int_{B}|{\nabla}\phi_A|^p ~dx &+ \int_{\mathbb{R}^N}\int_{\mathbb{R}^N}|\phi_A(x)-\phi_A(y)|^pJ(x-y)~dxdy\\ & = \int_{A}|{\nabla}\phi_A|^p ~dx + \int_{\mathbb{R}^N}\int_{\mathbb{R}^N}|\phi_A(x)-\phi_A(y)|^pJ(x-y)~dxdy\\ & = {\lambda}_1(A)\int_A |\phi_A|^p~dx\\ & = {\lambda}_1(B)\int_B |\phi_A|^p~dx. \end{aligned}$$ This implies $\phi_A$ is an eigenfunction corresponding to ${\lambda}_1(B)$. But this is impossible, since $B$ is connected and $\phi_A$ vanishes on $B\setminus A \neq \emptyset$. In [@cfg Lemmas 3.5 and 3.6 ] and [@second Lemma B.1] the following lemmas were proved: \[fslem1\] Let ${\mathcal}M= \{u\in W^{1,p}_0 ({\Omega}) : \int_{{\Omega}}|u|^p~dx =1\}$; then ${\mathcal}M$ is locally arcwise connected and any open connected subset ${\mathcal}S$ of ${\mathcal}M$ is arcwise connected. Moreover, if ${\mathcal}S^{'}$ is any connected component of an open set ${\mathcal}S\subset {\mathcal}M$, then $\partial {\mathcal}S^{\prime}\cap {\mathcal}S=\emptyset$. \[fslem2\] Let ${\mathcal}S=\{u\in {\mathcal}M : \tilde{I}(u)<r\}$; then any connected component of ${\mathcal}S$ contains a critical point of $\tilde{I}$. \[fslem3\] Let $1 \leq p < \infty$ and $U,V \in {\mathbb}R$ be such that $U\cdot V \leq 0$. Define the following function $$g(t)=|U -tV|^p+|U-V|^{p-2}(U-V)V|t|^p,\; t \in {\mathbb}R.$$ Then we have $$g(t)\leq g(1)=|U-V|^{p-2}(U-V)U,\; t \in {\mathbb}R.$$ \[fslem4\] Let ${\alpha}\in (0,1)$ and $p>1$. For any non-negative functions $u$, $v \in W^{1,p}_0 ({\Omega})$, consider the function $\sigma_t (x):= \left[(1-t)v^p(x)+ tu^p(x)\right]^{\frac{1}{p}}$ for all $t\in[0, 1]$. Then for all $t \in [0,1]$, $$\begin{aligned} \int_{\mathbb{R}^N}\int_{\mathbb{R}^N} |\sigma_t (x)-\sigma_t (y)|^p J(x-y) ~dxdy & \leq (1-t) \int_{\mathbb{R}^N}\int_{\mathbb{R}^N} |v (x)-v (y)|^p J(x-y) ~dxdy \\& \quad + t\int_{\mathbb{R}^N}\int_{\mathbb{R}^N} |u (x)-u (y)|^p J(x-y) ~dxdy. \end{aligned}$$ The proof is analogous to [@pal Lemma 4.1].
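The hidden-convexity inequality of Lemma \[fslem4\] in fact holds pointwise: $|\sigma_t(x)-\sigma_t(y)|$ is bounded by the triangle inequality for the weighted $\ell^p$-norm of the pair $((1-t)^{1/p}v, t^{1/p}u)$. It can therefore be sanity-checked numerically on a discrete grid; the following minimal sketch uses our own illustrative kernel and random nonnegative samples, not data from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def J(r):
    # Illustrative compactly supported kernel (Epanechnikov).
    r = np.abs(r)
    return np.where(r < 1.0, 0.75 * (1.0 - r * r), 0.0)

def E(w, x, p):
    # Discrete analogue of the double integral  int int |w(x)-w(y)|^p J(x-y) dx dy.
    d = np.abs(w[:, None] - w[None, :])
    return np.sum(d**p * J(x[:, None] - x[None, :]))

n, p = 50, 3.0
x = np.linspace(0.0, 1.0, n)
u, v = rng.random(n), rng.random(n)          # nonnegative sample "functions"

for t in np.linspace(0.0, 1.0, 11):
    sigma_t = ((1.0 - t) * v**p + t * u**p) ** (1.0 / p)
    # Convexity of the nonlocal energy along the path t -> sigma_t:
    assert E(sigma_t, x, p) <= (1.0 - t) * E(v, x, p) + t * E(u, x, p) + 1e-6
```

The check passes for any nonnegative samples because the underlying pointwise inequality is exact; the tolerance only absorbs floating-point rounding.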
Proof of Theorem \[fsthm1\] =========================== \[fslem5\] Let $1<p<\infty$. Then the number ${\lambda}_*$ (defined in ) is the second smallest eigenvalue of $\mathcal{L}_{J,p}$. On the contrary, assume that there exists an eigenvalue $s$ such that ${\lambda}_{1}({\Omega})<s <{\lambda}_*$. Then $s$ is a critical value of $\tilde{I}$. Since ${\lambda}_1({\Omega})$ is isolated, we may assume that $\tilde{I}$ has no critical value in $({\lambda}_{1}({\Omega}),s)$. To get a contradiction, it is enough to construct a path ${\gamma}$ connecting $\phi_{1}$ to $-\phi_{1}$ such that $\tilde{I}({\gamma})\leq s $. Let $u\in {\mathcal}M$ be a critical point of $\tilde{I}$ at level $s$. Then $u$ satisfies $$\label{fs8} \begin{aligned} \mathcal{H}_{J,p}(u,\phi)=s\int_{{\Omega}}|u|^{p-2}u \phi ~dx \text{ for all } \phi \in {W}^{1,p}_0({\Omega}). \end{aligned}$$ Since $s>{\lambda}_1({\Omega})$, $u$ changes sign in ${\Omega}$. Taking $\phi= u^{+}$ and $\phi= u^-$ in , we get [$$\label{fs9} \int_{{\Omega}}|{\nabla}u^+ |^{p} ~dx+ \int_{\mathbb{R}^N}\int_{\mathbb{R}^N} |u(x)-u(y)|^{p-2}(u(x)-u(y))(u^+(x)-u^+(y)) J(x-y)~ dxdy = s \int_{{\Omega}}(u^{+})^p dx,$$]{} and [$$\label{fs10} \int_{{\Omega}}|{\nabla}u^- |^{p} ~dx- \int_{\mathbb{R}^N}\int_{\mathbb{R}^N} |u(x)-u(y)|^{p-2}(u(x)-u(y))(u^-(x)-u^-(y))J(x-y)~ dxdy = s\int_{{\Omega}} (u^{-})^p dx.$$]{} So as a consequence, we have $$\begin{aligned} \label{fs11} & \int_{{\Omega}}|{\nabla}u^+ |^{p} ~dx+ \int_{\mathbb{R}^N}\int_{\mathbb{R}^N} |u^+(x)-u^{+}(y)|^pJ(x-y)~ dxdy \leq s \int_{{\Omega}}|u^+|^p~dx,\\ \label{fs12} & \int_{{\Omega}}|{\nabla}u^- |^{p} ~dx+ \int_{\mathbb{R}^N}\int_{\mathbb{R}^N} |u^-(x)-u^{-}(y)|^pJ(x-y)~ dxdy \leq s \int_{{\Omega}}|u^-|^p~dx.\end{aligned}$$ It further implies that $$\tilde{I}(u)=s,\; \tilde{I}\left(\frac{ u^+}{ \|u^+\|_{L^p}}\right) \leq s,\;\tilde{I}\left(\frac{u^-}{\|u^-\|_{L^p}}\right)\leq s,\;\tilde{I}\left(\frac{-u^-}{\|u^-\|_{L^p}}\right)\leq s .$$ Now, we will define three paths in ${\mathcal}M$, which connect $u$ to $\frac{ u^+}{ \|u^+\|_{L^p}}$, $\frac{ u^+}{ \|u^+\|_{L^p}}$ to $\frac{u^-}{\|u^-\|_{L^p}}$, and $u$ to $\frac{-u^-}{\|u^-\|_{L^p}}:$ $$\begin{aligned} & {\gamma}_{1}(t)=\frac{u^+- (1-t)u^-}{ \|u^+- (1-t)u^-\|_{L^p}}, \; {\gamma}_2(t)=\frac{[(1-t)(u^{+})^p+ t(u^{-})^p]^{1/p}}{\|[(1-t)(u^{+})^p+ t(u^{-})^p]^{1/p}\|_{L^p}} ,\; {\gamma}_3(t)=\frac{(1-t)u^{+}-u^{-}}{ \|(1-t)u^{+}-u^{-}\|_{L^p}}.\end{aligned}$$ Taking into account , and Lemma \[fslem3\] with $U=u^+(x)-u^+(y)$ and $V=u^-(x)-u^-(y)$, we deduce that for all $t\in[0,1]$, $$\begin{aligned} \tilde{I}({\gamma}_1(t)) &\leq \frac{ {\displaystyle}\int_{{\Omega}}|{\nabla}u^+|^p~dx + \displaystyle\int_{\mathbb{R}^N}\int_{\mathbb{R}^N} |U-V|^{p-2}(U-V)U J(x-y)~dxdy }{ \|u^+- (1-t)u^-\|_{L^p}^p}\\ &\quad +\frac{|1-t|^p\left[{\displaystyle}\int_{{\Omega}}|{\nabla}u^-|^p~dx -\displaystyle\int_{\mathbb{R}^N}\int_{\mathbb{R}^N}|U-V|^{p-2}(U-V)V J(x-y)~dxdy \right]}{\|u^+- (1-t)u^-\|_{L^p}^p}\\ &=s.\end{aligned}$$ By means of Lemma \[fslem4\], we deduce $$\begin{split} \tilde{I}({\gamma}_2(t)) &\leq \frac{(1-t)\left[{\displaystyle}\int_{{\Omega}}|{\nabla}u^+|^p~dx +\displaystyle\int_{\mathbb{R}^N}\int_{\mathbb{R}^N}|u^+(x)-u^+(y)|^p J(x-y)~dxdy \right] }{ \|[(1-t)(u^{+})^p+ t(u^{-})^p]^{1/p}\|^p_{L^p}}\\ & \quad +\frac{t \left[ {\displaystyle}\int_{{\Omega}}|{\nabla}u^-|^p~dx+\displaystyle\int_{\mathbb{R}^N}\int_{\mathbb{R}^N}|u^-(x)-u^-(y)|^p J(x-y)~dxdy \right]}{ \|[(1-t)(u^{+})^p+ t(u^{-})^p]^{1/p}\|^p_{L^p}}\\ & \leq s. \end{split}$$ Once again from , and Lemma \[fslem3\] with $U=u^-(y)-u^-(x)$ and $V=u^+(y)-u^+(x)$, we obtain $$\begin{aligned} \tilde{I}({\gamma}_3(t)) &\leq \frac{ {\displaystyle}\int_{{\Omega}}|{\nabla}u^-|^p~dx+ \displaystyle\int_{\mathbb{R}^N}\int_{\mathbb{R}^N} |U-V|^{p-2}(U-V)U J(x-y) ~dxdy }{ \|(1-t)u^{+}-u^{-}\|_{L^p}^p}\\&\quad + \frac{ |1-t|^p \left[{\displaystyle}\int_{{\Omega}}|{\nabla}u^+|^p~dx - \displaystyle \int_{\mathbb{R}^N}\int_{\mathbb{R}^N} |U-V|^{p-2}(U-V)V J(x-y)~dxdy \right]}{\|(1-t)u^{+}-u^{-}\|_{L^p}^p}\\ &=s.\end{aligned}$$ Clearly $\pm \phi_{1}\in {\mathcal}S$, where ${\mathcal}S = \{v\in{\mathcal}M : \tilde{I}(v)<s \}$. Also, $\frac{ u^-}{\|u^-\|_{L^p}} $ is not a critical point of $\tilde{I}$, thanks to the fact that $\frac{ u^-}{\|u^-\|_{L^p}} $ does not change sign and vanishes on a set of positive measure. Therefore, there exists a $C^1$ path $\sigma : [-{\delta},{\delta}]{\rightarrow}{\mathcal}M$ with $\sigma(0)= \frac{u^-}{\|u^-\|_{L^p}}$ and $\frac{d}{dt}\tilde{I}(\sigma(t))|_{t=0}\ne 0$. With the help of this path we can move from $\frac{ u^-}{\|u^-\|_{L^p}}$ to a point $v$ with $\tilde{I}(v)<s$. Consider a connected component of ${\mathcal}S$ containing $v$; employing Lemma \[fslem2\], we get that $\phi_{1}$ (or $-\phi_{1}) $ is in this component. Let us assume that it is $\phi_{1}$. At this point we construct a path ${\gamma}_{4}(t)$ from $\frac{ u^-}{\|u^-\|_{L^p}}$ to $\phi_{1}$ which is at levels $\leq s$. Consider the symmetric path $-{\gamma}_{4}(t)$, which connects $\frac{- u^-}{\|u^-\|_{L^p}}$ to $-\phi_{1}$. Since $\tilde{I}$ is even, $$\tilde{I}(-{\gamma}_4(t))= \tilde{I}({\gamma}_4(t))\leq s \;\mbox{for all}\; t .$$ Lastly, we can connect ${\gamma}_1(t)$, ${\gamma}_2(t)$ and ${\gamma}_4(t)$ to obtain a path from $u$ to $\phi_{1}$, and joining ${\gamma}_3(t)$ and $-{\gamma}_4(t)$ we get a path from $u$ to $-\phi_{1}$. Taking all this together, we get a path in ${\mathcal}M$ from $\phi_{1}$ to $-\phi_{1}$ at levels $\leq s$. Hence ${\lambda}_*\leq s$, a contradiction. This completes the proof. **Proof of Theorem \[fsthm1\] :** By Theorem 3.3 of [@rossi1], there exists a positive number ${\lambda}_2({\Omega})$ given by $$\begin{aligned} {\lambda}_2({\Omega}) = \inf_{A\in \mathcal{A}}\sup_{ u\in A} \mathcal{H}_{J,p}(u,u), \end{aligned}$$ where $\mathcal{A}= \{ A \subset \mathcal{M}: A \text{ compact, symmetric, of genus } \geq 2 \}$. Let ${\gamma}$ be a curve in $\Gamma$; then, by joining it with its symmetric path $-{\gamma}$, we obtain a compact symmetric set of genus $\geq 2$ on which $\tilde{I}$ does not exceed $\sup_{u\in{\gamma}}\tilde{I}(u)$. Hence, $ {\lambda}_2({\Omega})\leq {\lambda}_*$ (defined in ). From Lemma \[fslem5\], ${\lambda}_*$ is the second smallest eigenvalue; that is, there is no eigenvalue between ${\lambda}_1({\Omega})$ and ${\lambda}_*$. Since ${\lambda}_2({\Omega})>{\lambda}_1({\Omega})$ is an eigenvalue, it implies ${\lambda}_*\leq {\lambda}_2({\Omega})$. Therefore, ${\lambda}_2({\Omega})$ is the second eigenvalue of the operator $\mathcal{L}_{J,p}$ with variational characterization $$\begin{aligned} {\lambda}_{2}({\Omega}) := & \inf_{{\gamma}\in \Gamma}\sup_{u\in {\gamma}}\left(\int_{{\Omega}}|{\nabla}u|^p~dx + \int_{\mathbb{R}^N}\int_{\mathbb{R}^N} |u(x)-u(y)|^p J(x-y)~dxdy \right),\end{aligned}$$ where $\Gamma =\{{\gamma}\in C([-1,1], \mathcal{M}): {\gamma}(-1)=-\phi_{1} \;\mbox{and}\; {\gamma}(1)=\phi_1\}$. Proof of Theorems \[fsthm2\] and \[fsthm3\] ============================================ In this section we will give sharp lower bounds on ${\lambda}_1({\Omega})$ and ${\lambda}_{2}({\Omega})$ in terms of the volume of ${\Omega}$. We will assume that $p\geq 2$ and that $J$ is a radially symmetric, decreasing, nonnegative continuous function with compact support, $J(0)>0$ and $\int_{\mathbb{R}^N} J(x)~ dx =1$. With this assumption, $J^*(x)= J(x)$, where $J^*$ stands for the symmetric decreasing rearrangement of the function $J$.
Also, we have the following Polya-Szego inequality: $$\begin{aligned} \label{fs13} \int_{\mathbb{R}^N}\int_{\mathbb{R}^N} |u^*(x)-u^*(y)|^{p}J(x-y)~ dxdy \leq \int_{\mathbb{R}^N}\int_{\mathbb{R}^N} |u(x)-u(y)|^{p}J(x-y)~ dxdy. \end{aligned}$$ For the proof of , we refer to [@leib1 Corollary 2.3].\ **Proof of Theorem \[fsthm2\] :** Let ${\Omega}$ be a bounded open set of volume $c$ and ${\Omega}^* = B$ the ball of the same volume. Let $\phi_1$ be the eigenfunction corresponding to ${\lambda}_1({\Omega})$ and let $\phi_1^*$ be the Schwarz symmetrization of the function $\phi_1$. Then, by the Polya-Szego inequality (see [@henrot Theorem 2.1.3] and [@leib1 Corollary 2.3]), we have $$\begin{aligned}\label{fs20} & \int_{{\Omega}^*}|{\nabla}\phi_1^*|^p~dx + \int_{\mathbb{R}^N}\int_{\mathbb{R}^N} |\phi_1^*(x)-\phi_1^*(y)|^{p}J(x-y)~ dxdy \\ & \quad \leq \int_{{\Omega}}|{\nabla}\phi_1|^p~dx +\int_{\mathbb{R}^N}\int_{\mathbb{R}^N} |\phi_1(x)-\phi_1(y)|^{p}J(x-y)~ dxdy. \end{aligned}$$ Moreover, we know that ${\displaystyle}\int_{{\Omega}^*}| \phi_1^*|^p~dx= \int_{{\Omega}}| \phi_1|^p~dx$. Therefore, by the definition of ${\lambda}_1({\Omega}^*)$, we obtain $$\begin{aligned} {\lambda}_1({\Omega}^*) & \leq \frac{ {\displaystyle}\int_{{\Omega}^*}|{\nabla}\phi_1^*|^p~dx + \int_{\mathbb{R}^N}\int_{\mathbb{R}^N} |\phi_1^*(x)-\phi_1^*(y)|^{p}J(x-y)~ dxdy}{\|\phi_1^*\|^p_{L^p}}\\ & \leq \frac{ {\displaystyle}\int_{{\Omega}}|{\nabla}\phi_1|^p~dx + \int_{\mathbb{R}^N}\int_{\mathbb{R}^N} |\phi_1(x)-\phi_1(y)|^{p}J(x-y)~ dxdy}{\|\phi_1\|^p_{L^p}} = {\lambda}_1({\Omega}). \end{aligned}$$ Furthermore, if ${\lambda}_1({\Omega})= {\lambda}_1(B)$, then equality must hold in . Then, using [@frank Lemma A.2], we have that $\phi_1$ is a translation of a radially symmetric decreasing function. It implies that ${\Omega}$ is a ball. This yields the required result. \[fslem6\] Let $1<p<\infty$ and $a,\; b \in \mathbb{R}$; then the following hold: 1.
There exists $c_p>0$ such that $$\begin{aligned} |a-b|^p\leq |a|^p+|b|^p+c_p(|a|^2+|b|^2)^{\frac{p-2}{2}}|ab|. \end{aligned}$$ 2. If $ab\leq 0$, then $$|a-b|^{p-2}(a-b)a \geq \; \left\{ \begin{array}{ll} |a|^p-(p-1)|a-b|^{p-2}ba & \text{ if }1<p<2, \\ |a|^p-(p-1)|a|^{p-2}ba & \text{ if } p>2. \\ \end{array} \right.$$ For a detailed proof, see [@second Lemmas B.2 and B.3]. \[fslem7\] (Nodal domains) Let ${\lambda}>{\lambda}_1({\Omega})$ be an eigenvalue of $\mathcal{L}_{J,p}$ and $\phi_{\lambda}$ be the associated eigenfunction. Define the sets $$\begin{aligned} {\Omega}^+:= \{ x \in {\Omega}: \phi_{\lambda}(x)>0 \} \quad \text{and} \quad {\Omega}^-:= \{ x \in {\Omega}: \phi_{\lambda}(x)<0 \} .\end{aligned}$$ Then ${\lambda}> \max\{ {\lambda}_1({\Omega}^+),\; {\lambda}_1({\Omega}^-) \}$. By [@rossi1 Corollary 3.1], we have $\phi_{\lambda}\in C^{1,{\alpha}}(\overline{{\Omega}})$ for some ${\alpha}\in (0,1)$. Therefore, ${\Omega}^+$ and ${\Omega}^-$ are open subsets of ${\Omega}$, and hence ${\lambda}_1({\Omega}^+) $ and ${\lambda}_1({\Omega}^-)$ are well defined. Also, from [@rossi1 Lemma 3.3], $\phi_{\lambda}$ changes sign in ${\Omega}$. Since $\phi_{\lambda}$ is an eigenfunction, we have $$\begin{aligned} \label{fs14} \mathcal{H}_{J,p}(\phi_{\lambda},v)= {\lambda}\int_{{\Omega}}|\phi_{\lambda}|^{p-2}\phi_{\lambda}v ~ dx,\; \; \text{for all } v \in W^{1,p}_0({\Omega}). \end{aligned}$$ Let $v= \phi_{\lambda}^+$.
Using Lemma \[fslem6\](ii) with $ a= \phi_{\lambda}^+(x)-\phi_{\lambda}^+(y)$ and $b= \phi_{\lambda}^-(x)-\phi_{\lambda}^-(y)$, we have $$\begin{aligned} & \int_{{\Omega}^+}|{\nabla}\phi_{\lambda}^+|^p~dx + \int_{\mathbb{R}^N}\int_{\mathbb{R}^N} |\phi_{\lambda}^+(x)-\phi_{\lambda}^+(y)|^{p}J(x-y)~ dxdy \\ & < \int_{{\Omega}^+}|{\nabla}\phi_{\lambda}^+|^p~dx +\int_{\mathbb{R}^N}\int_{\mathbb{R}^N} |\phi_{\lambda}(x)-\phi_{\lambda}(y)|^{p-2}(\phi_{\lambda}(x)-\phi_{\lambda}(y))(\phi_{\lambda}^+(x)-\phi_{\lambda}^+(y))J(x-y)~ dxdy\\ & = {\lambda}\int_{{\Omega}^+}|\phi_{\lambda}^+|^{p}~dx . \end{aligned}$$ Taking into account that $\phi_{\lambda}^+$ is admissible in the variational characterization of ${\lambda}_1({\Omega}^+)$, we have $$\begin{aligned} {\lambda}_1({\Omega}^+) \int_{{\Omega}^+}|\phi_{\lambda}^+|^{p}~dx\leq \int_{{\Omega}^+}|{\nabla}\phi_{\lambda}^+|^p~dx + \int_{\mathbb{R}^N}\int_{\mathbb{R}^N} |\phi_{\lambda}^+(x)-\phi_{\lambda}^+(y)|^{p}J(x-y)~ dxdy. \end{aligned}$$ Therefore, ${\lambda}> {\lambda}_1({\Omega}^+)$. Now, for the set ${\Omega}^-$, we proceed analogously as above with $v= \phi_{\lambda}^-, \; a=\phi_{\lambda}^-(x)-\phi_{\lambda}^-(y) $ and $b= \phi_{\lambda}^+(x)-\phi_{\lambda}^+(y)$ to obtain ${\lambda}> {\lambda}_1({\Omega}^-)$. Hence we get the desired result. **Proof of Theorem \[fsthm3\] :** Let $\phi_2$ be the eigenfunction corresponding to the eigenvalue ${\lambda}_2({\Omega})$, and let $$\begin{aligned} {\Omega}^+:= \{ x \in {\Omega}: \phi_2(x)>0 \} \quad \text{and} \quad {\Omega}^-:= \{ x \in {\Omega}: \phi_2(x)<0 \} .
\end{aligned}$$ It implies $|{\Omega}^+|+ |{\Omega}^-|\leq |{\Omega}|$ and, using Lemma \[fslem7\] and Theorem \[fsthm2\], we have $$\begin{aligned} {\lambda}_2({\Omega})> {\lambda}_1({\Omega}^+)\geq {\lambda}_1(B_{r_1}) \quad \text{ and } \quad {\lambda}_2({\Omega})> {\lambda}_1({\Omega}^-)\geq {\lambda}_1(B_{r_2}), \end{aligned}$$ where $B_{r_1}$ and $B_{r_2}$ are two balls such that $|B_{r_1}| =|{\Omega}^+|$ and $|B_{r_2}| =|{\Omega}^-|$. Hence $$\begin{aligned} {\lambda}_2({\Omega})> \max \{ {\lambda}_1(B_{r_1}),\; {\lambda}_1(B_{r_2}) \} \quad \text{and} \quad |B_{r_1}|+ |B_{r_2}|\leq |{\Omega}|. \end{aligned}$$ **Claim:** $\max \{ {\lambda}_1(B_{r_1}),\; {\lambda}_1(B_{r_2}) \}$ is minimized when $|B_{r_1}|= |B_{r_2}|= |{\Omega}|/2$.\ Let $B_r$ be a ball such that $|B_r|=|{\Omega}|/2 $. Since $|B_{r_1}|+ |B_{r_2}|\leq |{\Omega}|$, we divide the proof of the claim into three cases.\ **Case 1:** $|B_{r_1}|, |B_{r_2}|\leq |{\Omega}|/2$.\ Then the balls $B_{r_1},\;B_{r_2}$ are (up to translation) contained in the ball $B_r$, so by Proposition \[fsprop2\] we have ${\lambda}_1(B_r)\leq {\lambda}_1(B_{r_1}),\; {\lambda}_1(B_{r_2})$. It implies $\max \{ {\lambda}_1(B_{r_1}),\; {\lambda}_1(B_{r_2}) \} \geq {\lambda}_1(B_r)$.\ **Case 2:** $ |B_{r_1}|< |{\Omega}|/2< |B_{r_2}| $.\ Then $|B_{r_1}|< |B_r|< |B_{r_2}|$. From Proposition \[fsprop2\], we have ${\lambda}_1(B_{r_2})\leq {\lambda}_1(B_r)\leq {\lambda}_1(B_{r_1})$. Thus, $\max \{ {\lambda}_1(B_{r_1}),\; {\lambda}_1(B_{r_2}) \}\geq {\lambda}_1(B_{r_1})\geq {\lambda}_1(B_r)$.\ **Case 3:** $ |B_{r_2}|< |{\Omega}|/2< |B_{r_1}| $.\ Similarly as in Case 2, we have $\max \{ {\lambda}_1(B_{r_1}),\; {\lambda}_1(B_{r_2}) \}\geq {\lambda}_1(B_r)$.\ Hence, in all cases, $\max \{ {\lambda}_1(B_{r_1}),\; {\lambda}_1(B_{r_2}) \}\geq {\lambda}_1(B_r)$, with the minimum attained only when $|B_{r_1}|= |B_{r_2}|= |{\Omega}|/2$.
This proves .\ To show sharpness, define ${\Omega}_n:= B_R(s_n)\cup B_R(t_n)$, where $\{s_n\}$ and $\{t_n\}$ are sequences in $\mathbb{R}^N$ such that $|s_n-t_n|$ diverges as $n {\rightarrow}\infty$. Let $\phi_{s_n}$ and $\phi_{t_n}$ be the positive normalized eigenfunctions on $B_R(s_n)$ and $B_R(t_n)$, respectively. Define $ f : \mathbb{S}^1 {\rightarrow}\mathcal{M}$ by $$\begin{aligned} f(\theta_1, \theta_2)= \frac{|\theta_1|^{\frac{2-p}{p}}\theta_1\phi_{s_n}- |\theta_2|^{\frac{2-p}{p}}\theta_2\phi_{t_n}}{ \bigg \| |\theta_1|^{\frac{2-p}{p}}\theta_1\phi_{s_n}- |\theta_2|^{\frac{2-p}{p}}\theta_2\phi_{t_n}\bigg \|_{L^p}} \end{aligned}$$ and define $A= \text{Range}(f)$. Then $A$ is compact, symmetric, and of genus $\geq 2$. Now, taking into account the definition of ${\lambda}_2({\Omega})$ and Lemma \[fslem6\](i) with $a= \phi_{s_n}(x)-\phi_{s_n}(y)$ and $b= \phi_{t_n}(x)-\phi_{t_n}(y)$, we obtain $$\begin{aligned} {\lambda}_2({\Omega}_n)& \leq \max_{|\theta_1|^p+|\theta_2|^p=1} \bigg \{ \int_{{\Omega}_n}|{\nabla}(\theta_1\phi_{s_n}-\theta_2\phi_{t_n})|^p~dx+ \int_{\mathbb{R}^N}\int_{\mathbb{R}^N}|\theta_1a- \theta_2b|^pJ(x-y)~dxdy\bigg\}\\ & = \max_{|\theta_1|^p+|\theta_2|^p=1} \bigg \{ \int_{{\Omega}_n}|{\nabla}\theta_1\phi_{s_n}|^p~dx+ \int_{{\Omega}_n}|{\nabla}\theta_2\phi_{t_n}|^p~dx\\ & \hspace{4 cm} + \int_{\mathbb{R}^N}\int_{\mathbb{R}^N}|\theta_1a- \theta_2b|^pJ(x-y)~dxdy\bigg\}\\ & \leq \max_{|\theta_1|^p+|\theta_2|^p=1} \bigg \{ \int_{{\Omega}_n}|{\nabla}\theta_1\phi_{s_n}|^p~dx+ \int_{{\Omega}_n}|{\nabla}\theta_2\phi_{t_n}|^p ~dx\\ & \hspace{2.5cm}+ \int_{\mathbb{R}^N}\int_{\mathbb{R}^N}|\theta_1a|^pJ(x-y)~dxdy + \int_{\mathbb{R}^N}\int_{\mathbb{R}^N}| \theta_2b|^pJ(x-y)~dxdy\\ & \hspace{3cm} + c_p \int_{\mathbb{R}^N}\int_{\mathbb{R}^N}(|\theta_1a|^2+ |\theta_2b|^2)^{\frac{p-2}{2}} |\theta_1\theta_2ab|J(x-y) ~dxdy \bigg\}\\ & = {\lambda}_1(B_R) + c_p\max_{|\theta_1|^p+|\theta_2|^p=1} \int_{\mathbb{R}^N}\int_{\mathbb{R}^N}(|\theta_1a|^2+
|\theta_2b|^2)^{\frac{p-2}{2}} |\theta_1\theta_2ab|J(x-y) ~dxdy. \end{aligned}$$ Note that $ab= -\phi_{s_n}(x)\phi_{t_n}(y)- \phi_{s_n}(y)\phi_{t_n}(x)$ is nonzero only when $(x,y)\in B_R(s_n)\times B_R(t_n) \cup B_R(t_n)\times B_R(s_n)$, and that $|x-y|\geq |s_n-t_n|-2R$ for all such $(x,y)$. Hence, since $J$ is radial and decreasing, $$\begin{aligned} {\lambda}_2({\Omega}_n)& \leq {\lambda}_1(B_R)\\ & + 2J(|s_n-t_n|-2R)c_p\max_{|\theta_1|^p+|\theta_2|^p=1} \int_{B_R(s_n)}\int_{B_R(t_n)}(|\theta_1a|^2+ |\theta_2b|^2)^{\frac{p-2}{2}} |\theta_1\theta_2ab| ~dxdy. \end{aligned}$$ Since $$\begin{aligned} 2c_p\max_{|\theta_1|^p+|\theta_2|^p=1} \int_{B_R(s_n)}\int_{B_R(t_n)}(|\theta_1a|^2+ |\theta_2b|^2)^{\frac{p-2}{2}} |\theta_1\theta_2ab| ~dxdy< \infty \end{aligned}$$ and $J(|s_n-t_n|-2R) {\rightarrow}0 $ as $n {\rightarrow}\infty$, we obtain $ {\displaystyle}\lim_{n{\rightarrow}\infty}{\lambda}_2({\Omega}_n) \leq {\lambda}_1(B_R)$. This proves the desired result. Remarks on the eigenvalues of a combination of the $p$-Laplacian and the fractional $p$-Laplacian ======================================================================================= We consider the following eigenvalue problem: $$(F_{\lambda})\; \left.\begin{array}{rllll} \mathcal{L}(u) ={\lambda}|u|^{p-2}u \text{ in } {\Omega},\; u=0 \text{ in } \mathbb{R}^N\setminus {\Omega}, \end{array} \right.$$ where $1< p< \infty$ and the operator $\mathcal{L}$ is defined as $\mathcal{L}(u):= -{\Delta}_p u +(-{\Delta})^s_p u $, with ${\Delta}_p u$ the usual $p$-Laplacian operator and $(-{\Delta})^s_p u$ the fractional $p$-Laplacian, given by $$\begin{aligned} (-{\Delta})^s_p u(x):= 2 \text{ P.V. } {\displaystyle}\int_{\mathbb{R}^N} \frac{|u(x)-u(y)|^{p-2}(u(x)-u(y))}{|x-y|^{N+ps}}~ dy,\end{aligned}$$ where ${\Omega}\subset\mathbb{R}^N$ ($N>ps$) is a bounded open set and $0<s<1$.\ A function $u \in W^{1,p}_0({\Omega})$ is a solution of $(F_{\lambda})$ if $u$ satisfies the equation $$\begin{aligned} \mathcal{H}(u,\phi)=
{\lambda}\int_{{\Omega}}|u|^{p-2}u \phi~ dx,\; \; \text{for all } \phi \in W^{1,p}_0({\Omega}), \end{aligned}$$ where $$\begin{aligned} \mathcal{H}(u,\phi):=& \int_{{\Omega}}|{\nabla}u|^{p-2}{\nabla}u \cdot {\nabla}\phi~ dx\\ & \quad + \int_{\mathbb{R}^N} \int_{\mathbb{R}^N} \frac{|u(x)-u(y)|^{p-2}(u(x)-u(y))(\phi(x)-\phi(y))}{|x-y|^{N+ps}}~ dx dy. \end{aligned}$$ The energy functional associated with problem $(F_{\lambda})$ is the functional $\mathcal{I}: W^{1,p}_0 ({\Omega}) {\rightarrow}{\mathbb}R $ given by $$\begin{aligned} \mathcal{I}(u)= \int_{{\Omega}}|{\nabla}u|^{p} ~ dx +\int_{\mathbb{R}^N} \int_{\mathbb{R}^N} \frac{|u(x)-u(y)|^{p}}{|x-y|^{N+ps}}~ dx dy-{\lambda}\int_{{\Omega}}| u|^{p} ~ dx . \end{aligned}$$ Let $u \in C_c^{\infty}({\Omega})$; then, extending $u$ by $0$ on $\mathbb{R}^N\setminus {\Omega}$, we see that $$\begin{aligned} \int_{\mathbb{R}^N} \int_{\mathbb{R}^N} \frac{|u(x)-u(y)|^{p}}{|x-y|^{N+ps}}~ dx dy= \int_{Q} \frac{|u(x)-u(y)|^{p}}{|x-y|^{N+ps}}~ dx dy, \; \text{ where }Q= \mathbb{R}^{2N}\setminus ({\Omega}^c\times {\Omega}^c). \end{aligned}$$ Also, it is not difficult to show that $$\int_{Q} \frac{|u(x)-u(y)|^{p}}{|x-y|^{N+ps}}~ dx dy\le C \|{\nabla}u\|_{L^p}^{p}\; \text{for all}\; u \in C_c^{\infty}({\Omega}).$$ By density, $\mathcal{I}$ is well defined on $W^{1,p}_{0}({\Omega})$ and $\mathcal{I}\in C^{1}( W^{1,p}_0 ({\Omega}),{\mathbb}R)$. Moreover, $\tilde{ \mathcal{I}}:= \mathcal{I}|_{{\mathcal}M}$ is $C^1(W^{1,p}_0 ({\Omega}),{\mathbb}R)$, where ${\mathcal}M$ is defined as in . By the same assertions and arguments as in the proofs of Theorems \[fsthm4\] and \[fsthm1\], we obtain Theorems \[fsthm4\] and \[fsthm1\] for the operator $\mathcal{L}$. [11]{} A. Ambrosetti and P. H. Rabinowitz, [*Dual variational methods in critical point theory and applications*]{}, Journal of Functional Analysis 14 (1973), 349-381.
F. J. Almgren and E. H. Lieb, [*Symmetric decreasing rearrangement is sometimes continuous*]{}, Journal of the American Mathematical Society (1989), 683-773. F. Andreu-Vaillo, J. M. Mazon, J. D. Rossi and J. J. Toledo-Melero, [*Nonlocal Diffusion Problems*]{}, American Mathematical Society, Mathematical Surveys and Monographs 165 (2010). F. Andreu, J. M. Mazon, J. D. Rossi and J. Toledo, [*The limit as $p {\rightarrow}\infty$ in a nonlocal $p$-Laplacian evolution equation. A nonlocal approximation of a model for sandpiles*]{}, Calculus of Variations and Partial Differential Equations 35 (2009), no. 3, 279-316. F. Andreu, J. M. Mazon, J. D. Rossi and J. Toledo, [*A nonlocal $p$-Laplacian evolution equation with nonhomogeneous Dirichlet boundary conditions*]{}, SIAM Journal on Mathematical Analysis 40 (2009), no. 5, 1815-1851. G. M. Bisci, V. D. Radulescu and R. Servadei, [*Variational methods for nonlocal fractional problems*]{}, Cambridge University Press 162 (2016). L. Brasco and E. Parini, [*The second eigenvalue of the fractional $p$-Laplacian*]{}, Advances in Calculus of Variations 9 (2016), no. 4, 323-355. M. Cuesta, D. de Figueiredo and J. P. Gossez, [*The Beginning of the Fučik Spectrum for the $p$-Laplacian*]{}, Journal of Differential Equations 159 (1999), 212-238. R. L. Frank and R. Seiringer, [*Non-linear ground state representations and sharp Hardy inequalities*]{}, Journal of Functional Analysis 255 (2008), no. 12, 3407-3430. G. Franzina and G. Palatucci, [*Fractional $p$-eigenvalues*]{}, Rivista di Matematica della Università di Parma 5 (2014), no. 2, 373-386. D. Goel, S. Goyal and K. Sreenadh, [*First curve of Fučik spectrum for the $p$-fractional Laplacian operator with nonlocal normal boundary conditions*]{}, Electronic Journal of Differential Equations (2018), no. 74, 1-21. S. Goyal, [*On the eigenvalues and Fučik spectrum of $p$-fractional Hardy-Sobolev operator with weight function*]{}, Applicable Analysis (2017), 1-26. A. Henrot, [*Extremum problems for eigenvalues of elliptic operators*]{}, Springer Science and Business Media (2006). E. Di Nezza, G. Palatucci and E. Valdinoci, [*Hitchhiker’s guide to the fractional Sobolev spaces*]{}, Bulletin des Sciences Mathématiques 136 (2012), 521-573. L. M. Del Pezzo, R. Ferreira and J. D. Rossi, [*Eigenvalues for a combination between local and nonlocal $p$-Laplacians*]{}, arXiv:1803.07988. K. Sreenadh, [*On the second eigenvalue of a Hardy-Sobolev operator*]{}, Electronic Journal of Differential Equations (2004). [^1]: e-mail: divyagoel2511@gmail.com [^2]: e-mail: sreenadh@maths.iitd.ac.in
--- author: - 'J. Klüter, U. Bastian, J. Wambsganss' date: 'Received 05 November 2019/ Accepted ???' title: 'Expectations on the mass determination using astrometric microlensing by [*Gaia*]{}' --- [Astrometric gravitational microlensing can be used to determine the mass of a single star with an accuracy of a few percent. To do so, precise measurements of the angular separations between lenses and background stars with an accuracy below $1\,\text{milli-arcsecond}$ at different epochs are needed. Hence only the most accurate instruments can be used. However, due to the long timescale (months to years) it might be possible to detect astrometric microlensing also with [*Gaia*]{}, which observes each star only sparsely.]{} [We want to show how accurately [*Gaia*]{} can determine the mass of the lensing star.]{} [Using conservative assumptions based on the results of the second [*Gaia*]{} Data Release, we simulated the individual [*Gaia*]{} measurements for 530 predicted astrometric microlensing events during the [*Gaia*]{} era (2014.5 - 2026.5). For this purpose we use the astrometric parameters of [*Gaia*]{} DR2, as well as an approximate mass based on the absolute G magnitude. By fitting the motion of lens and source simultaneously, we then reconstruct the 11 parameters of the lensing event. For lenses passing by multiple background sources, we also fit the motion of all background sources and the lens simultaneously. Using a Monte Carlo simulation we determine the achievable precision of the mass determination.]{} [We find that [*Gaia*]{} can detect the astrometric deflection for 137 events. Further, for 16 events [*Gaia*]{} can determine the mass of the lens with a precision better than $15\%$, and for $42$ events ($16+26$) with a precision of $30\%$ or better.]{} Introduction ============ The mass is the most substantial parameter of a star. It determines its temperature, surface gravity and evolution.
Currently, relations concerning stellar mass are based on binary stars, where a direct mass measurement is possible. However, it is known that single stars evolve differently. Hence it is important to derive the masses of single stars directly. Apart from strongly model-dependent asteroseismology, astrometric microlensing is the only usable tool [@1995AcA....45..345P; @1991ApJ...371L..63P]. As a sub-area of gravitational lensing, which was first described by Einstein’s theory of general relativity [@1915SPAW...47..831E], microlensing describes the time-dependent positional deflection (astrometric microlensing) and magnification (photometric microlensing) of a background source by an intervening stellar mass. Up to now, almost exclusively the photometric magnification has been monitored and investigated, by surveys such as OGLE [@2003AcA....53..291U] or MOA [@2001MNRAS.327..868B], which also led to the discovery of many exoplanets [e.g. @2015AcA....65....1U], whereas the astrometric shift of the source was detected for the first time only recently [@2017Sci...356.1046S; @2018MNRAS.480..236Z]. In particular, @2017Sci...356.1046S showed the potential of astrometric microlensing to measure the mass of a single star with a precision of a few per cent [@1995AcA....45..345P]. However, even though astrometric microlensing events are much rarer than photometric events, they can be *predicted* from stars with known proper motions. The first systematic search for astrometric microlensing events was done by @2000ApJ...539..241S. Currently, the precise predictions make use of the second Data Release of the [*Gaia*]{} mission (hereafter [*Gaia*]{} DR2) or even combine [*Gaia*]{} DR2 with further catalogues [e.g. @2018AcA....68..351N; @2019MNRAS.487L...7M]. Further, due to a longer baseline [a few months instead of a few weeks; @2000ApJ...534..213D] it is also possible to detect and characterise these events with [*Gaia*]{} alone, which observes each star only sparsely.
The [*Gaia*]{} mission of the European Space Agency (ESA) is currently the most precise astrometric survey. Since mid-2014 [*Gaia*]{} has been observing the full sky, with an average of about 70 measurements per star within 5 years (nominal mission). [*Gaia*]{} DR2 contains only summary results from the data analysis (J2015.5 position, proper motion, parallax etc.) for its 1.6 billion stars. However, with the fourth data release (expected in 2024) and the final data release after the end of the extended mission, the individual [*Gaia*]{} measurements will also be published. Using these measurements, it should be possible to determine the masses of individual stars using astrometric microlensing. This will lead to a better understanding of mass relations for main-sequence stars [@1991ApJ...371L..63P]. In the present paper, we show the potential of [*Gaia*]{} to determine stellar masses using astrometric microlensing. We do so by simulating the individual measurements for 530 predicted microlensing events by 470 different stars. We also show the potential of combining the data for multiple microlensing events caused by the same lens. In a similar study, @2018MNRAS.476.2013R showed that [*Gaia*]{} might be able to measure the astrometric deflection caused by a stellar-mass black hole ($M \approx 10 {\,\mathrm{M_{\odot}}}{}$) which was discovered by OGLE. Further, they claimed that for faint background sources ($G> 17.5\,\mathrm{mag}$) [*Gaia*]{} might be able to detect the deflection of black holes more massive than $30 {\,\mathrm{M_{\odot}}}{}$. In the present paper, we consider bright lenses, which can also be observed by [*Gaia*]{}. Hence, due to the additional measurements of the lens positions, [*Gaia*]{} can measure much smaller masses. In Sect. \[chapter:microlensing\] we describe astrometric microlensing. In Sect. \[chapter:gaia\] we briefly explain the [*Gaia*]{} mission and satellite, with a focus on the aspects important for this paper. In Sect.
\[chapter:Analysis\] we present our analysis, starting with the properties of the predicted events in \[section:Data\], the simulation of the [*Gaia*]{} measurements in \[section:Simulation\], the fitting procedure in \[section:reconstruction\], and finally the statistical analysis in \[section:data\_analysis\]. In Sect. \[chapter:Result\] we present the opportunities of direct stellar mass determinations by [*Gaia*]{}. Finally, we summarize the simulations and results and present our conclusions in Sect. \[chapter:conclusion\]. Astrometric Microlensing {#chapter:microlensing} ======================== The change of the center of light of the background star (“source”) due to the gravitational deflection by a passing foreground star (“lens”) is called astrometric microlensing. This is shown in Figure \[figure:shift\]. While the lens (red line) is passing the source (black star in the origin), two images of the source are created (blue lines): a brighter image $(+)$ close to the unlensed position, and a fainter image $(-)$ close to the lens. In the case of a perfect alignment, both images merge into an Einstein ring, with a radius of [@1924AN....221..329C; @1936Sci....84..506E; @1986ApJ...301..503P]: $$\theta_{E} = \sqrt{\frac{4GM_{L}}{c^{2}} \frac{D_{S}-D_{L}}{D_{S}\cdot D_{L}}} = 2.854\,\mathrm{mas} \sqrt{\frac{M_{L}}{{\,\mathrm{M_{\odot}}}{}}\cdot\frac{\varpi_{L}-\varpi_{S}}{1\,\mathrm{mas}}}, \label{equation:Einsteinradius}$$ where $M_{L}$ is the mass of the lens, $D_{L}$, $D_{S}$ are the distances of the lens and the source from the observer, and $\varpi_{L}$, $\varpi_{S}$ are the parallaxes of lens and source, respectively. $G$ is the gravitational constant, and $c$ the speed of light. For a solar-type star at a distance of about $1\,\mathrm{kiloparsec}$ the Einstein radius is of the order of a few milli-arcseconds ($\mathrm{mas}$), and only if the separation is of the same order of magnitude (or smaller) can the influence of the fainter image be observed.
Therefore the fainter image is hardly resolvable, and so far it has been resolved only once [@Dong_2019]. The Einstein radius not only defines when the fainter image becomes important, it also scales all connected effects. The lensed positions $\boldsymbol{\theta_{\pm}}$ of the two images relative to the lens position can be described as a function of the unlensed normalised angular separation on the sky $\boldsymbol{u} = \boldsymbol{\Delta\phi}/\theta_{E}$, where $\boldsymbol{\Delta\phi}$ is the two-dimensional unlensed angular separation, by: $$\boldsymbol{\theta_{\pm}} = \frac{ u \pm \sqrt{(u^{2}+4)}}{2} \cdot \frac{\boldsymbol{u}}{u} \cdot{\theta_{E}},$$ with $u = \lvert \boldsymbol{u} \rvert $. For the unresolved case, only the center of light of both images (green line) can be observed. This can be expressed by: $$\boldsymbol{\theta_c}= \frac{A_{+}\boldsymbol{\theta_{+}} +A_{-}\boldsymbol{\theta_{-}}}{A_{+}+A_{-}} =\frac{u^{2}+3}{u^{2}+2}\boldsymbol{u}\cdot{\theta_{E}},$$ where $A_{\pm}$ are the magnifications of the two images, given by [@1986ApJ...301..503P] $$A_{\pm} = \frac{ u^{2}+2}{2u\sqrt{u^{2}+4}}\pm 0.5 .$$ The corresponding shift is given by $$\delta\boldsymbol{\theta_{c}} = \frac{\boldsymbol{u}}{u^{2}+2} \cdot{\theta_{E}}. \label{equation:shift}$$ The measurable deflection can be further reduced by luminous-lens effects; however, in the following we consider the resolved case, where luminous-lens effects can be ignored. For the resolved case, the observable is the shift of the position of the brightest image only.
This can be expressed by $$\label{equation:microlensing_term} \delta\boldsymbol{\theta_{+}} = \frac{ \sqrt{(u^{2}+4)} - u}{2} \cdot \frac{\boldsymbol{u}}{u} \cdot{\theta_{E}}.$$ For large impact parameters $u\gg 5$ this can be approximated as [@2000ApJ...534..213D] $$\delta\theta_{+} \simeq \frac{\theta_{E}}{u} = \frac{\theta_{E}^{2}}{ \lvert\boldsymbol{\Delta \phi} \rvert} \propto \frac{M_{L}}{ \lvert\boldsymbol{\Delta\phi} \rvert}, \label{equation:approx_shift}$$ which is proportional to the mass of the lens. Nevertheless, equation (\[equation:shift\]) is also a good approximation for the shift of the brightest image whenever $u > 5$, since then the second image is negligibly faint. This is always the case in the present study. ![The astrometric shift for an event with an Einstein radius of $\theta_{E} = 12.75\,\mathrm{mas}$ (black circle) and an impact parameter of $u = 0.75$. While the lens (red) passes a background star (black star, fixed in the origin), two images (blue dashed) of the source are created due to gravitational lensing. This leads to a shift of the center of light, shown in green. The straight long-dashed black line connects the current positions for one epoch. While the lens is moving in the direction of the red arrow, all other images are moving according to their individual arrows. The red, blue and green dots correspond to different epochs with fixed time steps.[]{data-label="figure:shift"}](astroshift.png){width="9cm"} [*Gaia*]{} satellite {#chapter:gaia} ==================== The [*Gaia*]{} satellite is a space telescope of the European Space Agency (ESA) which was launched in December 2013. It is located at the Earth-Sun Lagrange point L2, where it orbits the Sun on a roughly $1\%$ larger orbit than the Earth. In mid-2014 [*Gaia*]{} started to observe the whole sky on a regular basis defined by a nominal (pre-defined) scanning law.
Scanning law ------------ As a drift-scan instrument, [*Gaia*]{} rotates at a constant rate matched to the CCD readout speed, with a 6-hour period. Further, [*Gaia*]{}’s spin axis is inclined by 45 degrees to the Sun, with a precession frequency of one turn around the Sun direction every 63 days. Finally, [*Gaia*]{} is not fixed at L2 but moves on a $100\,000\,\mathrm{km}$ Lissajous-type orbit around L2. The orbit of [*Gaia*]{} and the inclination are chosen such that the overall coverage of the sky is quite uniform, with about 70 observations per star at different scan angles during the nominal 5-year mission. However, certain parts of the sky are inevitably observed more often. Consequently, [*Gaia*]{} cannot be pointed at a certain target at a given time. We use the [*Gaia*]{} observation forecast tool (GOST)[^1] to find out when a target is inside the field of view of [*Gaia*]{}, and the scan direction of [*Gaia*]{} at that time. GOST also lists the CCD row, which can be translated into eight or nine CCD observations. For more details on the scanning law see the [*Gaia*]{} Data Release Documentation[^2]. Focal plane and readout window ------------------------------ [*Gaia*]{} is equipped with two separate telescopes with rectangular primary mirrors, pointing at two fields of view separated by $106.5^{\circ}$. This results in two observations only a few hours apart with the same scanning direction. The light of the two fields of view is focused on one common focal plane, which is equipped with 106 CCDs arranged in 7 rows. The majority of the CCDs (62) are used for the astrometric field. While [*Gaia*]{} rotates, a source first passes a sky mapper, which can distinguish between the two fields of view. Afterwards, it passes nine CCDs of the astrometric field (or eight for the middle row). The astrometric field is devoted to position measurements, providing the astrometric parameters, and also the G-band photometry.
For our simulations, we stack the data of these eight or nine CCDs into one measurement. Finally, the source passes a red and a blue photometer, plus a radial-velocity spectrometer. In order to reduce the volume of data, only small “windows” around detected sources are read out and transmitted to the ground. For faint sources $(G >13\,\mathrm{mag})$ these windows are $12 \times 12\,\mathrm{pixels}$ ($\text{along-scan}\times \text{across-scan}$). This corresponds to $708\,\mathrm{mas}\, \times\, 2124 \, \mathrm{mas}$, due to a 1:3 pixel-size ratio. These data are stacked by the onboard processing of [*Gaia*]{} in across-scan direction into a one-dimensional strip, which is then transmitted to Earth. For bright sources $(G <13\,\mathrm{mag})$ larger windows ($18\,\times\,12\,\mathrm{pixels}$) are read out. These data are transferred as 2D images. When two sources with overlapping readout windows (e.g. Fig. \[figure:window\]: blue and grey stars) are detected, [*Gaia*]{}’s onboard processing assigns the full window (blue grid) to only one of the sources (usually the brighter source). To the second source [*Gaia*]{} assigns only a truncated window (green grid). For [*Gaia*]{} DR2 these truncated windows are not processed[^3]. For more details on the focal plane and readout scheme see the [*Gaia*]{} Data Release Documentation. Along-scan precision -------------------- Published information about the precision and accuracy of [*Gaia*]{} mostly refers to the end-of-mission standard errors, which result from a combination of all individual measurements, and also take the different scanning directions into account. [*Gaia*]{} DR1 provides an analytical formula to estimate this precision as a function of G magnitude and V-I colour. However, we are interested in the precision of one single field-of-view scan (i.e. the combination of the nine or eight CCD measurements in the astrometric field). The red line in Figure \[figure:sig\_gmag\] shows the formal precision in along-scan direction for one CCD.
The precision is mainly dominated by photon noise. Due to different readout gates, the number of photons is roughly constant for sources brighter than $G = 12\,\mathrm{mag}$. The blue line in Figure \[figure:sig\_gmag\] shows the scatter of the postfit residuals; the difference between the two represents the combination of all unmodeled errors. Simulation of [*Gaia*]{} Measurements and Mass reconstruction {#chapter:Analysis} ============================================================= Data input {#section:Data} ---------- We use the 530 predicted events with an epoch of closest approach between 2013.5 and 2026.5. We also include the events outside the most extended mission of [*Gaia*]{} (ending in 2024.5), since it is possible to determine the mass from the tail of an event alone, or by combining [*Gaia*]{} measurements with additional observations. The sample naturally divides into two categories: events where the motion of the background source is known, and events where the motion of the background source is unknown. A missing proper motion in DR2 does not automatically mean that [*Gaia*]{} cannot measure the motion of the background source. The data for [*Gaia*]{} DR2 are derived from only a 2-year baseline. With the 5-year baseline of the nominal mission, which ended in mid-2019, and also with the potential extended 10-year baseline, [*Gaia*]{} is expected to provide proper motions and parallaxes also for some of those sources. In order to deal with the unknown proper motions and parallaxes, we use randomly selected values from normal distributions with means of $5\,\mathrm{mas/yr}$ and $2\,\mathrm{mas}$ and standard deviations of $3\,\mathrm{mas/yr}$ and $1\,\mathrm{mas}$, respectively, with a uniform distribution for the direction of the proper motion. For the parallaxes, we only use the positive part of the distribution. Both distributions roughly reflect the sample of all potential background stars.
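The random draw described above can be sketched in code as follows (an illustrative sketch using only the distributions quoted in the text; the function name and the redraw loop used to keep only positive parallaxes are ours):

```python
import math
import random

def draw_background_kinematics(rng):
    """Draw stand-in kinematics for a background source without Gaia DR2
    proper motion and parallax: total proper motion ~ N(5, 3) mas/yr with
    a uniformly distributed direction, and parallax ~ N(2, 1) mas
    restricted to its positive part."""
    pm_total = rng.gauss(5.0, 3.0)                # total proper motion, mas/yr
    direction = rng.uniform(0.0, 2.0 * math.pi)   # uniform direction on the sky
    pm_ra_cosdec = pm_total * math.cos(direction)
    pm_dec = pm_total * math.sin(direction)
    parallax = rng.gauss(2.0, 1.0)                # mas
    while parallax <= 0.0:                        # positive part of the distribution only
        parallax = rng.gauss(2.0, 1.0)
    return pm_ra_cosdec, pm_dec, parallax

rng = random.Random(42)
samples = [draw_background_kinematics(rng) for _ in range(1000)]
```

Each event with an unknown background-source motion would receive one such draw before the measurements are simulated.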
### Multiple background sources {#multiple-background-sources .unnumbered} Within 10 years, some of our lensing stars pass close enough to multiple background stars to cause several measurable effects. As an extreme case, the light deflection by Proxima Centauri causes a measurable shift (larger than $0.1\,\mathrm{mas}$) for 18 background stars. This is due to the star’s large Einstein radius, its high proper motion and the dense background. Since those events are physically connected, we simulate and fit the motion of the lens and of the multiple background sources simultaneously. We also compare three different scenarios: a first one where we use all background sources, a second one where we only select those with known proper motion, and a third one where we only select those with a precision in along-scan direction better than $0.5\,\mathrm{mas}$ per field-of-view transit (assuming 9 CCD observations). The latter limit corresponds roughly to sources brighter than $G \simeq 18.5\,\mathrm{mag}$. Simulation of [*Gaia*]{} Data {#section:Simulation} ----------------------------- We expect that [*Gaia*]{} DR4 and the full release of the extended mission will provide, for each single CCD observation, the position and its uncertainty in along-scan direction, together with the observation epochs. These data are simulated as a basis for the present study. We thereby assume that all variations and systematic effects caused by the satellite itself are corrected beforehand. However, since we are only interested in relative astrometry, the measurement of the astrometric deflection is not affected by most of the systematics, as for example the slightly negative parallax zero-point. We also do not simulate all CCD measurements separately, but rather a mean measurement of all eight or nine CCD measurements during a field-of-view transit. In addition to the astrometric measurements, [*Gaia*]{} DR4 will also publish the scan angle and the barycentric location of the [*Gaia*]{} satellite.
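The two lensing quantities that generate the simulated signal, the Einstein radius of Eq. (\[equation:Einsteinradius\]) and the centre-of-light shift of Eq. (\[equation:shift\]), can be sketched as simple helpers (illustrative only; the function names are ours, angles are in mas and masses in solar masses):

```python
import math

def einstein_radius_mas(mass_msun, plx_lens_mas, plx_source_mas):
    """Einstein radius: theta_E = 2.854 mas * sqrt(M/Msun * dparallax/mas)."""
    return 2.854 * math.sqrt(mass_msun * (plx_lens_mas - plx_source_mas))

def light_centre_shift_mas(separation_mas, theta_e_mas):
    """Centre-of-light shift: delta = u / (u^2 + 2) * theta_E with
    u = dphi / theta_E, equivalently theta_E^2 * dphi / (dphi^2 + 2 theta_E^2)."""
    u = separation_mas / theta_e_mas
    return u / (u**2 + 2.0) * theta_e_mas

# A solar-mass lens with a relative parallax of 1 mas to the source:
theta_e = einstein_radius_mas(1.0, 1.0, 0.0)
```

For large separations the second helper approaches $\theta_E^2/\Delta\phi$, the limit given in Eq. (\[equation:approx\_shift\]).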
We find that our results strongly depend on the temporal distribution of the measurements and their scan directions. Therefore we use for each event the predefined epochs and scan angles provided by the GOST online tool. This tool only lists the times and angles at which a certain area passes the field of view of [*Gaia*]{}; however, it is not guaranteed that a measurement is actually taken and transmitted to Earth. We assume that for each transit [*Gaia*]{} measures the positions of the background source and the lens simultaneously (if resolvable), with a certain probability of missing data points and clipped outliers. To implement the parallax effect in the simulated measurements, we assume that the position of the [*Gaia*]{} satellite is at exactly a 1% larger distance from the Sun than the Earth. Compared to a strict treatment of the actual [*Gaia*]{} orbit, we do not expect any differences in the results, since, first, [*Gaia*]{}’s distance from this point (roughly L2) is very small compared to the distance to the Sun, and second, we consistently use 1.01 times the Earth’s orbit both in the simulation and in the fitting routine. The simulation of the astrometric [*Gaia*]{} measurements is described in the following subsections.
### Astrometry Using the [*Gaia*]{} DR2 positions ($\alpha_{0},\delta_{0}$), proper motions ($\mu_{\alpha*,0},\mu_{\delta,0}$) and parallaxes ($\varpi_{0}$), we calculate the unlensed positions of lens and background source as seen by [*Gaia*]{} as a function of time, using the following equation: $$\begin{pmatrix} \alpha \\ \delta \end{pmatrix} = \begin{pmatrix} \alpha_{0} \\ \delta_{0} \end{pmatrix}+ (t-t_{0}) \begin{pmatrix} \mu_{\alpha*,0}/\cos \delta_{0} \\ \mu_{\delta,0} \end{pmatrix}+1.01\cdot \varpi_{0} \cdot \vec{J}^{-1}_{ \ominus} \vec{E(t)}, \label{equation:motion}$$ where $\vec{E(t)}$ is the barycentric position of the Earth, in Cartesian coordinates, in astronomical units ($\mathrm{au}$), and $$\vec{J}^{-1}_{ \ominus} = \begin{pmatrix} \sin\alpha_{0}/\cos\delta_{0}&-\cos\alpha_{0}/\cos\delta_{0}&0\\ \cos\alpha_{0}\sin\delta_{0} &\sin\alpha_{0}\sin\delta_{0}&-\cos\delta_{0} \end{pmatrix}$$ is the inverse Jacobian matrix for the transformation into a spherical coordinate system, evaluated at the lens position. We then calculate the observed position of the source by adding the microlensing term (Eq. (\[equation:microlensing\_term\])). Here we assume that all our measurements are in the resolved case. That means that [*Gaia*]{} observes the position of the brighter image of the source, and the measurement of the lens position is not affected by the fainter image of the source.
For this case the exact equation is: $$\label{equation:microlensing_excact} \begin{pmatrix} \alpha_{obs} \\ \delta_{obs} \end{pmatrix} = \begin{pmatrix} \alpha \\ \delta \end{pmatrix} + \begin{pmatrix} \Delta \alpha \\ \Delta\delta \end{pmatrix} \cdot \left(\sqrt{0.25+\frac{\theta_{E}^{2}}{\Delta\phi^{2}}}-0.5\right)$$\ where $\Delta\phi = \sqrt{(\Delta\alpha\cos \delta)^{2}+(\Delta\delta)^{2}} $ is the unlensed angular separation between lens and source and $(\Delta\alpha, \Delta\delta) = (\alpha_{source}-\alpha_{lens}, \delta_{source}-\delta_{lens})$ are the differences in right ascension and declination, respectively. However, this equation shows an unstable behaviour in the fitting process, caused by the square root, which results in a time-consuming fit. To overcome this problem we use the shift of the center of light as an approximation for the shift of the brightest image. This approximation is used both for the simulation of the data and for the fitting procedure: $$\label{equation:microlensing} \begin{pmatrix} \alpha_{obs} \\ \delta_{obs} \end{pmatrix} = \begin{pmatrix} \alpha \\ \delta \end{pmatrix} + \begin{pmatrix} \Delta \alpha \\ \Delta\delta \end{pmatrix} \cdot \frac{\theta_{E}^{2}}{\left(\Delta\phi^{2}+2 \theta_{E}^{2} \right)}.$$ The differences between equations (\[equation:microlensing\_excact\]) and (\[equation:microlensing\]) are at least a factor of 10 smaller than the measurement errors (for most of the events even a factor of 100 or more). Further, using this approximation we underestimate the microlensing effect, and thus stay on the conservative side in estimating the efficiency of the mass determination. We do not include any orbital motion in this analysis, even though SIMBAD lists some of the lenses (e.g. 75 Cnc) as binary stars. However, from an inspection of their orbital parameters, we expect this effect to influence our results only slightly. The inclusion of orbital motion would only be meaningful if a good prior were available.
This might come with [*Gaia*]{} DR3 (end of 2022). ### Resolution Due to the on-board readout process and the on-ground data processing, the resolution of [*Gaia*]{} is not limited by its point-spread function but by the size of the readout windows. Using the apparent positions and G magnitudes of lens and source, we investigate for all given epochs whether [*Gaia*]{} can resolve both stars or can only measure the brighter of the two (mostly the lens, see Fig. \[figure:window\]). We therefore calculate the separations in along-scan and across-scan direction as $$\begin{aligned} \Delta\phi_{AL} &= \mid\sin \Theta \cdot \Delta\alpha\cos \delta + \cos \Theta \cdot \Delta\delta\mid \\ \Delta\phi_{AC} &= \mid - \cos \Theta \cdot \Delta\alpha\cos \delta + \sin \Theta \cdot \Delta\delta\mid, \end{aligned}$$ where $\Theta$ is the position angle of the scan direction, counted from North towards East. When the fainter star is outside the readout window of the brighter star, i.e. when the separation in along-scan direction is larger than $354\,\mathrm{mas}$ or the separation in across-scan direction is larger than $1062\,\mathrm{mas}$, we assume that [*Gaia*]{} measures the positions of both sources. Otherwise we assume that only the position of the brighter star is measured, unless both sources have a similar brightness ($\Delta G < 1\,\mathrm{mag}$). In that case, we exclude the measurements of both stars.
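The window-based resolution criterion above can be sketched as follows (a minimal sketch; separations in mas, scan angle in radians, the limits being half the faint-source window sizes quoted earlier):

```python
import math

AL_LIMIT_MAS = 354.0    # half of the 708 mas along-scan window
AC_LIMIT_MAS = 1062.0   # half of the 2124 mas across-scan window

def separations_al_ac(d_ra_cosdec_mas, d_dec_mas, scan_angle_rad):
    """Project the on-sky separation onto the along- and across-scan
    directions (position angle of the scan counted from North towards East)."""
    s, c = math.sin(scan_angle_rad), math.cos(scan_angle_rad)
    d_al = abs(s * d_ra_cosdec_mas + c * d_dec_mas)
    d_ac = abs(-c * d_ra_cosdec_mas + s * d_dec_mas)
    return d_al, d_ac

def resolved(d_ra_cosdec_mas, d_dec_mas, scan_angle_rad):
    """True if the fainter star lies outside the brighter star's readout window."""
    d_al, d_ac = separations_al_ac(d_ra_cosdec_mas, d_dec_mas, scan_angle_rad)
    return d_al > AL_LIMIT_MAS or d_ac > AC_LIMIT_MAS
```

Note that, because of the projection, the same close pair can be resolved in one transit and unresolved in another, depending on the scan angle.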
### Measurement errors In order to derive a relation for the uncertainty in along-scan direction as a function of the G magnitude, we start with the equation for the end-of-mission parallax standard error, in which we ignore the additional colour term: $$\label{equation:sigma_parallax} \sigma_{\varpi} = \sqrt{-1.631 + 680.766 \cdot z + 32.732 \cdot z^{2}} \,\mathrm{\mu as}$$ with $$\label{equation:z_original} z = 10^{(0.4\,(\max(G,\,12) - 15))}$$ We then adjust this relation to describe the actual precision in along-scan direction per CCD (Fig. \[figure:sig\_gmag\], blue line) by applying a factor of $7.75$ and adding an offset of $100\,\mathrm{\mu as}$. Further, we adjust $z$ (Eq. (\[equation:z\_original\])) to be constant for $G < 14\,\mathrm{mag}$ (Fig. \[figure:sig\_gmag\], green dotted line). These adjustments are done heuristically. We note that we overestimate the precision for bright sources; however, most of the background sources, which carry the astrometric microlensing signal, are fainter than $G = 13\,\mathrm{mag}$. For those sources the assumed precision is slightly worse than the precision actually achieved for [*Gaia*]{} DR2. Finally, we assume that during each field-of-view transit all nine (or eight) CCD observations are usable. Hence, we divide the CCD precision by $\sqrt{N_{CCD}} = 3\, (\text{or } 2.828)$ to determine the error in along-scan direction per field-of-view transit: $$\sigma_{AL} =\frac{\left(\sqrt{-1.631 + 680.766 \cdot \tilde{z} + 32.732 \cdot \tilde{z} ^{2}}\cdot 7.75+100\right)}{\sqrt{N_{CCD}}}\, \mathrm{\mu as}$$ with $$\tilde{z} = 10^{(0.4\,(\max(G,\,14) - 15))}$$ In across-scan direction we assume a precision of $\sigma_{AC} = 1"$. This is only a rough estimate used in the simulation, since only the along-scan component is used in the fitting routine.
For each star and each field-of-view transit we pick a value from a 2D Gaussian distribution with $\sigma_{AL}$ and $\sigma_{AC}$ in along-scan and across-scan direction, respectively, as positional measurement. Finally, the data of all resolved measurements are forwarded to the fitting routine. These contain the positional measurements $(\alpha,\,\delta)$, the error in along-scan direction ($\sigma_{AL}$), the epoch of the observation ($t$), the current scanning direction ($\Theta$), as well as an identifier for the corresponding star (i.e. whether the measurement corresponds to the lens or the source star). Mass reconstruction {#section:reconstruction} ------------------- To reconstruct the mass of the lens we fit equation (\[equation:microlensing\]) (including the dependencies of Eqs. (\[equation:Einsteinradius\]) and (\[equation:motion\])) to the data of the lens and the source simultaneously. We therefore use a weighted least-squares method. Since [*Gaia*]{} only measures precisely in along-scan direction, we compute the weighted residuals $r$ as follows: $$r =\frac{\sin\Theta\,(\alpha_{model} - \alpha_{obs})\cdot\cos{\delta}+ \cos\Theta\, (\delta_{model} - \delta_{obs})} {\sigma_{AL}}, \label{equation:residuen}$$ while ignoring the across-scan component. The least-squares method uses a Trust-Region-Reflective algorithm [@S1064827595289108], which we also provide with the analytic form of the Jacobian matrix of equation (\[equation:residuen\]) (including all inner dependencies from Eqs. (\[equation:Einsteinradius\]), (\[equation:motion\]) and (\[equation:microlensing\])). We do not exclude negative masses since, due to the noise, there is a non-zero probability that the determined mass will be below zero. As initial guess, we use the first data point of each star as position, along with zero parallax, zero proper motion, as well as a mass of $M = 0.5 {\,\mathrm{M_{\odot}}}{}$.
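The weighted along-scan residual of Eq. (\[equation:residuen\]) translates directly into code; the least-squares step itself could then be run with, for example, `scipy.optimize.least_squares(..., method='trf')`, SciPy's Trust-Region-Reflective implementation. A sketch (names are ours; small tangent-plane offsets are assumed, so $(\alpha_{model}-\alpha_{obs})\cos\delta$ is the offset in right ascension):

```python
import math

def al_residual(theta_deg, alpha_model, delta_model,
                alpha_obs, delta_obs, sigma_al, dec_deg):
    """Weighted residual in the along-scan direction only; the across-scan
    component is ignored, as in the fitting routine described above."""
    t = math.radians(theta_deg)
    cos_dec = math.cos(math.radians(dec_deg))
    return (math.sin(t) * (alpha_model - alpha_obs) * cos_dec
            + math.cos(t) * (delta_model - delta_obs)) / sigma_al
```

For $\Theta = 0\degree$ only the declination offset contributes; for $\Theta = 90\degree$ only the right-ascension offset does, which is why a good mix of scan angles is needed to constrain both coordinates.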
Data analysis {#section:data_analysis} ------------- In order to determine the precision of the mass determination we use a Monte Carlo approach. We first create a set of error-free data points using the astrometric parameters provided by [*Gaia*]{} and the mass based on the G magnitude. We then create 500 sets of observations by randomly picking values from the error ellipse of each data point. We also include a 5% chance that a data point is missing or is clipped as an outlier. From the sample of 500 reconstructed masses, we determine the 15.8th, 50th, and 84.2nd percentiles (see Fig. \[figure:mass\_distributuions\]). These represent the $1\sigma$ regime. We note that a real observation will give us one value from the determined distribution, and not necessarily a value close to the true value or close to the median value. But the standard deviation of this distribution will be similar to the error of real measurements. Further, the median value gives us an insight into whether we can reconstruct the correct value. To determine the influence of the input parameters, we repeat this process 100 times while varying the input parameters within the individual error distributions. This additional analysis is only done for events where the first analysis, using the error-free values from [*Gaia*]{} DR2, leads to a $1\sigma$ uncertainty smaller than the assumed mass of the lens. Results {#chapter:Result} ======= Single background source ------------------------ Using the method described in section 4, we determine the scatter of individual fits. The scatter gives us an insight into the reachable precision of the mass determination using the individual [*Gaia*]{} measurements. In our analysis we find three different types of distribution. For each of these a representative case is shown in Figure \[figure:mass\_distributuions\]. For the first two events (Fig.
\[figure:mass\_distributuions\](a) and \[figure:mass\_distributuions\](b)), the scatter of the distributions, calculated via the 50th percentile minus the 15.8th percentile and the 84.2nd percentile minus the 50th percentile, is smaller than 15% and 30% of the assumed mass, respectively. For such events it will be possible to determine the mass of the lens once the data are released. For the event of Figure \[figure:mass\_distributuions\](c) the precision is of the same order as the mass itself. For such events the [*Gaia*]{} data are affected by astrometric microlensing, but the data are not good enough to determine a precise mass. By including further data, for example observations by the Hubble Space Telescope during the peak of the event, a good mass determination might be possible. This is of special interest for upcoming events in the next years. If the scatter is much larger than the mass itself, as in Figure \[figure:mass\_distributuions\](d), the mass cannot be determined using the [*Gaia*]{} data. In this analysis we test 530 microlensing events, predicted for the epochs J2014.5 until J2026.5 by . Using data for the potential 10-year extended [*Gaia*]{} mission, we find that the mass of 16 lenses can be reconstructed with a precision of $15\%$ or better. A further 26 events can be reconstructed with an accuracy better than $30\%$, and an additional 41 events with a precision better than $50\%$ (i.e. $16+26+41 = 83$ events can be reconstructed with an error smaller than $50\%$ of the mass). The percentage of events where we can reconstruct the mass increases with the mass of the lens (see Fig. \[figure:hist\_mass\_accuracy\]). This is not surprising, since a larger lens mass results in a larger microlensing effect. However, due to the larger fraction of stars with masses below $0.65{\,\mathrm{M_{\odot}}}{}$, the masses of low-mass stars can also be reconstructed with a small relative error ($< 15\%$).
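For reference, the percentile bookkeeping of the Monte Carlo analysis (Sect. \[section:data\_analysis\]) can be sketched as follows; the full refit of each of the 500 observation sets is stubbed here by a Gaussian draw around the true mass, and all names, as well as the stub itself, are ours:

```python
import random

def percentile(sorted_vals, p):
    """Linear-interpolation percentile of a sorted list, p in [0, 100]."""
    k = (len(sorted_vals) - 1) * p / 100.0
    lo = int(k)
    hi = min(lo + 1, len(sorted_vals) - 1)
    return sorted_vals[lo] + (sorted_vals[hi] - sorted_vals[lo]) * (k - lo)

def mass_scatter(true_mass, rel_err, n_sets=500, seed=1):
    """Return the 15.8th, 50th and 84.2nd percentiles (the 1-sigma regime)
    of n_sets 'reconstructed' masses; a real pipeline would refit each
    noisy observation set instead of drawing from a Gaussian."""
    rng = random.Random(seed)
    masses = sorted(rng.gauss(true_mass, rel_err * true_mass)
                    for _ in range(n_sets))
    return tuple(percentile(masses, p) for p in (15.8, 50.0, 84.2))
```

The half-widths (median minus lower percentile, upper percentile minus median) are exactly the scatter quantities compared against 15% and 30% of the assumed mass above.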
Using only the data of the nominal 5-year mission we observe the same trend. However, due to the fewer data points and the fact that most of the events reach the maximal deflection after the end of the nominal mission (2019.5), the fraction of events with a certain precision of the mass reconstruction is much smaller (see Fig. \[figure:hist\_mass\_accuracy\], bottom panel). The mass can then only be determined for $3$, $3+ 6 = 9$, and $3 + 6 + 9 = 18$ events with an accuracy better than $15\%$, $30\%$ and $50\%$, respectively. For 137 events, where the expected precision is better than $100\%$, we expect that [*Gaia*]{} is able to at least qualitatively detect the astrometric deflection. For those we repeat the analysis while varying the input parameters for the data simulation. Figure \[figure:plot\_mass\_accuracy\] shows the achievable precision as a function of the input mass for a representative subsample. When the proper motion of the background star is known from [*Gaia*]{} DR2, the uncertainty of the achievable precision is about $6\%$, and about $10\%$ when the proper motion is unknown. We find that the reachable precision (in solar masses) depends only weakly on the input mass and is more closely connected to the astrometric input parameters. Hence, the scatter of the achievable precision is smaller when the proper motion and parallax of the background source are known from [*Gaia*]{} DR2. For the 83 events with a precision better than 50%, Table \[table:single\] (better than 30%) and Table \[table:single2\] (30% to 50%) list the achievable precision for each individual star as well as the determined scatter, for the nominal mission as well as for the extended mission. Future events {#future-events .unnumbered} ------------- In our sample, 393 events have a closest approach after 2019.5 (Fig. \[figure:hist\_mass\_accuracy\], middle panel).
These events are of special interest, since it is possible to obtain further observations using other telescopes and combine the data. Naively, one might expect that about 50% of the events should be after this date (assuming a constant event rate per year). However, the events with a closest approach close to the epochs used for [*Gaia*]{} DR2 are more difficult to treat by the [*Gaia*]{} reduction (e.g. fewer observations due to blending). Therefore many background sources are not included in [*Gaia*]{} DR2. For 33 of these future events, the achievable precision is between 15 and 50 percent. In combination with further precise measurements around the closest approach, an even higher precision can be reached. ![Distribution of the assumed masses and the resulting mass determination precision of the investigated events. Top panel: Using the data of the extended 10-year mission. Middle panel: Events with a closest approach after mid 2019. Bottom panel: Using only the data of the nominal 5-year mission. The grey, red, yellow and green parts correspond to a precision of the mass determination better than 100%, 50%, 30% and 15% of the assumed mass. The thick black line shows the distribution of the input sample, where the numbers at top show the number of events in the corresponding bins. The thin black line in the bottom panel shows the events during the nominal mission. The peak at $0.65{\,\mathrm{M_{\odot}}}{}$ is caused by the sample of white dwarfs. The bin size increases by a constant factor of 1.25 from bin to bin.[]{data-label="figure:hist_mass_accuracy"}](mass_accuracy_histogram_10years.png "fig:"){width="9.1cm"} ![Middle panel of Fig. \[figure:hist\_mass\_accuracy\] (see caption above).](mass_accuracy_histogram_future.png "fig:"){width="9.1cm"} ![Bottom panel of Fig. \[figure:hist\_mass\_accuracy\] (see caption above).](mass_accuracy_histogram_5years.png "fig:"){width="9.1cm"} ![\[figure:plot\_mass\_accuracy\] Achievable precision as a function of the input mass for 15 events. The two red events with a wide range for the input mass are white dwarfs, where the mass can only poorly be determined from the G magnitude. The achievable precision is roughly constant as a function of the input mass.
The diagonal lines indicate precisions of 15%, 30%, 50% and 100%, respectively.](mass_accuracy_plot_mass_lens){width="9cm"} Multiple background sources --------------------------- For the 22 events with multiple background sources, we test three different cases: firstly, we use all potential background sources; secondly, we only use background sources for which [*Gaia*]{} DR2 provides all 5 astrometric parameters; and finally, we select only those background sources for which the expected precision of [*Gaia*]{} is better than $0.5\,\mathrm{mas}$. The expected precisions of the mass determinations for the different cases are shown in Figures \[figure:diff\_ideas\_prox\] and \[figure:diff\_ideas\_all\], along with the expected precision for the best case using only one background source. By using multiple background sources, a better precision of the mass determination can be reached. We note that averaging the results of the individual fitted masses will not necessarily increase the precision, since the values are highly correlated. Using all sources, it is possible to determine the mass of Proxima Centauri with a precision of $\sigma_{M} =0.012{\,\mathrm{M_{\odot}}}{}$ for the extended 10-year mission of [*Gaia*]{}. This corresponds to a relative error of $10\%$, considering the assumed mass of $M = 0.117{\,\mathrm{M_{\odot}}}{}$. This is roughly a factor $\sim0.7$ better than the precision of the best event alone (see Fig. \[figure:diff\_ideas\_prox\], top panel, $\sigma_{M} = 0.019{\,\mathrm{M_{\odot}}}{}\widehat{=}16\%$). Since we do not include the potential data points of the two events predicted by @2014ApJ...782...89S, it might be possible to reach an even higher precision. For those two events, @2018MNRAS.480..236Z measured the deflection using the VLT equipped with the SPHERE instrument. They derived a mass of $M=0.150^{+0.062}_{-0.051}{\,\mathrm{M_{\odot}}}{}$. Comparing our expectations with their mass determination, we expect to reach a six times smaller error.
A further source which passes multiple background sources is the white dwarf LAWD 37, for which we assume a mass of $0.65{\,\mathrm{M_{\odot}}}{}$. Its most promising event, first predicted by [@10.1093/mnrasl/sly066], is in November 2019. [@10.1093/mnrasl/sly066] also mentioned that [*Gaia*]{} might be able to determine the mass with an accuracy of $3\%$; however, this was done without knowing the scanning law for the extended mission. We expect a precision of the mass determination by [*Gaia*]{} of $0.12{\,\mathrm{M_{\odot}}}{}$, which corresponds to $19\%$. Within the extended [*Gaia*]{} mission the star passes by 12 further background sources. By combining the information of all astrometric microlensing events by LAWD 37, this result can be improved slightly (see Fig. \[figure:diff\_ideas\_prox\], bottom panel). Here we expect a precision of $0.10\,{\,\mathrm{M_{\odot}}}{}$ ($16\%$). For 8 of the 22 lenses with multiple events the expected precision is better than 50%. The results of these events are given in Table \[table:multi\]. For three further events the expected precision is between 50% and 100% (Figs. \[figure:diff\_ideas\_all\](g) to \[figure:diff\_ideas\_all\](i)). In addition to our three cases, a more detailed selection of the used background sources could be made; however, this is only meaningful once the quality of the real data is known. Summary and Conclusion {#chapter:conclusion} ====================== In this work we showed that [*Gaia*]{} can determine stellar masses for single stars using astrometric microlensing. For that purpose we simulated the individual [*Gaia*]{} measurements for 530 predicted events during the [*Gaia*]{} era, using conservative cuts on the resolution and precision of [*Gaia*]{}. In this study we did not consider orbital motion; however, orbital motion can be included in the fitting routine for the analysis of the real [*Gaia*]{} measurements.
[*Gaia*]{} DR3 (end of 2022) will include orbital parameters for a fraction of the contained stars. This information can be used to decide whether orbital motion has to be considered or not. We also assumed that source and lens can only be resolved if both have individual readout windows. However, it might be possible to measure the separation in along-scan direction even from the blended measurement in one readout window. Due to the full width at half maximum of $103\,\mathrm{mas}$, [*Gaia*]{} might be able to resolve much closer lens-source pairs. The astrometric microlensing signal of such measurements is stronger. Hence, the results for events with impact parameters smaller than the window size can be improved by a careful analysis of the data. Via a Monte Carlo approach we determined the expected precision of the mass determination and found that for 42 events a precision better than $30\%$, and sometimes down to $5\%$, can be achieved. By varying the input parameters we found that our results depend only weakly on the selected input parameters. The scatter is of the order of $6 \%$ if the proper motion of the background star is known from [*Gaia*]{} DR2, and of the order of $10 \%$ if the proper motion is unknown. The dependency on the selected input mass is even weaker. For 26 future events (closest approach after 2019.5), the [*Gaia*]{} data alone are not sufficient to derive a precise mass. For these events it will be helpful to take further observations using, for example, the Hubble Space Telescope, the Very Large Telescope or the Very Large Telescope Interferometer. Such two-dimensional measurements can easily be included in our fitting routine by adding two observations with perpendicular scanning directions. Hence, this study can help to select favourable targets for upcoming observations.
The combination of [*Gaia*]{} data and additional information might also lead to better mass constraints for the two previously observed astrometric microlensing events of Stein 2051 B [@2017Sci...356.1046S] and Proxima Centauri [@2018MNRAS.480..236Z]. For the latter, [*Gaia*]{} DR2 does not contain the background sources, but we are confident that [*Gaia*]{} has observed both background stars. By combining the information from multiple background sources, [*Gaia*]{} can then determine the mass of Proxima Centauri with a precision of $0.012\,{\,\mathrm{M_{\odot}}}{}$. This work has made use of results from the ESA space mission [*Gaia*]{}, the data from which were processed by the [*Gaia*]{} Data Processing and Analysis Consortium (DPAC). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the [*Gaia*]{} Multilateral Agreement. The [*Gaia*]{} mission website is: <http://www.cosmos.esa.int/Gaia>. Two of the authors (U.B., J.K.) are members of the [*Gaia*]{} Data Processing and Analysis Consortium (DPAC). This research has made use of the SIMBAD database, operated at CDS, Strasbourg, France. This research made use of Astropy, a community-developed core Python package for Astronomy [@refId0]. This research made use of matplotlib, a Python library for publication quality graphics [@Hunter:2007]. This research made use of SciPy [@jones_scipy_2001]. This research made use of NumPy [@van2011numpy].
[^1]: [*Gaia*]{} observation forecast tool,\ <https://gaia.esac.esa.int/gost/> [^2]: [*Gaia*]{} Data Release Documentation - The scanning law in theory\ <https://gea.esac.esa.int/archive/documentation/GDR2/Introduction/chap_cu0int/cu0int_sec_mission/cu0int_ssec_scanning_law.html> [^3]: [*Gaia*]{} Data Release Documentation - Datamodel description\ <https://gea.esac.esa.int/archive/documentation/GDR2/Gaia_archive/chap_datamodel/sec_dm_main_tables/ssec_dm_gaia_source.html>
--- abstract: 'This paper investigates the relation between Toda brackets and congruences of modular forms. It determines the $f$-invariant of Toda brackets and thereby generalizes the formulas of J.F. Adams for the classical $e$-invariant to the second chromatic filtration.' address: ' Fakultät für Mathematik, Ruhr-Universität Bochum, NA1/66, D-44780 Bochum, Germany' author: - Gerd Laures bibliography: - 'toda.bib' title: Toda brackets and congruences of modular forms --- Introduction ============ In this work congruences between modular forms are used to compute Toda brackets in the second Adams-Novikov filtration. The translation from stable homotopy classes of spheres to modular forms is based on the tertiary invariant $$f: \xymatrix{\pi^{st}_{2n-2}\ar[r]& (D/M_0+M_n)_{{{\mathbb Q}}/ {{\mathbb Z}}}}$$ defined in [@MR1660325] with values in Katz’s ring of divided congruences. The $f$-invariant generalizes the classical $e$-invariant of Frank Adams. The purpose of this work is an extension of Adams’ formulas [@MR0198470 §11] for the $e$-invariant of Toda brackets to the second filtration: $$\begin{aligned} e\left< \alpha, p\iota , \beta \right> &=& -pe(\alpha)e(\beta)\\ e\left< p\iota, \alpha , \beta \right> &=& -p\delta e(\alpha)e(\beta)\\ e\left< \alpha , \beta , p\iota \right> &=& -p\delta e(\alpha)e(\beta)\end{aligned}$$ modulo the indeterminacy. Here, $\delta$ is a certain number which only depends on the dimensions. The $e$-invariant is not strong enough to detect elements in the second filtration. The reason is the following: the $e$-invariant is based on $K$-theory, and in the $K$-based Adams-Novikov spectral sequence there aren’t any classes in the second line other than classes in the image of $J$. In [@MR1660325] $K$-theory was replaced by the elliptic cohomology theory of level $N$ (see [@MR1071369][@MR1235295][@MR1271552]). It was shown that the corresponding tertiary invariant $f$ is injective on the 2-line.
So, it is natural to ask for a precise relationship between the $f$-invariant and Toda brackets. The 2-line of the classical Adams-Novikov spectral sequence was computed for odd primes by Miller, Ravenel and Wilson in [@MR0458423] with the help of the chromatic spectral sequence, and by Shimomura for $p=2$ in [@MR635034]. In contrast to the 1-line it is not cyclic. The number of generators is unbounded when considered as a function of the dimension. Its complicated structure makes it hard to determine classes such as Massey products. The $f$-invariant simplifies the 2-line in two steps: first the complex orientation gives an injective map into the elliptic-based $E_2$-term, and in the second step the extension group is mapped to Katz’s ring of divided congruences. In this paper it is reinterpreted as a coboundary of a resolution which can be viewed as an integral version of a chromatic resolution. The paper starts with a recollection on higher homotopy invariants in general. Here, the edge homomorphisms are used to construct invariants for stable homotopy classes with the help of any theory $E$. Under mild conditions the first few invariants take values in the $E$-based Adams-Novikov $E_2$-term. Then the relation between Toda brackets and Massey products is explained. Adams considered Massey products for extensions of exact sequences. However, it turns out that the cobar complex is better suited to keep the homological algebra small for higher filtrations. This holds especially when a ${{\mathbb Q}}/{{\mathbb Z}}$-reduction is used to come down by one filtration. A computational section follows, in which the commutation between the coboundary operator and Toda brackets is investigated. Proposition \[image\] gives a precise translation of the elliptic Adams-Novikov $E_2$-term by primitives in the ring of divided congruences. It comes from a resolution which is closely related to the chromatic resolution.
Finally, the desired formulas for the $f$-invariants of Toda brackets are given. In the last section some examples are calculated, including the Kervaire class in dimension 30 as a 4-fold Toda bracket. Higher Homotopy invariants ========================== Suppose $E$ is a flat ring spectrum. Let $\bar{E}$ be the fibre of the unit map $S^0{\longrightarrow}E$. Then there is a spectral sequence $(E_r,d_r)$ with the associated filtration of $\pi_*^S$ given for $s\geq 0$ by $$F^s = \mbox{im} (\pi_* \bar{E}^s {\longrightarrow}\pi_* S^0).$$ The differential $d_r$ raises the filtration by $r$. Hence there are well defined ‘edge homomorphisms’ $$e_s: \xymatrix {F^s \ar[r]&\ F^s/F^{s+1} \ar@{>->}[r]&E_\infty^s \ar@{>->}[r]& E^s_{s+1}}.$$ For a stable class $\alpha$ the value $e_s(\alpha)$ is defined if and only if $e_r(\alpha)$ vanishes for all $r< s$. It associates to $\alpha$ its representative in the $E^s_{s+1}$-term. The invariants are natural with respect to maps between spectral sequences. In case $E$ is complex bordism $MU$, this is the classical Adams-Novikov spectral sequence. Its quotient group $F^0/F^1$ is concentrated in dimension 0 and the invariant $$e_0:\xymatrix{\pi_0^S=F^0\ar[r] &\pi_0(MU)={{\mathbb Z}}}$$ takes values in the integers. It associates to a stable self-map of $S^0$ its degree $d$. In fact, we may replace $E$ by any other spectrum with the property that the Hurewicz homomorphism is injective in dimension 0. If $E$ has torsion coefficients the map $e_0$ can be non-trivial in positive dimensions. For instance, this happens for real $K$-theory in dimensions $8k+1$ and $8k+2$. The invariant $e_1$ has been studied by Adams for real and complex $K$-theory in [@MR0198470]. It can be described in terms of exact sequences as follows: Suppose the $e_0$-invariant of $\alpha\in \pi_n^S $ vanishes and let $C_\alpha$ be the cofibre of $\alpha$.
Then the sequence $$\xymatrix{E_*S^0 \ar@{>->}[r]& E_*C_\alpha \ar@{->>}[r]& E_*S^{n+1}}$$ of $E_*E$-comodules is short exact and hence determines an element in $$E^{1,n+1}_2\cong\mbox{Ext}^{1,n+1}_{E_*E}(E_*,E_*).$$ Adams determines the extension group for $K$-theory with the help of a monomorphism $$\iota: \xymatrix{E_2^{1,n+1} \ar@{>->}[r]&{{\mathbb Q}}/{{\mathbb Z}}}.$$ In particular, the extension term is cyclic. For $K$-theory its order in dimension $n=4t-1$ is the denominator of the divided Bernoulli number $B_t/2t$. The composite $\iota e_1$ is called the classical $e$-invariant. For variations of the topological modular forms spectrum $TMF$ the invariant $e_2$ has been studied (in chronological order) by the author [@MR1660325][@MR1781277], by Hornborstel-Naumann [@MR2354323], by Behrens-Laures in [@MR2544384] and by von Bodecker [@vB09a][@vB09c]. For its relation to characteristic classes of manifolds with corners see Laures [@MR1781277] and for an interpretation as a spectral invariant see von Bodecker [@vB09b] and Bunke-Naumann [@BN09]. Before describing the invariant we first take a look at the case of complex bordism. Here, for positive even $n$ the $e_1$-invariant vanishes. Moreover, there are no $d_2$-differentials hitting the 2-line. Thus the $E_3^2$-term injects into the $E_2^2$ term. This yields a well defined homomorphism $$e_2: \xymatrix{\pi_n^S\ar[r] & E_2^{2,n+2}}=\mbox{Ext}^{2,n+2}_{E_*E}(E_*,E_*).$$ Let us describe this invariant in terms of exact sequences in more detail: observe that any lift $\bar{\alpha}$ of a stable class $\alpha$ is unique since $E_*$ and $E_*E$ are evenly graded. Moreover, the sequence $$\xymatrix{E_*\Sigma \bar{E} \ar@{>->}[r]& E_*\Sigma C_{\bar{\alpha}} \ar@{->>}[r]& E_*S^{n+2}}$$ is short exact and can be spliced together with $$\xymatrix{E_*S^{0} \ar@{>->}[r]& E_* E \ar@{->>}[r]& E_*\Sigma\bar{E}}$$ to get an extension of degree 2. 
Next suppose $E$ is complex oriented with $(p,v_1)$ regular and $v_2$ invertible mod $(p,v_1)$. Then locally at $p$ the 2-line of $E$ coincides with the 2-line of $v_2^{-1}MU_{(p)}$ [@MR1781277 4.3.3] and there is a well understood injection [@MR860042] $$\xymatrix{E^2_2(MU_{(p)}) \ar@{>->}[r]&E^2_2(v_2^{-1}MU_{(p)})\cong E_2^2 (E)}.$$ In the case of $\Gamma_1(N)$ topological modular forms $E=TMF_1(N)$ there is an injection $$\xymatrix{E_2^{2,2t} \ar@{>->}[r] & (D/M_0+M_t)_{ {{\mathbb Q}}/{{\mathbb Z}}}}$$ where $D$ is the ring of divided congruences and $M$ the ring of modular forms. This map will be reviewed in section \[invariants\]. Furthermore, proposition \[image\] describes its precise image. The $e_3$-invariant is defined for odd-dimensional homotopy classes in the cokernel of the $J$-homomorphism. It takes values in the $E_2$-term since all elements in the 1-line are permanent cycles. Hence the invariant $$e_3: \xymatrix{\mbox{coker} J\ar[r]& E_2^3=\mbox{Ext}^3_{E_*E}(E_*,E_*)}$$ is well defined. It will not be further investigated here. Toda brackets and Massey products ================================= In the sequel we assume that $E$ is a complex oriented flat ring theory. The $E_1$-term of the $E$-based Adams-Novikov spectral sequence is the cobar complex $\Omega_*$ which is a differential graded algebra. We use the notation $\Gamma$ for the comodule $E_*E$ over $A=E_*$ and $\bar{\Gamma}$ for the augmentation ideal $E_*\Sigma\bar{E}$. Then the differential $$d_1: \xymatrix{\bar{\Gamma}^{\otimes s} \ar[r]&\bar{\Gamma}^{\otimes (s+1)}}$$ sends $[\alpha_1|\cdots |\alpha_s]$ to $[1|\alpha_1|\cdots |\alpha_s] -[\alpha_1'|\alpha_1''|\cdots |\alpha_s] +\ldots +(-1)^{s+1}[\alpha_1|\cdots |\alpha_s|1]$ and the product is given by concatenation. In fact, each $(E_r,d_r)$ is a differential graded algebra in which Massey products can be defined. Massey products in the cobar complex are related to Toda brackets in stable homotopy.
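For orientation, writing out the differential above in the two lowest cobar degrees (our unravelling, with $\alpha'\otimes\alpha''$ abbreviating the summands of the reduced diagonal of $\alpha$) gives:

```latex
% s = 1:
d_1[\alpha] \;=\; [1|\alpha] - [\alpha'|\alpha''] + [\alpha|1]
% s = 2:
d_1[\alpha|\beta] \;=\; [1|\alpha|\beta] - [\alpha'|\alpha''|\beta]
                  + [\alpha|\beta'|\beta''] - [\alpha|\beta|1]
```

These alternating signs are the conventions used below when manipulating defining systems of Massey products.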
The relation between Massey products in the cobar complex and Toda brackets has been studied by Adams, Moss, Lawrence, May, Kochman and others and is well known. Recall from [@MR1407034]: Suppose $u\in E_r^{s,t+s}$ and $w\in E_{r'}^{s',t+s'}$ with $s'<s$. Then $d_{r'}(w)$ is a crossing differential of $d_r(u)$ if $s'+r'>s+r$. The following result reformulates the result [@MR1407034 5.7.5] for the homotopy invariants of the last section. \[toda-massey\] Suppose that the Toda bracket $\left<\alpha_1, \ldots ,\alpha_n \right>$ is strictly defined. Let $ a_i$ be a representative of $e_{s(i)}(\alpha_i)$ in the $MU$-based $E^{s(i)}_{r+1}$ for some $r$. Suppose further that the Massey product $\left< a_1,\ldots, a_n\right>$ is defined in $E_{r+1}$ and that there are no crossing differentials of $d_r a_{ij}$ for all defining systems $(a_{ij})$. Then for $s=\sum_is(i)$ there is a class $\alpha \in \left<\alpha_1,\ldots ,\alpha_n \right>$ of filtration $s-n+2$ with $$e_{s-n+2}(\alpha)\in \left< a_1,\ldots, a_n\right>.$$ Since $a_i$ converges to $\alpha_i$, we can apply [@MR1407034 5.7.5]. Hence $\left< a_1,\ldots, a_n\right>$ consists of infinite cycles which converge to elements of $\left<\alpha_1, \ldots ,\alpha_n \right>$. Since all elements have filtration $s-n+2$, the claim follows. Under the assumptions of \[toda-massey\] we have $$e_{s-n+2}(\alpha)\in \left< a_1,\ldots, a_n\right>$$ for all spectral sequences based on a complex oriented theory $E$. This follows from the naturality of all constructions. The proposition suggests that for the $e_k$-invariant of $n$-fold Toda brackets one should consider only sums of filtrations $s=k+n-2$. The cases for $n=3$ and $k=0,1$ were considered by Adams in [@MR0198470 Theorem 5.3]. The case $k=2$ is the object of the rest of the paper. In the next section triple Massey products are considered. Note that in this case there aren’t any crossing differentials in the defining system as long as there aren’t any $r$-boundaries in the 3-line for $r\geq 2$.
Moreover, if the Toda bracket $\left< \alpha_1,\alpha_2, \alpha_3\right>$ is defined then so is the Massey product $\left< a_1,a_2, a_3\right>$. \[indet\] 1. Suppose that the Toda bracket $\left< \alpha_1,\alpha_2, \alpha_3\right>$ is defined. Then $$e_2 \left< \alpha_1,\alpha_2, \alpha_3\right>=\left< e_{s_1}\alpha_1,e_{s_2}\alpha_2, e_{s_3}\alpha_3\right>$$ modulo $$e_{s_1}\alpha_1 H^{s_2+s_3-1,|\alpha_2|+s_2+|\alpha_3|+s_3}(\Omega_*)+ H^{s_1+s_2-1,|\alpha_1|+s_1+|\alpha_2|+s_2}(\Omega_*)e_{s_3}\alpha_3.$$ 2. Suppose that the Toda bracket $\left< \alpha_1,\alpha_2, \alpha_3,\alpha_4\right>$ is strictly defined and that there are no crossing differentials. Then $$e_2 \left< \alpha_1,\alpha_2, \alpha_3,\alpha_4\right>=\left< e_{s_1}\alpha_1,e_{s_2}\alpha_2, e_{s_3}\alpha_3,e_{s_4}\alpha_4 \right>$$ modulo $$\sum_{a,b,c}\left< a, e_{s_3}\alpha_3,e_{s_4}\alpha_4\right>+\left< e_{s_1}\alpha_1,b,e_{s_4}\alpha_4\right>+\left< e_{s_1}\alpha_1,e_{s_2}\alpha_2,c\right>.$$ Since the invariant $e_2$ is multiplicative, it maps the indeterminacy of the Toda bracket $$\alpha_1 \pi^S_{|\alpha_2|+|\alpha_3|+1}+\pi^S_{|\alpha_1|+|\alpha_2|+1}\alpha_3$$ to the indeterminacy of the Massey product stated above. The same holds for the 4-fold products, for which the indeterminacy can be found for the Toda bracket in [@MR1052407 theorem 2.3.1] and for the Massey product in [@MR0238929 proposition 2.4]. The ${{\mathbb Q}}/{{\mathbb Z}}$-reduction =========================================== Let $(A,\Gamma)$ be a Hopf algebroid. In this section we assume that $(A,\Gamma)$ has the following properties: 1. $\Gamma$ is flat as an $A$-module. 2. $A$ and $\Gamma$ are torsion-free. 3. The map $\phi: A^{\otimes 2}_{{\mathbb Q}}\rightarrow \Gamma_{{\mathbb Q}}$ which sends $a\otimes b$ to $a\eta_R(b)$ is an isomorphism.
Define the cosimplicial abelian group $\Omega_{{\mathbb Q}}^n=A_{{\mathbb Q}}^{n+1}$ with cofaces $$\partial^i (a_0\otimes \ldots \otimes a_n)=a_0\otimes \ldots \otimes a_{i-1}\otimes 1\otimes a_{i}\otimes \ldots\otimes a_n.$$ The notation is justified by the fact that the map $\phi^{\otimes n}$ provides an isomorphism between the complex $\Omega_{{\mathbb Q}}$ and the rationalized cobar complex. Its cohomology is concentrated in dimension 0 $$H^*(\Omega_{{\mathbb Q}}^*)=H^0(\Omega_{{\mathbb Q}}^*)={{\mathbb Q}}.$$ Its algebra structure is given by $$(a_0\otimes \ldots \otimes a_n)\otimes(b_0\otimes \ldots \otimes b_m)\mapsto a_0\otimes \ldots \otimes a_{n-1} \otimes a_nb_0\otimes b_1\otimes \ldots \otimes b_m.$$ The short exact sequence $$\xymatrix{\Omega^*\ar@{>->}[r]& \Omega^*_{{\mathbb Q}}\ar@{->>}[r]& \Omega^*_{{{\mathbb Q}}/{{\mathbb Z}}}}$$ yields a connecting isomorphism $\delta : H^{n-1}(\Omega^*_{{{\mathbb Q}}/{{\mathbb Z}}}) {\rightarrow}H^{n}(\Omega^*)$ in positive dimensions. Choose an augmentation $\tau: \Omega_{{\mathbb Q}}{\longrightarrow}{{\mathbb Q}}$. Then the following observation is easily verified. Let $a\in \Omega^n$ be a cycle and suppose $\sum a_0\otimes \ldots \otimes a_n\in \Omega^{n}_{{\mathbb Q}}$ is its rationalization. Then we have $$\delta^{-1} [a]=[\sum \tau(a_0)a_1\otimes a_2 \otimes \ldots \otimes a_n].$$ Moreover, when writing $b'=\sum \tau(b_0) b_1 \otimes \ldots \otimes b_n$ for $b \in \Omega^n_{{\mathbb Q}}$ we have $$(db)' =b-db'.$$ We are going to study the image of triple Massey products $\left< x,y,z\right>$ under $\delta^{-1}$. For $n=2$ there are 3 cases of interest: $(|y|=0,|x|+|z|=3)$, $(|y|=1,|x|+|z|=2)$ and $(|y|=2, |x|+|z|=1)$. We start with the first case. Suppose $\delta [a]\in H^2(\Omega^*)$ is represented by some $a\in A_{{\mathbb Q}}\otimes A_{{\mathbb Q}}$. We say that $a$ is $q$-adapted for some $q\in {{\mathbb Z}}$ if $qa \in \Gamma$. \[4.3\] $q$-torsion classes admit $q$-adapted representatives.
Choose a representative $a$ which is not yet $q$-adapted. Since $\delta [a]$ is $q$-torsion, the element $qa$ represents a boundary in $\Omega^1_{{{\mathbb Q}}/{{\mathbb Z}}}=\Gamma_{{{\mathbb Q}}/{{\mathbb Z}}}$. Hence there is $r\in A_{{\mathbb Q}}$ with $$\tilde{a}=q a +dr \in \Gamma$$ and $\tilde{a}/q$ is a $q$-adapted representative. \[4.4\] Suppose that for some $q\in {{\mathbb Z}}$ the Massey product $\left<\delta [a],q,\delta[c]\right>$ is defined. 1. For $|[a]|=1, |[c]|=0$ and $a$ $q$-adapted we have $$\delta [q a c]\in \left<\delta [a],q,\delta [c]\right>.$$ 2. For $|[a]|=0, |[c]|=1$ and $c$ $q$-adapted we have $$\delta [-q a c]\in \left< \delta [a],q,\delta [c]\right>.$$ For $(i)$ observe that $\delta [a]$ is represented by $d a \in \Gamma^{\otimes 2}$. Hence $q\, da$ is the boundary of $q a\in \Gamma$. Similarly, $\delta[c]$ is represented by $dc\in \Gamma$, and $q\, dc$ is the boundary of $qc\in A$. Thus we obtain the representative of the Massey product $$[da\, qc-qa \, dc]\in \left< \delta [a],q,\delta [c]\right>.$$ The lemma tells us that the desired class is obtained by applying $\tau$ to the first factor in its tensor product expression. Alternatively, one writes the representative as $d(qac)$ and obtains the result. The second formula follows from a similar calculation. We turn to the other cases. \[4.5\] For $s,t\geq 1$ the product of classes $\delta[a]\in H^s \Omega^*,\delta[b]\in H^t\Omega^*$ vanishes if and only if there is an $r\in A_{{\mathbb Q}}^{\otimes {s+t-1}}$ with $$(a \, db +dr) \in \Gamma^{s+t-1}.$$ We write $r=r(a, b)$ for any such element. If $\delta[a]\delta[b]=0$ there is $x\in \Gamma^{s+t-1}$ with $$dx =da \, db.$$ Applying $\tau 1\otimes \ldots \otimes 1$ gives $$x-dx'= (a-da')db$$ and we set $r=x'-a'\, db$. Conversely, we have $$d(a\, db +dr)= da\, db.$$ \[4.6\] Suppose that the Massey product $\left< \delta [a],\delta [b],\delta [c]\right>$ is defined. Then 1.
for $|a|=|b|=|c|=0$ we have $$\delta [a(b\,dc + dr(b,c))+r(a,b)dc]\in \left< \delta [a],\delta [b],\delta [c]\right>$$ 2. for $|a|=1$, $|b|=0$ and $c=q\in{{\mathbb Z}}$ we have $$\delta [q(ab+r(a,b))]\in \left< \delta [a],\delta [b],q\right>.$$ 3. for $a=q\in {{\mathbb Z}}$, $|b|=0$ and $|c|=1$ we have $$\delta [-qr(b,c)]\in \left< q,\delta [b],\delta [c]\right>.$$ 4. If $\left< q,\delta [b],\delta [c]\right>$ with $|b|=1$, $|c|=0$ is defined and $b$ is $q$-adapted we have $$\delta [qr(b,c)]\in \left< q,\delta [b],\delta [c]\right>$$ 5. if $\left< \delta [a],\delta [b],q\right>$ with $|b|=1$, $|a|=0$ is defined and $b$ is $q$-adapted we have $$\delta [-q(ab+r(a,b))]\in \left< \delta [a],\delta [b],q\right>.$$ By the lemma a representative of the first Massey product is $$da\, (b\, dc +dr(b,c))+(a \, db +dr(a,b))\, dc.$$ Applying $\tau 1\otimes 1$ gives $$a(b\, dc+dr(b,c))+r(a,b)\, dc$$ as claimed. For $(ii)$ a representative is $$da\, qb+(a\, db +dr(a,b))q.$$ In the last case it is $$-q(b\, dc+dr(b,c)) +qb\, dc.$$ Again applying $\tau 1 \otimes 1$ gives the results up to boundary terms. The other Massey products are obtained with the help of the lemma as before. In the case $(iv)$ one gets $$q(b \, dc +dr) +(-qb) dc=q\, dr$$ and in the last case $$-da \, qb+(-a\, db-dr)q= -q(d(ab)+dr).$$ We close this section with an example of a 4-fold Massey product. Note that in principle other types and even higher Massey products can be calculated with the same method. \[4.7\] Suppose $a$ and $b$ have degree 1 and are $q$-adapted. Then the following conditions are equivalent: 1. the Massey product $\left< q,\delta [a],q,\delta [b]\right>$ is defined 2. $0\in \left< \delta [a],q,\delta [b]\right>$ 3. there are $r\in A_{{\mathbb Q}}\otimes A_{{\mathbb Q}}$ and $dv,dw\in \Gamma$ with $qab+a\, dw + dv\, b+dr\in \Gamma^{\otimes 2}.$ In this case the 4-fold product contains $\delta (-qr+v\, dw)$.
Moreover, the same assertions hold for $\left< \delta [a],q,\delta [b],q\right>$. A defining system can look like $$\xymatrix@!=0,1pc{q&&da&&q&&db\\&-qa-dv&&-qa-dv&&-qb-dw&\\ &&0&&x&&}$$ In particular a defining system exists if there is an $x\in \Gamma^{\otimes 2 }$ with $$dx = qd(ab)+dv\, db+da\, dw.$$ Applying $\tau 1\otimes1\otimes 1$ gives $x=qab+dv\, b +a\, dw+dr$ for some $r$. This shows the equivalences. Finally, if $x$ is as above the missing corner is $-qr+v\, dw$ up to a boundary. Invariants in modular forms {#invariants} =========================== For a ${{\mathbb Z}}[1/N ]$-algebra $R$ let $M_k (\Gamma_1 (N ))_R$ be the ring of $\Gamma_1(N)$ modular forms of weight $k$ over $R[\zeta_N]$ which are meromorphic at the cusps. Let $TMF_1 (N )$ denote the corresponding spectrum of topological modular forms. Its coefficients $\pi_* TMF_1(N )$ are concentrated in even degrees, and we have $$\pi_{2k}TMF_1 (N ) \cong M_k (\Gamma_1 (N )).$$ The spectrum $TMF_1(N)$ is complex orientable with formal group isomorphic to the formal completion of the universal elliptic curve over the ring of $\Gamma_1(N)$ modular forms. In fact, it is Landweber exact and also carries the names $Ell^{\Gamma_1(N)}$ or $E^{\Gamma_1(N)}$ in the older literature (see [@MR1071369] [@MR1271552] for theories away from the prime 2 and [@MR1235295] [@MR1660325] for the general case). Nowadays one uses the notation $TMF_1(N)$ since it is a complex oriented relative of the spectrum $TMF$ of topological modular forms. It can be obtained as global sections of a sheaf of spectra over some moduli space of elliptic curves and level structures (see [@MR2469520] et al.). Since the congruence subgroup $\Gamma_1(N)$ will be fixed once and for all we will remove it from the notation and write $TMF$ instead of $TMF_1(N)$. Let $D=D(\Gamma_1(N))$ be the ring of divided congruences (cf. [@MR0417059]).
An element in $D$ is a sum $\sum f_k$ of modular forms $f_k\in (M_k)_{{\mathbb Q}}$ with an integral $q$-expansion, that is, an expansion with coefficients in ${{\mathbb Z}}[1/N,\zeta_N]$. The following result is well known (see [@MR1307488] for $p\not=2,3$ or [@MR1781277] [@MR2076927] for the general case). The $K$-homology of $TMF$ vanishes in odd degrees and there is an isomorphism $$\pi_0 K \wedge TMF\cong D.$$ Set $E=TMF$. Consider the resolution of $E_*$ as an $E_*E$-comodule $$\xymatrix@!=2.5pc{& E_*\otimes {{\mathbb Q}}\ar@{->>}[rd]\ar@{-->}[rr]&&E_*\Sigma K \otimes{{\mathbb Q}}/{{\mathbb Z}}\ar@{-->}[rr]\ar@{->>}[rd] &&E_*\Sigma \bar{K} \otimes {{\mathbb Q}}/{{\mathbb Z}}\\ E_*\ar@{>->}[ru] && E_*\otimes {{\mathbb Q}}/{{\mathbb Z}}\ar@{>->}[ru]&&E_*\Sigma \bar{K} \otimes {{\mathbb Q}}/{{\mathbb Z}}\ar[ru]^= }$$ The middle sequence is exact since $E_*=M_*$ is a pure subgroup of $E_*K=D$. The short exact sequences provide connecting homomorphisms $\delta$ for the extension groups. \[image\] Let $P(M)=Hom_{E_*E}(E_*,M)$ denote the primitives of a comodule $M$. Then the sequence $$\xymatrix{ \pi_*\Sigma \bar{K}\otimes {{\mathbb Q}}/{{\mathbb Z}}\ar[r]& P(E_*\Sigma \bar{K}\otimes {{\mathbb Q}}/ {{\mathbb Z}})\ar[r]^{\delta^2} & \mbox{Ext}^2_{E_*E}(E_*,E_*)\ar[r]&0}$$ is exact. Moreover the $f$-invariant is the composite of $e_2$ with the inclusion $$\xymatrix{ \mbox{Ext}^{2,2n}_{E_*E}(E_*,E_*)\cong P(E_{2n-1}\bar{K})/ \pi_{2n-1}\bar{K}\otimes {{\mathbb Q}}/{{\mathbb Z}}\ar@{>->}[r] &(D/M_0+M_{n})_{{{\mathbb Q}}/ {{\mathbb Z}}}}$$ It is not hard to see that the cohomology of $E_*\otimes {{\mathbb Q}}$ is concentrated in degree and dimension 0. In fact, the map $\tau$ gives a contracting homotopy.
Hence, for positive dimensions we have the isomorphism $$\delta: \xymatrix{ \mbox{Ext}^1(E_*, E_*\otimes {{\mathbb Q}}/ {{\mathbb Z}}) \ar[r] & \mbox{Ext}^2(E_*,E_*)}.$$ Furthermore, the middle short exact sequence of the resolution gives the exact sequence $$\xymatrix{P(E_*K\otimes {{\mathbb Q}}/ {{\mathbb Z}}) \ar[r] & P(E_*\Sigma \bar{K}\otimes {{\mathbb Q}}/ {{\mathbb Z}})\ar[r]^{\delta} & \mbox{Ext}^1(E_*,E_*\otimes {{\mathbb Q}}/{{\mathbb Z}})}.$$ We claim that $\delta$ is surjective in positive dimensions. For that it suffices to show that the map into $\mbox{Ext}^1(E_*,E_*K\otimes {{\mathbb Q}}/ {{\mathbb Z}})$ vanishes. This map admits a factorization $$\xymatrix{\mbox{Ext}^1(E_*,E_*\otimes {{\mathbb Q}}/{{\mathbb Z}})\ar[r]& \mbox{Ext}^1(E_*,E_*E\otimes {{\mathbb Q}}/{{\mathbb Z}})\ar[r]&\mbox{Ext}^1(E_*,E_*K\otimes {{\mathbb Q}}/ {{\mathbb Z}})}$$ induced by the ring map $\chi_0: E{\rightarrow}K$ in $E$-homology. Since the middle term vanishes we have shown that $\delta$ is surjective. It remains to identify the first map. We claim that the map $$\xymatrix{ \pi_*K\otimes {{\mathbb Q}}/ {{\mathbb Z}}\ar[r] & P(E_*K\otimes {{\mathbb Q}}/ {{\mathbb Z}})}$$ is an isomorphism. Let $K_T$ be the elliptic theory associated to the Tate curve. Its coefficients are integral Laurent series in a variable $q$ in all even degrees and they vanish in odd degrees. There is the Miller character, that is, a ring map $\chi: E {\rightarrow}K_T$ which is the $q$-expansion map on coefficients. Consider the injective map $$\xymatrix {(\chi \wedge 1 )_* :E_*K \otimes {{\mathbb Q}}/ {{\mathbb Z}}\ar[r]& {K_T}_*K \otimes {{\mathbb Q}}/ {{\mathbb Z}}}$$ which takes $q$-expansions. A primitive in its source gives a primitive in the target if the target is viewed as a comodule over ${K_T}_*K_T$. The primitives in the target all lie in the primitives of the extended comodule ${K_T}_*K_T\otimes {{\mathbb Q}}/{{\mathbb Z}}$ and hence in $\pi_*K_T\otimes {{\mathbb Q}}/{{\mathbb Z}}$.
Since they also lie in ${K_T}_*K\otimes {{\mathbb Q}}/{{\mathbb Z}}$ they must lie in $(\pi_*K)\otimes{{{\mathbb Q}}/ {{\mathbb Z}}}$. This group coincides with $\pi_*(\Sigma \bar{K})\otimes {{{\mathbb Q}}/ {{\mathbb Z}}}$ in positive dimensions and hence the claim follows. The $f$-invariant is the composite of $e_2$ with $$\xymatrix{H^2\Omega_* \ar@{>->}[r] & \Omega_2/d\Omega_1\ar[r]^{\tau 1\otimes 1 }&{\Gamma}/d(M_n)\otimes {{\mathbb Q}}/ {{\mathbb Z}}\ar[r]^{1\wedge \chi_0 } & D/(M_0+M_n)_{{{\mathbb Q}}/ {{\mathbb Z}}} }.$$ As we have seen before the second map gives an inverse of the connecting homomorphism. Hence the claim follows from the commutative diagram $$\xymatrix{ H^1(\Omega_{{{\mathbb Q}}/ {{\mathbb Z}}})&P(E_*\Sigma \bar{K}\otimes {{\mathbb Q}}/{{\mathbb Z}})/ \sim \ar[r] \ar[l]_\delta ^\cong & D/(M_0+M_n)_{ {{\mathbb Q}}/ {{\mathbb Z}}} \\H^1(\Omega_{ {{\mathbb Q}}/ {{\mathbb Z}}}) \ar[u]_=& P(\bar{\Gamma} \otimes {{\mathbb Q}}/{{\mathbb Z}})/\sim \ar[r]\ar[u]_{1\wedge \chi_0} \ar[l]_\delta^\cong & {\Gamma}/d(M_n)_{{{\mathbb Q}}/ {{\mathbb Z}}} \ar[u]_{1\wedge \chi_0}}$$ in which a 1-cycle in the cobar complex of the left lower corner is sent to itself under the composite of the bottom row. We will not study the primitives of $$E_*\Sigma \bar{K}= (D/M_n)_{ {{\mathbb Q}}/ {{\mathbb Z}}}$$ here since they are not needed for Toda brackets and the rest of this work. However, we mention that the primitives are eigenforms under the action of the Hecke operations and they are fixed under the action of the Adams operations (see [@MR1692001 1.1]). This suggests that the elements of the 2-line in the cokernel of $J$ are related to newforms. A precise relationship locally at primes $p\geq 5$ between the 2-line and certain $p$-adic modular forms is described in [@MR2469520]. The following result is due to Bunke and Naumann. It is very useful for explicit calculations. \[Naumann\] 1.
There exists a ${{\mathbb Z}}$-basis $f_0,\ldots, f_{n_k}$ of $M_{k}$ such that $q^i (f_j ) = \delta_{ij}$ for all $i,j$. 2. For a basis as above, the map $$\xymatrix{(D/M_k)_{{{\mathbb Q}}/ {{\mathbb Z}}} \ar[r] & \prod_{i\geq n_k}{{\mathbb Q}}/{{\mathbb Z}}}, \qquad f\mapsto \left( q^\nu(f-\sum_{i=0}^{n_k-1}q^i(f)f_i)\right)_{\nu\geq n_k}$$ is injective. The first part is Lemma 9.2 of [@BN10] and the second part is stated there in terms of $q$-expansions. Clearly, it suffices to check finitely many Fourier coefficients for a modular form in the source. There are upper bounds for this number but we do not work them out here. The $f$-invariant of Toda brackets ================================== In this section we apply our formulas to the $f$-invariant of Toda brackets. We start with the simplest case. \[thm1\] Suppose that the Toda bracket $\left< \alpha , p \iota , \beta \right>$ is defined for some $\alpha, \beta$ in positive dimensions $m,n$ and some $p\in {{\mathbb Z}}$. 1. Let the $f$-invariant of $\alpha$ be defined. Choose a representative of $f(\alpha)$ whose $q$-expansion is annihilated by $p$. Then we have $$f\left< \alpha , p \iota , \beta\right> =p e(\beta )f(\alpha )$$ modulo $e(\beta)H^{0,m+2}(\Omega^*_{{{\mathbb Q}}/{{\mathbb Z}}})$. 2. Let the $f$-invariant of $\beta$ be defined. Choose a representative of $f(\beta)$ whose $q$-expansion is annihilated by $p$. Then we have $$f\left< \alpha , p \iota , \beta\right> = -p e(\alpha )f(\beta )$$ modulo $e(\alpha)H^{0,n+2}(\Omega^*_{{{\mathbb Q}}/{{\mathbb Z}}})$. First note in view of \[indet\] that the indeterminacy in the theorem coincides with the image under the $f$-invariant of the Massey product indeterminacy. For $(i)$ we can choose a $p$-adapted representative $a$ with $\delta [a]=e_2(\alpha)$. Its image under $\chi^0_*$ coincides with the normalized representative of $f(\alpha)$ up to a constant and a modular form $g$ of weight $m/2 +1$ for which $pg$ is a ${{\mathbb Q}}/{{\mathbb Z}}$-cycle.
Hence it follows from \[4.4\] $(i)$ with $\delta[c]=e_1\beta$ that $$f\left< \alpha , p \iota , \beta\right> \ni \chi^0_* [pac]= p f(\alpha)e(\beta)+pg e(\beta)$$ which is the first claim. This shows $(i)$; the proof of $(ii)$ is analogous. Other Toda brackets are more complicated. We first need the following notion. A divided congruence $f$ in $k$ variables has [*virtual weight $n$*]{} if there is a modular form $g$ of weight $n$ with the same ${{\mathbb Q}}/{{\mathbb Z}}$ Fourier coefficients $a_{i_1,i_2\ldots i_k}$ for all $i_1,i_2,\ldots , i_k >0$. We write $[f]_n$ for such a ${{\mathbb Q}}/ {{\mathbb Z}}$-modular form $g$. \[6.3\] Suppose $f$ has virtual weight $n$. 1. For $k=1$ any two modular forms in the bracket $[f]_n$ only differ by cycles in $\Omega^{0,2n}_{{{\mathbb Q}}/{{\mathbb Z}}}$. 2. For $k=2$ any two modular forms in the bracket only differ by cycles in $\Omega_{{{\mathbb Q}}/{{\mathbb Z}}}^1$ and modular forms of the form $g \otimes 1$ with $g$ of weight $n$. For $(i)$ two elements in $[f]$ differ by modular forms $h$ of weight $n$ whose $q$-expansion is integral except possibly for the constant coefficient. Any such $h$ is a cycle in $\Omega^{0,2n}_{{{\mathbb Q}}/ {{\mathbb Z}}}$. We remark for $n$ even and level 1 that the largest denominator of the constant coefficient appearing among all such $h$ is that of the divided Bernoulli number, attained when $h$ is the divided Eisenstein series $\bar{E}_n$ (see Serre [@MR0404145]). For the case of two variables one observes that for any difference modular form $h=\sum h_1\otimes h_2$ we have $$\sum h_1\otimes h_2 -h_1h_2\otimes 1 +h_2\otimes h_1 -1\otimes h_1h_2=0$$ (cf. [@MR1660325 Eq.3.2]). Hence, $z=h-\sum h_1h_2\otimes 1$ is antisymmetric with vanishing $q_L^iq_R^j$-coefficients for $i,j>0$. This implies that $z$ is a cycle as one easily verifies.
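The divided Bernoulli numbers just mentioned are easy to compute. The following sketch (Python, standard library only; it assumes the classical recurrence $\sum_{j=0}^{m}\binom{m+1}{j}B_j=0$ for $m\geq 1$ and the convention that the constant term of the divided Eisenstein series $\bar E_k$ is $-B_k/2k$) recovers the normalization constants $240$ and $504$ appearing in the examples later in the paper.

```python
from fractions import Fraction
from math import comb

def bernoulli(n):
    """B_0, ..., B_n via sum_{j=0}^{m} C(m+1, j) B_j = 0 for m >= 1 (B_1 = -1/2)."""
    B = [Fraction(1)]
    for m in range(1, n + 1):
        B.append(-sum(comb(m + 1, j) * B[j] for j in range(m)) / (m + 1))
    return B

B = bernoulli(6)
# constant term of the divided Eisenstein series Ē_k is -B_k/(2k)
print(-B[4] / 8)    # 1/240, matching E_4 = 1 + 240 Σ σ_3(n) q^n
print(-B[6] / 12)   # -1/504, matching E_6 = 1 - 504 Σ σ_5(n) q^n
```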
In the following we write $e_{M}(\alpha)$ for any representative of $\delta^{-1}e_1(\alpha)$ in $\Omega^0_{{{\mathbb Q}}/ {{\mathbb Z}}} $ and $$e(\alpha)=q^0(e_M(\alpha)).$$ Note that for $|\alpha|=4n-1$ and level 1 modular forms with 6 inverted we have $$e_M(\alpha)=e(\alpha)E_n$$ and $e$ is the classical $e$-invariant. \[thm 2\] Suppose that the Toda bracket $\left< \alpha , \beta , \gamma \right> $ is defined. 1. Let $|\alpha|=2k-1$, $|\beta |=2l-1 $ and $|\gamma |=2m-1$. Then the modular forms $f(\alpha,\beta)=[e_M(\alpha)e(\beta)]_{k+l}$ and $f(\beta,\gamma)=[e_M(\beta)e(\gamma)]_{l+m}$ exist and we have $$f\left< \alpha , \beta, \gamma \right> = e_M(\alpha) (q^0( f(\beta,\gamma))-e(\beta)e(\gamma)) +e(\gamma)f(\alpha, \beta)$$ modulo the indeterminacy $ e_M(\alpha) q^0H^{0,2(l+m)}(\Omega^*_{{{\mathbb Q}}/ {{\mathbb Z}}})+ e(\gamma)H^{0,2(k+l)}(\Omega^*_{{{\mathbb Q}}/ {{\mathbb Z}}})$. 2. Let $|\alpha|=2k-2, |\beta|=2l-1$. Then $[ f(\alpha)\otimes e_M(\beta)]_{k+l}$ exists and we have $$f\left< \alpha , \beta, p \iota \right> = p f(\alpha) e(\beta) + p\chi^0_* [f(\alpha)\otimes e_M(\beta)]_{k+l}$$ modulo $pP(({D/M_{k+l}})_ {{{\mathbb Q}}/ {{\mathbb Z}}})$. 3. Let $|\beta|=2l-1,|\gamma|=2m-2$. Then $[e_M(\beta)\otimes f(\gamma)]_{l+m}$ exists and we have $$f\left< p \iota , \beta, \gamma \right> = -p\chi^0_*[e_M(\beta)\otimes f(\gamma)]_{l+m}$$ modulo $pP(({D/M_{l+m}})_{{{\mathbb Q}}/ {{\mathbb Z}}})$. 4. Let $|\alpha|=2k-1$ and $|\beta|=2l-2$. Choose a representative of $f(\beta)$ whose $q$-expansion is annihilated by $p$. Then the modular form $[e_M(\alpha )\otimes f(\beta)]_{k+l}$ exists and we have $$f\left<\alpha , \beta, p \iota\right> = -p e_M(\alpha) f(\beta) -p \chi^0_* [e_M(\alpha)\otimes f(\beta)]_{k+l}$$ modulo $pP(({D/M_{k+l}})_{{{\mathbb Q}}/ {{\mathbb Z}}})$. 5. Let $|\beta|=2l-2$ and $|\gamma|=2m-1$. Choose a representative of $f(\beta)$ whose $q$-expansion is annihilated by $p$. 
Then the modular form $[f(\beta)\otimes e_M(\gamma)]_{l+m}$ exists and we have $$f\left< p \iota , \beta, \gamma \right> = p\chi^0_* [f(\beta)\otimes e_M(\gamma)]_{l+m}$$ modulo $pP(({D/M_{l+m}})_{{{\mathbb Q}}/ {{\mathbb Z}}})$. For $(i)$ the classes $\alpha$, $\beta$, $\gamma$ are represented by $a=e_M(\alpha)$, $b=e_M(\beta)$, $c=e_M(\gamma)$ in the ${{\mathbb Q}}/ {{\mathbb Z}}$-cobar complex. Since the product $da\, db$ vanishes there is a modular form $r$ of degree $k+l$ with $a\, db+dr\in \Gamma=E_*E$. Applying $\chi^0_*$ gives $$a q^0(b)-ab + q^0(r)-r \in D.$$ Thus $aq^0(b)$ is congruent to the modular form $ab+r$ of degree $k+l$ up to a constant and $$f(\alpha,\beta)=[e_M(\alpha ) e(\beta)]_{k+l}=ab+r$$ modulo cycles of degree $k+l$. The analogous statement for $b,c$ shows that the brackets exist. The $f$-invariant of the Toda bracket is hence obtained from \[4.6\]$(i)$ and the computation $$\begin{aligned} f\left< \alpha , \beta, \gamma \right> &=& \chi_*^0\delta^{-1} \left< \delta [a],\delta [b],\delta [c]\right>\\ &=& \chi_*^0 [a(b\,dc + dr(b,c))+r(a,b)dc]\\ &=& ab(q^0(c)-c)+a(q^0(r(b,c))-r(b,c))+r(a,b)(q^0(c)-c)\\ &=& abq^0(c)+aq^0(r(b,c))+r(a,b)q^0(c)\\ &=&aq^0(f(\beta, \gamma)-bc)+f(\alpha ,\beta )q^0(c)\\ &=& e_M(\alpha)(q^0( f(\beta,\gamma))-e(\beta)e(\gamma)) +e(\gamma)f(\alpha, \beta)\end{aligned}$$ This is the result. For $(ii)$ we find with Lemma \[4.5\] a modular form $r$ of weight $k+l$ with $a\, db +dr\in \Gamma^2$. This is an expression with Fourier coefficients in the variables $q_L$, $q_M$ and $q_R$. For $i,k>0$ we have $$\begin{aligned} q_L^iq_M^0q_R^k(a\, db+dr)&=& \sum q^i(a_1)q^0(a_2)q^k(b)-q^i(r_1) q^k(r_2)\\&=&q_L^iq_R^k(\sum a_1q^0(a_2)\otimes b-r_1\otimes r_2)\end{aligned}$$ which hence is integral.
This shows $$r=[f(\alpha)\otimes e_M(\beta)]_{k+l}.$$ We compute with \[4.6\]$(ii)$ $$\begin{aligned} f\left< \alpha , \beta, p \iota \right> &=& \chi^0_* \delta^{-1} \left< \delta [a],\delta [b],p\iota \right>\\ &=& \chi^0_* (p(ab+r))\\ &=& p f(\alpha) e(\beta) + p \chi^0_* [f(\alpha)\otimes e_M(\beta)]_{k+l} \end{aligned}$$ Similarly, for $(iii)$ we have $$\begin{aligned} f\left< p \iota , \beta, \gamma \right> &=& \chi^0_* \delta^{-1} \left< p \iota ,\delta [b],\delta [c] \right>\\ &=& \chi^0_* (-p r)\\ &=& - p \chi^0_* [ e_M(\beta) \otimes f(\gamma)]_{l+m} \end{aligned}$$ and for $(iv)$ $$\begin{aligned} f\left< \alpha , \beta, p \iota \right> &=& \chi^0_* \delta^{-1} \left< \delta [a],\delta [b],p\iota \right>\\ &=& -\chi^0_* (p(ab+r))\\ &=& - p e_M(\alpha) f(\beta) -p \chi^0_* [e_M(\alpha) \otimes f(\beta)]_{k+l}\end{aligned}$$ Finally, $$\begin{aligned} f\left< p \iota , \beta, \gamma \right> &=& \chi^0_* \delta^{-1} \left< p \iota ,\delta [b],\delta [c] \right>\\ &=& \chi^0_* (p r)\\ &=& p \chi^0_* [ f(\beta) \otimes e_M(\gamma)]_{l+m} \end{aligned}$$ The indeterminacy is readily verified with \[indet\]. \[4toda\] Suppose that the Toda bracket $\left< p \iota, \alpha, p\iota , \beta \right>$ is strictly defined and let $|\alpha|=2k-2, |\beta |=2l-2$. Then there are representatives of $f(\alpha)$ and $f(\beta)$ which are annihilated by $p$ and for which the bracket $[pf(\alpha)\otimes f(\beta)]_{k+l}$ exists.
For any such modular form we have $$f\left< p \iota, \alpha, p \iota , \beta \right> = p\chi^0_*[pf(\alpha)\otimes f(\beta)]_{k+l}$$ modulo $$\chi^0_*([H^{0,2k}(\Omega^*_{{{\mathbb Q}}/{{\mathbb Z}}})\otimes f(\beta )]_{k+l}+[f(\alpha)\otimes H^{0,2l}(\Omega^*_{{{\mathbb Q}}/ {{\mathbb Z}}})]_{k+l})+pq^0(H^{0,2k}(\Omega^*_{{{\mathbb Q}}/ {{\mathbb Z}}}))f(\beta)$$ and the indeterminacies coming from the 3-fold brackets $$p(P((D/M_{k+l})_{{{\mathbb Q}}/ {{\mathbb Z}}}) + H^{0,2k}(\Omega^*_{{{\mathbb Q}}/ {{\mathbb Z}}})q^0(H^{0,2l}(\Omega^*_{{{\mathbb Q}}/ {{\mathbb Z}}})).$$ The same holds for the bracket $\left<\alpha, p\iota , \beta , p \iota \right> $. With \[4.3\] we find $p$-adapted $a,b$ with $e_2(\alpha)=\delta[a]$ and $e_2(\beta )=\delta[b]$. Without loss of generality we can assume $pab+dr\in \Gamma^{\otimes 2 }$ (else replace $a$ by $a+dv/p$ and $b$ by $b+dw/p$). In particular for $i,j>0$ the numbers $$\begin{aligned} q_L^iq_M^0q_R^j(p\sum a_1\otimes a_2b_1\otimes b_2+dr)&=& p\sum q^i(a_1)q^0(a_2b_1)q^j(b_2)-q^i(r_1)q^j(r_2)\\&=& q^i_Lq_R^j(p\sum a_1q^0(a_2)\otimes q^0(b_1)b_2-r) \end{aligned}$$ are integral. Since $f(\alpha)=q^0(a_2)a_1$ and $f(\beta)=-q^0(b_1)b_2$ we conclude that the bracket $[pf(\alpha)\otimes f(\beta)]_{k+l}$ exists. Its indeterminacy is as in \[6.3\]$(ii)$. Hence we have with \[4.7\] $$\begin{aligned} f\left< p \iota, \alpha, p \iota , \beta \right> &=& \chi^0_* \delta^{-1} \left< p \iota, \delta[a], p \iota ,\delta [b] \right>\\ &=& - \chi^0_* p r\\ &=& p \chi^0_* [pf(\alpha)\otimes f(\beta)]_{k+l} \end{aligned}$$ The indeterminacy uses \[indet\] and the calculations of the 3-fold brackets above. Examples ======== In dimension 8 there is the Toda bracket $\left< \nu^2 , 2 ,\eta \right>$ where $\nu$ is the Hopf map of dimension 3. 
We use the formula \[thm1\] for level 3 TMF to show that this class coincides with $\beta_2$: the $f$-invariant of the product $\nu^2$ can be computed from the formula (cf. [@vB09a]) $$f(\nu^2) = e(\nu)e_M(\nu)= \frac{E_1^2}{12^2},$$ which can be normalized to $-1/2 ((E_1^2-1)/12)^2$. Hence we have $$f\left< \nu^2 , 2 ,\eta \right>=-\frac{1}{2} \left( \frac{E_1^2-1}{12}\right)^2$$ which coincides with the $f$-invariant of $\beta_2$ (see [@vB09c]). Similarly, we have for the dimension 7 Hopf map $\sigma$ instead of $\nu$ $$f(\sigma^2) = e(\sigma)e_M(\sigma)= \frac{E_4}{240^2}= -\frac{1}{2}\left( \frac{E_4-1}{240}\right)^2$$ and hence $$f\left< \sigma^2 , 2 ,\eta \right>=-\frac{1}{2} \left( \frac{E_4-1}{240}\right)^2.$$ The computation in [@vB09a p.7] shows that this expression is congruent to $$f(\beta_{4/3})=\frac{1}{2} \left( \frac{E_1^2-1}{4}\right)^4+\frac{1}{2} \left( \frac{E_1^2-1}{4}\right)^3.$$ There is another way to compute this class using the Toda relation $$\left< \sigma^2 , 2 ,\eta \right>=\left< \sigma , 2\sigma ,\eta \right>$$ and formula \[thm 2\]$(i)$: First we have $$f(\sigma , 2 \sigma) = \left[ 2 \frac{E_4}{240^2}\right]_8= \frac{E_4^2}{240^2}$$ and $$f(2 \sigma , \eta ) = \left[ \frac{E_4}{240}\right]_5= 0.$$ This gives $$f\left< \sigma , 2 \sigma,\eta \right>= -\frac{E_4}{240^2}+\frac{1}{2}\frac{E_4^2 }{240^2}=\frac{1}{2} \left( \frac{E_4-1}{240}\right)^2=f(\beta_{4/3})$$ In dimension 18 there is the class $\left< \sigma , 2 \sigma, \nu \right>$ for which the formula reads $$f\left< \sigma , 2 \sigma, \nu \right>= \frac{E_4}{240}\left(q^0 \left[ \frac{E_4}{1440}\right]_6-\frac{1}{1440}\right)+\frac{1}{12} \frac{E_4^2}{240^2}$$ To evaluate the bracket observe that $$d^3=d^5 \mbox { mod } 8$$ for all integers $d$ and hence $$\frac{E_4-1}{240}=\sum_{n\geq 1}\sum_{d\mid n}d^3q^n=\sum_{n\geq 1 }\sum_{d\mid n}d^5q^n =\frac{1-E_6}{504}\mbox{ mod } 8.$$ This gives $$\begin{aligned} f\left< \sigma , 2 \sigma, \nu \right>&=&
\frac{E_4}{240}\left(-\frac{1}{3024}-\frac{1}{1440}\right)+\frac{1}{12} \frac{E_4^2}{240^2}\\ &=&\frac{1}{6}\frac{1}{240^2}\left( -\frac{31}{21}E_4+\frac{1}{2}E_4^2\right)\end{aligned}$$ which has order 4 modulo indeterminacy. In fact, with Lemma \[Naumann\] one can show that it coincides with $f(-\beta_{4/2,2})$. The Toda bracket $\left< \sigma^2, 2 ,\sigma^2 , 2\right>$ in dimension 30 exists (see [@MR810962] 1.2 and 1.3). We compute modulo indeterminacy $$\begin{aligned} \lefteqn{\left[ \frac{1}{2}\left( \frac{E_4 -1}{240}\right)^2 \otimes \left( \frac{E_4 -1}{240}\right)^2 \right]_{16} }\\ &=& \left[ \frac{1}{240^4}\left( \frac{E_4^2\otimes E_4^2}{2}-E_4^2\otimes E_4-E_4\otimes E_4^2 +2 E_4 \otimes E_4\right) \right]_{16}\\ &=& \frac{1}{240^4}\left(\frac{E_4^2\otimes E_4^2}{2}-\frac{E_4^3\otimes E_4}{3}-\frac{E_4\otimes E_4^3}{3}\right) \\ &=& \frac{1}{12}\left(\frac{E_4 \otimes 1 -1 \otimes E_4}{240}\right) ^4\end{aligned}$$ Here we used in the second step that $$\frac{1}{3} \left( \frac{E_4-1}{240}\right) ^3 \otimes \frac{E_4-1}{240}$$ is integral. This gives with \[4toda\] $$\begin{aligned} f\left< \sigma^2, 2 ,\sigma^2 , 2\right>&=& 2 \chi^0_*\left[ \frac{1}{2}\left(\frac{E_4 -1}{240}\right)^2\otimes \left(\frac{E_4 -1}{240}\right)^2\right]_{16}\\ &=& 2 \chi^0_*\left( \frac{1}{12}\left(\frac{E_4 \otimes 1 -1 \otimes E_4}{240}\right) ^4 \right)\\ &=& \frac{1}{2} \left(\frac{E_4 -1}{240}\right)^4\end{aligned}$$ which has order 2 and coincides with the $f$-invariant of the Kervaire class $f(\beta_{8/8})$ (compare [@vB09c]).
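The congruence $d^3\equiv d^5 \pmod 8$ used in the dimension-18 example above, together with its consequence $\sigma_3(n)\equiv\sigma_5(n)\pmod 8$ identifying $(E_4-1)/240$ with $(1-E_6)/504$ coefficientwise, can be checked numerically. A minimal sketch (Python, naive divisor sums; only a finite range of $n$ is tested, the general case follows from $d^5-d^3=d^3(d-1)(d+1)$):

```python
def sigma(k, n):
    # sum of k-th powers of the divisors of n
    return sum(d**k for d in range(1, n + 1) if n % d == 0)

# d^3 ≡ d^5 (mod 8): d^5 - d^3 = d^3(d-1)(d+1) is divisible by 8 for every d
assert all((d**5 - d**3) % 8 == 0 for d in range(-100, 101))

# hence (E_4-1)/240 = Σ σ_3(n) q^n and (1-E_6)/504 = Σ σ_5(n) q^n agree mod 8
assert all((sigma(5, n) - sigma(3, n)) % 8 == 0 for n in range(1, 300))
print("congruences verified")
```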
--- abstract: 'In this article we study forms of the Segre cubic over non-algebraically closed fields, their automorphism groups and equivariant birational rigidity. In particular, we show that all forms of the Segre cubic are cubic hypersurfaces and all of them have a point.' author: - Artem Avilov title: On forms of the Segre cubic --- Introduction ============ The Segre cubic is a classical three-dimensional variety with many interesting properties. For example, it is a compactification of the moduli space of configurations of six points on the projective line (see [@Dol §2]) and its dual variety, called the Igusa quartic, is a compactification of the moduli space of certain abelian surfaces (see [@Muk Theorem 2]). The birational geometry of the Segre cubic has been studied extensively; for example, its small resolutions were described (see [@Fin]). The aim of this article is to study equivariant birational rigidity of forms of the Segre cubic. Equivariant birational rigidity of the Segre cubic itself over an algebraically closed field of characteristic zero was studied by the author in the paper [@Avi]. We show that for every field of characteristic zero there is only one form of the Segre cubic over this field which is $G$-birationally rigid (see Definition \[def3\]), and only for the following groups: $S_{6}$, $A_{6}$, $S_{5}$ and $A_{5}$, where $S_{6}$ is the full automorphism group and the groups $S_{5}$ and $A_{5}$ are embedded into $S_{6}$ in the standard way (see Definition \[def57\]). Moreover, in these cases the form of the Segre cubic is $G$-birationally superrigid. These results can be useful, for example, for the classification of finite subgroups of three-dimensional Cremona groups over fields of characteristic zero. We expect that these results are also valid over fields of characteristic $p\geqslant 5$. Also we prove that all forms of the Segre cubic have a point defined over the base field, and all of them are cubic hypersurfaces.
Special attention is given to the case of the field of real numbers. In this article we use the following notation for groups: by $C_{n}$ we denote the cyclic group of order $n$; by $D_{2n}$ we denote the dihedral group of order $2n$; by $S_{n}$ we denote the symmetric group of rank $n$; by $A_{n}$ we denote the alternating group of rank $n$. For an arbitrary field ${\mathbb{K}}$ by ${\mathbb{K}}^{\operatorname{sep}}$ we denote its separable closure. This work is supported by the Russian Science Foundation under grant No 18-11-00121. The author is a Young Russian Mathematics award winner and would like to thank its sponsors and jury. The author would like to thank S. Gorchinskiy, C. Shramov and A. Trepalin for useful discussions and comments. Biregular geometry of the forms of the Segre cubic ================================================== *The Segre cubic* over the field ${\mathbb{K}}$ of characteristic zero or $p\geqslant 5$ is a three-dimensional variety $\mathcal{S}_{{\mathbb{K}}}$, which can be explicitly given by the following system of equations in ${\mathbb{P}}^{5}_{{\mathbb{K}}}$: $$\label{eq}\sum\limits_{i=1}^{6}x_{i}=\sum\limits_{i=1}^{6}x_{i}^{3}=0.$$ Usually, if it doesn’t lead to misunderstanding, we will omit the index and denote the Segre cubic by $\mathcal{S}$. We will call a variety $X$ defined over an arbitrary field ${\mathbb{K}}$ of characteristic zero or $p\geqslant 5$ a *form of the Segre cubic* if $X_{{\mathbb{K}}^{\operatorname{sep}}}=X\otimes_{{\mathbb{K}}} {\operatorname{Spec}}{\mathbb{K}}^{\operatorname{sep}}$ is isomorphic to the Segre cubic over the field ${\mathbb{K}}^{\operatorname{sep}}$. If the field ${\mathbb{K}}$ is the field of real numbers ${\mathbb{R}}$, we will call the variety $X$ a *real form of the Segre cubic*. Note that equations \[eq\] make sense over an arbitrary field, so there is at least one form of the Segre cubic over an arbitrary field.
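The singular locus can be read off directly from these equations: at a singular point of the intersection the gradient $(3x_{i}^{2})$ of the cubic form must be proportional to the gradient $(1,\ldots,1)$ of the linear form, so all $x_{i}=\pm c$ for some $c\neq 0$, and $\sum x_{i}=0$ then forces three coordinates of each sign. A minimal brute-force sketch in Python (not part of the original argument; coordinates scaled to $\pm 1$) recovers the count of nodes recalled below:

```python
from itertools import product

def proj(x):
    # projective class of a sign vector: identify x with -x
    return max(x, tuple(-v for v in x))

# sign vectors in {±1}^6 with Σ x_i = 0; then Σ x_i^3 = Σ x_i = 0 automatically,
# and the gradient (3 x_i^2) = (3,...,3) is proportional to (1,...,1): a node
nodes = {proj(x) for x in product([1, -1], repeat=6) if sum(x) == 0}
print(len(nodes))                          # 10
print((1, 1, 1, -1, -1, -1) in nodes)      # True
```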
We will use the following well-known facts about the Segre cubic (see, for example, [@Dol §2]): - the automorphism group ${\operatorname{Aut}}(\mathcal{S})$ is isomorphic to $S_{6}$ and acts by permutations of standard coordinates; - the singular set of the Segre cubic $\mathcal{S}$ consists of 10 ordinary double points, all of them form an ${\operatorname{Aut}}(\mathcal{S})$-orbit and one of them has coordinates $(1:1:1:-1:-1:-1)$; - the variety $\mathcal{S}$ contains exactly 15 planes which form an ${\operatorname{Aut}}(\mathcal{S})$-orbit, one of them can be given by equations $x_{1}+x_{2}=x_{3}+x_{4}=x_{5}+x_{6}=0$ in standard coordinates; - every singular point of $\mathcal{S}$ lies on 6 planes and every plane on $\mathcal{S}$ contains exactly 4 singular points. In other words, singular points and planes form a $(10_{6}, 15_{4})$-configuration in notation of [@Dol2]. The automorphism group of such a configuration is isomorphic to $S_{6}$ and is induced by the automorphism group ${\operatorname{Aut}}(\mathcal{S})$ (see, for example, [@Avi2 Lemma 2.1]). \[def57\] We will say that a subgroup $A_{5}\subset{\operatorname{Aut}}(\mathcal{S})$ or $S_{5}\subset{\operatorname{Aut}}(\mathcal{S})$ is *standard* if it preserves some hyperplane $\{x_{i}=0\}$. In the sequel we will need the following easy facts about elements of order 2 and certain subgroups of the group $S_{6}$. \[le1\]The element $(1\ 2)$ acting on the $(10_{6}, 15_{4})$-configuration has exactly $4$ fixed singular points and $3$ invariant planes and its centralizer is isomorphic to $C_{2}\times S_{4}$. The element $(1\ 2)(3\ 4)$ acting on the $(10_{6}, 15_{4})$-configuration has exactly $2$ fixed singular points and $3$ invariant planes and its centralizer is isomorphic to $C_{2}\times (C_{2}^{2}\rtimes C_{2})\simeq C_{2}\times D_{8}$. 
The element $(1\ 2)(3\ 4)(5\ 6)$ acting on the $(10_{6}, 15_{4})$-configuration has exactly $4$ fixed singular points and $7$ invariant planes and its centralizer is isomorphic to $C_{2}^{3}\rtimes S_{3}$. The stabilizer of a singular point is isomorphic to $S_{3}^{2}\rtimes C_{2}$, the stabilizer of a plane is isomorphic to $S_{4}\times C_{2}$. A non-standard subgroup $S_{5}$ acts on the set of planes with two orbits of length $5$ and $10$ respectively. Simple direct computations. \[cor1\] Every real form of the Segre cubic has a singular point defined over the base field ${\mathbb{R}}$. Let $X$ be a real form of the Segre cubic. Consider an action of the complex conjugation $\sigma$ on the $(10_{6}, 15_{4})$-configuration of singular points and planes on the Segre cubic $X_{{\mathbb{C}}}$. Since $\sigma$ acts as an element of order 2 or 1, by Lemma \[le1\] we have a $\sigma$-invariant singular point. For an arbitrary field ${\mathbb{K}}$ we have the following assertion. \[le6\] A form $X$ of the Segre cubic over the field ${\mathbb{K}}$ contains a ${\mathbb{K}}$-point if and only if the variety $X$ is isomorphic to a cubic hypersurface in ${\mathbb{P}}^{4}_{{\mathbb{K}}}$. Assume that the variety $X$ has a point defined over the field ${\mathbb{K}}$. Then the group $\Pic(X_{{\mathbb{K}}^{\operatorname{sep}}})^{\operatorname{Gal}({\mathbb{K}}^{\operatorname{sep}}/{\mathbb{K}})}$ coincides with its subgroup $\Pic(X)$ (see, for example, ). In particular, the divisor class $-\frac{1}{2}K_{X}$ is defined over ${\mathbb{K}}$ and its linear system induces an embedding of $X$ into ${\mathbb{P}}^{4}_{{\mathbb{K}}}$ as a cubic hypersurface. Due to [@Cor2 Proposition 3.2] the converse statement is also true. As a consequence, every real form of the Segre cubic is a cubic itself. Let us show that an analogous result is valid over an arbitrary field of characteristic zero or $p\geqslant 5$. For this purpose we need the following lemma. \[le11\][(cf.
[@Cor2 Lemma 2.3])]{.nodecor} Let ${\mathbb{K}}$ be an arbitrary field of characteristic different from $2$, and let ${\mathbb{L}}/{\mathbb{K}}$ be a composite of quadratic extensions of the field ${\mathbb{K}}$. Let $X$ be a variety over the field ${\mathbb{K}}$ such that $X_{{\mathbb{L}}}$ is isomorphic to a cubic in ${\mathbb{P}}^{4}_{{\mathbb{L}}}$. Suppose that $X_{{\mathbb{L}}}$ has an ${\mathbb{L}}$-point. Then $X$ has a ${\mathbb{K}}$-point. There is a sequence of quadratic field extensions $${\mathbb{K}}={\mathbb{L}}_{0}\subset{\mathbb{L}}_{1}\subset{\mathbb{L}}_{2}\subset ... \subset {\mathbb{L}}_{n}={\mathbb{L}}.$$ By induction it is enough to consider the case when ${\mathbb{L}}$ is a quadratic extension of ${\mathbb{K}}$. Since the characteristic of ${\mathbb{K}}$ differs from $2$, the extension ${\mathbb{K}}\subset {\mathbb{L}}$ is a Galois extension. We know that the cubic $X_{{\mathbb{L}}}$ contains an ${\mathbb{L}}$-point. This point is either defined over ${\mathbb{K}}$ or its ${\operatorname{Gal}}({\mathbb{L}}/{\mathbb{K}})$-orbit consists of two points. In the second case the line $l$ passing through these points is defined over ${\mathbb{K}}$. If this line lies on $X_{{\mathbb{L}}}$ then $X$ contains every ${\mathbb{K}}$-point of the line $l$. If the line $l$ doesn’t lie on $X_{{\mathbb{L}}}$ then the third intersection point of the line $l$ and the cubic $X_{{\mathbb{L}}}$ is defined over ${\mathbb{K}}$. \[le14\] Let ${\mathbb{K}}$ be a field of characteristic zero or $p\geqslant 5$. Then every form of the Segre cubic over the field ${\mathbb{K}}$ has a ${\mathbb{K}}$-point and is isomorphic to a cubic hypersurface. Let $X$ be a form of the Segre cubic. There is the exact sequence $$0\to \Pic(X)\to \Pic(X_{{\mathbb{K}}^{\operatorname{sep}}})^{{\operatorname{Gal}}({\mathbb{K}}^{\operatorname{sep}}/{\mathbb{K}})}\to \operatorname{Br}({\mathbb{K}}),$$ see, for example, [@GSh Problem 3.3.5(iii)].
If the divisor class $-\frac{1}{2}K_{X}$ is not defined over the field ${\mathbb{K}}$, then the group $\Pic(X)$ is embedded into $\Pic(X_{{\mathbb{K}}^{\operatorname{sep}}})^{{\operatorname{Gal}}({\mathbb{K}}^{\operatorname{sep}}/{\mathbb{K}})}$ as a subgroup of index 2, so we define canonically an element of order 2 in the Brauer group $\operatorname{Br}({\mathbb{K}})$. Since this element has a representation as a tensor product of quaternion algebras over the field ${\mathbb{K}}$ (see [@MeS Theorem 11.5]), there is a composite ${\mathbb{L}}$ of quadratic extensions of the field ${\mathbb{K}}$ such that our algebra splits over this extension. Thus the corresponding embedding $$\Pic(X_{{\mathbb{L}}})\to \Pic(X_{{\mathbb{K}}^{\operatorname{sep}}})^{{\operatorname{Gal}}({\mathbb{K}}^{\operatorname{sep}}/{\mathbb{L}})}$$ is an isomorphism. The linear system $|-\frac{1}{2}K_{X_{{\mathbb{L}}}}|$ induces an embedding of $X_{{\mathbb{L}}}$ into ${\mathbb{P}}^{4}_{{\mathbb{L}}}$ as a cubic hypersurface. By Lemma \[le6\] there is an ${\mathbb{L}}$-point on $X_{{\mathbb{L}}}$. By Lemma \[le11\] there is a ${\mathbb{K}}$-point on $X$. In particular, by Lemma \[le6\], the variety $X$ is isomorphic to a cubic hypersurface in ${\mathbb{P}}^{4}_{{\mathbb{K}}}$. Now we return to the field of real numbers. \[pr3\] There are exactly $4$ real forms of the Segre cubic $\mathcal{S}$ up to isomorphism. All of them are three-dimensional cubic hypersurfaces and rational over the field ${\mathbb{R}}$. It is well-known (see, for example, [@Ser Chapter III, §1]) that there is a one-to-one correspondence between the forms of a real variety $X$ and the elements of $\operatorname{H}^{1}(\operatorname{Gal}({\mathbb{C}}/{\mathbb{R}}), {\operatorname{Aut}}(X_{{\mathbb{C}}}))$. The latter set, in turn, is in one-to-one correspondence with the set of all homomorphisms $$\operatorname{Gal({\mathbb{C}}/{\mathbb{R}})}\simeq C_{2}\to S_{6}\simeq{\operatorname{Aut}}(X_{{\mathbb{C}}})$$ up to conjugation. 
Such homomorphisms are defined by conjugacy classes of elements of order 2 or 1 in the group $S_{6}$. There are exactly four such classes: the trivial permutation and conjugacy classes of the transposition $(1\ 2)$, the product of two transpositions $(1\ 2)(3\ 4)$ and the product of three transpositions $(1\ 2)(3\ 4)(5\ 6)$. Let $X$ be a real form of the Segre cubic $\mathcal{S}$. By Corollary \[cor1\] the variety $X$ contains an ${\mathbb{R}}$-point. By Lemma \[le6\] the variety $X$ is isomorphic to a cubic in ${\mathbb{P}}^{4}_{{\mathbb{R}}}$. This cubic contains a singular point defined over the field ${\mathbb{R}}$ and the projection from such a point gives us a birational map from the variety $X$ to ${\mathbb{P}}^{3}_{{\mathbb{R}}}$. We say that a real form $X$ of the Segre cubic is of type I (resp., II, III or IV) if the image of the complex conjugation in the group ${\operatorname{Aut}}(\mathcal{S})\simeq S_{6}$ is the trivial permutation (resp., is conjugate to the transposition $(1\ 2)$, is conjugate to the permutation $(1\ 2)(3\ 4)$ or is conjugate to the permutation $(1\ 2)(3\ 4)(5\ 6)$). \[rem\] The form of the Segre cubic of type I can be obtained in the following way: blow up five ${\mathbb{R}}$-points of ${\mathbb{P}}^{3}_{{\mathbb{R}}}$ in general position and contract the proper transforms of the 10 lines passing through pairs of points. The obtained variety is the required form of the Segre cubic. It can also be defined explicitly by the equations \[eq\]. Note that in this case there is an action of the group $S_{5}$ on ${\mathbb{P}}^{3}$ with five marked points and the construction is $S_{5}$-equivariant. The forms of type IV and III can be constructed in a similar way, but we need to blow up ${\mathbb{P}}^{3}_{{\mathbb{R}}}$ in 3 real points and one pair of conjugate points or one real point and two pairs of conjugate points in general position respectively.
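The group-theoretic facts used so far — the four conjugacy classes of elements of order 1 or 2 in $S_{6}$, and the centralizer orders from Lemma \[le1\] — can be double-checked by machine. The following sympy sketch is only a sanity check, not part of the argument; involutions in $S_{6}$ correspond to cycle types, i.e. to partitions of 6 with parts of size 1 and 2.

```python
from sympy.combinatorics import Permutation, PermutationGroup
from sympy.combinatorics.named_groups import SymmetricGroup
from sympy.utilities.iterables import partitions

# conjugacy classes of elements g with g^2 = e in S6: cycle types with
# parts 1 and 2 only, i.e. [1^6], [2,1^4], [2^2,1^2], [2^3]
involution_types = [dict(p) for p in partitions(6) if set(p) <= {1, 2}]
assert len(involution_types) == 4

# centralizer orders of the three non-trivial involutions (0-based points)
S6 = SymmetricGroup(6)
reps = [
    Permutation([1, 0, 2, 3, 4, 5]),   # (1 2)
    Permutation([1, 0, 3, 2, 4, 5]),   # (1 2)(3 4)
    Permutation([1, 0, 3, 2, 5, 4]),   # (1 2)(3 4)(5 6)
]
orders = [S6.centralizer(PermutationGroup(g)).order() for g in reps]
assert orders == [48, 16, 48]   # |C2 x S4|, |C2 x D8|, |C2^3 : S3|
```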
One can easily see that we obtain exactly the forms of type IV and III by calculating the numbers of singular ${\mathbb{R}}$-points and of planes defined over ${\mathbb{R}}$ in both cases. The form of type II has no such transformation into ${\mathbb{P}}^{3}_{{\mathbb{R}}}$, but by Proposition \[pr3\] a transformation of another type exists. In the following proposition we describe the automorphism groups of all forms of the Segre cubic. \[pr4\] The automorphism group of a form of the Segre cubic over an arbitrary field ${\mathbb{K}}$ of characteristic zero or $p\geqslant 5$ coincides with the centralizer of the image of the Galois group ${\operatorname{Gal}}({\mathbb{K}}^{\operatorname{sep}}/{\mathbb{K}})$ in the automorphism group of the $(10_{6}, 15_{4})$-configuration of singular points and planes on $\mathcal{S}$. The automorphism group of the form of the Segre cubic of type $\mathrm{I}$ (resp., $\mathrm{II}$, $\mathrm{III}$ or $\mathrm{IV}$) is isomorphic to $S_{6}$ (resp., $C_{2}\times S_{4}$, $C_{2}\times D_{8}$ or $C_{2}^{3}\rtimes S_{3}$). Let $X$ be a form of the Segre cubic over ${\mathbb{K}}$. There is a canonical embedding of the automorphism group of $X$ into the automorphism group of the $(10_{6}, 15_{4})$-configuration of singular points and planes on $X_{{\mathbb{K}}^{\operatorname{sep}}}\simeq\mathcal{S}$ and the canonical map from the Galois group ${\operatorname{Gal}}({\mathbb{K}}^{\operatorname{sep}}/{\mathbb{K}})$ into the same group $S_{6}$. Since all automorphisms defined over the base field commute with the action of the Galois group, the image of this embedding is contained in the centralizer of the image of the Galois group ${\operatorname{Gal}}({\mathbb{K}}^{\operatorname{sep}}/{\mathbb{K}})$. Let $g\in S_{6}$ be an element of the automorphism group of the $(10_{6}, 15_{4})$-configuration which commutes with the image of some element $\sigma\in{\operatorname{Gal}}({\mathbb{K}}^{\operatorname{sep}}/{\mathbb{K}})$.
Let $\widetilde{g}$ be the corresponding automorphism of the variety $X_{{\mathbb{K}}^{\operatorname{sep}}}$. Then $\widetilde{g}^{-1}\circ\sigma^{-1}\circ\widetilde{g}\circ\sigma$ is a linear transformation of ${\mathbb{P}}_{{\mathbb{K}}^{\operatorname{sep}}}^{4}$ which fixes all singular points of the variety $X_{{\mathbb{K}}^{\operatorname{sep}}}$. Since there are 10 singular points and they are in sufficiently general position, this linear map is trivial. Thus $\sigma^{-1}\circ\widetilde{g}\circ\sigma=\widetilde{g}$. If $g$ commutes with all elements of the image of ${\operatorname{Gal}}({\mathbb{K}}^{\operatorname{sep}}/{\mathbb{K}})$ then $\widetilde{g}$ is defined over the base field ${\mathbb{K}}$. As a consequence, the embedding of the group ${\operatorname{Aut}}(X)$ into the centralizer of the image of the Galois group is an isomorphism. The second assertion is a consequence of Lemma \[le1\]. Birational geometry of the forms of the Segre cubic =================================================== For the classification of finite subgroups in Cremona groups it is important to study $G$-birational rigidity of Fano varieties. Let $X$ and $Y$ be varieties with an action of a finite group $G$. We call a rational map $f:X\dasharrow Y$ a *$G$-equivariant map* if there exists an automorphism $\tau$ of the group $G$ such that the following diagram commutes for every $g\in G$: $$\xymatrix{ X \ar@{-->}[r]^{f}\ar[d]^{g}& Y \ar[d]^{\tau(g)} \\ X \ar@{-->}[r]^{f}& Y }$$ We denote the group of $G$-equivariant automorphisms of a $G$-variety $X$ by ${\operatorname{Aut}}^{G}(X)$ and the group of $G$-equivariant birational selfmaps of a $G$-variety $X$ by ${\operatorname{Bir}}^{G}(X)$. \[def3\] Let $G$ be a finite subgroup of the automorphism group of a Fano variety $X$ with terminal singularities, and suppose that $X$ is a $G{\mathbb{Q}}$-factorial variety and ${\operatorname{rk}}\Pic(X)^{G}=1$.
The variety $X$ is called *$G$-birationally rigid* if there is no birational map from $X$ to another $G$-Mori fibration. If one also has ${\operatorname{Bir}}^{G}(X)={\operatorname{Aut}}^{G}(X)$ then $X$ is called *$G$-birationally superrigid*. There is the following theorem: \[th57\][(see [@Avi Proposition 4.1, Theorem 4.8])]{.nodecor} Let $G\subset{\operatorname{Aut}}(\mathcal{S})$ be a subgroup of the automorphism group of the Segre cubic over an algebraically closed field of characteristic zero. Then the variety $\mathcal{S}$ is $G$-birationally rigid if and only if $G$ contains a standard subgroup $A_{5}\subset {\operatorname{Aut}}(\mathcal{S})$. Moreover, in this case $\mathcal{S}$ is $G$-birationally superrigid. The next theorem is an analog of the previous statement over an arbitrary field of characteristic zero. Let $X$ be a form of the Segre cubic over some field ${\mathbb{K}}$ of characteristic zero, and let $G\subset{\operatorname{Aut}}(X)$ be a subgroup. Assume that the variety $X$ is $G{\mathbb{Q}}$-factorial and $G$-birationally rigid. Then the variety $X$ can be explicitly given by equations \[eq\] and $G$ contains a standard subgroup $A_{5}\subset S_{6}$. Conversely, the variety given by the equations \[eq\] is $A_{5}$-birationally superrigid with respect to a standard subgroup $A_{5}\subset{\operatorname{Aut}}(X)$. As was noticed earlier, there is a canonical embedding $G\subset S_{6}$ where $S_{6}$ is the automorphism group of the $(10_{6}, 15_{4})$-configuration. Denote by $H\subset S_{6}$ the image of the Galois group ${\operatorname{Gal}}(\overline{{\mathbb{K}}}/{\mathbb{K}})$ in $S_{6}$. By Proposition \[pr4\] the group $G$ is contained in the centralizer of the group $H$. Let $F\subset S_{6}$ be the subgroup generated by $G$ and $H$.
If $F$ does not contain a standard subgroup $A_{5}\subset S_{6}$ then it is not hard to check directly with a computer that the group $F$ lies in one of the following subgroups: a non-standard subgroup $S_{5}\subset S_{6}$, $S_{3}^{2}\rtimes C_{2}$ (which is the stabilizer of a point), $S_{4}\times C_{2}$ (which is the stabilizer of a plane on $\mathcal{S}$) or $S_{4}\times C_{2}$ (which is conjugate to the following group: $S_{4}$ acts by permutations of coordinates $x_{1}, x_{2}, x_{3}, x_{4}$ while $C_{2}$ permutes $x_{5}$ and $x_{6}$ in standard coordinates). Consider an action of a non-standard subgroup $S_{5}\subset S_{6}$ on the set of planes on $\mathcal{S}$. By Lemma \[le1\] it has two orbits of length 5 and 10 respectively. The sum of all planes in the first orbit is not a ${\mathbb{Q}}$-Cartier divisor since the group $\Pic(X)$ is a primitive sublattice in the group ${\operatorname{Cl}}(X)$ and is generated by the class of a hyperplane section, while the sum of 5 planes is not an integer multiple of a hyperplane section in ${\operatorname{Cl}}(X)$. Thus in this case the $G$-variety is not $G{\mathbb{Q}}$-factorial. In the second case the projection from a singular point which is invariant with respect to $F$ gives us an equivariant birational map to ${\mathbb{P}}^{3}_{{\mathbb{K}}}$. In the third case an $F$-invariant plane is a Weil divisor which is not ${\mathbb{Q}}$-Cartier for the same reason as in the first case. But this is impossible since we assume that the variety $X$ is $G{\mathbb{Q}}$-factorial. Let us consider the fourth case. Let the group $S_{4}\times C_{2}$ act on ${\mathbb{P}}_{\overline{{\mathbb{K}}}}^{4}$ as was described above. Then there is an $S_{4}\times C_{2}$-orbit that consists of the following planes on $\mathcal{S}$: $$\begin{aligned} & x_{1}+x_{2}=x_{3}+x_{4}=x_{5}+x_{6}=0,\\ & x_{1}+x_{3}=x_{2}+x_{4}=x_{5}+x_{6}=0,\\ & x_{1}+x_{4}=x_{2}+x_{3}=x_{5}+x_{6}=0.
\end{aligned}$$ They form a hyperplane section of $\mathcal{S}$ given by the equation $x_{5}+x_{6}=0$. The only common point of these three planes $p=(0:0:0:0:1:-1)$ is defined over ${\mathbb{K}}$ and is $G$-invariant. Since the action of the group $G$ on the hyperplane is a projectivisation of a four-dimensional representation of the group $G$ and the invariant point $p$ corresponds to a one-dimensional subrepresentation, we can find a three-dimensional subrepresentation of $G$ as well. The corresponding $G$-invariant plane is defined over the base field. The projection from such a plane gives us a structure of a $G$-equivariant cubic fibration. Now we can apply a $G$-equivariant resolution of singularities and the relative $G$-equivariant minimal model program and obtain a $G$-birational transformation into a $G$-Mori fibration with a base of positive dimension. So the group $F$ contains a standard subgroup $A_{5}\subset S_{6}$, hence it is isomorphic to one of the following groups: $A_{5}$, $S_{5}$, $A_{6}$ or $S_{6}$. The groups $G$ and $H$ are normal subgroups of $F$, all elements of $G$ commute with all elements of $H$ and they generate the whole group $F$. Obviously, it is possible only in the following case: one of the groups $G$ and $H$ coincides with $F$ while the other one is trivial. If the group $G$ is trivial then the projection from any plane in ${\mathbb{P}}^{4}$ which is defined over the base field gives us a structure of a fibration by cubic surfaces. If the group $H$ is trivial then the variety $X$ can be given by the equations \[eq\] and the group $G$ contains a standard subgroup $A_{5}\subset S_{6}$. Conversely, assume that the form $X$ of the Segre cubic is given by the equations \[eq\] and the group $G$ contains a standard subgroup $A_{5}\subset S_{6}\simeq {\operatorname{Aut}}(X)$. Then by [@Avi Lemma 4.6, Lemma 4.7] the pair $(X, \frac{1}{\mu}\mathcal{M})$ is canonical for every $\mu$ and every movable $G$-invariant linear subsystem $\mathcal{M}$.
Hence by the Noether–Fano inequalities (see, for example, [@Cor Theorem 2.4] and [@ChS Theorem 3.2.6] in a $G$-equivariant situation) the variety $X$ is $G$-birationally superrigid. Among all real forms of the Segre cubic only the form of type $\mathrm{I}$ can be $G$-birationally rigid, and only if the group $G$ contains a standard subgroup $A_{5}$. In this case it is $G$-birationally superrigid. It seems that the analogous statement (at least in one direction) should be valid also for fields of characteristic $p>5$. We have a minimal model program for threefolds over an algebraically closed field of characteristic $p>5$ (see [@HaX]) and it should work also in a relative situation with a group action over an arbitrary perfect field, but it is not known to the author whether it has been written down anywhere. To prove the converse statement we need an analog of the Noether–Fano inequalities in positive characteristic, and the existence of such an analog is also unknown to the author. It is still an open question whether the birational (super)rigidity of the variety $X_{\overline{k}}=X\otimes_{{\operatorname{Spec}}k} {\operatorname{Spec}}\overline{k}$ over the field $\overline{k}$ implies the birational (super)rigidity of the variety $X$ over the field $k$, and whether the same assertion is true for varieties with a group action (see the discussion in [@Kol] and especially [@Kol Question 4]). Such a result is valid for an algebraically closed field $k$ of characteristic zero and its algebraically closed extension $K$, see [@Kol Theorem 2, Theorem 6]. [99]{} A. Avilov, Automorphisms of singular three-dimensional cubic hypersurfaces, Eur. J. of Math. 4:3 (2018), 761–777 A. Avilov, Biregular and birational geometry of quartic double solids with 15 nodes, to appear in Izv. RAN I. Cheltsov, C. Shramov, Cremona groups and the icosahedron, Monogr. Res. Notes Math., CRC Press, Boca Raton, FL (2016) D. Coray, Cubic hypersurfaces and a result of Hermite, Duke Math. J. 54:2 (1987), 657–670 A.
Corti, Singularities of linear systems and 3-fold birational geometry. L.M.S. Lecture Note Series, 281 (2000), 259–312 I. Dolgachev, Corrado Segre and nodal cubic threefolds, preprint, https://arxiv.org/abs/1501.06432 I. Dolgachev, Abstract configurations in algebraic geometry, The Fano Conference, Univ. Torino, Turin (2004), 423–462 H. Finkelnberg, Small resolutions of the Segre cubic, Nederl. Akad. Wetensch. Indag. Math. 49 (1987), 261–277 S. Gorchinskiy, C. Shramov, Unramified Brauer group and its applications, Modern lecture courses, MCCME (2018) (in Russian) C. Hacon, C. Xu, On the three dimensional minimal model program in positive characteristic, J. of Amer. Math. Soc. 28:3 (2015), 711–744 J. Kollar, Birational Rigidity of Fano Varieties and Field Extensions, Proc. Steklov Inst. Math., 264 (2009), 96–101 A. Merkurjev, A. Suslin, $K$-cohomology of Severi-Brauer varieties and the norm residue homomorphism, Math. USSR-Izv., 21:2 (1983), 307–340 S. Mukai, Igusa quartic and Steiner surfaces, Contemp. Math. 564 (2012), 205–210 Yu. Prokhorov, Rational surfaces, NOC lecture courses, issue 24, MI RAS, Moscow (2015) (in Russian) J.-P. Serre, Galois cohomology, Springer-Verlag, Berlin (1997) A. Avilov, <span style="font-variant:small-caps;">National Research University Higher School of Economics, AG Laboratory, HSE, 6 Usacheva str., Moscow, Russia, 119048.</span> *E-mail*: `v07ulias@gmail.com`
--- abstract: | In this paper, we review modified $f(R)$ theories of gravity in the Palatini formalism. In this framework, we use Raychaudhuri’s equation, which holds for any geometrical theory of gravity, along with the requirement that gravity is attractive, to discuss the energy conditions. Then, to derive these conditions, we obtain an expression for the effective pressure and energy density by considering the FLRW metric. The energy conditions derived in the Palatini version of $f(R)$ gravity differ from those derived in GR. We will see that the WEC (weak energy condition) derived in the Palatini formalism has exactly the same expression as in the metric approach.\ \ [*Keywords*]{}: Modified Gravity, Palatini Formalism, Energy Conditions address: 'Department of Physics, Florida Atlantic University, FL 33431, USA ' author: - 'H. Saiedi' title: 'Energy Conditions in Palatini Approach to Modified $f(R)$ Gravity' ---  \ Introduction ============ According to astronomical observations, gravity at large scales may not behave as in standard GR, which is derived from the Hilbert-Einstein action, $A = \frac{1}{16\pi G}\int {\sqrt{-g} \ R \ d^4x} \ + \int {\sqrt{-g} \ L_m \ d^4x}$, where $R$ is the Ricci scalar, $G$ is Newton’s gravitational constant, and $L_m$ is the matter Lagrangian density \[1-4\]. So, a generalized Hilbert-Einstein action may be required to fully understand the gravitational interaction. One of the possible ways to generalize GR is related to the modification of the geometric section of the Hilbert-Einstein action. Examples of such modified gravity models are introduced in \[5,6\], by assuming that the Ricci scalar $R$ in the Lagrangian is replaced by an arbitrary function $f(R)$ of the Ricci scalar. For discussions of modified $f(R)$ gravity theories see \[7-18\]. The modified $f(R)$ theories of gravity can easily explain the recent cosmological observations, and give a solution to the dark matter problem \[19\].
The metric and Palatini formalisms are two different ways in which GR can be derived, and they lead to the same field equations \[20\]. However, in modified $f(R)$ gravity, the equations of motion in the Palatini approach and the metric formalism are generically different \[20\]. The field equations in the metric approach are higher-order while in the Palatini approach they are second-order. Both these formalisms in $f(R)$ theories allow the formulation of simple extensions of Einstein’s GR. For further discussions of $f(R)$ theories of gravity involving geometry and matter coupling see \[21, 22\]. In the cosmological context, different $f(R)$ models give rise to the problem of how to constrain these possible $f(R)$ models from theoretical and observational aspects. Recently, this possibility has been discussed by testing the cosmological viability of some specific cases of $f(R)$ \[23-29\]. By imposing the energy conditions, we may obtain further constraints on $f(R)$ theories of gravity \[30, 31\]. In different contexts, these conditions (the so-called energy conditions) have been used to obtain global solutions for a variety of situations. As an example, the weak (WEC) and strong (SEC) energy conditions were used in the Hawking-Penrose singularity theorems. Also, the null energy condition (NEC) is required to prove the second law of black hole thermodynamics. Although the energy conditions were originally formulated in GR \[32\], one can derive these conditions in $f(R)$ theories of gravity by introducing a new effective pressure and energy density defined in the Jordan frame. In the present paper, the energy conditions for $f(R)$ theories in the Palatini formalism are derived by using Raychaudhuri’s equation (along with the attractiveness of gravity), which is the ultimate origin of the energy conditions.\ Palatini Formalism for $f(R)$ Theories of gravity ================================================= To explain the cosmic speed-up, the model $f(R)=R-\mu^4/R$ in the metric formalism has some problems.
The observation that these problems can be avoided in the Palatini formalism has boosted this approach to $f(R)$ theories of gravity. Also, the energy conditions in GR and in the metric formalism of $f(R)$ theories have been discussed in different contexts. So, finding the energy conditions in the Palatini version of $f(R)$ would be interesting, and we discuss them in the present paper. Therefore, first we review the field equations in the Palatini formalism, and then obtain the energy conditions. The action that defines $f(R)$ theories has the generic form\ $$A = \frac{1}{2k^2}\int {\sqrt{-g} \ f(R) \ d^4x} \ + A_m[g_{\mu\nu}, \psi_m]$$  \ where $A_m[g_{\mu\nu}, \psi_m]$ represents the matter action, which depends on the metric $g_{\mu\nu}$ and the matter field $\psi_m$. In the case of the Palatini formalism, the connection $\Gamma^\lambda_{\mu\nu}$ and the metric $g_{\mu\nu}$ are regarded as dynamical variables to be independently varied. Varying the action with respect to the metric yields the dynamical equations\ $$f'(R)R_{\mu\nu}(\Gamma) - \frac{1}{2} f(R) g_{\mu\nu} = k^2 T_{\mu\nu} \ ,$$  \ where $f'(R)=\frac{df}{dR}$ and $T_{\mu\nu}$ is the usual energy-momentum tensor. $R_{\mu\nu}(\Gamma)$ is the Ricci tensor corresponding to the connection $\Gamma^\lambda_{\mu\nu}$, which is in general different from the Ricci tensor corresponding to the metric connection $R_{\mu\nu}(g)$. Taking the trace of the equation (2), we obtain\ $$f'(R)R - 2 f(R) = k^2 T \ ,$$  \ where $R = R(T) = g^{\mu\nu}R_{\mu\nu}(\Gamma)$ is directly related to $T$ and is different from the Ricci scalar $R(g) = g^{\mu\nu}R_{\mu\nu}(g)$ of the metric case.
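For the model $f(R)=R-\mu^4/R$ mentioned above, the trace equation (3) (with $k^2=1$) is purely algebraic in $R$, so $R$ can be expressed in terms of $T$ in closed form. A sympy sketch, offered only as an illustration of this point:

```python
# Solve the Palatini trace equation f'(R) R - 2 f(R) = T
# for the model f(R) = R - mu^4 / R (units with k^2 = 1).
import sympy as sp

R, T, mu = sp.symbols('R T mu', real=True)
f = R - mu**4 / R
fp = sp.diff(f, R)

# f'(R) R - 2 f(R) = -R + 3 mu^4 / R, so the trace equation is
# equivalent to the quadratic R^2 + T R - 3 mu^4 = 0
sols = sp.solve(sp.Eq(fp * R - 2 * f, T), R)

assert len(sols) == 2  # two algebraic branches R(T) = (-T ± sqrt(T^2 + 12 mu^4)) / 2
assert all(sp.expand(s**2 + T * s - 3 * mu**4) == 0 for s in sols)
```

This is the well-known feature that, in the Palatini approach, $R=R(T)$ is an algebraic function of the matter trace rather than an independent dynamical field.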
Varying the action (1) with respect to the connection yields\ $$\nabla_\alpha(\sqrt{-g} f'(R) g^{\mu\nu}) = 0 \ .$$  \ Taking into account how the Ricci tensor transforms under conformal transformations, it has been shown that \[20, 33\]\ $$\begin{aligned} R_{\mu\nu}(g) - \frac{1}{2} g_{\mu\nu} R(g) &=& \frac{k^2}{f'} T_{\mu\nu} - \frac{R(T)f' - f }{2f'} g_{\mu\nu} + \frac{1}{f'}\left (\nabla_\mu\nabla_\nu f' - g_{\mu\nu}\Box f'\right ) \nonumber \\ &-& \frac{3}{2(f')^2} \left [ \partial_\mu f' \partial_\nu f' - \frac{1}{2} g_{\mu\nu} (\partial f')^2 \right ]\end{aligned}$$  \ where $R_{\mu\nu}(g)$ and $R(g)$ are computed in terms of the Levi-Civita connection of the metric $g_{\mu\nu}$, i.e., they represent the usual Ricci tensor and scalar curvature. It follows that $R(T) = g^{\mu\nu}R_{\mu\nu}(\Gamma)$ and $R(g) = g^{\mu\nu}R_{\mu\nu}(g)$ are related by\ $$R = R(T) = R(g) + \frac{3}{2(f')^2} \partial_\lambda f' \partial^\lambda f' - \frac{3}{f'} \Box f' \ \ .$$  \ For simplicity, we take $k^2 = 8\pi G = 1$. Now, we can realize that the right hand side of equation (5) can be considered as an effective energy-momentum tensor $T^e_{\mu\nu}$.
So\ $$\begin{aligned} T^e_{\mu\nu} &=& \frac{1}{f'} T_{\mu\nu} - \frac{R(T)f' - f }{2f'} g_{\mu\nu} + \frac{1}{f'}\left (\nabla_\mu\nabla_\nu f' - g_{\mu\nu}\Box f'\right ) \nonumber \\ &-& \frac{3}{2(f')^2} \left [ \partial_\mu f' \partial_\nu f' - \frac{1}{2} g_{\mu\nu} (\partial f')^2 \right ] \ .\end{aligned}$$  \ Taking the trace of the above equation, one can easily find\ $$T^e = g^{\mu\nu}T^e_{\mu\nu} = \frac{T}{f'} - \frac{2}{f'} \left(R(T)f' - f \right) - \frac{3 \Box f'}{f'} + \frac{3}{2(f')^2} (\partial f')^2 \ .$$  \ By substituting $T$ from (3) into equation (8), after simplification, we reach\ $$T^e = \frac{3}{2(f')^2} (\partial f')^2 - \frac{3 \Box f'}{f'} - R(T) \ .$$  \ Now, by comparing the above relation and equation (6), one can easily realize that\ $$T^e = - R(g) \ .$$  \ So, we can rewrite the equation (5) as\ $$R_{\mu\nu}(g) = T^e_{\mu\nu} - \frac{T^e}{2} g_{\mu\nu} \ ,$$  \ where $ T^e_{\mu\nu}$ and $T^e$ are given by equations (7) and (8), respectively.\ \ Energy Conditions in Palatini Version of $f(R)$ ================================================ To find the energy conditions, we shall use Raychaudhuri’s equation, which holds for any geometrical theory of gravity. Therefore, we first briefly review these conditions in GR and then apply them to the $f(R)$ modified gravity in the Palatini formalism. Raychaudhuri’s equation implies that for any hypersurface orthogonal congruences, the condition for attractive gravity (convergence of timelike geodesics) reduces to   $R_{\mu\nu}(g) u^\mu u^\nu \geq 0$,   where $u^\mu$ is a tangent vector field to a congruence of timelike geodesics.
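The contractions carried out next, for the standard perfect-fluid tensor $T_{\mu\nu}=(\rho+p)u_\mu u_\nu - p\,g_{\mu\nu}$, can be checked explicitly in a local Lorentz frame. The following sympy sketch uses signature $(+,-,-,-)$ with $u^\mu=(1,0,0,0)$ and the null vector $k^\mu=(1,1,0,0)$ (a convenient choice, not the only one):

```python
# Check the perfect-fluid contractions behind the SEC and NEC
# in a local Lorentz frame, signature (+,-,-,-).
import sympy as sp

rho, p = sp.symbols('rho p', real=True)
g = sp.diag(1, -1, -1, -1)            # Minkowski metric
u = sp.Matrix([1, 0, 0, 0])           # unit timelike vector, u.u = 1
k = sp.Matrix([1, 1, 0, 0])           # null vector, k.k = 0

T = (rho + p) * u * u.T - p * g       # perfect fluid with lowered indices
trT = (g.inv() * T).trace()           # T = g^{mu nu} T_{mu nu} = rho - 3 p

sec = (u.T * (T - sp.Rational(1, 2) * trT * g) * u)[0]
nec = (k.T * T * k)[0]

assert sp.simplify(trT - (rho - 3 * p)) == 0
assert sp.simplify(sec - (rho + 3 * p) / 2) == 0   # SEC contraction ∝ rho + 3p
assert sp.simplify(nec - (rho + p)) == 0           # NEC contraction = rho + p
```

The timelike contraction is proportional to $\rho+3p$ and the null one equals $\rho+p$, which is exactly why the SEC and NEC take the forms $(\rho+3p)\geq 0$ and $(\rho+p)\geq 0$.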
In GR, using units such that $k^2 = 8\pi G = c = 1$, we have  $R_{\mu\nu}(g) - \frac{1}{2}g_{\mu\nu}R(g) = T_{\mu\nu}$  or   $R_{\mu\nu}(g) = T_{\mu\nu} - \frac{T}{2}g_{\mu\nu}$.\ So the condition $R_{\mu\nu}(g) u^\mu u^\nu \geq 0$  implies that\ $$R_{\mu\nu}(g) u^\mu u^\nu = \left ( T_{\mu\nu} - \frac{T}{2}g_{\mu\nu} \right ) u^\mu u^\nu \geq 0 \ \ .$$  \ For a perfect fluid with energy density $\rho$ and pressure $p$,\ $$T_{\mu\nu} = (\rho + p) u_\mu u_\nu - p g_{\mu\nu} \ .$$  \ By using the restriction (12), the SEC can be written as  $(\rho + 3p) \geq 0$.\ The condition for convergence of null geodesics along with Einstein’s equations leads to\ $$R_{\mu\nu}(g) k^\mu k^\nu = T_{\mu\nu} k^\mu k^\nu \geq 0 \ \ ,$$  \ which is the NEC. Here $k^\mu$ is a tangent vector field to a congruence of null geodesics. Therefore, the NEC for the energy-momentum tensor (13) can be written as   $(\rho + p) \geq 0$.\ Since Raychaudhuri’s equation is valid for any geometrical theory of gravity, the conditions   $R_{\mu\nu}(g) u^\mu u^\nu \geq 0$  and  $R_{\mu\nu}(g) k^\mu k^\nu \geq 0$  along with the field equations in the Palatini version of $f(R)$ gravity imply that\ $$\begin{aligned} R_{\mu\nu}(g) u^\mu u^\nu &=& \left ( T^e_{\mu\nu} - \frac{T^e}{2}g_{\mu\nu} \right ) u^\mu u^\nu \geq 0 , \\ \nonumber \\ R_{\mu\nu}(g) k^\mu k^\nu &=& T^e_{\mu\nu} k^\mu k^\nu \geq 0 \ \ ,\end{aligned}$$  \ where we have used $T^e_{\mu\nu}$ instead of $T_{\mu\nu}$.
Now, by comparing the above equations with equations (12) and (14), we can simply figure out that the SEC and NEC can be modified as   $(\rho_e + 3p_e) \geq 0$  and  $(\rho_e + p_e) \geq 0$, respectively.\ By substituting (6) into (7), for the homogeneous and isotropic Friedmann-Lemaitre-Robertson-Walker (FLRW) metric with scale factor $a(t)$, and after some simplifications, we reach the following relations for effective density $\rho_e$ and effective pressure $p_e$.\ $$\begin{aligned} \rho_e &=& T^{0e}_0 = \frac{1}{f'} \left [ \rho + \frac{1}{2} (f - R(g)f') - \frac{3}{2f'}(\partial_0 f')^2 + \frac{3}{2}\partial_0 \partial_0 f' + \frac{3}{2} H \partial_0 f' \right ] \ \ , \\ \nonumber \\ \nonumber \\ p_e &=& - T^{1e}_1 = \frac{1}{f'} \left [ p - \frac{1}{2} (f - R(g)f') - \frac{1}{2}\partial_0 \partial_0 f' - \frac{5}{2} H \partial_0 f' \right ].\end{aligned}$$  \ Here, $H=\dot{a}/a$ is the Hubble parameter. We can easily rewrite the above equations as (by denoting $R_g = R(g)$)\ $$\begin{aligned} \rho_e &=& \frac{1}{f'} \left [ \rho + \frac{1}{2} (f - R_gf') \right ] \nonumber \\ \nonumber \\ &+& \frac{1}{f'} \left [ \frac{3}{2}\ddot{R_g}f'' - \frac{3}{2f'}\dot{R_g}^2 f''^2 + \frac{3}{2}\dot{R_g}^2 f''' + \frac{3}{2} H \dot{R_g}f'' \right ] \ \ , \\ \nonumber \\ \nonumber \\ p_e &=& \frac{1}{f'} \left [ p - \frac{1}{2} (f - R_gf') - \frac{1}{2}\ddot{R_g}f'' - \frac{1}{2}\dot{R_g}^2 f''' - \frac{5}{2} H \dot{R_g}f'' \right ].\end{aligned}$$  \ To simply express the energy conditions, we can write the Ricci scalar and its derivatives for a spatially flat FLRW metric in terms of the deceleration $(q)$, jerk $(j)$ and snap $(s)$ parameters \[34-36\]\ $$\begin{aligned} R_g &=& -6H^2(1-q) \nonumber \\ \dot{R_g} &=& -6H^3(j-q-2) \nonumber \\ \ddot{R_g} &=& -6H^4(s+q^2+8q+6)\end{aligned}$$  \ where\ $$q=-\frac{1}{aH^2}. \frac{d^2a}{dt^2} \ \ \ , \ \ \ j=\frac{1}{aH^3}. \frac{d^3a}{dt^3} \ \ \ , \ \ \ s=\frac{1}{aH^4}. 
\frac{d^4a}{dt^4}$$  \ Now, we can classify the energy conditions as follows\ \ **NEC**  :  $(\rho_e + p_e) \geq 0$\ $$\begin{aligned} \rho &+& p + \ddot{R_g}f'' - \frac{3}{2f'}\dot{R_g}^2 f''^2 + \dot{R_g}^2 f''' - H \dot{R_g}f'' \ \geq \ 0 \ \ \ \Rightarrow \nonumber \\ \nonumber \\ \nonumber \\ \rho &+& p - 6H^4(s+q^2+8q+6)f'' - \frac{54}{f'}H^6(j-q-2)^2 f''^2 \nonumber \\ &+& 36H^6(j-q-2)^2 f''' +6H^4(j-q-2) f'' \ \geq \ 0\end{aligned}$$  \ \ **SEC**  :  $(\rho_e + 3p_e) \geq 0$\ $$\begin{aligned} \rho &+& 3p -f +R_gf' - \frac{3}{2f'}\dot{R_g}^2 f''^2 - 6H \dot{R_g}f'' \ \geq \ 0 \ \ \ \Rightarrow \nonumber \\ \nonumber \\ \nonumber \\ \rho &+& 3p - f - 6H^2(1-q)f' - \frac{54}{f'}H^6(j-q-2)^2 f''^2 \nonumber \\ &+& 36H^4(j-q-2) f'' \ \geq \ 0\end{aligned}$$  \ \ **WEC** (weak energy condition)  :  besides the inequality (23),  $\rho_e \geq 0$\ $$\begin{aligned} \rho &+& \frac{1}{2}(f - R_gf') - 3H \dot{R_g}f'' \ \geq \ 0 \ \ \ \Rightarrow \nonumber \\ \nonumber \\ \nonumber \\ \rho &+& \frac{1}{2}f + 3H^2(1-q)f' + 18H^4(j-q-2) f'' \ \geq \ 0\end{aligned}$$  \ \ **DEC** (dominant energy condition)  :  besides the inequalities (23) and (25),   $(\rho_e - p_e) \geq 0$\ $$\begin{aligned} \rho &-& p + f - R_gf' + 2\dot{R_g}^2 f''' - \frac{3}{2f'}\dot{R_g}^2 f''^2 + 2\ddot{R_g}f'' + 4H \dot{R_g}f'' \ \geq \ 0 \ \ \ \Rightarrow \nonumber \\ \nonumber \\ \nonumber \\ \rho &-& p + f + 6H^2(1-q) f' + 72H^6(j-q-2)^2 f''' - \frac{54}{f'}H^6(j-q-2)^2 f''^2 \nonumber \\ &-& 12H^4(s+q^2+8q+6)f'' - 24H^4(j-q-2) f'' \ \geq \ 0\end{aligned}$$  \ It is useful to discuss the energy conditions for some specific $f(R)$ models for the present values of the deceleration, jerk, and snap parameters. As we can see from the inequalities (23), (24), (25), and (26), these inequalities depend on the value of the snap parameter, except for the WEC and SEC.
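As a hedged illustration of how these inequalities can be evaluated for a concrete model, the sketch below encodes the kinematic expressions (22) and the left-hand side of the WEC inequality (25), and checks the GR limit $f(R)=R$, where $f''=0$ and the condition must collapse to $\rho \geq 0$. Function names and sample values are our own choices, not the paper's:

```python
# Illustrative sketch (not the authors' code): evaluating the kinematic
# expressions (22) and the WEC inequality (25) for a given f(R) model.
# Function names and sample values are our own choices.

def ricci_kinematics(H, q, j, s):
    """R_g and its first two time derivatives for a spatially flat FLRW metric."""
    R = -6.0 * H**2 * (1.0 - q)
    Rdot = -6.0 * H**3 * (j - q - 2.0)
    Rddot = -6.0 * H**4 * (s + q**2 + 8.0*q + 6.0)
    return R, Rdot, Rddot

def wec_lhs(rho, H, q, j, f, fp, fpp):
    """Left-hand side of inequality (25); the WEC requires this to be >= 0.

    f, fp, fpp are f(R), f'(R), f''(R) evaluated at R_g = -6 H^2 (1 - q).
    """
    return rho + 0.5*f + 3.0*H**2*(1.0 - q)*fp + 18.0*H**4*(j - q - 2.0)*fpp

# Present-day values q0 = -0.81, j0 = 2.16 (snap set to 0 for illustration),
# in units with H = 1:
H, q, j = 1.0, -0.81, 2.16
Rg, Rgdot, Rgddot = ricci_kinematics(H, q, j, 0.0)

# Sanity check in the GR limit f(R) = R (f' = 1, f'' = 0): the WEC
# left-hand side must reduce to rho itself.
lhs_gr = wec_lhs(rho=0.4, H=H, q=q, j=j, f=Rg, fp=1.0, fpp=0.0)
```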
Since a reliable value of this parameter has not been reported, only the WEC and SEC can be discussed with the present observational data for the deceleration and jerk parameters $(q_0=-0.81, \ j_0=2.16)$. We note that the inequality (25) for the WEC is exactly the same inequality found in \[28\].\ Conclusion ========== In the context of modified $f(R)$ gravity, we have reviewed the field equations in the Palatini formalism. It is shown that the model $f(R)=R-\mu^4/R$ in the metric approach has some problems in explaining the cosmic speed-up. By considering the Palatini version of this model, these problems can be avoided. To discuss the energy conditions, we use the Raychaudhuri equation along with the requirement that gravity is attractive, which is the ultimate origin of the energy conditions and holds for any geometrical theory of gravity. We consider the FLRW metric to derive the effective pressure and energy density, which are needed to find the energy conditions. It is worth mentioning here that the WEC derived in the Palatini formalism of $f(R)$ gravity is exactly the same WEC found in its metric approach. References {#references .unnumbered} ========== S. Perlmutter et al., *Astrophys. J.* **517**, 565 (1999). R. Amanullah et al., *Astrophys. J.* **716**, 712 (2010). J. M. Overduin and P. S. Wesson, *Physics Reports* **402**, 267 (2004). H. Baer, K.-Y. Choi, J. E. Kim, and L. Roszkowski, *Physics Reports* **555**, 1 (2015). H. A. Buchdahl, *Month. Not. R. Astron. Soc.* **150**, 1 (1970). J. D. Barrow and A. C. Ottewill, *J. Phys. A: Math. Gen.* **16**, 2757 (1983). S. M. Carroll, V. Duvvuri, M. Trodden and M. S. Turner, *Phys. Rev. D* **70**, 043528 (2004). S. Nojiri and S. D. Odintsov, *Int. J. Geom. Meth. Mod. Phys.* **4**, 115 (2007). T. P. Sotiriou and V. Faraoni, *Rev. Mod. Phys.* **82**, 451 (2010). S. Capozziello and M. De Laurentis, *Phys. Rept.* **509**, 167 (2011). S. Nojiri, S. D. Odintsov, and V. K. Oikonomou, *Physics Reports* **692**, 1-104 (2017).
S. Bahamonde, M. Jamil, P. Pavlovic, and M. Sossich, *Physical Review D* **94**, 4 (2016). H. Saiedi and B. Nasr Esfahani, *Mod. Phys. Lett. A* **26**, 1211 (2011). J. Dutta, S. Mitra, and B. Chetry, *International Journal of Theoretical Physics* **55**:10, 4272-4285 (2016). S. Habib Mazharimousavi and M. Halilsoy, *Modern Physics Letters A* **31**, 37 (2016). H. Saiedi, *Modern Physics Letters A* **27**:38 (2012). A. Pasqua, S. Chattopadhyay, and R. Myrzakulov, arXiv:1306.0991. T. Xu, S. Cao, J. Qi, M. Biesiada, X. Zheng, and Zong-Hong Zhu, arXiv:1708.08631. C. G. Boehmer, T. Harko, and F. S. N. Lobo, *Astropart. Phys.* **29**, 386 (2008). G. J. Olmo, *Int. J. Mod. Phys. D* **20**, 413 (2011). O. Bertolami, C. G. Boehmer, T. Harko and F. S. N. Lobo, *Phys. Rev. D* **75**, 104016 (2007). T. Harko, F. S. N. Lobo, S. Nojiri and S. D. Odintsov, *Phys. Rev. D* **84**, 024020 (2011). A. W. Brookfield, C. van de Bruck, and L. M. H. Hall, *Phys. Rev. D* **74**, 064028 (2006). L. Amendola, R. Gannouji, D. Polarski, and S. Tsujikawa, *Phys. Rev. D* **75**, 083504 (2007). S. Fay, S. Nesseris, and L. Perivolaropoulos, *Phys. Rev. D* **76**, 063504 (2007). L. Amendola, D. Polarski, and S. Tsujikawa, *Phys. Rev. Lett.* **98**, 131302 (2007). S. Capozziello, S. Nojiri, S. D. Odintsov, and A. Troisi, *Phys. Lett. B* **639**, 135 (2006). J. Santos, J. S. Alcaniz, M. J. Reboucas, and F. C. Carvalho, *Phys. Rev. D* **76**, 083513 (2007). M. Zubair and S. Waheed, *Astrophysics and Space Science* **355**, 2, 361–369 (2015). J. H. Kung, *Phys. Rev. D* **52**, 6922 (1995). S. E. Perez Bergliaffa, *Phys. Lett. B* **642**, 311 (2006). S. W. Hawking and G. F. R. Ellis, *The Large Scale Structure of Spacetime* (Cambridge University Press, England, 1973). S. Capozziello, F. Darabi, and D. Vernieri, *Mod. Phys. Lett. A* **25**, 3279-3289 (2010). M. Visser, *Classical Quantum Gravity* **21**, 2603 (2004). M. Visser, *Gen. Relativ. Gravit.* **37**, 1541 (2005). E. R. Harrison, *Nature (London)* **260**, 591 (1976).
--- abstract: 'Fast magnetoacoustic waves are an important tool for inferring solar atmospheric parameters. We numerically simulate the propagation of fast wave pulses in randomly structured plasmas mimicking the highly inhomogeneous solar corona. A network of secondary waves is formed by a series of partial reflections and transmissions. These secondary waves exhibit quasi-periodicities in both time and space. Since the temporal and spatial periods are related simply through the fast wave speed, we quantify the properties of secondary waves by examining the dependence of the average temporal period ($\bar{p}$) on the initial pulse width ($w_0$) as well as the density contrast ($\delta_\rho$) and correlation length ($L_c$) that characterize the randomness of the equilibrium density profiles. For small-amplitude pulses, $\delta_\rho$ does not alter $\bar{p}$ significantly. Large-amplitude pulses, on the other hand, enhance the density contrast when $\delta_\rho$ is small but have a smoothing effect when $\delta_\rho$ is sufficiently large. We find that $\bar{p}$ scales linearly with $L_c$ and that the scaling factor is larger for a narrower pulse. However, in terms of the absolute values of $\bar{p}$, broader pulses generate secondary waves with longer periods, and this effect is stronger in random plasmas with shorter correlation lengths. Since secondary waves carry the signatures of both the leading wave pulse and the background plasma, our study may find applications in MHD seismology by exploiting the secondary waves detected in the dimming regions after CMEs or EUV waves.' author: - Ding Yuan - Bo Li - 'Robert W. Walsh' title: Secondary fast magnetoacoustic waves trapped in randomly structured plasmas --- Introduction {#sec:intro} ============ Fast magnetohydrodynamic (MHD) waves can propagate across various magnetic structures, and could therefore be easily trapped in structures with low Alfvén speeds [see e.g., @goedbloed2004].
Plasma structuring modifies how MHD waves propagate and leads to interesting effects such as wave-guiding, dispersion, mode coupling, resonant absorption, and phase mixing [@edwin1983; @vandoosselaere2008b; @sakurai1991; @heyvaerts1983]. Theoretical studies on MHD waves in structured plasmas, combined with the abundant measurements of low-frequency waves and oscillations in the solar atmosphere, can be employed to infer the solar atmospheric parameters that are difficult to measure directly [see the reviews by @nakariakov2005rw; @demoortel2012 and references therein]. This technique, commonly referred to as solar MHD seismology, has been successful in yielding the magnetic field strength [@nakariakov2001], plasma beta [@zhang2015], transverse structuring [@aschwanden2003], and longitudinal Alfvén transit time [@arregui2007] in various coronal loops. In addition, it has also been adopted to infer the effective polytropic index of coronal plasmas [@vandoorsselaere2011], magnetic topology of sunspots [@yuan2014cf; @yuan2014lb], and the magnetic structure of large-scale coronal streamers [@chen2010; @chen2011]. While not a common practice in the literature, modelling the inhomogeneous solar atmosphere as randomly structured plasmas is more appropriate in a number of situations. For instance, this approach has been adopted to model the plethora of thin fibrils in sunspots [@keppens1994], the filamentary coronal loops [@parker1988; @pascoe2011], and the structuring in the solar corona across the solar disk [@murawski2001; @yuan2015rs hereafter referred to as ]. @nakariakov2005 examined the dispersive oscillatory wakes of fast waves in randomly structured plasmas. @murawski2001 studied the possible deceleration of fast waves due to random structuring.
performed a parametric study on the attenuation of fast wave pulses propagating across a randomly structured corona, and proposed the application of the results for seismologically exploiting the frequently observed large-scale extreme ultraviolet (EUV) waves. Previous investigations of the global EUV waves across the solar disk primarily focused on the nature and properties of the leading front [see the reviews by, e.g., @gallapher2011; @patsourakos2012; @liu2014; @warmuth2015]. However, secondary waves have been observed at strong magnetic waveguides or anti-waveguides, e.g., active regions [@ofman2002; @shen2013], coronal holes [@veronig2006; @li2012; @olmedo2012], prominences [@okamoto2004; @takahashi2015], and coronal loops [@shen2012]. These studies on secondary waves were primarily intended to provide support for the wave nature of EUV waves. However, given that their spatial distribution and temporal evolution can now be observed in substantial detail, secondary waves may well be suitable for remotely diagnosing the structured solar atmosphere as well (). In this study, we present a detailed numerical study on the interaction between fast wave pulses with a randomly structured plasma, paying special attention to secondary waves in the wake of the leading fast wave pulse. We describe our numerical model in [Section \[sec:model\]]{}, and then present a case study on the secondary waves and their quasi-periodicity in [Section \[sec:period\]]{}. Then we perform a parametric study on how this quasi-periodicity is affected by plasma structuring ([Section \[sec:para\]]{}). Finally, [Section \[sec:con\]]{} summarizes the present study. 
Numerical model {#sec:model} =============== We used MPI-AMRVAC, a finite-volume code [@keppens2012; @porth2014], to solve the ideal MHD equations: $$\begin{aligned} \frac{\partial\rho}{\partial t}+\nabla\cdot (\rho\boldsymbol{v})=&0, \label{eq:mass}\\ \frac{\partial\rho \boldsymbol{v}}{\partial t}+\nabla\cdot \left[\rho\boldsymbol{v}\boldsymbol{v}+\boldsymbol{I}\,p_\mathrm{tot}-\frac{\boldsymbol{B}\boldsymbol{B}}{\mu_0} \right]=&0, \\ \frac{\partial \epsilon}{\partial t}+\nabla\cdot \left[\boldsymbol{v}(\epsilon+p_\mathrm{tot})-\frac{(\boldsymbol{v}\cdot\boldsymbol{B})\boldsymbol{B}}{\mu_0} \right]=&0, \\ \frac{\partial \boldsymbol{B}}{\partial t}+\nabla\cdot (\boldsymbol{v}\boldsymbol{B}-\boldsymbol{B}\boldsymbol{v})=&0,\end{aligned}$$ where $\rho$ is the density, $\boldsymbol{v}$ the velocity, $\boldsymbol{B}$ the magnetic field, and $\boldsymbol{I}$ is the unit tensor. In addition, $p_\mathrm{tot}=p+B^2/2\mu_0$ is the total pressure, where $p$ is the gas pressure, and $\mu_0$ is the magnetic permeability of free space. The total energy density $\epsilon$ is defined by $\epsilon=\rho v^2/2+p/(\gamma-1)+B^2/2\mu_0$, where $\gamma$ is the adiabatic index. To facilitate the numerical computations, we adopt a set of three independent constants of normalization, namely, $B_0=10\,\mathrm{G}$, $L_0=1000\,\mathrm{km}$ and $\rho_0=7.978\cdot10^{-13}\,\mathrm{kg\,m^{-3}}$.
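As a quick SI sanity check (an illustrative sketch, not from the paper's setup), these normalization constants are mutually consistent: the corresponding Alfvén speed $B_0/\sqrt{\rho_0\mu_0}$ evaluates to about $1000\,\mathrm{km\,s^{-1}}$:

```python
import math

# Quick SI check (illustrative): the normalization constants above give an
# Alfven speed B0/sqrt(rho0*mu0) of about 1000 km/s.

B0 = 10e-4                    # 10 G expressed in tesla
rho0 = 7.978e-13              # kg m^-3
mu0 = 4.0e-7 * math.pi        # vacuum permeability, H/m

V_A = B0 / math.sqrt(rho0 * mu0)   # m/s; ~1.0e6 m/s = 1000 km/s
```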
Some derivative constants are also relevant, e.g., the Alfvén speed $V_\mathrm{A}=B_0/\sqrt{\rho_0\mu_0}=1000\,\mathrm{km\,s^{-1}}$ and the Alfvén transit time $\tau_\mathrm{A}=1\,\mathrm{s}$. In the following text, all symbols represent normalized variables. The MPI-AMRVAC code was configured to solve the one-dimensional (1D) version of the ideal MHD equations in Cartesian coordinates $(x,y,z)$, meaning that all dependent variables depend only on $y$. We chose the three-step Runge-Kutta method for time integration. Furthermore, from the multiple finite-volume approaches implemented by the MPI-AMRVAC code, we adopted the HLL approximate Riemann solver [@harten1983] with the KOREN flux limiter [see, e.g., @toth1996; @kuzmin2006]. The equilibrium state, into which fast wave pulses are to be introduced, is characterized by a uniform, $x$-directed magnetic field $\boldsymbol{B}(t=0, y)=(1,0, 0)$. The plasma pressure $p(t=0, y)$ is also uniform, corresponding to a plasma beta of $0.01$ everywhere in the computational domain. Random structuring is realized by specifying a proper density profile $\rho(t=0, y)$. A uniform plasma pressure is maintained by adjusting the temperature profile ($p=\rho T$) accordingly. The density profile is formed by a set of sinusoidal modulations superimposed on a uniform background, $$\label{eq:dens} \rho\left(t=0, y \right) = 1+\frac{\Delta}{N}\sum\limits_{i=1}^{N} R_i\sin \left( \frac{1}{4}\frac{i\pi y}{L_y} +\phi_i \right),$$ where $\Delta$ is a scaling parameter, and $L_y$ is the size of the numerical domain in the $y$-direction. The values $R_i$ and $\phi_i$ are the random amplitude and phase of the $i$-th harmonic component, given by uniform pseudo-random number generators within the ranges of $[0,1]$ and $[-\pi,\pi]$, respectively.
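A random density profile of this form can be generated in a few lines; the sketch below (parameter values and the seed are our own choices, not the paper's) implements Eq. (\[eq:dens\]) directly:

```python
import numpy as np

# Illustrative sketch (parameter values and the seed are our own choices):
# the random density of Eq. (dens), N harmonics with uniform random
# amplitudes R_i in [0, 1] and phases phi_i in [-pi, pi].

def random_density(y, N=100, Delta=1.0, Ly=128.0, seed=0):
    rng = np.random.default_rng(seed)
    R = rng.uniform(0.0, 1.0, N)
    phi = rng.uniform(-np.pi, np.pi, N)
    rho = np.ones_like(y)
    for i in range(1, N + 1):
        rho += (Delta / N) * R[i - 1] * np.sin(0.25 * i * np.pi * y / Ly + phi[i - 1])
    return rho

y = np.linspace(0.0, 128.0, 4096)
rho_eq = random_density(y)
```

Since each harmonic contributes at most $\Delta/N$, the fluctuations are bounded by $\Delta$ around the uniform background.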
A correlation length $L_c$ is defined to quantify the average spacing between two fine structures; and a density contrast $\delta_\rho$ is calculated to define the coarseness of density fluctuations (). A density distribution with $\delta_\rho=0.18$ and $L_c=1.26$ is shown in [Figure \[fig:yt\]]{}(a) for illustration purposes. Fast wave pulses to be launched into the equilibrium are in the form of a Gaussian profile centered around $y_0=0$ with an amplitude of $A_0$ and an initial width of $w_0$ (): $$\begin{aligned} v'_y(t=0, y)&=A_0\exp\left[-4\ln2\frac{(y-y_0)^2}{w_0^2} \right],\\ B'_x(t=0, y)&=A_0\exp\left[-4\ln2\frac{(y-y_0)^2}{w_0^2} \right], \\ \rho'(t=0, y)&=A_0\exp\left[-4\ln2\frac{(y-y_0)^2}{w_0^2} \right],\end{aligned}$$ where $v'_y$, $B'_x$, and $\rho'$ are the perturbations to the $y$-component of the velocity, the $x$-component of the magnetic field, and plasma density, respectively. This form of perturbation simulates a local eigenmode solution and ensures that fast waves are uni-directional. Hereafter, fast wave pulses with $A_0=0.005$ ($0.1$) will be referred to as small-(large-) amplitude pulses. As will be shown, the small-amplitude pulses have negligible nonlinear effects, whereas the large-amplitude ones lead to non-restorable density perturbations. Quasi-periodicity of secondary waves {#sec:period} ==================================== We start with an examination of what happens when a small-amplitude fast pulse with $w_0=1.78$ is launched into a randomly structured plasma as depicted in [Figure \[fig:yt\]]{}(a). [Figure \[fig:yt\]]{}(b) presents the distribution in the $y-t$ plane of the $y$-component of the velocity perturbation ($v_y$), where the diagonal ridge is the course of the leading fast wave pulse. While propagating, this pulse undergoes a series of interactions with the random plasma, thereby experiencing some attenuation and broadening (). 
In addition, secondary waves with amplitudes of about 10-20% of that of the main pulse are clearly seen, forming a ‘fabric’ pattern as a result of partial transmissions and reflections. Partial reflection is strong at sharp density changes (e.g., at $y\simeq20, 48, 85, 105$), as evidenced by the strong backward propagation of secondary waves. ![ (a) Density distribution of a randomly structured plasma with a density contrast $\delta_\rho=0.18$ and a correlation length $L_c=1.26$. (b) Distribution in the $y-t$ plane of the $y$-component of the velocity perturbation ($v_y$) in response to a small-amplitude fast pulse with $w_0=1.78$. The green (black) curve shows the temporal (spatial) distribution of $v_y$ at $y=32$ ($t=80$). In both curves, the values of $v_y$ are multiplied by $10^4$ for presentation purposes. \[fig:yt\]](ytdisp){width="50.00000%"} We see that the secondary waves exhibit both spatial and temporal quasi-periodicities. To better show this, we derive the Fourier spectra for the secondary waves shown in the two curves in [Figure \[fig:yt\]]{}(b) by excluding the main pulse and the zeros ahead of the pulse. As shown in [Figure \[fig:fft\]]{}(a), the velocity variation $v_y(t, y=32)$ has prominent oscillations with periods in the range between $6$ and $24$. The average period is found to be $\bar{p}=8.5\pm2.5$ (see Appendix \[sec:method\] for how we calculate this $\bar{p}$). This periodicity is not unique to the temporal profile of $v_y$ at $y=32$, but is found at all other positions. On the other hand, the spatial period ([Figure \[fig:fft\]]{}(b)) for $v_y(t=80, y)$ ranges from $6$ to $22$ (or from $5L_c$ to $17L_c$), with an average value of $\bar{\lambda}=8.1\pm2.3$. The average spatial period is a few times longer than the correlation length $L_c$, meaning that secondary waves need to traverse several correlation lengths before settling into a quasi-periodic signal. This is consistent with @yuan2016st.
In addition, the temporal periodicity is found to be correlated with the spatial one, which is not surprising given that the fast wave pulse and secondary waves have an average speed of propagation of about unity in numerical units. The randomness in spatial structuring is transformed into the randomness in the temporal domain. As demonstrated in @yuan2016st, quasi-periodicity is an intrinsic property of a random time series, and the quasi-periods are usually a few times longer than the timescales of the transients. Parametric Study {#sec:para} ================ In this section, we investigate how the quasi-periodicities depend on various parameters characterizing the equilibrium density profiles and fast pulses. As is evident from Eq. (\[eq:dens\]), the equilibrium density profile is characterized primarily by the density contrast $\delta_\rho$ and correlation length $L_c$. On the other hand, in addition to the amplitudes, fast pulses are also characterized by their initial widths $w_0$. To bring out the effects of each individual parameter on the average period $\bar{p}$, we choose to vary a designated parameter alone by fixing the rest. In addition, since the periodicities in the temporal and spatial domains are correlated, we will only show the results for the temporal periodicities. Given that the time series at each position should be sufficiently long to allow the computation of a significant Fourier spectrum, we choose only the $v_y$ profiles for locations from $y=10$ to $y=100$. The values for $\bar{p}$ are then scatter-plotted for each parameter, see Figures \[fig:rho\], \[fig:Lc\], and \[fig:sig\]. A smaller spread of period means that secondary waves are trapped by a randomly structured plasma in a more uniform manner, similar to the thermalization of collisional particles heated impulsively.
The dependence of $\bar{p}$ on the density contrast $\delta_\rho$ is shown in [Figure \[fig:rho\]]{}, for which the values of the correlation length and initial pulse width are fixed at $L_c=1.26$ and $w_0=1.18$, respectively. For small-amplitude pulses, given in [Figure \[fig:rho\]]{}(a), the mean period does not vary significantly with the density contrast. However, the spread in $\bar{p}$ is larger for larger values of $\delta_\rho$. In the case of large-amplitude pulses, shown in [Figure \[fig:rho\]]{}(b), the spread in $\bar{p}$ tends to be stronger than for small-amplitude pulses. Furthermore, this spread tends to first increase with increasing $\delta_\rho$ before decreasing when $\delta_\rho \gtrsim 0.2$. This trend can be understood as follows. Large-amplitude pulses can lead to non-restorable density perturbations (), and therefore will enhance the density contrast if $\delta_\rho$ is weak. However, when $\delta_\rho$ is sufficiently strong, the passage of a nonlinear fast wave pulse has a smoothing effect. ![ Period as a function of density contrast $\delta_\rho$ for (a) small and (b) large amplitude pulses. These computations pertain to a fixed correlation length $L_c$ of $1.26$, and a fixed initial pulse width $w_0$ of $0.18$. \[fig:rho\]](rho_yp){width="48.00000%"} [Figure \[fig:Lc\]]{} shows the dependence of $\bar{p}$ on the correlation length $L_c$. In this set of experiments, we fix the density contrast at $\delta_\rho=0.18$ and vary the correlation length. Furthermore, while both pertain to small-amplitude pulses, two different values for the initial pulse width ($w_0 = 2.83$ and $1.41$) are examined and the results are shown in Figures \[fig:Lc\](a) and \[fig:Lc\](b), respectively. It is clear that, regardless of $w_0$, the mean period $\bar{p}$ tends to depend linearly on $L_c$. Comparing Figures \[fig:Lc\](a) and \[fig:Lc\](b), one sees that a narrow pulse is more sensitive to the variations in the correlation length.
The period almost triples as $L_c$ doubles. In contrast, the increase in $\bar{p}$ with $L_c$ for the broader pulse is not as strong. ![ Period as a function of correlation length for small-amplitude pulses with two different initial widths: (a) $w_0=2.83$ and (b) $w_0=1.41$. These computations pertain to a fixed density contrast $\delta_\rho$ of $0.18$. \[fig:Lc\]](Lc_yp){width="48.00000%"} How does $\bar{p}$ depend on the initial pulse width $w_0$? To address this, we launch a set of pulses with different initial widths and investigate their propagation in two randomly structured plasmas with correlation lengths of $L_c = 2.00$ and $1.26$, respectively. The resonant energy loss effect () is not prominent in the periodicity of the secondary waves. However, [Figure \[fig:sig\]]{} shows that broader pulses normally generate secondary waves with longer periods, which is more evident if $L_c$ is smaller. This is understandable given that when $L_c$ is small, fast pulses will be able to interact with the fine-scale inhomogeneities more frequently. ![Period as a function of initial pulse width. Two density profiles with correlation lengths of (a) $L_c=1.26$ and (b) $L_c=2.0$ are examined, even though both pertain to the same density contrast of $\delta_\rho = 0.18$. The vertical dashed lines mark where the initial pulse width matches $L_c$ and $2L_c$. \[fig:sig\]](sig_yp){width="48.00000%"} Conclusions {#sec:con} =========== This study offered a series of numerical simulations on the propagation of fast MHD wave pulses in randomly structured plasmas mimicking the solar corona. While traversing the plasma inhomogeneities, fast wave pulses experience partial reflections, giving rise to secondary waves propagating in the opposite direction. In turn, these waves generate further waves, once again due to their partial reflection and transmission at plasma inhomogeneities. 
The end result is that the energy contained in the primary fast pulses is spread over the randomly structured plasmas in the form of propagating and standing secondary waves. These secondary waves exhibit quasi-periodicities in both time and space. The spatial period at a given instant is related to the period observed at a fixed position via the fast wave speed. The interaction between fast wave pulses and the plasma inhomogeneities, as quantified by the average temporal period $\bar{p}$ of secondary waves, turns out to depend primarily on the combination of parameters $[\delta_\rho, L_c, w_0]$. Here $\delta_\rho$ and $L_c$ are the density contrast and correlation length that characterize the equilibrium density profile. Furthermore, $w_0$ is the initial width of the wave pulse. For small-amplitude pulses, $\delta_\rho$ does not have a significant effect on $\bar{p}$. Rather, it determines how rapidly $\bar{p}$ reaches a uniform distribution. Large-amplitude pulses, on the other hand, lead to non-restorable density perturbations, thereby enhancing the density contrast when $\delta_\rho$ is small but having a smoothing effect when $\delta_\rho$ is sufficiently large. The average period $\bar{p}$ scales linearly with the correlation length, with the scaling factors being larger for narrower pulses. However, broader pulses can generate secondary waves with longer periods, the effect being stronger in random plasmas with shorter correlation lengths. Secondary waves carrying the signatures of both the leading wave pulse and the background plasma may be detected in the dimming regions after CMEs or EUV waves [see @guo2015; @chandra2016 for some recent observations]. However, a dedicated observational study is needed to fully explore the seismological applications of the present study. We thank the anonymous referee for the constructive comments.
This work is supported by the Open Research Program KLSA201504 of Key Laboratory of Solar Activity of National Astronomical Observatories of China (DY). It is also supported by the National Natural Science Foundation of China (41174154, 41274176, and 41474149). natexlab\#1[\#1]{}, I., [Andries]{}, J., [Van Doorsselaere]{}, T., [Goossens]{}, M., & [Poedts]{}, S. 2007, , 463, 333 , M. J., [Nightingale]{}, R. W., [Andries]{}, J., [Goossens]{}, M., & [Van Doorsselaere]{}, T. 2003, , 598, 1375 Chandra, R., Chen, P. F., Fulara, A., Srivastava, A. K., & Uddin, W. 2016, , 822, 106 , Y., [Feng]{}, S. W., [Li]{}, B., [et al.]{} 2011, , 728, 147 , Y., [Song]{}, H. Q., [Li]{}, B., [et al.]{} 2010, , 714, 644 , I., & [Nakariakov]{}, V. M. 2012, Royal Society of London Philosophical Transactions Series A, 370, 3193 , P. M., & [Roberts]{}, B. 1983, , 88, 179 , P. T., & [Long]{}, D. M. 2011, , 158, 365 Goedbloed, J. P., & Poedts, S. 2004, Principles of magnetohydrodynamics: with applications to laboratory and astrophysical plasmas (Cambridge university press) , Y., [Ding]{}, M. D., & [Chen]{}, P. F. 2015, , 219, 36 Harten, A., Lax, P. D., & Leer, B. v. 1983, SIAM review, 25, 35 , J., & [Priest]{}, E. R. 1983, , 117, 220 , R., [Bogdan]{}, T. J., & [Goossens]{}, M. 1994, , 436, 372 , R., [Meliani]{}, Z., [van Marle]{}, A. J., [et al.]{} 2012, Journal of Computational Physics, 231, 718 , D. 2006, Journal of Computational Physics, 219, 513 , T., [Zhang]{}, J., [Yang]{}, S., & [Liu]{}, W. 2012, , 746, 13 , W., & [Ofman]{}, L. 2014, , 289, 3233 , K., [Nakariakov]{}, V. M., & [Pelinovsky]{}, E. N. 2001, , 366, 306 , V. M., & [Ofman]{}, L. 2001, , 372, L53 , V. M., [Pascoe]{}, D. J., & [Arber]{}, T. D. 2005, , 121, 115 , V. M., & [Verwichte]{}, E. 2005, Living Reviews in Solar Physics, 2, 3 , L., & [Thompson]{}, B. J. 2002, , 574, 440 , T. J., [Nakai]{}, H., [Keiyama]{}, A., [et al.]{} 2004, , 608, 1124 , O., [Vourlidas]{}, A., [Zhang]{}, J., & [Cheng]{}, X. 2012, , 756, 143 , E. N. 
1988, , 330, 474 , D. J., [Wright]{}, A. N., & [De Moortel]{}, I. 2011, , 731, 73 , S., & [Vourlidas]{}, A. 2012, , 281, 187 , O., [Xia]{}, C., [Hendrix]{}, T., [Moschou]{}, S. P., & [Keppens]{}, R. 2014, , 214, 4 , T., [Goossens]{}, M., & [Hollweg]{}, J. V. 1991, , 133, 227 , Y., & [Liu]{}, Y. 2012, , 754, 7 , Y., [Liu]{}, Y., [Su]{}, J., [et al.]{} 2013, , 773, L33 , T., [Asai]{}, A., & [Shibata]{}, K. 2015, , 801, 37 , C., & [Compo]{}, G. P. 1998, Bulletin of the American Meteorological Society, 79, 61 , G., & [Odstr[č]{}il]{}, D. 1996, Journal of Computational Physics, 128, 82 , T., [Brady]{}, C. S., [Verwichte]{}, E., & [Nakariakov]{}, V. M. 2008, , 491, L9 , T., [Wardle]{}, N., [Del Zanna]{}, G., [et al.]{} 2011, , 727, L32 , A. M., [Temmer]{}, M., [Vr[š]{}nak]{}, B., & [Thalmann]{}, J. K. 2006, , 647, 1466 , A. 2015, Living Reviews in Solar Physics, 12, doi:10.1007/lrsp-2015-3 , D., [Nakariakov]{}, V. M., [Huang]{}, Z., [et al.]{} 2014, , 792, 41 , D., [Pascoe]{}, D. J., [Nakariakov]{}, V. M., [Li]{}, B., & [Keppens]{}, R. 2015, , 799, 221 Yuan, D., Su, J., Jiao, F., & Walsh, R. W. 2016, , 224, 30 , D., [Sych]{}, R., [Reznikova]{}, V. E., & [Nakariakov]{}, V. M. 2014, , 561, A19 , Y., [Zhang]{}, J., [Wang]{}, J., & [Nakariakov]{}, V. M. 2015, , 581, A78 Calculation of quasi-periodicity {#sec:method} ================================ We use $f(s)$ to represent either the spatial distribution of the velocity distribution $v_y(y,t)$ at a fixed instant or its temporal variation at a fixed location, after excluding both the leading pulse and the zeros ahead. The Fourier transform $\mathscr{F}(\xi)$ is calculated as $$\mathscr{F}(\xi)=\int_{-\infty}^{\infty}f(s)e^{-2\pi j s \xi}{\ensuremath{\text{d}}}s,$$ where $j = \sqrt{-1}$, and $\xi$ denotes either the spatial or temporal frequency. As a result, $1/\xi$ will be either the spatial or temporal period. The Fourier spectrum is then obtained by calculating $P(\xi)=\left|\mathscr{F}(\xi)\right|^2$. 
The mean frequency $\bar{\xi}$ is taken to be a weighted average, $$\bar{\xi}=\frac{\int_{-\infty}^{\infty}P(\xi)\xi{\ensuremath{\text{d}}}\xi}{\int_{-\infty}^{\infty}P(\xi){\ensuremath{\text{d}}}\xi}.$$ The error bar $\sigma_\xi$ is then computed by using $$\sigma_\xi^2=\frac{\int_{-\infty}^{\infty}P(\xi)\xi^2{\ensuremath{\text{d}}}\xi}{\int_{-\infty}^{\infty}P(\xi){\ensuremath{\text{d}}}\xi}-\bar{\xi}^2.$$ In our calculations, we integrate only the frequency components with power above the $3\sigma$-noise level [see @torrence1998 for its definition]. In terms of the temporal or spatial periods, we adopt $1/\bar{\xi}$ and $\sigma_\xi/\bar{\xi}^2$ as the mean value and its associated uncertainty (see the vertical lines in [Figure \[fig:fft\]]{}).
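The estimator described in this appendix can be sketched in a few lines (an illustration with our own sampling choices; the $3\sigma$ noise thresholding is omitted for brevity). For a pure sinusoid it recovers the input period:

```python
import numpy as np

# Illustrative sketch of the appendix procedure (our own sampling choices;
# the 3-sigma noise thresholding used in the paper is omitted for brevity).

def mean_period(f, ds):
    """Power-weighted mean period of a real signal f sampled at spacing ds.

    Returns (1/xi_bar, sigma_xi/xi_bar**2), following the appendix.
    """
    xi = np.fft.rfftfreq(f.size, d=ds)[1:]          # drop the zero frequency
    P = np.abs(np.fft.rfft(f - f.mean()))[1:]**2    # power spectrum P(xi)
    xi_bar = np.sum(P * xi) / np.sum(P)
    var = np.sum(P * xi**2) / np.sum(P) - xi_bar**2
    sigma_xi = np.sqrt(max(var, 0.0))
    return 1.0 / xi_bar, sigma_xi / xi_bar**2

# A pure sinusoid with period 10 sampled over an integer number of cycles:
t = np.arange(0.0, 100.0, 0.1)
p_bar, sigma_p = mean_period(np.sin(2.0 * np.pi * t / 10.0), ds=0.1)
```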
--- abstract: 'Weak decay rates under the stellar density and temperature conditions that hold during the rapid proton capture process are studied in neutron-deficient medium-mass waiting point nuclei extending from Ni up to Sn. Isotopes neighboring these waiting point nuclei are also included in the analysis. The nuclear structure part of the problem is described within a deformed Skyrme Hartree-Fock + BCS + QRPA approach, which reproduces not only the beta-decay half-lives but also the available Gamow-Teller strength distributions measured under terrestrial conditions. The various sensitivities of the decay rates to both density and temperature are discussed. In particular, we study the impact of contributions coming from thermally populated excited states in the parent nucleus, as well as the competition between beta decays and continuum electron captures.' author: - 'P. Sarriguren' title: 'Stellar weak decay rates in neutron-deficient medium-mass nuclei' --- Introduction ============ An accurate understanding of most astrophysical processes necessarily requires information from nuclear physics, which provides the input for network calculations and astrophysical simulations (see [@langanke03; @apra] and references therein). Obviously, nuclear physics uncertainties will ultimately affect the reliability of the description of those astrophysical processes. This is especially relevant in the case of explosive phenomena, which involve knowledge of the properties of exotic nuclei that are not yet well explored. Thus, most astrophysical simulations of these violent events must be built on nuclear-model predictions of limited quality and accuracy. This is in particular the case for X-ray bursts (XRBs) [@wallace; @thielemann; @schatz; @woosley], which are generated by a thermonuclear runaway in the hydrogen-rich environment of an accreting neutron star that is fed from a red giant binary companion close enough to allow for mass transfer.
Type I XRBs are typically characterized by a rapid increase in luminosity generating burst energies of $10^{39}-10^{40}$ ergs, which are typically a factor 100 larger than the steady luminosity. The luminosity undergoes a sharp rise lasting about $1-10$ s, followed by a gradual softening with time scales between 10 and 100 s. These bursts are recurrent with time scales ranging from hours to days. The properties of XRBs are particularly dependent on the accretion rate. Typical accretion rates for type I XRBs are about $ 10^{-8}-10^{-9} M_{\odot}$ yr$^{-1}$. Lower accretion rates lead to weaker flashes, while larger accretion rates lead to stable burning on the surface of the neutron star. The ignition of XRBs takes place when the temperature ($T$) and the density ($\rho$) in the accreted disk become high enough to allow a breakout from the hot CNO cycle. Peak conditions of $T=1-3$ GK and $\rho=10^6-10^7$ g cm$^{-3}$ are reached and eventually this scenario allows the development of the nucleosynthesis rapid proton capture ($rp$) process [@schatz; @woosley; @wormer; @pruet], which is characterized by proton capture reaction rates that are orders of magnitude faster than any other competing process, in particular $\beta$-decay. It produces rapid nucleosynthesis on the proton-rich side of stability toward heavier proton-rich nuclei, reaching nuclei with $A\gtrsim 100$, as has been studied in Ref. [@schatz01], where it was shown that the $rp$ process ends in a closed SnSbTe cycle. It also explains the energy and luminosity profiles observed in XRBs. Nuclear reaction network calculations, which may involve as many as several thousand nuclear processes, are performed to follow the time evolution of the isotopic abundances, to determine the amount of energy released by nuclear reactions, and to find the reaction path for the $rp$ process [@wallace; @thielemann; @schatz; @woosley; @wormer; @pruet; @schatz01; @ffn].
In general, the reaction path follows a series of fast proton-capture reactions until the dripline is reached and further proton capture is inhibited by a strong reverse photodisintegration reaction. At this point, the process may only proceed through a beta decay or a less probable double proton capture. Then the reaction flow has to wait for a relatively slow $\beta$-decay and the corresponding nucleus is called a waiting point (WP). The short time scale of the $rp$ process (around 100 s) makes highly significant any mechanism that may affect the process on a time scale of seconds, and the half-lives of the WP nuclei are of this order. Therefore, the half-lives of the WP nuclei along the reaction path determine the time scale of the nucleosynthesis process and the produced isotopic abundances. In this respect, the weak decay rates of neutron-deficient medium-mass nuclei under stellar conditions play a relevant role in understanding the $rp$ process. Although the products of the nucleosynthesis $rp$ process are not expected to be ejected from type I XRBs due to the strength of the neutron star gravitational field, there are other speculative sites for the occurrence of $rp$ processes. This is the case of core collapse supernovae, which might supply suitable physical conditions for the $rp$ process provided neutrino-induced reactions are included in the nucleosynthesis calculations [@wanajo]. These reactions have to be included to bypass the slow beta decays at the WP nuclei via capture reactions of neutrons, which are created from electron antineutrino absorption by free protons [@frohlich]. Contrary to the XRBs, these scenarios will finally lead to the ejection of the nucleosynthetic products and thus contribute to the galactic chemical evolution.
Since the pioneering work of Fuller, Fowler and Newman [@ffn], where the general formalism to calculate weak-interaction rates in stellar environments as a function of density and temperature was introduced, improvements have been focused on the description of the nuclear structure aspect of the problem. Different approaches to describe the nuclear structure involved in the stellar weak decay rates can be found in the literature. They are basically divided into Shell Model [@langanke00; @langanke01] or quasiparticle random phase approximation (QRPA) [@nabi; @paar09; @sarri_plb] categories. Certainly, the nuclear structure problem involved in the calculation of these rates must be treated in a reliable way. In particular, this implies that the nuclear models should be able to describe at least the experimental information available on the decay properties (Gamow-Teller strength distributions and $\beta$-decay half-lives) measured under terrestrial conditions. Although these decay properties may be different at the high $\rho$ and $T$ existing in $rp$ process scenarios, success in describing the decay properties under terrestrial conditions is a requirement for a reliable calculation of the weak decay rates in more general conditions. With this aim in mind, we study here the dependence of the decay rates on both $\rho$ and $T$ using a QRPA approach based on a self-consistent deformed Hartree-Fock (HF) mean field. Deformation has to be taken into account because the reaction path in the $rp$ process crosses a region of highly deformed nuclei around $A=70-80$. This nuclear model has been tested successfully (see [@sarri_prc] and references therein) and reproduces very reasonably the experimental information available on both bulk and decay properties of medium-mass nuclei. In this work we focus our attention on the even-even WP Ni, Zn, Ge, Se, Kr, Sr, Zr, Mo, Ru, Pd, Cd, and Sn isotopes, as well as on their closer even-even neighbors. The paper is organized as follows.
In Section \[wdr\] the weak decay rates are introduced as functions of density and temperature and their nuclear structure and phase space components are studied. Section \[results\] contains the results. First, we study the decay properties under terrestrial conditions, and then as functions of both density and temperature at the $rp$ process. Section \[conclusions\] contains the conclusions of this work. Weak decay rates {#wdr} ================= There are several distinctions between terrestrial and stellar decay rates caused by the effect of high $\rho$ and $T$. The main effect of $T$ is directly related to the thermal population of excited states in the decaying nucleus, accompanied by the corresponding depopulation of the ground state. The weak-decay rates of excited states can be significantly different from those of the ground state and a case-by-case consideration is needed. Another effect related to the high $\rho$ and $T$ comes from the fact that atoms in these scenarios are completely ionized and consequently electrons are no longer bound to the nuclei, instead forming a degenerate plasma obeying a Fermi-Dirac distribution. This opens the possibility for continuum electron capture ($cEC$), in contrast to the orbital electron capture ($oEC$) produced by bound electrons in the atom under terrestrial conditions. These effects make weak interaction rates in the stellar interior sensitive functions of $T$ and $\rho$, with $T=1.5$ GK and $\rho=10^6$ g cm$^{-3}$ being the most significant conditions for the $rp$ process [@schatz]. The decay rate of the parent nucleus is given by $$\lambda = \sum_i \lambda_i\, \frac{2J_i+1}{G} e^{-E_i/(kT)} \, , \label{population}$$ where $G=\sum_i \left( 2J_i+1 \right) e^{-E_i/(kT)}$ is the partition function, $J_i(E_i)$ is the angular momentum (excitation energy) of the parent nucleus state $i$, and thermal equilibrium is assumed.
In principle, the sum extends over all populated states in the parent nucleus up to the proton separation energy. However, since the range of temperatures for the $rp$ process peaks at $T=1.5$ GK ($kT\sim 130$ keV), only a few low-lying excited states are expected to contribute significantly to the decay. Specifically, we consider in this work all the collective low-lying excited states below 1 MeV [@ensdf]. Two-quasiparticle excitations in even-even nuclei will appear at an excitation energy above 2 MeV, which is a typical energy to break a pair in these isotopes. Hence, they can be safely neglected at these temperatures. As an example, the maximum population appears for the lowest of these states ($E_{2^+}=261$ keV in $^{76}$Sr), which at $T$=1.5 GK is $12\%$, while the ground state still contributes with $88\%$. The decay rate for the parent state $i$ is given by $$\lambda _i = \sum_f \lambda_{if}\, ,$$ where the sum extends over all the states in the final nucleus reached in the decay process. The rate $\lambda_{if}$ from the initial state $i$ to the final state $f$ is given by $$\lambda _{if} = \frac{\ln 2}{D} B_{if}\Phi_{if} (\rho,T)\, ,$$ where $D=6146$ s. This expression is decomposed into a nuclear structure part $B_{if}$ that contains the transition probabilities for allowed Fermi (F) and Gamow-Teller (GT) transitions, $$B_{if}=B_{if}(GT)+ B_{if}(F)\, ,$$ and a phase space factor $\Phi_{if}$, which is a sensitive function of $\rho$ and $T$. The theoretical description of both $B_{if}$ and $\Phi_{if}$ is explained in the next subsections. Nuclear Structure ----------------- The nuclear structure part of the problem is described within the QRPA formalism. Various approaches have been developed in the past to describe the spin-isospin nuclear excitations in QRPA [@krum; @hamamoto; @moller; @hir; @homma; @borzov; @paar04; @fracasso; @petro; @sarri_74; @sarri_pp; @sarri_odd].
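The thermal weighting of parent states in Eq. (\[population\]) can be sketched as follows; this is a minimal illustration with the $(2J+1)$ degeneracy factors of that equation, using the $0^+$ ground state and a low-lying $2^+$ level as in the $^{76}$Sr example above. The energies and temperature are illustrative inputs, not a reproduction of the paper's full rate calculation.

```python
import math

K_BOLTZ = 8.617e-2  # Boltzmann constant k in MeV per GK

def thermal_weights(levels, T9):
    """Normalized thermal population weights of parent states.

    levels: list of (J, E) pairs, E in MeV; T9: temperature in GK.
    Implements the weights (2J+1) exp(-E/kT) / G of Eq. (population),
    with G the partition function.
    """
    kT = K_BOLTZ * T9
    w = [(2 * J + 1) * math.exp(-E / kT) for J, E in levels]
    G = sum(w)
    return [x / G for x in w]

# Ground state 0+ plus a 2+ level at 261 keV (the 76Sr case in the text)
weights = thermal_weights([(0, 0.0), (2, 0.261)], T9=1.5)
# The ground state remains the dominant contributor at rp-process
# temperatures; the excited-state weight grows as T9 increases.
```

The weights are then used to combine the state-by-state rates $\lambda_i$ into the total rate $\lambda$.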
In this subsection we show briefly the theoretical framework used in this paper to describe the nuclear part of the decay rates in the neutron-deficient nuclei considered in this work. More details of the formalism can be found in Refs. [@sarri_74; @sarri_pp; @sarri_odd]. The method starts with a self-consistent deformed Hartree-Fock mean field formalism obtained with Skyrme interactions, including pairing correlations. The single-particle energies, wave functions, and occupation probabilities are generated from this mean field. In this work we have chosen the Skyrme force SLy4 [@sly4] as a representative of the Skyrme forces. This particular force includes some selected properties of unstable nuclei in the adjusting procedure of the parameters. It is one of the most successful Skyrme forces and has been extensively studied in the last years. The solution of the HF equation is found by using the formalism developed in Ref. [@vautherin], assuming time reversal and axial symmetry. The single-particle wave functions are expanded in terms of the eigenstates of an axially symmetric harmonic oscillator in cylindrical coordinates, using twelve major shells. The method also includes pairing between like nucleons in BCS approximation with fixed gap parameters for protons and neutrons, which are determined phenomenologically from the odd-even mass differences involving the experimental binding energies [@audi]. The potential energy curves are analyzed as a function of the quadrupole deformation $\beta$, $$\beta = \sqrt{\frac{\pi}{5}}\frac{Q_0}{A\langle r^2 \rangle}\, , \label{beta_quadru}$$ written in terms of the mass quadrupole moment $Q_0$ and the mean square radius $\langle r^2 \rangle$. For that purpose, constrained HF calculations are performed with a quadratic constraint [@constraint]. The HF energy is minimized under the constraint of keeping fixed the nuclear deformation. 
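The deformation parameter of Eq. (\[beta\_quadru\]) is a simple algebraic relation; the sketch below evaluates it for hypothetical inputs. The sharp-sphere estimate $\langle r^2 \rangle = 3R^2/5$ with $R = 1.2\,A^{1/3}$ fm is an illustrative assumption, not the mean square radius obtained from the HF calculation.

```python
import math

def beta_from_q0(Q0, A, r2):
    """Quadrupole deformation beta = sqrt(pi/5) * Q0 / (A * <r^2>),
    with Q0 in fm^2 and r2 = <r^2> in fm^2, per Eq. (beta_quadru)."""
    return math.sqrt(math.pi / 5.0) * Q0 / (A * r2)

# Hypothetical numbers: A = 76, <r^2> from a sharp sphere of radius
# R = 1.2 A^(1/3) fm, and Q0 chosen so the round trip gives beta = 0.4,
# a value typical of the strongly deformed A ~ 70-80 region.
A = 76
R = 1.2 * A ** (1.0 / 3.0)
r2 = 0.6 * R * R                                # <r^2> = 3R^2/5
Q0 = 0.4 * A * r2 / math.sqrt(math.pi / 5.0)    # invert the relation
beta = beta_from_q0(Q0, A, r2)
```

In the constrained calculations described above, it is this $\beta$ that is held fixed while the HF energy is minimized.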
Calculations for GT strengths are performed subsequently for the various equilibrium shapes of each nucleus, that is, for the solutions, in general deformed, for which minima are obtained in the energy curves. Since decays connecting different shapes are disfavored, similar shapes are assumed for the ground state of the parent nucleus and for all populated states in the daughter nucleus. The validity of this assumption was discussed for example in Refs. [@krum; @homma]. To describe GT transitions, a spin-isospin residual interaction is added to the mean field and treated in a deformed proton-neutron QRPA. This interaction contains two parts, a particle-hole ($ph$) part and a particle-particle ($pp$) part. The interaction in the $ph$ channel is responsible for the position and structure of the GT resonance [@homma; @sarri_gese] and it can be derived consistently from the same Skyrme interaction used to generate the mean field, through the second derivatives of the energy density functional with respect to the one-body densities. The $ph$ residual interaction is finally expressed in a separable form by averaging the resulting contact force over the nuclear volume [@sarri_74]. The $pp$ part is a neutron-proton pairing force in the $J^\pi=1^+$ coupling channel, which is also introduced as a separable force [@hir; @sarri_pp]. The strength of the $pp$ residual interaction in this theoretical approach is not derived self-consistently from the SLy4 force used to obtain the mean field, but nevertheless it has been fixed in accordance with it. This strength is usually fitted to reproduce globally the experimental half-lives. Various attempts have been made in the past to fix this strength [@homma], arriving at expressions that depend on the model used to describe the mean field (a Nilsson model in the above reference).
In previous works [@sarri_pp; @sarri_gese; @sarri_pb; @sarri_wp; @sarri_pere] we have studied the sensitivity of the GT strength distributions to the various ingredients contributing to the deformed QRPA-like calculations, namely to the nucleon-nucleon effective force, to pairing correlations, and to residual interactions. We found different sensitivities to them. In this work, all of these ingredients have been fixed to the most reasonable choices found previously [@sarri_prc] and mentioned above. In particular we use the coupling strengths $\chi ^{ph}_{GT}=0.15$ MeV and $\kappa ^{pp}_{GT} = 0.03$ MeV. The proton-neutron QRPA phonon operator for GT excitations in even-even nuclei is written as $$\Gamma _{\omega _{K}}^{+}=\sum_{\pi\nu}\left[ X_{\pi\nu}^{\omega _{K}} \alpha _{\nu}^{+}\alpha _{\bar{\pi}}^{+}+Y_{\pi\nu}^{\omega _{K}} \alpha _{\bar{\nu}} \alpha _{\pi}\right]\, , \label{phon}$$ where $\alpha ^{+}\left( \alpha \right) $ are quasiparticle creation (annihilation) operators, $\omega _{K}$ are the QRPA excitation energies, and $X_{\pi\nu}^{\omega _{K}},Y_{\pi\nu}^{\omega _{K}}$ the forward and backward amplitudes, respectively. 
For even-even nuclei the allowed GT transition amplitudes in the intrinsic frame connecting the QRPA ground state $\left| 0\right\rangle \ \ \left( \Gamma _{\omega _{K}} \left| 0 \right\rangle =0 \right)$ to one-phonon states $\left| \omega _K \right\rangle \ \ \left( \Gamma ^+ _{\omega _{K}} \left| 0 \right\rangle = \left| \omega _K \right\rangle \right)$, are given by $$\left\langle \omega _K | \sigma _K t^{\pm} | 0 \right\rangle = \mp M^{\omega _K}_\pm \, ,\quad K=0,1\, , \label{intrinsic}$$ where $$\begin{aligned} M_{-}^{\omega _{K}}&=&\sum_{\pi\nu}\left( q_{\pi\nu}X_{\pi \nu}^{\omega _{K}}+ \tilde{q}_{\pi\nu}Y_{\pi\nu}^{\omega _{K}} \right) , \\ M_{+}^{\omega _{K}}&=&\sum_{\pi\nu}\left( \tilde{q}_{\pi\nu} X_{\pi\nu}^{\omega _{K}}+ q_{\pi\nu}Y_{\pi\nu}^{\omega _{K}}\right) \, ,\end{aligned}$$ with $$\tilde{q}_{\pi\nu}=u_{\nu}v_{\pi}\Sigma _{K}^{\nu\pi },\ \ \ q_{\pi\nu}=v_{\nu}u_{\pi}\Sigma _{K}^{\nu\pi}, \label{qs}$$ $v'$s are occupation amplitudes ($u^2=1-v^2$) and $\Sigma _{K}^{\nu\pi}$ spin matrix elements connecting neutron and proton states with spin operators $$\Sigma _{K}^{\nu\pi}=\left\langle \nu\left| \sigma _{K}\right| \pi\right\rangle \, .$$ The GT strength for a transition from an initial state $i$ to a final state $f$ is given by $$B_{if}(GT^{\pm} )= \frac{1}{2J_i+1} \left( \frac{g_A}{g_V} \right)_{\rm eff} ^2 \langle f || \sum_j^A \sigma_j t^{\pm}_j || i \rangle ^2 \, ,$$ where $(g_A/g_V)_{\rm eff} = 0.74 (g_A/g_V)_{\rm bare}$ is an effective quenched value. For the transition $I_iK_i (0^+0) \rightarrow I_fK_f (1^+K)$ in the laboratory system, the energy distribution of the GT strength $B_{\omega}(GT^\pm )$ is expressed in terms of the intrinsic amplitudes in Eq. (\[intrinsic\]) as $$\begin{aligned} B_{\omega}(GT^\pm )& =& \left( \frac{g_A}{g_V} \right)_{\rm eff} ^2 \sum_{\omega_{K}} \left[ \left\langle \omega_{K} \left| \sigma_0t^\pm \right| 0 \right\rangle ^2 \delta_{K,0} \right. \nonumber \\ && \left. 
+ 2 \left\langle \omega_{K} \left| \sigma_1t^\pm \right| 0 \right\rangle ^2 \delta_{K,1} \right] \, . \label{bgt}\end{aligned}$$ To obtain this expression, the initial and final states in the laboratory frame have been expressed in terms of the intrinsic states using the Bohr-Mottelson factorization [@bm]. Concerning Fermi transitions, the Fermi operator is the isospin ladder operator $T_{\pm}$, which commutes with the nuclear part of the Hamiltonian excluding the small Coulomb component. Then, superallowed Fermi transitions ($0^+ \rightarrow 0^+$) only occur between members of an isospin multiplet. The Fermi strength is narrowly concentrated in the isobaric analog state (IAS) of the ground state of the decaying nucleus. Thus, neglecting effects from isospin mixing one has $$B_{if}(F^{\pm}) = \frac{1}{2J_i+1}\langle f || \sum_j^A t^{\pm}_j || i \rangle ^2 = T(T+1)-T_{z_i}T_{z_f} \, ,$$ where $T$ is the nuclear isospin and $T_{z}=(N-Z)/2$ its third component. The $B_{if}(F^+)$ strength of our concern here reduces to $B(F^+)=(Z-N)=2$ for the $(T,T_z)=(1,-1)$ isotopes in the decay $(Z,N)\rightarrow (Z-1,N+1)$ with $Z=N+2$. For these transitions the excitation energy of the IAS in the daughter nucleus is given by [@pruet; @ffn] $$E_{IAS}=(ME)_i-(ME)_f + 0.7824 - \Delta E_C \, {\rm MeV},$$ where $ME$ is the atomic mass excess. The Coulomb displacement energy $\Delta E_C$ between pairs of isobaric analog levels is given by $$\Delta E_C = 1.4144 {\bar Z}/A^{1/3} -0.9127 {\rm MeV}\, ,$$ where $\bar{Z}=(Z_i+Z_f)/2$. This expression was obtained in Ref. [@antony] from a fitting to data corresponding to levels with isospin $T=1$. In any case, Fermi transitions are only important for the $\beta^+$ decay of neutron-deficient light nuclei with $Z> N$ ($T_z<0$), where the IAS can be reached energetically. Thus, although they have been considered in the calculations of the terrestrial half-lives, only the dominant GT transitions are included in the stellar decay rates. 
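The Fermi strength and IAS energy formulas above translate directly into code; the sketch below evaluates them, assuming the mass excesses $ME_i$, $ME_f$ are supplied externally (e.g. from mass tables), since no specific values are taken from the text.

```python
def fermi_strength(T, Tz_i, Tz_f):
    """B(F) = T(T+1) - Tz_i * Tz_f for a transition within an isospin
    multiplet, neglecting isospin mixing (Tz = (N - Z)/2)."""
    return T * (T + 1) - Tz_i * Tz_f

def coulomb_displacement(Z_i, Z_f, A):
    """Coulomb displacement energy between isobaric analog levels (MeV):
    Delta E_C = 1.4144 Zbar / A^(1/3) - 0.9127, Zbar = (Z_i + Z_f)/2."""
    Zbar = 0.5 * (Z_i + Z_f)
    return 1.4144 * Zbar / A ** (1.0 / 3.0) - 0.9127

def ias_energy(ME_i, ME_f, Z_i, Z_f, A):
    """Excitation energy (MeV) of the IAS in the daughter nucleus,
    given atomic mass excesses ME_i, ME_f in MeV (external input)."""
    return ME_i - ME_f + 0.7824 - coulomb_displacement(Z_i, Z_f, A)

# For a (T, Tz) = (1, -1) parent decaying to its Tz = 0 analog:
# B(F+) = 1*2 - (-1)*0 = 2, the value quoted in the text.
```

This reproduces the statement that $B(F^+)=2$ for the $(T,T_z)=(1,-1)$ isotopes considered here.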
Phase Space Factors ------------------- The phase space factor contains two components, electron capture ($EC$) and $\beta^+$ decay $$\Phi_{if}=\Phi^{EC}_{if}+\Phi^{\beta^+}_{if}\, .$$ In the case of $\beta^+/EC$ decay in the laboratory, $EC$ arises from orbital electrons in the atom and the phase space factor is given by [@gove] $$\Phi^{oEC}=\frac{\pi}{2} \sum_x q_x^2 g_x^2B_x\, ,$$ where $x$ denotes the atomic subshell from which the electron is captured, $q$ is the neutrino energy, $g$ is the radial component of the bound-state electron wave function at the nucleus, and $B$ stands for other exchange and overlap corrections [@gove]. In $rp$-process stellar scenarios, the phase space factor for $cEC$ is given by $$\begin{aligned} \Phi^{cEC}_{if}&=&\int_{\omega_\ell}^{\infty} \omega p (Q_{if}+\omega)^2 F(Z,\omega) \nonumber \\ && \times S_{e^-}(\omega) \left[ 1-S_{\nu}(Q_{if}+\omega)\right] d\omega \, . \label{phiec}\end{aligned}$$ The phase space factor for the positron emission ($\beta^+$) process is given by $$\begin{aligned} \Phi^{\beta^+}_{if}&=&\int _{1}^{Q_{if}} \omega p (Q_{if}-\omega)^2 F(-Z+1,\omega) \nonumber \\ && \times \left[ 1-S_{e^+}(\omega)\right] \left[ 1-S_{\nu}(Q_{if}-\omega)\right] d\omega \, . \label{phib}\end{aligned}$$ In these expressions $\omega$ is the total energy of the lepton (electron or positron) in $m_ec^2$ units, $p=\sqrt{\omega ^2 -1}$ is the momentum in $m_e c$ units, and $Q_{if}$ is the total energy available in $m_e c^2$ units $$Q_{if}=\frac{1}{m_ec^2}\left( M_p-M_d+E_i-E_f \right) \, ,$$ which is written in terms of the nuclear masses of parent ($M_p$) and daughter ($M_d$) nuclei and their excitation energies $E_i$ and $E_f$, respectively. $F(Z,\omega)$ is the Fermi function [@gove] that takes into account the distortion of the $\beta$-particle wave function due to the Coulomb interaction.
$$F(Z,\omega ) = 2(1+\gamma) (2pR)^{-2(1-\gamma)} e^{\pi y} \frac{|\Gamma (\gamma+iy)|^2}{[\Gamma (2\gamma+1)]^2}\, ,$$ where $\gamma=\sqrt{1-(\alpha Z)^2}$, $y=\alpha Z\omega /p$, $\alpha$ is the fine structure constant, and $R$ is the nuclear radius. The lower integration limit in the $cEC$ expression is given by ${\omega_\ell}=1$ if $Q_{if}> -1$, or ${\omega_\ell}=|Q_{if}|$ if $Q_{if}< -1$. $S_{e^-}$, $S_{e^+}$, and $S_\nu$ are the electron, positron, and neutrino distribution functions, respectively. Their presence inhibits or enhances the available phase space. In $rp$ scenarios the commonly accepted assumptions [@schatz] state that $S_\nu=0$, since neutrinos and antineutrinos can escape freely from the interior of the star and then they do not block the emission of these particles in the capture or decay processes. Positron distributions become important only at higher $T$ ($kT > 1$ MeV), when positrons appear via pair creation, but at the temperatures considered here we take $S_{e^+}=0$. The electron distribution is described as a Fermi-Dirac distribution $$S_{e}=\frac{1}{\exp \left[ \left(\omega -\mu_e\right)/(kT)\right] +1} \, ,$$ assuming that nuclei at these temperatures are fully ionized and electrons are not bound to nuclei. The chemical potential $\mu_e$ is determined from the expression $$\rho Y_e = \frac{1}{\pi^2 N_A}\left( \frac{m_e c}{\hbar}\right) ^3 \int_0^{\infty} (S_e - S_{e^+}) p^2 dp \, ,$$ in (mol/cm$^3$) units. $\rho$ is the baryon density (g/cm$^3$), $Y_e$ is the electron-to-baryon ratio (mol/g), and $N_A$ is Avogadro’s number (mol$^{-1}$). Under the assumptions $S_{e^+}=S_\nu =0$ mentioned above, the phase space factors for $\beta^+$ decay in Eq. (\[phib\]) are independent of the density and temperature. The only dependence of the $\beta^+$ decay rates on $T$ arises from the thermal population of excited parent states. On the other hand, the phase space factor for $cEC$ in Eq.
(\[phiec\]) is a function of both $\rho Y_e$ and $T$, through the electron distribution $S_{e^-}$. The phase space factors increase with $Q_{if}$ and thus the decay rates are more sensitive to the strength $B_{if}$ located at low excitation energies of the daughter nucleus. It is also interesting to notice the relative importance of both $\beta^+$ decay and electron capture phase space factors (see Fig. 3 in Ref. [@sarri_plb]). In general, the former dominates at sufficiently high $Q_{if}$ (low excitation energies in the daughter nucleus), while the latter is always dominant at sufficiently low $Q_{if}$ (high excitation energies in the daughter nucleus). The $\beta$-decay half-life in the laboratory is obtained by summing all the allowed transition strengths to states in the daughter nucleus with excitation energies lying below the corresponding $Q_{EC}$ energy, weighted with the phase space factors, $$T_{1/2}^{-1}=\frac{\lambda}{\ln 2}=\frac{1}{D} \sum_{0 < E_f < Q_{EC}} \left[ B_{if}(GT)+B_{if}(F) \right] \Phi_{if}^{\beta^+/oEC} \, , \label{t12}$$ where the $Q_{EC}$ energy is given by $$Q_{EC} = M_p-M_d+m_e = Q_{\beta^+} + 2m_e\, .$$ Results for weak decay rates {#results} ============================ In this section we present first the results for the potential energy curves. Then, we show the results for the decay properties, GT strength distributions and $\beta$-decay half-lives, under terrestrial conditions, comparing them with the available experimental information. Finally, we present the results for the stellar weak decay rates under density and temperature conditions relevant to the $rp$ process. Potential Energy Curves ----------------------- In Fig. \[fig\_eq\] we can see the potential energy curves for the even-even Ni, Zn, Ge, Se, Kr, Sr, Zr, Mo, Ru, Pd, Cd, and Sn nuclei in the vicinity of the $N=Z$ isotopes considered in this work.
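The steep growth of the phase space with $Q_{if}$ can be illustrated with a stripped-down version of Eq. (\[phib\]). This sketch deliberately sets the Coulomb factor $F(-Z+1,\omega)=1$ and the blocking factors $S_{e^+}=S_\nu=0$, so it reproduces only the bare phase-space shape, not the full rate; energies are in $m_ec^2$ units as in the text.

```python
import math

def phase_space_beta_plus(Q, n=20000):
    """Bare beta+ phase space of Eq. (phib) with F = 1 and
    S_{e+} = S_nu = 0:
        Phi = int_1^Q  w * sqrt(w^2 - 1) * (Q - w)^2  dw,
    energies in m_e c^2 units.  Simpson's rule on n intervals;
    a sketch only, since the Coulomb distortion is omitted here.
    """
    if Q <= 1.0:
        return 0.0  # no phase space available for positron emission
    h = (Q - 1.0) / n

    def f(w):
        return w * math.sqrt(max(w * w - 1.0, 0.0)) * (Q - w) ** 2

    s = f(1.0) + f(Q)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(1.0 + i * h)
    return s * h / 3.0

# The integral grows roughly as Q^5/30 for large Q, which is why the
# rates are dominated by strength located at low excitation energy
# (large Q_if) in the daughter nucleus.
```

Evaluating this factor for decreasing $Q_{if}$ (i.e. increasing daughter excitation energy) shows directly the suppression discussed above.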
We show the energies relative to that of the ground state plotted as a function of the quadrupole deformation $\beta$ in Eq. (\[beta\_quadru\]). They are obtained from constrained HF+BCS calculations with the Skyrme force SLy4 [@sly4]. The nuclei studied here cover a whole proton shell ranging from magic number $Z=28$ (Ni isotopes) up to magic number $Z=50$ (Sn isotopes). The isotopes considered are the predicted WP nuclei, which in most cases correspond to $N=Z$, and their neighbor isotopes. Thus, it is expected that the lighter and heavier nuclei close to $Z=28$ and $Z=50$, respectively, have a tendency to be spherical. The spherical shapes in these isotopes show sharply peaked profiles that become shallow minima as one moves away from $Z=28$ or $Z=50$, and finally deformed shapes are developed as one approaches mid-shell nuclei. The profiles of the latter exhibit a rich structure giving rise to shape coexistence when various minima at close energies are located at different deformations. It is also worth mentioning the correlations observed between mirror nuclei interchanging the number of neutrons and protons. Thus, we see the remarkable similarity between the profiles of $^{66}$Ge $(Z=32,N=34)$ and $^{66}$Se $(Z=34,N=32)$, between $^{70}$Se $(Z=34,N=36)$ and $^{70}$Kr $(Z=36,N=34)$, and between $^{74}$Kr $(Z=36,N=38)$ and $^{74}$Sr $(Z=38,N=36)$. These results are in qualitative agreement with similar ones obtained in this mass region from different theoretical approaches. Just to give some examples, shape transition and shape coexistence were discussed in $A\sim 80$ nuclei within a configuration-dependent shell-correction approach based on a deformed Woods-Saxon potential [@naza85]. Relativistic mean field calculations in this mass region have also been reported in Ref. [@relat].
Nonrelativistic calculations are also available from both Skyrme [@nonrel_skyrme_1; @nonrel_skyrme_2; @nonrel_skyrme_3] and Gogny [@nonrel_gogny] forces, as well as from the complex VAMPIR approach [@petrovici]. Experimental evidence of shape coexistence in this mass region has become available in recent years [@wood; @piercey; @chandler; @becker; @fisher; @bouchez; @gade; @gorgen05; @clement; @davies; @andreoiu; @singh; @hurst; @gorgen07; @ljungvall; @obertelli], and by now this is a well established characteristic feature in the neutron-deficient $A=70-80$ mass region. Laboratory Gamow-Teller strength and half-lives ----------------------------------------------- While the half-lives give only limited information on the decay (different strength distributions may lead to the same half-life), the strength distribution contains all the information. It is of great interest to study the decay rates under stellar $rp$ conditions using a nuclear structure model that reproduces the strength distributions and half-lives under terrestrial conditions. In the next figures, we show the results obtained for the energy distributions of the GT strength corresponding to the equilibrium shapes for which we obtained minima in the potential energy curves in Fig. \[fig\_eq\]. The GT strength is plotted versus the excitation energy of the daughter nucleus $E_{ex}=E_f$ (MeV). Fig. \[fig\_bgt1\] (\[fig\_bgt2\]) contains the results for the isotopes Ni, Zn, Ge, Se, Kr, and Sr (Zr, Mo, Ru, Pd, Cd, and Sn). We show the energy distributions of the individual GT strengths in the case of the ground state shapes. We also show the continuous distributions for both ground state and possible shape isomers, obtained by folding the strength with 1 MeV width Breit-Wigner functions. The vertical arrows show the $Q_{EC}$ energy, as well as the proton separation energy in the daughter nucleus, both taken from experiment [@audi].
It is worth noticing that in general both deformations produce quite similar GT strength distributions on a global scale. The main exceptions correspond to the comparison between spherical and deformed shapes, where clear differences can be observed. In any case, the small differences among the various shapes at the low energy tails (below the $Q_{EC}$) of the GT strength distributions lead to sizable effects in the $\beta$-decay half-lives. These differences are better appreciated on the logarithmic scale. Experimental information on GT strength distributions is mainly available for $^{72}$Kr [@piqueras], $^{74}$Kr [@poirier], $^{76}$Sr [@nacher], and $^{102,104}$Sn [@karny] isotopes, where $\beta^+$-decay experiments have been performed with total absorption spectroscopy techniques, allowing the extraction of the GT strength in practically the whole $Q$-energy window. In Ref. [@sarri_prc] a comparison between calculations similar to those in this work and the experimental data for Kr and Sr isotopes was carried out. In general, good agreement with experiment was found and this was one of the reasons to extrapolate this type of calculations to stellar environments, as well as to other WP nuclei. Measurements of the decay properties (mainly half-lives) of nuclei in this mass region have been reported in recent years [@karny; @oinonen; @kienle; @faestermann; @wohr; @kankainen; @kavatsyuk; @dossat; @bazin; @stoker; @weber; @elomaa]. The calculation of the half-lives in Eq. (\[t12\]) involves the knowledge of the GT strength distribution and of the $Q_{EC}$ values. In this work experimental values for $Q_{EC}$ are used. They are taken from Ref. [@audi] or from the Jyväskylä mass database [@weber; @jyvaskyla], when available. In Fig. \[fig\_t12\] the measured half-lives are compared to the QRPA results obtained from the equilibrium deformations of the various isotopes. In general, good agreement for the $N=Z$ WP nuclei is obtained.
Also for the more stable $N=Z+2$ isotopes the agreement is very reasonable, except for the heavier Cd and Sn isotopes, where the half-lives are overestimated. The half-lives of the more exotic isotopes are fairly well described by QRPA. Stellar weak decay rates ------------------------ Figures \[fig\_ni\]-\[fig\_sn\] show the decay rates as a function of the temperature $T$. On the left-hand side (a) one can see the decomposition of the total rates into their contributions from the decay of the ground state $0^+_{\rm gs}\rightarrow 1^+$ and from the decay of the excited state $2^+ \rightarrow 1^+,2^+,3^+$ in the parent nucleus. The middle panel (b) contains the decomposition of the rates into their $\beta^+$ and $cEC$ components evaluated at various densities ($\rho Y_e$). On the right-hand side (c) the total rates for various densities are presented. The gray area is the relevant range $T=1-3$ GK for the $rp$ process. Each figure contains the results for three isotopes. The results corresponding to the more exotic ones are displayed on top, whereas the results corresponding to the more stable isotopes appear on the bottom. In the middle we find the intermediate isotopes, which in most cases correspond to the WP nuclei. The results decomposed into their contributions from various parent states (a) show that the decay from the ground state is always dominant at the temperatures within the gray area of interest. The contributions of the decays from excited states increase with $T$, as they become more and more thermally populated, but in general they do not represent significant contributions to the total rates and can be neglected in most cases. Nevertheless, there are a few cases where these contributions should not be ignored, which correspond to those cases where the excitation energy of the $2^+$ excited state is very low.
This is the case of the middle-shell nuclei Kr, Sr, Zr, and Mo, where the contributions of the low-lying excited states compete with those of the ground state already at temperatures in the range of $rp$ process. The effect on the rates of the decay from excited $0^+_2$ states was also considered in Ref. [@sarri_plb] in the case of Kr and Sr isotopes. It was concluded that in general their relative impact is again very small in the total rates at these temperatures. Concerning the competition between $\beta^+$ and $cEC$ rates (b) one should distinguish between different isotopes. Thus, the more exotic isotopes appearing on the top of the figures show a clear dominance of the $\beta^+$ rates over the $cEC$ ones that can be neglected except at very high densities beyond $rp$-process conditions. On the other hand, the opposite is true with respect to the more stable isotopes on the bottom, where the $\beta^+$ rates are completely negligible. The origin of these features can be understood from the behavior of the phase space factors as a function of the available energy $Q_{if}$. As it was mentioned above and discussed in Ref. [@sarri_plb], more exotic nuclei with larger $Q_{if}$ values favor $\beta^+$ because of the larger phase space factors, while the opposite is true for more stable nuclei with smaller $Q_{if}$ values. The interesting cases occur in the middle panels that correspond in most cases to the $N=Z$ WP nuclei. Here, there is a competition between $\beta^+$ and $cEC$ rates that depends on the nucleus, on the temperature, and on the density $\rho Y_e$. One can see that for large enough densities, $cEC$ becomes dominant at any $T$. For low densities, $\beta^+$ rates dominate at low $T$, while $cEC$ dominates at higher $T$, but in general there is a competition that must be analyzed case by case. Finally, the total rates in (c) are a consequence of the competition between $\beta^+$ and $cEC$ rates mentioned above. 
Since the $\beta^+$ decay rate is independent of the density and depends on $T$ only through the contributions from excited parent states, the total rates are practically constant for the more exotic isotopes in the upper figures, only modulated by the small contribution from $cEC$. In the central isotopes the rates result from the competition discussed in (b), and finally in the heavier isotopes (lower figures) we can see that the total rates are practically due to $cEC$, with little contribution from $\beta^+$. Tables containing $\beta^+$, $cEC$, and total decay rates for all the isotopes considered in this work are available in Ref. [@epaps].

Summary and Conclusions {#conclusions}
=======================

In summary, the weak decay rates of waiting-point and neighboring nuclei from Ni up to Sn have been investigated at temperatures and densities where the $rp$ process takes place. The nuclear structure has been described within a microscopic QRPA approach based on a self-consistent Skyrme-Hartree-Fock-BCS mean field that includes deformation. This approach reproduces both the experimental half-lives and the more demanding GT strength distributions measured under terrestrial conditions in this mass region. The relevant ingredients to describe the rates have been analyzed. We have studied the contributions to the decay rates coming from excited states in the parent nucleus, which become populated as $T$ rises. It is found that they start to play a role above $T=1-2$ GK and that for isotopes with low-lying excited states, their contributions can be comparable to those of the ground states. Concerning the contributions from the continuum electron capture rates, it is found that they are enhanced with $T$ and $\rho$. They are already comparable to the $\beta^+$ decay rates at $rp$ conditions for the WP nuclei. For more exotic isotopes the rates are dominated by $\beta^+$ decay, while for more stable isotopes they are dominated by $cEC$.
This work was supported by Ministerio de Ciencia e Innovación (Spain) under Contract No. FIS2008–01301. [00]{} K. Langanke and G. Martínez-Pinedo, Rev. Mod. Phys. [**75**]{}, 819 (2003). A. Aprahamian, K. Langanke, and M. Wiescher, Prog. Part. Nucl. Phys. [**54**]{}, 535 (2005). R. K. Wallace and S. E. Woosley, Ap. J. Suppl. [**45**]{}, 389 (1981). F.-K. Thielemann [*et al.*]{}, Nucl. Phys. [**A 570**]{}, 329c (1994). H. Schatz [*et al.*]{}, Phys. Rep. [**294**]{}, 167 (1998). S. E. Woosley [*et al.*]{}, Ap. J. Suppl. [**151**]{}, 75 (2004). L. Van Wormer [*et al.*]{}, Ap. J. [**432**]{}, 326 (1994). J. Pruet and G.M. Fuller, Ap. J. Suppl. [**149**]{}, 189 (2003). H. Schatz [*et al.*]{}, Phys. Rev. Lett. [**86**]{}, 3471 (2001). G. M. Fuller, W. A. Fowler and M. J. Newman, Ap. J. Suppl. [**42**]{}, 447 (1980); Ap. J. [**252**]{}, 715 (1982); Ap. J. Suppl. [**48**]{}, 279 (1982); Ap. J. [**293**]{}, 1 (1985). S. Wanajo, Ap. J. [**647**]{}, 1323 (2006). C. Fröhlich [*et al.*]{}, Phys. Rev. Lett. [**96**]{}, 142502 (2006). K. Langanke and G. Martínez-Pinedo, Nucl. Phys. [**A673**]{}, 481 (2000). K. Langanke and G. Martínez-Pinedo, At. Data Nucl. Data Tables [**79**]{}, 1 (2001). J.-U. Nabi and H. V. Klapdor-Kleingrothaus, At. Data Nucl. Data Tables [**71**]{}, 149 (1999); [**88**]{}, 237 (2004). N. Paar, G. Colò, E. Khan, and D. Vretenar, Phys. Rev. C [**80**]{}, 055801 (2009). P. Sarriguren, Phys. Lett. [**B 680**]{}, 438 (2009). P. Sarriguren, Phys. Rev. C [**79**]{}, 044315 (2009). Evaluated Nuclear Structure Data File (ENSDF), http://www.nndc.bnl.gov/ensdf/. J. Krumlinde and P. Möller, Nucl. Phys. [**A417**]{}, 419 (1984); P. Möller and J. Randrup, Nucl. Phys. [**A514**]{}, 1 (1990). F. Frisk, I. Hamamoto, and X. Z. Zhang, Phys. Rev. C [**52**]{}, 2468 (1995). P. Möller, J. R. Nix, and K. -L. Kratz, At. Data Nucl. Data Tables [**66**]{}, 131 (1997). M. Hirsch, A. Staudt, K. Muto and H.V. Klapdor-Kleingrothaus, Nucl. Phys. [**A535**]{}, 62 (1991); K. Muto, E. 
Bender, T. Oda, and H.V. Klapdor-Kleingrothaus, Z. Phys. A [**341**]{}, 407 (1992). H. Homma, E. Bender, M. Hirsch, K. Muto, H. V. Klapdor-Kleingrothaus and T. Oda, Phys. Rev. C [**54**]{}, 2972 (1996). I. N. Borzov, Nucl. Phys. [**A 777**]{}, 645 (2006). N. Paar, T. Nikšić, D. Vretenar, and P. Ring, Phys. Rev. C [**69**]{}, 054303 (2004). S. Fracasso and G. Colò, Phys. Rev. C [**76**]{}, 044307 (2007). A. Petrovici, K. W. Schmid, O. Radu, and A. Faessler, Nucl. Phys. [**A799**]{}, 94 (2008); Phys. Rev. C [**78**]{}, 044315 (2008); A. Petrovici, K. W. Schmid, O. Andrei, and A. Faessler, Phys. Rev. C [**80**]{}, 044319 (2009). P. Sarriguren, E. Moya de Guerra, A. Escuderos, and A. C. Carrizo, Nucl. Phys. [**A635**]{}, 55 (1998). P. Sarriguren, E. Moya de Guerra, and A. Escuderos, Nucl. Phys. [**A691**]{}, 631 (2001). P. Sarriguren, E. Moya de Guerra, and A. Escuderos, Phys. Rev. C [**64**]{}, 064306 (2001). E. Chabanat [*et al.*]{}, Nucl. Phys. [**A635**]{}, 231 (1998). D. Vautherin, Phys. Rev. C [**7**]{}, 296 (1973). G. Audi, O. Bersillon, J. Blachot, and A. H. Wapstra, Nucl. Phys. [**A729**]{}, 3 (2003). H. Flocard, P. Quentin, A. K. Kerman, and D. Vautherin, Nucl. Phys. [**A203**]{}, 433 (1973). P. Sarriguren, E. Moya de Guerra, and A. Escuderos, Nucl. Phys. [**A658**]{}, 13 (1999). P. Sarriguren, O. Moreno, R. Álvarez-Rodríguez, and E. Moya de Guerra, Phys. Rev. C [**72**]{}, 054317 (2005). P. Sarriguren, R. Álvarez-Rodríguez, and E. Moya de Guerra, Eur. Phys. J. A [**24**]{}, 193 (2005). P. Sarriguren and J. Pereira, Phys. Rev. C [**81**]{}, 064314 (2010). A. Bohr and B. Mottelson, [*Nuclear Structure*]{}, (Benjamin, New York 1975). M. S. Antony, A. Pape, and J. Britz, At. Data Nucl. Data tables [**66**]{}, 1 (1997). N.B. Gove and M.J. Martin, Nucl. Data Tables [**10**]{}, 205 (1971). W. Nazarewicz, J. Dudek, R. Bengtsson, T. Bengtsson, and I. Ragnarsson, Nucl. Phys. [**A435**]{}, 397 (1985). G. A. Lalazissis and M. M. Sharma, Nucl. Phys. 
[**A 586**]{}, 201 (1995); G. A. Lalazissis, S. Raman, and P. Ring, At. Data Nucl. Data Tables [**71**]{}, 1 (1999). P. Bonche, H. Flocard, P.-H. Heenen, S. J. Krieger, and M. S. Weiss, Nucl. Phys. [**A443**]{}, 39 (1985). M. Bender, P. Bonche and P.-H. Heenen, Phys. Rev. C [**74**]{}, 024312 (2006). M. Yamagami, K. Matsuyanagi, and M. Matsuo, Nucl. Phys. [**A693**]{}, 579 (2001). S. Hilaire and M. Girod, Eur. Phys. J. A [**33**]{}, 237 (2007); M. Girod, J.-P. Delaroche, A. Görgen, and A. Obertelli, Phys. Lett. [**B 676**]{}, 39 (2009). A. Petrovici, K. W. Schmid, and A. Faessler, Nucl. Phys. [**A605**]{}, 290 (1996); [**A665**]{}, 333 (2000). J. L. Wood, E. F. Aganjar, C. de Coster and K. Heyde, Nucl. Phys. [**A651**]{}, 323 (1999). R. B. Piercey [*et al.*]{}, Phys. Rev. Lett. [**47**]{}, 1514 (1981). C. Chandler [*et al.*]{}, Phys. Rev. C [**56**]{}, R2924 (1997). F. Becker [*et al.*]{}, Eur. Phys. J. A [**4**]{}, 103 (1999). S. M. Fischer [*et al.*]{}, Phys. Rev. Lett. [**84**]{}, 4064 (2000). E. Bouchez [*et al.*]{}, Phys. Rev. Lett. [**90**]{}, 082502 (2003). A. Gade [*et al.*]{}, Phys. Rev. Lett. [**95**]{}, 022502 (2005). A. Görgen [*et al.*]{}, Eur. Phys. J. A [**26**]{}, 153 (2005). E. Clément [*et al.*]{}, Phys. Rev. C [**75**]{}, 054313 (2007). P. J. Davies [*et al.*]{}, Phys. Rev. C [**75**]{}, 011302(R) (2007). C. Andreoiu [*et al.*]{}, Phys. Rev. C [**75**]{}, 041301(R) (2007). B. S. Nara Singh [*et al.*]{}, Phys. Rev. C [**75**]{}, 061301(R) (2007). A. M. Hurst [*et al.*]{}, Phys. Rev. Lett. [**98**]{}, 072501 (2007). A. Görgen [*et al.*]{}, Eur. Phys. J. Special Topics [**150**]{}, 117 (2007). J. Ljungvall [*et al.*]{}, Phys. Rev. Lett. [**100**]{}, 102502 (2008). A. Obertelli [*et al.*]{}, Phys. Rev. C [**80**]{}, 031304(R) (2009). I. Piqueras [*et al.*]{}, Eur. Phys. J. A [**16**]{}, 313 (2003). E. Poirier [*et al.*]{}, Phys. Rev. C [**69**]{}, 034307 (2004). E. Nácher [*et al.*]{}, Phys. Rev. Lett. [**92**]{}, 232501 (2004). M. 
Karny [*et al.*]{}, Eur. Phys. J. A [**27**]{}, 129 (2006). M. Oinonen [*et al.*]{}, Phys. Rev. C [**61**]{}, 035801 (2000). P. Kienle [*et al.*]{}, Prog. Part. Nucl. Phys. [**46**]{}, 73 (2001). T. Faestermann [*et al.*]{}, Eur. Phys. J. A [**15**]{}, 185 (2002). A. Wöhr [*et al.*]{}, Nucl. Phys. [**A 742**]{}, 349 (2004). A. Kankainen [*et al.*]{}, Eur. Phys. J. A [**29**]{}, 271 (2006). O. Kavatsyuk [*et al.*]{}, Eur. Phys. J. A [**31**]{}, 319 (2007). C. Dossat [*et al.*]{}, Nucl. Phys. [**A 792**]{}, 18 (2007). D. Bazin [*et al.*]{}, Phys. Rev. Lett. [**101**]{}, 252501 (2008). J. B. Stoker [*et al.*]{}, Phys. Rev. C [**79**]{}, 015803 (2009). C. Weber [*et al.*]{}, Phys. Rev. C [**78**]{}, 054310 (2008). V.-V. Elomaa [*et al.*]{}, Eur. Phys. J. A [**40**]{}, 1 (2009); Phys. Rev. Lett. [**102**]{}, 252501 (2009). http://research.jyu.fi/igisol/JYFLTRAP\_masses/ See supplementary material at \[\] for text files containing weak-decay rates in a grid of densities ($\rho Y_e$) and temperatures ($T$) covering the rp-process conditions.

(Full-width figures: fig\_eq, bgt\_1, bgt\_2, fig\_t12\_z, and the decay-rate figures rates\_ni through rates\_sn.)
---
abstract: 'The multi-component decaying dark matter (DM) scenario is investigated to explain the possible excesses in the positron fraction observed by PAMELA and recently confirmed by AMS-02, and in the total $e^+ +e^-$ flux observed by Fermi-LAT. By performing $\chi^2$ fits, we find that two DM components are already enough to give a reasonable fit of both the AMS-02 and Fermi-LAT data. The best-fit results show that the heavier DM component, with a mass of 1.5 TeV, decays dominantly through the $\mu$-channel, while the lighter one, of 100 GeV, decays mainly through the $\tau$-channel. As a byproduct, the fine structure around 100 GeV observed by AMS-02 and Fermi-LAT can be naturally explained by the drop of the contribution from the lighter DM component. With the model parameters obtained from the fit, we calculate the diffuse $\gamma$-ray emission spectrum in this two-component DM scenario and find that it is consistent with the data measured by Fermi-LAT. We also construct a microscopic particle DM model to naturally realize the two-component DM scenario, and point out an interesting neutrino signal which may be measured by IceCube in the near future.'
author:
- 'Chao-Qiang Geng$^{1,2,3,4}$[^1], Da Huang$^{1,4}$[^2] and Lu-Hsing Tsai$^{1,4}$[^3]'
title: 'Imprint of Multi-component Dark Matter on AMS-02'
---

Introduction
============

The composition of the cosmic rays (CR) can tell us a lot about our Galaxy and our universe. Recently, the AMS-02 collaboration has published the first measurement of the positron fraction $e^+/(e^- + e^+)$ in CR with a high precision, which shows a continuous rise from $5$ up to $350$ GeV [@AMS02] and confirms the general behavior previously measured by CAPRICE [@Boezio:2000zz], HEAT [@DuVernois:2001bb; @Beatty:2004cy], AMS-01 [@Aguilar:2007yf], PAMELA [@PAMELA; @PAMELA2] and Fermi-LAT [@FermiLAT:2011ab].
The observed rise is in stark contrast with the conventional expectation based on secondary-origin positrons, whose fraction decreases monotonically with energy. Furthermore, the total flux spectrum of electrons and positrons measured by ATIC [@Chang:2008aa], PPB-BETS [@PPB], HESS [@HESS1; @HESS2], Fermi-LAT [@Ackermann:2010ij; @FermiLAT] and more recently by AMS-02 is harder than that expected from the conventional astrophysical background, indicating some excesses in the energy range above 10 GeV. All these results imply that there exist some extra exotic $e^{\pm}$ sources in our Galaxy which are unknown to us. In the literature, there have been many possible mechanisms, such as astrophysical sources like pulsars [@pulsar], annihilating dark matter (DM) [@DMindependent; @annihilation; @annihilationAMS; @AnnihilationDecay] and decaying DM [@DMindependent; @decay; @decayAMS; @3bodydecay; @3bodydecayAMS; @Ishiwata:2009vx]. However, it is pointed out in Refs. [@Chen:2009gd; @Feng:2013zca; @2body; @2bodyAMSa; @2bodyAMSb] that there is a tension between the AMS-02 positron fraction and the Fermi-LAT total flux, since the slope of the former decreases by one order of magnitude from 10 to 250 GeV [@AMS02], while the latter is much flatter. In particular, for the simplest scenario with a single type of DM whose decay proceeds mainly through leptonic two-body channels, it is difficult to obtain a good fit of the AMS-02 and Fermi-LAT data simultaneously [@Feng:2013zca; @2bodyAMSb]. In order to reduce this tension, one needs to resort to more complicated models, such as three/four-body decaying/annihilating DM [@3bodydecayAMS], asymmetric decaying DM [@asymmetricDM], dynamical DM [@dynamicalDM], as well as other astrophysical solutions [@pulsar; @Feng:2013zca]. More interestingly, the positron fraction from AMS-02 and the total $e^+ + e^-$ flux from Fermi-LAT and many other experiments show a structure with a “flash damp" or “jerk" around 100 GeV.
Since this fine structure is observed in more than two independent experiments, we think it is reasonable to take it seriously, though it could also be caused by statistical fluctuations. In this paper, we propose a multi-component decaying DM scenario with two-body leptonic decay channels. Such a scenario can reconcile the tension between the AMS-02 and Fermi-LAT data, since the change of the slope in the spectrum is achieved by the different channels of the two DM components. Moreover, the fine structure in the two data sets has a natural explanation: the contribution of the lighter DM drops around 100 GeV. Although a similar scenario has already been considered in Ref. [@multiDM], our present discussion is more general. As mentioned in Ref. [@Cirelli:2012ut], the most stringent constraint on decaying DM models comes from the cosmic diffuse $\gamma$-rays, which were precisely measured by EGRET [@EGRET] and more recently by Fermi-LAT [@FermiLAT_Gamma]. In our present work, the DM contribution to the diffuse $\gamma$-rays arises from the final-state radiation associated with the leptonic DM decays, from the scattering of the resultant electrons/positrons off the interstellar medium (ISM) via bremsstrahlung, and from their scattering off low-energy photons inside and outside our Galaxy via the inverse Compton (IC) process. As a result, with the parameters obtained by fitting the AMS-02 positron fraction and the Fermi-LAT total $e^+ + e^-$ flux, the total diffuse $\gamma$-ray spectrum is completely fixed. We will demonstrate that the predicted diffuse $\gamma$-ray spectrum does not exceed the Fermi-LAT bound, and in fact agrees well with the measured spectrum. The paper is organized as follows. In Sec. \[pst\_fit\], we first perform the $\chi^2$-fitting of the AMS-02 positron fraction and the Fermi-LAT total electron/positron flux with a single-component DM.
We then propose the multi-component DM scenario to fit the data by carefully examining the simplest case with only two components. In Sec. \[gamma\], we predict the total diffuse $\gamma$-ray spectrum and compare it with the Fermi-LAT data. In Sec. \[model\], we build a simple microscopic model to realize the two-component DM scenario. Our conclusions are presented in Sec. \[conclusion\].

Fitting Decaying Dark Matter Models with AMS-02 and Fermi-LAT data {#pst_fit}
===================================================================

Sources and Propagation of Cosmic Rays in the Galaxy
----------------------------------------------------

The propagation of various charged CR particles in our Galaxy is well described by the general diffusion-reacceleration equation, given by [@DiffuseEquation] $$\begin{aligned} \label{diffusionEq} \frac{\partial \psi}{\partial t} &= & Q({\bf x},p) + \nabla \cdot(D_{xx}\nabla\psi-{\bf V}_c\psi)+\frac{\partial}{\partial p}p^2 D_{pp} \frac{\partial}{\partial p}\frac{1}{p^2}\psi-\frac{\partial}{\partial p}\big[ \dot{p}\psi - \frac{p}{3}(\nabla \cdot {\bf V}_c )\psi \big]\nonumber\\ && -\frac{1}{\tau_f}\psi-\frac{1}{\tau_r}\psi,\end{aligned}$$ where $\psi({\bf x}, p, t)$ is the number density of CR particles per unit momentum, $Q({\bf x}, p)$ is the source term, and $D_{xx}$ is the spatial diffusion coefficient, which is parameterized as a power law $D_{xx} = \beta D_0(\rho/\rho_r)^{\delta}$ with $\rho = p/(Ze)$ the rigidity of the cosmic ray, $\rho_r$ the reference rigidity, $\beta = v/c$ the velocity and $\delta$ the power spectral index. The normalization constant $D_0$ and the power index $\delta$ are determined by fitting the experimental values of the secondary-to-primary ratios, such as $\mbox{B/C}$, and the unstable-to-stable ratios of secondary particles, such as ${}^{10}\mbox{Be}/{}^{9}\mbox{Be}$ and ${}^{26}\mbox{Al}/{}^{27}\mbox{Al}$.
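As an illustration, the rigidity scaling of $D_{xx}$ above can be sketched as follows, with the defaults taken from the propagation parameters of Table \[parameters\] and electron kinematics assumed; the function name and unit conventions are ours:

```python
def D_xx(p_mev, Z=1, m_mev=0.511, D0=5.3e28, rho_r=4.0e3, delta=0.33):
    """Spatial diffusion coefficient D_xx = beta * D0 * (rho/rho_r)**delta in cm^2/s.

    p_mev is the particle momentum in MeV/c; rho = p/(Z e) is the rigidity in MV.
    Defaults are the propagation parameters of Table [parameters] for an electron.
    """
    E = (p_mev**2 + m_mev**2) ** 0.5      # total energy in MeV
    beta = p_mev / E                      # v/c
    rho = p_mev / Z                       # rigidity in MV
    return beta * D0 * (rho / rho_r) ** delta
```

At the reference rigidity the coefficient reduces to $\beta D_0$, and it grows monotonically with rigidity for $\delta > 0$.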
The overall convection driven by the stellar wind is characterized by the convection velocity ${\bf V}_c$, and the reacceleration process is described by the diffusion coefficient in momentum space $D_{pp}=4 V_a^2 p^2/(3D_{xx}\delta(4-\delta^2)(4-\delta))$. In Eq. (\[diffusionEq\]), $\dot{p}=d p/d t$ denotes the momentum loss rate, while $\tau_f$ and $\tau_r$ are the time scales for nuclear fragmentation and radioactive decay, respectively. In the usual CR propagation model, the CR diffusion is confined to a Galactic halo, parametrized as a cylinder with half-height $z_h$ and radius $r_h$, while the densities of the CR components vanish at the boundary of the halo. In our computation, we take $z_h=4$ kpc and $r_h=20$ kpc. The source term $Q({\bf x},p)$ for the primary particles is the product of the particle injection spectra $q^{n,e}(\rho)$, which are broken power-law functions of the rigidity $\rho$, and the CR source spatial distribution $f(R,z)$ of the supernova-remnant (SNR) type: $$\begin{aligned} q^{n,e}(\rho) &\propto& \Big(\frac{\rho}{\rho_{br}^{n,e}}\Big)^{-\gamma^{n,e}_1(\gamma^{n,e}_2)},\\ f(R,z) &\propto& \Big( \frac{R}{R_\odot} \Big)^a \exp\Big[ -\frac{b(R-R_\odot)}{R_\odot} \Big]\exp\Big(-\frac{|z|}{z_s}\Big),\end{aligned}$$ respectively, where $\gamma^{n,e}_{1(2)}$ are the spectral indices below (above) the nucleus and electron break rigidities $\rho_{br}^{n,e}$, $R_\odot = 8.5$ kpc is the distance between the Galactic center and our solar system, and $z_s = 0.2$ kpc is the characteristic height of the Galactic disk. Here, we have adopted $a=1.25$ and $b=3.56$ by following Ref. [@Trotta:2010mx]. The collisions of the primary CR particles with the interstellar medium (ISM) produce the secondary particles.
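The broken power-law injection spectrum $q^{n,e}(\rho)$ above can be sketched as follows, with the primary-electron values of Table \[parameters\] as defaults; the normalization is arbitrary and the function name is ours:

```python
def q_inj(rho, rho_br=4.0e3, gamma1=1.54, gamma2=2.6):
    """Broken power-law injection spectrum q(rho), arbitrary normalization.

    Below the break rigidity rho_br (in MV) the spectral index is gamma1,
    above it gamma2; defaults are the primary-electron parameters.
    """
    gamma = gamma1 if rho < rho_br else gamma2
    return (rho / rho_br) ** (-gamma)
```

Since both branches equal unity at $\rho = \rho_{br}$, the spectrum is continuous across the break by construction.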
For our present interest, the secondary positrons and electrons are the final products of the decay chains of the pions and kaons originating from such collisions, which can be calculated along with solving the CR diffusion equations. The primary electrons and secondary electrons/positrons constitute the background of the $e^+ +e^-$ flux. However, in order to explain the AMS-02 and Fermi-LAT results, we need to introduce additional primary source terms $Q^{\rm DM}_{\pm}$ into the positron/electron diffusion equations. In this study, we shall always adopt the isothermal profile as our DM distribution in the Galaxy [@isothermal], given by: $$\begin{aligned} \rho(r)=\rho_0{r_c^2+r_\odot^2 \over r_c^2+r^2}\;,\end{aligned}$$ where $\rho_0=0.43~{\rm GeV}\cdot{\rm cm}^{-3}$, $r_c=2.8~{\rm kpc}$ is the core radius, and $r_\odot=8.5~{\rm kpc}$ is the distance between the Galactic center and our solar system, the same as $R_\odot$ above. The variable $r$ is the distance from the Galactic center to the position of the DM source. The $e^+/e^-$ injection spectra induced by the DM decays are, however, highly model-dependent, so they are introduced in the corresponding subsections below. After the propagation of the CR, taking into account the energy losses of electrons/positrons by ionization, Coulomb interaction, inverse Compton (IC) scattering, bremsstrahlung and synchrotron radiation in the galactic magnetic fields, we can obtain the electron/positron fluxes observed around the earth through the relation $\Phi_{e}=(c/ 4\pi)\psi(E)$. In the present work, we use the numerical package [GALPROP]{} [@GALPROP] to consistently solve the coupled diffusion-reacceleration equations for the various CR components, including the $e^+/e^-$ contribution from the decaying DM sources. For our numerical calculations, we apply the parameter set shown in Table \[parameters\].
  -------------------------------------   -------------------   ----------   --------------------------   ----------------------------   --------------   --------------   ----------------------------   --------------   --------------
  $D_0(\mathrm{cm}^2\mathrm{s}^{-1})$     $\rho_r(\rm{MV})$     $\delta$     $v_A({\rm km\, s}^{-1})$     $\rho_{\rm br}^e(\rm{MV})$     $\gamma_1^e$     $\gamma_2^e$     $\rho_{\rm br}^p(\rm{MV})$     $\gamma_1^n$     $\gamma_2^n$
  $5.3\times10^{28}$                      $4.0\times10^{3}$     0.33         $33.5$                       $4.0\times10^{3}$              1.54             2.6              $11.5\times10^{3}$             1.88             2.39
  -------------------------------------   -------------------   ----------   --------------------------   ----------------------------   --------------   --------------   ----------------------------   --------------   --------------

  : The parameters for the diffuse propagation, the primary electrons, and the primary protons. \[parameters\]

As a result, the total fluxes $\Phi^{\rm (tot)}_{e,p}$ for electrons and positrons can be expressed as $$\begin{aligned} \Phi^{\rm (tot)}_{e}&=&\kappa \Phi^{\rm (primary)}_{e}+\Phi^{\rm (secondary)}_{e} +\Phi^{\rm DM}_{e}\;,\nonumber\\ \Phi^{\rm (tot)}_{p}&=&\Phi^{\rm (secondary)}_{p} +\Phi^{\rm DM}_{p}\;,\end{aligned}$$ where the factor $\kappa$ is inserted to account for the uncertainty in the normalization of the primary electron flux, and is fixed along with the other parameters of the model in the fitting procedure. Finally, the fluxes of the CR particles at the top of the atmosphere (TOA) of the earth are affected by the solar winds and the heliospheric magnetic field.
Here, we use the simple force-field approximation [@Gleeson:1968zza] to account for this solar modulation effect; that is, the measured electron/positron fluxes at the TOA are related to the interstellar ones via: $$\begin{aligned} \Phi^{\rm TOA}_{e/p} (T_{\rm TOA}) = \Big(\frac{2 m_e T_{\rm TOA}+T^2_{\rm TOA}}{2m_e T+T^2}\Big) \Phi^{\rm tot}_{e/p},\end{aligned}$$ where $T_{\rm TOA}=T-\phi_F$ is the kinetic energy of the electrons/positrons at the top of the atmosphere, and numerically we take the potential $\phi_F = 0.55$ GV.

General Discussion of Decaying Dark Matter Scenario {#general}
---------------------------------------------------

In the decaying DM scenario, although the lifetime of each DM component is typically of ${\cal O}(10^{26}{\rm s})$, much longer than the age of the universe $\tau_U \approx 4\times 10^{17}$ s, it is remarkable that such a low decay rate is already enough to provide a sufficient amount of positrons and/or electrons to explain the AMS-02 and Fermi-LAT excesses. The $e^\pm$ source terms $Q^{\rm DM}_{e,p}$ induced by the DM decays can be generally expressed as $$\begin{aligned} Q({\mathbf x},p)^{\rm DM}_{e,p}= \sum_i{\rho_i(\mathbf x)\over \tau_i M_i} \Big( \frac{dN_{e,p}}{dE} \Big)_i \;,\end{aligned}$$ where $M_i$, $\tau_i$ and $\rho_i(\mathbf x)$ denote the mass, lifetime and energy density distribution of the $i$-th DM component in our Galaxy, respectively, and $(dN_{e,p}/ dE)_i$ is the differential electron/positron multiplicity per decay, which depends on the main decay processes of the DM. In the following, we focus on the scenario in which all DM components dominantly decay through the two-body leptonic processes $\chi_i\rightarrow l^\pm Y^\mp$, where $\chi_i$ represents the $i$-th DM particle, $l=e$, $\mu$ or $\tau$, and $Y$ is another heavy charged particle with mass $M_Y$, as illustrated in Fig. \[Fig\_DMDecay\].
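The force-field relation above amounts to an energy shift plus a flux rescaling; a minimal sketch in MeV units (the function name and unit conventions are ours), assuming the energy shift for a singly charged lepton is simply $\phi_F$:

```python
def force_field_toa(T_is_mev, flux_is, phi_mv=550.0, m_e=0.511):
    """Force-field solar modulation of an interstellar e+/e- flux.

    Shifts the interstellar kinetic energy T to T_TOA = T - phi and rescales
    the flux by the factor of the equation above; energies in MeV,
    phi = 0.55 GV = 550 MV as in the text. Returns (T_TOA, flux_TOA).
    """
    T_toa = T_is_mev - phi_mv
    factor = (2.0 * m_e * T_toa + T_toa**2) / (2.0 * m_e * T_is_mev + T_is_mev**2)
    return T_toa, factor * flux_is
```

For energies well above $\phi_F$ the suppression factor approaches unity, which is why the fit is restricted to data above 10 GeV, where modulation effects are small.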
![Illustration for the process of the DM particle $\chi$ decaying into charged leptons $l^\pm$ and heavy charged particles $Y^{\mp}$.[]{data-label="Fig_DMDecay"}](DMdecay.ps){width="8cm"}

In this simple two-body decay scenario, $(dN_{e,p}/ dE)_i$ is fully determined by the kinematical analysis, in which the produced leptons have a definite energy $E_{ci}$ that can be written as a function of $M_i$ and $M_Y$. Thus, when $l=e$, the $e^+/e^-$ energy spectrum is just a delta function: $$\Big({dN^e\over dE}\Big)_i= {1\over E_{ci}}\delta(1-x)\;,$$ where $x=E/E_{ci}$. For the $l=\mu$ and $\tau$ cases, the subsequent decays give one or more positrons/electrons. The normalized $e^{\pm}$ energy spectrum $d{N^{\mu }}/dE$ has the following analytical expression: $$\begin{aligned} \Big({dN^\mu\over dE}\Big)_i&=&{1\over E_{ci}}[3(1-x^2)-{4\over3}(1-x)]\theta(1-x)\;,\end{aligned}$$ with $x=E/E_{ci}$, while $dN^\tau/dE$ can be obtained by simulating the $\tau$ decay with [PYTHIA]{} [@PYTHIA]. For the general situation with all decay channels $l=e,\mu$ and $\tau$ present simultaneously, the total electron/positron energy distribution from the decaying DMs can be normalized as $$\begin{aligned} \label{Norm_Spectrum} \Big({dN_{e,p}\over dE}\Big)_i={1\over2}\Big[\epsilon^e_i\Big({dN^e\over dE}\Big)_i+\epsilon^\mu_i\Big({dN^\mu\over dE}\Big)_i+\epsilon^\tau_i\Big({dN^\tau\over dE}\Big)_i\Big]\;,\end{aligned}$$ where $\epsilon^{e,\mu,\tau}_i$ are the branching ratios of the three leptonic channels of the $i$-th DM, with the relation $\epsilon^{e}_i+\epsilon^{\mu}_i+\epsilon^{\tau}_i=1$, and the factor $1/2$ takes into account that $e^-$ and $e^+$ come from two different channels. This normalization means that the leptonic decay channels dominate over the other ones, realizing the leptophilic scenario favored by the current measurement of the antiproton flux spectrum in CR by PAMELA [@Adriani:2010rc].
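A minimal numerical sketch of the $\mu$-channel spectrum above (the $e$-channel is a delta function and the $\tau$-channel requires a PYTHIA simulation, so both are omitted); the function name is ours:

```python
def dN_mu_dE(E, Ec):
    """mu-channel e± spectrum (1/Ec)[3(1-x^2) - (4/3)(1-x)] * theta(1-x), x = E/Ec.

    E and Ec in the same energy units; the theta function enforces the sharp
    cutoff at the kinematical endpoint E = Ec.
    """
    x = E / Ec
    if not 0.0 <= x < 1.0:
        return 0.0
    return (3.0 * (1.0 - x * x) - (4.0 / 3.0) * (1.0 - x)) / Ec
```

The hard cutoff at $E = E_{ci}$ is what produces the spectral drop exploited later for the fine structure around 100 GeV.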
These branching-ratio parameters will be determined by fitting the $e^\pm$ spectra in the following subsections. Note that in our present setup, we assume that the DM decays give the same amounts of electrons and positrons, in contrast to the asymmetric DM scenario [@asymmetricDM]. Moreover, at first sight our present leptonic decay channels are different from the ${\rm DM}\rightarrow l^+ l^-$ channels usually considered in the literature, but the final $e^\pm$ spectra are essentially identical upon replacing the energy cutoff $E_{ci}$ with half the DM mass, $M_i/2$. Clearly, with a simple rescaling of the obtained DM lifetimes $\tau_i$ and masses $M_i$, our fitting may also be generalized to ${\rm DM}\rightarrow l^+ l^-$.

Fitting Results with the Single-Component Decaying Dark Matter
--------------------------------------------------------------

In this subsection, we concentrate on the simplest case with only one DM component. Within the above general framework with the DM mass $M=3030$ GeV, we have 5 parameters: the primary electron spectrum normalization factor $\kappa$, the energy cutoff $E_c$, the DM lifetime $\tau$ and two independent branching ratios $\epsilon^e$ and $\epsilon^\tau$, which will be determined by fitting the data points of the AMS-02 positron fraction and the Fermi-LAT total $e^\pm$ flux. In addition, $\epsilon^e$ and $\epsilon^\tau$ are subject to the constraint $\epsilon^e + \epsilon^\tau \leq 1$, allowing for a possible contribution from the $\mu$-channel. Note that the Fermi-LAT data show that the $e^\pm$ excess extends up to 1 TeV, which indicates that the DM cutoff $E_c$ should be at least 1 TeV. Since our purpose is to discuss the generic features of the proposed decaying DM scenario, we fix $E_c$ to 1 TeV, 1.3 TeV and 1.5 TeV, respectively, while fitting the other four parameters.
In this work, we take the 42 data points of the positron fraction from AMS-02 [@AMS02] with energies above 10 GeV and the 26 data points of the total flux of electrons and positrons from Fermi-LAT [@FermiLAT]. The restriction to energies above 10 GeV is imposed in order to reduce the effects of the solar modulation. In total, we consider 68 data points in our global fits. For the fitting procedure, we use the simple $\chi^2$-minimization method, in which the $\chi^2$-function is constructed as $$\begin{aligned} \chi^2=\sum_{i=1}^{68}\Big({y^{\rm th}_i-y^{\rm exp}_i\over \sigma_i}\Big)^2\;,\end{aligned}$$ where $y^{\rm th}_i$ are the theoretical predictions for the positron fraction or the total $e^+ + e^-$ flux and $y_i^{\rm exp}$ are the corresponding experimental data points with errors $\sigma_i$. The index $i$ runs over all the data points. The point in the parameter space which gives the minimal $\chi^2$ value is the best-fit point of our DM model.

  ------------   ----------   ----------------   ------------------   -------------------   ------------------------   --------------------   ---------------------------
  $E_c$(GeV)     $\kappa$     $\epsilon^{e}$     $\epsilon^{\mu}$     $\epsilon^{\tau}$     $\tau(10^{26}{\rm s})$     $\chi_{\rm min}^2$     $\chi_{\rm min}^2/d.o.f.$
  1000           0.73         0.09               0                    0.91                  [0.66]{}                   463                    7.35
  1300           0.72         0.04               0                    0.96                  [0.71]{}                   516                    8.19
  1500           0.71         0.02               0                    0.98                  [0.74]{}                   541                    8.46
  ------------   ----------   ----------------   ------------------   -------------------   ------------------------   --------------------   ---------------------------

  : Points of the parameter space for different cutoff values of $E_c$ which lead to the minimal $\chi^2$, where the DM mass is chosen as [$M=3030$ GeV]{}. \[tab\_1DM\]

![(a) Total flux and (b) positron fraction from the DM contributions with the best-fit parameters given in Table \[tab\_1DM\]. []{data-label="Fig_1DM"}](SingleDM.eps){width="16cm"}

The best-fit results are shown in Table \[tab\_1DM\] and Fig. \[Fig\_1DM\] for the three cases with the electron energy cutoff at $E_c=1$ TeV, 1.3 TeV and 1.5 TeV, respectively.
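The $\chi^2$-function above is straightforward to code; a minimal sketch (names ours):

```python
def chi2(y_th, y_exp, sigma):
    """chi^2 = sum_i ((y_th_i - y_exp_i) / sigma_i)^2 over all data points.

    y_th: model predictions, y_exp: measured values, sigma: experimental errors;
    here the sum runs over the 68 AMS-02 + Fermi-LAT points of the text.
    """
    return sum(((t - e) / s) ** 2 for t, e, s in zip(y_th, y_exp, sigma))
```

Because the terms are weighted by $1/\sigma_i^2$, the low-energy AMS-02 points with their very small errors dominate the minimization, as noted in the discussion of the single-component fit.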
From Table \[tab\_1DM\], we find that $\chi_{\rm min}^2/{\rm d.o.f}$ is always larger than 7 and tends to increase with larger $E_c$, suggesting that the single-DM models with the five parameters $\{\kappa, E_c, \tau, \epsilon^e,\epsilon^\tau \}$ cannot provide a reasonable fit to the AMS-02+Fermi-LAT data. This result agrees with the previous studies of single-component decaying DM models [@2bodyAMSa]. It is also interesting to note that for all three cases, the best fits indicate that the DM $\mu$-channel does not contribute to the electron/positron flux, since $\epsilon^{e,\tau}$ always saturate the constraint $\epsilon^e+\epsilon^\tau =1$. From the technical perspective, the failure of the fit can be attributed to the fact that the positron-fraction spectra from all three leptonic DM decay channels are harder than the spectrum measured by AMS-02. Since the AMS-02 data in the low-energy range, around $E \simeq 10$ GeV, have very small errors and thus dominate the value of $\chi^2$, the parameters are, in fact, already fixed by saturating those data points. The resulting $e^\pm$ spectra deviate from the Fermi-LAT and AMS-02 data at high energies, as depicted in Fig. \[Fig\_1DM\]. Therefore, the inability to fit the AMS-02 positron fraction and the Fermi-LAT total $e^+ + e^-$ flux implies that the single-component decaying DM scenario should be extended to a more complicated situation. There are several ways to do this. One interesting idea is to split the whole DM density into multiple components, which will be discussed in great detail in the next subsection.

Fitting with the Two-Component Decaying Dark Matter
---------------------------------------------------

The multiple-component DM scenario is very interesting from a phenomenological perspective [@multiDM; @DoubleDiskDM; @dynamicalDM]. In this subsection, we consider the implications of the multiple-component DM for the indirect DM search.
In particular, we shall show that two DM components, denoted by ${\rm DM}_{L(H)}$ for the lighter (heavier) one, are enough to accommodate the AMS-02 positron fraction and the Fermi-LAT total positron/electron flux simultaneously. In addition, if the decay of ${\rm DM}_L$ to $e^\pm$ terminates at around 100 GeV, the cutoff energy $E_{cL}$ manifests itself as the fine structure of “jerk" or “flash damp" [@multiDM] in the positron fraction and total $e^+ + e^-$ flux spectra, as implied by AMS-02, Fermi-LAT, and many others. For simplicity, we shall assume that each of the two components carries half of the total energy density of DM in the Galaxy and in the whole universe. We also assume that the three two-body charged leptonic decay channels are the dominant decay processes for both DM components, as specified in Sec. \[general\]. Consequently, the extra electron/positron source term due to DM decays is modified to: $$\begin{aligned} Q({\mathbf x},p)^{\rm DM}_{e,p}={\rho({\mathbf x})\over 2}\Big[{1\over\tau_L M_L}\Big({\frac{dN_{e,p}}{dE}}\Big)_L +{1\over\tau_H M_H}\Big({dN_{e,p}\over dE}\Big)_H\Big]\;,\end{aligned}$$ where the subscripts $L$ and $H$ represent the quantities corresponding to $\mathrm{DM}_L$ and $\mathrm{DM}_H$, with $M_{L,H}=416$ and $3030$ GeV chosen, respectively. In order to make the total flux excess cover the whole Fermi-LAT energy range, the energy cutoff for ${\rm DM}_H$ is set to $E_{cH}=1500~\mathrm{GeV}$. The fine structure around 100 GeV shown in the AMS-02 data determines $E_{cL}=100~{\rm GeV}$. The normalized total electron/positron differential multiplicity $\Big({dN_{e,p}\over dE}\Big)_{H(L)}$ for each DM is defined in Eq. (\[Norm\_Spectrum\]). Hence, in the present two-component DM model, we are left with 7 parameters to be fixed by the $\chi^2$ fitting: $\tau_i$, $\epsilon^e_i$ and $\epsilon^\tau_i$ for each DM, together with the primary electron normalization uncertainty $\kappa$.
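Putting the isothermal profile and the two-component source term above together, a minimal sketch (units schematic; the function names and the trivial test spectra are ours):

```python
def rho_iso(r_kpc, rho0=0.43, rc=2.8, r_sun=8.5):
    """Isothermal DM profile of the text, rho0*(rc^2+r_sun^2)/(rc^2+r^2) in GeV/cm^3,
    which reduces to rho0 at the solar position r = r_sun."""
    return rho0 * (rc**2 + r_sun**2) / (rc**2 + r_kpc**2)

def Q_dm_two(r_kpc, E, components):
    """Two-component decaying-DM e± source term of the equation above:
    Q = (rho/2) * sum_i (1/(tau_i M_i)) * (dN/dE)_i.

    components: iterable of (tau_i, M_i, dNdE_i) with dNdE_i a callable of E;
    the 1/2 reflects each component carrying half the total DM density.
    """
    return 0.5 * rho_iso(r_kpc) * sum(dNdE(E) / (tau * M)
                                      for tau, M, dNdE in components)
```

In practice each `dNdE` callable would be one of the normalized spectra of Eq. (\[Norm\_Spectrum\]), and the resulting source term is fed into the diffusion equation solved with [GALPROP]{}.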
The data selection and fitting procedure are the same as in the single-component DM scenario. After some tentative fits, we find that the minimum of $\chi^2$ is obtained when $\epsilon^{e}_H=\epsilon^{\tau}_H=0$ and $\epsilon^e_L+\epsilon^{\tau}_L=1$, which implies $\epsilon^\mu_L=0$. Thus, in order to enhance the accuracy and stability of the fit, we turn off these three channels by requiring $\epsilon^e_H=\epsilon^\tau_H=0$ and $\epsilon^\tau_L=1-\epsilon^e_L$, and refit the remaining four parameters. The value of $\chi_{\rm min}^2$ and its corresponding parameters are given in Table \[tab\_2DM\].

  $\kappa$   $\epsilon^{e}_H$   $\epsilon^{\mu}_H$   $\epsilon^{\tau}_H$   $\tau_H(10^{26}{\rm s})$   $\epsilon^{e}_L$   $\epsilon^{\mu}_L$   $\epsilon^{\tau}_L$   $\tau_L(10^{26}{\rm s})$   $\chi_{\rm min}^2$   $\chi_{\rm min}^2/d.o.f.$
  ---------- ------------------ -------------------- --------------------- -------------------------- ------------------ -------------------- --------------------- -------------------------- -------------------- ---------------------------
  0.844      0                  1                    0                     [0.76]{}                   0.018              0                    0.982                 [0.82]{}                   62.3                 1.06

  : The point in the parameter space which gives the minimal value of $\chi^2$, with the DM masses and cutoff energies taken as [$M_{L,H}=(416,3030)~{\rm GeV}$]{} and $E_{cL,H}= (100,1500)~{\rm GeV}$, respectively.

\[tab\_2DM\]

The minimum of $\chi^2/{\rm d.o.f}$ is only 1.06, indicating a good fit. The best-fit results tell us that ${\rm DM}_H$ decays only through the muon channel, while the lighter one decays mainly through the electron and tau channels, with the latter being dominant. The determination of the flavor dependence of the DM decay channels displays the power of the indirect DM search method. The predicted positron fraction and total $e^+ + e^-$ flux based on the best-fit parameters are depicted in Fig. \[Fig\_2DM\]. ![(a) Total flux and (b) positron fraction from the DM contributions with the parameters given in Table \[tab\_2DM\].
[]{data-label="Fig_2DM"}](BiDMdecayingK.eps){width="16cm"} The fine structure around 100 GeV is evident in the positron fraction and less significant in the total $e^+ + e^-$ flux, both of which agree well with the experimental data of AMS-02 and Fermi-LAT. If this fine structure persists and becomes clearer as more data are accumulated by AMS-02, Fermi-LAT and other experiments in the near future, it would be an important support for the multi-component DM scenario. Finally, we would like to discuss how our fitting results change when the data sets are changed. More recently, the AMS-02 collaboration has shown its measurement of the total $e^+ + e^-$ flux. If we use the AMS-02 data on the total flux instead of the Fermi-LAT ones, we have checked that the present setup gives essentially the same quality of fit. The main difference is that the ratio of the primary electron source is increased and the lifetime of the lighter DM is reduced, reflecting the fact that more $e^+/e^-$ are needed at low energies, as shown in the AMS-02 data. Another issue is related to our data-taking criterion that we only adopt data points with energies above 10 GeV. Since the error bars at low energy are small, it is expected that the data just above this energy cutoff carry the most important statistical power, so that slightly varying this cutoff could alter the final fitting result. However, from the $\chi^2$ fitting, we find that raising the cutoff to 20 GeV does not lead to a large effect, which may be related to the fact that the precision of the AMS-02 positron fraction data in the whole energy range is much higher than that of the Fermi-LAT total $e^+ + e^-$ flux and thus still dominates the fitting.

Diffuse Gamma Ray from Dark Matter Decay {#gamma}
========================================

As discussed in Refs.
[@Cirelli:2012ut; @Papucci:2009gd; @Beacom:2004pe; @Gamma_DM; @Gamma_extraDMIC; @Gamma_extra; @Ibarra:2007wg; @Ishiwata:2009vx], the positrons/electrons from DM decays or annihilations are always accompanied by the emission of high energy photons, which contribute to the diffuse $\gamma$-ray background. In order for the decay of a typical DM candidate to account for the positrons and electrons observed by AMS-02 and Fermi-LAT, the associated flux of high-energy $\gamma$-rays has the potential to exceed the diffuse $\gamma$-ray data of Fermi-LAT [@FermiLAT_Gamma] and EGRET [@EGRET]. As pointed out in Refs. [@Gamma_DM; @Gamma_extraDMIC; @Gamma_extra], and especially Refs. [@Cirelli:2012ut; @Papucci:2009gd], a large range of the parameter space of decaying DM models that try to explain the PAMELA and Fermi-LAT positron/electron excesses through the usual decay channels has already been excluded. Thus, it is necessary to consider whether our multi-component decaying DM scenario, in particular the two-component DM case discussed in the previous section, is still viable under the diffuse $\gamma$-ray constraints. In this section, we compute the total diffuse $\gamma$-ray flux by taking into account all possible $\gamma$-ray sources, including the usual astrophysical diffuse background $\gamma$-ray radiation inside and outside our Galaxy as well as the DM contributions. We will compare our result with the Fermi-LAT inclusive continuum photon spectrum [@FermiLAT_Gamma], which was measured within the energy range $4.8~{\rm GeV}<E_\gamma<264~{\rm GeV}$ for the high-latitude sky with $|b|>10^\circ$ plus the Galactic center (GC) with $|b|<10^\circ$, $l<10^\circ$ and $l>350^\circ$. The conventional astrophysical background can be further divided into two parts: inside and outside the Galaxy.
For the background $\gamma$-radiation inside the Galaxy, we have included three sources: pion decay, inverse Compton (IC) scattering, and bremsstrahlung, all of which originate from collisions of CR particles with the galactic interstellar medium (ISM) and low-energy photons during the CR diffusion process. We use the [GALPROP]{} code to numerically calculate the spectra of these three components of high energy photons, with the same numerical values of the CR diffusion parameters as used when fitting the two-component DM model to the AMS-02 and Fermi-LAT excesses, as well as the primary electron normalization $\kappa=0.844$ obtained by the $\chi^2$-fitting in the last section. The extragalactic $\gamma$-ray background (EGB) is usually considered to be the superposition of contributions from unresolved extragalactic sources, such as active galactic nuclei (AGN). In the present work, we adopt the following parameterization: $$\begin{aligned} { E^2\Phi_\gamma(E)=5.18\times 10^{-7}E^{-0.499}( {\rm GeV} {\rm cm}^{-2} {\rm sr}^{-1} {\rm s}^{-1})\;,}\label{Eq_ExtraSource}\end{aligned}$$ which is obtained by fitting the low energy spectrum of the EGRET $\gamma$-ray data [@EGRET; @Ishiwata:2009vx]. The total sum of these two backgrounds is shown as the black dashed line in Fig. \[Fig\_GammaRay\]. The DM decays provide several new sources of $\gamma$-ray flux. Inside the Galaxy, the extra high-energy electrons/positrons produced as decay products of the two DM components can induce $\gamma$-rays through collisions with the ISM via bremsstrahlung and through scattering off the starlight, IR photons and the Cosmic Microwave Background (CMB) via IC, both of which can also be computed using the GALPROP package. Furthermore, we should also consider the $\gamma$-rays coming from the associated DM prompt decays.
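The EGB parameterization of Eq. (\[Eq\_ExtraSource\]) is a single power law and trivial to evaluate; a one-function sketch with $E_\gamma$ in GeV:

```python
def egb_e2flux(E_gamma):
    """Extragalactic gamma-ray background: E^2 * Phi_gamma in
    GeV cm^-2 sr^-1 s^-1, for E_gamma in GeV."""
    return 5.18e-7 * E_gamma ** -0.499
```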
Since the decays of our two DM components involve the $e$, $\mu$ and $\tau$ channels, $\gamma$-rays can be emitted via internal bremsstrahlung [@Beacom:2004pe] or final state radiation (FSR) [@Gamma_DM; @FermiLAT_Gamma] from the external lepton legs. For the $\mu$-channel, we also include the effects of radiative muon decays [@Gamma_DM], [*e*.g.]{}, $\mu^{+} \to e^{+} \bar{\nu}_\mu \nu_e\gamma$ and $\mu^{-} \to e^{-} \nu_\mu \bar{\nu}_e\gamma$. For the $\tau$-channel, the produced $\tau$'s decay into many $\pi^0$'s, which further decay into two photons, with the total spectrum parameterized as [@Gamma_DM; @Fornengo:2004kj]: $$\begin{aligned} \frac{d N_\gamma}{d y} = y^{-1.31} (6.94y-4.93y^2-0.51y^3)e^{-4.53y},\end{aligned}$$ where $y=E_\gamma/M_{H,L}$ for the two DM components. The three lepton FSRs, the radiative muon decays and the pion decays from $\tau$ will collectively be called the prompt decay contribution below. The relative sizes of these contributions are completely determined by the fitted $\epsilon^{e,\mu,\tau}_{H,L}$ listed in Table \[tab\_2DM\]. Outside the Galaxy, the DM-induced $\gamma$-rays are mainly generated by the prompt decays and by IC scattering of the electrons and positrons from the DM decays off the CMB photons.
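The parameterized photon spectrum from the $\tau$-induced $\pi^0$'s above can be coded directly; a short sketch of just the quoted functional form, with $y=E_\gamma/M_{\rm DM}$:

```python
import math

def dNgamma_dy(y):
    """Photon spectrum from pi^0's produced in tau decays, as a
    function of y = E_gamma / M_DM, valid for 0 < y <= 1."""
    return (y ** -1.31
            * (6.94 * y - 4.93 * y ** 2 - 0.51 * y ** 3)
            * math.exp(-4.53 * y))
```

The spectrum rises steeply toward small $y$, which is why the $\tau$-channel contributes mostly at photon energies well below the DM mass.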
Unlike for the prompt decays inside the Galaxy, here we need to consider the $\gamma$-ray redshift effects caused by the cosmic expansion, which are encoded in the following formula [@Gamma_extra]: $$\begin{aligned} \Big[E_\gamma^2 \frac{d \Phi_\gamma}{dE_\gamma}\Big]_{\rm eg} &=& E_\gamma^2\cdot \frac{c\, \Omega_{\rm DM}\rho_c}{4\pi H_0 \Omega_M^{1/2}}\int^\infty_1 dy \frac{y^{-3/2}}{\sqrt{1+\Omega_\Lambda/\Omega_M y^{-3}}}\cdot \nonumber\\ && \frac{1}{2}\Big[\frac{1}{\tau_H M_H}\Big(\frac{d N_\gamma}{d(y E_\gamma)}\Big)_H+\frac{1}{\tau_L M_L} \Big(\frac{d N_\gamma}{d(y E_\gamma)}\Big)_L\Big]\, ,\end{aligned}$$ where $y=1+z$ with $z$ being the redshift, $c$ is the speed of light, $H_0$ represents the present value of the Hubble parameter, $\rho_c$ is the critical density, and $(\Omega_{\rm DM},\Omega_M,\Omega_\Lambda)= (0.11889 h^{-2}, 0.14105 h^{-2},0.6914)$ with $h= 0.6777$ [@Planck] are the density parameters of DM, total matter, and cosmological constant, respectively. The lifetimes $\tau_{H,L}$, masses $M_{H, L}$ and flavor weights $\epsilon^{e,\mu,\tau}_{H,L}$ entering the $\gamma$-ray injection spectra of the two DM components are obtained from the $\chi^2$-fits in the previous section and are listed in Table \[tab\_2DM\]. As for the computation of the extragalactic IC scattering contribution, we follow the treatment of Ref. [@Gamma_extraDMIC]. The final results for the various galactic and extragalactic contributions, as well as the total $\gamma$-ray spectrum, are presented in Fig. \[Fig\_GammaRay\]. ![Photon fluxes as a function of $E_\gamma$, where the black solid line is the total contribution from ordinary sources and DM, while the other solid lines are related to different parts of the DM contributions, and the dashed lines correspond to ordinary sources inside or outside our Galaxy. Note that the gray band represents the total error, including all the systematic and statistical ones. []{data-label="Fig_GammaRay"}](GammaRay.eps){width="12cm"} In Fig.
\[Fig\_GammaRay\], the different $\gamma$-ray components with their different origins are evident. In the energy range $E_\gamma\lesssim 0.1~{\rm GeV}$, only the isotropic extragalactic source in Eq. (\[Eq\_ExtraSource\]) contributes. Beyond that range, the ordinary galactic background (pion decay + inverse Compton scattering + bremsstrahlung) begins to dominate the total $\gamma$-ray spectrum. The DM contributions become prominent when $E_\gamma \gtrsim 1$ GeV, extending as high as 1500 GeV with a sharp cutoff. On top of that, the drop-out of the lighter DM component is also visible as the fine structure around 100 GeV. Remarkably, the inclusive continuum $\gamma$-ray spectrum measured by Fermi-LAT in Ref. [@FermiLAT_Gamma] also shows this fine structure in the expected energy range. If this fine structure survives and becomes clearer with more accumulated data in the near future, it would be strong support for the existence of the 100 GeV DM component. Moreover, our model predicts a sharp falloff above 1000 GeV, which is clear evidence for the second DM component and could be observed by Fermi-LAT as well as future experiments such as the Cherenkov Telescope Array [@CTA; @CTAweb]. Finally, it is observed from Fig. \[Fig\_GammaRay\] that the predicted $\gamma$-ray spectrum in our model is consistent with the Fermi-LAT measurement over the whole measured energy range. To make this observation more precise, we calculate the usual $\chi^2$ with the 82 Fermi-LAT data points and find $\chi^2=79.1$, i.e., $\chi^2/d.o.f=0.965$, which illustrates that the predictions of our two-component DM model agree well with the actual measurement. This conclusion seems to contradict the stringent DM lifetime bounds obtained in Refs. [@Cirelli:2012ut; @Papucci:2009gd]. Here, we want to give some remarks on the possible reasons for the differences between our present result and others. The main difference of ours from Ref.
[@Cirelli:2012ut] lies in the interpretation of the composition of the Fermi-LAT data: Ref. [@Cirelli:2012ut] assumes that the spectrum can be obtained from the conventional astrophysical sources and fitted with a simple power-law function. The possible contribution from DM can then only be compared with the residue left after subtracting this background function from the data points, resulting in a very stringent DM lifetime bound. In our treatment here, however, we try to calculate every $\gamma$-ray contribution precisely. Except for the EGB part from the analysis of the first-year Fermi-LAT data, the other contributions are actually already determined once we specify our CR diffusion-reacceleration parameters listed in Table \[parameters\] and fix the model parameters in Table \[tab\_2DM\] by fitting the AMS-02 and Fermi-LAT data. [ As for the constraint from Ref. [@Papucci:2009gd], we need to be more careful, since the authors, Papucci and Strumia (PS), did not assume any astrophysical background at all in their derivation. From Fig. 8 in Ref. [@Papucci:2009gd], we can read off the lower DM lifetime bounds $\tau^{PS}_\mu = 5\times 10^{25}$s for $\mbox{DM} \to \mu^+ \mu^-$ with $m_{\rm DM}= 3000\,$GeV and $\tau_{\tau}^{PS}=1.5\times 10^{26}\,$s for $\mbox{DM}\to \tau^+\tau^-$ with $m_{\rm DM}=200\,$GeV, corresponding to the heavy and light DM dominant decay channels with the energy cutoffs $E_{cH}=1500\,$GeV and $E_{cL}=100\,$GeV, respectively. Note that each lepton in the lepton-pair decay channels carries one half of the DM energy. Nevertheless, there should be a factor $1/4 = 1/2\times 1/2$ suppression in our two-component DM case, where one factor of 1/2 accounts for the half density of each DM component and the other for the single $e^+$ or $e^-$ generated in each DM decay. Also, an extra suppression from the DM mass needs to be considered.
By taking all of these suppressions into account, we can translate the DM lifetime bounds of Ref. [@Papucci:2009gd] into our case through the following formula, $$\tau_l = \frac{M^{PS}_{\rm DM} \tau^{PS}_l}{4 M_i}\,.$$ For example, for the light DM case, the corresponding lifetime bound for the $\tau$-channel in our case is only $\tau_{\tau}= 2\times 10^{25}\,$s with the light DM mass $M_L=416\,$GeV. The same argument can also be applied to the heavy DM, giving the lifetime bound $\tau_\mu = 1.24\times 10^{25}\,$s. Obviously, these two bounds are much lower than the two best-fit DM lifetimes of $\tau_L = 8.2\times 10^{25}\,$s and $\tau_H=7.6\times 10^{25}\,$s listed in Table \[tab\_2DM\].]{} In sum, our calculation is completely consistent with the fit of the AMS-02 positron fraction and the Fermi-LAT $e^\pm$ flux, thus representing the generic prediction of the diffuse $\gamma$-ray emission for the present multi-component decaying DM model.

Microscopic Model Realization of Multi-Component Dark Matter Scenario {#model}
=====================================================================

The previous phenomenological analysis has already shown that the two-component DM scenario is a promising way to solve the $e^+/e^-$ anomalies of the AMS-02 and Fermi-LAT data simultaneously, while satisfying the diffuse $\gamma$-ray constraint from Fermi-LAT. On the other hand, a microscopic point of view would help us understand the underlying dynamics more deeply. In this section, we construct a particle physics model to realize this two-component DM scenario, which is a simple two-component extension of the one in Ref. [@Chen:2009gd]. The starting point is to introduce two $SU(2)_L$ singlet fermions $N_{R\,1,2}$ with hypercharge $Y=0$ and two $SU(2)_L$ doublet scalars $\eta$ and $\zeta$ with the same hypercharge $Y=-1$. Two $Z_2$ symmetries are imposed on these newly introduced particles, with the corresponding charges presented in Table \[tab\_z2list\].
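The rescaling formula above is easy to check numerically; the sketch below just reproduces the arithmetic with the numbers quoted in the text:

```python
def rescaled_bound(tau_ps, m_ps, m_ours):
    """Translate a single-component Papucci-Strumia lifetime bound
    into the two-component case: tau = M_DM^PS * tau^PS / (4 * M_i)."""
    return m_ps * tau_ps / (4.0 * m_ours)

# Light DM, tau-channel: PS bound 1.5e26 s at m_DM = 200 GeV, our M_L = 416 GeV
tau_tau_bound = rescaled_bound(1.5e26, 200.0, 416.0)   # ~1.8e25 s (2e25 s in the text)
# Heavy DM, mu-channel: PS bound 5e25 s at m_DM = 3000 GeV, our M_H = 3030 GeV
tau_mu_bound = rescaled_bound(5.0e25, 3000.0, 3030.0)  # ~1.24e25 s
```

Both rescaled bounds sit below the best-fit lifetimes $\tau_L = 8.2\times 10^{25}\,$s and $\tau_H=7.6\times 10^{25}\,$s, which is why the model survives the PS constraint.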
             $N_{R\, 1}$   $N_{R\, 2}$   $\eta$   $\zeta$
  ---------- ------------- ------------- -------- ---------
  $Z_{2}$    $-$           $-$           $-$      $+$
  $Z'_{2}$   $+$           $+$           $+$      $-$

  : Quantum numbers of the discrete symmetries for the new particles.

\[tab\_z2list\]

Note that $N_{R\, 1,2}$ are the two DM candidates; this is achieved by requiring that the tree-level mass of $\eta$ be larger than those of $N_{R\,1,2}$, so that the decays of $N_{R\,1,2}$ through the $Z_2\times Z_2^\prime$-allowed Yukawa interactions $\bar L_{L\,i} N_{R\, 1,2} \eta$ are kinematically forbidden, where the subscript $i=1,2,3$ stands for the three generations. However, in order for the two DM components to decay, we further explicitly break $Z_2\times Z_2^\prime$ by adding the soft-breaking term $\mu^2 \zeta^\dagger \eta$ with the characteristic energy scale $\mu$, and by demanding that the mass of the doublet $\zeta$ be smaller than the two DM masses $M_{1,2}$. Thus, the Lagrangian relevant to the two decaying DM components $N_{R\, 1,2}$ is given as follows: $$\begin{aligned} L=-\bar L_{Li} (Y_{1i}N_{R1}+ Y_{2i}N_{R2}) \eta-{M_1\over2}\overline{ (N_{R1})^c}N_{R1} -{M_2\over2}\overline{ (N_{R2})^c}N_{R2}-\mu^2 \zeta^\dagger \eta- V\;,\end{aligned}$$ where the scalar potential $V$ includes all other possible interactions involving $\eta$ and $\zeta$. The main decay channels of the DMs are represented in Fig. \[Fig\_DMDecay1\], which can be viewed as the resolution of the blob in Fig. \[Fig\_DMDecay\].
![Illustration for the process of the DM particle $N_R$ decaying into charged leptons $l^\pm$ and heavy charged particles $\zeta^{\mp}$ through the mixing with $\eta^{\mp}$.[]{data-label="Fig_DMDecay1"}](DMdecay1.ps){width="8cm"} With this setup, DM lifetimes of ${\cal O}(10^{26}{\rm s})$ can be naturally obtained if [$Y_{1(2)i}\sim{\cal O}(10^{-6})$, $\mu\sim{\cal O}(1 ~{\rm GeV})$ and $M_\eta\sim {\cal O}(10^{10} ~{\rm GeV})$.]{} An interesting prediction of this model is that the DM decays yield the same amount of neutrino flux as charged leptons. This can easily be seen from Fig. \[Fig\_DMDecay1\] by making an $SU(2)_L$ rotation of the decay products, $\zeta^\pm$ and $l^\mp$, into their $SU(2)_L$ neutral partners, $\zeta^0$ and $\nu_{e,\mu,\tau}$. Currently, the search for neutrinos from annihilating and decaying DM is performed at the South Pole by the IceCube Collaboration [@ICECUBE1; @ICECUBE2]. The analyses of the neutrino data from the Galactic halo [@ICECUBE1] and the Galactic center [@ICECUBE2] already give quite tight bounds on the annihilation cross sections for the annihilating DM explanation of the positron/electron excesses, but for the decaying DM scenario the bound on the DM lifetime is rather weak, only ${\cal O}(10^{22}{\rm s})$ to ${\cal O}(10^{24}{\rm s})$ for the various decay channels, in particular the typical leptonic ones. Our two-component model predicts lifetimes of the two DM components of a few $\times\, 10^{25}\,{\rm s}$, which could potentially be probed by near-future IceCube measurements.

Conclusions {#conclusion}
===========

Both the precise measurements of the positron fraction by AMS-02 and of the total $e^+ + e^-$ flux by Fermi-LAT clearly show a rise above 10 GeV, which cannot be explained by the traditional astrophysical sources. Decaying DM could be one appealing origin for these extra $e^\pm$.
However, for the simplest scenario with only one DM component decaying mainly through two-body leptonic channels, it is not easy to accommodate both experiments. In the present work, we have investigated the multi-component DM scenario as one possible solution to this problem, in which at least two DM components possess their own two-body leptonic decays. As a byproduct, the fine structure around 100 GeV observed in the data of both the AMS-02 positron fraction and the Fermi-LAT total $e^+ + e^-$ flux has the simple explanation that the lighter DM contribution drops out there. By performing a simple $\chi^2$-fitting of the two spectra, we have found that the heavier DM component, with an energy cutoff larger than 1 TeV, decays dominantly through the $\mu$-channel, while the lighter one, with a cutoff at exactly 100 GeV, decays mainly via the $\tau$-channel with a minor contribution from the direct $e$-channel. With the fitted parameters, we have predicted the spectrum of the diffuse $\gamma$-ray emission in our two-component DM model. By comparing this spectrum with the one measured by Fermi-LAT [@FermiLAT_Gamma], we have demonstrated that it is consistent with the Fermi-LAT data points, showing that our two-component DM model is still allowed by the current Fermi-LAT $\gamma$-ray measurement. We note that the Fermi-LAT constraint is not as stringent as claimed in the previous study [@Cirelli:2012ut]. Finally, we have built a microscopic particle model to realize the above two-component decaying DM scenario. With an appropriate choice of the particle masses, mixings and couplings, it is quite natural to obtain lifetimes of the two DM components of ${\cal O}(10^{26}{\rm s})$. Our scenario also predicts an equal amount of neutrino flux, which may be observed in future IceCube experiments. We are grateful to Dr. Y. F. Zhou and Dr. P. Y. Tseng for useful discussions.
The work was supported in part by National Center for Theoretical Science, National Science Council (NSC-101-2112-M-007-006-MY3) and National Tsing Hua University (Grant Nos. 102N1087E1 and 102N2725E1). [01]{} M. Aguilar [*et al.*]{} \[AMS Collaboration\], Phys. Rev. Lett.  [**110**]{}, 141102 (2013). M. Boezio [*et al.*]{}, Astrophys. J.  [**532**]{}, 653 (2000). M. A. DuVernois [*et al.*]{}, Astrophys. J.  [**559**]{}, 296 (2001). J. J. Beatty [*et al.*]{}, Phys. Rev. Lett.  [**93**]{}, 241102 (2004) \[astro-ph/0412230\]. M. Aguilar [*et al.*]{} \[AMS-01 Collaboration\], Phys. Lett. B [**646**]{}, 145 (2007) \[astro-ph/0703154 \[ASTRO-PH\]\]. O. Adriani [*et al.*]{} \[PAMELA Collaboration\], Nature [**458**]{}, 607 (2009) \[arXiv:0810.4995 \[astro-ph\]\]. O. Adriani [*et al.*]{} \[ PAMELA Collaboration\], arXiv:1308.0133 \[astro-ph.HE\]. M. Ackermann [*et al.*]{} \[Fermi LAT Collaboration\], Phys. Rev. Lett.  [**108**]{}, 011103 (2012) \[arXiv:1109.0521 \[astro-ph.HE\]\]. J. Chang [*et al.*]{}, Nature [**456**]{}, 362 (2008). S. Torii [*et al.*]{} \[PPB-BETS Collaboration\], arXiv:0809.0760 \[astro-ph\]. F. Aharonian [*et al.*]{} \[H.E.S.S. Collaboration\], Phys. Rev. Lett.  [**101**]{}, 261104 (2008) \[arXiv:0811.3894 \[astro-ph\]\]. F. Aharonian [*et al.*]{} \[H.E.S.S. Collaboration\], Astron. Astrophys.  [**508**]{}, 561 (2009) \[arXiv:0905.0105 \[astro-ph.HE\]\]. M. Ackermann [*et al.*]{} \[Fermi LAT Collaboration\], Phys. Rev. D [**82**]{}, 092004 (2010) \[arXiv:1008.3999 \[astro-ph.HE\]\]. A. A. Abdo [*et al.*]{} \[Fermi LAT Collaboration\], Phys. Rev. Lett.  [**102**]{}, 181101 (2009) \[arXiv:0905.0025 \[astro-ph.HE\]\]. S. Profumo, Central Eur. J. Phys.  [**10**]{}, 1 (2011) \[arXiv:0812.4457 \[astro-ph\]\]. T. Linden and S. Profumo, Astrophys. J.  [**772**]{}, 18 (2013) \[arXiv:1304.1791 \[astro-ph.HE\]\]; P. F. Yin, Z. H. Yu, Q. Yuan and X. J. Bi, Phys. Rev. D [**88**]{}, 023001 (2013) \[arXiv:1304.4128 \[astro-ph.HE\]\]; D. Gaggero, L. Maccione, G. 
Di Bernardo, C. Evoli and D. Grasso, Phys.  Rev.  Lett.  111, [**021102**]{} (2013) \[arXiv:1304.6718 \[astro-ph.HE\]\]. K. Ishiwata, S. Matsumoto and T. Moroi, Phys. Lett. B [**675**]{}, 446 (2009) \[arXiv:0811.0250 \[hep-ph\]\]; L. Bergstrom, T. Bringmann, I. Cholis, D. Hooper and C. Weniger, Phys. Rev. Lett.  [**111**]{}, 171101 (2013) \[arXiv:1306.3983 \[astro-ph.HE\]\]; D. Gaggero and L. Maccione, arXiv:1307.0271 \[astro-ph.HE\]. L. Bergstrom, T. Bringmann and J. Edsjo, Phys. Rev. D [**78**]{}, 103520 (2008) \[arXiv:0808.3725 \[astro-ph\]\]; M. Cirelli and A. Strumia, PoS IDM [**2008**]{}, 089 (2008) \[arXiv:0808.3867 \[astro-ph\]\]; D. Hooper, A. Stebbins and K. M. Zurek, Phys. Rev. D [**79**]{}, 103513 (2009) \[arXiv:0812.3202 \[hep-ph\]\]; E. Nezri, M. H. G. Tytgat and G. Vertongen, JCAP [**0904**]{}, 014 (2009) \[arXiv:0901.2556 \[hep-ph\]\]; X. J. Bi, P. H. Gu, T. Li and X. Zhang, JHEP [**0904**]{}, 103 (2009) \[arXiv:0901.0176 \[hep-ph\]\]; D. Hooper and K. M. Zurek, Phys. Rev. D [**79**]{}, 103529 (2009) \[arXiv:0902.0593 \[hep-ph\]\]; Y. .B. Zeldovich, A. A. Klypin, M. Y. Khlopov and V. M. Chechetkin, Sov. J. Nucl. Phys.  [**31**]{}, 664 (1980) \[Yad. Fiz.  [**31**]{}, 1286 (1980)\]; K. Belotsky, D. Fargion, M. Khlopov and R. V. Konoplich, Phys. Atom. Nucl.  [**71**]{}, 147 (2008) \[hep-ph/0411093\]. P. S. B. Dev, D. K. Ghosh, N. Okada and I. Saha, arXiv:1307.6204 \[hep-ph\]. K. Cheung, P. Y. Tseng and T. C. Yuan, Phys. Lett. B [**678**]{}, 293 (2009) \[arXiv:0902.4035 \[hep-ph\]\]. C. R. Chen and F. Takahashi, JCAP [**0902**]{}, 004 (2009) \[arXiv:0810.4110 \[hep-ph\]\]; P. F. Yin, Q. Yuan, J. Liu, J. Zhang, X. J. Bi and S. H. Zhu, Phys. Rev. D [**79**]{}, 023512 (2009) \[arXiv:0811.0176 \[hep-ph\]\]; K. Hamaguchi, E. Nakamura, S. Shirai and T. T. Yanagida, Phys. Lett. B [**674**]{}, 299 (2009) \[arXiv:0811.0737 \[hep-ph\]\]; A. Ibarra and D. Tran, JCAP [**0902**]{}, 021 (2009) \[arXiv:0811.1555 \[hep-ph\]\]; E. Nardi, F. Sannino and A. 
Strumia, JCAP [**0901**]{}, 043 (2009) \[arXiv:0811.4153 \[hep-ph\]\]; I. Gogoladze, R. Khalid, Q. Shafi and H. Yuksel, Phys. Rev. D [**79**]{}, 055019 (2009) \[arXiv:0901.0923 \[hep-ph\]\]; K. Hamaguchi, F. Takahashi and T. T. Yanagida, Phys. Lett. B [**677**]{}, 59 (2009) \[arXiv:0901.2168 \[hep-ph\]\]; S. L. Chen, R. N. Mohapatra, S. Nussinov and Y. Zhang, Phys. Lett. B [**677**]{}, 311 (2009) \[arXiv:0903.2562 \[hep-ph\]\]; K. Ishiwata, S. Matsumoto and T. Moroi, arXiv:0903.3125 \[hep-ph\]; A. Arvanitaki, S. Dimopoulos, S. Dubovsky, P. W. Graham, R. Harnik and S. Rajendran, Phys. Rev. D [**80**]{}, 055011 (2009) \[arXiv:0904.2789 \[hep-ph\]\]. K. Ishiwata, S. Matsumoto and T. Moroi, JHEP [**0905**]{}, 110 (2009) \[arXiv:0903.0242 \[hep-ph\]\]. A. Ibarra, D. Tran and C. Weniger, Int. J. Mod. Phys. A [**28**]{}, 1330040 (2013) \[arXiv:1307.6434 \[hep-ph\]\]. H. Fukuoka, J. Kubo and D. Suematsu, Phys. Lett. B [**678**]{}, 401 (2009) \[arXiv:0905.2847 \[hep-ph\]\]. [ A. Ibarra, A. S. Lamperstorfer and J. Silk, arXiv:1309.2570 \[hep-ph\].]{} A. Arvanitaki, S. Dimopoulos, S. Dubovsky, P. W. Graham, R. Harnik and S. Rajendran, Phys. Rev. D [**79**]{}, 105022 (2009) \[arXiv:0812.2075 \[hep-ph\]\]; K. Hamaguchi, S. Shirai and T. T. Yanagida, Phys. Lett. B [**673**]{}, 247 (2009) \[arXiv:0812.2374 \[hep-ph\]\]; C. H. Chen, C. Q. Geng and D. V. Zhuridov, Phys. Lett. B [**675**]{}, 77 (2009) \[arXiv:0901.2681 \[hep-ph\]\]. M. Ibe, S. Matsumoto, S. Shirai and T. T. Yanagida, JHEP [**1307**]{}, 063 (2013) \[arXiv:1305.0084 \[hep-ph\]\]; K. Kohri and N. Sahu, arXiv:1306.5629 \[hep-ph\]. C. H. Chen, C. Q. Geng and D. V. Zhuridov, JCAP [**0910**]{}, 001 (2009) \[arXiv:0906.1646 \[hep-ph\]\]. V. Barger, W. Y. Keung, D. Marfatia and G. Shaughnessy, Phys. Lett. B [**672**]{}, 141 (2009) \[arXiv:0809.0162 \[hep-ph\]\]; M. Cirelli, M. Kadastik, M. Raidal and A. Strumia, Nucl. Phys. B [**813**]{}, 1 (2009) \[Addendum-ibid. 
B [**873**]{}, 530 (2013)\]; \[arXiv:0809.2409 \[hep-ph\]\]; C. R. Chen, F. Takahashi and T. T. Yanagida, Phys. Lett. B [**673**]{}, 255 (2009) \[arXiv:0811.0477 \[hep-ph\]\]; C. R. Chen, M. M. Nojiri, F. Takahashi and T. T. Yanagida, Prog. Theor. Phys.  [**122**]{}, 553 (2009) \[arXiv:0811.3357 \[astro-ph\]\]; J. Liu, P. F. Yin and S. H. Zhu, Phys. Rev. D [**79**]{}, 063522 (2009) \[arXiv:0812.0964 \[astro-ph\]\]. L. Feng, R. Z. Yang, H. N. He, T. K. Dong, Y. Z. Fan and J. Chang, arXiv:1303.0530 \[astro-ph.HE\]; A. Sharma, arXiv:1304.0831 \[astro-ph.CO\]; J. Kopp, Phys. Rev. D [**88**]{}, 076013 (2013) \[arXiv:1304.1184 \[hep-ph\]\]; A. De Simone, A. Riotto and W. Xue, JCAP [**1305**]{}, 003 (2013) \[JCAP [**1305**]{}, 003 (2013)\] \[arXiv:1304.1336 \[hep-ph\]\]; I. Cholis and D. Hooper, Phys. Rev. D [**88**]{}, 023013 (2013) \[arXiv:1304.1840 \[astro-ph.HE\]\]; Q. Yuan and X. J. Bi, Phys. Lett. B [**727**]{}, 1 (2013) \[arXiv:1304.2687 \[astro-ph.HE\]\]. Q. Yuan, X. J. Bi, G. M. Chen, Y. Q. Guo, S. J. Lin and X. Zhang, arXiv:1304.1482 \[astro-ph.HE\]; H. B. Jin, Y. L. Wu and Y. F. Zhou, arXiv:1304.1997 \[hep-ph\]. For recent reviews, see K. Petraki and R. R. Volkas, Int. J. Mod. Phys. A [**28**]{}, 1330028 (2013) \[arXiv:1305.4939 \[hep-ph\]\]; K. M. Zurek, arXiv:1308.0338 \[hep-ph\]. K. R. Dienes, J. Kumar and B. Thomas, Phys. Rev. D [**88**]{}, 103509 (2013) \[arXiv:1306.2959 \[hep-ph\]\]. Y. Kajiyama, H. Okada and T. Toma, arXiv:1304.2680 \[hep-ph\]. M. Cirelli, E. Moulin, P. Panci, P. D. Serpico and A. Viana, Phys. Rev. D [**86**]{}, 083506 (2012) \[arXiv:1205.5283 \[astro-ph.CO\]\]. P. Sreekumar [*et al.*]{} \[EGRET Collaboration\], Astrophys. J.  [**494**]{}, 523 (1998) \[astro-ph/9709257\]. M. Ackermann [*et al.*]{} \[LAT Collaboration\], Phys. Rev. D [**86**]{}, 022002 (2012) \[arXiv:1205.2739 \[astro-ph.HE\]\]. E. A. Baltz and J. Edsjo, Phys. Rev. D [**59**]{}, 023511 (1998) \[astro-ph/9808243\]. R. Trotta, G. Johannesson, I. V. Moskalenko, T. A. 
Porter, R. R. de Austri and A. W. Strong, Astrophys. J.  [**729**]{}, 106 (2011) \[arXiv:1011.0037 \[astro-ph.HE\]\]. K. G. Begeman, A. H. Broeils and R. H. Sanders, Mon. Not. Roy. Astron. Soc.  [**249**]{}, 523 (1991). A. W. Strong and I. V. Moskalenko, Astrophys. J.  [**509**]{}, 212 (1998) \[astro-ph/9807150\]. L. J. Gleeson and W. I. Axford, Astrophys. J.  [**154**]{}, 1011 (1968). T. Sjostrand, S. Mrenna and P. Z. Skands, JHEP [**0605**]{}, 026 (2006) \[hep-ph/0603175\]. O. Adriani [*et al.*]{} \[PAMELA Collaboration\], Phys. Rev. Lett.  [**105**]{}, 121101 (2010) \[arXiv:1007.0821 \[astro-ph.HE\]\]. J. Fan, A. Katz, L. Randall and M. Reece, arXiv:1303.1521 \[astro-ph.CO\]; J. Fan, A. Katz, L. Randall and M. Reece, Phys. Rev. Lett.  [**110**]{}, 211302 (2013) \[arXiv:1303.3271 \[hep-ph\]\]; M. McCullough and L. Randall, JCAP [**1310**]{}, 058 (2013) \[arXiv:1307.4095 \[hep-ph\]\]; M. Y. Khlopov and C. Kouvaris, Phys. Rev. D [**78**]{}, 065040 (2008) \[arXiv:0806.1191 \[astro-ph\]\]. J. F. Beacom, N. F. Bell and G. Bertone, Phys. Rev. Lett.  [**94**]{}, 171301 (2005) \[astro-ph/0409403\]. R. Essig, N. Sehgal and L. E. Strigari, Phys. Rev. D [**80**]{}, 023506 (2009) \[arXiv:0902.4750 \[hep-ph\]\]. A. A. Abdo [*et al.*]{} \[Fermi-LAT Collaboration\], Phys. Rev. Lett.  [**104**]{}, 101101 (2010) \[arXiv:1002.3603 \[astro-ph.HE\]\]; M. Cirelli and P. Panci, Nucl. Phys. B [**821**]{}, 399 (2009) \[arXiv:0904.3830 \[astro-ph.CO\]\]; S. Matsumoto, K. Ishiwata and T. Moroi, Phys. Lett. B [**679**]{}, 1 (2009) \[arXiv:0905.4593 \[astro-ph.CO\]\]. A. Ibarra, D. Tran and C. Weniger, JCAP [**1001**]{}, 009 (2010) \[arXiv:0906.1571 \[hep-ph\]\]; C. R. Chen, F. Takahashi and T. T. Yanagida, Phys. Lett. B [**671**]{}, 71 (2009) \[arXiv:0809.0792 \[hep-ph\]\]. A. Ibarra and D. Tran, Phys. Rev. Lett.  [**100**]{}, 061301 (2008) \[arXiv:0709.4593 \[astro-ph\]\]. N. Fornengo, L. Pieri and S. Scopel, Phys. Rev. D [**70**]{}, 103529 (2004) \[hep-ph/0407342\]. P. A. R. 
Ade [*et al.*]{} \[Planck Collaboration\], arXiv:1303.5062 \[astro-ph.CO\]. M. Actis [*et al.*]{} \[CTA Consortium Collaboration\], Exper. Astron.  [**32**]{}, 193 (2011) \[arXiv:1008.3703 \[astro-ph.IM\]\]. www.cta-observatory.org R. Abbasi [*et al.*]{} \[IceCube Collaboration\], Phys. Rev. D [**84**]{}, 022004 (2011) \[arXiv:1101.3349\[astro-ph.HE\]\]. R. Abbasi [*et al.*]{} \[IceCube Collaboration\], arXiv:1210.3557\[hep-ex\]. [^1]: geng@phys.nthu.edu.tw [^2]: dahuang@phys.nthu.edu.tw [^3]: lhtsai@phys.nthu.edu.tw
--- abstract: | The effects of an unconditional move rule in the spatial Prisoner’s Dilemma, Snowdrift and Stag Hunt games are studied. Spatial structure by itself is known to modify the outcome of many games when compared with a randomly mixed population, sometimes promoting, sometimes inhibiting cooperation. Here we show that random dilution and mobility may suppress the inhibiting factors of the spatial structure in the Snowdrift game, while enhancing the already larger cooperation found in the Prisoner’s Dilemma and Stag Hunt games. [**Corresponding author**]{}: Jeferson J. Arenzon, +55 51 33086446 (phone), +55 51 33087286 (fax), arenzon@if.ufrgs.br author: - 'Estrella A. Sicardi' - Hugo Fort - 'Mendeli H. Vainstein' - 'Jeferson J. Arenzon' title: Random mobility and spatial structure often enhance cooperation --- Introduction ============ Competition and cooperation are two inseparable sides of the same coin. While competition is a key concept in Darwin’s theory of evolution, cooperation is rather puzzling, yet ubiquitous in nature [@SmSz97]. How cooperative behavior evolved among self-interested individuals is an important open question in biology and social sciences, along with the issue of how cooperation contends with competition in order to achieve global and individual optima. A powerful tool to analyze these problems is evolutionary game theory [@Smith82; @Weibull95; @HoSi98], an application of the mathematical theory of games to biological contexts. Of particular relevance are two-player games where each player has a strategy space containing two possible actions (2$\times$2 games), to cooperate (C) or to defect (D). The payoff of a player depends on its action and on that of its co-player. Assuming pairwise, symmetric interaction between the players, there are four possible values for this payoff. Cooperation involves a cost to the provider and a benefit to the recipient.
Two cooperators thus get a [*reward*]{} $R$ while two defectors get a [*punishment*]{} $P$. The trade between a cooperator and a defector gives the [*temptation*]{} $T$ for the latter, while the former receives the [*sucker’s payoff*]{}, $S$. We renormalize all values such that $R=1$ and $P=0$. The ranking of the above quantities defines the game they are playing. The paradigmatic example is the [*Prisoner’s Dilemma*]{} (PD) game in which the four payoffs are ranked as $T>R>P>S$. It clearly pays more to defect whatever the opponent’s strategy: the gain will be $T>R$ if the other cooperates and $P>S$ if the other defects. The dilemma appears since if both play D they get $P$, which is worse than the reward $R$ they would have obtained had they both played C. The PD is related to two other [*social dilemma*]{} games [@Poundstone92; @Liebrand83]. In most animal contests (in particular those involving escalating conflicts), mutual defection is the worst possible outcome for both players, and the damage exceeds the cost of being exploited, [*i.e.*]{} $T>R>S>P$. This game is called [*chicken*]{} [@Rapoport66] or [*snowdrift*]{} (SD). On the other hand, when the reward surpasses the temptation to defect, [*i.e.*]{} $R>T>P>S$, the game becomes the [*Stag Hunt*]{} (SH) [@Skyrms04]. The coordination of slime molds is an example of animal behavior that has been described as a stag hunt [@StZhQu00]. When individual amoebae of [*Dictyostelium discoideum*]{} are starving, they aggregate to form one large body whose reproductive success depends on the cooperation of many individuals. Here, we consider for $T>1$ the PD ($S<0$) and the SD ($S>0$), whose interface ($S=0$) is known as the weak form of the PD game, with $S=P=0$. The SH game is obtained for $S<0$ and $T<1$. Classical evolutionary game theory constitutes a mean-field-like approximation which does not include the effect of spatial, correlated structures of populations.
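With the normalization $R=1$, $P=0$ used throughout, the three games are distinguished purely by the signs of $S$ and of $T-1$. As an illustrative sketch (our own code, not part of the original study), the classification can be written as:

```python
# Classify a symmetric 2x2 game from (T, S), with R = 1 and P = 0 fixed
# as in the text.  The boundaries (S = 0 with T > 1 is the "weak" PD;
# T = 1 separates the PD/SD region from the SH) are reported separately.
def classify(T, S):
    if T > 1 and S < 0:
        return "Prisoner's Dilemma"   # T > R > P > S
    if T > 1 and S > 0:
        return "Snowdrift"            # mutual defection is worst: S > P
    if T < 1 and S < 0:
        return "Stag Hunt"            # R > T: reward beats temptation
    return "boundary (e.g. weak PD at S = P = 0)"

print(classify(1.4, -0.1))  # Prisoner's Dilemma
print(classify(1.4, 0.5))   # Snowdrift
print(classify(0.9, -0.5))  # Stag Hunt
```

These three parameter choices correspond to the regions explored in the simulations below.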
Axelrod [@Axelrod84] suggested placing the agents on a two-dimensional spatial array interacting with their neighbors. This cellular automaton was explored by Nowak and May [@NoMa92], who found that such spatial structure allows cooperators to build clusters in which the benefits of mutual cooperation can outweigh losses against defectors, thus enabling cooperation to be sustained, in contrast with the spatially unstructured game, where defection is always favored. The original Nowak-May model was extended and modified in several different ways (see ref. [@SzFa07] and references therein). Related to the present work, the effects of dilution and mobility were recently studied in refs. [@VaAr01; @VaSiAr07] in the weak form of the PD game. Mobility effects are difficult to anticipate. When non-assortative movements are included, the effective number of neighbors increases (towards the random-mixing limit). Moreover, those clusters so necessary to sustain cooperation may now evaporate. Both these effects promote defection and the number of cooperators is expected to decrease. On the other hand, dilution [@AlTa08] and mobility decrease the competition for local resources and help to avoid retaliation and abuse (although that may require more contingent movements), thus tending to increase cooperation. In the evolutionary game context, diffusion was studied by several authors, sometimes as a cost to wander between patches without spatial structure [@DuWi91; @EnLe93] or as a trait connected with laying offspring within a given dispersal range [@BaRa98; @Koella00; @HaTa05; @LeFeDi05]. An explicit diffusive process was studied in the framework of the replicator equation [@HoSi98], extended to include a diffusive term [@FeMi95; @FeMi96]; however, the interactions were still mean-field-like. Aktipis [@Aktipis04] considered contingent movement of cooperators with a “win-stay, lose-move” rule, allowing them to invade a population of defectors and resist further invasions.
Models with alternating viscosities, which reflect different stages of development that can benefit from the clusterization of cooperators or from dispersal, have also been considered and promote altruism, since the high-viscosity phase allows interactions between close relatives and the low-viscosity phase reduces the disadvantages of local competition among related individuals. In populations with only a highly viscous phase, the effects of interactions among relatives and competition for local resources tend to balance, and thus the evolution of altruistic behavior is inhibited [@WiPoDu92; @Taylor92]. Differently from previous works, ref. [@VaSiAr07] considered a diluted version of Nowak-May’s spatial PD model where individuals are able to perform random walks on the lattice when there is enough free space (the non-assortative “always-move” strategy). Specifically, the setting was the simplest possible: random diffusion of simple, memoryless, unconditional, non-retaliating, strategy-pure agents. Under these conditions, cooperation was found not only to be possible and robust but, for a broad range of the parameters (density, viscosity, etc), often enhanced when compared to the strongly viscous (no mobility) case. The parameters chosen put the model at the interface between the PD and the SD, and a natural question is how robust the behavior is when $S<P$, that is, in a genuine PD game. Moreover, how does mobility affect other games, like the SD or the SH? Recently, Jian-Yue [*et al.*]{} [@JiZhYi07] extended the results of ref. [@VaSiAr07] for the SD game and COD dynamics (see next section), but with a restricted choice of $S$ and $T$. Another relevant question regards the existence of any fundamental difference between those games when mobility is introduced. In particular, in those cases where the spatial structure is known to inhibit cooperation [@HaDo04], does mobility change this picture?
Our objective here is to present a more comprehensive analysis and extend our previous study in several directions, trying to shed some light on the above questions. The paper is organized as follows. The following section describes the details of the model and simulation. Then, we present the results for two possible implementations depending on the order of the diffusive and offspring steps. Finally, we present our conclusions and discuss some implications of the results. The Model {#section.model} ========= The model is a two-dimensional stochastic cellular automaton in which cells are either occupied by unconditional players (cooperators or not) or vacant. At time $t$, the variable $S_i(t)$ is 0 if the corresponding lattice cell is empty, or $\pm 1$ depending on whether the agent at that site cooperates (1) or defects ($-1$). The relevant quantity is the normalized fraction of cooperators, ${\rho_{\scriptstyle\rm c}}$, after the stationary state is attained, defined as ${\rho_{\scriptstyle\rm c}}=(1+M/\rho)/2$, where $M$ is the “magnetization” $M=N^{-1}\sum_i\langle S_i(\infty)\rangle$, $N$ is the system size and $\rho\neq 0$ is the fraction of occupied sites, which is kept fixed at all times (when $\rho \neq 1$, we call the system diluted). The symbol $\langle\ldots\rangle$ stands for an average over the ensemble of initial configurations. We call “active” a site that has changed strategy since the previous step. At each time step, every individual plays against its four nearest neighbors (if present), collecting the payoff from these combats. After that, it may either move or try to generate its offspring. We consider a best-takes-all reproduction rule: each player compares its total payoff with those of its nearest neighbors and adopts the strategy of the one (including itself) with the greatest payoff among them. This strategy-updating rule preserves the total number of individuals, thus keeping $\rho$ constant.
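The definition of ${\rho_{\scriptstyle\rm c}}$ can be checked on a single configuration. A minimal sketch, assuming numpy and our own variable names:

```python
import numpy as np

def cooperator_fraction(lattice):
    """Normalized cooperator fraction rho_c = (1 + M/rho)/2 for one
    configuration, with entries +1 (C), -1 (D) or 0 (empty)."""
    N = lattice.size
    M = lattice.sum() / N                 # "magnetization" of this sample
    rho = np.count_nonzero(lattice) / N   # occupied fraction (must be > 0)
    return (1 + M / rho) / 2

# 2x2 example: two cooperators, one defector, one empty site
conf = np.array([[1, 1], [-1, 0]])
print(cooperator_fraction(conf))  # 2 of the 3 agents cooperate -> 2/3
```

Here $M = 1/4$ and $\rho = 3/4$, so $\rho_{\rm c} = (1 + 1/3)/2 = 2/3$, the fraction of occupied sites holding a cooperator.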
If a tie among some of the neighbors occurs, one of them is randomly chosen. During the diffusive part, each agent attempts to jump to a randomly chosen nearest-neighbor site; the attempt is accepted, provided the site is empty, with a probability given by the mobility parameter $m$. Notice that $m$ does not measure the effective mobility, which depends on both $m$ and $\rho$, but only a tendency to move, space allowing. However, different combinations of both parameters could give the same effective mobility (measured, for example, through the mean square displacement), but that parameter alone would not suffice to characterize the game, since spatial correlations are determined by $\rho$. Among the several ways of implementing the reproductive and diffusive steps, here we consider two possibilities, named contest-offspring-diffusion (COD) and contest-diffusion-offspring (CDO). In the former, as the name indicates, each step consists of combats followed by the generation of offspring done in parallel, and then diffusion, while in the latter, the diffusion and offspring steps are reversed. Of course, the stochasticity introduced by the mobility disappears under some conditions (e.g., $\rho=1$ or $m=0$), and the final outcome may then depend on the initial conditions. We explore many choices of the payoff parameters $T$ and $S$ while $P$ and $R$ are kept fixed at 0 and 1, respectively, and compare the effects of diffusion with those obtained either for the related spatial weak version [@VaSiAr07] or for the randomly mixed limit. Square lattices of sizes ranging from $100 \times 100$ to $500 \times 500$ with periodic boundary conditions are used. Averages are performed over at least 100 samples and equilibrium is usually attained before 1000 network sweeps are completed, although in some cases not even $10^7$ sweeps are enough to bring the system to an equilibrium fixed point.
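For concreteness, one COD sweep can be sketched as follows. This is a simplified reimplementation based on the description above, not the authors’ code; in particular, ties in the best-takes-all step are resolved in favor of the first candidate (the current strategy) rather than randomly:

```python
import numpy as np

def cod_step(lattice, T, S, m, rng, R=1.0, P=0.0):
    """One contest-offspring-diffusion (COD) sweep on a periodic square
    lattice with entries +1 (C), -1 (D), 0 (empty)."""
    L = lattice.shape[0]
    pay = np.array([[R, S], [T, P]])   # pay[me, opponent]; row 0 = C, 1 = D
    row = lambda s: 0 if s == 1 else 1
    nbrs = ((1, 0), (-1, 0), (0, 1), (0, -1))
    # contest: accumulate payoff against the (present) nearest neighbors
    score = np.zeros(lattice.shape)
    for x in range(L):
        for y in range(L):
            if lattice[x, y] == 0:
                continue
            for dx, dy in nbrs:
                o = lattice[(x + dx) % L, (y + dy) % L]
                if o != 0:
                    score[x, y] += pay[row(lattice[x, y]), row(o)]
    # offspring: synchronous best-takes-all (occupation pattern unchanged)
    new = lattice.copy()
    for x in range(L):
        for y in range(L):
            if lattice[x, y] == 0:
                continue
            cands = [(score[x, y], lattice[x, y])]
            for dx, dy in nbrs:
                nx, ny = (x + dx) % L, (y + dy) % L
                if lattice[nx, ny] != 0:
                    cands.append((score[nx, ny], lattice[nx, ny]))
            new[x, y] = max(cands, key=lambda t: t[0])[1]
    # diffusion: each agent tries one jump to a random neighbor,
    # accepted with probability m if the target site is empty
    for x, y in np.argwhere(new != 0):
        if rng.random() < m:
            dx, dy = nbrs[rng.integers(4)]
            nx, ny = (x + dx) % L, (y + dy) % L
            if new[nx, ny] == 0:
                new[nx, ny], new[x, y] = new[x, y], 0
    return new
```

Both sub-steps conserve the number of agents, so $\rho$ stays fixed; in the CDO variant the diffusion loop would simply run before the offspring loop.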
The initial configuration has $\rho N$ individuals placed randomly on the lattice, equally distributed among cooperators and defectors. Results ======= ![Average fraction of cooperating individuals ${\rho_{\scriptstyle\rm c}}$ for mobilities $m=0$ (top) and 1 (bottom), $P=0$, $R=1$, $T=1.4$ and several values of $S$ in the COD case. The case $S=-0.1$ is representative of the whole interval around the weak PD point $S=0$ ($-0.2<S<2/15$). Indeed, all curves in this interval collapse. Notice also that near $\rho=1$, negative responses occur and ${\rho_{\scriptstyle\rm c}}$ may decrease as $S$ gets larger, the effect being stronger for $m=0$ (see text). At small densities, mobility is detrimental to cooperators: isolated clusters of cooperators, that could survive for $m=0$, can be predated by mobile defectors once mobility is considered. At intermediate densities, mobility can strongly increase the amount of cooperation.[]{data-label="fig.COD_rhoc2"}](COD_rhoc_m0.eps "fig:"){width="7.8cm"} ![](COD_rhoc2.eps "fig:"){width="7.8cm"} ![image](S_m0.eps){width="8cm"} ![image](COD_S_m01.eps){width="8cm"} ![image](COD_S_m05.eps){width="8cm"} ![image](COD_S_m1.eps){width="8cm"} ![Average fraction of cooperating agents ${\rho_{\scriptstyle\rm c}}$ as a function of the mobility $m$ for several values of $\rho$ and $S$. Notice that ${\rho_{\scriptstyle\rm c}}$ may either increase ($m=1$ is optimal) or decrease (a small, but non-zero, $m$ is optimal), depending on the parameters, and that there is a discontinuity at $m=0$.[]{data-label="fig.COD_m"}](COD_m.eps){width="7.8cm"} Ref. [@VaSiAr07] considered only the weak limit $P=S=0$ of the PD game for $T=1.4$ and $R=1$, that is, inside the region where, for $\rho=1$, both strategies coexist along with active sites. When $S\neq 0$, this region may comprise values of $S$ both in the PD and in the SD regimes, that is, $S<0$ and $S>0$, respectively. Figs. \[fig.COD\_rhoc2\] and \[fig.COD\_S\] show that the results found in ref. [@VaSiAr07] with the COD dynamics remain unchanged for this rather broad range of $S$. Indeed, the line labeled $S=-0.1$ in fig. \[fig.COD\_rhoc2\] exactly superposes on the results for all values of $S$ in the aforementioned interval, $S=0$ included. This can be more clearly seen in fig. \[fig.COD\_S\], where $\rho$, $m$ and $S$ are varied, and a plateau in the interval $(-0.2,2/15)$ [@Hauert01] is observed (although the value of ${\rho_{\scriptstyle\rm c}}$ in the plateau depends on both $\rho$ and $m$). Besides this active phase, the system presents a large number of different phases, with sharp transitions between them. For comparison, the no-mobility case [@VaAr08], $m=0$, is also included, as well as the cooperator density when $\rho=1$ (solid black line).
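The no-mobility transition points listed in the text for $T=1.4$, $R=1$, $P=0$ follow from comparing the payoffs of the possible local neighborhoods [@VaAr08]. A quick numerical check of the listed values (our own script):

```python
# Transition points of the no-mobility case, from the local-neighborhood
# payoff comparisons of [VaAr08], evaluated at T = 1.4, R = 1, P = 0.
T, R, P = 1.4, 1.0, 0.0
S_transitions = [
    (T + 3*P) / 2 - R,      # -0.3
    2*T + 2*P - 3*R,        # -0.2
    (T + 3*P - R) / 3,      #  2/15 ~ 0.1333
    (T + 3*P) / 4,          #  0.35
    T + P - R,              #  0.4
    (2*P + 2*T - R) / 3,    #  0.6
    P + T / 2,              #  0.7
]
print([round(s, 4) for s in S_transitions])
# [-0.3, -0.2, 0.1333, 0.35, 0.4, 0.6, 0.7]
```

The weak-PD plateau of fig. \[fig.COD\_S\] is the interval between the second and third of these points, $(-0.2, 2/15)$.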
The transition points, calculated considering the several possible local neighborhoods [@VaAr08], are represented by vertical dashed and dotted lines located at the points where $S$ equals $(T+3P)/2-R=-0.3$, $2T+2P-3R=-0.2$, $(T+3P-R)/3=2/15$, $(T+3P)/4=0.35$, $T+P-R=0.4$, $(2P+2T-R)/3=0.6$ and $P+T/2=0.7$. In addition to these transitions, a few more are introduced when the system is diluted (vertical dotted lines), $\rho<1$, each phase being characterized by the fraction of cooperators and by the way they organize spatially [@VaAr08]. When mobility is introduced after the offspring generation (COD dynamics), no new transition appears. Dilution (without mobility) allows cooperation for small values of $S$ ($S<-0.3$), which is absent at $\rho=1$. For low densities in particular, clusters are small and isolated; therefore, depending on the initial condition, cooperators succeed in forming pure clusters. However, as soon as mobility is introduced, this disorder-driven phase disappears since small cooperative clusters are easily predated by wandering defectors. In this low-density situation, cooperation is only sustained when the exploitation is not too strong (larger values of $S$), as can be seen in the case $\rho=0.3$ and $m=1$, where cooperation exists only for $S>0.7$. For intermediate densities, some phases may coalesce, like the large-$S$ region for $\rho=0.5$ where ${\rho_{\scriptstyle\rm c}}=1$. Interestingly, mobility has a nontrivial effect on the negative response that is already present at $\rho=1$ or $m=0$. When the sucker’s payoff $S$ increases (less exploitation), one expects higher levels of cooperation. But the opposite behavior is sometimes observed, and the number of cooperators may instead decrease. An example occurs when $\rho=1$ as $S$ increases beyond the transition point 0.6, and ${\rho_{\scriptstyle\rm c}}$ attains a new plateau, far below the previous one. However, with mobility such an effect may be enhanced, attenuated or reverted. Fig.
\[fig.COD\_S\] depicts many examples of such behavior. Moreover, for a fixed $\rho$, mobility affects different phases in diverse ways. For example, for $\rho=0.5$, when $m$ changes from 0.1 to 0.5, the cooperative phase at $0.2<S<0.4$ disappears, while the one for $0.4<S<1$ suffers no alteration. When $m=1$, instead, this region splits into five smaller regions. Therefore, whether large or small mobility is better for cooperators strongly depends on both $m$ and $\rho$. For example, for $\rho=0.9$, ${\rho_{\scriptstyle\rm c}}$ increases with $m$ when $S$ is large and decreases for smaller values. The behavior of ${\rho_{\scriptstyle\rm c}}$ as a function of the mobility is shown in fig. \[fig.COD\_m\]. In some cases cooperation is an increasing function of the mobility and the optimal value is thus $m=1$. On the other hand, mobility may also be detrimental for cooperation, and ${\rho_{\scriptstyle\rm c}}$ then steadily decreases with $m$; in this latter case, a non-zero but very small mobility gives the optimal value. The most important result is obtained when we compare the simulations with what one would obtain in a large randomly mixed population. A simple mean-field [@HoSi98; @Hauert01] argument leads to three possible solutions: two absorbing states, ${\rho_{\scriptstyle\rm c}}=0$ and 1, and a mixed case with $${\rho_{\scriptstyle\rm c}}=\frac{S}{S+T-1}, \label{eq.mf}$$ where we have already considered $R=1$ and $P=0$. Depending on the values of $S$ and $T$, one of these solutions becomes the stable one. In the mean-field PD case, there is no cooperation and ${\rho_{\scriptstyle\rm c}}=0$ is the stable solution. For the SH, the stable solution depends also on the initial density of cooperators ${\rho_{\scriptstyle\rm c}}^0$ and eq. \[eq.mf\] delimits the basin of attraction of each solution: ${\rho_{\scriptstyle\rm c}}=0$ if ${\rho_{\scriptstyle\rm c}}^0<S/(S+T-1)$ and 1 otherwise. The solution given by eq. \[eq.mf\], shown in fig.
\[fig.COD\_S\] as a curved line, is only stable in the SD game. It is known that a structured spatial distribution of agents often inhibits cooperation [@HaDo04] in the SD game, as opposed to what happens in the PD and SH games. Nevertheless, when both dilution and mobility are introduced, cooperation is not so often inhibited. Indeed, in fig. \[fig.COD\_S\] one can see that in many cases the spatially distributed population outperforms the randomly mixed population in terms of cooperative behavior. ![Average fraction of cooperating individuals ${\rho_{\scriptstyle\rm c}}$ for mobilities $m=0$ (top) and $m=1$ (bottom), $P=0$, $R=1$, $T=0.9$ and several values of $S$ with COD dynamics for the SH game. Notice the strong effect of the inclusion of mobility. The chosen values correspond to the different plateaux seen in fig. \[fig.COD\_S\_SH\]. The density of cooperators is a monotonically increasing function of the total density for $m\neq 0$, while it is either non-monotonic or monotonically decreasing for $m=0$.[]{data-label="fig.sh_rhocm1"}](sh_rhocm0.eps "fig:"){width="7.8cm"} ![](sh_rhocm1.eps "fig:"){width="7.8cm"} ![image](sh_m0.eps){width="8cm"} ![image](sh_m01.eps){width="8cm"} ![image](sh_m05.eps){width="8cm"} ![image](sh_m1.eps){width="8cm"} Besides the PD and SD games, we also studied the effect of mobility in the SH game. Fig.
\[fig.sh\_rhocm1\] shows the normalized fraction of cooperators as a function of the total density for several values of $S<0$ and $T=0.9 < R=1$ when the mobility is either high ($m=1$) or absent ($m=0$). Without mobility, isolated clusters of cooperators are able to survive and even at very low densities cooperation is sustained. Once mobility is included, cooperation at low densities is destroyed as small cooperator clusters are easily predated by mobile defectors. On the other hand, at higher densities cooperation is strongly enhanced and ${\rho_{\scriptstyle\rm c}}=1$ for all values of $S<0$. It is interesting to notice that mobility also changes the dependence on the density: with $m=1$ all curves are monotonically increasing functions of $\rho$, while for $m=0$ they are always decreasing for low $S$ and non-monotonic for larger values. The general dependence on $S$, for several values of the mobility, can be observed in fig. \[fig.COD\_S\_SH\]. No negative response appears and the amount of cooperation increases with $S$, as expected. ![Average fraction of cooperating individuals ${\rho_{\scriptstyle\rm c}}$ versus $S$ ($T=1.4$, $R=1$ and $P=0$) for different values of the mobility and $\rho$ in the CDO case. The solid black line is the case $\rho=1$, while the vertical ones locate the transition points for both $\rho=1$ and $\rho<1$ (see fig. \[fig.COD\_S\] and text). Differently from the COD case, mobility introduces new transitions (indicated by vertical solid lines) at $S=-3$, $-2$, $-1$, $-1/2$, $-1/3$ and 0.[]{data-label="fig.CDO_S"}](CDO_S_m01.eps "fig:"){width="7.8cm"} ![](CDO_S_m1.eps "fig:"){width="7.8cm"} When the diffusion step is performed before the offspring laying (CDO dynamics), the amount of cooperation is often strongly enhanced, as can be observed in fig. \[fig.CDO\_S\]. Cooperators close to defectors have low payoff; therefore, if they do not move, in the next step their strategy will be replaced by D. On the other hand, if they do move away, there is a chance of surviving, depending on the new neighborhood they encounter (at low densities, for example, they may be isolated after the move, and thus survive). Moreover, cooperators that detach from a cooperative neighborhood have high payoff and may replace the defectors they find. Indeed, for many values of the parameters $\rho$ and $m$, cooperators fully dominate the system, even in the region of small $S$ values where the COD dynamics allows no cooperators to survive. While all phases appearing in the COD dynamics were already present when $m=0$, in the CDO case a few new phases appear. These mobility-driven transitions can be seen in fig. \[fig.CDO\_S\] and are marked by solid vertical lines at $S=-3$, $-2$, $-1$, $-1/2$, $-1/3$ and 0. In particular, a new transition appears at $S=0$ and, differently from all other cases where the weak PD behavior was representative of a wide range of values of $S$, in this case, although the weak and the strict PD ($S<P$) still behave in the same way, the small-$S$ SD becomes different. Even though the density of the two phases is very similar, they are in fact different: the final configurations differ slightly even if we prepare two systems (for example, one with $S=-0.01$ and the other with $S=0.01$) with identical initial conditions and subject to the same sequence of random numbers.
Discussion and final comments ============================= The main question posed at the beginning was how mobility affects the outcome of different games beyond the weak dilemma at the frontier between the PD and the SD studied in Ref. [@VaSiAr07]. A main novelty emerges in the context of the SD game: mobility restores the enhancing effect of the spatial structure also found in the PD game, at variance with the $m=0$ case, where cooperation is usually lower than in the fully mixed case [@HaDo04]. In general, when agents are able to randomly diffuse on the lattice, unmatched levels of cooperation can be attained for wide ranges of the parameters. Moreover, differently from the PD and SH games, the spatial SD presents negative responses when the value of $S$ increases: instead of enhancing the amount of cooperation as one would expect, ${\rho_{\scriptstyle\rm c}}$ sometimes decreases. This effect, absent in the fully mixed case, is observed even in the absence of mobility, something that had not been previously noticed. Cooperators are spatially organized in different ways depending on the game they play. For example, the clusters may be more compact or filamentous. This spatial structure governs the effect that mobility has on the fate of the game. We considered three regions of interest in the $T$ and $S$ plane: the genuine Prisoner’s Dilemma (PD) game ($T>1$ and $S<0$), the Snowdrift (SD) game ($T>1$ and $S>0$) and the Stag Hunt (SH) game ($T<1$ and $S<0$). Let us analyze what happens for each of the three games separately. We start with the genuine PD, where qualitative differences with respect to the weak dilemma occur only for values of $S$ below a threshold $S^*$, a region in which cooperation is completely extinguished in the presence of mobility. This is reasonable since, by increasing the penalization for the sucker’s behavior (decreasing $S$), one finally reaches a point below which C agents perform badly and cannot overcome the filter of selection.
For the COD variant at $T=1.4$, $S^* = -0.2$ regardless of the density $\rho$ and for all $m>0$ considered. On the other hand, for the CDO variant, $S^*$ depends strongly on $\rho$. It is remarkable that, even for very severe sucker’s penalizations (down to $S=-2.5$ in the figure, but any smaller value will do: since there is no further transition below $S=-3$, even arbitrarily negative values behave in the same way), for intermediate values of $\rho$ (e.g. $\rho = 0.5$), the universal cooperation state (${\rho_{\scriptstyle\rm c}}=1$), or a state very close to it, can still be attained. In less severe dilemmas than the PD — mutual defection pays less than the sucker’s payoff in the SD, and mutual cooperation pays more than cheating in the SH — cooperation is, as one would have expected, in general higher. In the case of the SD, cooperation is often enhanced with respect to the weak dilemma with COD dynamics, while an unprecedented state of universal cooperation (${\rho_{\scriptstyle\rm c}}=1$) can sometimes be reached with the CDO one. Hauert and Doebeli [@HaDo04] noticed that cooperation is often inhibited by spatial structure, with ${\rho_{\scriptstyle\rm c}}$ being usually lower than its value in a randomly mixed population, where for large systems one of the three solutions ${\rho_{\scriptstyle\rm c}}=S/(T+S-1)$, 0 or 1 is stable. Dilution and mobility change this scenario dramatically. When only dilution (but no mobility) is present, cooperation in a spatially distributed system is higher than in the randomly mixed limit either for intermediate densities or small values of $S$. When mobility is added, only high densities follow the behavior of the $\rho =1$ situation where spatial structure inhibits cooperation. On the contrary, for not so high densities cooperation is enhanced in the SD game when $m\neq 0$. In this way, in the presence of mobile agents, it is again possible to state that spatial structure promotes cooperation.
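The mean-field claims recalled above (cooperation dies out in the PD, the mixed solution of eq. \[eq.mf\] is the stable one in the SD, and the SH is bistable with eq. \[eq.mf\] separating the basins) can be verified by integrating the standard two-strategy replicator dynamics. A minimal sketch with our own variable names, not the simulation code:

```python
def mixed_solution(T, S):
    """Mean-field mixed fixed point rho_c = S/(S+T-1), with R=1, P=0."""
    return S / (S + T - 1.0)

def replicator(T, S, x0=0.3, dt=0.01, steps=20000):
    # Euler integration of dx/dt = x(1-x)(f_C - f_D), where
    # f_C = x*R + (1-x)*S and f_D = x*T + (1-x)*P, with R=1, P=0.
    x = x0
    for _ in range(steps):
        fc = x + (1.0 - x) * S
        fd = x * T
        x += dt * x * (1.0 - x) * (fc - fd)
    return x

# Snowdrift (T>1, S>0): flows to the mixed solution
print(abs(replicator(1.4, 0.5) - mixed_solution(1.4, 0.5)) < 1e-9)  # True
# Prisoner's Dilemma (S<0): cooperation goes extinct
print(replicator(1.4, -0.1) < 1e-6)  # True
# Stag Hunt (T<1): bistable, the mixed solution separates the basins
print(replicator(0.9, -0.1, x0=0.6) > 0.999)  # True (full cooperation)
print(replicator(0.9, -0.1, x0=0.4) < 1e-6)   # True (extinction)
```

For $T=0.9$, $S=-0.1$ the separatrix sits at $S/(S+T-1)=0.5$, so initial cooperator fractions above one half flow to full cooperation and those below to extinction, exactly the basin structure described in the Results section.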
In the COD SH, the combination of mobility and large density ($\rho \ge 0.7$) leads to a boost in ${\rho_{\scriptstyle\rm c}}$ or even to universal cooperation. On the other hand, for smaller values of $\rho$, provided the sucker’s payoff $S$ is also small, ${\rho_{\scriptstyle\rm c}}$ is lower. So a crucial difference is that, for a given mobility, the level of cooperation grows with the density of agents, unlike the behavior at $m=0$. Assessing the actual payoffs involved in real situations is not an easy task, and it has been suggested that many examples that have been interpreted as realizations of the PD are also compatible with the SD and the SH games (see [@HaDo04; @Skyrms04] and references therein). In addition, mobility effects on the cooperation of real organisms are still largely unknown, as they are difficult to isolate from other factors and, as the theoretical results presented here have shown, even a tiny amount of mobility is able to produce very strong changes in the final result. Although mobility may have an effect similar to noise, allowing shallow basins to be avoided, the two are not equivalent. For example, in Ref. [@HaDo04], several different dynamics, with and without noise, gave consistent results for the inhibition of cooperation in the SD game with spatial structure, while mobility drastically changes this outcome. Since the results seem to depend strongly on the chosen dynamics (although we have only considered “best-takes-all” updates, there are two possible variants, CDO and COD), an important, yet open, question concerns the existence of a unifying principle, in Hamilton’s sense [@Hamilton64; @Nowak06b], relating the parameters of the game, that tells us when cooperative behavior might be expected when mobile agents are present. Research partially supported by the Brazilian agency CNPq, grant PROSUL-490440/2007. JJA is partially supported by the Brazilian agencies CAPES, CNPq and FAPERGS.
ES and HF want to thank PEDECIBA for financial support. <span style="font-variant:small-caps;">Aktipis, C. A.</span> (2004). Know when to walk away: contingent movement and the evolution of cooperation. *J. Theor. Biol.* **231**, 249–260. <span style="font-variant:small-caps;">Alizon, S. & Taylor, P.</span> (2008). Empty sites can promote altruistic behavior. *Evolution* **62**, 1335–1344. <span style="font-variant:small-caps;">Axelrod, R.</span> (1984). *The Evolution of Cooperation*. New York: BasicBooks. <span style="font-variant:small-caps;">Dugatkin, L. A. & Wilson, D. S.</span> (1991). Rover: a strategy for exploiting cooperators in a patchy environment. *American Naturalist* **138**, 687–701. <span style="font-variant:small-caps;">Enquist, M. & Leimar, O.</span> (1993). The evolution of cooperation in mobile organisms. *Anim. Behav.* **45**, 747–757. <span style="font-variant:small-caps;">Ferrière, R. & Michod, R. E.</span> (1995). Invading wave of cooperation in a spatial iterated prisoner’s dilemma. *Proc. R. Soc. B* **259**, 77–83. <span style="font-variant:small-caps;">Ferrière, R. & Michod, R. E.</span> (1996). The evolution of cooperation in spatially heterogeneous populations. *Am. Nat.* **147**, 692–717. <span style="font-variant:small-caps;">Hamilton, I. M. & Taborsky, M.</span> (2005). Contingent movement and cooperation evolve under generalized reciprocity. *Proc. R. Soc. B* **272**, 2259–2267. <span style="font-variant:small-caps;">Hamilton, W.</span> (1964). The genetical evolution of social behaviour I. *J. Theor. Biol.* **7**, 1–16. <span style="font-variant:small-caps;">Hauert, C.</span> (2001). Fundamental clusters in spatial $2\times 2$ games. *Proc. R. Soc. Lond. B* **268**, 761–769. <span style="font-variant:small-caps;">Hauert, C. & Doebeli, M.</span> (2004). Spatial structure often inhibits the evolution of cooperation in the snowdrift game. *Nature* **428**, 643–646.
<span style="font-variant:small-caps;">Hofbauer, J. & Sigmund, K.</span> (1998). *Evolutionary Games and Population Dynamics*. Cambridge: Cambridge University Press. <span style="font-variant:small-caps;">Jian-Yue, G., Zhi-Xi, W. & Ying-Hai, W.</span> (2007). Evolutionary snowdrift game with disordered environments in mobile societies. *Chinese Physics* **16**, 3566–3570. <span style="font-variant:small-caps;">Koella, J. C.</span> (2000). The spatial spread of altruism versus the evolutionary response of egoists. *Proc. R. Soc. B* **267**, 1979–1985. <span style="font-variant:small-caps;">Le Galliard, J.-F., Ferrière, F. & Dieckmann, U.</span> (2005). Adaptive evolution of social traits: origin, trajectories, and correlations of altruism and mobility. *Am. Nat.* **165**, 206–224. <span style="font-variant:small-caps;">Liebrand, W. M.</span> (1983). *Simulation Gaming* **14**, 123. <span style="font-variant:small-caps;">Maynard Smith, J.</span> (1982). *Evolution and the Theory of Games*. Cambridge, UK: Cambridge University Press. <span style="font-variant:small-caps;">Maynard Smith, J. & Szathmary, E.</span> (1997). *The Major Transitions in Evolution*. Oxford University Press. <span style="font-variant:small-caps;">Nowak, M. A.</span> (2006). Five rules for the evolution of cooperation. *Science* **314**, 1560–1563. <span style="font-variant:small-caps;">Nowak, M. A. & May, R. M.</span> (1992). Evolutionary games and spatial chaos. *Nature* **359**, 826–829. <span style="font-variant:small-caps;">Poundstone, W.</span> (1992). *Prisoner’s Dilemma*. New York: Doubleday. <span style="font-variant:small-caps;">Rapoport, A.</span> (1966). *Two-Person Game Theory: The Essential Ideas*. Ann Arbor: University of Michigan Press. <span style="font-variant:small-caps;">Skyrms, B.</span> (2004). *The Stag Hunt and the Evolution of Social Structure*. Cambridge University Press. <span style="font-variant:small-caps;">Strassman, J. E., Zhu, Y. & Queller, D. C.</span> (2000).
Altruism and social cheating in the social amoeba *Dictyostelium discoideum*. *Nature* **408**, 965–967. <span style="font-variant:small-caps;">Szabó, G. & Fath, G.</span> (2007). Evolutionary games on graphs. *Phys. Rep.* **446**, 97–216. <span style="font-variant:small-caps;">Taylor, P. D.</span> (1992). Altruism in viscous populations: an inclusive fitness model. *Evol. Ecol.* **6**, 352–356. <span style="font-variant:small-caps;">Vainstein, M. H. & Arenzon, J. J.</span> (2001). Disordered environments in spatial games. *Phys. Rev. E* **64**, 051905. <span style="font-variant:small-caps;">Vainstein, M. H. & Arenzon, J. J.</span> (2008). In preparation. <span style="font-variant:small-caps;">Vainstein, M. H., Silva, A. T. C. & Arenzon, J. J.</span> (2007). Does mobility decrease cooperation? *J. Theor. Biol.* **244**, 722–728. <span style="font-variant:small-caps;">van Baalen, M. & Rand, D. A.</span> (1998). The unit of selection in viscous populations and the evolution of altruism. *J. Theor. Biol.* **193**, 631–648. <span style="font-variant:small-caps;">Weibull, J.</span> (1995). *Evolutionary Game Theory*. Cambridge: MIT Press. <span style="font-variant:small-caps;">Wilson, D. S., Pollock, G. B. & Dugatkin, L. A.</span> (1992). Can altruism evolve in purely viscous populations? *Evol. Ecol.* **6**, 331–341.
--- address: 'Department of Mathematics, Yale University, New Haven, CT 06520' author: - Minxian Zhu title: 'On the semi-regular module and vertex operator algebras' --- Introduction ============ The aim of this paper is to give a proof of a conjecture stated in a previous paper by the author (\[Z1\]). Let ${\mathfrak{{g}}}$ be a simple complex Lie algebra, ${\hat {\mathfrak g}}$ be the associated affine Lie algebra and $h^\vee$ be the dual Coxeter number of ${\mathfrak{{g}}}$. Let $\mathcal A_{{\mathfrak{{g}}}, k}$ be the vertex algebroid associated to ${\mathfrak{{g}}}$ and a complex number $k$; following \[GMS1\], one can construct a vertex algebra $U{\mathcal A}_{{\mathfrak{{g}}}, k}$, called the enveloping algebra of ${\mathcal A}_{{\mathfrak{{g}}}, k}$. Set ${\mathbb V}= U{\mathcal A}_{{\mathfrak{{g}}}, k}$. It is shown in \[AG\] and \[GMS2\] that ${\mathbb V}$ is not only a ${\hat {\mathfrak g}}$-representation of level $k$, but also a ${\hat {\mathfrak g}}$-representation of the dual level $\bar k = - 2h^\vee - k$. Moreover, the two ${\hat {\mathfrak g}}$-actions commute with each other, i.e. ${\mathbb V}$ is a ${\hat {\mathfrak g}}_k \oplus {\hat {\mathfrak g}}_{\bar k}$-representation. When $k \notin {\mathbb {Q}}$, the vertex operator algebra ${\mathbb V}$ decomposes into $$\oplus_{\lambda \in P^+} V_{\lambda, k} {\otimes}V_{\lambda^*, \bar k}$$ as a ${\hat {\mathfrak g}}_k \oplus {\hat {\mathfrak g}}_{\bar k}$-module (see \[FS\], \[Z1\]). Here $P^+$ is the set of dominant integral weights of ${\mathfrak{{g}}}$, $V_{\lambda, k}$ is the Weyl module induced, in level $k$, from $V_\lambda$, the irreducible representation of ${\mathfrak{{g}}}$ with highest weight $\lambda$, and $V_{\lambda^*, \bar k}$ is induced from $V_\lambda^*$ in the dual level $\bar k$. In fact the vertex operators can be constructed using intertwining operators and Knizhnik-Zamolodchikov equations (see \[Z1\]).
In the case where $k \in {\mathbb {Q}}$, the ${\hat {\mathfrak g}}_k \oplus {\hat {\mathfrak g}}_{\bar k}$-module structure of ${\mathbb V}$ is much more complicated. In the present paper, we prove a result about the existence of canonical filtrations of ${\mathbb V}$ conjectured at the end of \[Z1\]. More precisely, we will prove the following. \[foo\][Theorem]{} \[maintheorem\] *[ Let $k \in {\mathbb {Q}}$, $k > - h^\vee$. The vertex operator algebra ${\mathbb V}$ admits an increasing (resp. a decreasing) filtration of ${\hat {\mathfrak g}}_k \oplus {\hat {\mathfrak g}}_{\bar k}$-submodules with factors isomorphic to $$V_{\lambda, k} {\otimes}V_{\lambda, \bar k}^c \quad (\text{resp. } V_{\lambda, k}^c {\otimes}V_{\lambda, \bar k}), \quad \lambda \in P^+,$$ where $V_{\lambda, \bar k}^c$ is the contragredient module of $V_{\lambda, \bar k}$ defined by the anti-involution $x(n) \mapsto - x(-n)$, $\underline c \mapsto \underline c$ of ${\hat {\mathfrak g}}$. ]{}* We need two ingredients to prove the theorem: one is the semi-regular module; the other is the regular representation of the corresponding quantum group at a root of unity. The standard semi-regular module was first introduced by A. Voronov in \[V\] to treat the semi-infinite cohomology of infinite dimensional Lie algebras as a two-sided derived functor of a functor that is neither left nor right exact. It was also studied rigorously by S. M. Arkhipov, who defined the semi-infinite cohomology of associative algebras in the setting of derived categories (see \[A1\]) and discovered a deep semi-infinite duality which generalizes the classical bar duality of graded associative algebras (see \[A2\]). The semi-regular module $S_\gamma$ associated to a semi-infinite structure $\gamma$ of ${\hat {\mathfrak g}}$ (see \[V\]) is the semi-infinite analogue of the universal enveloping algebra $U$ of ${\hat {\mathfrak g}}$.
In particular $S_\gamma$ is a $U$-bimodule, and the tensor product $S_\gamma {\otimes}_U {\mathbb V}$ becomes a ${\hat {\mathfrak g}}_{- \bar k} \oplus {\hat {\mathfrak g}}_{\bar k}$-representation. We will show in Section 3 that $S_\gamma {\otimes}_U {\mathbb V}$ can be embedded into $U^*$ as a bisubmodule. In fact it is spanned by the matrix coefficients of modules from the category $\mathcal O_{\bar k + h^\vee}$, defined and studied by Kazhdan and Lusztig in \[KL1-4\] for $\bar k < - h^\vee$. In the series of papers \[KL1-4\], Kazhdan and Lusztig defined a structure of braided category on $\mathcal O_{\bar k + h^\vee}$, and constructed an equivalence between the tensor category $\mathcal O_{\bar k + h^\vee}$ and the category of finite dimensional integrable representations of the quantum group with parameter $e^{i \pi/ (\bar k+ h^\vee)}$ (in the simply-laced case). This motivated the author to study the structure of regular representations of the quantum group at roots of unity (see \[Z2\]). One of the main results in \[Z2\] is that the quantum function algebra admits an increasing filtration of (bi)submodules such that the subquotients are isomorphic to the tensor products of the dual of Weyl modules $W_{- \omega_0 \lambda}^* {\otimes}W_\lambda^*$ ($\omega_0$ being the longest element in the Weyl group). Translated to the affine Lie algebra, this means that $S_\gamma {\otimes}_U {\mathbb V}$ admits an increasing filtration of ${\hat {\mathfrak g}}_{ -\bar k} \oplus {\hat {\mathfrak g}}_{\bar k}$-submodules with factors isomorphic to $V_{ - \omega_0 \lambda, \bar k}^* {\otimes}V_{\lambda, \bar k}^c$. Applying the functor ${\mathcal H \text{om}}_U(S_\gamma, -)$ (see \[S, Theorem 2.1\]) to this filtration of $S_\gamma {\otimes}_U {\mathbb V}$, we obtain an increasing filtration of ${\hat {\mathfrak g}}_k \oplus {\hat {\mathfrak g}}_{\bar k}$-submodules of the vertex operator algebra ${\mathbb V}$ with factors described in Theorem \[maintheorem\].
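For the reader's convenience, the dictionary between the two labelings of highest weights used above rests on the standard fact (assumed here, not spelled out in the text) that the highest weight of the dual representation is

```latex
\lambda^{*} = -\omega_{0}\,\lambda ,
% so the factors V^{*}_{-\omega_0\lambda,\bar k} \otimes V^{c}_{\lambda,\bar k}
% appearing in the filtration can equally be written as
% V^{*}_{\lambda^{*},\bar k} \otimes V^{c}_{\lambda,\bar k},
% matching the notation \lambda^{*} used later for duals of Weyl modules.
```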
The corresponding decreasing filtration is obtained by using the non-degenerate bilinear form on ${\mathbb V}$ constructed in \[Z1\]. The paper is organized as follows: In Section 2, we follow \[S\] to recall the definition of the semi-regular module $S_\gamma$ and the two functors defined with it. In Section 3, we embed $S_\gamma {\otimes}_U {\mathbb V}$ into the dual of $U$ as a (bi)submodule. In Section 4, we prove the main theorem about the filtrations of the vertex operator algebra ${\mathbb V}$ using results of \[Z2\]. Semi-regular module $S_\gamma$ and equivalence of categories ============================================================ The semi-regular module of a graded Lie algebra with a semi-infinite structure was first introduced by A. Voronov in \[V\], where it was called the “standard semijective module”. It replaces the universal enveloping algebra (and its dual) in the semi-infinite theory, and like the universal enveloping algebra, it possesses left and right (semi)regular representations. Voronov used semijective complexes and resolutions to define the semi-infinite cohomology of infinite dimensional Lie algebras as a two-sided derived functor of a functor that is intermediate between the functors of invariants and coinvariants. In \[A2\], S. M. Arkhipov generalized the classical bar duality of graded associative algebras to give an alternative construction of the semi-infinite cohomology of associative algebras. Given a graded associative algebra $A$ with a triangular decomposition, he introduced the endomorphism algebra $A^\sharp$ of a semi-regular $A$-module $S_A$ (see \[A1\]). In the case where $A$ is the universal enveloping algebra of a graded Lie algebra, the algebra $A^\sharp$ is also a universal enveloping algebra of a Lie algebra which differs from the previous one by a $1$-dimensional central extension (determined by the critical $2$-cocycle).
In the affine Lie algebra case, he proved that the category of all ${\hat {\mathfrak g}}$-modules with a Weyl filtration in level $k$ is contravariantly equivalent to the analogous category in the dual level $\bar k$. This equivalence was obtained directly in \[S\], where W. Soergel used it to find characters of tilting modules of affine Lie algebras and quantum groups. Let us recall the definition of the semi-regular module from \[S, Theorem 1.3\]. Let ${\mathfrak{{g}}}$ be a simple complex Lie algebra. Let ${\hat {\mathfrak g}}= {\mathfrak{{g}}} {\otimes}{\mathbb{C}}[t,t^{-1}] \oplus {\mathbb{C}}\underline c$ be the affine Lie algebra, where the commutator relations are given by $$[ x (m), y(n) ] = [x, y] (m+n) + m\delta_{m+n, 0} (x, y) \underline c.$$ Here $x (n) = x {\otimes}t^n$ for $x\in {\mathfrak{{g}}}$, $(, )$ is the normalized Killing form on ${\mathfrak{{g}}}$ and $\underline c$ is the center. Define a ${\mathbb Z}$-grading on ${\hat {\mathfrak g}}$ by $\text{deg} \,x (n) = n$ and $\text{deg} \, \underline c =0$. Set ${\hat {\mathfrak g}}_{>0} ={\mathfrak{{g}}} {\otimes}t {\mathbb{C}}[ t ]$, ${\hat {\mathfrak g}}_{<0}= {\mathfrak{{g}}} {\otimes}t^{-1} {\mathbb{C}}[ t^{-1} ]$, ${\hat {\mathfrak g}}_0 = {\mathfrak{{g}}} \oplus {\mathbb{C}}\underline c$ and $ {\hat {\mathfrak g}}_{\geq 0} = {\hat {\mathfrak g}}_{>0} \oplus {\hat {\mathfrak g}}_0$. Denote the enveloping algebras of ${\hat {\mathfrak g}}, {\hat {\mathfrak g}}_{\geq 0}, {\hat {\mathfrak g}}_{<0}$ by $U, B, N$. Obviously $U, B, N$ inherit ${\mathbb Z}$-gradings from the corresponding Lie algebras. Define a character $$\gamma: {\hat {\mathfrak g}}_0 = {\mathfrak{{g}}} \oplus {\mathbb{C}}\underline c \to {\mathbb{C}}; \quad \gamma|_{{\mathfrak{{g}}}} = 0, \quad \gamma ( \underline c ) = 2h^\vee,$$ where $h^\vee$ is the dual Coxeter number of ${\mathfrak{{g}}}$. It is easy to check that $\gamma$ is a semi-infinite character for ${\hat {\mathfrak g}}$ (see \[S, Definition 1.1\]). 
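With the bracket now explicit, two routine checks may help orientation (our verification, not spelled out in the source): specializing the defining relation, and confirming that the contragredient map used in Theorem \[maintheorem\] reverses brackets.

```latex
% Specializing [x(m), y(n)] = [x,y](m+n) + m\,\delta_{m+n,0}\,(x,y)\,\underline{c}:
[x(1),\, y(-1)] = [x,y](0) + (x,y)\,\underline{c},
\qquad
[x(2),\, y(-1)] = [x,y](1).
% Both sides are homogeneous for \deg x(n) = n: the right-hand terms of the
% first identity have degree 0, matching \deg x(1) + \deg y(-1) = 0.

% The map \sigma(x(n)) = -x(-n), \sigma(\underline{c}) = \underline{c} from
% Theorem [maintheorem] satisfies \sigma([a,b]) = [\sigma(b), \sigma(a)]:
\sigma([x(m), y(n)]) = -[x,y](-m-n) + m\,\delta_{m+n,0}\,(x,y)\,\underline{c},
\qquad
[\sigma(y(n)), \sigma(x(m))] = [y,x](-m-n) + (-n)\,\delta_{m+n,0}\,(y,x)\,\underline{c},
% and the two agree, since (x,y) = (y,x) and -n = m whenever m + n = 0.
```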
For any two ${\mathbb Z}$-graded vector spaces $M, M'$, define the ${\mathbb Z}$-graded vector space ${\mathcal H \text{om}}_{\mathbb{C}}(M, M')$ with homogeneous components $${\mathcal H \text{om}}_{\mathbb{C}}(M, M')_j = \{ f \in \text{Hom}_{\mathbb{C}}(M, M') | f (M_i) \subset M'_{i+j} \}.$$ The graded dual ${N^ \circledast}= \oplus_i N_i^*$ of $N$ is an $N$-bimodule via the prescriptions $(n f ) (n_1) = f (n_1 n)$ and $ (f n) (n_1) = f( n n_1)$ for any $n, n_1\in N$, $f \in {N^ \circledast}$. We have ${N^ \circledast}= {\mathcal H \text{om}}_{\mathbb{C}}(N, {\mathbb{C}})$, if we equip ${\mathbb{C}}$ with the ${\mathbb Z}$-grading ${\mathbb{C}}= {\mathbb{C}}_0$. Consider the following sequence of isomorphisms of (${\mathbb Z}$-graded) vector spaces: $${\mathcal H \text{om}}_B (U, {\mathbb{C}}_\gamma {\otimes}_ {\mathbb{C}}B) \,\, \tilde \to \,\, {\mathcal H \text{om}}_{\mathbb{C}}(N, B) \,\, \tilde \gets \,\, {N^ \circledast}{\otimes}_{\mathbb{C}}B \,\, \tilde \to \,\, {N^ \circledast}{\otimes}_N U,$$ where ${\mathbb{C}}_\gamma$ is the one-dimensional representation of ${\hat {\mathfrak g}}_{\geq 0}$ defined by the character $\gamma: {\hat {\mathfrak g}}_0 \to {\mathbb{C}}$ and the surjection ${\hat {\mathfrak g}}_{\geq 0} \twoheadrightarrow {\hat {\mathfrak g}}_0$, and ${\mathbb{C}}_\gamma {\otimes}_{\mathbb{C}}B$ is the tensor product of these two representations as a left ${\hat {\mathfrak g}}_{\geq 0}$-module. In the leftmost term, $U$ is considered as a $B$-module via left multiplication of $B$ on $U$, and ${\mathcal H \text{om}}_B (U, {\mathbb{C}}_\gamma {\otimes}_ {\mathbb{C}}B) $ is made into a (left) $U$-module via the right multiplication of $U$ on itself. The first isomorphism is given by restriction to $N$ using the identification ${\mathbb{C}}_\gamma {\otimes}_{\mathbb{C}}B \tilde \to B; 1{\otimes}b \mapsto b$.
As a vector space, the semi-regular module is $$S_\gamma = {N^ \circledast}{\otimes}_{\mathbb{C}}B.$$ It is also a $U$-bimodule: the left (resp. right) $U$-action on $S_\gamma$ is defined via the first two (resp. last) isomorphisms. The semi-infinite character $\gamma$ ensures that these two actions commute. \[levelshift\] $\underline c {\cdot}s = s {\cdot}\underline c + 2h^{\vee} s $ for any $s\in S_\gamma$, where $\underline c {\cdot}s$ and $s {\cdot}\underline c$ stand for the left and right actions of $\underline c$ on $s\in S_\gamma$. Easily verified. [\[S, Theorem 1.3\]]{} \[lrsr\] The map $\iota: {N^ \circledast}\hookrightarrow S_\gamma; f \mapsto f \otimes 1$ is an inclusion of $N$-bimodules. The maps $ U {\otimes}_N {N^ \circledast}\to S_\gamma; u {\otimes}f \mapsto u {\cdot}\iota (f)$ and ${N^ \circledast}{\otimes}_N U \to S_\gamma; f {\otimes}u \mapsto \iota (f) {\cdot}u$ are bijections. The sequence of isomorphisms $$S_\gamma = U {\otimes}_N {N^ \circledast}\cong B {\otimes}_{\mathbb{C}}{N^ \circledast}\cong {\mathcal H \text{om}}_{\mathbb{C}}(N, B) \, \tilde \to \, {\mathcal H \text{om}}_{B-\text {right}} (U, {\mathbb{C}}_{- \gamma} {\otimes}B )$$ induces a right $U$-map from $S_\gamma$ to ${\mathcal H \text{om}}_{B-\text {right}} (U, {\mathbb{C}}_{- \gamma} {\otimes}B )$. The right $U$-module structure of the latter is given by the left multiplication of $U$ on the first argument in ${\mathcal H \text{om}}$. Let $P^+$ be the set of dominant integral weights of ${\mathfrak{{g}}}$ and let $\lambda \in P^+$. Denote by $V_{\lambda, k} = \text{Ind}_{ {\hat {\mathfrak g}}_{\geq 0}}^{{\hat {\mathfrak g}}} V_\lambda$ the Weyl module induced from the finite dimensional irreducible representation of ${\mathfrak{{g}}}$ with highest weight $\lambda$ in level $k$. Let $V_{\lambda, k}^*$ be the graded dual of $V_{\lambda, k}$, on which ${\hat {\mathfrak g}}$ acts by $ X f (v) = - f ( X v)$ for any $X \in {\hat {\mathfrak g}}$, $f \in V_{\lambda, k}^*$ and $v \in V_{\lambda, k}$.
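One immediate consequence of Lemma \[levelshift\] is worth recording (a routine computation on our part, matching the level shift invoked in Section 3): if $\underline c$ acts on a left $U$-module $V$ by the scalar $k$, then on $S_\gamma \otimes_U V$ it acts by $k + 2h^\vee = -\bar k$.

```latex
\underline{c}\cdot(s\otimes v)
  = (\underline{c}\cdot s)\otimes v
  = (s\cdot\underline{c} + 2h^{\vee}s)\otimes v
  = s\otimes(\underline{c}\cdot v) + 2h^{\vee}(s\otimes v)
  = (k + 2h^{\vee})(s\otimes v)
  = -\bar{k}\,(s\otimes v).
% This is the reason S_\gamma \otimes_U \mathbb{V}, formed with the
% \hat{\mathfrak g}_k-structure of \mathbb{V}, carries a
% \hat{\mathfrak g}_{-\bar k}-action in Section 3.
```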
Let $\mathcal M$ (resp. $\mathcal K$) denote the category of all ${\mathbb Z}$-graded representations of ${\hat {\mathfrak g}}$, which over $N$ are isomorphic to finite direct sums of possibly grading-shifted copies of $N$ (resp. ${N^ \circledast}$). In fact $\mathcal M$ (resp. $\mathcal K$) consists precisely of those ${\mathbb Z}$-graded ${\hat {\mathfrak g}}$-modules, which admit a finite filtration with factors isomorphic to Weyl modules (resp. the dual of Weyl modules) (see \[S, Remarks 2.4\]). [\[S, Theorem 2.1\]]{} The functor $S_\gamma {\otimes}_U -: \mathcal M \to \mathcal K$ defines an equivalence of categories with inverse ${\mathcal H \text{om}}_U ( S_\gamma, - )$, such that short exact sequences correspond to short exact sequences. Note that $S_\gamma {\otimes}_U - \cong {N^ \circledast}{\otimes}_N -$ and ${\mathcal H \text{om}}_U (S_\gamma, -) \cong {\mathcal H \text{om}}_N ({N^ \circledast}, - )$ by Proposition \[lrsr\]. \[tens\] Let $E$ be a ${\mathbb Z}$-graded $B$-module bounded from below; then the functor $S_\gamma {\otimes}_U -$ maps $U {\otimes}_B E$ to ${\mathcal H \text{om}}_B (U, {\mathbb{C}}_\gamma {\otimes}E)$. As in the construction of the semi-regular module $S_\gamma$, consider the following sequence of isomorphisms of ${\mathbb Z}$-graded vector spaces: $$S_\gamma {\otimes}_U ( U {\otimes}_B E ) \,\, \cong \,\, {N^ \circledast}{\otimes}_{\mathbb{C}}E \,\, \tilde \to \,\, {\mathcal H \text{om}}_{\mathbb{C}}( N, E ) \,\, \tilde \gets \,\, {\mathcal H \text{om}}_B (U, {\mathbb{C}}_\gamma {\otimes}E).$$ It is straightforward to check that, under these isomorphisms, the (left) $U$-module structure of $S_\gamma {\otimes}_U ( U {\otimes}_B E )$ agrees with that of $ {\mathcal H \text{om}}_B (U, {\mathbb{C}}_\gamma {\otimes}E) $.
\[gtens\] In general, for any ${\mathbb Z}$-graded $B$-module $E'$, the inclusion $S_\gamma {\otimes}_U ( U {\otimes}_B E' ) \cong {N^ \circledast}{\otimes}_{\mathbb{C}}E' \hookrightarrow {\mathcal H \text{om}}_B (U, {\mathbb{C}}_\gamma {\otimes}E' )$ is a $U$-map. \[hom\] Let $F$ be a ${\mathbb Z}$-graded $B$-module bounded from above; then the functor ${\mathcal H \text{om}}_U (S_\gamma, - )$ maps ${\mathcal H \text{om}}_B (U, F) $ to $U {\otimes}_B ({\mathbb{C}}_{-\gamma} {\otimes}F)$. The isomorphism of vector spaces $U {\otimes}_B ({\mathbb{C}}_{-\gamma} {\otimes}F) \, \tilde \to \, {\mathcal H \text{om}}_U (S_\gamma, {\mathcal H \text{om}}_B (U, F) )$, induced from $${\mathcal H \text{om}}_U (S_\gamma, {\mathcal H \text{om}}_B (U, F) ) \cong {\mathcal H \text{om}}_N ({N^ \circledast}, {\mathcal H \text{om}}_{\mathbb{C}}(N, F ))$$ $$\cong {\mathcal H \text{om}}_{\mathbb{C}}({N^ \circledast}, F) \cong N {\otimes}_{\mathbb{C}}F \cong U {\otimes}_B ({\mathbb{C}}_{-\gamma} {\otimes}F),$$ agrees with the composition of (left) $U$-maps $$U {\otimes}_B ({\mathbb{C}}_{-\gamma} {\otimes}F) \to {\mathcal H \text{om}}_U (S_\gamma, S_\gamma {\otimes}_U (U {\otimes}_B ({\mathbb{C}}_{-\gamma} {\otimes}F) ) ) $$ $$\to {\mathcal H \text{om}}_U (S_\gamma, {\mathcal H \text{om}}_B (U, F) ),$$ hence it is a $U$-isomorphism. In particular $S_\gamma {\otimes}_U -$ transforms Weyl modules to the dual of Weyl modules, and ${\mathcal H \text{om}}_U (S_\gamma, -)$ transforms the latter to the former (both with a level shift). \[tiny\] $S_\gamma {\otimes}_U V_{\lambda, k} \cong V_{\lambda^*, \bar k}^*$, and ${\mathcal H \text{om}}_U (S_\gamma, V_{\lambda, \bar k}^*) \cong V_{\lambda^*, k}$, where $\lambda^*$ denotes the highest weight of $V_\lambda^*$. Note that $U {\otimes}_B V_\lambda = V_{\lambda, k}$ and ${\mathcal H \text{om}}_B (U, {\mathbb{C}}_\gamma {\otimes}V_\lambda) \cong V_{\lambda^*, \bar k}^*$ if $\underline c$ acts on $V_\lambda$ as scalar multiplication by $k$.
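The level shift in Corollary \[tiny\] can be checked on the central element (our verification; recall $\gamma(\underline c) = 2h^\vee$ and $\bar k = -2h^\vee - k$):

```latex
% On \mathcal{H}om_B(U, \mathbb{C}_\gamma \otimes V_\lambda), with \underline{c}
% acting on V_\lambda by k, centrality of \underline{c} and B-linearity of
% \varphi give
(\underline{c}\cdot\varphi)(u)
  = \varphi(u\,\underline{c})
  = \varphi(\underline{c}\,u)
  = (\gamma(\underline{c}) + k)\,\varphi(u)
  = (2h^{\vee} + k)\,\varphi(u)
  = -\bar{k}\,\varphi(u),
% which matches the \underline{c}-eigenvalue on the graded dual
% V^{*}_{\lambda^{*},\bar k}: there (\underline{c}\cdot f)(v)
% = -f(\underline{c}\cdot v) = -\bar{k}\,f(v).
```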
Realization of $S_\gamma {\otimes}_U {\mathbb V}$ inside $U^*$ ============================================================== Fix a complex number $k$, and let ${\mathbb V}= U{\mathcal A}_{{\mathfrak{{g}}}, k}$ be the vertex operator algebra associated to the vertex algebroid ${\mathcal A}_{{\mathfrak{{g}}}, k}$ (see \[AG\], \[GMS1, 2\], \[Z1\]). Note that in \[Z1\], we used ${\mathbb V}$ to denote the vertex operator algebra for generic values of $k \notin {\mathbb {Q}}$, but here we adopt this notation with no restriction on $k$. The vertex operator algebra ${\mathbb V}$ admits two commuting actions of ${\hat {\mathfrak g}}$ in dual levels $k, \bar k = -2h^\vee -k$. It follows from Lemma \[levelshift\] that $S_\gamma {\otimes}_U {\mathbb V}$, using the ${\hat {\mathfrak g}}_k$-module structure of ${\mathbb V}$, becomes a ${\hat {\mathfrak g}}_{- \bar k} \oplus {\hat {\mathfrak g}}_{\bar k}$-representation. Define $U({\hat {\mathfrak g}}, k) = U({\hat {\mathfrak g}}) / (\underline c - k) U({\hat {\mathfrak g}})$. Our goal is to construct an embedding of $U$-bimodules $$\Phi: S_\gamma {\otimes}_U {\mathbb V}\hookrightarrow U({\hat {\mathfrak g}}, \bar k)^*.$$ Let ${\mathbb B}= \oplus_{i \leq 0} {\mathbb B}_i$ (denoted by “$B$” with opposite grading in \[Z1\]) be the commutative vertex subalgebra of ${\mathbb V}$ generated by $A$, where $A$ is the commutative algebra of regular functions on an affine connected algebraic group $G$ with Lie algebra ${\mathfrak{{g}}}$. Recall that ${\mathbb B}$ is closed under the actions of $U({\hat {\mathfrak g}}_{\geq 0}, k)$ and $U({\hat {\mathfrak g}}_{\geq 0}, \bar k)$. As a ${\hat {\mathfrak g}}_k$-module, we have ${\mathbb V}\cong U {\otimes}_B {\mathbb B}\cong N {\otimes}_{\mathbb{C}}{\mathbb B}$ (see e.g. \[Z1, Proposition 3.16\]). 
Since $S_\gamma \cong {N^ \circledast}{\otimes}_N U$ as a right $U$-module, we have $$S_\gamma {\otimes}_U {\mathbb V}\cong {N^ \circledast}{\otimes}_N {\mathbb V}\cong {N^ \circledast}{\otimes}_{\mathbb{C}}{\mathbb B}.$$ Define a functional ${\epsilon}: {\mathbb B}\to {\mathbb{C}}$ as follows: ${\epsilon}|_{{\mathbb B}_{\leq -1}} = 0$ and its restriction to ${\mathbb B}_0 = A$ is the evaluation of functions at the identity. Multiplication induces an isomorphism of vector spaces $N {\otimes}_{\mathbb{C}}B \cong U$; hence any $u \in U$ can be written as $u = u_{<0} u_{\geq 0}$ with $u_{<0} \in N$ and $u_{\geq 0} \in B$. Let $\bar{}: U \to U; u \mapsto \overline u$ be the anti-involution of $U$ determined by $ - \text{Id}: {\hat {\mathfrak g}}\to {\hat {\mathfrak g}}$. Define a map $$\Phi: S_\gamma {\otimes}_U {\mathbb V}\to U^*$$ as follows: for any $f \in {N^ \circledast}$, $b\in {\mathbb B}$, $$\Phi (f {\otimes}b) ( u_{<0} u_{\geq 0}) = f (\overline{ u_{<0}}) {\epsilon}( u_{\geq 0}^r \cdot b),$$ where $u_{\geq 0}^r \cdot b$ denotes the $U({\hat {\mathfrak g}}_{\geq 0}, \bar k)$-action on ${\mathbb B}$. In fact $\Phi (f {\otimes}b) \in U({\hat {\mathfrak g}}, \bar k) ^*$. The dual space $U^*$ is a $U$-bimodule via the recipes $ (u {\cdot}g) (u_1) = g(u_1 u)$ and $ (g {\cdot}u) (u_1) = g ( u u_1)$ for any $u, u_1\in U$, $g \in U^*$. \[main\] For any $u\in U$ and $f {\otimes}b \in {N^ \circledast}{\otimes}_{\mathbb{C}}{\mathbb B}$ ( $\cong S_\gamma {\otimes}_U {\mathbb V}$), we have $$\Phi ( u^l {\cdot}(f {\otimes}b)) = ( \Phi (f {\otimes}b)) {\cdot}\bar u,$$ $$\Phi ( u^r {\cdot}(f {\otimes}b)) = u {\cdot}( \Phi (f {\otimes}b)),$$ where $u^l {\cdot}(f {\otimes}b)$, $u^r {\cdot}(f {\otimes}b)$ stand for the ${\hat {\mathfrak g}}_{- \bar k}$- and ${\hat {\mathfrak g}}_{ \bar k}$-actions on $S_\gamma {\otimes}_U {\mathbb V}$ respectively. To prove the theorem, we need some preparations.
First, let $${\Theta}: S_\gamma {\otimes}_U {\mathbb V}\to {\mathcal H \text{om}}_B (U, {\mathbb{C}}_\gamma {\otimes}{\mathbb B})$$ be the (left) $U$-map described in Remark \[gtens\] (taking $E' = {\mathbb B}$). Note that we regard ${\mathbb V}, {\mathbb B}$ as non-positively graded, i.e. taking the opposite of the grading defined by the conformal weights of the vertex operator algebra ${\mathbb V}$. Here ${\mathbb B}$ is regarded as a left $B$-module via the $U({\hat {\mathfrak g}}_{\geq 0}, k)$-action on ${\mathbb B}$, and ${\Theta}$ is a $U({\hat {\mathfrak g}}, - \bar k)$-map. Following \[GMS2, Z1\], let $\tau_i$ be an orthonormal basis of ${\mathfrak{{g}}}$ with respect to the normalized Killing form $(, )$. Let $C_{ijk}$ be the structure constants determined by $[ \tau_i, \tau_j ] = C_{ijk} \tau_k$. We identify ${\mathfrak{{g}}}$ with the tangent space to the identity of $G$. Let $\tau_i^L$ (resp. $\tau_i^R$) be the left (resp. right) invariant vector fields taking the value $ \tau_i$ (resp. $- \tau_i$) at the identity; then there exist regular functions $a^{ij} \in A$ such that $\tau_i^R = a ^{ij} \tau_j^L$ and ${\epsilon}(a ^{ij} ) = - \delta_{ij}$. \[epbl\] Let $\beta: B \to B$ be the automorphism which restricts to ${\hat {\mathfrak g}}_{\geq 0}$ as $X \mapsto \gamma(X) + X$; then for any $u_{\geq 0} \in B$ and $b\in {\mathbb B}$, we have ${\epsilon}( \beta (u_{\geq 0}) ^l {\cdot}b) = {\epsilon}( \overline{u_{\geq 0}} ^r {\cdot}b)$, where $ \beta (u_{\geq 0}) ^l {\cdot}b$, $\overline{u_{\geq 0}} ^r {\cdot}b$ denote the $U({\hat {\mathfrak g}}_{\geq 0}, k)$- and $U({\hat {\mathfrak g}}_{\geq 0}, \bar k)$-actions on ${\mathbb B}$. By \[Z1, Lemma 3.14 (10)\], we have $\tau_j (n) ^l {\cdot}b = \sum_i \sum_{p \geq 0} a ^{ij}_{(-1-p)} \tau_i (n + p) ^r \cdot b$ for any $n\geq 0$, $b\in {\mathbb B}$.
Since ${\epsilon}|_{{\mathbb B}_{\leq -1}} = 0$, we have ${\epsilon}( \tau_j (n) ^l {\cdot}b) = \sum_i {\epsilon}( a ^{ij}_{(-1)} \tau_i ( n) ^r {\cdot}b) = \sum_i ( - \delta_{ij}) {\epsilon}(\tau_i (n ) ^r {\cdot}b) = - {\epsilon}(\tau_j (n) ^r {\cdot}b)$. Since the $U({\hat {\mathfrak g}}_{\geq 0}, k)$- and $U({\hat {\mathfrak g}}_{\geq 0}, \bar k)$-actions on ${\mathbb B}$ commute, for any $u_{\geq 0} = \tau_{j_1} (n_1) \cdots \tau_{j_q} (n_q)$, we have $ {\epsilon}( \beta (u_{\geq 0}) ^l {\cdot}b) = {\epsilon}( u_{\geq 0} ^l {\cdot}b) = {\epsilon}( - \tau_{j_1}(n_1) ^r {\cdot}(\tau_{j_2}(n_2) \cdots ) ^l {\cdot}b) = {\epsilon}( (\tau_{j_2}(n_2) \cdots ) ^l {\cdot}(- \tau_{j_1}(n_1) ) ^r {\cdot}b) = {\epsilon}( (\tau_{j_3}(n_3) \cdots ) ^l {\cdot}(- \tau_{j_2}(n_2) ) ^r {\cdot}(- \tau_{j_1}(n_1) ) ^r {\cdot}b) = \cdots = {\epsilon}( (- \tau_{j_q}(n_q)) ^r {\cdot}\cdots ( - \tau_{j_1}(n_1)) ^r {\cdot}b) = {\epsilon}( \overline { u_{\geq 0}} ^r {\cdot}b). $ We also have ${\epsilon}( \beta ( \underline{c} ) ^l {\cdot}b) = {\epsilon}( ( \underline{c} + 2h^{\vee}) ^l {\cdot}b) = {\epsilon}( (k + 2h^{\vee}) b ) = {\epsilon}( - \bar k b) = {\epsilon}( \bar {\underline{c}} ^r {\cdot}b)$, hence the lemma is proved. \[PhiTh\] For any $ f {\otimes}b \in {N^ \circledast}{\otimes}_{\mathbb{C}}{\mathbb B}$, we have $ \Phi ( f {\otimes}b ) = {\epsilon}\, {\Theta}( f {\otimes}b ) \,\, \bar{}$. By the definition of ${\Theta}$, for any $u = u_{<0} u_{\geq 0} \in U$, we have ${\Theta}( f {\otimes}b ) ( \bar u ) = {\Theta}( f {\otimes}b ) ( \overline { u_{\geq 0}} \, \overline { u_{<0}}) = f ( \overline {u_{<0}} ) \beta ( \overline { u_{\geq 0}} ) ^l {\cdot}b$. Then it follows from Lemma \[epbl\] that ${\epsilon}\, {\Theta}( f {\otimes}b) ( \bar u) = f ( \overline { u_{<0}} ) {\epsilon}( u_{\geq 0} ^r {\cdot}b ) = \Phi ( f {\otimes}b) ( u)$.
For any $u\in U$ and $f {\otimes}b \in {N^ \circledast}{\otimes}_{\mathbb{C}}{\mathbb B}$, we have $\Phi ( u^l {\cdot}( f {\otimes}b ) ) = ( \Phi ( f {\otimes}b ) ) {\cdot}\bar u$. Since ${\Theta}$ is a (left) $U$-map, by Proposition \[PhiTh\], we have $$\Phi ( u^l {\cdot}(f {\otimes}b) ) = {\epsilon}{\Theta}( u^l {\cdot}( f {\otimes}b ) ) \,\, \bar{} = {\epsilon}( u {\cdot}{\Theta}( f {\otimes}b ) ) \,\, \bar{}$$ $$= {\epsilon}{\Theta}( f {\otimes}b ) r_u \,\, \bar{} \,\, = {\epsilon}{\Theta}( f {\otimes}b ) \,\, \bar{} \,\, l_{\bar u} = \Phi ( f {\otimes}b ) \, l_{\bar u} = ( \Phi ( f {\otimes}b ) ) {\cdot}\bar u,$$ where $r_u, l_{\bar u}: U \to U$ denote the right and left multiplications by $u$ and $\bar u$ respectively. This proves one half of Theorem \[main\]. Next we prove the other half of Theorem \[main\], which is to show that $$\Phi ( u^r {\cdot}(f {\otimes}b)) = u {\cdot}( \Phi (f {\otimes}b)).$$ If $u = u_{\geq 0} \in B$, then $ {u_{\geq 0}} ^r {\cdot}( f {\otimes}b ) = f {\otimes}{u_{\geq 0}}^r {\cdot}b$. Hence $ \Phi ( f {\otimes}( {u_{\geq 0}}^r {\cdot}b ) ) ( u'_{<0} u'_{\geq 0} ) = f ( \overline{ u'_{<0} } ) {\epsilon}( {u'_{\geq 0}}^r {\cdot}{u_{\geq 0}}^r {\cdot}b ) = \Phi ( f {\otimes}b ) ( u'_{<0} u' _{\geq 0} u_{\geq 0} ) = u_{\geq 0} {\cdot}( \Phi ( f {\otimes}b )) ( u'_{<0} u'_{\geq 0} )$, which means that $\Phi ( {u_{\geq 0}}^r {\cdot}(f {\otimes}b)) = u_{\geq 0} {\cdot}( \Phi (f {\otimes}b))$. To prove that it also holds for $u = u_{<0} \in N$, it suffices to show that $\Phi ( \tau_i (-1) ^r {\cdot}(f {\otimes}b)) = \tau_i (-1) {\cdot}( \Phi (f {\otimes}b))$ since ${\hat {\mathfrak g}}_{<0}$ is generated by ${\hat {\mathfrak g}}_{-1}$.
Recall that although ${\mathbb B}$ is only closed under the action of $U({\hat {\mathfrak g}}_{\geq 0}, \bar k)$, and not under all of $U({\hat {\mathfrak g}}, \bar k)$, it can be equipped with a ${\hat {\mathfrak g}}_{\bar k}$-module structure $\tilde \rho: U \to \text{End} ( {\mathbb B})$ such that $\tilde \rho ( u_{\geq 0} ) b = { u_{\geq 0} }^r {\cdot}b$ for any $u_{\geq 0} \in B$ and $b\in {\mathbb B}$ (see \[Z1, Lemma 3.29, Remark 3.30\]). In addition, we have $$\tau_i (-1) ^r {\cdot}( f {\otimes}b ) = \sum_j f {\cdot}\tau_j (-1) {\otimes}( a^{ij} b ) + f {\otimes}\tilde \rho( \tau_i (-1) ) b$$ (see \[Z1, Lemma 3.14 (9)\]). Hence for any $u_{<0} \in N$, $u_0\in U ({\hat {\mathfrak g}}_0)$ and $ u_{>0} \in U( {\hat {\mathfrak g}}_{>0} )$, we have $$\Phi ( \tau_i (-1) ^r {\cdot}(f {\otimes}b) ) ( u_{<0} u_0 u_{>0} )$$ $$= \sum_j f ( \tau_j (-1) \overline { u_{<0} } ) {\epsilon}( { u_0 }^r {\cdot}{u_{>0}}^r {\cdot}( a^{ij} b ) ) + f ( \overline { u_{<0} } ) {\epsilon}( { u_0 }^r {\cdot}{u_{>0}}^r {\cdot}\tilde \rho( \tau_i (-1)) b )$$ $$= \sum_j f ( \tau_j (-1) \overline { u_{<0} } ) {\epsilon}( { u_0 }^r {\cdot}a^{ij}_{(-1)} {u_{>0}}^r {\cdot}b) + f ( \overline { u_{<0} } ) {\epsilon}( { u_0 }^r {\cdot}[ u_{>0}, \tau_i (-1) ] ^r {\cdot}b ).$$ The last equality holds because $[ {u_{>0}}^r, a^{ij}_{(-1)} ] |_{\mathbb B}= 0$ (see \[Z1, Lemma 3.14 (4)\]), and $ [ u_{>0}, \tau_i (-1) ] \in B$, ${\epsilon}|_{{\mathbb B}_{\leq -1}} = 0$.
On the other hand, we have $$\tau_i (-1) {\cdot}( \Phi (f {\otimes}b)) ( u_{<0} u_0 u_{>0} ) = \Phi (f {\otimes}b) ( u_{<0} u_0 u_{>0} \tau_i (-1) )$$ $$= \Phi (f {\otimes}b) ( u_{<0} u_0 [ u_{>0}, \tau_i (-1)] + u_{<0} [ u_0, \tau_i (-1) ] u_{>0} + u_{<0} \tau_i (-1) u_{\geq 0} )$$ $$= f ( \overline { u_{<0}} ) {\epsilon}( {u_0}^r {\cdot}[ u_{>0}, \tau_i(-1)]^r {\cdot}b) + \sum_s f ( \overline{ u_{<0} \tau_s (-1) } ) {\epsilon}( F^{i, s} (u_0) ^r {\cdot}{u_{>0}}^r {\cdot}b )$$ $$+ f ( \overline{ u_{<0} \tau_i (-1) } ) {\epsilon}( {u_{\geq 0}}^r {\cdot}b )$$ where $F^{i, s} : U ({\hat {\mathfrak g}}_0) \to U ( {\hat {\mathfrak g}}_0 )$ are maps such that $[u_0, \tau_i (-1) ] = \sum_s \tau_s (-1) F^{i, s} ( u_0 )$ for any $u_0 \in U({\hat {\mathfrak g}}_0)$. Since $\tau_k^R ( a^{ij} ) = C_{kip} a^{pj}$, we have $[ \tau_k (0)^r, a^{ij}_{(-1)} ] = C_{kip} a^{pj}_{(-1)}$ (see \[Z1, Lemma 3.14 (4)\]). Comparing this with the commutator $[ \tau_k (0), \tau_i(-1) ] = C_{kip} \tau_p (-1)$, it follows that $[ {u_0} ^r, a^{ij}_{(-1)} ] = \sum_s a^{sj}_{(-1)} F^{i, s} (u_0)^r$. Hence we have $$\sum_j f ( \tau_j (-1) \overline { u_{<0} } ) {\epsilon}( { u_0 }^r {\cdot}a^{ij}_{(-1)} {u_{>0}}^r {\cdot}b)$$ $$= \sum_j f ( \tau_j (-1) \overline { u_{<0} } ) {\epsilon}( \sum_s a^{sj}_{(-1)} F^{i, s} (u_0)^r {\cdot}{u_{>0}}^r {\cdot}b) + \sum_j f ( \tau_j (-1) \overline { u_{<0} } ) {\epsilon}( a^{ij}_{(-1)} {u_{\geq 0}}^r {\cdot}b)$$ $$= \sum_j f ( \tau_j (-1) \overline { u_{<0} } ) {\epsilon}( - F^{i, j} (u_0) ^r {\cdot}{u_{>0}}^r {\cdot}b) + f ( \tau_i (-1) \overline { u_{<0} } ) {\epsilon}( - {u_{\geq 0}}^r {\cdot}b )$$ $$= \sum_j f ( \overline{ u_{<0} \tau_j (-1) } ) {\epsilon}( F^{i, j} (u_0) ^r {\cdot}{u_{>0}}^r {\cdot}b ) + f ( \overline{ u_{<0} \tau_i (-1) } ) {\epsilon}( {u_{\geq 0}}^r {\cdot}b ),$$ which proves that $\Phi ( \tau_i (-1) ^r {\cdot}(f {\otimes}b) ) = \tau_i (-1) {\cdot}( \Phi (f {\otimes}b))$. The proof of Theorem \[main\] is now complete.
\[explreal\] Following the notation of \[Z1\], let $\{ \widetilde \omega_i\}$ be right invariant $1$-forms dual to $\{ \tau_i^R \}$, and let $\widetilde { {\mathbb B}_0 } $ be the linear span of elements of the form $\partial ^{(j_1)} \widetilde \omega_{i_1} \cdots \partial ^{(j_n)} \widetilde \omega_{i_n} $; then ${\mathbb B}= A {\otimes}\widetilde { {\mathbb B}_0 } $. There is a non-degenerate pairing between $U({\hat {\mathfrak g}}_{>0})$ and $ \widetilde { {\mathbb B}_0 } $, defined by $( u_{>0}, \tilde b ) = {\epsilon}( {u_{>0}}^r {\cdot}\tilde b )$, via which $ \widetilde { {\mathbb B}_0 } $ can be identified with $U({\hat {\mathfrak g}}_{>0})^ \circledast$, the graded dual of $U({\hat {\mathfrak g}}_{>0})$. The regular functions $A$ can be identified with the Hopf dual $U({\mathfrak{{g}}})^*_{\text {Hopf}}$, which is a subalgebra of $U({\mathfrak{{g}}})^*$ defined by $$U({\mathfrak{{g}}})^*_{\text {Hopf}} = \{ \phi \in U({\mathfrak{{g}}})^* | \text{ Ker} \phi \text{ contains a two-sided ideal } J \subset U({\mathfrak{{g}}})$$ $$\qquad \qquad \text{ of finite codimension} \}.$$ It is not hard to see that ${\epsilon}(u_0^r {\cdot}u_{>0}^r {\cdot}a \tilde b ) = {\epsilon}(u_0^r {\cdot}a) {\epsilon}(u_{>0}^r {\cdot}\tilde b)$ for any $u_0 \in U({\mathfrak{{g}}}), u_{>0} \in U({\hat {\mathfrak g}}_{>0}), a\in A$ and $ \tilde b \in \widetilde { {\mathbb B}_0 }$. Hence $$S {\otimes}_U {\mathbb V}\cong {N^ \circledast}{\otimes}{\mathbb B}\cong {N^ \circledast}{\otimes}A {\otimes}\widetilde { {\mathbb B}_0 } \cong U({\hat {\mathfrak g}}_{<0})^\circledast {\otimes}U({\mathfrak{{g}}})^*_{\text {Hopf}} {\otimes}U({\hat {\mathfrak g}}_{>0})^\circledast \subset U({\hat {\mathfrak g}}, \bar k) ^*,$$ and $\Phi$ is injective.

Filtrations of the vertex operator algebra ${\mathbb V}$
========================================================

Fix $k \in {\mathbb {Q}}$, $k > - h^\vee$; set $\varkappa = k + h^\vee > 0$.
Let $\mathcal O_{- \varkappa}$ be the full subcategory of the category of ${\hat {\mathfrak g}}_{ \bar k}$-modules defined by Kazhdan and Lusztig in \[KL1-4\]. They constructed a tensor structure on $\mathcal O_{- \varkappa}$, and established an equivalence of tensor categories between $\mathcal O_{- \varkappa}$ and the category of finite-dimensional integrable representations of the quantum group with quantum parameter $q = e^{- i\pi/ \varkappa}$ (in the simply-laced case). Let $V_{\lambda,\bar k} = \text{Ind}_{{\hat {\mathfrak g}}_{\geq 0}}^{{\hat {\mathfrak g}}} V_\lambda$ be a Weyl module, and denote the irreducible quotient of $V_{\lambda, \bar k}$ by $L_{\lambda, \bar k} $. [\[KL1, Definition 2.15\]]{} $\mathcal O_{- \varkappa}$ is the full subcategory of ${\hat {\mathfrak g}}_{ \bar k}$-modules whose objects admit a finite composition series with factors of the form $L_{\lambda, \bar k}$ for various $\lambda \in P^+$. Let us recall some basic facts about $\mathcal O_{- \varkappa}$. The ${\mathbb Z}_{>0}$-grading on ${\hat {\mathfrak g}}_{>0}$ induces an ${\mathbb {N}}$-grading on the enveloping algebra: $U({\hat {\mathfrak g}}_{>0}) = \bigoplus_{n \geq 0} U({\hat {\mathfrak g}}_{>0})_n$. For any $V \in \mathcal O_{- \varkappa}$, $v \in V$, there exists an $n_1 \in {\mathbb {N}}$ such that $U({\hat {\mathfrak g}}_{>0})_{n_1} {\cdot}v = 0$. A module $\mathcal N$ over ${\mathfrak{{g}}} {\otimes}{\mathbb{C}}[t]$ is said to be a nil-module if $\text{dim}_{\mathbb{C}}\mathcal N < \infty$ and there exists an $n \geq 1$ such that $U({\hat {\mathfrak g}}_{>0})_n \mathcal N = 0$. Extend $\mathcal N$ to a ${\hat {\mathfrak g}}_{\geq 0}$-module by defining the action of $\underline c$ to be multiplication by $\bar k$, and let $\mathcal N_{\bar k} = \text{Ind}_{{\hat {\mathfrak g}}_{\geq 0}}^{{\hat {\mathfrak g}}} \mathcal N$ be the induced module. We say that $\mathcal N_{\bar k}$ is a generalized Weyl module.
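For instance, immediately from the definitions, every Weyl module is itself a generalized Weyl module: take $\mathcal N = V_\lambda$ with ${\hat {\mathfrak g}}_{>0}$ acting by zero, so that $\mathcal N$ is a nil-module with $n = 1$, and then $$\mathcal N_{\bar k} = \text{Ind}_{{\hat {\mathfrak g}}_{\geq 0}}^{{\hat {\mathfrak g}}} V_\lambda = V_{\lambda, \bar k}.$$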
[\[KL1, Theorem 2.22\]]{} A ${\hat {\mathfrak g}}_{\bar k}$-module $V$ is in $\mathcal O_{ -\varkappa}$ if and only if $V$ is a quotient of a generalized Weyl module. Given $V \in \mathcal O_{- \varkappa}$, let $\bar {L_0}: V \to V$ be the Sugawara operator defined by $\bar {L_0} v = - \frac{1}{\varkappa} \sum_{j > 0} \sum_i \tau_i(-j) \tau_i(j) v - \frac{1}{2 \varkappa} \sum_i \tau_i(0) \tau_i(0) v $, where $\{ \tau_i \}$ is an orthonormal basis of ${\mathfrak{{g}}}$ with respect to the normalized Killing form. Note that this operator is well defined and locally finite. Let $V_z$ be the generalized eigenspace of $\bar {L_0}$ with eigenvalue $-z \in {\mathbb{C}}$; then we have $V = \bigoplus_{z \in {\mathbb{C}}} V_z$ with $\text{dim} V_z < \infty$. In fact there exist $z_1, \cdots, z_m \in {\mathbb {Q}}$ such that $\{ z | V_z \neq 0\} \subset \{ z_1 - {\mathbb {N}}\} \cup \cdots \cup \{ z_m - {\mathbb {N}}\}$, and $V$ becomes a ${\mathbb {Q}}$-graded ${\hat {\mathfrak g}}_{\bar k}$-representation, i.e. $x(n) V_z \subset V_{z+n}$ for any $x(n) \in {\hat {\mathfrak g}}$ (see \[KL1, Lemma 2.20, Proposition 2.21\]). In case $V = V_{\lambda, \bar k}$ is a Weyl module, $\bar {L_0}$ acts on $V_{\lambda, \bar k}$ semisimply. More specifically, we have $\bar {L_0} |_{U({\hat {\mathfrak g}}_{<0})_{-n} {\otimes}V_\lambda} = - \frac{ \langle \lambda, \lambda +2\rho \rangle }{2 \varkappa} + n$, where $\rho$ is the half sum of positive roots. Define the dual representation of $V$ as follows: as a vector space $V^* = \bigoplus_z (V_z)^*$; the ${\hat {\mathfrak g}}$-action is given by $ X f (v) = f ( -X v)$ for any $X \in {\hat {\mathfrak g}}, f\in V^*, v\in V$. In particular $V^*$ is a ${\hat {\mathfrak g}}_{- \bar k}$-module and locally $U({\hat {\mathfrak g}}_{<0})$-finite.
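The eigenvalue of $\bar {L_0}$ on a Weyl module can be checked on the top level $V_\lambda \subset V_{\lambda, \bar k}$ by a standard Casimir computation (we use that, in the normalization implicit above, the Casimir element $\sum_i \tau_i(0)^2$ acts on $V_\lambda$ by the scalar $\langle \lambda, \lambda + 2\rho \rangle$): for $v \in V_\lambda$ we have $\tau_i(j) v = 0$ for all $j > 0$, so only the zero modes contribute and $$\bar {L_0}\, v = - \frac{1}{2 \varkappa} \sum_i \tau_i(0) \tau_i(0)\, v = - \frac{ \langle \lambda, \lambda + 2\rho \rangle }{2 \varkappa}\, v,$$ which recovers the stated eigenvalue at $n = 0$.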
In order for $V^*$ to be a graded ${\hat {\mathfrak g}}$-module as well, set $(V^*)_z = (V_{-z} )^*$, or equivalently set $(V^*)_z$ to be the generalized $(-z)$-eigenspace of the operator $L_0' = \frac{1}{\varkappa} \sum_{j > 0} \sum_i \tau_i(j) \tau_i(-j) + \frac{1}{2 \varkappa} \sum_i \tau_i(0) \tau_i(0)$ which acts on $V^*$. The contragredient dual $V^c$ is isomorphic to $V^*$ as a vector space, but instead of using $- \text{Id}: {\hat {\mathfrak g}}\to {\hat {\mathfrak g}}$, we use the anti-involution $x(n) \mapsto - x(-n), \underline c \mapsto \underline c$ to define the ${\hat {\mathfrak g}}$-action on $V^c$. Unlike $V^*$, the contragredient module $V^c$ is a ${\hat {\mathfrak g}}_{\bar k}$-representation, locally $U({\hat {\mathfrak g}}_{>0})$-finite, and in fact belongs to $\mathcal O_{ -\varkappa}$. Given $V \in \mathcal O_{ -\varkappa}$, define a map $\phi_V: V^* {\otimes}V \to U({\hat {\mathfrak g}}, \bar k)^*; \phi_V ( f {\otimes}v ) (u) = f ( u {\cdot}v )$ for any $f \in V^*, v\in V, u \in U({\hat {\mathfrak g}})$. It is easy to see that $\phi_V$ is a ${\hat {\mathfrak g}}_{- \bar k} \oplus {\hat {\mathfrak g}}_{\bar k}$-map, where the ${\hat {\mathfrak g}}_{- \bar k} \oplus {\hat {\mathfrak g}}_{\bar k}$-module structure of $U({\hat {\mathfrak g}}, \bar k)^*$ is given by $ (X, 0) {\cdot}g = - g {\cdot}X$ and $(0, X) {\cdot}g = X {\cdot}g$ for any $X \in {\hat {\mathfrak g}}$, $g \in U({\hat {\mathfrak g}}, \bar k)^*$. Denote the image of $\phi_V$ by ${\mathbb M}(V)$, which is called the matrix coefficients of $V$. Recall the ${\hat {\mathfrak g}}_{ - \bar k} \oplus {\hat {\mathfrak g}}_{\bar k}$-map $\Phi: S_\gamma {\otimes}_U {\mathbb V}\to U({\hat {\mathfrak g}}, \bar k)^*$ defined in Section 3. 
As pointed out in Remark \[explreal\], the map $\Phi$ is injective and its image, which we denote by ${\mathbb M}^{\mathcal O_{- \varkappa}}$, is isomorphic to $U({\hat {\mathfrak g}}_{<0})^\circledast {\otimes}U({\mathfrak{{g}}})^*_{\text{Hopf}} {\otimes}U({\hat {\mathfrak g}}_{>0})^\circledast$. Here $U({\hat {\mathfrak g}}_{<0})^\circledast = \bigoplus_{n \leq 0} (U({\hat {\mathfrak g}}_{<0})_n)^*$, $U({\hat {\mathfrak g}}_{>0})^\circledast = \bigoplus_{n \geq 0} (U ({\hat {\mathfrak g}}_{>0})_n)^*$ are graded duals. ${\mathbb M}^{\mathcal O_{- \varkappa}}$ consists of matrix coefficients of modules from the category $\mathcal O_{- \varkappa}$, i.e. ${\mathbb M}^{\mathcal O_{- \varkappa}} = \sum_{V \in \mathcal O_{- \varkappa}} {\mathbb M}(V)$. Let $V = \bigoplus_z V_z \in \mathcal O_{- \varkappa}$, $v \in V$ and $f \in V^*$. For any $u = u_{<0} u_0 u_{>0} \in U = U({\hat {\mathfrak g}})$, we have $\phi_V ( f {\otimes}v ) ( u) = \langle f, u_{<0} u_0 u_{>0} {\cdot}v \rangle =\langle \overline {u_{<0}} {\cdot}f, u_0 {\cdot}u_{>0} {\cdot}v \rangle$. Since $V \in \mathcal O_{- \varkappa}$, there exist $n_1, n_2 \in {\mathbb {N}}$ such that $U({\hat {\mathfrak g}}_{<0})_{ - n_1} {\cdot}f = U( {\hat {\mathfrak g}}_{>0} )_{n_2} {\cdot}v = 0 $. Moreover each $V_z$ is finite-dimensional and semisimple as a ${\mathfrak{{g}}}$-module; therefore it is not hard to see that $\phi_V (f {\otimes}v) \in U({\hat {\mathfrak g}}_{<0})^\circledast {\otimes}U({\mathfrak{{g}}})^*_{\text{Hopf}} {\otimes}U({\hat {\mathfrak g}}_{>0})^\circledast$, i.e. ${\mathbb M}(V) \subset {\mathbb M}^{\mathcal O_{- \varkappa}}$. On the other hand, let $g \in {\mathbb M}^{\mathcal O_{- \varkappa}}$; then there exists an $n \in {\mathbb {N}}$ such that $U({\hat {\mathfrak g}}_{>0})_{n} {\cdot}g = 0$.
Since each $U({\hat {\mathfrak g}}_{>0})_{n'}$ is finite-dimensional and ${\mathfrak{{g}}}$ acts on ${\mathbb M}^{\mathcal O_{- \varkappa}}$ locally finitely, the ${\hat {\mathfrak g}}_{\geq 0}$-submodule generated by $g$ is a nil-module. Hence the ${\hat {\mathfrak g}}$-submodule $W = U({\hat {\mathfrak g}}) {\cdot}g$ generated by $g$ is a quotient of a generalized Weyl module, and therefore belongs to $\mathcal O_{ - \varkappa}$. Let $\delta$ be the functional on $U^*$ defined by $\delta (g' ) = g' (1)$; then $\delta \in W^*$ and $g = \phi_W ( \delta {\otimes}g ) \in {\mathbb M}(W)$. Define two operators $\bar {L_0}, L_0'$ that act on ${\mathbb M}^{\mathcal O_{- \varkappa}}$ as follows: for any $g \in {\mathbb M}^{\mathcal O_{- \varkappa}}$, set $\bar {L_0} g = - \frac{1}{\varkappa} \sum_{j > 0} \sum_i \tau_i(-j) {\cdot}\tau_i(j) {\cdot}g - \frac{1}{2 \varkappa} \sum_i \tau_i(0) {\cdot}\tau_i(0) {\cdot}g$ and $L_0' g = \frac{1}{\varkappa} \sum_{j > 0} \sum_i g {\cdot}\tau_i (- j ) {\cdot}\tau_i( j ) + \frac{1}{2 \varkappa} \sum_i g {\cdot}\tau_i(0) {\cdot}\tau_i(0) $. Let ${\mathbb M}^{\mathcal O_{- \varkappa}}_{z', z}$ be the subspace consisting of all $g \in {\mathbb M}^{\mathcal O_{- \varkappa}}$ such that $g$ is in the kernel of some power of $\bar {L_0} + z \text{Id}$ and the kernel of some power of $L_0' + z' \text{Id}$. Then ${\mathbb M}^{\mathcal O_{- \varkappa}} = \bigoplus_{z, z'} {\mathbb M}^{\mathcal O_{- \varkappa}}_{z', z}$, and $\phi_V ( (V^*)_{z'} {\otimes}V_z ) \subset {\mathbb M}^{\mathcal O_{- \varkappa}}_{z', z}$ for any $V \in \mathcal O_{-\varkappa}$. Moreover $ {\mathbb M}^{\mathcal O_{- \varkappa}}_{z', z} {\cdot}x(n) \subset {\mathbb M}^{\mathcal O_{- \varkappa}}_{z' +n, z}$ and $ x(n) {\cdot}{\mathbb M}^{\mathcal O_{- \varkappa}}_{z', z} \subset {\mathbb M}^{\mathcal O_{- \varkappa}}_{z', z+ n}$ for any $x (n) \in {\hat {\mathfrak g}}$.
Define a ${\mathbb Z}$-grading on ${\mathbb M}^{\mathcal O_{- \varkappa}}$: for any $g_1 \in (U({\hat {\mathfrak g}}_{<0})_n)^*$, $a \in U({\mathfrak{{g}}})^*_{\text{Hopf}}$, $g_2 \in (U({\hat {\mathfrak g}}_{>0})_{n'})^*$, define $\text{deg } g_1 {\otimes}a {\otimes}g_2 = - n - n'$; set ${\mathbb M}^{\mathcal O_{- \varkappa}}_{\,\, n} = \{ g | \text{ deg } g = n \}$. It is not difficult to see that ${\mathbb M}^{\mathcal O_{- \varkappa}}_{\,\, n} = \bigoplus_{z + z' = n} {\mathbb M}^{\mathcal O_{- \varkappa}}_{z', z}$. Following \[KL1, 3.3\], define a partial order on $P^+$ as follows: $\lambda \leq \mu$ if either $\lambda = \mu$ or $\langle \lambda, \lambda + 2 \rho \rangle < \langle \mu, \mu + 2 \rho \rangle$. Let $\mathcal O_{- \varkappa}^s$ be the full subcategory of $\mathcal O_{- \varkappa}$ whose objects are the $V$ in $\mathcal O_{- \varkappa}$ such that the composition factors of $V$ are of the form $L_{\lambda, \bar k}$ for some $\lambda$ in the finite set $F^s = \{ \lambda \in P^+ | \langle \lambda, \lambda + 2 \rho \rangle \leq s \}$. We say that a module $V \in \mathcal O_{- \varkappa}$ is tilting if both $V$ and $V^c$ have a Weyl filtration. For any $\lambda \in P^+$, there exists an indecomposable tilting module $T_{\lambda, \bar k}$ such that $V_{\lambda, \bar k} \hookrightarrow T_{\lambda, \bar k}$, and any other Weyl modules $V_{\mu, \bar k}$ entering the Weyl filtration of $T_{\lambda, \bar k}$ satisfy $\mu < \lambda$ (see \[KL4, Proposition 27.2\]). \[(contra)weylfil\] Let $V, V' \in \mathcal O_{ -\varkappa}$. 1. If $V$ has a (finite) Weyl filtration with factors isomorphic to $V_{\lambda_i, \bar k}$ for various $\lambda_i \in P^+$, then ${\mathbb M}(V) \subset \sum_i {\mathbb M}(T_{\lambda_i, \bar k})$. 2. If $V'$ has a (finite) filtration with factors isomorphic to $V_{\mu_i, \bar k}^c$ for various $\mu_i \in P^+$, then ${\mathbb M}(V') \subset \sum_i {\mathbb M}(T_{\mu_i, \bar k}^c)$. 
The proof is exactly the same as that of \[Z2, Lemma 3.2\]: we can construct an injection $V \hookrightarrow \bigoplus_{i} T_{\lambda_i, \bar k}$, and a surjection $\bigoplus_i T_{\mu_i, \bar k}^c \twoheadrightarrow V'$, since $\text{Ext}_{\mathcal O_{- \varkappa}}^1 ( V_{\lambda, \bar k}, V_{\mu, \bar k}^c ) = 0$ (see \[KL4, Proposition 27.1\]). ${\mathbb M}^{\mathcal O_{- \varkappa}}$ consists of the matrix coefficients of tilting modules from $\mathcal O_{- \varkappa}$, i.e. ${\mathbb M}^{\mathcal O_{- \varkappa}} = \sum_{V \in \mathcal O_{- \varkappa}, V \text{ tilting}} {\mathbb M}(V)$. For any $V \in \mathcal O_{- \varkappa}$, choose $s$ such that $V \in \mathcal O_{- \varkappa}^s$. By \[KL1, Proposition 3.9\], there exists a $P$, projective in $\mathcal O_{- \varkappa}^s$ and having a (finite) Weyl filtration, such that $V$ is a quotient of $P$. Hence by Lemma \[(contra)weylfil\] (1), we have ${\mathbb M}(V) \subset {\mathbb M}(P) \subset \sum_i {\mathbb M}(T_{\lambda_i, \bar k})$ for some $\lambda_i \in F^s$. Order the dominant weights, $P^+ = \{ \nu_1, \cdots, \nu_i, \cdots \}$, in such a way that $\nu_i < \nu_j$ implies $i < j$. Set ${\mathbb M}^{\mathcal O_{- \varkappa}, i} = \sum_{j \leq i} {\mathbb M}(T_{\nu_j, \bar k} )$; then ${\mathbb M}^{\mathcal O_{- \varkappa}, 1} \subset \cdots \subset {\mathbb M}^{\mathcal O_{- \varkappa}, i-1} \subset {\mathbb M}^{\mathcal O_{- \varkappa}, i} \subset \cdots$ is an increasing filtration of ${\hat {\mathfrak g}}_{- \bar k} \oplus {\hat {\mathfrak g}}_{\bar k}$-submodules of ${\mathbb M}^{\mathcal O_{- \varkappa}}$ with factors ${\mathbb M}^{\mathcal O_{- \varkappa}, i} / {\mathbb M}^{\mathcal O_{- \varkappa}, i-1}$ isomorphic to $V_{\nu_i, \bar k}^* {\otimes}V_{- \omega_0 \nu_i, \bar k}^c$, where $\omega_0$ is the longest element in the Weyl group. The proof is the same as that of \[Z2, Theorem 3.3\], using Lemma \[(contra)weylfil\].
\[decomposition\] The category $\mathcal O_{- \varkappa}$ is a direct sum of subcategories corresponding to the orbits of the shifted action of the affine Weyl group on the weight lattice (see \[KL4, Lemma 27.7\]). Hence we can decompose ${\mathbb M}^{\mathcal O_{- \varkappa}}$, as a ${\hat {\mathfrak g}}_{- \bar k} \oplus {\hat {\mathfrak g}}_{\bar k}$-module, into summands corresponding to the orbits as well. Some summands are semisimple (see \[KL4, Proposition 27.4\], \[Z2, Proposition 3.1\]), but all have an increasing filtration of the above type. The vertex operator algebra ${\mathbb V}$ is isomorphic to ${\mathcal H \text{om}}_U (S_\gamma, {\mathbb M}^{\mathcal O_{- \varkappa}} )$ as a ${\hat {\mathfrak g}}_k \oplus {\hat {\mathfrak g}}_{\bar k}$-module. Recall that ${\mathbb M}^{\mathcal O_{- \varkappa}} \cong S_\gamma {\otimes}_U {\mathbb V}= {N^ \circledast}{\otimes}{\mathbb B}$. Hence ${\mathcal H \text{om}}_U (S_\gamma, {\mathbb M}^{\mathcal O_{- \varkappa}} ) \cong {\mathcal H \text{om}}_N ({N^ \circledast}, {N^ \circledast}{\otimes}{\mathbb B}) \cong {\mathcal H \text{om}}_{\mathbb{C}}( {N^ \circledast}, {\mathbb B}) \cong N {\otimes}{\mathbb B}\cong {\mathbb V}$, where the second-to-last isomorphism holds because ${\mathbb B}$ is non-positively graded while ${N^ \circledast}$ is non-negatively graded. Moreover the induced isomorphism ${\mathbb V}\to {\mathcal H \text{om}}_U (S_\gamma, {\mathbb M}^{\mathcal O_{- \varkappa}} ) \cong {\mathcal H \text{om}}_U (S_\gamma, S_\gamma {\otimes}_U {\mathbb V})$ is a ${\hat {\mathfrak g}}\oplus {\hat {\mathfrak g}}$-map. \[small\] For any $b \in {\mathbb B}$, there exists an $i$ such that ${N^ \circledast}{\otimes}b \subset {\mathbb M}^{\mathcal O_{- \varkappa}, i}$. For any $ f \in {N^ \circledast}$ and $u_{\geq 0} \in U({\hat {\mathfrak g}}_{\geq 0})$, we have $u_{\geq 0} {\cdot}(f {\otimes}b) = f {\otimes}( u_{\geq 0}^r {\cdot}b )$.
Let $\mathcal N$ be the $U({\hat {\mathfrak g}}_{\geq 0}, \bar k)$-submodule of ${\mathbb B}$ generated by $b$; then $\mathcal N$ is a nil-module and the ${\hat {\mathfrak g}}_{\bar k}$-submodule $U({\hat {\mathfrak g}}, \bar k) {\cdot}(f {\otimes}b)$ generated by $f {\otimes}b$ is a quotient of the generalized Weyl module $\mathcal N_{\bar k}$. Hence $f {\otimes}b \in {\mathbb M}(\mathcal N_{\bar k})$ for any $f \in {N^ \circledast}$, and therefore there exists an $i$ such that ${N^ \circledast}{\otimes}b \subset {\mathbb M}^{\mathcal O_{- \varkappa}, i}$. \[main2\] Set $\Sigma^i = {\mathcal H \text{om}}_U (S_\gamma, {\mathbb M}^{\mathcal O_{- \varkappa}, i} )$, then ${\mathbb V}= \bigcup_i \Sigma^i$ and $\Sigma^1 \subset \cdots \subset \Sigma^{i-1} \subset \Sigma^i \subset \cdots $ is an increasing filtration of ${\hat {\mathfrak g}}_k \oplus {\hat {\mathfrak g}}_{\bar k}$-submodules of ${\mathbb V}$ with factors $\Sigma^i / \Sigma^{i-1}$ isomorphic to $V_{ -\omega_0 \nu_i, k} {\otimes}V_{ - \omega_0 \nu_i, \bar k}^c$. For any $u_{<0} {\otimes}b \in N {\otimes}{\mathbb B}\cong {\mathbb V}$, let $\mathcal N' \subset {\mathbb B}$ be the $U({\hat {\mathfrak g}}_{\geq 0}, k)$-submodule generated by $b$; then $\mathcal N' $ is finite-dimensional. For any $s \in S_\gamma$ we have $p ( s {\otimes}(u_{<0} {\otimes}b ) ) \in {N^ \circledast}{\otimes}\mathcal N'$, where $p: S_\gamma {\otimes}{\mathbb V}\to S_\gamma {\otimes}_U {\mathbb V}$ is the canonical projection. By Lemma \[small\], there exists an $i$ such that $p ( s {\otimes}(u_{<0} {\otimes}b ) ) \in {\mathbb M}^{\mathcal O_{- \varkappa}, i}$ for any $s \in S_\gamma$, hence $u_{<0} {\otimes}b \in {\mathcal H \text{om}}_U (S_\gamma, {\mathbb M}^{\mathcal O_{- \varkappa}, i}) = \Sigma^i$. This proves that ${\mathbb V}= \bigcup_i \Sigma^i$.
Note that ${\mathbb M}^{\mathcal O_{- \varkappa}, i} = \bigoplus_{z', z} {\mathbb M}^{\mathcal O_{- \varkappa}, i}_{z', z}$ with $\text{dim } {\mathbb M}^{\mathcal O_{- \varkappa}, i}_{z', z} < \infty$. Fix $z$; the exact sequence of ${\hat {\mathfrak g}}_{-\bar k} \oplus {\hat {\mathfrak g}}_{\bar k}$-modules $0 \to {\mathbb M}^{\mathcal O_{- \varkappa}, i-1} \to {\mathbb M}^{\mathcal O_{- \varkappa}, i} \to V_{\nu_i, \bar k}^* {\otimes}V_{ - \omega_0 \nu_i, \bar k}^c \to 0$ restricts to an exact sequence of ${\hat {\mathfrak g}}_{- \bar k}$-modules $0 \to \bigoplus_{z'} {\mathbb M}^{\mathcal O_{- \varkappa}, i-1}_{z', z} \to \bigoplus_{z'} {\mathbb M}^{\mathcal O_{- \varkappa}, i}_{z', z} \to V_{\nu_i, \bar k}^* {\otimes}(V_{ - \omega_0 \nu_i, \bar k}^c)_z \to 0$. Since $V_{\nu_i, \bar k}^* {\otimes}(V_{ - \omega_0 \nu_i, \bar k}^c)_z$ is isomorphic to a finite direct sum of grading-shifted copies of ${N^ \circledast}$ over $N$, by induction on $i$, so is $\bigoplus_{z'} {\mathbb M}^{\mathcal O_{- \varkappa}, i}_{z', z}$ for each $i$, and the two exact sequences split over $N$, which means that there exists a grading preserving $N$-map $V_{\nu_i, \bar k}^* {\otimes}V_{ - \omega_0 \nu_i, \bar k}^c $ $\to {\mathbb M}^{\mathcal O_{- \varkappa}, i} $ such that its composition with the projection is the identity on the former. Therefore the sequence of ${\hat {\mathfrak g}}_k \oplus {\hat {\mathfrak g}}_{\bar k}$-modules $0 \to {\mathcal H \text{om}}_U (S_\gamma, {\mathbb M}^{\mathcal O_{- \varkappa}, i-1} ) \to {\mathcal H \text{om}}_U (S_\gamma, {\mathbb M}^{\mathcal O_{- \varkappa}, i} ) \to {\mathcal H \text{om}}_U (S_\gamma, V_{\nu_i, \bar k}^* {\otimes}V_{ - \omega_0 \nu_i, \bar k}^c) \to 0$ is exact since ${\mathcal H \text{om}}_U (S_\gamma, - ) \cong {\mathcal H \text{om}}_N ({N^ \circledast}, -)$.
Hence we have $\Sigma^i / \Sigma^{i-1} \cong {\mathcal H \text{om}}_U (S_\gamma, V_{\nu_i, \bar k}^* {\otimes}V_{ - \omega_0 \nu_i, \bar k}^c)$, which is isomorphic to $V_{ - \omega_0 \nu_i, k} {\otimes}V_{ - \omega_0 \nu_i, \bar k}^c$ by Proposition \[hom\], Lemma \[tiny\] and the fact that the grading on $V_{ - \omega_0 \nu_i, \bar k}^c$ is bounded from above. The decomposition of ${\mathbb M}^{\mathcal O_{- \varkappa}}$ discussed in Remark \[decomposition\] leads to a decomposition of ${\mathbb V}$, as a ${\hat {\mathfrak g}}_k \oplus {\hat {\mathfrak g}}_{\bar k}$-module, into summands corresponding to the orbits of the affine Weyl group on the weight lattice. Again some summands are semisimple, but each has an increasing filtration of the above type. The vertex operator algebra ${\mathbb V}$ admits a decreasing filtration of ${\hat {\mathfrak g}}_k \oplus {\hat {\mathfrak g}}_{\bar k}$-submodules ${\mathbb V}\supset \Xi_1 \supset \cdots \supset \Xi_{i-1} \supset \Xi_i \supset \cdots $ with factors $\Xi_{i-1} / \Xi_i $ isomorphic to $V_{- \omega_0 \nu_i, k}^c {\otimes}V_{ - \omega_0 \nu_i, \bar k}$, and $\bigcap_i \Xi_i = 0$. Let $L_0, \bar {L_0}: {\mathbb V}\to {\mathbb V}$ be the Sugawara operators associated to the ${\hat {\mathfrak g}}_k$- and ${\hat {\mathfrak g}}_{\bar k}$-actions on ${\mathbb V}$ respectively, i.e. $L_0 = \frac{1} { \varkappa} \sum_{j >0} \sum_i \tau_i (-j) \tau_i (j) + \frac{1}{2 \varkappa} \sum_i \tau_i (0) \tau_i (0) $, and $\bar {L_0} = - \frac{1} {\varkappa} \sum_{j >0} \sum_i \bar {\tau_i} (-j) \bar {\tau_i} (j) - \frac{1} {2 \varkappa} \sum_i \bar {\tau_i} (0) \bar {\tau_i} (0)$. Now we regard the vertex operator algebra ${\mathbb V}= \bigoplus_{n \geq 0} {\mathbb V}_n$ as non-negatively graded; then the sum $\mathcal L_0 = L_0 + \bar {L_0}$ is the gradation operator, i.e. $\mathcal L_0 |_{{\mathbb V}_n} = n \text {Id}$ (see \[Z1, Propositions 3.20, 3.24\]).
Let ${\mathbb V}_{z_1, z_2}$ be the subspace consisting of $v \in {\mathbb V}$ such that $v$ is killed by some power of $L_0 - z_1 \text{Id}$ and some power of $\bar {L_0} - z_2 \text{Id}$. It follows from Theorem \[main2\] that ${\mathbb V}= \bigoplus_{z_1, z_2} {\mathbb V}_{z_1, z_2}$ with $\text{dim } {\mathbb V}_{z_1, z_2} < \infty$. Recall the symmetric non-degenerate bilinear form $\langle, \rangle: {\mathbb V}\times {\mathbb V}\to {\mathbb{C}}$ constructed in \[Z1, Proposition 3.28\]. It is shown to be compatible with the vertex operator algebra structure of ${\mathbb V}$, in particular we have $\langle x(n) {\cdot}, {\cdot}\rangle = \langle {\cdot}, - x(-n) {\cdot}\rangle$ and $\langle \bar y(n) {\cdot}, {\cdot}\rangle = \langle {\cdot}, - \bar y(-n) {\cdot}\rangle$ for any $x (n) \in {\hat {\mathfrak g}}_k, \bar y(n) \in {\hat {\mathfrak g}}_{\bar k}$. It implies that $\langle L_0 {\cdot}, {\cdot}\rangle = \langle {\cdot}, L_0 {\cdot}\rangle$ and $\langle \bar {L_0} {\cdot}, {\cdot}\rangle = \langle {\cdot}, \bar {L_0} {\cdot}\rangle$. Hence $\langle, \rangle|_{{\mathbb V}_{z_1, z_2} \times {\mathbb V}_{z_1', z_2'}} = 0$ except when $z_1 = z_1'$ and $z_2 = z_2'$, in which case the pairing is non-degenerate. Let ${\mathbb V}^c = \bigoplus_{z_1, z_2} {\mathbb V}_{z_1, z_2}^*$ be the contragredient dual of ${\mathbb V}$, where the ${\hat {\mathfrak g}}_k$- and ${\hat {\mathfrak g}}_{\bar k}$-actions on ${\mathbb V}^c$ are both defined by the anti-involution $x (n) \mapsto - x(-n); \underline c \mapsto \underline c$ of ${\hat {\mathfrak g}}$. Then we have ${\mathbb V}\cong {\mathbb V}^c$ because of the bilinear form $\langle, \rangle$. Set $\Xi_i= \{ v \in {\mathbb V}\, | \, \langle v, \Sigma^i \rangle = 0 \}$; then $\Xi_i $ is a ${\hat {\mathfrak g}}_k \oplus {\hat {\mathfrak g}}_{\bar k}$-submodule of ${\mathbb V}$.
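The orthogonality of distinct generalized eigenspaces asserted above follows from the self-adjointness of $L_0$ and $\bar {L_0}$; on genuine eigenvectors the computation is (the generalized-eigenvector case follows by a short induction on the order of nilpotency): for $v, w \in {\mathbb V}$ with $L_0 v = z_1 v$ and $L_0 w = z_1' w$, $$z_1 \langle v, w \rangle = \langle L_0 v, w \rangle = \langle v, L_0 w \rangle = z_1' \langle v, w \rangle, \qquad \text{so} \qquad (z_1 - z_1') \langle v, w \rangle = 0,$$ and similarly for $\bar {L_0}$; hence $\langle v, w \rangle = 0$ unless $z_1 = z_1'$ and $z_2 = z_2'$.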
Moreover we have $\Xi_i \subset \Xi_{i-1}$, and $\bigcap_i \Xi_i = 0$ because $\bigcup_i \Sigma^i = {\mathbb V}$ and $\langle, \rangle$ is non-degenerate. In fact $\Xi_{i-1} / \Xi_i \cong (\Sigma^i / \Sigma^{i-1})^c \cong ( V_{ -\omega_0 \nu_i, k} {\otimes}V_{ - \omega_0 \nu_i, \bar k}^c )^c \cong V_{ -\omega_0 \nu_i, k}^c {\otimes}V_{ - \omega_0 \nu_i, \bar k}$. [99999]{} S. Arkhipov, Semi-infinite cohomology of quantum groups, Comm. Math. Phys. 188 (1997), no. 2, 379-405. S. Arkhipov, Semi-infinite cohomology of associative algebras and bar duality, Int. Math. Res. Not. 1997, no. 17, 833-863. S. Arkhipov, D. Gaitsgory, Differential operators on the loop group via chiral algebras, Int. Math. Res. Not. 2002, no. 4, 165-210. I. Frenkel, K. Styrkas, Modified regular representations of affine and Virasoro algebras, VOA structure and semi-infinite cohomology, Adv. Math. 206 (2006), 57-111. V. Gorbounov, F. Malikov, V. Schechtman, Gerbes of chiral differential operators, II, Vertex algebroids, Invent. Math. 155 (2004), no. 3, 605-680. V. Gorbounov, F. Malikov, V. Schechtman, On chiral differential operators over homogeneous spaces, Int. J. Math. Math. Sci. 26 (2001), no. 2, 83-106. D. Kazhdan, G. Lusztig, Tensor structures arising from affine Lie algebras I, J. Amer. Math. Soc. 6 (1993), 905-947. D. Kazhdan, G. Lusztig, Tensor structures arising from affine Lie algebras II, J. Amer. Math. Soc. 6 (1993), 949-1011. D. Kazhdan, G. Lusztig, Tensor structures arising from affine Lie algebras III, J. Amer. Math. Soc. 7 (1994), 335-381. D. Kazhdan, G. Lusztig, Tensor structures arising from affine Lie algebras IV, J. Amer. Math. Soc. 7 (1994), 383-453. W. Soergel, Character formulas for tilting modules over Kac-Moody algebras, Represent. Theory 2 (1998), 432-448. A. Voronov, Semi-infinite homological algebra, Invent. Math. 113 (1993), 103-146. M. Zhu, Vertex operator algebras associated to modified regular representations of affine Lie algebras, math.QA/0611517. M. Zhu, Regular representations of the quantum groups at roots of unity, preprint, 2007.
--- abstract: 'By the end of 2008 (approximately one year from the time of writing), the NASA SMall EXplorer (SMEX) mission IBEX (Interstellar Boundary Explorer) will begin to return data on the flux of energetic neutral atoms (ENA’s) observed from an eccentric Earth orbit. This data will provide information about the inner heliosheath (the region of post-shock solar wind) where ENA’s are born through charge-exchange between interstellar neutral atoms and plasma protons. However, the observed flux will be a function of the heliosheath thickness, the shape of the proton distribution function, the bulk plasma flow, and loss mechanisms acting on ENA’s traveling to the detector. As such, ENA fluxes obtained by IBEX can be used to better parametrize global models which can then provide improved quantitative data on the shape and plasma characteristics of the heliosphere. In a recent letter [@HPZandF07a], we explored the relationship between various geometries of the global heliosphere and the corresponding ENA all-sky maps. There we concentrated on energies close to the thermal core of the heliosheath distribution (200 eV), which allowed us to assume a simple Maxwellian profile for heliosheath protons. In this paper we investigate ENA fluxes at higher energies (IBEX detects ENA’s up to 6 keV), by assuming that the heliosheath proton distribution can be approximated by a $\kappa$-distribution. The choice of the $\kappa$ parameter derives from observational data of the solar wind (SW). We will look at all-sky ENA maps within the IBEX energy range, as well as ENA energy spectra in several directions. We find that the use of $\kappa$ gives rise to greatly increased ENA fluxes above 1 keV, while medium energy fluxes are somewhat reduced. We show how IBEX data can be used to estimate the spectral slope in the heliosheath, and that the use of $\kappa$ reduces the differences between ENA maps at different energies. 
We also investigate the effect that introducing a $\kappa$-distribution has on the global interaction between the SW and the local interstellar medium (LISM), and find that there is generally an increase in energy transport from the heliosphere into the LISM, due to the modified profile of ENA energies. This results in a termination shock that moves out by 4 AU, a heliopause that moves in by 9 AU and a bow shock 25 AU farther out, in the nose direction.' author: - 'J. Heerikhuisen, N.V. Pogorelov, V. Florinski, G.P. Zank and J. A. le Roux' title: 'The effects of a $\kappa$-distribution in the heliosheath on the global heliosphere and ENA flux at 1 AU' --- Introduction ============ With the crossing of the termination shock (TS) by the [*Voyager*]{} 1 and 2 spacecraft [@BNALCSandMcD05; @DKRHAGHandL05; @SCMcDHLandW05], the post-shock solar wind (SW) region, known as the inner heliosheath [@Zank99], has become an area of increased interest [@igpp_conf6]. Despite its non-functioning plasma instrument, [*Voyager*]{} 1 has provided important data on the flow, energetic particle, and magnetic field orientation in the heliosheath, much of which is poorly understood. Now that [ *Voyager*]{} 2 has crossed the TS at 84 astronomical units (AU), new data will further increase our understanding of the outer reaches of the heliosphere. Although *in situ* measurements by the [*Voyager*]{} spacecraft are immensely valuable, they do not provide much information about the global structure of the heliosphere-interstellar medium interaction region. The Interstellar Boundary Explorer [[ *IBEX*]{}, @McComas_IGPP04; @McComas_IGPP06] will try to infer global heliospheric structure by surveying the sky in energetic neutral atoms (ENA’s) from Earth orbit. ENA’s are created in the heliosheath after a neutral atom from the local interstellar medium (LISM) charge-exchanges with a plasma proton. 
The new neutral atom (generally hydrogen) is born from the proton distribution, and, as such, reflects the characteristic plasma conditions at the point of creation. ENA’s propagate virtually ballistically (particularly ENA hydrogen), subject only to the sun’s gravity and radiation pressure. [*IBEX*]{} will directly detect ENA’s and create all-sky maps at a variety of energies between 10 eV and 6 keV at the rate of one complete map every six months. The challenge to both data analysts and theorists is how to interpret the ENA flux measurements made by the IBEX-Lo (10 eV – 2 keV) and IBEX-Hi (300 eV – 6 keV) instruments. The ENA flux at a given energy will be a function of the properties of the heliosheath along a particular line of sight. As shown in [@HPZandF07a], this includes plasma and neutral number densities, plasma flow speed and direction, plasma temperature, and distance to the heliopause (heliosheath thickness). However, that analysis was limited to energies close to the thermal core of the heliosheath distribution, since we did not incorporate high energy tails in the ENA parent population due to either pick-up ions (PUI’s) or energetic protons accelerated by other mechanisms. Recently, [@Prested_kappa08] used a $\kappa$-distribution for the ENA parent population to obtain ENA maps. The advantage of using this distribution, as opposed to a Maxwellian, is that it has a power-law tail, and is therefore capable of producing ENA’s at suprathermal energies. However, the focus in that paper was on the [*IBEX*]{} instrument’s response to ENA fluxes, and feedback of ENA’s on the global solution was not considered. In this paper we seek to extend the investigations of [@HPZandF07a] to higher energies by adopting a $\kappa$-distribution for heliosheath protons, using an approach similar to [@Prested_kappa08]. The suggestion that the supersonic SW should be described by a $\kappa$-distribution rather than a Maxwellian has a long history [@GABFZPSandH81; @SandT91]. 
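For reference, the following is a minimal numerical sketch of the isotropic $\kappa$-distribution in its commonly used normalization (function names and the sample value $\kappa = 1.63$ are illustrative choices, not taken from the code used in this paper); it exhibits the power-law tail at suprathermal speeds that boosts modeled ENA fluxes above $\sim 1$ keV, and it recovers a Maxwellian in the limit $\kappa \to \infty$ (validity requires $\kappa > 3/2$):

```python
import math

def maxwellian(v, n=1.0, vth=1.0):
    # Isotropic Maxwellian f(v) per unit d^3v, with thermal speed vth.
    return n * (math.pi * vth**2) ** -1.5 * math.exp(-(v / vth) ** 2)

def kappa_dist(v, kappa, n=1.0, theta=1.0):
    # Isotropic kappa-distribution in the standard normalization:
    # f(v) = n (pi kappa theta^2)^{-3/2} Gamma(kappa+1)/Gamma(kappa-1/2)
    #        * (1 + v^2/(kappa theta^2))^{-(kappa+1)}.
    # lgamma avoids overflow of Gamma for large kappa.
    norm = (n * (math.pi * kappa * theta**2) ** -1.5
            * math.exp(math.lgamma(kappa + 1.0) - math.lgamma(kappa - 0.5)))
    return norm * (1.0 + v**2 / (kappa * theta**2)) ** -(kappa + 1.0)

# At five thermal speeds, a hard (small-kappa) tail dwarfs the Maxwellian.
tail_ratio = kappa_dist(5.0, kappa=1.63) / maxwellian(5.0)

# For very large kappa, the two profiles nearly coincide.
recovery_gap = abs(kappa_dist(1.0, kappa=200.0) - maxwellian(1.0))
```

In these dimensionless units the tail enhancement at $v = 5\,v_{\rm th}$ is many orders of magnitude, which is the mechanism behind the increased high-energy ENA fluxes discussed below.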
More recently, with the measurement of PUI’s by [*Ulysses*]{} [@GFandL05; @FandG06], it became apparent that the PUI distribution merged cleanly into the solar wind distribution, yielding an extended energetic tail. This was carried further by [@Mewaldt_etal01], who constructed an extended supersonic SW proton spectrum showing that a high energy tail emerged smoothly from the clearly identifiable low energy solar wind particles. The results of [@Mewaldt_etal01] showed that not only did a continuous power law tail emerge from the thermal distribution, but this tail merged naturally into higher energies associated with (low energy) anomalous cosmic rays (ACR’s) [@DKRHAGHandL05]. The Voyager LECP data obtained in the heliosheath indicates that a power law distribution at thermal energies is maintained, but of course we have no means to show that a tail emerges smoothly from the shocked SW plasma. Nonetheless, we do not expect an abrupt departure from the supersonic SW particle distribution characteristics in that its overall “smoothness” should be preserved.

We use a self-consistently coupled MHD-plasma/kinetic-neutral code to compute a steady-state heliosphere with a $\kappa$-distribution in the SW, and investigate ENA fluxes at 1 AU, looking in particular for signatures which can be related to the heliospheric structure. We begin, however, by investigating the effects of assuming such a distribution on the supersonic and subsonic SW and, due to the non-local coupling mediated by charge-exchanging neutrals, the global heliosphere.

The heliosphere with $\kappa$ heliosheath
=========================================

At around 100 AU the supersonic SW flow encounters the termination shock (TS), whereupon it becomes subsonic and heated. The hot subsonic SW fills the inner heliosheath and heliotail (these features are visible in the computed plasma distributions shown in Figure \[fig:HPFZandL07\_global\]).
At the same time, the solar system is thought to travel supersonically through the partially ionized plasma of the LISM. As a result, a bow shock forms upstream of the heliosphere, and a tangential discontinuity, known as the heliopause (HP), separates the shocked solar and LISM plasmas. Interstellar neutral gas (primarily hydrogen) is weakly coupled to the plasma through charge-exchange, but readily traverses the heliopause (with a filtration ratio of about 45%) and may be detected near Earth at energies that reflect the creation site of the neutral H, ranging from the LISM, to the hot heliosheath, to the fast solar wind.

To determine the flux of neutral atoms at 1 AU, we use a steady-state solution obtained from the 3D heliospheric model based on the 3D MHD code of [@PZandO06] and a 3D version of the kinetic neutral hydrogen code of [@HFandZ06]. The first self-consistently coupled 3D application of this code appears in [@PHandZ08]. A steady-state is reached by iteratively running the coupled plasma and neutral codes until successive iterations converge. Although several plasma-only models of the heliosphere are still in use, it is now recognized that including neutral atoms in a global model is critical to obtaining the correct location and shape of the termination shock and heliopause, as well as determining the right temperature of the heliosheath, since interstellar neutrals contribute significant cooling to the inner heliosheath and significant heating to the outer heliosheath [@PSFandZ07]. We also note that inter-particle collisions do not significantly alter the neutral distribution and that charge-exchange mean free paths are of the order of the size of the heliosphere, so that neutral atoms should ideally be modelled kinetically, with charge-exchange coupling the neutral and charged populations [@BandM93; @AandI05; @HFandZ06].
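The iterative plasma-neutral coupling just described is, in essence, a fixed-point iteration. The sketch below is purely schematic (the solver functions and the toy contraction used to exercise the loop are hypothetical stand-ins, not the actual MHD or kinetic codes):

```python
def coupled_steady_state(mhd_solve, neutral_solve, sources0, tol=1e-3, max_iter=50):
    """Alternate plasma and neutral solves, exchanging charge-exchange
    source terms, until successive iterations converge (fixed point)."""
    sources = sources0
    for _ in range(max_iter):
        plasma = mhd_solve(sources)            # plasma fields given neutral sources
        new_sources = neutral_solve(plasma)    # recompute charge-exchange sources
        change = max(abs(a - b) for a, b in zip(new_sources, sources))
        sources = new_sources
        if change < tol:                       # successive iterates agree
            break
    return plasma, sources

# Toy stand-ins (a simple contraction) just to exercise the control flow:
mhd = lambda s: [0.5 * x + 1.0 for x in s]
neutral = lambda p: [0.5 * x for x in p]
plasma, sources = coupled_steady_state(mhd, neutral, [0.0], tol=1e-10, max_iter=200)
```

With these stand-ins the iteration converges to sources = 2/3 and plasma = 4/3; the real codes iterate 3D fields rather than scalars, but the convergence logic is the same.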
  ----------------------- -------------- -------------- ------------------
  Parameter               Interstellar   Low Speed      High Speed
  $U$ (km/s)              26.4           400            800
  $T$ (K)                 6527           $10^5$         2.6$\times 10^5$
  $n_p$ (cm$^{-3}$)       0.05           7              2.6
  $n_H$ (cm$^{-3}$)       0.15           0              0
  $|B|$ ($\mu$G)          1.5            37.5 ($B_r$)   37.5 ($B_r$)
  $\phi_B$ ($^\circ$)     90
  $\theta_B$ ($^\circ$)   60
  ----------------------- -------------- -------------- ------------------

  : Boundary conditions for the 3D heliospheric model considered here. We use a spherical coordinate system, where $\phi$ is the angle in the ecliptic plane around from the meridional plane and $\theta$ is the angle above the ecliptic plane. The solar rotation axis is assumed orthogonal to the ecliptic plane. The SW is assumed to change from a slow wind to a high speed wind at 35 degrees above the ecliptic plane, as suggested by Ulysses observations [@McComas_etal00_Ulysses] of the SW during solar minimum.[]{data-label="table:3D_bc"}

Our model treats the ion population as a single fluid whose total pressure is the sum of the pressure contributions from electrons, thermal ions (SW or LISM), and PUI’s. Because the pick-up of interstellar neutral H yields a PUI population co-moving with the bulk SW flow, a single fluid model captures exactly the energetics and dynamics of the combined SW/PUI plasma. The only assumption needed is the value of the adiabatic index ($\gamma = 2$ corresponds to no scattering of the PUI distribution, $\gamma = 5/3$ corresponds to scattering of the PUI’s onto a shell distribution) – see, for example, [@KSZandP96] or section 4.1 of [@Zank99]. The pick-up of ions and the creation of new H-atoms are included self-consistently through source integrals in the plasma momentum and energy equations [@Holzer72; @PZandW95]. The pick-up of interstellar neutrals and the creation of PUI’s in the supersonic SW removes energy and momentum from the SW, since the newborn ions are accelerated in the SW motional electric field to co-move with the SW flow.
The fast neutrals created in the supersonic SW propagate radially outward, typically experiencing charge-exchange in the LISM. Pick-up of neutrals in the SW therefore decelerates the flow, and since a population of PUI’s with thermal velocities comparable to the bulk SW speed ($\sim 1$ keV energies) is created, the [*total*]{} pressure/temperature in the one-fluid model begins to increase with increasing heliocentric radius. Of course, the thermal SW ions experience no heating other than due to enhanced dissipation associated with excitation of turbulence by the pick-up process [@WZandM95; @ZMandS96]. These effects are all captured by the self-consistent coupling of plasma, via a one-fluid plasma model, and neutral H, and the plasma pressure and velocity respond directly to the distribution of neutral H throughout the heliosphere. Finally, as neutral H drifts through the heliosphere from upwind to downwind, neutral H is depleted, leading to less pick-up towards the heliotail region. This results in a (relatively weak) upwind-downwind asymmetry in the SW plasma flow velocity (see Figure \[fig:Mach\_speed\], below) and the one-fluid (i.e. PUI-dominated) pressure/temperature.

It should be noted that these results are independent of the specific form of the plasma ion (thermal and PUI) distribution function, as long as it is assumed isotropic. Only in computing the specific source term for both the plasma and neutral equations does the detailed distribution become important, and then primarily for the neutral distribution (since new-born PUI’s are always accelerated by the motional electric field to co-move with the SW flow). What we have just described is the heating/pressurization of a single fluid SW due to charge-exchange with interstellar hydrogen. Our $\kappa$-distribution approach tries to improve on this by using a distribution with core and tail features to approximate the core SW, suprathermal ion, and PUI distributions respectively.
Of course, in reality the solar wind is much better described by separate distributions. In fact, a drawback of our approach is that the value of $\kappa$ we use fixes the ratio between the core and tail number densities, so that one cannot independently change characteristics of the core without making a self-similar change to the wings of the $\kappa$-distribution. In particular, this manifests itself in the radial temperature profile of the solar wind. Observations by [@RPLandB95] suggest that the core SW does not cool adiabatically, but instead appears to be heated. New-born PUI’s form an unstable ring-beam distribution which excites Alfvén waves that then scatter the PUI’s onto a bispherical distribution. The power in the excited waves can be computed geometrically as the difference in energy between an energy-conserving shell distribution and a bispherical distribution for PUI’s [@WandZ94], or directly from quasi-linear theory [@LandI87]. To explain the heating observed by [@RPLandB95], [@WZandM95] suggested that the dissipation of the PUI-excited waves could account for the heating, but it was only with the development of a transport model for magnetic field fluctuations and their turbulent dissipation (which leads to heating of the plasma) that the PUI-excited fluctuations could be properly accounted for [@ZMandS96]. Since the dissipation of magnetic fluctuation power is strengthened in the outer heliosphere by PUI-excited fluctuations, this leads to a corresponding heating of the solar wind plasma in the outer heliosphere. [@MZSandO99] applied the turbulence transport model of [@ZMandS96] to show explicitly that PUI-enhanced turbulent dissipation of magnetic field fluctuations could account for the observed solar wind plasma heating, a result that was examined in considerably more detail by [@SMZNOandR01] [see also @CFandL03; @SIMandR06].
The dissipation of magnetic energy affects only the solar wind core, heating it, but leaves the suprathermal and PUI population unchanged energetically. Within a single fluid description, both the core and tail components of the distribution broaden simultaneously, and we cannot alter the ratio of energization between these components, as would be required if we were to account for turbulent dissipation of magnetic fluctuation energy into the solar wind plasma. Nonetheless, the total dynamics of the system, including charge exchange levels, is preserved, but the detailed energy allotment between the core SW and PUI’s is fixed by the choice of the $\kappa$ parameter.

Figure \[fig:HPFZandL07\_global\] shows cuts of the heliosphere in three planes for the plasma temperature and neutral hydrogen density. These results were obtained using our 3D MHD-plasma/kinetic-neutral model, where we assumed a $\kappa$-distribution for protons in the heliosheath with $\kappa = 1.63$. The SW and LISM boundary conditions used in this calculation are summarized in Table \[table:3D\_bc\]. As described above, the pick-up process for our single ion fluid approach results in solar wind properties expected from observational data – i.e. increased pressure and decreased speed at larger radial distances. To demonstrate this using our code, Figure \[fig:Mach\_speed\] shows profiles of the bulk speed of the SW, and the fast magnetosonic Mach number given by $$\label{eq:mach} M = 2 u_r / \left(\sqrt{c_s^2 + \frac{B^2}{4\pi\rho} + \frac{|B_r|c_s}{\sqrt{\pi\rho}}} + \sqrt{c_s^2 + \frac{B^2}{4\pi\rho} - \frac{|B_r|c_s}{\sqrt{\pi\rho}}} \right) \; ,$$ where $\rho$ and $P$ are the plasma density and pressure, and $c_s^2 = \gamma P/\rho$ is the squared sound speed. The adiabatic index $\gamma = 5/3$. The slowdown in our simulation from 400 km/s at 1 AU, down to 335 km/s at the TS, matches the 15% slowdown inferred from [*Voyager*]{} 2 observations [@RLandW08].
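The two-radical expression in equation (\[eq:mach\]) is algebraically identical to dividing $u_r$ by the usual fast magnetosonic speed for radial propagation, $a_f^2 = \tfrac{1}{2}\big(c_s^2 + V_A^2 + \sqrt{(c_s^2+V_A^2)^2 - 4c_s^2 V_{Ar}^2}\big)$. A minimal numerical check (the plasma values below are illustrative only, not taken from the simulation):

```python
import math

def fast_mach(u_r, rho, P, B, B_r, gamma=5.0/3.0):
    """Fast magnetosonic Mach number via the two-radical form of eq. (1)."""
    cs2 = gamma * P / rho                                 # sound speed squared
    va2 = B * B / (4.0 * math.pi * rho)                   # Alfven speed squared (cgs)
    cross = abs(B_r) * math.sqrt(cs2) / math.sqrt(math.pi * rho)  # = 2 c_s V_Ar
    return 2.0 * u_r / (math.sqrt(cs2 + va2 + cross) + math.sqrt(cs2 + va2 - cross))

def fast_mach_standard(u_r, rho, P, B, B_r, gamma=5.0/3.0):
    """Same Mach number from the textbook fast-speed formula."""
    cs2 = gamma * P / rho
    va2 = B * B / (4.0 * math.pi * rho)
    var2 = B_r * B_r / (4.0 * math.pi * rho)              # radial Alfven speed squared
    af2 = 0.5 * (cs2 + va2 + math.sqrt((cs2 + va2)**2 - 4.0 * cs2 * var2))
    return u_r / math.sqrt(af2)

# Illustrative upstream values (cgs): n = 1e-3 cm^-3, T = 1e5 K, B = 0.5 uG
m_p, k_B = 1.6726e-24, 1.3807e-16
rho = 1.0e-3 * m_p
P = 2.0 * 1.0e-3 * k_B * 1.0e5        # electrons + protons
B, B_r = 0.5e-6, 0.3e-6
u_r = 3.35e7                          # 335 km/s

M1 = fast_mach(u_r, rho, P, B, B_r)
M2 = fast_mach_standard(u_r, rho, P, B, B_r)
```

Both forms agree to machine precision; the identity $(\sqrt{A+b}+\sqrt{A-b})^2 = 2A + 2\sqrt{A^2-b^2}$ with $A = c_s^2 + V_A^2$ and $b = 2c_sV_{Ar}$ makes them equivalent.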
[*Voyager*]{} 2 observed a TS compression ratio of about 2 [@Richardson_AGU07], which corresponds to a Mach number of 1.7 if we assume a simple gas-dynamic shock. Our simulation yields a Mach number of 2.3, which is slightly higher, due, in part, to the absence of a shock precursor. The implications of using a $\kappa$-distribution in the heliosheath, and how this result relates to a traditional Maxwellian approach, are described in the next section.

                                Maxwellian   $\kappa = 1.63$
  ----------------------------- ------------ -----------------
  TS distance (AU)              83           87
  HP distance (AU)              139          131
  BS distance (AU)              400          440
  $n_H$ at TS (cm$^{-3}$)       0.095        0.09
  $n_H$ at H-wall (cm$^{-3}$)   0.23         0.215

  : Comparison of global heliospheric densities and distances in the upstream LISM direction between the solution with a Maxwellian distribution for protons in the heliosheath, and when we take protons to obey a $\kappa$-distribution in the inner heliosheath with $\kappa = 1.63$ and allow feed-back of the modified ENA distribution on the global solution.[]{data-label="table:distances"}

Implications of using a $\kappa$-distribution in the heliosheath {#sec:implications}
----------------------------------------------------------------

Pick-up ions (PUI’s) originate in the SW due to charge-exchange of LISM neutrals with SW protons. However, they do not thermalize with the background SW plasma [@Isenberg86; @Zank99] and are therefore not equilibrated with the SW. Thus, PUI’s constitute a separate suprathermal population of the SW [@MHKSandG85; @GGBFGIOvSandW93; @Gloeckler96; @GandG98]. PUI’s contribute to the power-law tails observed almost universally in the SW plasma distribution [@Mewaldt_etal01; @FandG06].
A simple way to add a power-law tail, and thereby model the proton, energetic particle, and PUI populations as a single distribution, is to assume a generalized Lorentzian, or “$\kappa$”, function [@BAFHandS67; @SandT91; @Collier95; @Leubner04] given by $$\label{eq:kappa_dist} f_p({\bf v}) = \frac{n_p}{\pi^{3/2}\Theta_p^3} \frac{1}{\kappa^{3/2}} \frac{\Gamma(\kappa+1)}{\Gamma(\kappa-1/2)} \left[1 + \frac{1}{\kappa} \frac{({\bf v} - {\bf u}_p)^2}{\Theta_p^2}\right]^{-(\kappa+1)}$$ where $\Theta_p$ is a typical speed related to the effective temperature of the distribution, and is evaluated using the pressure equation (\[eq:pressure\]) below. This distribution has a Maxwellian core, a power-law tail which scales as $v^{-2\kappa-2}$, and reduces to a Maxwellian in the limit of large $\kappa$. Although the core and tail features agree qualitatively with observations, a limitation of the $\kappa$ formalism is that it does not allow us to adjust their relative abundances. The observed flat-topped PUI population is also absent in the $\kappa$ approximation. In Figure \[fig:kappa1.63\_Maxwellian\], we plot a $\kappa$-distribution for $\kappa = 1.63$, along with a Maxwellian distribution. The basic principle in our approach is to note that the MHD equations for the plasma do not change if we assume a $\kappa$-distribution for SW protons. This is facilitated by the fact that the basic fluid conservation laws do not assume any specific form of the distribution function [see for example @Burgers69]. Closure at the second moment is possible if the distribution is isotropic, since the heat flux and the off-diagonal components of the stress tensor are then identically zero. The only difference from conventional fluid dynamics is that the collision integrals do not vanish as they would for a Maxwellian distribution. 
However, collisional frequencies are so low for the SW that we may neglect these collisional terms and treat the distribution function (\[eq:kappa\_dist\]) as “frozen” into the plasma. Even though the SW is effectively collisionless, an MHD approach is still warranted since the plasma has fluid properties perpendicular to the magnetic field, while various wave phenomena help isotropize the distribution [see for example @Kulsrud84]. For these reasons we solve the regular MHD equations to find the bulk plasma quantities, but in the inner heliosheath we simply interpret these as having come from (\[eq:kappa\_dist\]). For simplicity we assume $\kappa = 1.63$ in all SW plasma, which is a value consistent with the data analysis of [@DKRHAGHandL05]. As we show in Section \[sec:ENA\_spectra\], observations by the upcoming [*IBEX*]{} mission can be used to estimate $\kappa$ in the heliosheath.

The two distribution functions, $\kappa$ and Maxwellian, used to model the plasma are linked through the choice of $\Theta_p$, and we reconcile these using the isotropic plasma pressure, given by $$\label{eq:pressure} P = \frac{m_p}{3} \int_0^\infty v^2 f_p(v) \; 4\pi v^2 {\rm d}v = \frac{m_p n_p}{2} \Theta_p^2 \frac{\kappa}{\kappa-3/2}\,.$$ Note that the thermal core collapses as $\kappa \rightarrow 3/2$ and the pressure diverges. This limiting case corresponds to a $v^{-5}$ tail [@FandG06]. For the purposes of comparison, we define an effective temperature for the $\kappa$-distribution $$\label{eq:T_eff} T_{\rm eff} = \frac{P}{n_p k_B}\,.$$ The temperature profiles depicted in Figures \[fig:HPFZandL07\_global\] and \[fig:plasma\_slices\] refer to the effective temperature.

Charge-exchange couples the neutral and plasma populations. However, the charge exchange loss terms are different when we use a $\kappa$-distribution for protons.
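The moments quoted above are easy to check numerically. The sketch below (dimensionless units, illustrative only) verifies the normalization and tail scaling of (\[eq:kappa\_dist\]) with $\kappa = 1.63$, and the pressure closure (\[eq:pressure\]); for the pressure integral we use $\kappa = 4$, since for $\kappa = 1.63$ the $v^4$ integrand decays only as $v^{-1.26}$ and naive quadrature converges too slowly:

```python
import math

def f_kappa(v, kappa, n_p=1.0, theta=1.0):
    """Isotropic kappa-distribution of eq. (2), at speed v with u_p = 0."""
    norm = (n_p / (math.pi**1.5 * theta**3) * kappa**-1.5
            * math.gamma(kappa + 1.0) / math.gamma(kappa - 0.5))
    return norm * (1.0 + v * v / (kappa * theta * theta))**(-(kappa + 1.0))

def moment(kappa, power, vmax=200.0, n=100000):
    """Midpoint-rule integral of 4*pi*v^(2+power) f(v) dv from 0 to vmax."""
    dv = vmax / n
    return sum(4.0 * math.pi * ((i + 0.5) * dv)**(2 + power)
               * f_kappa((i + 0.5) * dv, kappa) * dv for i in range(n))

# (i) normalization: the zeroth moment of eq. (2) returns n_p = 1
density = moment(1.63, 0)

# (ii) tail scaling: log-slope of f between v = 50 and v = 100 is -(2*kappa + 2)
slope = math.log(f_kappa(100.0, 1.63) / f_kappa(50.0, 1.63)) / math.log(2.0)

# (iii) pressure closure, eq. (3), with m_p = 1 and kappa = 4
P_num = moment(4.0, 2) / 3.0
P_closed = 0.5 * 4.0 / (4.0 - 1.5)   # (m_p n_p / 2) theta^2 kappa / (kappa - 3/2)
```

The truncation of the tail at $v_{\rm max} = 200\,\Theta_p$ costs only a few parts in $10^5$ of the number density, so the quadrature reproduces all three results closely.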
In the Appendix we derive the charge exchange rate for a hydrogen atom traveling through a $\kappa$-distribution of protons, which is used in our kinetic code for H atoms in the heliosheath.

Other authors have included pick-up ions into their heliospheric models in various ways. The Bonn model [@FKandS00] includes PUI’s as a separate fluid with a source term due to interstellar neutrals charge-exchanging in the supersonic SW, and a sink due to PUI’s being energized and becoming part of the anomalous cosmic ray population, which is modeled as a separate fluid. The PUI distribution function of the Bonn model is assumed to be isotropic and flat-topped between 0 and $v_{SW}$ in the frame of the SW. Although this type of distribution agrees reasonably well with observations of PUI’s in the supersonic SW [@GandG98], the validity of the same distribution downstream of the TS is more questionable. Such a distribution also does not have a tail that extends beyond the pick-up energy, which is a requirement for obtaining ENA’s at high energies. This model was modified in [@FandS04] to include a significant improvement in the form of the PUI distribution, based on the work of [@FandL00], which includes analytic estimates of the effects of upstream turbulence. Although restricted by axial symmetry, this model includes time-dependent effects, and allows the authors to estimate various properties of ENA’s.

[@MIandC06] recently introduced a more complicated PUI model based on earlier work by [@CFandI03]. In this model a host of different neutral atom and PUI populations are tracked kinetically. This model incorporates more physics than our relatively simple $\kappa$-distribution approach, but to manage the added complexity, it also requires a number of additional assumptions.
These include the form of the velocity diffusion coefficient, the assumption that the magnetic moment is conserved by PUI’s as they cross the TS, and an ad hoc assumption about the downstream energy partition between electrons, protons and PUI’s. The increased computational requirements also force [@MIandC06] to consider only the case of axial symmetry, thereby neglecting the IMF and restricting the ISMF to being aligned with the flow. Although their assumptions are reasonable, it is difficult to determine the influence these have on their conclusions. One of the interesting results from their model is that the locations of the TS, HP and BS change when the effects of PUI’s are allowed to self-consistently react back on the plasma – a result which agrees quite well quantitatively with our findings in the next section.

Effects of heliosheath $\kappa$-distribution on the global solution {#sec:global_solution}
===================================================================

In the preceding section we showed that we may solve the regular MHD equations for the plasma in the heliosheath, and interpret these results in terms of a $\kappa$-distribution for the ion population. It is less clear, however, what effect $\kappa$-distributed neutral atoms originating from the heliosheath will have on the global heliosphere-interstellar medium solution. Figure \[fig:HPFZandL07\_dist\] shows the velocity distribution of heliosheath hydrogen at various locations along the LISM flow vector. It is clear from this figure that a $\kappa = 1.63$ distribution yields significantly more H-atoms with energies above 1 keV than a Maxwellian ion population in the heliosheath. It is also important to note that ENA’s in the heliotail (left plot) show a clear power-law tail ($\sim v^{-2(\kappa + 1)}$), mirroring the plasma, when a $\kappa$-distribution is assumed for heliosheath protons. These tails persist even outside the heliosphere (middle and right plots) for energies above 1 keV.
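The excess of keV atoms is easy to quantify at the level of the parent distributions. A minimal sketch (the $10^6$ K effective temperature is an assumed, illustrative heliosheath value): at equal pressure, the fraction of protons above 1 keV is orders of magnitude larger for $\kappa = 1.63$ than for a Maxwellian.

```python
import math

k_B, m_p = 1.3807e-23, 1.6726e-27          # SI
T_eff = 1.0e6                              # K; assumed heliosheath value
E_thr = 1.0e3 * 1.602e-19                  # 1 keV in J
v_thr = math.sqrt(2.0 * E_thr / m_p)       # ~438 km/s

kappa = 1.63
theta_M = math.sqrt(2.0 * k_B * T_eff / m_p)          # Maxwellian thermal speed
theta_k = theta_M * math.sqrt((kappa - 1.5) / kappa)  # same pressure, via eq. (3)

# Maxwellian number fraction above v_thr (closed form)
x = v_thr / theta_M
frac_max = math.erfc(x) + 2.0 * x * math.exp(-x * x) / math.sqrt(math.pi)

# kappa number fraction above v_thr: integrate 4*pi*t^2 f(t) on a log grid
xk = v_thr / theta_k
C = (4.0 / math.sqrt(math.pi)) * kappa**-1.5 \
    * math.gamma(kappa + 1.0) / math.gamma(kappa - 0.5)

def g(t):                                  # dimensionless speed distribution
    return C * t * t * (1.0 + t * t / kappa)**(-(kappa + 1.0))

n_steps, s0, s1 = 20000, math.log(xk), math.log(2000.0)
ds = (s1 - s0) / n_steps
frac_kappa = sum(g(math.exp(s0 + (i + 0.5) * ds)) * math.exp(s0 + (i + 0.5) * ds) * ds
                 for i in range(n_steps))
```

With these numbers the Maxwellian fraction is a few times $10^{-5}$ while the $\kappa = 1.63$ fraction is near $10^{-2}$, consistent with the enhanced keV ENA production seen in Figure \[fig:HPFZandL07\_dist\].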
To test the effect of keV ENA’s on the global heliosphere, we ran our code with $\kappa = 1.63$ in the heliosheath, and allowed these ENA’s to feed back self-consistently on the global solution. Since H-atoms are modeled kinetically, this presents no extra difficulty for our model. The only difference, by comparison with the case of a Maxwellian proton distribution, is that we need to use a different formula for the relative motion between a given particle and the ambient plasma. This formula is derived in the appendix. Figure \[fig:plasma\_slices\] compares plasma density and temperature along radial lines in the nose, polar and tail directions for the Maxwellian and equilibrated $\kappa = 1.63$ heliosheath cases.

Secondary charge-exchange of neutrals created in the hot heliosheath was identified by [@ZPWandH96] as a critical mechanism for the anomalous transport of energy from the shocked solar wind to the shocked and unshocked LISM. In particular, the upwind region abutting the HP experienced considerable heating as a result of secondary charge-exchange of hot ($\sim 10^6$ K) neutrals with the cold LISM protons. The efficiency of this mechanism of anomalous heat transfer is increased with a $\kappa$-distribution in the inner heliosheath. This results simultaneously in a shrinking of the inner heliosheath and an expansion of the outer heliosheath. The inner heliosheath plasma temperature (defined in terms of pressure) remains unchanged, because the Maxwellian and $\kappa$-distributions have the same second moment (see Section \[sec:implications\]). We find that in the nose direction the termination shock moves out by about 4 AU, while the heliopause moves inward by about 9 AU. The bow shock stand-off distance increases by 25 AU, and the shock itself is weakened by the additional heating of the LISM plasma by fast neutrals from the SW. Table \[table:distances\] summarizes these changes in heliospheric geometry.
The observed modifications to the heliospheric discontinuity locations agree quite well with the changes observed by the multi-component heliospheric model of [@MIandC06], which includes a kinetic representation of PUI’s. These authors report a 5 AU increase in the TS distance and a 12 AU decrease in the distance to the HP, for an axially symmetric calculation without magnetic fields. Another important distinction between the Maxwellian and $\kappa$-distribution based models is that the filtration rate of hydrogen changes at the heliopause. We find that in the Maxwellian case the hydrogen density at the TS is about 63% of the interstellar value, while for the $\kappa$-distributed model the density drops slightly to 60%. As with the TS and HP locations, these results agree quite well with the [@MIandC06] model.

Implications for [*IBEX*]{}
===========================

The Interstellar Boundary EXplorer mission will provide all-sky maps of ENA’s coming from the inner heliosheath, at 14 energy bands from 10 eV to 6 keV. However, this data is unusual in that all the ENA’s detected at a particular pixel and energy bin will have come from a large volume of space with non-uniform plasma properties. As such, it is not possible to invert an ENA map to determine the heliosheath’s shape, size, and plasma distribution. For this reason, we need to use forward modeling to help us understand the relationship between model heliosheaths and their corresponding synthetic ENA maps. In [@HPZandF07a], we identified several possible signatures to infer heliosheath properties from [*IBEX*]{} data. Below we present ENA maps and spectra from our improved heliospheric model, and relate these to the properties of our model heliosheath.

Ionization losses {#sec:ionization_losses}
-----------------

ENA’s propagating from the heliosheath to a detector at 1 AU may experience re-ionization due to charge exchange, electron impact ionization, or photo-ionization.
These effects are of major importance close to the Sun, and in the simplest approximation scale according to $$\label{eq:losses} w = w_0 \exp(-\int\beta \; {\rm d}t) \;,\quad \beta(r) = \beta_E/(r/{\rm AU})^2 \;,\quad \beta_E \simeq 6\times 10^{-7} {\rm s}^{-1}\,,$$ where $w$ is a pseudo-particle weight which is initially equal to $w_0$ at the point of charge-exchange and decays with time as a function of position. Alternatively, we can view $w/w_0$ as the survival probability for a particular particle. We note here that $\beta_E$ does not have to be uniform in all directions, so that ionization losses for particles coming in over the poles could be different from those traveling in the ecliptic plane, and it may also have temporal variations.

Generally ENA’s will travel on effectively straight trajectories since solar gravity is approximately balanced by radiation pressure. [@BandT06] show that for solar minimum conditions the deflection angle will be less than 5 degrees, even for the lowest energies we consider. In the simulations presented here, we assume zero deflection, since we are mainly interested in the gross features of the ENA maps. Trajectory “A” in Figure \[fig:IBEX\_path\] shows the shortest straight-line path to 1 AU for an ENA, while path B represents the longest. If we assume straight line propagation at constant speed $v_0$, then the survival probability (i.e. $w/w_0$) is given by $$P = \exp\left( -\frac{\beta_E}{v_0}\int \frac{{\rm d} x}{x^2+y_0^2} \right) \;,$$ where the integral is taken along the trajectory: for path A, $x$ is the heliocentric distance (in AU) and runs from $1$ to $\infty$ with $y_0 = 0$; for path B, $x$ is the distance (in AU) from the detector along the path and runs from $0$ to $\infty$ with $y_0 = 1$. Upon integration we have $$P_A = \exp\left(-\frac{\beta_E}{v_0}\right) \;,\quad P_B = \exp\left(-\frac{\pi\beta_E}{2v_0}\right)\,,$$ where $v_0$ is the particle speed in AU per second. Here path B is relevant to [*IBEX*]{} observations, but experiences more ionization losses. A simple $\pi/2$ factor in the exponent can be used to switch between 1 AU fluxes and [*IBEX*]{} fluxes, assuming no deflection due to gravity or radiation pressure occurs.
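For a concrete (illustrative) example, a 1 keV hydrogen atom travels at roughly 440 km/s, i.e. about $2.9\times10^{-6}$ AU/s, and the survival probabilities follow directly; the path-B exponent can also be checked against the numerical integral:

```python
import math

beta_E = 6.0e-7                      # s^-1, ionization rate at 1 AU (from the text)
AU = 1.496e11                        # m
m_H = 1.674e-27                      # kg
E = 1.0e3 * 1.602e-19                # 1 keV in J
v0 = math.sqrt(2.0 * E / m_H) / AU   # particle speed in AU/s

# Path B integral: dx / (x^2 + 1) from 0 to infinity = pi/2 (midpoint rule)
n_steps, xmax = 200000, 2000.0
dx = xmax / n_steps
I_B = sum(dx / (((i + 0.5) * dx)**2 + 1.0) for i in range(n_steps))

P_A = math.exp(-beta_E / v0)         # radial path, integral = 1
P_B = math.exp(-beta_E * I_B / v0)   # tangential path
```

This gives $P_A \approx 0.81$ and $P_B \approx 0.72$ for a 1 keV atom; at lower energies the losses grow quickly as $v_0$ drops.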
Figure \[fig:IBEX\_path\] shows survival probability profiles for both paths, and we note that profile “A” corresponds to Figure 4 of [@GRMFFandMcC01]. These loss formulae will be used in the next section to undo the losses simulated in the code, so that we can use the pristine ENA fluxes to construct energy spectra. Such a procedure would also be necessary for [*IBEX*]{} data, when we want to infer properties of the parent plasma.

ENA spectra {#sec:ENA_spectra}
-----------

We may extract information about the proton energy spectrum in the heliosheath by simply plotting the [*IBEX*]{} energy bin data for a particular pixel (i.e. direction). Our global model allows us both to prescribe a form for the heliosheath distribution function (i.e. $\kappa$) and to attempt to deconvolve it from the data. The only difference is that [*IBEX*]{} spectral data will be line-of-sight integrated, rather than taken at a particular point in space. Nevertheless, we have the global data from our model, which we can use to compare an [*IBEX*]{} line-of-sight spectrum with plasma properties along that line of sight. This is particularly interesting in the nose direction, where the plasma distribution observed by the [*Voyager*]{} spacecraft can be compared with the spectral slope inferred from the [*IBEX*]{} data.

To obtain a more accurate representation of the ENA spectrum in the heliosheath, we need to undo the ionization losses experienced by particles as they travel to the detector. In Section \[sec:ionization\_losses\] we derived a simple expression to estimate the survival probability of a particle with a given energy along a particular line of sight. Figure \[fig:ibex\_spectra\] shows three energy spectra for ENA’s originating from the nose, tail and polar directions. For these spectra, we have divided the flux measured at 1 AU by the survival probability for each energy band to undo the ionization losses, as mentioned above.
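The high-energy behavior expected in such spectra follows directly from the $\kappa$-distribution: the differential ENA flux scales as $j(E) \propto v^2 f(v)$, so the tail $f \propto v^{-2\kappa-2}$ gives $j \propto E^{-\kappa}$. A quick numerical confirmation of this logarithmic slope (dimensionless units, illustrative):

```python
import math

kappa, theta = 1.63, 1.0

def f(v):
    """Shape of the kappa-distribution, eq. (2); normalization is irrelevant here."""
    return (1.0 + v * v / (kappa * theta * theta))**(-(kappa + 1.0))

def flux(E):
    """Differential flux j(E) ~ v^2 f(v), with E = v^2/2 in units of the proton mass."""
    v = math.sqrt(2.0 * E)
    return v * v * f(v)

# logarithmic slope d(ln j)/d(ln E), evaluated well above the thermal core
E1, E2 = 200.0, 800.0
slope = math.log(flux(E2) / flux(E1)) / math.log(E2 / E1)
```

The slope comes out within about 1% of $-\kappa = -1.63$, which is the asymptote the simulated spectra approach above $\sim$1 keV.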
We find that for the three directions considered, the logarithmic slope of the energy spectrum tends toward $-\kappa$ above about 1 keV. This result shows that the [*IBEX*]{} data, in spite of being line-of-sight integrated, should be able to help determine the spectral slope of the heliosheath protons in the 0.6 – 6 keV range. Figure \[fig:ibex\_spectra\] also shows that the spectra in the three directions considered have very similar properties. This will not necessarily be true for the real heliosphere, where the post-shock SW may develop different high energy tails in different directions. The dotted line (labeled “nose2”) is for a spectrum in the nose direction obtained using 32 energy bins (compared to about 10 non-overlapping [*IBEX*]{} bins). The agreement between this curve and the green markers shows that, for $\kappa = 1.63$ at least, the number of [*IBEX*]{} bins is sufficient to reproduce the spectrum.

ENA all-sky maps
----------------

The method we use for computing all-sky ENA maps is described in [@HPZandF07a]: we first obtain a steady-state heliosphere and then trace ENA’s born through charge-exchange in the heliosheath down to 1 AU, where they are binned according to energy and direction of origin. Additional ionization losses along the particle’s trajectory act to “evaporate” its computational weight. The key difference from our previous results is that we now assume a $\kappa$-distribution for the heliosheath protons which form the parent population for ENA’s. This modification allows us to obtain ENA’s up to several keV, and is more consistent with SW data. Figure \[fig:skymaps\] shows all-sky ENA maps obtained from our steady-state solution with a $\kappa$-distribution for heliosheath protons. The top right plot shows the ENA map for 200 eV, which can be compared with our previous work [@HPZandF07a], where we did not self-consistently couple the plasma and kinetic neutral atoms, and where we assumed a Maxwellian proton distribution.
We find that when we use a $\kappa$-distribution, the ENA flux at 200 eV is two to three times smaller than for the Maxwellian case, due to the shape of the proton distribution (see Figure \[fig:kappa1.63\_Maxwellian\]) and resulting ENA distribution (Figure \[fig:HPFZandL07\_dist\]), as well as the thinner inner heliosheath resulting from the use of a $\kappa$-distribution (see Section \[sec:global\_solution\]). As expected, this decrease of medium energy (100’s of eV) ENA’s is compensated by an increased ENA flux above 1 keV. Our results predict a flux of about 3 atoms per (cm$^2$ sr s keV) at 6 keV. Less obvious is the decline in low energy flux when compared to the Maxwellian results [@HPZandF07a], even though there are more ENA’s being generated at the lowest energies (see Figure \[fig:HPFZandL07\_dist\]). The principal reason for this is that the SW core temperature is significantly lower when we use $\kappa$, so that these ENA’s lack the energy to propagate upstream, since the bulk speed exceeds the thermal speed of the core. This low SW core temperature is in fact qualitatively consistent with the latest [*Voyager*]{} 2 findings [@Richardson_AGU07].

The heliosphere depicted in Figure \[fig:HPFZandL07\_global\] corresponds approximately to “solar minimum” conditions, with a clearly defined high speed wind emanating from the poles. The high speed wind gives rise to hotter high latitude heliosheath plasma, which in turn increases the energy of ENA’s generated in the subsonic polar SW. The all-sky maps of Figure \[fig:skymaps\] show that at energies above about 1 keV, these streams of hot SW dominate the ENA flux, while at lower energies the central tail region is the major source of ENA’s. Comparing skymaps at different energies, we see from Figure \[fig:skymaps\] that the qualitative properties do not vary widely over the [*IBEX*]{} energy range.
This contrasts sharply with the results for a Maxwellian heliosheath, where we generally see a higher flux coming from the tail than the nose at low energies, and the reverse at high energies [@HPZandF07a]. This can be attributed to the steep decline in the Maxwellian distribution, compared to the much broader $\kappa$-distribution (see Figure \[fig:kappa1.63\_Maxwellian\]), which means that particles observed at a given energy have come from plasma with a narrower range of temperatures. In other words, the relatively cool plasma in the distant heliotail can still be a significant source of high energy ENA’s, if we assume it has a $\kappa$-distribution. Only at the highest energies, above about 2 keV, does the nose-tail asymmetry favor the nose direction. Conclusions =========== We have used our 3D MHD-kinetic code to investigate the impact of assuming an alternative heliosheath proton distribution, a $\kappa$-distribution rather than the more usual Maxwellian, on both the SW-LISM interaction region, and the observed ENA flux at 1 AU. The motivation for this is that pick-up ions, generated when an interstellar neutral atom charge-exchanges in the supersonic solar wind, form high energy tails that are always observed in the solar wind plasma. The $\kappa$-distribution has core and tail features, and is often invoked in data analysis of the SW proton distribution function. The use of a $\kappa$-distribution introduces (possibly) more realistic estimates of the ENA flux at 1 AU, and thereby serves as an important tool in reconciling global heliospheric models with data from the upcoming [*IBEX*]{} mission. One drawback of this approach is that we cannot control the ratio between core and tail populations. 
While obviously not capturing the full details of the thermal and PUI plasma distributions in either the inner heliosheath or throughout the supersonic SW, a $\kappa$-distribution is nonetheless well grounded in observations as a general representation of the SW distribution function. We used $\kappa=1.63$ in our calculations, based on the [*Voyager*]{} 1 LECP data of [@DKRHAGHandL05]. Although the LECP data is for much higher energies than [*IBEX*]{} will measure, we have shown that [*IBEX*]{} data can be used to infer the spectral slope of the heliosheath distribution for energies between 1 keV and 6 keV. The tails of the energy spectra may have different slopes in different directions (over the poles, for example). The use of a $\kappa$-distribution for the ENA parent proton population results in a significant increase of the ENA flux at energies above 1 keV, when compared with a Maxwellian distribution. Our results predict a count rate of about 3 atoms per (cm$^2$ sr s keV) at the highest energies considered by [*IBEX*]{}, which is many orders of magnitude higher than could be expected from a Maxwellian heliosheath distribution. At the same time, there is a marked reduction in the flux for intermediate energies, to about half the Maxwellian value at a few hundred eV. We have also calculated the feedback of the revised ENA distribution on the global heliospheric solution. The result is an increased transport of energy from the inner to the outer heliosheath, with a corresponding thinning of the former and expansion of the latter. The distance between the TS and HP decreases by 13 AU (about 25%) in the nose direction, and the bow shock moves out farther and becomes very weak. The thinner heliosheath is also partly responsible for the decreased ENA flux at energies of a few hundred eV. Finally, we note that we have not considered time-dependent effects in this paper.
[@SFandS07] recently looked at the changes in the ENA maps when they incorporated a simple model of the solar cycle into their 3D hydrodynamic (i.e. no magnetic fields) code which includes a single fluid for neutral gas. They found cyclic changes in the ENA flux at 100 eV, which varied by about 25%. The observed variations at 1 keV were considerably larger, but because they assumed a Maxwellian distribution for protons in the heliosheath, their fluxes were about an order of magnitude lower than ours at this energy. Effectively, they found that fluctuations in ENA flux due to the solar cycle are relatively small for energies close to the core of the distribution (a few hundred eV in the heliosheath), while at high energies the changes in ENA flux are larger. Since the $\kappa$-distribution declines much more slowly than the Maxwellian away from the core, we expect our ENA fluxes to vary by perhaps 50% over a solar cycle for energies relevant to [*IBEX*]{}. This, however, remains to be confirmed. This work was supported by NASA grants NNG05GD45G, NNG06GD48G, and NNG06GD43G, and NSF award ATM-0296114. Calculations were performed on supercomputers Fujitsu Primepower HPC2500, in the framework of the collaborative agreement with the Solar-Terrestrial Environment Laboratory of Nagoya University, Columbia at NASA Ames Research Center (award SMD-06-0167), and IBM Data Star (award ATM-070011) in the San Diego Supercomputer Center. [**Appendix:**]{} Charge-exchange formulation with a $\kappa$-distribution {#appendix-charge-exchange-formulation-with-a-kappa--distribution .unnumbered} =========================================================================== Our kinetic neutral atom method solves the time-dependent Boltzmann equation $$\label{eq:Boltzmann} \frac{\partial}{\partial t}f_H + {\bf v}\cdot\nabla f_H + \frac{\bf F}{m_p}\cdot\nabla_{\bf v} f_H = P - L \;,$$ using a Monte Carlo approach.
Here $f_H$ is the distribution function of neutral hydrogen, ${\bf F}$ is the external force, and $P$ and $L$ are the production and loss terms. Below we derive the loss rate for a neutral particle traveling through a $\kappa$-distribution of protons. The production and loss rates for the hydrogen population may be written as $$P = f_p({\bf x},{\bf v},t)\eta({\bf x},{\bf v},t) \;,$$ $$L = f_H({\bf x},{\bf v},t)\beta({\bf x},{\bf v},t) \;,$$ where $$\label{eq:eta} \eta({\bf x},{\bf v},t) = \int \sigma_{ex} f_H({\bf x},{\bf v}_H,t) \;|{\bf v} - {\bf v}_H| \;{\rm d}{\bf v}_H$$ $$\label{eq:beta1} \beta({\bf x},{\bf v},t) = \int \sigma_{ex} f_p({\bf x},{\bf v}_p,t) \;|{\bf v} - {\bf v}_p| \;{\rm d}{\bf v}_p \;.$$ Here we assume that the charge exchange cross-section, approximated using the [@FSandS62] expression $$\label{eq:sigma_ex} \sigma_{ex}(v_{rel}) = \left[2.1 - 0.092\ln(v_{rel})\right]^2 10^{-14} {\rm cm}^2 \;,$$ varies slowly and can be taken outside the integrals in (\[eq:eta\]) and (\[eq:beta1\]). In the kinetic code we require the neutral loss term $\beta$ to compute charge-exchange on a particle-by-particle basis. To derive this, we use the $\kappa$-distribution for the charged component, i.e., $$f_p({\bf v}_p)=\frac{n_p}{\pi^{3/2}\Theta_p^3}\frac{1}{\kappa^{3/2}} \frac{\Gamma(\kappa+1)}{\Gamma(\kappa-1/2)}\left[1+\frac{1}{\kappa} \frac{({\bf v}_p-{\bf u}_p)^2}{\Theta_p^2}\right]^{-(\kappa+1)},$$ where ${\bf u}_p$ is the bulk velocity and $\Theta_p$ is related to the plasma pressure via equation (\[eq:pressure\]).
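As a quick numerical sanity check on this distribution (an illustrative sketch of ours, not part of the production code; all function and variable names are our own), it should integrate to the density $n_p$, approach a Maxwellian with the same $\Theta_p$ as $\kappa\rightarrow\infty$, and retain an enhanced high-speed tail at finite $\kappa$:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gammaln

def f_kappa(w, theta, n, kappa):
    """Isotropic kappa-distribution of the text, at speed w = |v_p - u_p|."""
    # Gamma(kappa+1)/Gamma(kappa-1/2), computed in log space to avoid overflow
    ratio = np.exp(gammaln(kappa + 1.0) - gammaln(kappa - 0.5))
    norm = n / (np.pi**1.5 * theta**3 * kappa**1.5) * ratio
    return norm * (1.0 + w**2 / (kappa * theta**2)) ** (-(kappa + 1.0))

def f_maxwell(w, theta, n):
    """Maxwellian with the same thermal speed theta (the kappa -> inf limit)."""
    return n / (np.pi**1.5 * theta**3) * np.exp(-w**2 / theta**2)

n_p, theta_p = 1.0, 1.0

# (1) normalization: 4*pi * Int w^2 f(w) dw = n_p
dens, _ = quad(lambda w: 4.0 * np.pi * w**2 * f_kappa(w, theta_p, n_p, 1.63),
               0.0, np.inf)

# (2) Maxwellian limit at large kappa
rel_err = abs(f_kappa(1.0, theta_p, n_p, 1.0e4) / f_maxwell(1.0, theta_p, n_p) - 1.0)

# (3) enhanced tail at kappa = 1.63, three thermal speeds from the core
tail_ratio = f_kappa(3.0, theta_p, n_p, 1.63) / f_maxwell(3.0, theta_p, n_p)
```

The enhanced tail (item 3) is what boosts the predicted ENA flux above 1 keV relative to the Maxwellian case.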
Upon introduction of the new variables ${\bf g}=({\bf v}-{\bf v}_p) /(\sqrt{\kappa}\Theta_p)$ and ${\bf x}=({\bf v}-{\bf u}_p) /(\sqrt{\kappa}\Theta_p)$, equation (\[eq:beta1\]) becomes $$\begin{aligned} \beta=\frac{n_p \sigma_{ex} \Theta_p}{\pi^{3/2}} \frac{\sqrt{\kappa}\Gamma(\kappa+1)}{\Gamma(\kappa-1/2)} \int g[1+({\bf g}-{\bf x})^2]^{-(\kappa+1)}d^3g \nonumber \\ =\frac{2n_p \sigma_{ex}\Theta_p}{\sqrt{\pi}}\frac{\sqrt{\kappa}\Gamma(\kappa+1)} {\Gamma(\kappa-1/2)}\int_0^{\infty}dg\int_{-1}^1 d\mu\;g^3 (1+g^2-2\mu g x+x^2)^{-(\kappa+1)},\end{aligned}$$ where $\mu=\cos\theta$, $\theta$ being the angle between ${\bf g}$ and ${\bf x}$. After integrating over $\mu$ the result is $$\beta=\frac{n_p \sigma_{ex}\Theta_p}{\sqrt{\pi\kappa}x} \frac{\Gamma(\kappa+1)}{\Gamma(\kappa-1/2)}\int_0^{\infty} g^2\left\lbrace[1+(g-x)^2]^{-\kappa}-[1+(g+x)^2]^{-\kappa}\right\rbrace dg.$$ Introducing the new variable $z=g-x$ in the first term and $z=g+x$ in the second term and using the symmetry properties of the integrand, we obtain $$\begin{aligned} \label{eq:beta2} \beta=\frac{2n_p \sigma_{ex}\Theta_p}{\sqrt{\pi\kappa}x} \frac{\Gamma(\kappa+1)}{\Gamma(\kappa-1/2)} \left(\int_0^x z^2(1+z^2)^{-\kappa}dz +x^2\int_0^x(1+z^2)^{-\kappa}dz \right. \nonumber \\ \left. +2x\int_x^{\infty}z(1+z^2)^{-\kappa}dz\right).\end{aligned}$$ The integrals are $$\label{eq:int1} x^2\int_0^x(1+z^2)^{-\kappa}dz =x^3\,{_2F}_1\left(\frac{1}{2},\kappa;\frac{3}{2};-x^2\right) =x^3(1+x^2)^{-\kappa}\,{_2F}_1\left(1,\kappa;\frac{3}{2}; \frac{x^2}{1+x^2}\right),$$ $$\label{eq:int2} 2x\int_x^{\infty}z(1+z^2)^{-\kappa}dz=\frac{x(1+x^2)^{-\kappa+1}}{\kappa-1},$$ $$\label{eq:int3} \int_0^x z^2(1+z^2)^{-\kappa}dz=\frac{x^3}{3}\, {_2F}_1\left(\frac{3}{2},\kappa;\frac{5}{2};-x^2\right) =\frac{x^3}{3}(1+x^2)^{-\kappa}\,{_2F}_1\left(1,\kappa;\frac{5}{2}; \frac{x^2}{1+x^2}\right),$$ where $_2F_1$ is the hypergeometric function.
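The closed forms above are easy to confirm numerically; the following check (our own, with arbitrary test values of $\kappa>1$ and $x$) compares each integral with its hypergeometric expression, including the Pfaff-transformed variant of (\[eq:int1\]):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import hyp2f1

kappa, x = 1.63, 0.7  # arbitrary test values, kappa > 1

# (int1): Int_0^x (1+z^2)^(-kappa) dz = x * 2F1(1/2, kappa; 3/2; -x^2)
num1, _ = quad(lambda z: (1.0 + z**2) ** (-kappa), 0.0, x)
cf1 = x * hyp2f1(0.5, kappa, 1.5, -x**2)
# Pfaff transformation, as in the second equality of (int1)
cf1b = x * (1.0 + x**2) ** (-kappa) * hyp2f1(1.0, kappa, 1.5, x**2 / (1.0 + x**2))

# (int2): 2x * Int_x^inf z (1+z^2)^(-kappa) dz = x (1+x^2)^(1-kappa) / (kappa-1)
num2, _ = quad(lambda z: z * (1.0 + z**2) ** (-kappa), x, np.inf)
cf2 = x * (1.0 + x**2) ** (1.0 - kappa) / (kappa - 1.0)

# (int3): Int_0^x z^2 (1+z^2)^(-kappa) dz = (x^3/3) * 2F1(3/2, kappa; 5/2; -x^2)
num3, _ = quad(lambda z: z**2 * (1.0 + z**2) ** (-kappa), 0.0, x)
cf3 = (x**3 / 3.0) * hyp2f1(1.5, kappa, 2.5, -x**2)
```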
The exact solution for $\beta$ is therefore $$\begin{aligned} \label{eq:beta3} \beta=\frac{2n_p \sigma_{ex}\Theta_p}{\sqrt{\pi\kappa}} \frac{\Gamma(\kappa+1)}{\Gamma(\kappa-1/2)}(1+x^2)^{-\kappa} \left[x^2\,{_2F}_1\left(1,\kappa;\frac{3}{2};\frac{x^2}{1+x^2}\right)\right. \nonumber \\ \left.+\frac{x^2}{3}\,{_2F}_1\left(1,\kappa;\frac{5}{2};\frac{x^2}{1+x^2}\right) +\frac{1+x^2}{\kappa-1}\right].\end{aligned}$$ However, it is more convenient to take the limits $\sqrt{\kappa}x\ll 1$ and $\sqrt{\kappa}x\gg 1$ in (\[eq:int1\]) and (\[eq:int3\]) before the integration. In the former limit we obtain $$x^2\int_0^x(1+z^2)^{-\kappa}dz\simeq x^3,$$ $$\int_0^x z^2(1+z^2)^{-\kappa}dz\simeq\frac{x^3}{3}$$ and the expression inside the parentheses in (\[eq:beta2\]) becomes $x/(\kappa-1)+x^3/3$. Finally, in this limit $$\beta=\frac{2n_p \sigma_{ex}\Theta_p}{\sqrt{\pi\kappa}} \frac{\Gamma(\kappa+1)}{\Gamma(\kappa-1/2)}\left[\frac{1}{\kappa-1} +\frac{({\bf v}-{\bf u}_p)^2}{3\kappa \Theta_p^2}\right].$$ For large $\kappa$, $\Gamma(\kappa+a)\simeq\kappa^a\Gamma(\kappa)$ and $$\beta\simeq\frac{2n_p \sigma_{ex}\Theta_p}{\sqrt{\pi}} \left[1+\frac{({\bf v}-{\bf u}_p)^2}{3\Theta_p^2}\right].$$ In the limit $x\gg 1$ we obtain $$x^2\int_0^{\infty}(1+z^2)^{-\kappa}dz=\frac{\sqrt{\pi}\Gamma(\kappa-1/2)x^2} {2\Gamma(\kappa)},$$ $$\int_0^{\infty}z^2(1+z^2)^{-\kappa}dz=\frac{\sqrt{\pi}\Gamma(\kappa-3/2)} {4\Gamma(\kappa)}.$$ In this limit $$\beta\simeq n_p\sigma_{ex}|{\bf v}-{\bf u}_p|$$ and is independent of $\kappa$. A reasonable approximation to (\[eq:beta3\]) that has the correct asymptotic behavior is $$\beta\simeq n_p\sigma_{ex}\sqrt{\frac{4\Gamma^2(\kappa+1)\Theta_p^2} {\pi\kappa(\kappa-1)^2\Gamma^2(\kappa-1/2)}+({\bf v}-{\bf u}_p)^2}.$$ For large $\kappa$ this reduces to the Maxwellian limit obtained by [@PZandW95] $$\beta\simeq n_p\sigma_{ex}\sqrt{\frac{4}{\pi}\Theta_{p}^2+({\bf v}-{\bf u}_p)^2}.$$ Alexashov, D., & Izmodenov, V. 2005, Astron. Astrophys., 439, 1171 , S.
J., Asbridge, J. R., Felthauser, H. E., Hones, E. W., & Strong, I. B. 1967, , 72, 113 Baranov, V.B., & Malama, Yu. G. 1993, J. Geophys. Res., 98, 15157 , J. M. 1969, Flow Equations for Composite Gases (Flow Equations for Composite Gases, New York: Academic Press, 1969) , L. F., Ness, N. F., Acuña, M. H., Lepping, R. P., Connerney, J. E. P., Stone, E. C., & McDonald, F. B. 2005, Science, 309, 2027 , M., & Tarnopolski, S. 2006, in American Institute of Physics Conference Series, Vol. 858, Physics of the Inner Heliosheath, ed. J. Heerikhuisen, V. Florinski, G. P. Zank, & N. V. Pogorelov, 251 , S. V., Fahr, H. J., & Izmodenov, V. V. 2003, , 108, 1266 , I. V., Fahr, H. J., & Lay, G. 2003, Annales Geophysicae, 21, 1405 , M. R. 1995, , 22, 2673 , R. B., Krimigis, S. M., Roelof, E. C., Hill, M. E., Armstrong, T. P., Gloeckler, G., Hamilton, D. C., & Lanzerotti, L. J. 2005, Science, 309, 2020 Fahr, H. J., Kausch, T., & Scherer, H. 2000, Astron. Astrophys., 357, 268 , H. J., & Lay, G. 2000, Astron. Astrophys., 356, 327 , H.-J., & Scherer, K. 2004, Astrophys. Space Sci. Trans., 1, 3 , L. A., & Gloeckler, G. 2006, , 640, L79 Fite, W.L., Smith, A. C. H., & Stebbings, R. F. 1962, Proc. R. Soc. London Ser. A, 268, 527 , G. 1996, Space Science Reviews, 78, 335 , G., Fisk, L. A., & Lanzerotti, L. J. 2005, in ESA Special Publication, Vol. 592, ESA Special Publication , G., & Geiss, J. 1998, Space Science Reviews, 86, 127 , G., et al. 1993, Science, 261, 70 , J. T., Asbridge, J. R., Bame, S. J., Feldman, W. C., Zwickl, R. D., Paschmann, G., Sckopke, N., & Hynds, R. J. 1981, , 86, 547 Gruntman, M., Roelof, E. C., Mitchell, D. G., Fahr, H. J., Funsten, H. O., & McComas, D. J. 2001, J. Geophys. Res., 106, 15767 Heerikhuisen, J., Florinski, V., & Zank, G. P. 2006, J. Geophys.
Res., 111, A06110 Heerikhuisen, J., Florinski, V., Zank, G. P., & Pogorelov, N. V. (editors). 2006, Physics of the Inner Heliosheath (AIP) , J., Pogorelov, N. V., Zank, G. P., & Florinski, V. 2007, , 655, L53 , T. E. 1972, J. Geophys. Res., 77, 5407 , P. A. 1986, , 91, 9965 , I. K., Summers, D., Zank, G. P., & Pauls, H. L. 1996, , 469, 921 , R. M. 1984, in Basic Plasma Physics: Selected Chapters, Handbook of Plasma Physics, Volume 1, ed. A. A. Galeev & R. N. Sudan, 115 , M. A., & Ip, W.-H. 1987, , 92, 11041 , M. P. 2004, Physics of Plasmas, 11, 1308 , Y. G., Izmodenov, V. V., & Chalov, S. V. 2006, , 445, 693 , W. H., Zank, G. P., Smith, C. W., & Oughton, S. 1999, Physical Review Letters, 82, 3444 , D., et al. 2006, in Physics of the Inner Heliosheath, ed. J. Heerikhuisen, V. Florinski, G. P. Zank, & N. V. Pogorelov, Vol. 858 (AIP), 400 , D., et al. 2004, in Physics of the Outer Heliosphere, ed. V. Florinski, N. V. Pogorelov, & G. P. Zank, Vol. 719 (AIP), 162 , D. J., et al. 2000, , 105, 10419 , R. A., et al. 2001, in American Institute of Physics Conference Series, Vol. 598, Joint SOHO/ACE workshop “Solar and Galactic Composition”, ed. R. F. Wimmer-Schweingruber, 165 , E., Hovestadt, D., Klecker, B., Scholer, M., & Gloeckler, G. 1985, , 318, 426 Opher, M., Stone, E. C., & Liewer, P. C. 2006, Astrophys. J., 640, L71 , H. L., & Zank, G. P. 1996, , 101, 17081 , H. L., & Zank, G. P. 1997, , 102, 19779 Pauls, H. L., Zank, G. P., & Williams, L. L. 1995, J. Geophys. Res., 100, 21,595 , N. V., Heerikhuisen, J., & Zank. 2008, , 675, in press , N. V., Stone, E. C., Florinski, V., & Zank, G. P. 2007, , 668, 611 , N. V., & Zank, G. P. 2006, in Astronomical Society of the Pacific Conference Series, Vol. 359, Numerical Modeling of Space Plasma Flows, ed. G. P. Zank & N. V. Pogorelov, 184 Pogorelov, N. V., Zank, G. P., & Ogino, T. 2004, Astrophys.
J., 614, 1007 Pogorelov, N. V., Zank, G. P., & Ogino, T. 2006, Astrophys. J., 644, 1299 Prested, C., et al. 2008, , accepted , J. D. 2007, AGU Fall Meeting Abstracts, A3 , J. D., Liu, Y., & Wang, C. 2008, Advances in Space Research, 41, 237 , J. D., Paularena, K. I., Lazarus, A. J., & Belcher, J. W. 1995, , 22, 325 , C. W., Isenberg, P. A., Matthaeus, W. H., & Richardson, J. D. 2006, , 638, 508 , C. W., Matthaeus, W. H., Zank, G. P., Ness, N. F., Oughton, S., & Richardson, J. D. 2001, , 106, 8253 Sternal, O., Fichtner, H., & Scherer, K. 2007, Astron. Astrophys., to appear , E. C., Cummings, A. C., McDonald, F. B., Heikkila, B. C., Lal, N., & Webber, W. R. 2005, Science, 309, 2017 , D., & Thorne, R. M. 1991, Physics of Fluids B, 3, 1835 , L. L., & Zank, G. P. 1994, , 99, 19229 , L. L., Zank, G. P., & Matthaeus, W. H. 1995, , 100, 17059 Zank, G. P. 1999, Space Sci. Rev., 89, 413 , G. P., Matthaeus, W. H., & Smith, C. W. 1996, , 101, 17093 Zank, G. P., Pauls, H. L., Williams, L.L., & Hall, D.T. 1996, J. Geophys. Res., 101, 21639
--- abstract: 'Let $\mathbf D$ be the set of isomorphism types of finite double partially ordered sets, that is sets endowed with two partial orders. On ${\mathbb Z}\mathbf D$ we define a product and a coproduct, together with an internal product, that is, degree-preserving. With these operations ${\mathbb Z}\mathbf D$ is a Hopf algebra, self-dual with respect to a scalar product which counts the number of pictures (in the sense of Zelevinsky) between two double posets. The product and coproduct correspond respectively to disjoint union of posets and to a natural decomposition of a poset into order ideals. Restricting to special double posets (meaning that the second order is total), we obtain a notion equivalent to Stanley’s labelled posets, and obtain a sub-Hopf-algebra already considered by Blessenohl and Schocker. The mapping which maps each double poset onto the sum of the linear extensions of its first order, identified via its second (total) order with permutations, is a Hopf algebra homomorphism, which is isometric and preserves the internal product, onto the Hopf algebra of permutations, previously considered by the two authors. Finally, the scalar product between any special double poset and double posets naturally associated to integer partitions is described by an extension of the Littlewood-Richardson rule.' address: - | Claudia Malvenuto\ Dipartimento di Informatica\ Università La Sapienza\ Via Salaria 113\ 00198 Roma Italia - | Christophe Reutenauer\ Département de Mathématiques UQAM\ Case Postale 8888 Succ. Centre-ville\ Montréal (Québec) H3C 3P8 Canada author: - Claudia Malvenuto - Christophe Reutenauer title: ' A Self-Dual Hopf Algebra on Double Partially Ordered Sets ' --- Introduction ============ We define a (combinatorial) Hopf algebra based on “double posets”, with a scalar product based on “pictures” between double posets, in analogy to pictures of tableaux as defined by Zelevinsky in [@Zele]. 
Pictures had been introduced previously by James and Peel in [@JaPe] p.351-352. Zelevinsky’s definition extends straightforwardly to double posets. The results we prove show that pictures are fundamentally linked to scalar products, a point of view already present in the work of Zelevinsky, who proved that the scalar product of two skew Schur functions is equal to the number of pictures between their shapes. See also [@FoGr] and [@Leeu] for the study of pictures between skew shapes. We call [*double poset*]{} a set which is endowed with two partial orders $<_1$ and $<_2$. We consider isomorphism classes of double posets: on the ${\mathbb Z}$-module with basis the set of (isomorphism classes of) double posets, we define combinatorially a product and a coproduct, which will make it a graded Hopf algebra, self-dual with respect to the scalar product $\langle x,y\rangle$ defined as the number of pictures from $x$ to $y$; in other words, $$\langle xy,z\rangle=\langle x\otimes y,\delta(z)\rangle.$$ Recall that self-dual Hopf algebras play a great role in representation theory, see [@Gei; @Zele1]. When the second order $<_2$ of a double poset is total, one obtains the notion which we call special double poset; it is equivalent to that of labelled poset of Stanley [@Stan], or that of shape of Blessenohl and Schocker [@BlSc]. The corresponding submodule is then a sub-bialgebra; this bialgebra has already been considered by Blessenohl and Laue, see [@BlSc] p.41-42. There is a natural homomorphism into the bialgebra of permutations of [@MaRe]. This mapping is implicit in Stanley’s work (see also [@Gess]). The fact that one has a sub-bialgebra and a homomorphism is already due to [@BlSc] (see also [@Mal1] and [@HP]). We give further properties of this homomorphism: it is an isometry, and preserves the internal product. To each integer partition is naturally associated a special double poset; this construction is described in [@Gess].
We describe the scalar product of such a double poset and any special double poset by a rule which extends the Littlewood-Richardson rule. Note that all the bialgebras in this article are ${\mathbb Z}$-algebras, are graded and connected (that is, the 0-component is ${\mathbb Z}$), hence these bialgebras are Hopf algebras. The bialgebra on double posets ============================== The self-dual bialgebra on double posets ---------------------------------------- A [*double poset*]{} (a notion which is implicit in [@MaRe1]) is a triple $(E,<_1,<_2)$, where $E$ is a [*finite*]{} set, and $<_1$ and $<_2$ are two partial orders on $E$. When no confusion arises, we denote $(E,<_1,<_2)$ simply by $E$. We call $<_1$ the [*first order*]{} of $E$ and $<_2$ [*the second order*]{} of $E$. As expected, we say that two double posets $(E,<_1,<_2)$ and $(F,<_1,<_2)$ are [*isomorphic*]{} if there exists a bijection $\phi: E\rightarrow F$ which is an isomorphism from the partial order $(E,<_1)$ to $(F,<_1)$ and from $(E,<_2)$ to $(F,<_2)$, i.e. $$\forall x,y \in E: \ x <_i y \mbox{ in } E \Leftrightarrow \phi(x)<_{i}\phi(y) \mbox{ in } F, \mbox{ for }i=1,2.$$ Rather than on double posets, we want to work on isomorphism classes of double posets: to avoid too much notation, we simply say double poset, meaning its isomorphism class. Let ${\mathbf D}$ denote the set of double posets. We define several combinatorial operations on this set, which will serve to define the bialgebra structure on ${\mathbb Z}{\mathbf D}$, the set of ${\mathbb Z}$-linear combinations of double posets. 
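Since double posets are considered only up to isomorphism, it may help to fix a concrete toy encoding (ours, purely for illustration, not the paper's notation): a double poset is a triple `(elements, rel1, rel2)`, where each `rel` is the set of strict pairs $(a,b)$ meaning $a<b$. Isomorphism can then be tested by brute force on small examples:

```python
from itertools import permutations

def isomorphic(E, F):
    """Brute-force isomorphism test for double posets encoded as
    (elements, rel1, rel2), each rel a set of pairs (a, b) meaning a < b."""
    (eE, r1E, r2E), (eF, r1F, r2F) = E, F
    if (len(eE), len(r1E), len(r2E)) != (len(eF), len(r1F), len(r2F)):
        return False
    for image in permutations(eF):
        phi = dict(zip(eE, image))  # candidate bijection E -> F
        if {(phi[a], phi[b]) for (a, b) in r1E} == r1F and \
           {(phi[a], phi[b]) for (a, b) in r2E} == r2F:
            return True
    return False

# a 2-chain with both orders equal, a relabelled copy, and one with <_2 reversed
E1 = ([0, 1], {(0, 1)}, {(0, 1)})
E2 = (["a", "b"], {("a", "b")}, {("a", "b")})
E3 = ([0, 1], {(0, 1)}, {(1, 0)})
same = isomorphic(E1, E2)       # relabelling: isomorphic
different = isomorphic(E1, E3)  # second orders disagree: not isomorphic
```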
If $E$ and $F$ are two double posets, their [*composition*]{}, denoted $EF$, is the double poset $(E\cup F,<_1,<_2)$, where the union is disjoint and where - the first order $<_1$ of $EF$ is the extension to $E\cup F$ of the first orders $<_1$ of $E$ and $F$, and no element of $E$ is comparable to any element of $F$; - the second order $<_2$ of $EF$ is the extension to $E\cup F$ of the second orders $<_2$ of $E$ and $F$, together with the relations $e<_2 f$ for any $e\in E, f\in F$. The product on ${\mathbb Z}{\mathbf D}$ is obtained by extending linearly the composition on $\mathbf D$. Recall that an [*inferior ideal*]{} of a poset $(E,<)$ is a subset $I\subseteq E$ such that if $y\in I$ and $x< y$, then $x\in I$. A [*superior ideal*]{} of $E$ is a subset $S\subseteq E$ such that if $x\in S$ and $x< y$, then $y\in S$. Clearly, the complement of an inferior ideal is a superior ideal and conversely. A [*decomposition*]{} of a poset $(E,<)$ is a couple $(I,S)$ where $I$ is an inferior ideal and $S$ its complement. We call [*decomposition*]{} of a double poset $(E,<_1,<_2)$ a pair $$((I,<_1,<_2),(S,<_1,<_2)),$$ where $(I,S)$ is a decomposition of the poset $(E,<_1)$, and where the first and second orders $<_1,<_2$ for $I$ and $S$ are obtained by restricting the orders $<_1,<_2$ of the double poset $E$. Now let $\delta: {\mathbb Z}{\mathbf D}\longrightarrow {\mathbb Z}{\mathbf D}\otimes {\mathbb Z}{\mathbf D}$ be the linear map defined on ${\mathbf D} $ by $$\label{defcopr} \delta((E,<_1,<_2))=\sum(I,<_1,<_2)\otimes (S,<_1,<_2),$$ where the sum is extended to all decompositions of $(E,<_1,<_2)$.
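With a toy encoding of a double poset as `(elements, rel1, rel2)` (each `rel` the set of strict pairs $(a,b)$ meaning $a<b$; our own illustration), the composition $EF$ and the decompositions indexing the coproduct can be enumerated directly:

```python
from itertools import chain, combinations

def compose(E, F):
    """Composition EF: disjoint union; <_1 leaves E and F incomparable,
    <_2 additionally puts every element of E below every element of F."""
    (eE, r1E, r2E), (eF, r1F, r2F) = E, F
    def tag(elems, t):
        return [(t, a) for a in elems]
    def relabel(rel, t):
        return {((t, a), (t, b)) for (a, b) in rel}
    elems = tag(eE, "E") + tag(eF, "F")
    rel1 = relabel(r1E, "E") | relabel(r1F, "F")
    rel2 = (relabel(r2E, "E") | relabel(r2F, "F")
            | {(e, f) for e in tag(eE, "E") for f in tag(eF, "F")})
    return (elems, rel1, rel2)

def decompositions(E):
    """All pairs (I, S) with I an inferior ideal of (E, <_1) and S its
    complement -- exactly the terms of the coproduct delta(E)."""
    elems, rel1, _ = E
    subsets = chain.from_iterable(combinations(elems, r)
                                  for r in range(len(elems) + 1))
    out = []
    for sub in subsets:
        I = frozenset(sub)
        if all(a in I for (a, b) in rel1 if b in I):  # down-closed for <_1
            out.append((I, frozenset(elems) - I))
    return out

# the "V" poset 0 <_1 2, 1 <_1 2 (second order left empty) has 5 inferior ideals
V = ([0, 1, 2], {(0, 2), (1, 2)}, set())

# composition of two singletons: 2 elements, empty <_1, one <_2 cross-relation
S1 = (["x"], set(), set())
EF = compose(S1, S1)
```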
A [*picture*]{} between double posets $(E,<_1,<_2)$ and $(F,<_1,<_2)$ is a bijection $\phi:E\rightarrow F$ such that: - $e<_1 e' \Rightarrow \phi(e)<_2 \phi(e')$ and - $f<_1 f' \Rightarrow \phi^{-1}(f)<_2 \phi^{-1}(f').$ In other words, a picture is a bijection $\phi$ of $E$ to $F$ which is increasing from the first order of $E$ to the second order of $F$ and such that its inverse $\phi^{-1}$ is increasing from the first order of $F$ to the second order of $E$. We define a pairing $\langle ,\rangle: {\mathbb Z}{\mathbf D}\times {\mathbb Z}{\mathbf D}\rightarrow {\mathbb Z}$ for any double posets $E,F$ by: $$\langle E,F\rangle=|\{\alpha: E\rightarrow F, \alpha \mbox{ is a picture}\}|,$$ and extend it bilinearly to obtain a scalar product on ${\mathbb Z}{\mathbf D}$, which we call [*Zelevinsky scalar product*]{}. ${\mathbb Z}{\mathbf D}$ is a graded self-dual Hopf algebra. Note that a similar bialgebra structure has been defined on posets (not double posets) by Schmitt in [@Schm] p. 27-28; see also [@ABSo] Example 2.3. Formally it means that the mapping of ${\mathbb Z}{\mathbf D}$ into the bialgebra of Schmitt, which sends a double poset onto the poset with only the first order $<_1$, is a bialgebra homomorphism. Self-duality means that for any double posets $E,F,G$, $$\langle EF,G\rangle=\langle E\otimes F, \delta(G)\rangle.$$ [**Proof.**]{} We omit the easy verification of the associativity of the product and of the coassociativity of the coproduct. Similarly for the homogeneity of both, where the degree of a double poset is the number of its elements. In order to show that ${\mathbb Z}{\mathbf D}$ is a bialgebra, we show that the coproduct $\delta$ is a homomorphism for the product. This is rather a tautology, since isomorphic double posets are identified.
It amounts to show that there is a bijection between the set of decompositions of the double poset $EF$ and the set of pairs $(E_iF_i,E_sF_s)$, where $(E_i,E_s)$ is a decomposition of $E$ and $(F_i,F_s)$ is a decomposition of $F$. The bijection is the natural one: take a decomposition $(I,S)$ of $EF$; then $I$ is an inferior ideal of $(E\cup F,<_1)$; hence, $I\cap E$ is an inferior ideal of $(E,<_1)$, $I\cap F$ is an inferior ideal of $(F,<_1)$ and $I=(I\cap E)(I\cap F)$. Similarly, $S\cap E$ is a superior ideal of $(E,<_1)$, $S\cap F$ is a superior ideal of $(F,<_1)$ and $S=(S\cap E)(S\cap F)$. Hence the mapping is the identity mapping (modulo double poset isomorphisms): $$(I,S)\mapsto \big((I\cap E)(I\cap F), (S\cap E)(S\cap F)\big).$$ To see that it is a bijection, note that if $(E_i,E_s)$ is a decomposition of $E$ and $(F_i,F_s)$ is a decomposition of $F$, then $E_i\cup F_i$ is an inferior ideal $I$, and $E_s\cup F_s$ the complementary superior ideal $S$ of $(E\cup F, <_1)$; moreover, $I=E_iF_i$ and $S=E_sF_s$. We prove now self-duality. In view of the identity stated before the proof, this amounts to give, for any double posets $E,F,G$, a bijection between pictures from $EF$ to $G$ and 4-tuples $(\phi,\psi,I,S)$, where $I$ is an inferior ideal of $G$, $S$ the complementary superior ideal, $\phi$ a picture of $E$ onto $I$ and $\psi$ a picture of $F$ onto $S$. So, let $\alpha$ be a picture from $EF$ onto $G$. Define $I=\alpha(E)$, $S=\alpha(F)$ and the bijections $\phi: E \rightarrow I, \psi:F\rightarrow S$ obtained by restriction of $\alpha$ to $E$ and $F$. We verify first that $I$ is an inferior ideal of $G$; take $g,g'$ in $G$ with $g<_1 g'$ and $g'\in I$, hence $\alpha^{-1} (g')\in E$. Then, $\alpha$ being a picture, we have $\alpha^{-1}(g)<_2 \alpha^{-1}(g')$. Now, if we had $g\notin I$, then $g\in S$, hence $\alpha^{-1}(g) \in F$, hence $\alpha^{-1}(g')<_2 \alpha^{-1}(g)$, by definition of the second order of $EF$: contradiction.
Similarly, $S$ is a superior ideal of $G$. Now, the restriction of a picture is a picture, so $\phi$ and $\psi$ are pictures. Conversely, given a 4-tuple as above, we glue together the two bijections $\phi$ and $\psi$ and obtain a bijection $\alpha: E\cup F \rightarrow G$. Since in $EF$, elements of $E$ and elements of $F$ are $<_1$-incomparable, the fact that $\alpha$ is increasing from $(EF,<_1)$ onto $(G,<_2)$ follows from the similar property for $\phi$ and $\psi$. Now let $g,g'\in G$ with $g<_1 g'$; if they are both in $I$ or both in $S$, then $\alpha^{-1}(g)<_2\alpha^{-1}(g')$, by the similar property for $\phi$ and $\psi$; otherwise, we have $g\in I$ and $g'\in S$, since $I,S$ are ideals. Then $\alpha^{-1}(g)\in E$ and $\alpha^{-1}(g')\in F$, and consequently, $\alpha^{-1}(g)<_2\alpha^{-1}(g')$, by the definition of the second order of $EF$. Thus $\alpha$ is a picture. The counit map $\epsilon :{\mathbb Z}{\mathbf D} \longrightarrow {\mathbb Z}$ maps the empty double poset onto $1$, and all other double posets onto $0$. It is a morphism for the product. By a well-known fact, a graded connected bialgebra is a Hopf algebra. $\Box$ A homomorphism into quasi-symmetric functions --------------------------------------------- Let $\pi=(E,<_1,<_2)$ be a double poset. Similarly to [@Stan] and [@Gess], we call [*$\pi$-partition*]{} a function $x$ from $E$ into a totally ordered set $X$, such that: - $e <_1 e'$ implies $x(e)\leq x(e')$; - $e <_1 e'$ and $e\geq_2 e'$ implies $x(e)<x(e')$. Note that if $x$ is injective, the first condition suffices. Now suppose that $X$ is an infinite totally ordered set of commuting variables. Then the generating quasi-symmetric function of $\pi$ is the sum, over all $\pi$-partitions, of the monomials $\prod_{e\in E} x(e)$. We denote it $\Gamma (\pi)$. By extending linearly $\Gamma$ to ${\mathbb Z}{\mathbf D}$, we obtain a linear mapping into the algebra of quasi-symmetric functions. For quasi-symmetric functions, see [@Stan1] 7.19.
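For a small double poset, the $\pi$-partitions with values in $\{1,\dots,m\}$ can be listed by brute force (an illustrative sketch of ours, with our own encoding of the orders as sets of strict pairs); their number is $\Gamma(\pi)$ evaluated at $x_1=\dots=x_m=1$ and all other variables $0$:

```python
from itertools import product

def pi_partitions(elements, rel1, rel2, m):
    """All pi-partitions x: elements -> {1..m}: e <_1 f forces
    x(e) <= x(f), strictly when moreover e >=_2 f."""
    found = []
    for values in product(range(1, m + 1), repeat=len(elements)):
        x = dict(zip(elements, values))
        ok = True
        for (e, f) in rel1:           # e <_1 f
            strict = (f, e) in rel2   # e >=_2 f (e != f since e <_1 f)
            if x[e] > x[f] or (strict and x[e] == x[f]):
                ok = False
                break
        if ok:
            found.append(x)
    return found

# 2-chain 1 <_1 2: the second order decides weak versus strict inequality
weak = pi_partitions([1, 2], {(1, 2)}, {(1, 2)}, 2)    # x(1) <= x(2)
strict = pi_partitions([1, 2], {(1, 2)}, {(2, 1)}, 2)  # x(1) <  x(2)
```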
They form a bialgebra denoted $\mathbf {QSym}$, see e.g. [@MaRe]. $\Gamma:{\mathbb Z}{\mathbf D}\rightarrow \mathbf {QSym}$ is a homomorphism of bialgebras. This result is implicit in [@Mal], Proposition 4.6 and Théorème 4.16. We therefore omit the proof, recalling only the definition of the coproduct of the bialgebra of quasi-symmetric functions ([@Gess] p.300). Take a second infinite totally ordered set of commuting variables $Y$. Now order $X\cup Y$ by $x<y$ for any $x\in X$ and $y\in Y$. Then a quasi-symmetric function $F$ on $X$ defines a unique quasi-symmetric function on $X\cup Y$, which may be rewritten as a finite linear combination $\sum_i F_i(X)G_i(Y)$, for some quasi-symmetric functions $F_i$, $G_i$. Then the coproduct $\delta(F)$ is defined as $\sum_i F_i \otimes G_i$. Note also that a bialgebra constructed on posets has been considered in [@Ehre] and [@BeSo], together with a bialgebra homomorphism into quasi-symmetric functions. There seems to be no evident link between their construction and ours, although the coproduct is essentially the same, while the product rests on the product of posets. An internal product ------------------- Let $(E,<_1,<_2)$ and $(F,<_1,<_2)$ be two double posets. Let $\phi: (E,<_1)\rightarrow (F,<_2)$ be a bijection and denote its graph by $$E\times_\phi F=\{(e,f): \phi(e)=f\}.$$ This set becomes a double poset by: $(e,f)<_1(e',f')$ if and only if $f<_1f'$; $(e,f)<_2(e',f')$ if and only if $e<_2e'$. In other words, denoting by $p_1, p_2$ the first and second projections, $p_1$ is an order isomorphism $(E\times_\phi F,<_2)\rightarrow (E,<_2)$ and $p_2$ is an order isomorphism $(E\times_\phi F,<_1)\rightarrow (F,<_1)$. Note that the inverse isomorphisms are $p_1^{-1}=(id,\phi)$ and $p_2^{-1}=(\phi^{-1},id)$. Define the [*internal product*]{} of $(E,<_1,<_2)$ and $(F,<_1,<_2)$ as the sum of the double posets $E\times_\phi F$ for all increasing bijections $\phi: (E,<_1)\rightarrow (F,<_2)$. It is denoted $E\circ F$.
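On small examples the internal product can be enumerated directly (our own brute-force sketch, with a double poset encoded as `(elements, rel1, rel2)` and each `rel` a set of strict pairs $(a,b)$ meaning $a<b$): each increasing bijection $\phi:(E,<_1)\rightarrow(F,<_2)$ contributes one summand $E\times_\phi F$.

```python
from itertools import permutations

def internal_product(E, F):
    """All summands E x_phi F of E o F, one per increasing bijection
    phi: (E, <_1) -> (F, <_2)."""
    (eE, r1E, r2E), (eF, r1F, r2F) = E, F
    summands = []
    for image in permutations(eF):
        phi = dict(zip(eE, image))
        if not all((phi[a], phi[b]) in r2F for (a, b) in r1E):
            continue  # phi is not increasing (E,<_1) -> (F,<_2)
        pts = [(e, phi[e]) for e in eE]   # the graph of phi
        # (e,f) <_1 (e',f') iff f <_1 f';  (e,f) <_2 (e',f') iff e <_2 e'
        rel1 = {(p, q) for p in pts for q in pts if (p[1], q[1]) in r1F}
        rel2 = {(p, q) for p in pts for q in pts if (p[0], q[0]) in r2E}
        summands.append((pts, rel1, rel2))
    return summands

# when <_2 of F is total and <_1 of E is a chain there is a unique phi;
# here E = P_sigma with sigma = (2,1): first order 2 <_1 1, second order natural
E = ((1, 2), {(2, 1)}, {(1, 2)})
res = internal_product(E, E)   # single summand, P_{sigma^2} = P_identity
pts, prod_rel1, prod_rel2 = res[0]
first_order = {(p[0], q[0]) for (p, q) in prod_rel1}  # relabel via p_1

# an antichain first order imposes no constraint: every bijection counts
antichain = ((1, 2), set(), {(1, 2)})
res2 = internal_product(antichain, E)
```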
Note that this product has been chosen, among several symmetrical ones, so that the following holds: let $\sigma$ be a permutation in $S_n$ and denote by $P_\sigma$ the double poset with underlying set $\{1,...,n\}$, with $<_2$ as the natural order of this set, and $<_1$ the order defined by $\sigma(1)<_1\sigma(2)<_1...<_1\sigma(n)$. Then given two permutations $\tau$ and $\sigma$, one has $$P_\sigma\circ P_\tau = P_{\sigma\circ\tau},$$ as the reader may easily verify. The internal product is compatible with the Zelevinsky scalar product, as follows. Let $E,F,G$ be double posets. Then $\langle E\circ F,G \rangle=\langle E,F\circ G\rangle$. The proposition immediately follows from the following lemma. Let $(E,<_1,<_2)$, $(F,<_1,<_2)$, $(G,<_1,<_2)$ be three double posets. There is a natural bijection between (i) the set of pairs $(\phi,\alpha)$, where $\phi$ is an increasing bijection $(E,<_1)\rightarrow (F,<_2)$ and $\alpha$ is a picture from $E\times_\phi F$ into $G$; (ii) the set of pairs $(\psi,\beta)$, where $\psi$ is an increasing bijection $(F,<_1)\rightarrow (G,<_2)$ and $\beta$ a picture from $E$ into $F\times_\psi G$. [**Proof.**]{} We show that the bijection is defined by $\psi=\alpha\circ(\phi^{-1},id)$ and $\beta=(id,\psi)\circ\phi$, the inverse bijection being defined by $\phi=p_1\circ \beta$, with $p_1$ the projection $F\times G\rightarrow F$ and $\alpha=\psi\circ p_2$ with $p_2$ the projection $E\times F\rightarrow F$. 1. Let $(\phi,\alpha)$ be as in (i) and define $\psi=\alpha\circ(\phi^{-1},id)$ and $\beta=(id,\psi)\circ\phi$. Now notice that $(\phi^{-1},id)$ is a mapping $F\rightarrow E\times_\phi F$ and by definition of the first order of $E\times_\phi F$, it is increasing for the first orders on $F$ and $E\times_\phi F$. Since $\alpha$ is increasing $(E\times_\phi F,<_1)\rightarrow (G,<_2)$, we see that $\psi$ is increasing $(F,<_1)\rightarrow (G,<_2)$.
Now $\beta$ maps bijectively $E$ into $F\times_\psi G$, as desired, and we verify that it is a picture. Note that $(id,\psi):F\rightarrow F\times_\psi G$ is increasing for the second orders, by the definition of the latter on $F\times_\psi G$. Hence $\beta$ is increasing $(E,<_1)\rightarrow (F\times_\psi G,<_2)$. Moreover, let $(f,g), (f',g')\in F\times_\psi G$ with $(f,g)<_1(f',g')$; then $\psi(f)=g$, $\psi(f')=g'$ and $g<_1g'$. Thus $\psi(f)<_1\psi(f')$. We have for some $e,e'\in E$, $\beta(e)=(f,g)$ and $\beta(e')=(f',g')$. Note that the definition of $\psi$ implies that $\alpha^{-1}\circ\psi=(\phi^{-1},id)$, thus $\alpha^{-1}(\psi(f))=(\phi^{-1}(f),f)$. Since $\alpha$ is a picture, $\alpha^{-1}$ is increasing $(G,<_1)\rightarrow (E\times_\phi F,<_2)$, so that $\alpha^{-1}(\psi(f))<_2\alpha^{-1}(\psi(f'))$ and therefore by definition of the second order on $E\times_\phi F$, we have $\phi^{-1}(f)<_2\phi^{-1}(f')$; since $\beta(e)=(\phi(e),\psi(\phi(e)))$, we obtain $f=\phi(e)$ and $e=\phi^{-1}(f)$. Finally $e<_2e'$, showing that $\beta$ is a picture. Define now $\phi'=p_1\circ\beta$ and $\alpha'=\psi\circ p_2$. We must show that $\phi'=\phi$ and $\alpha'=\alpha$. We have $\phi'=p_1\circ (id,\psi)\circ\phi$, which is equal to $\phi$ since $p_1\circ(id,\psi)$ is the identity of $F$. Moreover, $\alpha'=\alpha\circ (\phi^{-1},id)\circ p_2$, and we are done, since $(\phi^{-1},id)\circ p_2$ is the identity of $E\times_\phi F$. 2\. Let $(\psi,\beta)$ be as in (ii), and define $\phi=p_1\circ \beta$ and $\alpha=\psi\circ p_2$. Now, $\beta$ is an increasing bijection $(E,<_1)\rightarrow (F\times_\psi G,<_2)$ and $p_1$ is an increasing bijection $(F\times_\psi G,<_2)\rightarrow (F,<_2)$ by definition of the second order on $F\times_\psi G$. Thus $\phi$ is an increasing bijection $(E,<_1)\rightarrow(F,<_2)$. 
Moreover, $p_2$ is an increasing bijection $(E\times_\phi F,<_1)\rightarrow (F,<_1)$ by definition of the first order of $E\times_\phi F$, and $\psi$ is an increasing bijection $(F,<_1)\rightarrow (G,<_2)$. Thus $\alpha$ is an increasing bijection: $$\alpha:(E\times_\phi F,<_1)\rightarrow(G,<_2).$$ We show that $\alpha^{-1}$ is also increasing $(G,<_1)\rightarrow(E\times_\phi F,<_2)$. Indeed, let $g,g'\in G$ with $g<_1g'$. Then $g=\alpha(e,f)$, $g'=\alpha(e',f')$ with $\phi(e)=f$, $\phi(e')=f'$ and we must show that $(e,f)<_2(e',f')$, that is, $e<_2e'$. Since $\beta$ is a picture, $\beta^{-1}$ is increasing $(F\times_\psi G,<_1)\rightarrow(E,<_2)$. We have $g=\alpha(e,f)=\psi(f)$ and similarly $g'=\psi(f')$. Hence $(f,g), (f',g')\in F\times_\psi G$ and $(f,g)<_1(f',g')$ since $g<_1g'$. Thus $\beta^{-1}(f,g)<_2\beta^{-1}(f',g')$. Now $f=\phi(e)=p_1(\beta(e))\Rightarrow\beta(e)=(f,\psi(f))=(f,g)\Rightarrow e=\beta^{-1}(f,g)$. Similarly, $e'=\beta^{-1}(f',g')$. It follows that $e<_2e'$. Define now $\psi'=\alpha\circ(\phi^{-1},id)$ and $\beta'=(id, \psi)\circ \phi$. We need to show that $\psi'=\psi$ and $\beta'=\beta$. We have $\psi'=\psi\circ p_2\circ(\phi^{-1},id)$ which is clearly equal to $\psi$. Moreover, $\beta'=(id,\psi)\circ p_1\circ\beta$ and we are done since $(id,\psi)\circ p_1$ is the identity of $F\times_\psi G$. $\Box$ The sub-bialgebra of special double posets ========================================== Special double posets --------------------- We call a double poset [*special*]{} if its second order is total. Since we identify isomorphic double posets, a special double poset is nothing else than a labelled poset in the sense of [@Stan]. Given a labelled poset, the labelling (which is a bijection of the poset onto $\{1,\ldots,n\}$) defines a second order, which is total, on the poset. We denote by ${\mathbb Z}\mathbf {DS}$ the submodule of ${\mathbb Z}{\mathbf D}$ spanned by the special double posets. 
A [*linear extension*]{} of a special double poset $\pi=(E,<_1,<_2)$ is a total order on $E$ which extends the first order $<_1$ of $E$. We may identify a total order on $E$ with the word obtained by listing the elements increasingly for this order: let $e_1e_2\ldots e_n$ be this word, with $\mid E\mid =n$. Let moreover $\omega$ be the [*labelling*]{} of $\pi$, that is the unique order isomorphism from $(E,<_2)$ onto $\{1,\ldots,n\}$. Then we identify the linear extension with the permutation $\sigma=\omega(e_1)...\omega(e_n)$ in $S_n$; in other words $\sigma(i)=\omega(e_i)$ and the mapping $\sigma^{-1}\circ\omega$ is an increasing bijection $(E,<_1)\rightarrow \{1,\ldots,n\}$, since $\sigma^{-1}\circ\omega(e_i)=i$. In this way, a linear extension of $\pi$ is a permutation $\sigma\in S_n$ such that $\sigma^{-1}\circ\omega$ is an increasing bijection $(E,<_1)\rightarrow \{1,\ldots,n\}$. In [@MaRe] a bialgebra structure on ${\mathbb Z}S =\oplus_{n\in {\mathbb N}} {\mathbb Z}S_n$ has been constructed. We recall it briefly. Recall that, for any word $w$ of length $n$ on a totally ordered alphabet, the [*standard permutation*]{} of $w$, denoted by $st(w)$, is the permutation which is obtained by giving the numbers $1,\ldots,n$ to the positions of the letters in $w$, starting with the smallest letter from left to right, then the second smallest, and so on. For example, if $w=4\ 3\ 2\ 4\ 1\ 3\ 4\ 4\ 2\ 3\ 3$, then $st(w)=8\ 4\ 2\ 9\ 1\ 5\ 10\ 11\ 3\ 6\ 7$. The product $\ast$ for two permutations $\sigma\in S_n$ and $\tau\in S_p$ is defined as the sum of permutations in $S_{n+p}$ which are in the [*shifted shuffle product*]{} of the words $\sigma$ and $\tau$, that is the shuffle product of $\sigma$ and $\bar \tau$, where the latter word is obtained from $\tau$ by replacing in it each digit $j$ by $j+n$. 
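Both operations just recalled are easy to make executable; here is a short Python sketch (the function names are ours), checked against the standardization example above.

```python
def st(word):
    """Standard permutation of a word: number the positions 1..n, smallest
    letters first, scanning left to right among equal letters."""
    order = sorted(range(len(word)), key=lambda i: (word[i], i))
    result = [0] * len(word)
    for rank, pos in enumerate(order, start=1):
        result[pos] = rank
    return result

def shifted_shuffle(sigma, tau):
    """Shifted shuffle product: all shuffles of sigma with tau shifted by n."""
    n = len(sigma)
    bar_tau = [j + n for j in tau]
    def shuffles(u, v):
        if not u or not v:
            yield list(u) + list(v)
            return
        for w in shuffles(u[1:], v):
            yield [u[0]] + w
        for w in shuffles(u, v[1:]):
            yield [v[0]] + w
    return list(shuffles(sigma, bar_tau))
```

For instance, $12\ast 21$ is the sum of the six shuffles of the word $12$ with the shifted word $43$.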
The coproduct $\delta$ on ${\mathbb Z}S$ is defined on a permutation $\sigma\in S_n$ by: $\delta(\sigma)$ is the sum, over all factorizations (as concatenation) $\sigma=uv$ of the word $\sigma$, of $st(u)\otimes st(v)$, where $st$ denotes standardization of a word. See [@MaRe] for these definitions. On ${\mathbb Z}S$ put the [*Jöllenbeck scalar product*]{}, defined by $$(\sigma,\tau)= \left\{ \begin{array}{cl} 1 & \mbox{ if } \sigma=\tau^{-1}\\ 0 & \mbox{ otherwise. } \end{array} \right.$$ It turns ${\mathbb Z}S$ into a self-dual bialgebra, see [@BlSc] 5.14 (where an isomorphic algebra is considered, obtained by replacing each permutation by its inverse). Define the linear mapping $L$ from ${\mathbb Z}\mathbf {DS}$ into ${\mathbb Z}S$ by sending each special double poset to the sum of its linear extensions. The following result is already in [@BlSc], see 4.18, 5.5 and 5.10. Note that what we call here special double poset is called [*shape*]{} by Blessenohl and Schocker. Also, what we call composition is called [*semi-direct product*]{} by them. For the sake of completeness, we give here an alternative proof of the coalgebra property; we construct a bijection between the appropriate sets of decompositions. The proof in [@BlSc] (different from ours) uses an argument, due to Gessel [@Gess], which is inductive on the number of edges in the Hasse diagram of the poset. ${\mathbb Z}\mathbf {DS}$ is a sub-bialgebra of ${\mathbb Z}{\mathbf D}$ and $L:{\mathbb Z}\mathbf {DS}\rightarrow{\mathbb Z}S$ is a homomorphism of bialgebras. [**Proof.**]{} It is straightforward to see that the composition of two special double posets is special, and that a decomposition of a special double poset is a pair of special double posets (the class of special double posets is a [*hereditary family*]{} in the sense of Schmitt [@Schm]: it is closed under taking disjoint unions and ideals). Thus, ${\mathbb Z}\mathbf {DS}$ is a sub-bialgebra of ${\mathbb Z}{\mathbf D}$. 
Moreover, the set of linear extensions of the composition of $\pi$ and $\pi'$ is classically the shifted shuffle product of the set of linear extensions of $\pi$ by that of $\pi'$. Hence $L$ is a homomorphism of algebras. The fact that it is also a homomorphism of coalgebras is proved as follows. Let $\pi=(E,<_1,<_2)$ be a special double poset. Then $$\delta\circ L(\pi)=\sum_{\sigma,u,v}st(u)\otimes st(v),$$ where the summation is over all triples $(\sigma,u,v)$, with $\sigma$ a linear extension of $(E,<_1)$ and where $\sigma$ is the concatenation $uv$; moreover, $$(L\otimes L)\circ\delta (\pi)=\sum_{I,S,\alpha,\beta}\alpha\otimes\beta,$$ where the summation is over all quadruples $(I,S,\alpha,\beta)$ with $I$ an inferior ideal of $(E,<_1)$ and $S$ its complementary superior ideal, and $\alpha, \beta$ are respectively linear extensions of $I, S$ for the induced order $<_1$. We show that there is a bijection between the set of such triples and the set of such quadruples. To simplify, take $\pi=(E,<_1,<_2)$ with $E=\{1,...,n\}$ and $<_2=<$ the natural total order on $E$. Then the labelling $\omega$ of $\pi$ is the identity mapping. We show that $$(\sigma,u,v)\mapsto (I,S,st(u),st(v)),$$ with $I$ the set of naturals appearing in $u$ and $S$ the set of naturals appearing in $v$, is the desired bijection. Note first that, since $\sigma$ is a linear extension of $(E,<_1)$ and $\sigma=uv$, then $I$ and $S$ as defined are an inferior and a superior ideal of $(E,<_1)$; moreover, $st(u)$ and $st(v)$ are linear extensions of $(I,<_1)$ and $(S,<_1)$. This mapping is injective, since any permutation $\sigma=uv$ is determined by $st(u)$, $st(v)$ and the sets of digits in $u$ and $v$. We show that it is also surjective: let $(I,S,\alpha,\beta)$ be a quadruple as above, and define uniquely $\sigma=uv$ with $st(u)=\alpha$, $st(v)=\beta$ and $I,S$ the sets of digits in $u,v$. All we have to show is that $\sigma$ is a linear extension of $(E,<_1)$. 
That is: if $e=\sigma(j)$ and $e'=\sigma(k)$, with $e<_1e'$, then $j<k$. Since $\sigma$ is the concatenation of $u$ and $v$, and since $st(u)$, $st(v)$ are linear extensions of $I,S$ for the order $<_1$, this is clear if $j,k$ are both digits in $\{1,...,i\}$ or $\{i+1,...,i+s\}$, with $i,s$ the cardinalities of $I,S$; also, if $j$ is in the first set and $k$ in the second. Suppose by contradiction that $j$ is in the second set and $k$ in the first; then $e\in S$ and $e'\in I$, contradicting the ideal property. $\Box$ The homomorphism $L$ has two other properties. The homomorphism $L:{\mathbb Z}\mathbf {DS}\rightarrow{\mathbb Z}S$ preserves the Zelevinsky scalar product and the internal product. Recall that the [*internal product*]{} of ${\mathbb Z}S$ is simply the product which extends the product on permutations. The next lemma extends known results on classical pictures between skew shapes, cf. [@GR], [@BlSc] Remark 13.6. Let $\pi,\pi'$ be special double posets. There is a natural bijection between pictures from $\pi$ into $\pi'$ and linear extensions of $\pi$ whose inverse is a linear extension of $\pi'$. [**Proof.**]{} Let $\phi$ be a picture from $\pi$ to $\pi'$, and denote by $\omega,\omega'$ the respective labellings. Then, the first condition on a picture means that $\omega'\circ \phi$ is an increasing bijection of $(E,<_1)$ into $\{1,...,n\}$, where $E$ is the underlying set of $\pi$; hence $\omega\circ \phi^{-1}\circ\omega'^{-1}$ is a linear extension of $\pi$. Similarly, the second condition means that $\omega'\circ \phi\circ\omega^{-1}$ is a linear extension of $\pi'$. Thus the lemma follows. $\Box$ [**Proof of theorem.**]{} The fact that $L$ preserves the scalar product is immediate from the lemma. It remains to show that $L$ is a homomorphism for the internal product. Let $\pi, \pi'$ be two special double posets with underlying sets $E,F$. We may assume that $E=F=\{1,...,n\}$, so that their second order $<_2$ is the natural order on $\{1,...,n\}$. 
Then, since the labellings of $E$ and $F$ are the identity mappings, a linear extension of $\pi$ (resp. $\pi'$) is a permutation $\alpha$ (resp. $\beta$) in $S_n$ such that $\alpha^{-1}$ (resp. $\beta^{-1}$) is increasing from $(E,<_1)$ (resp. $(F,<_1)$) into $\{1,...,n\}$. Now, let $\phi$ be increasing $(E,<_1)\rightarrow (F,<_2)=\{1,...,n\}$. We construct the double poset $\Pi=E\times_\phi F$ as in Section 2.2. Since we identify isomorphic double posets, we may take $F$ as underlying set, with the first order $<_1$ of $F$ as first order of $\Pi$, and with second order defined by the labelling $\phi^{-1}: F\rightarrow E=\{1,...,n\}$. Then a linear extension $\sigma$ of $\Pi$ is a permutation $\sigma$ such that $\sigma^{-1}\circ\phi^{-1}$ is increasing $(F,<_1)\rightarrow \{1,...,n\}$. Define $\alpha=\phi^{-1}$ and $\beta=\phi\circ\sigma$. Then $\alpha^{-1}=\phi$ (resp. $\beta^{-1}=\sigma^{-1}\circ\phi^{-1}$) is increasing $(E,<_1)\rightarrow \{1,...,n\}$ (resp. $(F,<_1)\rightarrow \{1,...,n\}$), and therefore $\alpha$ and $\beta$ are linear extensions of $\pi$ and $\pi'$ with $\alpha\circ\beta=\sigma$. Conversely let $\alpha$ and $\beta$ be linear extensions of $\pi$ and $\pi'$. Put $\sigma=\alpha\circ\beta$ and $\phi=\alpha^{-1}$. Then $\phi$ is increasing $(E,<_1)\rightarrow (F,<_2)=\{1,...,n\}$ and $\sigma^{-1}\circ\phi^{-1}=\beta^{-1}$ is increasing $(F,<_1)\rightarrow \{1,...,n\}$. Hence $\sigma$ is a linear extension of $\Pi$. All this implies that $L(\pi)L(\pi')$, which is the sum of all $\alpha\circ\beta$’s, is equal to $L(\pi\circ\pi')$, which is equal to the sum of all $\sigma$’s, for all possible $\phi$’s. 
$\Box$ It is easy to prove also that the submodule of ${\mathbb Z}\mathbf {DS}$ spanned by the [*naturally labelled*]{} special double posets, that is, those whose second order is a linear extension of the first order, is a sub-bialgebra of ${\mathbb Z}\mathbf {DS}$ (the class of naturally labelled posets is closed under taking disjoint unions and ideals). It may be possible to compute the antipode of this subalgebra, and that of ${\mathbb Z}\mathbf {DS}$, by extending the techniques of Aguiar and Sottile [@ASo], who computed the antipode of ${\mathbb Z}S$ using the weak order on the symmetric group, and a recursive method in posets, as in [@Gess], proof of Th.1, and [@BlSc], Lemma 4.11. Recall that for a permutation $\sigma\in S_n$, its [*descent composition*]{} $C(\sigma)$ is the composition of $n$ equal to $(c_1,...,c_k)$, if $\sigma$ viewed as a word has $k$ consecutive ascending runs of lengths $c_1,...,c_k$ and $k$ is minimum. For example $C(51247836)=(1,5,2)$, the ascending runs being $5, 12478, 36$. Recall from [@Gess] p.291 the definition of the fundamental quasi-symmetric function $F_C$, for any composition $C$ (see also [@Stan1] 7.19 where it is denoted $L_\alpha$). Then it follows from [@MaRe] Th.3.3 that the linear function $F: {\mathbb Z}S\rightarrow \mathbf {QSym}$ defined by $\sigma\mapsto F_{C(\sigma)}$ is a homomorphism of bialgebras. Recall that the bialgebra homomorphism $\Gamma: {\mathbb Z}{\mathbf D}\rightarrow \mathbf {QSym}$ has been defined in Section 3. Then the following result is merely a reformulation of a result of Stanley (see [@Gess] Th.1 and Eq.(1) page 291, or [@Stan1] Cor.7.19.5). The mapping $F\circ L$ is equal to $\Gamma$ restricted to ${\mathbb Z}\mathbf {DS}$. Littlewood-Richardson rule -------------------------- A [*lattice permutation*]{} (or Yamanouchi word) is a word on the symbols in ${\mathbb P} = \{1,2, 3, ...\}$ such that, for any $i$, in each left factor, the number of $i$’s is not less than the number of $i+1$’s. 
For example, $11122132$ is such a lattice permutation. Given a word $w$ on the symbols $1,2, 3, ...$, with $k$ the greatest symbol appearing in it, we call [*complement*]{} of $w$ the word obtained from $w$ by exchanging $1$ and $k$ in $w$, then $2$ and $k-1$, and so on. For the word of the above example, its complement is therefore $33322312$. The [*weight*]{} of a word $w$ is the partition $\nu=1^{n_1}2^{n_2}...$, where $n_i$ is the number of $i$’s in $w$. For the word above, it is the partition $1^42^33^1$. Given a special double poset $\pi=(E,<_1,<_2)$ of cardinality $n$ with labelling $\omega$, and a word $w=a_1a_2...a_n$ of length $n$ over a totally ordered alphabet $A$, we say that $w$ [*fits into*]{} $\pi$ (we take the terminology from [@GR]) if the function $E\rightarrow A$ defined by $e\mapsto a_{\omega(e)}$ is a $\pi$-partition. In other words, given a function $f: E\rightarrow A$, call [*reading word*]{} of $f$ the word $f(\omega(1))...f(\omega(n))$. Then $f$ is a $\pi$-partition if and only if its reading word fits into $\pi$. Note the case where the word is a permutation $\tau$, viewed as the word $\tau(1)...\tau(n)$: then $\tau$ fits into $\pi$ if and only if the mapping $e\mapsto\tau(\omega(e))$ is a $\pi$-partition, that is, since it is a bijection, is increasing $(E,<_1)\rightarrow \{1,...,n\}$. Given a partition $\nu$ of $n$, we define (as in [@Gess]) a special double poset $\pi_\nu=(E_\nu, <_1,<_2)$ where $E_\nu$ is the Ferrers diagram of $\nu$, where $<_1$ is the order induced on $E_\nu$ by the natural partial order of ${\mathbb N}\times{\mathbb N}$, and where $<_2$ is given on the elements of $E_\nu$ by $(x,y)<_2(x',y')$ if and only if either $y>y'$, or $y=y'$ and $x<x'$. Recall that there is a well-known bijection between standard Young tableaux of shape $\nu$ and lattice permutations of weight $\nu$, see [@Stan1] Prop.7.10.3 (d). 
In this bijection, the shape of the tableau is equal to the weight of the lattice permutation. If $\pi$ is a special double poset, we denote by $\tilde \pi$ the special double poset obtained by replacing the two orders of $\pi$ by their opposite. Clearly, a permutation $\sigma$ fits into $\pi$ (assumed to be special) if and only if $w_0\circ \sigma \circ w_0$ fits into $\tilde \pi$ ($w_0$ is defined in Lemma 3.3 below). Let $\pi$ be a special double poset and $\nu$ be some partition. Then, the scalar product $\langle\pi,\pi_\nu\rangle$, that is, the number of pictures from $\pi$ to $\pi_\nu$, is equal to: \(i) the number of lattice permutations of weight $\nu$ whose complements fit into $\pi$; \(ii) the number of lattice permutations of weight $\nu$ whose mirror images fit into $\tilde \pi$. Note that part (ii) of this is the classical formulation of the Littlewood-Richardson rule (see [@Ma] (9.2) or [@Stan1] Th.A1.3.3), once one realizes that a skew Schur function indexed by a skew shape is equal to the skew Schur function obtained by rotating that shape by 180 degrees (cf. [@BlSc] Chapter 11 p.109-110). We first need the following: Let $\pi$ be a special double poset. A permutation $\sigma$ is a linear extension of $\pi$ (with respect to $<_1$) if and only if its inverse fits into $\pi$. [**Proof.**]{} We may assume that $\pi=(E,<_1,<_2)$ with $E=\{1,...,n\}$ and $<_2$ the natural order of $E$. Then $\omega$ is the identity and therefore a permutation $\sigma$ is a linear extension of $\pi$ if and only if $\sigma^{-1}$ is increasing $(E,<_1)\rightarrow \{1,...,n\}$. On the other hand, $\tau$ fits into $\pi$ if and only if it is increasing $(E,<_1)\rightarrow \{1,...,n\}$, as noted previously. $\Box$ Let $\pi$ be a special double poset. A word $w=a_1...a_n$ fits into $\pi$ if and only if its standard permutation does. This result is equivalent to a result of Stanley, see [@Gess] Th.1 or [@Stan1] Th.7.19.14. 
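This lemma, and the characterization of fitting permutations above, can be verified by brute force. In the Python sketch below (the encoding is ours) a special double poset on $\{1,\dots,n\}$ has natural second order, so $\omega=\mathrm{id}$, and we use the $\pi$-partition condition in the form standard for double posets: $f(e)\le f(e')$ whenever $e<_1 e'$, with strict inequality when moreover $e'<_2 e$.

```python
from itertools import product

def fits(word, lt1):
    """Does the word a_1...a_n fit into the special double poset on {1..n}
    with first order lt1 (a set of pairs) and natural second order?
    Condition: weak increase along <_1, strict when the natural order is reversed."""
    return all(word[e - 1] < word[f - 1] if f < e else word[e - 1] <= word[f - 1]
               for (e, f) in lt1)

def st(word):
    """Standard permutation of the word (equal letters numbered left to right)."""
    order = sorted(range(len(word)), key=lambda i: (word[i], i))
    res = [0] * len(word)
    for rank, pos in enumerate(order, start=1):
        res[pos] = rank
    return res

# a small first order on {1,2,3,4}:  2 <_1 1 <_1 3  and  4 <_1 3
lt1 = {(2, 1), (1, 3), (2, 3), (4, 3)}
```

Looping over all words of a given length over a small alphabet then confirms that $w$ fits if and only if $st(w)$ does.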
A standard Young tableau of shape $\nu$ is the same thing as a $\pi_\nu$-partition which is a bijection from the Ferrers diagram of $\nu$ onto $\{1,...,n\}$. Thus we can speak of the reading word of a tableau, which is a classical notion. We denote it $read(T)$. We consider also the [*mirror reading word*]{} of $T$, which is the mirror image of $read(T)$. The following result must be well-known, but we give a proof for the convenience of the reader. Part (i) is proved in [@BlSc] page 109. Let $T$ be a standard Young tableau and $w$ the associated lattice permutation. \(i) Let $u$ be the complement of $w$. Then the reading word of $T$ is equal to the inverse of the standard permutation of $u$. \(ii) Let $v$ be the mirror image of $w$. Then the mirror reading word of $T$ is equal to the complement of the inverse of the standard permutation of $v$. Consider for instance the Young tableau $$\begin{array}{cccc} 5 \\ 3&9 \\ 2&6&10&11 \\ 1&4&7&8 \end{array}$$ Then its lattice permutation is $w=1\ 2\ 3\ 1\ 4\ 2\ 1\ 1\ 3\ 2\ 2$. The complement of the latter is $u=4\ 3\ 2\ 4\ 1\ 3\ 4\ 4\ 2\ 3\ 3$. Standardizing this latter word, we obtain the permutation $8\ 4\ 2\ 9\ 1\ 5\ 10\ 11\ 3\ 6\ 7$, whose inverse is $5\ 3\ 9\ 2\ 6\ 10\ 11\ 1\ 4\ 7\ 8$, which is indeed the reading word of the given tableau, obtained by concatenating its rows, beginning with the last row. This illustrates (i). For (ii), the mirror image of $w$ is $v=\ 2\ 2\ 3\ 1\ 1\ 2\ 4\ 1\ 3\ 2\ 1$. The standard permutation of $v$ is $\ 5 \ 6 \ 9 \ 1 \ 2 \ 7 \ 11 \ 3 \ 10 \ 8 \ 4$. The inverse of this permutation is $\ 4\ 5\ 8\ 11\ 1\ 2\ 6 \ 10 \ 3 \ 9\ 7$. Finally, the complement of this permutation is $\ 8\ 7\ 4\ 1\ 11\ 10\ 6\ 2\ 9\ 3\ 5$, which is indeed the mirror reading word of $T$. We need a lemma. For this, we use a variant of the reading word of a tableau. 
Call [*row word*]{} of a standard Young tableau $T$ the permutation, in word form, obtained by reading in increasing order the first row of $T$, then the second, and so on. Denote it by $row(T)$. For example, the row word of the previous example is $$row(T)= 1 \ 4\ 7\ 8\ 2\ 6\ 10\ 11\ 3\ 9\ 5.$$ Let $w$ be a word on the alphabet $\mathbb P$ of weight equal to the partition $\nu=(\nu_1 \geq ... \geq \nu_k > 0)$ of $n$. Let $w_0= n\ n-1\ \ldots 2\ 1$ be the longest element in the group $S_n$ and $\gamma$ the longest element in the Young subgroup $S_{\nu_1}\times...\times S_{\nu_k}$. \(i) Let $u$ be the complement of $w$. Then $st(u)= w_0\circ\gamma \circ st(w)$. \(ii) Let $v$ be the mirror image of $w$. Then $st(v)= \gamma \circ st(w) \circ w_0$. \(iii) Suppose that $w$ is a lattice permutation and let $T$ be the tableau of shape $\nu$ corresponding to $w$. Then $st(w)$ is the inverse of $row(T)$. [**Proof.**]{} (i) Note that $\gamma$ as word is equal to $$\nu_1 \ldots 1 \ \ (\nu_1+\nu_2) \ldots (\nu_1+1)\ \ \ldots \ \ n \ldots (\nu_1+\ldots+\nu_{k-1}+1).$$ Hence $$w_0\circ\gamma = (\nu_2+\ldots+\nu_k+1)\ldots n \ \ (\nu_3+\ldots+\nu_k+1)\ldots (\nu_2+\ldots+\nu_k) \ \ \ldots \ \ 1 \ldots \nu_k.$$ The proof of (i) then follows by inspection. \(ii) Likewise, one proves that $st(v)\circ w_0=\gamma \circ st(w)$ by inspection. \(iii) Let $I_1, \ldots, I_k$ denote the successive intervals of $\{1,\ldots,n\}$ of cardinalities $\nu_1, \ldots, \nu_k$. Let $L_1,\ldots, L_k$ be the sets of elements in the successive rows of $T$. If $I$, $L$ are two subsets of equal cardinality of $\mathbb P$, we denote by $I\nearrow L$ the unique increasing bijection from $I$ onto $L$. We denote also by $f_1\cup \ldots \cup f_k$ the function which restricts to $f_i$ on the domain of $f_i$, assuming the domains are disjoint. Then $row(T)$ is the permutation $\cup_{j=1,\ldots,k} I_j\nearrow L_j$, and its inverse is therefore $\cup_{j=1,\ldots,k} L_j\nearrow I_j$. 
Moreover, the word $w$ is defined by the following condition: for each position $p\in\{1,\ldots,n\}=\cup_{j=1,\ldots,k}L_j$, the $p$-th letter of $w$ is $j$ if and only if $p\in L_j$. Recall that $st(w)$, viewed as word, is obtained by giving the numbers $1,...,n$ to the positions of the letters in $w$, starting with the $1$’s from left to right, then the $2$’s, and so on. Therefore $st(w)=\cup_{j=1,\ldots,k} L_j\nearrow I_j.$ $\Box$ [**Proof of proposition.**]{} (i) We have to prove that $read(T)=st(u)^{-1}$. We know by Lemma 5.2.(i) and (iii) that $st(u)=w_0 \circ\gamma \circ st(w)$ and $st(w)^{-1}=row(T)$. Clearly $read(T)=row(T)\circ \delta$, where $\delta$ is the permutation $$(\nu_1+\ldots+\nu_{k-1}+1)\ldots n \ \ (\nu_1+\ldots+\nu_{k-2} +1)\ldots(\nu_1+\ldots+\nu_{k-1}) \ \ \ldots \ \ 1\ldots \nu_1.$$ Now $\delta=\gamma \circ w_0$. Therefore, $read(T)=row(T) \circ \gamma \circ w_0=st(w)^{-1}\circ\gamma\circ w_0=st(u)^{-1}$, since $w_0$ and $\gamma$ are involutions. \(ii) We have to show that $read(T)\circ w_0=w_0 \circ st(v)^{-1}$. We know by Lemma 5.2.(ii) that $st(v)= \gamma \circ st(w) \circ w_0$, hence $st(w)=\gamma \circ st(v) \circ w_0$. Using what we have done in (i), we have therefore $$read(T)= st(w)^{-1} \circ \gamma \circ w_0 = w_0 \circ st(v)^{-1} \circ\gamma \circ\gamma \circ w_0=w_0 \circ st(v)^{-1}\circ w_0.$$ $\Box$ [**Proof of theorem.**]{} (i) By Lemma 3.1 and Lemma 3.2, the indicated scalar product is equal to the number of permutations $\sigma$ which fit into $\pi$ and whose inverse fits into $\pi_\nu$. Let $H$ denote the set of complements of lattice permutations of weight $\nu$. By Proposition 3.2.(i), the mapping $H\rightarrow RWSYT_\nu$, $u\mapsto st(u)^{-1}$ is a bijection, where we denote by $RWSYT_\nu$ the set of reading words of standard Young tableaux of shape $\nu$. By Proposition 3.1, part (i) of the theorem follows. \(ii) Let $K$ denote the set of mirror images of lattice permutations of weight $\nu$. By Prop. 
3.2.(ii), the mapping $K\rightarrow RWSYT_\nu$, $v\mapsto \sigma=w_0\circ st(v)^{-1}\circ w_0$ is a bijection. Moreover, $\sigma^{-1}=w_0\circ st(v)\circ w_0$ fits into $\tilde \pi$ if and only if $st(v)$ fits into $\pi$, that is, by Prop. 3.1, if and only if $v$ fits into $\pi$. $\Box$ [99]{} M. Aguiar, N. Bergeron and F. Sottile: *Combinatorial Hopf algebras and generalized Dehn-Sommerville relations,* Compositio Mathematica **142**, 1-30 (2006). M. Aguiar and F. Sottile: *Structure of the Malvenuto-Reutenauer Hopf algebra of permutations,* Advances in Mathematics **191**, 225-275 (2005). N. Bergeron and F. Sottile: *Hopf algebras and edge-labelled posets,* Journal of Algebra **216**, 641–651 (1999). D. Blessenohl and M. Schocker: Noncommutative character theory of the symmetric group, Imperial College Press, London (2005). R. Ehrenborg: *On posets and Hopf algebras,* Advances in Mathematics **138**, 211–262 (1996). S. Fomin and C. Greene: *A Littlewood-Richardson miscellany,* European Journal of Combinatorics **14** n. 3, 191–212 (1993). A. Garsia and J. Remmel: *Shuffles of permutations and the Kronecker product,* Graphs and Combinatorics **1**, 217-263 (1985). L. Geissinger: *Hopf algebras of symmetric functions and class functions,* in Combinatoire et Représentation du Groupe Symétrique, Springer Lecture Notes in Mathematics **579**, 168–181 (1977). I. Gessel: *Multipartite $P$-partitions and inner product of skew Schur functions,* in Combinatorics and Algebra, Contemporary Mathematics **34**, 289–301 (1984). S. K. Hsiao and T. Kyle Petersen: *Colored posets and colored quasi-symmetric functions,* Annals of Combinatorics (to appear). G. D. James and M. H. Peel: *Specht series for skew representations of symmetric groups,* Journal of Algebra **56**, 343–364 (1979). M. A. A. van Leeuwen: *Tableau algorithms defined naturally for pictures,* Discrete Mathematics **157**, 321-362 (1996). M. 
Lothaire: Algebraic combinatorics on words, Cambridge University Press (2002). I. Macdonald: Symmetric functions and Hall polynomials, Oxford University Press (1995). C. Malvenuto and C. Reutenauer: *Duality between quasi-symmetric functions and the Solomon descent algebra,* Journal of Algebra **177**, 967–982 (1995). C. Malvenuto and C. Reutenauer: *Plethysm and conjugation of quasi-symmetric functions,* Discrete Mathematics **193**, 225–233 (1998). C. Malvenuto: *Produits et coproduits des fonctions quasi symétriques et de l’algèbre des descentes,* Publications du Laboratoire de Combinatoire et d’Informatique Mathématique **16** (1994). C. Malvenuto: *A Hopf algebra of labelled partially ordered sets,* AMS Meeting \#976, Special Session on Combinatorial Hopf Algebras, http://www.ams.org/notices/200206/montreal-prog.pdf, Montreal (2002). S. Poirier and C. Reutenauer: *Algèbre de Hopf des tableaux,* Annales des Sciences Mathématiques du Québec **19** n. 1, 79–90 (1995). W. Schmitt: *Incidence Hopf algebras,* Journal of Pure and Applied Algebra **96**, 299–330 (1994). R. Stanley: *Ordered structures and partitions,* Memoirs of the American Mathematical Society **119** (1972). R. Stanley: Enumerative Combinatorics Vol. 2, Cambridge University Press (1999). A. V. Zelevinsky: *A generalization of the Littlewood-Richardson rule and the Robinson-Schensted-Knuth correspondence,* Journal of Algebra **69**, 82–94 (1981). A. V. Zelevinsky: Representations of finite classical groups. A Hopf algebra approach, Lecture Notes in Mathematics **869** (1981).
--- abstract: | [**Abstract**]{} A theoretical analysis of the e-cloud instability in the Fermilab Recycler is presented in this paper. The e-cloud in a strong magnetic field is treated as a set of immovable snakes, each being initiated by some proton bunch. It is shown that the instability arises because of injection errors of the bunches, which increase in time and from bunch to bunch along the batch, being amplified by the e-cloud electric field. Particular attention is given to nonlinear additions to the cloud field. It is shown that the nonlinearity is the main factor which restricts the growth of the bunch amplitude. The possible role of the field-free parts of the Recycler is discussed as well. Results of calculations are compared with experimental data, demonstrating good correlation. author: - 'V. Balbekov' title: 'Nonlinear effects at the Fermilab Recycler e-cloud instability.' --- Introduction ============ A fast coherent instability of horizontal betatron oscillations of the bunched proton beam has been observed in the Fermilab Recycler since 2014, as described in Ref. [@MAIN]. It has been shown in this paper that the instability is caused by an electron cloud which arises from ionization of the residual gas by protons, and grows later due to breeding of the electrons at collisions with the beam pipe walls. A theoretical model of the instability has been proposed in Ref. [@MY]. The electron cloud is treated as a set of “snakes”, each of them appearing as a footprint of some proton bunch. The snakes are immovable in the horizontal plane due to the strong vertical magnetic field. However, the electrons are very mobile in the vertical direction because they move between the beam pipe walls under the influence of the electric field of the protons. They can breed or perish at collisions with the beam pipe walls. The model provides a suitable description of the initial part of the instability, including the dependence of the bunch amplitude on time and the position in the batch. 
However, it predicts an unrestricted growth of the bunch amplitudes, which is in conflict with the experimental evidence. It follows from the experiment that the amplitude increases with a variable growth rate within 60-80 turns and becomes about stable after that. It was suggested in [@MY] that the nonlinearity of the e-cloud field can be responsible for such behavior of the proton beam, and several examples have been represented there. The development of this idea is the subject of this paper. It is shown that it is a way to bring the calculation into accordance with the experimental evidence. Electron cloud model ==================== ![Top view of the e-cloud. Each proton bunch gives rise to an immovable e-snake. The snakes coincide with each other if their parent proton bunches have the same injection conditions, being different otherwise (\#2 in the picture). The local density of each snake depends on time.](01_snake.eps){width="100mm"} It has been shown in [@MY] that the horizontal motion of electrons in the cloud is strongly obstructed by the Recycler magnetic field. Vertical motion between the walls and the electron breeding in the walls result in the creation of vertical strips [@MAIN] and in the formation of the “snake” as it is shown in Fig. 1. Each proton bunch creates a wake following the bunch in accordance with its injection error. The bunch wakes coincide if they are injected with the same error (\#0, 1, 3, 4 in Fig. 1). Any wake has a steady shape but a variable density dependent on time. According to this model, the electron density at distance $\,s\,$ from the beginning of the batch can be represented in the form $$\rho_e(x,s,t) = \int_0^s w\left(\frac{s-s'}{v}\right)\,\bar\rho \left(x-X\Big(s',t-\frac{s-s'}{v}\Big)\right)\lambda(s')\,ds'$$ where $\,\bar\rho(x)\,$ is the normalized projection of the proton steady state distribution on the axis $\,x$, $\,X(s,t)\,$ is the beam coherent displacement, and $\,\lambda(s)\,$ is its linear density. 
The coefficient $\,w(\tau)\,$ describes the evolution of the snake local density which has been considered in Refs. [@FUR]-[@PIV]. Calculation of this function is not a subject of this paper, and it will be treated further as a phenomenological parameter. Because the electron distribution is flat in the $(y$-$z)$ plane, and the effect of the walls is small within the proton beam, the electric field of this beam is $$\begin{aligned} E_e(x,s,t) = e\int_0^s w\left(\frac{s-s'}{v}\right)\,F\left(x-X\Big(s',t -\frac{s-s'}{v}\Big)\right)\lambda(s')\,ds'\end{aligned}$$ with the function $F$ satisfying the equation $$\begin{aligned} F'(x) = 4\pi \bar\rho(x)\end{aligned}$$ If the beam consists of short identical bunches, the integral turns into the sum $$E_n(t,x) = eN_b\sum_{m=0}^{n} w_{m} F\Big(x-X_{n-m}(t-mT_{RF})\Big)$$ where $\,N_b\,$ is the bunch population, and $\,T_{RF}\,$ is the time separation of the bunches, which are enumerated from the beam head (index 0) to the current bunch (index $n$). Proton equation of motion ========================= With the cloud electric field taken into account, the equation of horizontal betatron oscillations of a proton in the $\,n^{\rm th}$ bunch is $$[\ddot x(t)+\omega_0^2x]_n = -\frac{e^2N_b}{m\gamma}\sum_{m=0}^{n}w_{m}F\Big(x-X_{n-m}(t-mT_{RF})\Big)$$ where $\omega_0$ is the betatron frequency without e-cloud (we do not consider here other factors which could affect the betatron motion, for example chromaticity). Because $\,\bar\rho(x)\,$ is an even function, the approximate form of $F$ from Eq. (3) including the lowest nonlinearity is $$F(x)\simeq 4\pi \bar\rho(0)\left(x +\frac{\epsilon x^3}{3}\right),\qquad \epsilon = \frac{1}{2\bar\rho(0)}\frac{d^2\bar\rho(0)}{dx^2}$$ Therefore the equation of betatron oscillations of a proton in the $\,n^{\rm th}\,$ bunch obtains the form $$[\ddot x(t)+\omega_0^2x(t)]_n=-2\omega_0\sum_{m=0}^n W_m\xi_m \left(1+\frac{\epsilon_m\xi_m^2}{3}\right), \qquad\xi_m = x(t)-X_{n-m}(t-mT_{RF})$$ where $\,W_m=4\pi e^2\bar\rho_m(0)w_m/(m\gamma\omega_0)$.
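The nonlinearity parameter of Eq. (6) is easy to evaluate for any model profile. The following sketch (an illustration only; the Gaussian profile, the value of $\sigma$, and the step $h$ are assumptions, not taken from the paper) checks the definition $\epsilon=\bar\rho''(0)/2\bar\rho(0)$ by a finite difference:

```python
import math

def rho(x, sigma=1.0):
    # normalized Gaussian model of the projected proton density rho_bar(x) (an assumption)
    return math.exp(-x**2 / (2.0 * sigma**2)) / (sigma * math.sqrt(2.0 * math.pi))

sigma, h = 1.0, 1e-4
# Eq. (6): epsilon = rho_bar''(0) / (2 rho_bar(0)), via a central finite difference
d2rho = (rho(h, sigma) - 2.0 * rho(0.0, sigma) + rho(-h, sigma)) / h**2
eps = d2rho / (2.0 * rho(0.0, sigma))
# analytic value for a Gaussian: epsilon = -1/(2 sigma^2), i.e. negative,
# consistent with the negative epsilon values used below
```

For a profile peaked at the beam center $\bar\rho''(0)<0$, so $\epsilon<0$ and the cubic term weakens the field at large offsets.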
Without coherent oscillations, that is at $\,X_j=0\,$, the equation of small incoherent oscillations of protons in the $\,n^{\rm th}\,$ bunch is $$\ddot x(t)+\omega_n^2x(t)=0, \qquad\omega_n=\omega_0+\Delta Q_n, \qquad \Delta Q_n=\sum_{m=0}^nW_m.$$ It means that $\Delta Q_n$ is the incoherent tune shift of protons in the $\,n^{\rm th}$ bunch caused by the e-cloud produced by all foregoing bunches, and $\,W_m\,$ is the contribution of bunch \#$(n-m)$. Linear approximation ==================== At $\,\epsilon_m=0$, Eq. (7) can be averaged over all particles of the $\,n^{\rm th}\,$ bunch, resulting in a series of equations for coherent oscillations of the bunches $$\ddot X_n(t)+\omega_0^2X_n=-2\omega_0\sum_{m=0}^{n} W_{m}\Big[X_n(t)-X_{n-m}(t-mT_{RF})\Big]$$ This series has been investigated in detail in Ref. [@MY]. The main conclusions of that paper are summarized below and illustrated by Fig. 2. 1\. Injection errors are the root cause of the “instability”. The initial amplitude can increase in time as well as from bunch to bunch along the batch. 2\. Some spread of the errors is another condition for the instability. Otherwise the solution of Eq. (9) is $\,X_n(t)=X_0(t-nT_{RF})$, that is, all bunches move one by one along the same stable trajectory. Coherent interaction of the bunches is absent under such conditions. 3\. A variability of the wake is another condition of the instability, because the bunches have different eigentunes and their resonant interaction is impossible at $\,W_m=\,$const. 4\. With restricted wakes, the eigentunes have the same value in the batch tail where the amplitude growth should be maximal. This statement is in agreement with the experimental evidence. 5\. The dependence of the amplitude on time is generally non-exponential, being different from bunch to bunch. 6\. However, the growth of the amplitudes is ultimately unrestricted, a conclusion which contradicts the experimental evidence. Therefore this statement requires an analysis beyond the scope of the linear approximation.
![image](04_1.eps){width="85mm"} ![image](04_2.eps){width="85mm"} Nonlinear consideration ======================= We represent the variables $\,x\,$ and $\,X\,$ in Eq. (7) with the help of the complex amplitudes $\,a\,$ and $\,A$: $$x(t) = a(t)\exp\big(i\omega_0[t-nT]\big)+c.c.,\qquad X_m(t) = A_m(t)\exp\big(i\omega_0[t-mT]\big)+c.c.$$ Substituting these values in Eq. (7) and applying the standard method of averaging, one can get the following equations for the amplitude of a proton inside the $\,n^{\rm th}$ bunch [@MY] $$\dot a(t) = i\sum_{m=0}^n W_m\eta_m (1+\epsilon_m|\eta_m|^2), \qquad \eta_m=a(t)-A_{n-m}(t-mT).$$ A one-step wake will be investigated further: $\,W_n=W\delta_{n1}$. Note that the condition $\,W_0=0$ follows from this definition and is very reasonable, because a noticeable e-cloud cannot appear at the leading bunch without secondary electrons. Therefore any proton has a constant betatron amplitude in this bunch, and the same is valid for the bunch coherent amplitude as well. The latter can be taken as $\,A_0=0\,$ because the difference of the bunch amplitudes is the only crucial circumstance. With these approximations, the equation of motion of any proton inside the $\,n^{\rm th}$ bunch is $$\dot a(t) = iW\big[\,a(t)-A_{n-1}(t-T)\,\big] \big[\,1+\epsilon\big|a(t)-A_{n-1}(t-T)\big|^2\big], \qquad A_0=0.$$ The following steps are used for the numerical solution of these equations: 1\. Generate a random initial distribution of particles in the first bunch $(n=1)$. The bunch central amplitude should be $\,A_1(0)\ne 0\,$ to start the process. 2\. Calculate the function $\,a(t)\,$ for each particle of the first bunch $(n=1)$ by solving Eq. (12) with the known value of the amplitude $A_{n-1}=A_0=0$. 3\. Calculate the central amplitude $\,A_1(t)\,$ as a function of time by averaging over all particles of the bunch. 4\. Repeat the operation for the second bunch with known $\,A_1(t)$, etc.\ Results of the calculation are presented below.
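The four-step procedure above can be sketched compactly. The fragment below is a schematic illustration under assumptions not fixed by the paper: Euler time stepping, $W=1$, $\epsilon=-0.001$, and a thin beam, so that every proton of a bunch carries the same amplitude and the bunch average of step 3 is trivial. The delay $T$ is absorbed into a shifted time variable, which is possible because the bunches are solved one after another.

```python
import numpy as np

W, eps = 1.0, -0.001                 # wake strength and nonlinear parameter (assumed values)
n_bunch, n_t, dt = 5, 2000, 0.005    # integrate up to Wt = 10

A = np.zeros((n_bunch, n_t), dtype=complex)  # coherent bunch amplitudes A_n(t)
for n in range(1, n_bunch):          # the leading bunch keeps A_0 = 0
    a = 1.0 + 0.0j                   # injection error A_n(0) = 1 (thin beam: one amplitude)
    for j in range(n_t):
        A[n, j] = a
        eta = a - A[n - 1, j]        # offset from the already computed previous-bunch amplitude
        a = a + 1j * dt * W * eta * (1.0 + eps * abs(eta) ** 2)  # Euler step of Eq. (12)
```

For a thick beam the single amplitude per bunch would be replaced by an ensemble of randomly distributed particles whose average gives $A_n(t)$, as in steps 1 and 3.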
Physics of the phenomenon ------------------------- The linear approximation for the one-step wake has been commented on in Sec. IV and is represented by the left-hand plot of Fig. 2. Now the same case will be investigated with the nonlinear additions taken into account. The initial amplitude of all bunches except the leading one is $\,A_{n\ne0}(0)=1$, and the nonlinear parameter given by Eq. (6) is taken as $\epsilon=-0.001$. The proton beam is considered as a thin one, that is, its radius is assumed to be small in comparison with the injection errors. The obtained coherent amplitudes of the bunches are plotted in Fig. 3 against the normalized time. In the beginning, the behavior is about the same as has been shown in Fig. 2. However, the further behavior is strongly different. It is seen that the growth of the bunch amplitudes ceases at about $\,|A_n/A_1|=20$ - 30, a limit which is achieved at $\,Wt=8$ - 10. ![image](06_1.eps){width="85mm"} The saturation cannot be treated as Landau damping, because a thin proton beam with negligible incoherent tune spread cannot be subject to this phenomenon. Therefore the nonlinearity does not prevent the instability in this case, but merely restricts its growth. This statement is illustrated by Fig. 4 where the behavior of the second bunch of the batch is considered in more detail. The leading bunch does not oscillate, as was assumed, and the first bunch has a constant amplitude because there is no external force to excite it. The relative amplitude of the second bunch is shown in the left-hand graph against time at different nonlinearities, and several phase trajectories are represented in the right-hand figure. It is the typical behavior of a nonlinear oscillator excited by a periodic external field. ![image](07_1.eps){width="85mm"} ![image](07_2.eps){width="85mm"} Dependence on value of the nonlinearity --------------------------------------- Two more examples are represented in Fig. 5. In these cases, the batch has the same arrangement and initial conditions as in Fig.
3 but other values of the nonlinearity parameter: $\,\epsilon=10^{-4}\,$ and $\,10^{-2}$. As one could expect, a larger nonlinearity results in a smaller coherent amplitude. The ultimate amplitude can be estimated from the relation $\,\epsilon A^2\simeq-1$, and it is attained at about $\,Wt=8-10$. ![image](08_1.eps){width="85mm"} ![image](08_3.eps){width="85mm"} ![image](08_4.eps){width="85mm"} The results are summarized in Fig. 6 where the batch-averaged parameters are shown. Solid lines represent the averaged coherent amplitude, and dashed lines its instantaneous growth rate (it is just the picture which has been measured in the experiment [@MAIN], and the corresponding comparison will be made later). Four cases are considered in this example, taken from Figs. 2 (left), 3, and 5. It is seen that the amplitude growth has an approximately exponential behavior only at zero nonlinearity, and only at $\,Wt\gtrsim 5$. The nonlinearity does not reveal itself at $\,Wt\lesssim 3\,$ but restricts the amplitude growth at $\,Wt\gtrsim 6$ - 10. The maximal growth rate is about $\,\sim 1/\ln|\epsilon| A_1^2$, and the maximal amplitude is $\,A_{max}^2 \sim 0.5/|\epsilon|$. Dependence on the beam radius ----------------------------- ![image](09_2.eps){width="85mm"} ![image](09_3.eps){width="85mm"} ![image](09_4.eps){width="85mm"} A thick beam is considered in this subsection under the same conditions as in the previous part. The water-bag model of radius $\,R\,$ is used for the transverse distribution of the proton beam. The injection error is taken to be unity, and the nonlinearity parameter is $\,\epsilon=-0.001\,$ in all the cases. The results are represented in Fig. 7 at $\,R/A_1=1\,$ and 10. The corresponding averaged values are shown in Fig. 8 where the case $\,R=0\,$ is added, taken from Fig. 3. Comparison of these figures with Figs. 6 and 7 allows one to conclude that the beam radius is a factor of secondary importance for the problem.
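The scaling estimates quoted above, $\,A_{max}^2\sim 0.5/|\epsilon|\,$ and a maximal growth rate $\,\sim 1/\ln|\epsilon| A_1^2$, are simple enough to tabulate for the nonlinearities considered (a back-of-the-envelope sketch; $A_1=1$ as in the figures):

```python
import math

A1 = 1.0                                      # injection error of the bunches
for eps in (1e-2, 1e-3, 1e-4):                # |epsilon| values of Figs. 3 and 5
    A_max = math.sqrt(0.5 / eps)              # from |epsilon| * A_max^2 ~ 0.5
    rate = 1.0 / abs(math.log(eps * A1**2))   # rough maximal growth rate per unit Wt
    print(f"|eps|={eps:g}:  A_max ~ {A_max:.0f},  max rate ~ {rate:.2f}")
```

For $\,|\epsilon|=10^{-3}\,$ this gives $A_{max}\approx 22$, consistent with the saturation at $\,|A_n/A_1|=20$ - 30 seen in Fig. 3.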
Influence of the field free areas ================================= About a half of the Recycler perimeter is occupied by field-free regions where the dipole magnetic field is absent. Electron production and breeding take place in these regions as well as in the field-filled regions. Therefore, there are no reasons to think that the e-cloud density in the field-free zones essentially differs from the density in the magnetic zones. However, there is no effective mechanism in the free zones to correlate the e-cloud position with the proton beam as firmly as the strong dipole magnetic field does. Therefore the direct contribution of the field-free zones to the instability is expected to be relatively small. However, this part can affect the incoherent motion of protons, including the linear and nonlinear tune shifts. The latter is especially important because one cannot exclude an additional restriction of the coherent amplitude due to this addition. Because this part of the cloud does not follow the proton beam, its distribution should depend on $\,x\,$ but not on $\,X$, in the used terminology. Taking this into account, one can write the correspondingly modified Eq. (7) in the form $$[\ddot x(t)+\omega_0^2x(t)]_n=-2\omega_0 W \left(\xi+\frac{\epsilon_B\xi^3}{3}+\frac{\epsilon_F x^3}{3}\right)$$ where $\,\xi = x(t)-X_{n-1}(t-T_{\rm RF})$. This equation describes the betatron oscillations of an arbitrary proton in the $\,n^{\rm th}\,$ bunch. A one-step wake is considered here, and only the cubic nonlinearity is taken into account (the incoherent linear contribution can be included in $\,\omega_0$). The coefficients $\,\epsilon_B\,$ and $\,\epsilon_F\,$ describe the nonlinearity of the field-filled (B) and the field-free (F) parts with their relative lengths taken into account. ![image](11_2.eps){width="85mm"} ![image](11_3.eps){width="85mm"} ![image](11_4.eps){width="85mm"} Results of the calculations are represented in Fig. 9.
The used beam parameters are: the beam radius is taken to be unity, the leading bunch does not oscillate, and the injection error of the other bunches is $\,A_n(0)=1\;(n\ne 0)$. The left-hand Fig. 9 represents the contribution of the field-free parts only: $\,\epsilon_B=0,\;\epsilon_F=-0.001$. It should be compared with the left Fig. 7 where the contribution of the field-filled part has been shown at the same nonlinear parameter. It is seen that the nonlinearity of the field-free parts has less influence on the proton coherent oscillations. This is confirmed by the right-hand Fig. 9 where equal nonlinearities of both kinds are considered: $\,\epsilon_B=\epsilon_F=-0.001$. It considerably differs from the left-hand figure, being rather similar to the left Fig. 7. The same conclusion follows from Fig. 10 where the averaged beam parameters are plotted as in Figs. 6 and 8. It is seen that the addition of the field-free regions only slightly changes the results (the blue and the red lines). Comparison with the experiment ============================== ![image](12_2_summa.ps){width="85mm"} The presented results are in reasonably good agreement with the experimental evidence represented in Ref. [@MAIN]. One of the summary plots of that paper is copied and shown here as Fig. 11. The black curve in this plot has the same sense as the dashed lines in Figs. 6 and 8. All of them demonstrate the instability rate as a function of a parameter which can be treated as time measured in different units. The curves are similar in shape, and quantitative agreement can be obtained at the following relation of the parameters: $$Wt=10\quad\mbox{corresponds to 80 revolutions, that is}\quad\,WT_{\rm rev}\simeq 1/8$$ with $\,T_{\rm rev}\,$ as the Recycler revolution time. On the other hand, it has been shown in Sec. III that $W$ should be treated as the betatron tune shift of protons produced by the electron cloud.
It means that $$WT_{\rm rev}=2\pi\Delta Q\qquad\mbox{that is}\qquad \Delta Q\simeq\frac{1}{16\pi}\simeq 0.02$$ This result can be used to estimate the central density of the e-cloud $\,n_e$. In the accepted model of the cloud, the relation is $$\Delta Q=\frac{r_p n_e P^2}{2\pi Q\beta^2\gamma}$$ where $\,r_p=1.54\times 10^{-18}\,$m is the electromagnetic proton radius, $\,Q=25.45\,$ is the Recycler tune, $\,P=3319\,$m is its perimeter, $\,\beta\simeq 1\,$, and $\,\gamma=9.53\,$ is the normalized energy of the protons. It gives numerically $$\Delta Q \simeq \frac{n_e}{10^{14}\,{\rm m}^{-3}}\quad \mbox {that is}\quad n_e\simeq 2\times 10^{12}\,{\rm m}^{-3}\quad \mbox{at}\quad \Delta Q=0.02$$ A measurement of the density was not performed in the experiment, but a simulation with the code POSINST presented in [@MAIN] results in a 5-10 times higher density. Conclusion ========== The model of the electron cloud in the form of motionless snakes is considered in the paper. Ionization of residual gas by protons is the primary source of the electrons, supported by their multiplication in the beam pipe walls. The fixation of the electron horizontal position is realized by the strong vertical magnetic field. The model allows one to explain the electron instability of the bunched proton beam in the Fermilab Recycler. According to it, the instability is caused by injection errors which initiate coherent betatron oscillations of the bunches, and the electric field of the electron snake promotes an increase of their amplitude in time, as well as from the batch head to its tail. The nonlinearity of the e-cloud electric field is considered in detail as the important factor restricting the amplitude growth. The parts of the Recycler perimeter without dipole magnetic field are included in the investigation as well. However, it turns out that their contribution to the instability is negligible. The results of the calculations are in reasonable agreement with the Recycler experimental evidence. [9]{} J. Eldred, P. Adamson, D.
Capista, I. Kourbanis, D.K. Morris, J. Thangaraj, M.J. Jang, R. Zwaska, Y. Ji, “Fast Transverse Instability and Electron Cloud Measurement in the Fermilab Recycler”, 54th ICFA Advanced Beam Dynamics Workshop on High-Intensity and High-Brightness Hadron Beams (HB2014), THO4LR04, accelconf.web.cern.ch/AccelConf/HB2014/papers/tho4lr04.pdf V. Balbekov, “Model of e-cloud instability in the Fermilab Recycler”, FERMILAB-FN-1001-APC, arXiv:1506.08139 (2015). M. A. Furman and M. T. F. Pivi, “Probabilistic model for the simulation of secondary electron emission”, PRSTAB 5, 124404 (2002). M. T. F. Pivi and M. A. Furman, “Electron cloud development in the Proton Storage Ring and in the Spallation Neutron Source”, PRSTAB 6, 034201 (2003).
--- abstract: 'Two entwined problems have remained unresolved since pulsars were discovered nearly 50 years ago: the orientation of their polarized emission relative to the emitting magnetic field and the direction of putative supernova “kicks” relative to their rotation axes. The rotational orientation of most pulsars can be inferred only from the (“fiducial”) polarization angle of their radiation, when their beam points directly at the Earth and the emitting polar fluxtube field is $\parallel$ to the rotation axis. Earlier studies have been unrevealing owing to the admixture of different types of radiation (core and conal, two polarization modes), producing either $\parallel$ or $\perp$ alignments. In this paper we analyze some 50 pulsars having three characteristics: core radiation beams, reliable absolute polarimetry, and accurate proper motions. The “fiducial” polarization angle of the core emission, we then find, is usually oriented $\perp$ to the proper-motion direction on the sky. As the primary core emission is polarized $\perp$ to the projected magnetic field in Vela and other pulsars where X-ray imaging reveals the orientation, this shows that the proper motions usually lie $\parallel$ to the rotation axes on the sky. Two key physical consequences then follow: first, to the extent that supernova “kicks” are responsible for pulsar proper motions, they are mostly $\parallel$ to the rotation axis; and second that most pulsar radiation is heavily processed by the magnetospheric plasma such that the lowest altitude “parent” core emission is polarized $\perp$ to the emitting field, propagating as the extraordinary (X) mode.' author: - 'Joanna M. Rankin' bibliography: - 'coreXmode.bib' title: 'TOWARD AN EMPIRICAL THEORY OF PULSAR EMISSION XI.
UNDERSTANDING THE ORIENTATIONS OF PULSAR RADIATION AND SUPERNOVA “KICKS”' --- Introduction ============ Radio pulsars now contribute importantly to many fields of physical science, but paradoxically, two fundamental coupled issues have remained unresolved since they were discovered 47 years ago: the orientation of their linearly polarized emission relative to the magnetic field in their polar fluxtube emitting regions and the origin/orientation of their often large space velocities (and thus proper motions) relative to their rotation axes. Most pulsars are known only by their lighthouse-like radio signals, and thus we have no direct means of determining the orientation of a pulsar’s rotation axis on the sky, which is crucial both to interpreting the polarization direction of the radiation and the orientation of the proper motion relative to the spin axis. Pulsars radiate because highly energetic outward-going charges are accelerated by the curved dipolar field within their polar regions, so it is crucial to understand how this radiation is polarized relative to the (projected) [**B**]{} field orientation on the sky. Figure \[fig1\] shows how this field appears splayed when a pulsar’s beam points directly at the Earth—the “fiducial” instant—and that the emission reaching us is associated with the field-line bundle in the plane of the rotation axis. Clearly, we have no knowledge of the radial component of a pulsar’s space velocity and only weak estimates of the radial component of the rotation axis. ![Diagram showing the splayed curving field lines—here shown in projection—associated with pulsar emission at the “fiducial” instant when the beam faces the Earth directly. Both the magnetic axis [**M**]{} and the pulsar rotation axis are indicated in relation to the “fiducial” field lines (magenta) associated with the radiation we receive at that instant. The projected fiducial field is $\parallel$ to the rotation axis and is thus used as its proxy, as the rotation axis is otherwise usually not observable.
Also indicated is the emission-beam structure consisting of two concentric conal beams (outer: blue and inner: green) and the central core beam (yellow) in relation to a typical sightline traverse (black dashed). Any of the three beams can be present in a given pulsar, and different sightline paths result in different profile types. Profiles with a core beam are of primary interest here—that is, the core-single ([**S$_t$**]{}) type where it appears on its own; triple ([**T**]{}) profiles where one of the cones is present; and five-component ([**M**]{}) profiles where both cones appear.[]{data-label="fig1"}](Figure1b.ps){width="86mm"} At one time it seemed obvious that pulsar radiation must be polarized $\parallel$ to the projected magnetic field direction. How could it be otherwise given that both the curving [**B**]{} field and the resulting curvature acceleration lie in the same plane? Even after the discovery that pulsars emit in two orthogonal polarization modes (hereafter OPM) \[[@1980ApJS...42..143B]; [@1975ApJ...196...83M]\], many assumed in the absence of any direct proof that the “primary” polarization mode must be $\parallel$. This easy presumption was dashed in the new millennium by X-ray imaging of the Vela pulsar \[[@2001ApJ...556..380H]\], where arcs indicated the orientation of the star’s rotation axis relative to its polarization and proper-motion (PM) directions. Shockingly, the radiation was polarized $\perp$ to the projected magnetic field [**B**]{} plane, a circumstance then beautifully confirmed for the radio emission [@2005MNRAS.364.1397J hereafter Johnston I]—and this pulsar’s radio emission is almost completely linearly polarized, so there was no OPM ambiguity. In an earlier paper [@2007ApJ...664..443R hereafter Paper I] we investigated the PPA vs. PM alignments of a number of pulsars, drawing strongly on Johnston I as well as other sources.
Here, a “fiducial” polarization position angle (PPA) $PA_0$, at a (“fiducial”) rotational phase representing the magnetic axis longitude, is measured and referred to infinite frequency as a proxy for the (unseen) orientation of the rotation axis. These were compared with well determined proper-motion (PM) directions $PA_V$, and the differences $\Psi$ showed strong peaks at both 0° and 90°. Given, however, that most of the pulsars showed strong OPM activity in their profiles, it was not possible to draw general conclusions about the polarization orientation with respect to the projected [**B**]{} direction. The second coupled key question is how a pulsar’s rotation vector is oriented with respect to its space velocity (of which we can usually measure only the projection on the plane of the sky). The possibility of a correlation was raised early \[[@1969AZh....46..715S]; [@1970ApJ...160L..91G]; [@1975Natur.254..676T]\] as their large peculiar velocities were realized, initially through scintillation studies. Though no binary pulsar was then known, their presumptive birth in the disruption of such systems appeared to be a significant factor in their PMs. However, binary disruption ultimately seemed inadequate to account for the very large velocities of some pulsars. Theorists then began to explore other mechanisms for their acceleration—in particular the question of whether natal supernovae imparted “kicks” to their progeny. This “kick” question has a complex history which is nicely summarized in [@2012MNRAS.423.2736N]. Suffice it to say that theorists suggested ingenious “kick” mechanisms—both $\parallel$ and $\perp$ to the orientation of a pulsar’s rotation (or possibly in combination). Mechanisms such as Galactic acceleration are known which would tend to alter such “birth” alignments [@2013MNRAS.430.2281N], however the analyses quoted above indicate that a surprisingly large proportion of pulsars show $\Psi$ orientations close to either 0° or 90°.
Again, however, the orientation of the rotation axis cannot usually be determined directly, but only through the fiducial PPA proxy $PA_0$, which is complicated by OPM radiation either $\parallel$ or $\perp$ to the projected [**B**]{} direction. This double ambiguity of proper-motion direction and OPM orientation has plagued efforts to settle how supernovae contribute to pulsar velocities. Although much of our earlier work had focused on classifying the properties of different pulsar profile populations according primarily to their emission geometry and frequency evolution \[Rankin [-@1983ApJ...274..333R], [-@1990ApJ...352..247R], [-@1993ApJ...405..285R], [-@1993ApJS...85..145R]; [@1989ApJ...346..869R]; [@2011ApJ...727...92M]; hereafter ET I, IV, VIa,b & IX\], our analysis in Paper I failed to use this information, despite the fact that most of the stars there under consideration had long been classified through detailed study. Most of the pulsars in the Johnston I study exhibited core, core-cone, or core-double cone profiles of the core-single ([**S$_t$**]{}), triple ([**T**]{}) or five-component ([**M**]{}) types (see Fig. \[fig1\]), and most of the 90° alignments pertained to pulsars of these types, but this circumstance was not then observed or interpreted. ![Polarization profile of B1706–16 from Johnston I showing its highly depolarized single core-component profile at 1369 MHz. The roughly linear, negative-going PPA (black curve) plotted in the upper panel connects smoothly with the PPA – 90° (blue) curve on the trailing edge of the profile, coinciding with the minimum in the linear polarization (red curve) in the lower panel, where the total power (black) and circular polarization (green) are also plotted. The PPAs are referred to infinite frequency, and the $PA_V$ value is shown as a horizontal (cyan) line.
See text.[]{data-label="fig2"}](Figure2.ps){width="76mm"} In large part this failure stemmed from the complexity of the PPA traverses in some pulsars with bright core components. A number exhibit PPA traverses that are readily fitted by the expected rotating-vector model (RVM) [@1969ApL.....3..225R], while others have depolarized, OPM-active components with distorted PPA traverses that have proven difficult to explain and interpret. Recently, several detailed analyses of particular bright pulsars with prominent core emission have helped us to understand these effects more fully \[[@2007MNRAS.379..932M]; [@2013MNRAS.435.1984S]\], and we have found that intensity-dependent aberration/retardation [A/R, see @1991ApJ...370..643B]—an effect first seen in the Vela pulsar [@1983ApJ...265..372K]—is also a major source of PPA-traverse distortion in cores. Figure \[fig2\] shows a clear case of this sort of depolarization and distortion of an otherwise nearly linear PPA traverse. The emission in the trailing part of the component reflects one OPM and that in the earlier part the other, apparently because the more intense fractions of this later “parent” emission appear earlier due to the intensity-dependent A/R, and some of it is seemingly converted to the other OPM. Three Overlapping Analyses ========================== The analysis of this paper is based on three different types of pulsar investigation: a) reliable identification of pulsars with central core-beam emission; b) well determined fiducial PPA measurements; and c) accurate proper motion (PM) directions. Core emission was first distinguished from the conal type in ET I, and populations of pulsars with core emission components were identified in ET IV, VI & IX. Emission geometry analyses for more than 50 core single ([**S$_t$**]{}), some 50+ core-cone triple ([**T**]{}) as well as a few core/double-cone ([**M**]{}) stars are given in the tables of ET VIb.
This overall core population is key to our analysis below because almost all of these identifications have proven to be accurate, many through further detailed single pulse studies. Moreover, core components usually appear to fall close to the magnetic axis longitude in pulsar profiles, and so provide a useful indicator in this regard. Fiducial polarization angle measurements $PA_0$ are here used as proxies for the orientation of a pulsar’s rotation axis on the sky. First, they require absolute PPA calibration—that is, referenced to an origin measured counter-clockwise from North on the sky—and second, they must be referred to infinite frequency by accurately unwrapping the Faraday rotation of the PPAs. Then the “fiducial” magnetic axis longitude is estimated by PPA-fitting or profile-analysis. Most of the published pulsar polarimetry lacks absolute calibration. The Effelsberg instrument pioneered this work, but important recent efforts have been carried out at Parkes \[Johnston I; [@2006MNRAS.365..353K]; [@2007MNRAS.381.1625J], hereafter Johnston II\] as well as Arecibo (Paper I) and the GBT \[[@2012MNRAS.423.2736N]; [@preprint]\]—bringing the total to about 100 objects. Accurate proper-motion directions $PA_V$, measured CCW from North, are known for just over 100 normal and some 25 millisecond pulsars. Earlier, pulsar PMs were measured reliably only with interferometry, but over the last decade special pulsar-timing methods have also produced useful measurements (Hobbs et al. 2004, 2005).
[llllll]{} Pulsar & $PA_V$ &Ref.& $PA_0$ &Ref.& $\Psi$\ & (deg) && (deg) && (deg)\ \ B0011+47 & +136(3)&1& 43(7) &\[tbl-A4\]& –87(8)\ B0136+57 & –131(0)&2 & 43(3) &\[tbl-A4\]& 6(3)\ B0329+54 & 119(1)&3& 20(4) &\[tbl-A2\]& 99(4)\ B0355+54 & 48(1)&4& –41(4) &\[tbl-A2\]& 89(5)\ B0450+55 & 108(0)&2 & –23(16) &\[tbl-A2\]& –94(16)\ B0450–18 & 40(5)&2 & 47(3) &\[tbl-A3\]& –7(6)\ B0540+23 & 58(19)&5& –85(3) &\[tbl-A3\]& –37(19)\ B0628–28 & 294(2)&1 & 26(2) &\[tbl-A1\]& 88(3)\ B0736–40 & 227(5)$^{\it a}$&1& –44(5) &\[tbl-A2\]& 91(7)\ B0823+26 & 146(1)&6& 48(3) &\[tbl-A2\]& 98(3)\ B0833–45 & 301(0)&7 & 37(1) &\[tbl-A1\]& 84(1)\ B0835–41 & 187(6)&1 & –84(5) &\[tbl-A3\]& 91(8)\ B0919+06 & 12(0)&8 & –77(10) &\[tbl-A1\]& 89(10)\ B1237+25 & 295(0)&3 & –28(4) &\[tbl-A1\]& –37(4)\ B1322+83 &–76(13) &5& 26(3) &\[tbl-A4\]& 78(15)\ B1426–66 & 236(5)&9 & –29(1) &\[tbl-A1\]& 84(5)\ B1449–64 & 217(2)&9 & –57(0) &\[tbl-A1\]& 94(2)\ B1451–68 & 253(0)&10 & –22(4) &\[tbl-A1\]& 95(4)\ B1508+55 & –130(0)&5& 5(4) &\[tbl-A2\]& 45(4)\ B1541+09 & –111(0)&2& –32(15) &\[tbl-A5\]& –79(17)\ B1600–49 & 268(6)&9& 0(5) &\[tbl-A3\]& 88(8)\ B1642–03 & 353(3)&1 & 89(8) &\[tbl-A1\]& 84(8)\ B1706–16 & 192(4)&11 & –75(2) &\[tbl-A1\]& 87(4)\ B1706–44 & 160(10)&12 & 71(10) &\[tbl-A1\]& 89(14)\ B1732–07 & –5(3)&1& –22(1) &\[tbl-A4\]& 17(4)\ B1737+13 & 228(6)&1 & –46(4) &\[tbl-A1\]& 94(7)\ B1818–04 & –22(17)&13& 55(3) &\[tbl-A3\]& –77(17)\ B1821–19 &–173(17)&14 &53(2)&\[tbl-A4\]& –46(17)\ B1826–17 &+172(9)&14 &11(1)&\[tbl-A4\]& –19(9)\ B1842+14 & 36(8)&11 & –52(2) &\[tbl-A1\]& 88(9)\ B1848+13 & 237(16)&13& –45(3) &\[tbl-A3\]& –78(16)\ B1857–26 & 203(0)&5 & –69(10) &\[tbl-A1\]& 92(10)\ B1911–04 & 166(5)&11 & –99(8) &\[tbl-A1\]& 85(9)\ B1913+10 & 174(15)&13& 85(3) &\[tbl-A3\]& 89(15)\ B1929+10 & 65(0)&8 & –11(0) &\[tbl-A1\]& 77(0)\ B1933+16 & 176(1)&11 & –87(1) &\[tbl-A1\]& 89(1)\ B1935+25 & 220(9)&13& –56(8) &\[tbl-A3\]& 96(12)\ B1946+35 & –93(3)$^{\it b}$&13& –78(15) &\[tbl-A5\]& –16(15)\ B2045–16 & 92(2)&15& 
–13(5) &\[tbl-A3\]& –75(6)\ B2053+36 & 157(1)&5& –77(11) &\[tbl-A5\]& 54(11)\ B2110+27 & –157(2)&5&–64(5) &\[tbl-A5\]& 87(5)\ B2111+46 & –20(44) &13 & 86(0) &\[tbl-A4\]& 66(44)\ B2217+47 &–158(10)&6 & 65(8) &\[tbl-A4\]& –44(11)\ B2224+65 & +52(1) &5& –48(5) &\[tbl-A4\]& –80(5)\ B2255+58 & +106(12)&14 & 24(3) &\[tbl-A4\]& 82(12)\ B2310+42 & +76(0) &2 & 18(10) &\[tbl-A4\]& 58(10)\ B2327–20 & 86(2)&1& 21(10) &\[tbl-A3\]& 65(10)\ $^a$Here we use the [@2003AJ....126.3090B] $PA_V$ value rather than the analysis in Johnston II, correcting the error in Paper I. [$^b$]{}The origin of the interferometric proper motion value in [@2005MNRAS.360..974H] is unclear, so we have used the timing value in Hobbs et al. (2004). Proper-motion references: (1) [@2003AJ....126.3090B], (2) [@2009ApJ...698..250C], (3) [@2002ApJ...571..906B], (4) [@2005ApJ...630L..61C], (5) [@1993MNRAS.261..113H], (6) [@1982MNRAS.201..503L], (7) [@2003ApJ...596.1137D], (8) [@2001ApJ...550..287C], (9) [@1990Natur.343..240B], (10) [@1990MNRAS.247..322B], (11) Johnston I updates of Hobbs et al. (2004), (12) [@2004ApJ...601..479N], (13) [@2004MNRAS.353.1311H], (14) [@2005MNRAS.362.1189Z], (15) [@1997MNRAS.286...81F] In what follows then, we assemble this three-fold information as indicated in Figure \[fig3\] on the nearly 50 pulsars having reliable PM directions, absolute fiducial PPA determinations and well identified core-emission components. Table \[tbl-1\] summarizes these proper-motion directions $PA_V$, proxy rotation-axis orientations $PA_0$ as well as their differences $\Psi$. The $PA_V$ references show the origins of these values and in several cases their correction. The $PA_0$ references trace their origins from Johnston I, Paper I, Johnston II, Force et al., and the present study (Tables \[tbl-A1\]-\[tbl-A5\], respectively). Notes to these Appendix tables explain needed revisions or problems.
Most classifications remain accurate from ET IV and ET VI, with only a few having been classified anew or reclassified as a result of new information. Similarly, most $PA_0$ values appeared accurate as determined in the above sources; where corrections or different interpretations are made, these are described in the table notes.

![Distribution of the core-emission alignment angles $\Psi$ of the 47 pulsars in Tables \[tbl-A1\] to \[tbl-A5\]. Here the alignment of each pulsar is represented by a 180°-periodic von Mises function with peak $\Psi$ and standard deviation corresponding to its error value. Each function has equal area such that the total area of the cumulative distribution corresponds to the area of the region below unity.[]{data-label="fig4"}](Figure4.ps){width="76mm"}

Discussion
==========

The overall distribution of the computed alignment angles $\Psi$ \[= $PA_V - PA_0$\] for core emission is shown in Figure \[fig4\]. Each value is represented by a von Mises function of equal area with its standard deviation corresponding to its error. All 47 pulsars currently having the three-fold analysis—that is, accurate proper motions together with fiducial PPAs of relatively well identified core emission—appear in Table \[tbl-1\], the Appendix tables and the above figure. Clearly, the analysis shows that core emission exhibits a strong orthogonal alignment on the plane of the sky. Most of the $\Psi$ values fall near 90° and often accurately so. Indeed, interpreting Fig. \[fig4\] probabilistically, half the weight falls within 90°$\pm$10°, 2/3 within $\pm$20° and 3/4 within $\pm$30°; only 14% falls within 0°$\pm$30°. We see here that the “fiducial” PPA $PA_0$ (in several cases computed to track the “parent” core OPM) is usually orthogonal to the proper-motion direction $PA_V$. The “fiducial” instant furthermore implies that the projected direction of the emitting magnetic field is in turn $\parallel$ to the rotation axis on the sky (see Fig. \[fig1\]).
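The equal-area von Mises construction behind Fig. \[fig4\] can be sketched as follows. The grid resolution, the 1° floor applied to quoted zero errors, and the three sample $(\Psi,\,\mathrm{error})$ pairs are illustrative assumptions of this sketch, not values taken from the paper:

```python
import numpy as np

def von_mises_180(mu_deg, sigma_deg, grid):
    """Unit-area, 180-degree-periodic von Mises bump centred at mu_deg.
    kappa is chosen so that, for narrow peaks, the bump matches a Gaussian
    of standard deviation sigma_deg; zero errors are floored at 1 degree."""
    kappa = (90.0 / (np.pi * max(sigma_deg, 1.0))) ** 2
    theta = np.deg2rad(2.0 * (grid - mu_deg))    # map the 180-deg period onto 2*pi
    pdf = np.exp(kappa * (np.cos(theta) - 1.0))  # exponent shifted to avoid overflow
    dx = grid[1] - grid[0]
    return pdf / (pdf.sum() * dx)                # normalize each bump to unit area

grid = np.linspace(-90.0, 90.0, 3600, endpoint=False)
# toy (Psi, error) pairs standing in for the 47 tabulated pulsars
sample = [(-87.0, 8.0), (99.0, 4.0), (89.0, 5.0)]
total = sum(von_mises_180(mu, err, grid) for mu, err in sample)
```

Summing one bump per pulsar gives the cumulative curve of the figure; values quoted outside the plotted range, such as 99°, wrap periodically onto the grid.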
None of this, however, fixes how the electric vector (defining the linear PPA direction) of the core emission is oriented with respect to the projected magnetic field direction. For this we must refer to X-ray images of Vela and certain other pulsars (e.g., Helfand et al. 2001; Ng & Romani 2004), from which the rotation-axis orientation on the sky can be compared with the radio “fiducial” polarization direction of the core emission—with the result that they are again orthogonal. These circumstances then support several conclusions—

- Core emission tends to be polarized perpendicularly to the projected magnetic field direction and thus propagates as the extraordinary (X) physical mode. In a few cases we must distinguish the (later or probably lower altitude) “parent” core emission.

- Most pulsar proper motions fall closely $\parallel$ to the magnetic axis direction. The narrowness of this distribution around 90° is surprising, first given the mechanisms that would degrade such alignments, and second because a $\parallel$ alignment is not what would be expected if binary disruption were a major contributor to pulsar proper motions.

- Natal supernovae “kicks” would then seem to be the dominant mechanism behind pulsar proper motions, and these “kicks” are primarily directed $\parallel$ to a pulsar’s rotation axis.

- The orderly orientations of core polarization provide further demonstration of the distinct character of core emission by contrast to conal radiation or other types not yet identified.

- That core radiation is the primary (central, low-altitude) radio-frequency emission we are able to detect, and that it tends strongly to propagate as the extraordinary (X) mode, provides major insights into the operation of the pulsar “engine”. It must be tertiary to the high-energy primary particle acceleration just above the polar cap. It reflects heavy processing by the highly magnetized dense plasma within the polar flux tube above the polar cap in which electromagnetic waves cannot propagate.
Therefore, it must originate at a somewhat higher altitude at which plasma-wave coupling to the X mode first becomes possible. These conclusions are important not only for understanding the origins of pulsar emission but also for supernova theory. Spin-aligned supernova “kicks” imply either rotation averaging of hydrodynamic “kicks” on timescales much longer than the natal spin period or a magnetic-field-driven mechanism. Similarly, the orderly alignments have important implications for understanding the detailed characteristics of the emission from individual pulsars. As we saw in our study of pulsar B0329+54 (Mitra et al. 2007), our ability to identify the X and O modes of the emission facilitates a much more physical interpretation. Establishing that the primary core emission is X mode further enables us to distinguish the magnetic orientation of its two OPMs everywhere within a pulsar’s profile and to pursue questions about the overall emission more physically. Or, said differently, the identification of core-associated X-mode emission in the profile centers of triple or five-component profiles permits us to identify the X/O character of the two conal-associated OPMs. This all said, our largely average-polarimetry-based analysis above remains a rough tool. Our facilitating insights have come from the few available pulse-sequence polarization studies, and very much remains to be learned from such investigations of other pulsars. As regards core emission, its mechanisms and dynamics will be revealed through broad-band polarimetric studies of its depolarization. While we have been fairly successful in identifying core features on the basis of their geometry, spectral behavior and circular polarization, their linear polarization properties are highly variable. A few are highly polarized, but many are not; some are unimodal in form and others show bifurcation or a leading “pedestal” feature.
In view of these varying characteristics, again it is surprising that the distribution of alignments is as strongly orthogonal as we have seen in Fig. \[fig4\]. In particular, the “parent” core emission is not always dominant, or is conflated with other profile features, and so is not easy to identify. Thus further study may well show that some of the 0° alignments represent O-mode emission rather than the “parent” X-mode radiation. Again, we emphasize that these alignments are on the sky, as we have no direct means of considering the radial components of pulsar velocities. Given, however, the surprisingly compact distribution of transverse alignments, it is hard to imagine that the radial alignments could have a very different distribution. The effect of the unknown radial velocity components can be modeled statistically as Noutsos et al. (2012) have done, and their overall conclusion to the above effect suggests that a detailed statistical modeling is unnecessary to support the results here. It is interesting to look closely at those pulsars showing $\parallel$ or oblique alignments. Regarding the latter, some undoubtedly are cases like B1237+25 and B1508+55 where their positions on the sky are such that our lack of the radial velocity component is crucial—and the $\Psi$ values thus misleading. Several other pulsars undoubtedly remain misclassified, for instance because core ([**S$_t$**]{}) and conal single ([**S$_d$**]{}) profiles can sometimes be difficult to distinguish. The overall implication of our interpretation is that the fundamental curvature emission drives longitudinal plasma oscillations in the dense inner plasma of the polar flux tube [e.g., @2002nsps.conf..230L]. These plasma oscillations then first couple to X-mode radiation, which often then seems to be converted into the O mode at higher altitude in an intensity-dependent manner.
The plasma oscillations may well be responsible for the “bunching” needed for pulsars to radiate mainly at radio frequencies, and the consistent characteristic dimensions of core components probably follow from the non-refractiveness of the X mode.

[**Acknowledgments**]{}: The author thanks the anonymous referee and also Alice Harding, Aris Noutsos, Dipanjan Mitra and Paul Demorest for helpful suggestions on the analysis and manuscript. Portions of this work were supported by US NSF grants 08-07691 and 09-68296. Arecibo Observatory is operated by SRI International under a cooperative agreement with the NSF, and in alliance with Ana G. Méndez-Universidad Metropolitana, and the Universities Space Research Association. This research has made use of NASA’s Astrophysics Data System.

APPENDIX: Results for Individual Pulsars {#sec:results .unnumbered}
========================================

[llcllc]{}
Pulsar & Cl & $P$ & log($\tau$) & $PA_0$ & Method\
& & (s) & (yrs) & (deg) &\
B0628–28$^{\it a}$ & [S$_t$]{} & 1.244 & 6.443 & 26(2) & RVM fit\
B0833–45 & [S$_t$]{} & 0.0893 & 4.053 & 37(1) & RVM fit\
B0919+06 & [T]{} & 0.4306 & 5.696 & –77(10) & PPA geom.$^{\it b}$\
B1237+25 & [M]{} & 1.3824 & 7.358 & –28(4) & traverse\
B1426–66 & [T?]{} & 0.7854 & 6.652 & –29(1) & RVM fit\
B1449–64 & [S$_t$]{} & 0.1795 & 6.017 & –57(0) & RVM fit\
B1451–68 & [M]{} & 0.2634 & 7.628 & –22(4) & RVM fit\
B1642–03 & [S$_t$]{} & 0.3877 & 6.538 & [**89(8)**]{}$^{\it c}$ & see text\
B1706–16 & [T]{} & 0.6531 & 6.215 & [**–75(2)**]{}$^{\it c}$ & see text\
B1706–44 & [S$_t$]{} & 0.1025 & 4.243 & 71(10) & Paper I\
B1737+13 & [M]{} & 0.8031 & 6.943 & –46(4) & RVM fit\
B1842+14 & [T]{} & 0.3755 & 6.502 & –52(2) & RVM fit\
B1857–26 & [M]{} & 0.6122 & 7.676 & –69(10) & Paper I\
B1911–04 & [S$_t$]{} & 0.8259 & 6.508 & –99(8) & Paper I\
B1929+10 & [T]{} & 0.2265 & 6.491 & –11(0) & RVM fit\
B1933+16 & [T]{} & 0.3587 & 5.976 & [**–87(1)**]{}$^{\it d}$ & see text\

$^a$ET VI classified this star as a conal (S$_d$) profile, but newer information now tilts toward its
being an S$_t$ star. Like B0823+26, it shows neither profile bifurcation nor conal outriders. $^b$See [@2006MNRAS.370..673R]. $^c$The pulsar’s latest core emission is polarized along a “track” that is $\perp$ to the PM direction at the profile peak. See text. $^d$Here the underlying SPM PPA sweep is clearly seen only prior to about –7° longitude and in the trailing core region near +5°–7°—and connecting the two shows the main regions of PPM power under the main peak and in the –6° to –3° interval. Connecting the former gives an intercept of about –87° at the primary profile peak. See the text.

[llcllc]{}
Pulsar & Cl & $P$ & log($\tau$) & $PA_0$ & Method\
& & (s) & (yrs) & (deg) &\
B0329+54 & [T/M]{} & 0.7145 & 6.743 & 20(4) & PPA geom.\
B0355+54 & [S$_t$]{} & 0.1564 & 5.751 & –41(4) & PPA geom.\
B0450+55 & [T]{} & 0.3407 & 6.358 & –23(16)$^{\it a}$ & PPA geom.\
B0736–40 & [T]{} & 0.3749 & 6.566 & –44(5) & PPA geom.\
B0823+26 & [S$_t$]{} & 0.5307 & 6.692 & 48(3) & PPA geom.\
B1508+55 & [T]{} & 0.7397 & 6.369 & 5(4) & PPA geom.\

$^a$Both Xilouris et al. (1991) and recently Noutsos et al. (2012) provide absolute polarimetry, and in neither is the central PPA rotation resolved; however, the 327-MHz polarimetry in ET IX, as well as that of [@1998MNRAS.301..235G] and [@1988SvA....32..177S], shows that the PPA rotates negatively through about 140°. Moreover, strong A/R effects in this pulsar were documented in ET IX, from which it is clear that the fiducial longitude lags the central core by some 13°. Therefore a reasonable estimate of the PPA at the fiducial longitude $PA_0$ is about –23°.

Johnston I and Paper I Values {#johnston-i-and-paper-i-values .unnumbered}
-----------------------------

Of the 25 pulsars analyzed by Johnston et al. (2006, Johnston I) and then reanalyzed by Rankin (2007, Paper I), 16 appear in Table \[tbl-A1\] by virtue (with one exception) of their earlier classification in ET IV and/or ET VIb as having a core emission component.
Apart from pulsar B1237+25, they all exhibit alignments $\Psi=PA_V-PA_0$ close to 90°. The alignment values for B1642–03, B1706–16 and B1933+16 have been reinterpreted here, as we have discussed above. They are given in boldface and differ from the values in Paper I in a similar manner as explained below. These and several other core-dominated pulsars in Johnston I (e.g., B1426–66 and B1451–68) share the common properties of distorted PPA traverses and highly depolarized profiles. In Paper I we appealed to observations at other (often lower) frequencies to more reliably interpret these PPA traverses. Now, dynamical studies of two similar pulsars \[B0329+54 (Mitra et al. 2007) and B1237+25 (Smith et al. 2013)\] have helped us understand how this distortion and depolarization occurs systematically. Single-mode emission in the trailing parts of a core component also appears earlier because of intensity-dependent aberration/retardation, and in some cases it seems to undergo conversion to the other mode. A relatively straightforward such case is shown for pulsar B1706–16 in Figure \[fig2\], where overall the emission in the two OPMs follows a negative-going, nearly linear PPA traverse. Here and in other cases, this later core emission (not that of any trailing conal outrider) is relatively undistorted by the intensity-dependent A/R and consequent depolarization through OPM mixing, so we use this later OPM to estimate the “fiducial” PPA. Clearly core emission entails a variety of properties: for Vela and most other pulsars in Table \[tbl-A1\] the dominant (or later “parent”) emission is $\perp$ to the PM $PA_V$ direction and thus to [**B**]{}; whereas in other cases complex depolarization occurs and/or the secondary, apparently converted OPM becomes dominant at the fiducial longitude. Paper I also studied pulsars using the absolute polarimetry from Morris et al. (1979, 1981), Xilouris et al. (1991) and Karastergiou & Johnston (2006).
Of the 21 such stars in Paper I, 6 appear in Table \[tbl-A2\] below, again by virtue of their classification among one of the three profile groups having core components in ET IV or VIb. For these prominent and well studied pulsars, the PPA traverse geometry is usually well understood from multiple sources. Most of these $PA_0$ values are identical to those in Paper I; however, the $PA_0$ value for B0450+55 has been reinterpreted on the basis of new information.

[llcllc]{}
Pulsar & Cl & $P$ & log($\tau$) & $PA_0$ & Method\
& & (s) & (yrs) & (deg) &\
B0450–18$^{\it a}$ & [T]{} & 0.5489 & 6.179 & 47(3) & RVM fit\
B0540+23$^{\it b}$ & [S$_t$]{} & 0.2460 & 5.403 & –85(3)$^{\it c}$ & RVM fit\
B0835–41 & [S$_t$]{} & 0.7516 & 6.526 & –84(5) & RVM fit\
B1600–49 & [T?]{} & 0.3274 & 6.707 & 0(5)$^{\it d}$ & $V$ zero\
B1818–04 & [T]{} & 0.5981 & 6.176 & 55(3)$^{\it e}$ & PPA geom.\
B1848+13 & [S$_t$]{}? & 0.3456 & 6.565 & –45(3) & RVM fit\
B1913+10 & [S$_t$]{} & 0.4045 & 5.623 & 85(3) & RVM fit\
B1935+25 & [S$_t$]{} & 0.2010 & 6.695 & –56(8)$^{\it f}$ & PPA geom.\
B2045–16 & [T]{} & 1.9616 & 6.453 & –13(5) & RVM fit\
B2327–20 & [T]{} & 1.6436 & 6.750 & 21(10) & RVM fit\

$^a$This pulsar needs study at a single-pulse level. Johnston II’s 3.1-GHz and 691-MHz profiles seem to be from different stars—and the smooth high-frequency PPA traverse is deceptive; most lower-frequency profiles show multiple $L$ minima and 90° “jumps”—possibly due to A/R effects. $^b$Long classified as [**S$_t$**]{}, the asymmetry of the PPA inflection and lack of outriders cast this classification into serious doubt. $^c$BCW’s fiducial PPA longitude falls 18° after the peak. $^d$Fiducial longitude taken at the $V$ zero-crossing point. $^e$The high-frequency core center provides a better fiducial longitude. $^f$This pulsar’s leading component is a highly polarized precursor, and the second feature appears to have a core-single profile. Thus we take the fiducial longitude at what seems to be the core peak at 3.1 GHz.
The 691-MHz profile is so depolarized in this region that no reliable PPAs can be measured.

[llcllc]{}
Pulsar & Cl & $P$ & log($\tau$) & $PA_0$ & Method\
& & (s) & (yrs) & (deg) &\
B0011+47 & [T?]{} & 1.2407 & 7.542 & 43(7) & RVM fit\
B0136+57$^a$ & [S$_t$]{} & 0.2725 & 5.605 & 43(3) & RVM fit\
B1322+83 & [S$_t$]{} & 0.6700 & 7.272 & 26(3) & PPA geom.\
B1732–07 & [T]{} & 0.4193 & 6.738 & –22(1)$^{\it b}$ & RVM fit\
B1821–19 & [S$_t$]{}? & 0.1893 & 5.758 & 53(2) & RVM fit\
B1826–17 & [T?]{} & 0.3071 & 5.943 & 11(1) & RVM fit\
B2111+46 & [T]{} & 1.0147 & 7.352 & 86(0) & RVM fit\
B2217+47 & [S$_t$]{} & 0.5385 & 6.490 & 65(8) & RVM fit\
B2224+65 & [S$_t$]{} & 0.6825 & 6.049 & –48(5) & ET IX,X\
B2255+58 & [S$_t$]{} & 0.3682 & 6.004 & 24(3) & RVM fit\
B2310+42 & [M]{} & 0.3494 & 7.693 & 18(10) & RVM fit\

$^a$ET VIb classified this star as having an [**S$_t$**]{} profile, but there is weak evidence of drifting. Neither conal outriders nor low-frequency bifurcation has been seen—so the classification is uncertain. The PPA rotation in Noutsos et al. (2012) seems to have the incorrect sense. $^b$Fitted steep central PPA traverse unresolved in Johnston II.

[llcllc]{}
Pulsar & Cl & $P$ & log($\tau$) & $PA_0$ & Method\
& & (s) & (yrs) & (deg) &\
B1541+09 & [T]{} & 0.7484 & 7.438 & –32(15) & PPA geom.\
B1946+35$^{\it a}$ & [S$_t$]{} & 0.7173 & 6.207 & –78(15) & PPA geom.\
B2053+36 & [S$_t$]{} & 0.2215 & 6.978 & –77(11) & PPA geom.\
B2110+27 & [S$_t$]{} & 1.2028 & 6.861 & –64(5) & PPA geom.\

$^a$Single-pulse polarimetry study is needed to reliably interpret this pulsar’s complex and depolarized profile in the core region.

Johnston II, Force et al., and AO Values
----------------------------------------

A further major source of fiducial $PA_0$ values is Johnston et al. (2007, Johnston II), and these appear in Table \[tbl-A3\]. Of the 22 pulsars in that paper, 10 appear here, and most were classified as above or in ET IX.
Overall this group is less well studied in terms of profile geometry, often because polarimetric observations are available only at one or two frequencies. Notes show how the pulsars were interpreted here when the resulting $PA_0$ values differ from those in Johnston II. Force et al. (2015) conducted absolute polarimetric measurements with the Green Bank Telescope. Of the 33 Force et al. stars, 11 with clear or probable core components appear in Table \[tbl-A4\]. Again, this group has been less well studied in terms of profile geometry, often because their weakness makes observations over a broad band difficult. Finally, we report four values from Arecibo measurements in this paper. These used four Mock Spectrometers (www.naic.edu/$\sim$astro/mock.shtml) sampling bands of 86 MHz centered at 1270, 1420, 1520 and 1620 MHz with milliperiod sampling. The resulting polarized profiles were derotated to infinite frequency and are shown in Fig. \[figA1\]. All reference nominal 21-cm observations (formed by averaging the bottom three bands), and they entail straightforward interpretations in that their fiducial longitudes fall very close to the central component peaks. Their $PA_0$ values are given in Table \[tbl-A5\].
--- abstract: 'We introduce soft recollisions in laser-matter interaction. They are characterized by the electron missing the ion upon recollision in contrast to the well-known head-on collisions responsible for high-harmonic generation or above-threshold ionization. We demonstrate analytically that soft recollisions can cause a bunching of photo-electron energies through which a series of low-energy peaks emerges in the electron yield along the laser polarization axis. This peak sequence is universal, it does not depend on the binding potential, and is found below an excess energy of one fifth of the ponderomotive energy.' address: | $^1$Max Planck Institute for the Physics of Complex Systems, Nöthnitzer Stra[ß]{}e 38, 01187 Dresden, Germany\ $^2$Max Planck Advanced Study Group at CFEL, Luruper Chaussee 149, 22761 Hamburg, Germany author: - 'Alexander Kästner$^{1}$, Ulf Saalmann$^{1,2}$, and Jan M. Rost$^{1,2}$' title: 'Electron-energy bunching in laser-driven soft recollisions' --- Recollision of an electron with its parent ion under a linearly polarized strong laser field has been shown to be the basis of a plethora of phenomena in atoms [@co93; @ku87], molecules [@itle+04], clusters [@saro08] and solids [@br87]. In principle the recollision process is very simple and a single degree of freedom along the laser polarization axis is sufficient to describe it (often referred to as the three-step model [@co93]): Firstly, the bound electron is released from an atom due to the strong electric field of a laser. Secondly, it is accelerated and driven back to the ion. In the third step it either recombines in the atomic potential or is scattered from it. In the former case, high-order harmonics are generated (HHG) due to recombination of the electron [@leba+94]. In the latter case, the elastic head-on collision induces the high-energy phenomenon of above-threshold ionization (ATI) with fast electrons emitted [@pabe+94; @mipa+06]. 
The enormous impact of HHG up to recent proposals for imaging of molecular orbitals [@itle+04] and the generation of attosecond pulses [@kriv09] is not least due to the simple yet accurate description with the three-step model. Recently, a surprisingly strong peak—the “low-energy structure” (LES)—was observed at a few eV in the photo-electron spectrum of atoms in strong infra-red (a few $\mu$m wavelength) laser pulses [@blca+09; @quli+09] and confirmed numerically with classical calculations [@liha10; @yapo+10]. Although the LES peak contains about half of the photo electrons, it was not seen in any of the numerous experiments done with 800 nm laser pulses. Here, we will give an analytical explanation of the LES by introducing a low-energy soft-recollision mechanism. It gives rise to a universal series of low-energy peaks in the momentum spectrum of the photo electron with well defined relative positions of 3/5, 5/7, 7/9 … on an absolute energy scale of about one fifth of the ponderomotive energy $F^{2}/(4\omega^{2})$, where $F$ is the amplitude and $\omega$ the frequency of the laser field. These peaks do not require a special binding potential, e.g., long range, nor do they need more than one degree of freedom to appear, and they can be derived classically since they rely essentially on the well known strong-field trajectories, as will become clear later. ![image](Fig1-contours123){width="\textwidth"} We will begin by working out the classical structures which are responsible for the LES [@blca+09], i.e., we consider a Hamiltonian $H = H_{0}+V$ with (throughout the paper we use atomic units unless stated otherwise) \[eq:ham\]$$\begin{aligned} \label{eq:ham1} H_{0}&=\mathbf{p}^{2}/2+z\,F \cos(\omega t) \\ \label{eq:ham2} V&=-1/\left(\rho^{2}{+}z^{2}\right)^{1/2},\end{aligned}$$ describing an electron with position $\mathbf{r}{\equiv}(\rho,z)$ and momentum $\mathbf{p}{\equiv}(p_{\rho},p_{z})$ using cylindrical coordinates.
The electron is exposed to the potential $V$ and driven by a laser field linearly polarized along $\hat z$. The probability to measure a photo electron with momentum $P_{z}$, ejected along the laser polarization axis $\hat z$, is given as a two-dimensional integral over initial phase-space variables (denoted with a prime), $$\label{eq:spectrum} {\cal P}(P_{z}) = \int\!\!\!\int\d\phi' \d p_{\!\rho}'\,w(\phi', p_{\!\rho}')\,\delta\left(P_{z}-p_{z}(\phi',p_{\!\rho}')\right)\,,$$ where $\phi' \equiv \omega t'$ is the phase of the laser at the time when the electron tunnels and $p_{\!\rho}'$ is the initial momentum perpendicular to the tunneling direction $\hat z$. The weight $w(\phi', p_{\!\rho}')$ accounts for the tunnel probability and Jacobian factors. The relevant dynamical object in a classical dynamical theory is the deflection function $p_{z}(\phi',p_{\!\rho}')$ which relates final variables to the initial conditions of the trajectory [@ro98]. Figure 1 shows $p_{z}$ as a function of the initial phase $\phi'$ (or rather the corresponding vector potential $A'=A\sin\phi'$ with $A\equiv F/\omega$) and the initial transverse momentum $\prhoi$. One can see that $p_{z}$ develops “finger-like” structures with increasing time. They emerge first in the second laser period, and with each period an additional finger appears. These fingers are due to head-on collisions, responsible for the well-known high-energy phenomena such as HHG and ATI. These regions are also characterized by chaotic dynamics [@saro99], very sensitive to initial conditions. Responsible for the distinct peaks in ${\cal P}(P_{z})$ in Fig. 1d at low energies, however, are *not* the fingers, but the crossings of contour lines above the fingers in Fig. 1b,c.
They represent saddle points in $p_{z}(\phi',p_{\!\rho}')$ with $$\begin{aligned} &\partial p_{z}/\partial \phi'= 0,\quad \partial p_{z}/\partial p_{\!\rho}'=0, \notag\\ \label{eq:saddle} &\left(\partial^{2} p_{z}/\partial\phi'{}^{2}\right) \left(\partial^{2} p_{z}/\partial p_{\!\rho}'^{2}\right)<0. \end{aligned}$$ Such two-dimensional saddles are known to produce peaks since they represent integrable singularities in the spectrum [^1]. With each additional laser period (for the first three we show contour plots and for the first four we show the spectra in Fig. 1) a new peak appears. This establishes that the LES actually consists of a series of peaks converging towards the threshold $P_{z}=0$. Figure \[fig:bunchingandtrajectory\]a illustrates that these peaks are due to an energy-bunching mechanism of neighboring trajectories: the three trajectories shown start at similar but different phases $\phi'$ with corresponding drift momenta $A\sin\phi'$. However, they carry the same momentum after the “soft recollision” with the nucleus in the second laser period at $\phic\approx3\pi$. The trajectory shown in panel b (corresponding to the central trajectory of panel a) reveals that it is a soft recollision with the nucleus which leads to the energy bunching: the electron “misses” the nucleus ($\rhoc>0$) and recollides by virtue of the laser force, which turns the electron around at $\zc$ with $|\zc|\ll\zquiv\equiv F/\omega^{2}$. Hence, this new type of recollision is quite different from the elastic reflection off the potential with finite momentum $\pc$ as in the head-on collisions in ATI or HHG. How do these soft recollisions provide a series of peaks?
With the characteristics of the soft recollision (all related quantities are denoted with a star) as observed $$\label{eq:softcondition} \zc\equiv z(\phic) \sim 0\:\mbox{ and }\: \pc\equiv p_{z}(\phic) = 0\,,$$ this is easy to see using strong-field trajectories \[eq:sftrajectories\] $$\begin{aligned} \label{eq:sftrajectories-z} z(\phi) &= \zquiv\left([\phi'{+}\phiV]\phi + \cos\phi{-}1\right) \\ \label{eq:sftrajectories-p} p_{z}(\phi) &= A[\phi'{+}\phiV] - A\sin\phi\,,\end{aligned}$$ where we have linearized the solutions of Hamilton’s equations for $H_{0}$ in $\phi'$, since tunneling occurs near the maximum of the field $F\cos\phi' \sim F$, i.e., $\phi'\ll 1$. Moreover, $\phiV=\Delta p/A$ accounts for an overall offset $\Delta p$ of the drift momentum due to the potential. As can be seen in Fig. \[fig:bunchingandtrajectory\]a, this offset does not depend on $\phi'$.

![(Color online) Time-dependent drift momentum $p_{z}(\phi)+A\sin(\phi)$ for three trajectories (left panel) with different initial drift momenta showing the effect of the Coulomb potential after release and the bunching during the soft recollision. Sketch of the rescattering trajectory (right) in a Coulomb potential (orange-shaded area). Full trajectory (upper panel) and details (lower) of the three interaction events: emission at $\phi\approx0$, effectless passing at $\phi\approx3\pi/2$, and soft recollision at $\phi\approx3\pi$.[]{data-label="fig:bunchingandtrajectory"}](Fig2a-pzoft "fig:"){width="0.38\columnwidth"} ![](Fig2b-trajectory "fig:"){width="0.6\columnwidth"}

From $p_{z}(\phic)=0$ we immediately get $\phic = m\pi +(-1)^{m}[\phi'{+}\phiV]$. A little thought reveals that only odd integers $m = 2n+1$ yield non-trivial solutions. Requiring that $\zc=0$, with the recollision location $$\label{coll-zc} z(\phic) = \zquiv[(2n{+}1)\pi[\phi'{+}\phiV]-2]=0,$$ gives the initial phases $\phi'_{n}$ and in turn the drift momenta $p_{n} =A[\phi'_n{+}\phiV]$ for the soft recollisions $$\label{eq:peaks} p_n = \frac{F/\omega}{(n{+}1/2)\pi} \,.$$ From this equation we expect a series of photo-electron momentum peaks. This is indeed confirmed by the spectra shown in Fig. 1d, where peaks appear cycle by cycle. How do these peaks emerge? From our analysis so far, one-dimensional (1D) dynamics with some potential (short or long range) should be sufficient to explain the underlying mechanism. To this end we consider the 1D Hamiltonian $H=p^{2}/2+xF\cos(\omega t)+V_{s}(x)$, with position $x$, momentum $p$ and the range $s$ of the potential. Starting always at the origin $x=0$, but with different phases $\phi'$, the electron is propagated until $|x|\gg s$ and the drift momentum $p(\phi){+}A\sin(\phi)$ is constant.
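The soft-recollision conditions and the peak series derived above can be verified directly from the linearized trajectories. In this sketch the laser parameters $F$ and $\omega$ are arbitrary illustrative values (in atomic units), not those of the figures:

```python
import math

def p_peak(n, F, omega):
    # n-th soft-recollision drift momentum: p_n = (F/omega) / ((n + 1/2)*pi)
    return (F / omega) / ((n + 0.5) * math.pi)

F, omega = 0.0075, 0.0043          # illustrative field amplitude and frequency (a.u.)
A, zq = F / omega, F / omega**2    # drift-momentum scale A and quiver amplitude

for n in (1, 2, 3):
    eps = p_peak(n, F, omega) / A               # = phi'_n + phi_V = 2/((2n+1)*pi)
    phi_star = (2 * n + 1) * math.pi - math.asin(eps)
    p_star = A * eps - A * math.sin(phi_star)   # p_z at the recollision: exactly zero
    z_star = zq * (eps * phi_star + math.cos(phi_star) - 1.0)
    assert abs(p_star) < 1e-12 * A
    # z_star is small compared with zq but not exactly zero: the electron
    # "misses" the ion, which is what makes the recollision soft
    assert abs(z_star) < 0.05 * zq

# universal relative peak positions 3/5, 5/7, 7/9, ...
ratios = [p_peak(n, F, omega) / p_peak(n - 1, F, omega) for n in (2, 3, 4)]
```

The ratios are independent of $F$ and $\omega$, reflecting the universality of the peak sequence.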
The deflection function $p(\phi')$, along with the corresponding photo-electron spectrum \[eq:model\]$$\begin{aligned} {\cal P}(P) & = \frac 1{2\pi}\int\d \phi'\,\delta\left(P-p(\phi')\right) \label{eq:model1} \\ & = \frac 1{2\pi}\sum_{i}\left|\frac{\d p}{\d\phi'}\right|^{-1}_{p(\phi'_{i}) = P}\,, \label{eq:model2}\end{aligned}$$ is shown in Fig. 3 for the short-range Gaussian potential $$\label{eq:gausspot} V_{s}(x) = -\exp\left(-(x/s)^{2}\right)/s\,.$$ By integrating the force due to the potential $V_{s}$, the deflection function can be written in the form $$\begin{aligned} \label{momentum} p(\phi') &= A\phi' +\delta p(\phi')\\ \label{eq:impact} \delta p(\phi') &= - \frac 1\omega \int_{\phi'}^{\infty}\!\!\mathrm d\phi \left.\frac{\d V_{s}}{\d x}\right|_{x=x(\phi)} ,\end{aligned}$$ where $A\phi'$ (shown as a dashed line in Fig. 3a) represents the contribution from the laser field without the potential. The second term $\delta p$ represents the impact of the external potential, which leads to modulations in $p(\phi')$. Whether the modulations are really visible as pronounced peaks in the spectrum depends on the strength of the impact $\delta p(\phi')$. Peaks occur in the first place if $dp/d\phi'= 0$, cf. Eq. (\[eq:model2\]). Physically, this means that the change in the impact strength $d\delta p/d\phi'$ must exactly compensate the change in the drift momentum, which is simply $A$. A weak impact leads only to a marginal decrease of the slope of $p(\phi')$, giving rise to a shallow hump in the spectrum. On the other hand, a strong impact $\delta p$ will overcompensate the change of the drift momentum, leading to a negative slope for some $\phi'$ accompanied by two extrema. This is indeed the case in Fig. 3 for the higher recollisions $n>1$. The cross-over between weak and strong impacts is determined by $d^{2}p/d\phi'^{2}= 0$, which we take as a measure for a potential to produce pronounced peaks.
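A deflection function of this kind can be generated by direct numerical integration of the 1D model Hamiltonian $H=p^{2}/2+xF\cos(\omega t)+V_{s}(x)$. The following sketch uses a fixed-step RK4 integrator; the parameter values are illustrative assumptions, not those of the figures:

```python
import math

def v_prime(x, s):
    # dV_s/dx for the Gaussian potential V_s(x) = -exp(-(x/s)^2)/s
    return (2.0 * x / s**3) * math.exp(-(x / s) ** 2)

def deflection(phi0, F, omega, s, strength=1.0, periods=6, steps=20000):
    """Final drift momentum p + A*sin(phi) of an electron released at rest at
    the origin at laser phase phi0 (fixed-step RK4 in the phase variable phi).
    strength=0 switches the potential off, giving the bare result A*sin(phi0)."""
    def rhs(phi, x, p):
        # Hamilton's equations in the phase variable: dx/dphi, dp/dphi
        return p / omega, (-F * math.cos(phi) - strength * v_prime(x, s)) / omega
    x = p = 0.0
    phi = phi0
    h = 2.0 * math.pi * periods / steps
    for _ in range(steps):
        k1x, k1p = rhs(phi, x, p)
        k2x, k2p = rhs(phi + h / 2, x + h / 2 * k1x, p + h / 2 * k1p)
        k3x, k3p = rhs(phi + h / 2, x + h / 2 * k2x, p + h / 2 * k2p)
        k4x, k4p = rhs(phi + h, x + h * k3x, p + h * k3p)
        x += h / 6 * (k1x + 2 * k2x + 2 * k3x + k4x)
        p += h / 6 * (k1p + 2 * k2p + 2 * k3p + k4p)
        phi += h
    return p + (F / omega) * math.sin(phi)
```

Scanning `phi0` traces out the modulated deflection function; with `strength=0` the exact strong-field drift $A\sin\phi'$ is recovered, which serves as a check on the integrator.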
These two conditions allow us to determine the initial phase $\phi'$ and the strength parameter $s$ of the potential for producing pronounced peaks through soft collisions, as we now show analytically.

![(Color online) Deflection function (left) and corresponding photo-electron spectrum with finite resolution (right) for the Gaussian potential, laser parameters as in Fig. 1. The missing interval in the deflection function represents initial conditions which lead to trapped trajectories. The dashed line corresponds to the strong-field drift momentum $p(\phi') = A\sin\phi'$.[]{data-label="fig:gausspot"}](Fig3-defl-spec-1D){width="\columnwidth"}

To this end we consider the integral \[eq:impactintegral\] $$\delta p(\phi') = - \frac 1\omega \int_{-\infty}^{+\infty}\!\!\mathrm d\phi \left.\frac{\d V_{s}}{\d x}\right|_{x=x(\phi)}$$ for a soft-recollision trajectory $$x(\phi)=\xc(\phi')+ \frac{\xquiv}{2}(\phi{-}\phic)^{2},$$ which is defined by the quadratic dependence of a strong-field trajectory around the recollision phase $\phic$. One can extend this behaviour to $\phi\to\pm\infty$, since the force in Eq. (\[eq:impactintegral\]) vanishes for large $|\phi|$. Note that the impact $\delta p$ depends on the initial phase $\phi'$ through the recollision point $\xc$. The condition $d^{2}p/d\phi'^{2}= 0$ can be cast into the form $f_{2}(\cc) = 0$ for the ratio $\cc\equiv\xc/s$ and the function \[eq:generalf\]$$\begin{aligned} \label{eq:generalf1} f_{j}(c^{\star}) &\equiv -\frac{k_{j}}{\omega}\int_{-\infty}^{+\infty} \!\!\!\mathrm{d}\phi\left.\frac{d^{j+1}V_{1}(x)}{dx^{j+1}}\right|_{x=c^{\star}+\phi^{2}/2} \\ \label{eq:generalf2} k_{j} &\equiv\left[(2n{+}1)\pi\right]^{j}\xquiv^{j-1/2}\big/s^{j+3/2}.\end{aligned}$$ This follows directly from Eq. (\[eq:impact\]) by using the chain rule and an appropriate rescaling of the integration variable. Note that the integral in Eq. (\[eq:generalf1\])
depends only on the *shape* of the potential, but not on any specific parameters of the problem $s$, $F$, or $\omega$. Hence, the value $\cc$ which solves $f_{2}(\cc) = 0$ is a general constant, which assumes the value $\cc = -0.319$ for the Gaussian potential . The first condition $dp/d\phi'= 0$ reads, with the definition , simply $A+f_{1}(\cc)=0$, which can be recast into a form that determines $s$ as a function of the laser parameters $F$, $\omega$ and the order of the recollision $n$, $$\label{eq:svalue} s = \frac{[2(2n{+}1)\pi f_{1}(\cc)]^{2/5}}{(F\omega^{2})^{1/5}}.$$ This allows us to determine quantitatively the scale $s$ and, through the relation $\xc/s = \cc$, also the point of the recollision $\xc$ at which the deflection function has a zero-slope inflection point. Our quasi-analytical determination of the soft-collision parameters $s$ and $\xc$ is remarkably accurate, as the comparison in Table I with the numerically exact values from the soft-colliding trajectory propagated under $H$ reveals. There, we also list the corresponding values for the 1D soft-core Coulomb potential $$\label{eq:softcore} V_{s}(x) = -1/\left(x^{2}{+}s^{2}\right)^{1/2},$$ for which one obtains $\cc=-0.264$.

  ----------------------- ------------ ------------ ------------ ------------ ------------
                           Gaussian                  soft-core                 3D
                           $n=1$        $n=2$        $n=1$        $n=2$        $n=1$
  $s$ (3D: $\rhoc$)        32.7         40.2         24.6         30.2         23.6
  numerical                [*31.9*]{}   [*38.4*]{}   [*23.9*]{}   [*29.4*]{}   [*22.2*]{}
  $-\xc$ (3D: $-\zc$)      10.5         12.8         6.5          8.0          10.9
  numerical                [*10.2*]{}   [*12.6*]{}   [*6.4*]{}    [*8.1*]{}    [*12.4*]{}
  ----------------------- ------------ ------------ ------------ ------------ ------------

  : Soft-recollision parameters (in a.u.) for the Gaussian and the soft-core potential as obtained from Eqs. and , respectively; in the 3D case the corresponding quantities are $\rhoc$ and $\zc$. For comparison, full numerical results for propagation until $t=2T$ and $t=3T$, respectively, are shown in italics for the laser parameters of .
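The universal constant $\cc$ can be checked numerically. Since the prefactor $-k_{2}/\omega$ does not affect the location of the zero, the condition $f_{2}(\cc)=0$ reduces, for the unit-strength Gaussian $V_{1}(x)=-e^{-x^{2}}$, to finding the root of the shape integral $\int\mathrm{d}\phi\, V_{1}'''(c+\phi^{2}/2)$. A minimal Python sketch (the quadrature window and step count are our own choices):

```python
import math

def Vppp(x):
    # third derivative of V_1(x) = -exp(-x**2)
    return 4.0 * x * (2.0 * x * x - 3.0) * math.exp(-x * x)

def g(c, half_width=8.0, steps=4000):
    # Riemann sum for the shape integral over phi in [-half_width, half_width];
    # the integrand vanishes rapidly outside this window.
    h = 2.0 * half_width / steps
    return h * sum(Vppp(c + (-half_width + k * h) ** 2 / 2.0)
                   for k in range(steps + 1))

# Bisection on [-1, 0], where the shape integral changes sign.
lo, hi = -1.0, 0.0
assert g(lo) > 0.0 > g(hi)
for _ in range(50):
    mid = 0.5 * (lo + hi)
    if g(lo) * g(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
c_star = 0.5 * (lo + hi)   # close to the quoted value -0.319
```

The same procedure with $V_{1}(x)=-(x^{2}+1)^{-1/2}$ should reproduce the soft-core value $\cc=-0.264$.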
In fact, the 3D physical case discussed in the beginning can be mapped onto the 1D soft-core potential since $\rho$ is very slowly varying across the soft collision (see b) and can be effectively treated as a parameter, i.e., we take at the soft collision $\rhoc=s$ in the soft-core potential . In order to fulfill the saddle-point conditions we only have to exchange $d^{2}p/d\phi'^{2}=0$ from our 1D treatment with $\partial p_{z}/\partial \rho =0$. The latter reads $$\label{eq:integralford} \int_{-\infty}^{+\infty}\!\!\mathrm{d}\phi\left[2\frac{dV}{dz}+z\frac{d^{2}V}{dz^{2}}\right]_{z=\Cc+\phi^{2}/2}=0,$$ and can be expressed with integrals from , producing an equation dependent only on $\Cc = \zc/\rhoc$. A similar procedure as described above for the 1D case yields $\Cc=-0.462$ and ultimately $$\label{eq:rhovalue} \rhoc= 2.90/\left(F\omega^{2}\right)^{1/5}$$ for the first recollision ($n=1$), in very good agreement with the numerical values, see Table I. In summary, we have identified a soft-recollision mechanism which induces energy bunching for low-energy photo-electrons along the laser polarization. The bunching occurs since electrons with initially different drift momenta can acquire impacts through soft recollisions which exactly counterbalance the initial differences, leading to a series of photo-electron peaks with relative positions $p_{n}/p_{n-1}=(2n{-}1)/(2n{+}1)$ for $n{>}1$. This is a universal result which depends neither on the dimensionality of the potential (one degree of freedom is enough), nor on the character of the potential (short or long range), nor on the laser intensity and frequency. It does, however, require a quiver amplitude $\zquiv$ much larger than the range of the potential $s$. This is necessary to provide well-defined impacts $\delta p$ by the potential when the mainly laser-driven electron trajectory passes the potential.
The absolute positions $p_{n}$ of the peaks depend slightly on the potential and the laser pulse and will also be influenced by focal-volume averaging and the pulse envelope. This applies in particular to the higher-order peaks very close to threshold, where additional dynamical effects may also mask the soft-recollision peaks. However, since each peak $p_{n}$ is generated in a successively later laser period $n{+}1$, one can in principle control the number of peaks by varying the total number of cycles in the laser pulse [^2]. The phenomenon is essentially classical because the potential perturbs the strong-field dynamics only marginally. The latter involves only operators up to quadratic order (depending on the length or velocity gauge). Owing to the Ehrenfest theorem, the quantum evolution can therefore be described equivalently by classical mechanics. [10]{} P. B. Corkum, Phys. Rev. Lett. [**71**]{}, 1994 (1993). M. Y. Kuchiev, JETP Lett. [**45**]{}, 404 (1987). J. Itatani [*et al.*]{}, Nature [**432**]{}, 867 (2004). U. Saalmann and J. M. Rost, Phys. Rev. Lett. [**100**]{}, 133006 (2008). F. Brunel, Phys. Rev. Lett. [**59**]{}, 52 (1987). M. Lewenstein, P. Balcou, M. Y. Ivanov, A. L’Huillier, and P. B. Corkum, Phys. Rev. A [**49**]{}, 2117 (1994). G. G. Paulus, W. Becker, W. Nicklich, and H. Walther, J. Phys. B [**27**]{}, L703 (1994). D. B. Milošević, G. G. Paulus, D. Bauer, and W. Becker, J. Phys. B [**39**]{}, R203 (2006). F. Krausz and M. Ivanov, Rev. Mod. Phys. [**81**]{}, 163 (2009). C. I. Blaga [*et al.*]{}, Nat. Phys. [**5**]{}, 335 (2009). W. Quan [*et al.*]{}, Phys. Rev. Lett. [**103**]{}, 093001 (2009). C. Liu and K. Z. Hatsagortsyan, Phys. Rev. Lett. [**105**]{}, 113003 (2010). T.-M. Yan, S. V. Popruzhenko, M. J. J. Vrakking, and D. Bauer, Phys. Rev. Lett. [**105**]{}, 253002 (2010). J. M. Rost, Phys. Rep. [**297**]{}, 271 (1998). G. van de Sand and J. M. Rost, Phys. Rev. Lett. [**83**]{}, 524 (1999).
[^1]: The situation is similar to the logarithmic singularities of the density of states in a two-dimensional crystal: L. Van Hove, Phys. Rev. [**89**]{}, 1189 (1953). [^2]: A. Kästner, U. Saalmann, and J. M. Rost, in preparation.
--- abstract: 'We prove results on the structure of a subset of the circle group having positive inner Haar measure and doubling constant close to the minimum. These results go toward a continuous analogue in the circle of Freiman’s $3k-4$ theorem from the integer setting. An analogue of this theorem in ${\mathbb}{Z}_p$ has been pursued extensively, and we use some recent results in this direction. For instance, obtaining a continuous analogue of a result of Serra and Zémor, we prove that if a subset $A$ of the circle is not too large and has doubling constant at most $2+\varepsilon$ with $\varepsilon<10^{-4}$, then for some integer $n>0$ the dilate $n\cdot A$ is included in an interval in which it has density at least $1/(1+\varepsilon)$. Our arguments yield other variants of this result as well, notably a version for two sets which makes progress toward a conjecture of Bilu. We include two applications of these results. The first is a new upper bound on the size of $k$-sum-free sets in the circle and in ${\mathbb}{Z}_p$. The second gives structural information on subsets of ${\mathbb}{R}$ of doubling constant at most $3+\varepsilon$.' address: - | Autonomous University of Madrid, and ICMAT\ Ciudad Universitaria de Cantoblanco\ Madrid 28049\ Spain - 'Université de Lorraine, Institut Élie Cartan de Lorraine, UMR 7502, Vandoeuvre-lès-Nancy, F-54506, France.' author: - Pablo Candela - Anne De Roton title: On sets with small sumset in the circle --- Introduction ============ A result of Freiman from 1959 [@Freiman1], often called the $3k-4$ theorem, states that if $A$ is a set of integers such that the sumset $A+A$ satisfies $|A+A|\leq 3|A| - 4$, then $A$ is contained in an arithmetic progression of length $|A+A| - |A| + 1$. This theorem motivated the search for analogues in other settings, especially in groups ${{\mathbb}{Z}_{p}}$ of integers with addition modulo a prime $p$. Treatments of the latter direction include [@Freiman2; @Gryn; @Nat; @Rodseth; @S-Z].
Part of the difficulty in finding a fully satisfactory ${{\mathbb}{Z}_{p}}$-analogue of the $3k-4$ theorem is that the statement has to involve more assumptions than in the integer setting, in particular to avoid certain counterexamples that occur in ${{\mathbb}{Z}_{p}}$ when $A+A$ is too large. In [@S-Z], Serra and Zémor proposed the following conjecture and proved a result towards it (namely [@S-Z Theorem 3], which we also recall below). \[conj:S-Z\] Let $p$ be a prime, let $r$ be a non-negative integer, and let $A\subset {{\mathbb}{Z}_{p}}$ satisfy $$|A+A| = 2|A| + r -1 \leq \tfrac{p}{2}+|A|-2, \;\; \textrm{ and }\;\; r\leq |A|-3.$$ Then $A$ is included in an arithmetic progression of length $|A| + r$. By an *interval* in ${{\mathbb}{Z}_{p}}$ we mean an arithmetic progression of difference 1. For a subset $A$ of an abelian group and an integer $n$, we denote by $n\cdot A$ the image of $A$ under the homomorphism $x\mapsto n\,x$ (for $A\subset {{\mathbb}{Z}_{p}}$ and $n\in {{\mathbb}{Z}_{p}}$ we also use $n\cdot A$ to denote the image of $A$ under $x\mapsto n\,x$). The conclusion of Conjecture \[conj:S-Z\] can be rephrased as follows: there exists $n\in{{\mathbb}{Z}_{p}}\setminus\{0\}$ and an interval $I\subset {{\mathbb}{Z}_{p}}$ such that $n\cdot A\subset I$ and $|I|\leq |A|+r$. Freiman’s $3k-4$ theorem has an extension applicable to two possibly different sets $A$, $B$. A ${{\mathbb}{Z}_{p}}$-analogue of this extension has also been proposed, namely the so-called *$r$-critical pair conjecture*. A version of this conjecture appeared[^1] in [@H-S-Z] and was proved for small sets in [@B-L-R; @Green-Ruzsa]. We recall the following more recent version [@Gryn Conjecture 19.2].
\[conj:asymZp\] Let $p$ be a prime, let $r$ be a non-negative integer, and let $A, B$ be non-empty subsets of ${{\mathbb}{Z}_{p}}$ with $|A|\geq |B|$ and satisfying $$|A + B | = |A|+|B|+r-1 \leq \tfrac{1}{2}(p+|A|+|B|)-2, \;\; \textrm{ and }\;\; r\leq |B|-3.$$ Then there exist intervals $I,J,K\subset {{\mathbb}{Z}_{p}}$ and $n\in {{\mathbb}{Z}_{p}}\setminus\{0\}$ such that $n\cdot A\subset I$, $n\cdot B\subset J$, $n\cdot(A + B) \supset K$, and $|I|\leq |A|+r$, $|J|\leq |B|+r$, $|K|\geq |A| + |B|-1$. Note that this extends Conjecture \[conj:S-Z\] in particular in that the conclusion here concerns not only $A,B$ but also the third set $A+B$. The following equivalent version of Conjecture \[conj:asymZp\], appearing for instance in [@Gryn Conjecture 19.5], is notable for its symmetry. \[conj:Zptrio\] Let $p$ be a prime, let $r$ be a non-negative integer, and let $A_1,A_2,A_3$ be subsets of ${{\mathbb}{Z}_{p}}$ satisfying the following conditions: $$|A_1|,|A_2|,|A_3|> r+2, \qquad |A_1|+|A_2|+|A_3|>p-r, \qquad |A_1+A_2+A_3|< p.$$ Then there exist intervals $I_1,I_2,I_3\subset {{\mathbb}{Z}_{p}}$ and $n\in {{\mathbb}{Z}_{p}}\setminus\{0\}$ such that $n \cdot A_j \subset I_j$ and $|I_j|\leq |A_j|+r$ for $j=1,2, 3$. Considering analogues of the $3k-4$ theorem in the continuous setting of the circle group ${\mathbb}{T}={\mathbb}{R}/{\mathbb}{Z}$ goes back at least to the paper [@FMY] from 1973 by Freiman, Judin, and Moskvin. Conjecture \[conj:Zptrio\] has a natural analogue in this setting. In this paper, we obtain the following result toward this continuous analogue. \[thm:Ttrio\] Let $\rho\in (0,c)$ where $c=3.1\cdot 10^{-1549}$. 
Let $A_1,A_2,A_3$ be subsets of ${\mathbb}{T}$ satisfying the following conditions: $${\mu}(A_1),{\mu}(A_2),{\mu}(A_3)> \rho, \quad {\mu}(A_1)+{\mu}(A_2)+{\mu}(A_3)>1-\rho, \quad {\mu}(A_1+A_2+A_3)< 1.$$ Then there exist closed intervals $I_1,I_2,I_3\subset {\mathbb}{T}$ and $n\in {\mathbb}{N}$ such that $n \cdot A_j \subset I_j$ and $\mu(I_j)\leq {\mu}(A_j)+\rho$ for $j=1,2,3$. Here and throughout this paper, we denote by ${\mathbb}{N}$ the set of positive integers and by $\mu$ the inner Haar measure on ${\mathbb}{T}$, thus for any set $A\subset{\mathbb}{T}$ we have that $\mu(A)$ is the supremum of the Haar measures of closed sets included in $A$. We use the inner Haar measure, rather than the Haar measure, in order to deal with non-measurable sets and with the fact that the sumset of two measurable sets can be non-measurable. The conjecture mentioned just before Theorem \[thm:Ttrio\] puts forward that this theorem holds for every $\rho\in (0,1)$. As we will show, the argument that establishes the equivalence between Conjectures \[conj:asymZp\] and \[conj:Zptrio\] can be adapted to the continuous setting, by incorporating several additional technicalities, to show that Theorem \[thm:Ttrio\] implies the following result. \[thm:Tasym\] Let $\rho\in (0,c)$ where $c=3.1\cdot 10^{-1549}$. Let $A,B\subset {\mathbb}{T}$ satisfy $${\mu}(A+B) \; = \; {\mu}(A)+{\mu}(B)+ \rho \, < \; \tfrac{1}{2}\big(1+{\mu}(A)+{\mu}(B)\big), \quad \textrm{ and }\quad \rho < {\mu}(B)\leq {\mu}(A).$$ Then there exist intervals $I,J,K\subset {\mathbb}{T}$, with $I,J$ closed, $K$ open, $n\in {\mathbb}{N}$, such that $n\cdot A \subset I$, $n\cdot B \subset J$, $K\subset n\cdot (A+B)$, and $\mu(I)\leq {\mu}(A)+\rho$, $\mu(J)\leq {\mu}(B)+\rho$, $\mu(K)\geq {\mu}(A)+{\mu}(B)$. 
This theorem makes progress toward an analogue of Conjecture \[conj:asymZp\] for ${\mathbb}{T}$, an analogue which originated in work of Bilu on the so-called *$\alpha+2\beta$ inequality* in the torus (see [@Bilu Conjecture 1.2]). We detail this in Remark \[rem:BiluConj\] in Section \[sec:mainpfs\], after having proved Theorem \[thm:Tasym\]. The bound $c=3.1\cdot 10^{-1549}$ in Theorems \[thm:Ttrio\] and \[thm:Tasym\] comes from a result in ${\mathbb}{Z}_p$ due to Grynkiewicz, as we explain in Section \[sec:mainpfs\]. In the *symmetric case*, i.e. when $A=B$, a better bound had been given by the result of Serra and Zémor toward Conjecture \[conj:S-Z\] (recalled as Theorem \[thm:Zpsym\] below). We prove the following ${\mathbb}{T}$-analogue of this result. \[thm:Tsym\] Let $0\leq \varepsilon\leq 10^{-4}$. Let $A\subset {\mathbb}{T}$ satisfy ${\mu}(A)>0$ and $${\mu}(A+A)= (2+\varepsilon)\, {\mu}(A)\; <\;\tfrac{1}{2} +{\mu}(A).$$ Then there exist intervals $I,K\subset {\mathbb}{T}$, with $I$ closed, $K$ open and $n\in {\mathbb}{N}$, such that $n\cdot A \subset I$, $K\subset n\cdot (A+A)$, $\mu(I)\leq {\mu}(A+A)-{\mu}(A)$ and $\mu(K)\geq 2{\mu}(A)$. Apart from their relation to continuous analogues of the $3k-4$ theorem and Bilu’s conjecture, the theorems above are motivated by the following applications. The first application concerns the problem of determining the supremum of measures of Borel sets $A\subset {\mathbb}{T}$ such that the cartesian power $A^3$ contains no triple $(x,y,z)$ solving the equation $x+y=kz$, where $k\geq 3$ is a fixed integer. This is an analogue in ${\mathbb}{T}$ of a problem which goes back to Erdős (see ) and which has been treated in several works, first in the integer setting (see in particular ) and then also in the continuous setting of an interval in ${\mathbb}{R}$. The above-mentioned supremum is seen to be at most $1/3$ by a simple application of Raikov’s inequality from [@Raikov] (see also [@Macbeath Theorem 1]).
Our result, discussed in Section \[sec:kfs\], improves on this upper bound using Theorem \[thm:Tsym\]; see Theorem \[thm:ksfs-ub\]. Via a correspondence established in [@CS] between this problem in ${\mathbb}{T}$ and a similar problem in ${{\mathbb}{Z}_{p}}$, Theorem \[thm:ksfs-ub\] implies a similar result in ${{\mathbb}{Z}_{p}}$; see Remark \[rem:TtoZp\]. The second application provides new results about the structure of subsets of ${\mathbb}{R}$ of doubling less than 4. We discuss this in Section \[sec:dlt4\]. Essentially, if a closed set $A\subset [0,1]$ has doubling constant at most $3+\varepsilon$, then modulo 1 it has doubling constant at most $2+\varepsilon$, and so Theorem \[thm:Tsym\] can be used to obtain information on the structure of $A$; see Theorem \[thm:small\_doubling\_R\]. In particular, under a special case of the conjecture of Bilu mentioned above [@Bilu Conjecture 1.2], we obtain a version of [@EGM Theorem 6.2] with effective bounds; see Corollary \[cor:EGM\]. **Acknowledgements.** The authors are very grateful to Imre Ruzsa, Oriol Serra and Gilles Zémor for useful comments. This work was supported by project ANR-12-BS01-0011 CAESAR and by grant MTM2014-56350-P of MINECO. Proofs of the main results {#sec:mainpfs} ========================== As mentioned in the introduction, the analogues in ${{\mathbb}{Z}_{p}}$ of Theorems \[thm:Ttrio\] and \[thm:Tsym\] are known. Indeed, Theorem \[thm:Ttrio\] is a ${\mathbb}{T}$-analogue of the following result. \[thm:Zptrio\] Let $p$ be a prime, and let $r$ be an integer with $0\leq r \leq c\, p-1.2$ where $c=3.1\cdot 10^{-1549}$. Let $A_1,A_2,A_3$ be subsets of ${{\mathbb}{Z}_{p}}$ satisfying the following conditions: $$|A_1|,\, |A_2|, \,|A_3|> r+2, \quad |A_1|+|A_2|+|A_3|> p-r, \quad |A_1+A_2+A_3|< p.$$ Then there exist intervals $I_1,I_2,I_3\subset {{\mathbb}{Z}_{p}}$ and a non-zero $n\in {{\mathbb}{Z}_{p}}$ such that $n \cdot A_j \subset I_j$ and $|I_j|\leq |A_j|+r$, for $j=1,2,3$. 
For a subset $A$ of an abelian group $G$ we denote by $A^c$ the complement $G\setminus A$. Theorem \[thm:Zptrio\] can be deduced from the following result of Grynkiewicz (see [@Gryn Theorem 21.8]). \[thm:ZpGryn\] Let $p$ be a prime, and let $r$ be an integer with $0\leq r \leq c\,p-1.2$ where $c=3.1\cdot 10^{-1549}$. Let $A,B$ be subsets of ${{\mathbb}{Z}_{p}}$ satisfying the following conditions: $$|A|,\, |B|,\, |C|> r+2, \qquad |A+B|\leq |A|+|B|+r-1,$$ where $C=- (A+B)^c$. Then there exist intervals $I,J,K\subset {{\mathbb}{Z}_{p}}$ and a non-zero $n\in {{\mathbb}{Z}_{p}}$ such that $n \cdot A \subset I$, $n \cdot B \subset J$, $n \cdot C \subset K$, and $|I|\leq |A|+r$, $|J|\leq |B|+r$, $|K|\leq |C|+r$. Theorem \[thm:ZpGryn\] implies Theorem \[thm:Zptrio\]. Starting from the assumptions in Theorem \[thm:Zptrio\], note that since $|A_1+A_2+A_3|<p$ we may assume (modulo translating $A_3$, which does not affect the theorem) that $0\not\in A_1+A_2+A_3$. Hence $A_3\subset (-A_1-A_2)^c= -(A_1+A_2)^c$. Let $A=A_1$, $B=A_2$, $C=-(A_1+A_2)^c$, and note that $|A|,\,|B|,\,|C|> r+2$. Moreover, from $|A_1|+|A_2|+|A_3|>p-r$ we deduce that $|A+B|\leq |A|+|B|+r-1$. Let $s\leq r$ be such that $|A+B|=|A|+|B|+s-1$. Applying Theorem \[thm:ZpGryn\] with $s$, we obtain intervals $I_1=I$, $I_2=J$, $I_3=K$ and $n\in{{\mathbb}{Z}_{p}}\setminus\{0\}$ such that $n\cdot A_j\subset I_j$ and $|I_j|\leq |A_j|+s\leq |A_j|+r$ for $j=1,2$. Moreover $n\cdot A_3\subset n\cdot C\subset I_3$, and $|I_3|\leq |C|+s = p-|A_1+A_2|+s = p-|A_1|-|A_2|+1 \leq |A_3|+r$, so we obtain the conclusion of Theorem \[thm:Zptrio\]. One can also deduce Theorem \[thm:ZpGryn\] from Theorem \[thm:Zptrio\] in a straightforward way; we leave this to the reader, and in any case the main ideas in this deduction will be used in the continuous setting in Subsection \[subsec:gen\], to prove Theorem \[thm:Tasym\]. 
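The set-theoretic step in this deduction, namely that $0\not\in A_1+A_2+A_3$ forces $A_3\subset -(A_1+A_2)^c$, together with the size bookkeeping $|C|=p-|A_1+A_2|$, is easy to verify computationally. A minimal Python sketch with toy sets of our own choosing (not from the text):

```python
# Illustration in Z_p of the containment used above: if 0 is not in
# A1 + A2 + A3, then A3 is contained in C = -(A1 + A2)^c.  Toy data, p = 17.
p = 17
A1 = {1, 2, 3}
A2 = {0, 1, 2}
A3 = {5, 6}

sum12 = {(a + b) % p for a in A1 for b in A2}
sum123 = {(s + c) % p for s in sum12 for c in A3}
assert 0 not in sum123                          # hypothesis (after translating A3)

C = {(-x) % p for x in set(range(p)) - sum12}   # C = -(A1 + A2)^c
assert A3 <= C                                  # the claimed containment
assert len(C) == p - len(sum12)                 # size bookkeeping from the proof
```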
In the case $A=B$ of Theorem \[thm:ZpGryn\] (the symmetric case), the following result of Serra and Zémor toward their Conjecture \[conj:S-Z\] provided a better bound for $r$ than in Theorem \[thm:ZpGryn\] (see [@S-Z Theorem 3]). \[thm:Zpsym\] Let $p$ be a prime greater than $2^{94}$, let $0\leq \varepsilon \leq 10^{-4}$, and let $A\subset {{\mathbb}{Z}_{p}}$ satisfy $$|A+A| = (2+\varepsilon)|A|-1 \leq \min\big\{\,3|A|-4, \; \tfrac{p}{2}+|A|-2\,\big\}.$$ Then there is an interval $I\subset {{\mathbb}{Z}_{p}}$ and $n \in {{\mathbb}{Z}_{p}}\setminus\{0\}$ such that $n\cdot A \subset I$ and $|I|\leq |A+A|-|A|+1$. In this section we prove Theorems \[thm:Ttrio\], \[thm:Tasym\] and \[thm:Tsym\]. Inspired by arguments of Bilu from [@Bilu], we deduce the first and the third of these theorems from their discrete versions, i.e. Theorems \[thm:Zptrio\] and \[thm:Zpsym\] respectively. In the process, we also deduce Theorem \[thm:Tasym\] using Theorem \[thm:Ttrio\]. Let $\tfrac{1}{p}{{\mathbb}{Z}_{p}}$ denote the subgroup of ${\mathbb}{T}$ isomorphic to ${{\mathbb}{Z}_{p}}$. We use the following notation for discrete approximations of sets in ${\mathbb}{T}$. For any set $A\subset {\mathbb}{T}$ and prime $p$, we define the set $$A_p= A\, \cap \, \tfrac{1}{p}{{\mathbb}{Z}_{p}}.$$ To prove Theorem \[thm:Ttrio\], first we focus on sets in ${\mathbb}{T}$ that are unions of finitely many intervals. We refer to such sets as *simple sets*. In Subsection \[subsec:sos\] we show that if $A_1,A_2,A_3$ are open simple sets and satisfy the conditions of Theorem \[thm:Ttrio\], then their discrete approximations $A_{j,p}$ obey the conclusion of Theorem \[thm:Zptrio\]. 
However, this is not enough to deduce directly that the conclusion of Theorem \[thm:Ttrio\] holds for the original sets $A_j$, because the integer $n$ provided by Theorem \[thm:Zptrio\] is not a priori bounded in any way that would ensure that the sets $n\cdot A_j$ are contained in suitably small intervals the way their discrete approximations are. To ensure this additional fact, in the next subsection we use the Fourier transform on ${{\mathbb}{Z}_{p}}$ to bound the integer $n$. Finally, in Subsection \[subsec:gen\] we obtain Theorems \[thm:Ttrio\] and also \[thm:Tasym\] and \[thm:Tsym\], by generalizing from simple sets to arbitrary sets. On the $n$-diameter of simple sets ---------------------------------- \ Given a set $A\subset {\mathbb}{T}$ and an integer $n$, we define the *$n$-diameter* of $A$ by $$D_n(A)=\inf \{\mu(I): I\subset {\mathbb}{T}\textrm{ a closed interval such that }n\cdot A\subset I\}.$$ For a set $B\subset {{\mathbb}{Z}_{p}}$ and $n\in {\mathbb}{Z}_p$, we define similarly the *$n$-diameter* of $B$ by $$D_n(B)=\min \{|I|/p: I\subset {{\mathbb}{Z}_{p}}\textrm{ an interval such that }n\cdot B\subset I\}.$$ We prove the following result concerning the $n$-diameter of simple sets in ${\mathbb}{T}$. \[prop:SimpSetDiamT\] Let $A\subset {\mathbb}{T}$ be a union of at most $m$ intervals with $\mu(A)>0$, and suppose that $n\in {\mathbb}{Z}$ satisfies $D_n(A)<\min\big(\tfrac{1}{2},\tfrac{\mu(A)}{1-2/\pi}\big)$. Then $|n|\leq \tfrac{m}{2\big(\mu(A) -(1-2/\pi)D_n(A)\big)}$. For $s\in {{\mathbb}{Z}_{p}}$ we denote by $|s|_p$ the absolute value of the unique integer in $(-\tfrac{p}{2},\tfrac{p}{2})$ congruent to $s$ modulo $p$. We deduce Proposition \[prop:SimpSetDiamT\] from the following discrete version, which is in fact the main result from this subsection that we use in the sequel. 
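For finite examples, the $n$-diameter of a nonempty $B\subset{{\mathbb}{Z}_{p}}$ can be computed from the largest cyclic gap $g$ between consecutive elements of $n\cdot B$: the minimal covering interval avoids exactly the interior of that gap, so it has $p-g+1$ points. A minimal Python sketch (the helper names are ours, not the paper's):

```python
# n-diameter of B in Z_p via the largest cyclic gap of n.B:
# the minimal interval containing n.B has p - g + 1 points,
# where g is the largest gap between consecutive elements.
def diameter(B, n, p):
    pts = sorted({(n * b) % p for b in B})
    gaps = [pts[(i + 1) % len(pts)] - pts[i] for i in range(len(pts))]
    gaps[-1] += p                     # wrap-around gap
    return (p - max(gaps) + 1) / p

def best_diameter(B, p):
    # minimum of D_n(B) over all dilations n = 1, ..., p-1
    return min(diameter(B, n, p) for n in range(1, p))

p = 13
B = {0, 1, 2, 10}
assert diameter(B, 1, p) == 6 / 13    # minimal interval {10,...,2} has 6 points
assert best_diameter(B, p) <= 6 / 13
```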
\[prop:SimpSetDiamZp\] Let $B\subset {{\mathbb}{Z}_{p}}$ be a union of at most $m$ intervals with $\tfrac{|B|}{p}=\beta >0$, and suppose that $n\in {{\mathbb}{Z}_{p}}$ satisfies $D_n(B)<\min\big(\tfrac{1}{2},\tfrac{\beta}{1-2/\pi}\big)$. Then $|n|_p\leq \tfrac{m}{2\big(\beta -(1-2/\pi)D_n(B)\big)}$. Proposition \[prop:SimpSetDiamT\] follows by applying Proposition \[prop:SimpSetDiamZp\] to $B=A_p$ for primes $p\to \infty$. To obtain Proposition \[prop:SimpSetDiamZp\], we use the following result concerning the Fourier coefficients of a subset of ${{\mathbb}{Z}_{p}}$ that is the union of at most $m$ disjoint intervals, to the effect that these Fourier coefficients decay in a useful way. For $f:{{\mathbb}{Z}_{p}}\rightarrow {\mathbb}{C}$, let ${\widehat}{f}$ denote the Fourier transform ${\widehat}{{{\mathbb}{Z}_{p}}}\cong {{\mathbb}{Z}_{p}}\to \mathbb{C}$ defined by ${\widehat}{f}(s)=\frac1p\sum_{j\in{{\mathbb}{Z}_{p}}}f(j)e^{2\pi i\, \frac{sj}{p}}$. We write $[m]$ for the set of integers $\{1,\ldots,m\}$. Let $J_1,J_2,\dots,J_m$ be pairwise disjoint intervals in ${{\mathbb}{Z}_{p}}$, and let $B = \bigsqcup_{i\in [m]} J_i$. Let $s$ be a non-zero element of ${\widehat}{{{\mathbb}{Z}_{p}}}\cong {{\mathbb}{Z}_{p}}$. Then we have $$\label{eq:FourierJordan} |{\widehat}{1_B}(s)| \leq \frac{m}{2 |s|_p}.$$ We first estimate the Fourier coefficients of a single interval $J\subset {{\mathbb}{Z}_{p}}$, by the following standard calculation.
Supposing that $J=\{a,a+1,\dots,a+(t-1)\}$, for every non-zero $s\in (-\frac{p}{2},\frac{p}{2})$ we have $$\label{eq:geomcalc1} |{\widehat}{1_J}(s)| = \Big| \tfrac{1}{p} \sum_{j=0}^{t-1} e^{2\pi i \frac{s(a+j)}{p}}\Big| = \tfrac{1}{p} \Big| \sum_{j=0}^{t-1} e^{2\pi i\, \frac{sj}{p}}\Big| = \tfrac{1}{p} \frac{|1-e^{2\pi i\, \frac{st}{p}}|}{|1-e^{2\pi i\, \frac{s}{p}}|} \leq \tfrac{1}{p} \frac{2}{|1-e^{2\pi i\, \frac{s}{p}}|}.$$ Letting $\|\theta\|_{{\mathbb}{T}}$ denote the distance from $\theta\in {\mathbb}{R}$ to the nearest integer, and using the standard estimate $|1-e^{2\pi i\,\theta}| \geq 4 \|\theta\|_{{\mathbb}{T}}$ for $\|\theta\|_{{\mathbb}{T}}<1/2$, we deduce that $$\label{eq:FourierInterval} |{\widehat}{1_J}(s)| \leq \frac{1}{2\,|s|_p}.$$ Now, since $1_B = 1_{J_1}+\cdots+1_{J_m}$, we deduce by linearity of the Fourier transform, the triangle inequality, and applying to each interval $J_i$. An immediate consequence of this lemma is that for such a set $B$ the large Fourier coefficients can only occur at bounded frequencies, in the following sense. \[cor:FreqUB\] Let $J_1,\dots,J_m \subset {{\mathbb}{Z}_{p}}$ be pairwise disjoint intervals, let $B = \bigsqcup_{i\in [m]} J_i$, and let $\gamma>0$. If $s\in {{\mathbb}{Z}_{p}}$ satisfies $|{\widehat}{1_B}(s)| \geq \gamma$, then $|s|_p\leq \frac{m}{2\gamma}$. We may assume that $s\neq 0$, and then by we have $ \frac{m}{2|s|_p} \geq |{\widehat}{1_B}(s)| \geq \gamma$, whence the result follows. We shall combine this corollary with the following result. \[lem:FourierLB\] Let $B\subset {{\mathbb}{Z}_{p}}$, let $n\in {{\mathbb}{Z}_{p}}\setminus\{0\}$, and let $I$ be an interval in ${{\mathbb}{Z}_{p}}$ such that $n\cdot B\subset I$ and $|I|< p/2$. Then $$\label{eq:FourierLB} \big|{\widehat}{1_B}(n)\big| > \tfrac{1}{p}\big(|B| - (1-\tfrac{2}{\pi})|I|\big).$$ This lemma yields a positive lower bound for $\big|{\widehat}{1_B}(n)\big|$ when $|B|/|I| > 1-\tfrac{2}{\pi} \approx 0.363$.
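The decay bound established above for a union of $m$ intervals can be verified directly on small examples. A minimal Python sketch (toy parameters of our own choosing, normalization as in the definition of ${\widehat}{f}$):

```python
import cmath

p, m = 101, 2
B = set(range(0, 10)) | set(range(40, 55))   # two disjoint intervals in Z_p

def fourier(B, s, p):
    # hat(1_B)(s) = (1/p) * sum_{j in B} e^{2 pi i s j / p}
    return sum(cmath.exp(2j * cmath.pi * s * j / p) for j in B) / p

for s in range(1, p):
    s_abs = min(s, p - s)                    # |s|_p
    assert abs(fourier(B, s, p)) <= m / (2 * s_abs) + 1e-12
```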
We have $$\begin{aligned} {\widehat}{1_B}(n) & = & \tfrac{1}{p}\sum_{j\in {{\mathbb}{Z}_{p}}} 1_B(j)\, e^{2\pi i\, \frac{nj}{p}} = \tfrac{1}{p}\sum_j 1_B(n^{-1} j) \, e^{2\pi i\, \frac{j}{p}} = \tfrac{1}{p}\sum_j 1_{n\cdot B}(j)\, e^{2\pi i\, \frac{j}{p}} \\ & = & \tfrac{1}{p}\sum_j 1_I(j) \, e^{2\pi i\, \frac{j}{p}} + \tfrac{1}{p}\sum_j (1_{n\cdot B}(j)-1_I(j)) \, e^{2\pi i\, \frac{j}{p}}.\end{aligned}$$ The last sum here has magnitude at most $\tfrac{1}{p} |I\setminus n\cdot B|$. We may assume that $I=\{0,1,\dots,(t-1)\}$ for some $t< p/2$. Hence $$\begin{aligned} | {\widehat}{1_B}(n) | & \geq & \tfrac{1}{p} \Big(\Big|\sum_{j \in I} e^{2\pi i\, \frac{j}{p}} \Big|-|I\setminus n\cdot B|\Big) = \tfrac{1}{p}\Big( \frac{|1-e^{2\pi i\, \frac{t}{p}}|}{|1-e^{2\pi i\, \frac{1}{p}}|}+|B|-|I|\Big)\end{aligned}$$ where we have used the same calculation as in . Using the estimates $$4 \|\theta\|_{{\mathbb}{T}} \leq |1-e^{2\pi i\,\theta}| \leq 2\pi \|\theta\|_{{\mathbb}{T}}\;\textrm{ for }\|\theta\|_{{\mathbb}{T}}<1/2,$$ we obtain $\frac{|1-e^{2\pi i\, \frac{t}{p}}|}{|1-e^{2\pi i\, \frac{1}{p}}|} \geq \frac{4 t/p}{2\pi/p}=\frac{2}{\pi}|I|$, and the result follows. By definition of $D_n(B)$ there is an interval $I\subset {{\mathbb}{Z}_{p}}$ satisfying $\frac{|I|}{p}=D_n(B)<\tfrac{1}{2}$ and $n\cdot B\subset I$. Lemma \[lem:FourierLB\] gives us $|{\widehat}{1_B}(n)|> \tfrac{1}{p}\big(|B|- (1-\tfrac{2}{\pi})|I|\big)$, and this lower bound is positive by our assumptions. Combining this with Corollary \[cor:FreqUB\], we obtain $|n|_p\leq \frac{m}{2\big(\beta- (1-{2}/{\pi})D_n(B)\big) }$, as claimed. Some restriction on the size of $D_n(B)$ is necessary in Lemma \[lem:FourierLB\] and in Proposition \[prop:SimpSetDiamZp\]. Indeed, if $B=I=\{0,\ldots,t-1\}$ with $t=p(1-\theta)$ and $0<\theta< 1/2$, then $D_1(B)=|B|/p=1-\theta$. 
In this case, with $n=1$ we have $$|{\widehat}{1_B}(n)| =|{\widehat}{1_I}(1) |=\tfrac{1}{p} \frac{|1-e^{2\pi i\, \frac{t}{p}}|}{|1-e^{2\pi i\, \frac{1}{p}}|}=\tfrac{1}{p} \frac{\big|\sin\big(\pi \frac{t}{p}\big)\big|}{\sin\big(\frac{\pi}{p}\big)}=\tfrac{1}{p} \frac{\sin\left(\pi \theta\right)}{\sin\big(\frac{\pi}{p}\big)}.$$ For $p$ large, this is very close to $\tfrac1\pi\sin(\pi\theta)< 1/\pi$, whereas $\tfrac{1}{p}\big(|B| - (1-\tfrac{2}{\pi})|I|\big)=\tfrac{2}{\pi}(1-\theta)> 1/\pi$. This shows that can fail if $D_n(B)>1/2$. Proposition \[prop:SimpSetDiamZp\] fails for this set $B$ also if $\beta=1-\theta>\pi/4$, since in this case we have $D_1(B)=\beta$, $m=1$, and yet $1>\tfrac{\pi}{4\beta}=\tfrac{1}{2\big(\beta -(1-2/\pi)D_1(B)\big)}$. Proof of the main result for simple open sets {#subsec:sos} --------------------------------------------- \ In this subsection we establish Theorem \[thm:Ttrio\] for simple open sets as follows. \[prop:SimpSet\] Let $\rho\in (0,c)$ where $c=3.1\cdot 10^{-1549}$. For each $j\in [3]$ let $A_j\subset {\mathbb}{T}$ be a union of at most $m$ pairwise disjoint open intervals, and suppose that $$\label{eq:SimpSet} \min_j\mu(A_j)>\rho, \qquad \mu(A_1)+\mu(A_2)+\mu(A_3)>1-\rho, \qquad \mu(A_1+A_2+A_3)<1.$$ Then there exists a positive integer $n\leq \frac{2m}{\min_j \mu(A_j)}$ and closed intervals $I_1,I_2,I_3\subset {\mathbb}{T}$ such that $n\cdot A_j\subset I_j$ and $\mu(I_j)\leq \mu(A_j)+\rho$ for $j\in [3]$. Fix any $\delta>0$ satisfying $$\delta< \min \big\{\tfrac{1}{2}\big(\mu(A_1)+\mu(A_2)+\mu(A_3)-(1-\rho)\big), \; 1-\mu(A_1+A_2+A_3), \; \rho/10\big\}.$$ For $p$ sufficiently large, we can assume that $A_{j,p}$ is the union of at most $m$ intervals in $\tfrac{1}{p}{{\mathbb}{Z}_{p}}$. Let $\mu_p$ denote the discrete measure $\frac{1}{p}\sum_{j=0}^{p-1} \delta_{j/p}$ on ${\mathbb}{T}$, where $\delta_{j/p}$ is a Dirac $\delta$ measure at $j/p$. 
We have that $\mu_p$ converges weakly to the Haar probability measure on ${\mathbb}{T}$ as $p\to \infty$, and so $\tfrac{1}{p}|A_{j,p}|=\mu_p(A_{j,p})\to \mu(A_j)$ and $\mu_p((A_1+A_2+A_3)_p)\to \mu(A_1+A_2+A_3)$ as $p\to \infty$. Note that the inner Haar measure and the Haar measure of simple sets in ${\mathbb}{T}$ coincide and that a sumset of simple sets is a simple set. In particular, for $p$ sufficiently large (depending on the sets $A_j$ and $\delta$) we have $$\label{eq:estmumupS} |\mu_p\big(A_{j,p})-\mu(A_j)|\leq \delta/3, \qquad |\mu_p\big((A_1+A_2+A_3)_p \big)-\mu(A_1+A_2+A_3)| \leq \delta.$$ By our assumptions in , the fact that $A_{1,p}+A_{2,p}+A_{3,p}\subset (A_1+A_2+A_3)_p$, and our choice of $\delta$ and $p$, we then have $$\label{eq:SimpSetZp} \mu_p\big(A_{j,p}\big)>\rho-\delta/3, \quad \sum_{j\in [3]}\mu_p\big(A_{j,p}\big)>1-\rho+\delta, \quad \mu_p\big(A_{1,p}+A_{2,p}+A_{3,p}\big)<1.$$ Let $r$ be an integer in $((\rho-\delta)p,(\rho-2\delta/3)p\big)$. For $p\geq 6/\delta$ sufficiently large we can apply Theorem \[thm:Zptrio\] to the sets $A_{j,p}$ with this integer $r$. This yields intervals $I_{j,p}\subset \tfrac{1}{p}{{\mathbb}{Z}_{p}}$ and a non-zero integer $n\in (-\frac{p}{2},\frac{p}{2})$ such that $n\cdot A_{j,p}\subset I_{j,p}$ and $\mu_p(I_{j,p})\leq \mu_p(A_{j,p})+r/p\leq \mu(A_j)+\rho-\delta/3$. For every $x$ in the simple open set $A_j$, there exists $y\in A_{j,p}$ such that $\|x-y\|_{{\mathbb}{T}}\leq \tfrac{1}{2p}$, which implies that $\|nx-ny\|_{{\mathbb}{T}}\leq \tfrac{|n|}{2p}$. Hence $$n \cdot A_j\,\subset\, n\cdot A_{j,p} + \big[-\tfrac{|n|}{2p},\tfrac{|n|}{2p}\big] \subset I_{j,p}+\big[-\tfrac{|n|}{2p},\tfrac{|n|}{2p}\big].$$ Let $I_j$ be the closed interval $I_{j,p}+\big[-\tfrac{|n|}{2p},\tfrac{|n|}{2p}\big]$. 
We have $$\label{eq:IjUB} \mu(I_j)\leq \mu_p(I_{j,p}) +\tfrac{|n|}{p} \leq \mu(A_j)+\rho-\delta/3+\tfrac{|n|}{p}.$$ We have $\min_j\mu(A_j)<\tfrac{1}{3}$, by the third inequality in and Raikov’s inequality [@Macbeath Theorem 1]. Therefore, supposing without loss of generality that $\min_j\mu(A_j) = \mu(A_1)$, we have $\mu_p(I_{1,p})<1/2$. We also have $\mu_p(I_{1,p})< \tfrac{\mu_p(A_{1,p})}{1-2/\pi}$. By Proposition \[prop:SimpSetDiamZp\], we conclude that $|n|\leq \tfrac{m}{2\big(\mu_p(A_{1,p})-(1-2/\pi)\mu_p(I_{1,p})\big)}$. Since $\mu_p(I_{1,p})< 2\mu(A_1)-\delta/3$ and $\delta<\rho/10$, we have $$|n|\leq \frac{m}{2\big(\mu(A_1)-\delta/3-(1-2/\pi)(2\mu(A_1)-\delta/3)\big)} \leq \frac{m}{2\big((\tfrac{4}{\pi}-1)\mu(A_1)-\tfrac{2\delta}{3\pi}\big)}\leq \frac{2m}{\mu(A_1)}.$$ Since we can take $p>\tfrac{6m}{\delta\,\rho}$, we have $\tfrac{2m}{\mu(A_1)}\leq \tfrac{\delta\, p}{3}$, and so from we have $\mu(I_j)\leq \mu(A_j)+\rho$. Finally, as $n\cdot A_j$ and $-n\cdot A_j$ are both included in suitable intervals, we may take $n>0$. From simple sets to arbitrary sets {#subsec:gen} ---------------------------------- \ In this subsection we deduce Theorem \[thm:Ttrio\] using Proposition \[prop:SimpSet\]. To that end, we first prove Theorem \[thm:Ttrio\] for closed sets. \[prop:Ttrioclosed\] Let $\rho\in (0,c)$ where $c=3.1\cdot 10^{-1549}$. Let $A_1,A_2,A_3$ be closed subsets of ${\mathbb}{T}$ satisfying the following conditions: $$\mu(A_1),\mu(A_2),\mu(A_3)> \rho, \quad \sum_{j\in [3]}\mu(A_j)>1-\rho, \quad \mu(A_1+A_2+A_3)< 1.$$ Then there exist closed intervals $I_1,I_2,I_3\subset {\mathbb}{T}$ and a positive integer $n$ such that $n \cdot A_j \subset I_j$ and $\mu(I_j)\leq \mu(A_j)+\rho$ for $j\in [3]$. Fix any $\varepsilon>0$ with $\varepsilon< \min \big\{\mu(A_1)+\mu(A_2)+\mu(A_3)-(1-\rho), \; 1-\mu(A_1+A_2+A_3)\big\}$. For $\delta>0$ let $I_\delta$ denote the open interval $(-\delta,\delta)$ in ${\mathbb}{T}$.
For any closed set $A\subset {\mathbb}{T}$ we have $A=\cap_{\delta>0}(A+I_\delta)$; in particular $A_1+A_2+A_3=\cap_{\delta>0}(A_1+A_2+A_3+I_{3\delta})=\cap_{\delta>0}(A_1+I_\delta+A_2+I_\delta+A_3+I_\delta)$. Let $\delta>0$ be sufficiently small so that $$\forall\,j\in [3],\;\mu(A_j+I_\delta)\leq \mu(A_j)+\varepsilon, \quad\textrm{and}\quad \mu(A_1+A_2+A_3+I_{3\delta})\leq \mu(A_1+A_2+A_3)+\varepsilon <1.$$ By compactness of each set $A_j$, there exists a set $A_j'$ that is the union of finitely many translates of $I_\delta$ such that $A_j\subset A'_j\subset A_j+I_\delta$. The simple open sets $A'_1,A'_2,A'_3$ satisfy the hypotheses of Proposition \[prop:SimpSet\] with initial parameter $\rho-\varepsilon$. Therefore, by Proposition \[prop:SimpSet\] applied to these sets with this parameter, we obtain a positive integer $n$ and closed intervals $I_1,I_2,I_3 \subset{\mathbb}{T}$ with $n\cdot A_j\subset n\cdot A'_j\subset I_j$, and $\mu(I_j)\leq \mu(A'_j)+\rho-\varepsilon \leq \mu(A_j)+\rho$. From this we shall deduce Theorems \[thm:Ttrio\] and \[thm:Tasym\]. To do so we use the following lemma based on ideas of Bilu from [@Bilu]. \[lem:clapprox\] Let $C\subset A\subset {\mathbb}{T}$, where $C$ is a closed set, and let $X$ be a finite set of integers. Then for every $\varepsilon>0$ there exists a closed set $E$ with $C\subset E\subset A$ such that $\mu(E)=\mu(C)$ and $n\cdot A\subset n\cdot E+[-\varepsilon,\varepsilon]$ for all $n\in X$. For each $n\in X$, since $A$ is totally bounded, there is a finite subset $E(n,\varepsilon)\subset A$ such that $A\subset E(n,\varepsilon)+[-\tfrac{\varepsilon}{|n|},\tfrac{\varepsilon}{|n|}]$, and so $n\cdot A \subset n\cdot E(n,\varepsilon)+[-\varepsilon,\varepsilon]$. The set $E = C \cup \big(\bigcup_{n\in X} E(n,\varepsilon)\big)$ satisfies the claim in the lemma. We shall combine this with the following special case of [@Bilu Lemma 4.2.1]. \[lem:finset\] Let $B$ be a Haar measurable subset of ${\mathbb}{T}$ with $\mu(B)>0$, and let $\lambda < 1$. 
Then there exist only finitely many integers $n$ such that $\mu(n\cdot B)\leq \lambda$. We can now prove the main result. We assume that ${\mu}(A_j)>\rho$, that ${\mu}(A_1)+{\mu}(A_2)+{\mu}(A_3)>1-\rho$, and that ${\mu}(A_1+A_2+A_3)<1$. Fix an arbitrary $\delta$ satisfying $$0< \delta < \min \big\{\; \tfrac{1}{2} \big( {\mu}(A_1)+{\mu}(A_2)+{\mu}(A_3)-(1-\rho) \big),\;\rho\; \big\}.$$ We may assume that ${\mu}(A_1)=\min_{j\in [3]} {\mu}(A_j)$. Let $C_1$ be a closed subset of $A_1$ such that $\mu(C_1)> {\mu}(A_1)-\delta/3 >0$. Let $X$ be the set of integers $n$ such that $\mu(n\cdot C_1)\leq {\mu}(A_1)+\rho$. Since ${\mu}(A_1+A_2+A_3)<1$, by Raikov’s inequality we have ${\mu}(A_1)< 1/3$, and so we can certainly apply Lemma \[lem:finset\] to deduce that $X$ is finite. Let $A_1'$ be the closed subset of $A_1$ obtained by applying Lemma \[lem:clapprox\] to $C_1\subset A_1$ with $\varepsilon=\delta/2$. Similarly, for $j=2,3$ let $A'_j$ be a closed subset of $A_j$ such that $\mu(A'_j)\geq {\mu}(A_j)-\delta/3$ and $n\cdot A_j\subset n\cdot A_j'+[-\tfrac{\delta}{2},\tfrac{\delta}{2}]$ for all $n\in X$. We then have $\mu(A'_j)>\rho-\delta/3$ for every $j$, we have $\mu(A_1'+A_2'+A_3')<1$, and $$\mu(A_1')+\mu(A_2')+\mu(A_3')>{\mu}(A_1)+{\mu}(A_2)+{\mu}(A_3)-\delta >1-(\rho-\delta).$$ By Proposition \[prop:Ttrioclosed\] applied to the sets $A_j'$ with initial parameter $\rho-\delta$, there exist closed intervals $I_j'$ such that $\mu(I_j')\leq\mu(A'_j)+\rho-\delta \leq {\mu}(A_j)+\rho-\delta$, and a positive integer $n$ such that $n\cdot A'_j\subset I_j'$, for $j=1,2,3$. In particular we must have $n\in X$, and then by our choice of the sets $A_j'$ we have $n\cdot A_j\subset I_j'+[-\tfrac{\delta}{2},\tfrac{\delta}{2}]$. Letting $I_j$ be the closed interval $I_j'+[-\tfrac{\delta}{2},\tfrac{\delta}{2}]$ for each $j$, we have $\mu(I_j)\leq {\mu}(A_j)+\rho$, and the result follows. We now deduce Theorem \[thm:Tasym\] from Proposition \[prop:Ttrioclosed\]. 
Let $A,B\subset {\mathbb}{T}$ satisfy the assumptions in the theorem, namely $$\label{eq:asymassumps} {\mu}(A+B) = {\mu}(A)+{\mu}(B)+ \rho < \tfrac{1}{2}\big(1+{\mu}(A)+{\mu}(B)\big),\quad \rho < {\mu}(B) \leq {\mu}(A),\quad \rho<c.$$ Note that this implies that $1-{\mu}(A+B) >\rho$. One might hope to deduce from this that ${\mu}\big((A+B)^c\big)>\rho$ and then apply Theorem \[thm:Ttrio\] to $A,B,-(A+B)^c$, but this raises several technical difficulties (in particular the behaviour of the inner Haar measure ${\mu}$ relative to taking complements). It is convenient to prove the result first for closed sets. Suppose, then, that $A,B$ are closed and satisfy \[eq:asymassumps\]. Fix any $\delta>0$ satisfying $$\label{eq:initas} \mu(A)+\mu(B) + \rho + \delta < \tfrac{1}{2}\big(1+\mu(A)+\mu(B)\big), \quad \delta<\rho/4, \quad \mu(B)>\rho+\delta, \quad \rho+\delta < c.$$ Note that the equality in \[eq:asymassumps\] is equivalent to $\mu(A)+\mu(B)+\mu\big(-(A+B)^c\big)=1-\rho$, which together with the first inequality in \[eq:initas\] implies that $\mu\big(-(A+B)^c\big)> \rho+2\delta$. In particular this, the last equality, and $\mu(B)>\rho+\delta$ together imply that $$\label{eq:initobs} \mu(A) + 3(\rho+\delta) < 1\,; \quad\textrm{similarly,}\quad \mu\big(-(A+B)^c\big) + 3\rho+2\delta<1.$$ Now let $A_1=A$, $A_2=B$, and let $A_3'$ be a closed subset of $-\big(A_1+A_2\big)^c$ satisfying $\mu(A_3')>\mu\big(-(A_1+A_2)^c\big)-\delta > \rho + \delta$. Let $X$ be the finite set of integers $n$ such that $\mu(n\cdot A_2)\leq \mu(A_2)+\rho +\delta$ (finite by Lemma \[lem:finset\], since $\mu(A_2)+\rho +\delta<1$ by \[eq:initobs\]). Let $A_3$ be the closed set given by applying Lemma \[lem:clapprox\] to $A_3'\subset -(A_1+A_2)^c$ with $\varepsilon=\delta/2$. We now show that $A_1,A_2,A_3$ satisfy the conditions to apply Proposition \[prop:Ttrioclosed\] with initial parameter $\rho+\delta$. Firstly, as seen above, by construction we have $\mu(A_j)>\rho+\delta$ for $j=1,2,3$. 
Secondly, we have $$\label{eq:trioas1} \mu(A_1)+\mu(A_2)+\mu(A_3)> \mu(A)+\mu(B)+ \mu\big(- (A+B)^c\big)- \delta = 1-(\rho+\delta).$$ Finally, since $A_1+A_2+A_3$ is a closed set included in $A+B- (A+B)^c$, and since the latter set does not contain 0, the closed set $A_1+A_2+A_3$ must miss an entire open interval about 0, whence $\mu(A_1+A_2+A_3)<1$. We can now apply Proposition \[prop:Ttrioclosed\] to $A_1,A_2,A_3$ and thus obtain a positive integer $n$ and closed intervals $I_j'$ such that $\mu(I_j')\leq \mu(A_j)+\rho+\delta$ and $n\cdot A_j\subset I_j'$ for $j=1,2,3$. In particular $n$ must be in $X$, so by construction $n\cdot (A+B)^c$ is included in the closed interval $- I_3'+[-\tfrac{\delta}{2},\tfrac{\delta}{2}]$, and therefore so is $(n\cdot(A+B))^c$. Let $I_{3,\delta}=- I_3'+[-\tfrac{\delta}{2},\tfrac{\delta}{2}]$, thus $\mu(I_{3,\delta})\leq \mu(A_3)+\rho+2\delta$, and let $I_{j,\delta}:=I_j'$ for $j=1,2$. Now, we repeat the above argument for each term of a decreasing sequence of positive numbers $\delta_m$ satisfying \[eq:initas\] and tending to 0 as $m\to\infty$. Note that although the integer $n=n(\delta_m)$ could vary as $m$ increases, we can assume that it is constant, by passing to a subsequence if necessary, since $X$ is finite. For $j=1,2,3$ let $I_{j,m}=\bigcap_{k\leq m} I_{j,\delta_k}$. We have that $I_{j,m}$ is a closed interval for all $m$. Indeed, a priori the intersection of two intervals $I,J$ in ${\mathbb}{T}$ could be a union of two disjoint intervals, but this can occur only if $I\cup J ={\mathbb}{T}$, whereas here for $k<\ell$ we have by inclusion-exclusion that $\mu(I_{j,\delta_k}\cup I_{j,\delta_{\ell}})\leq \mu(A_j)+2\rho+4\delta_k$, which is less than 1 by \[eq:initobs\]. 
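The topological fact invoked just above — that two intervals in ${\mathbb}{T}$ can intersect in two disjoint arcs only when their union is all of ${\mathbb}{T}$ — can be illustrated with a small grid computation. This sketch is not part of the proof; the arc representation and grid size are our own choices.

```python
def in_arc(x, start, length):
    # membership of a point x in the arc [start, start+length) on T = R/Z
    return (x - start) % 1.0 < length

def num_components(mask):
    # connected components of a boolean membership list on a circular grid
    if all(mask):
        return 1
    if not any(mask):
        return 0
    k = mask.index(False)
    m = mask[k:] + mask[:k]  # rotate so that position 0 lies outside the set
    return sum(1 for i in range(1, len(m)) if m[i] and not m[i - 1])

N = 10000
xs = [i / N for i in range(N)]
# union of the two arcs does not cover T: intersection is a single arc
one = [in_arc(x, 0.0, 0.4) and in_arc(x, 0.3, 0.4) for x in xs]
# union of the two arcs covers T: intersection splits into two arcs
two = [in_arc(x, 0.0, 0.6) and in_arc(x, 0.5, 0.6) for x in xs]
assert num_components(one) == 1
assert num_components(two) == 2
```

The second pair of arcs, $[0,0.6)$ and $[0.5,1.1)\bmod 1$, covers the whole circle, and their intersection indeed breaks into the two arcs $[0.5,0.6)$ and $[0,0.1)$.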
Therefore $(I_{j,m})_m$ is a decreasing sequence of closed intervals, each including $n\cdot A_j$ for $j=1,2$ and $(n\cdot(A+B))^c$ for $j=3$, whence the closed intervals $I_j=\bigcap_m I_{j,m}$ also include these sets respectively (in particular $n\cdot(A+B)$ includes the open interval $I_3^c$), and we have $\mu(I_j)\leq \mu(A_j)+\rho$. This completes the proof of Theorem \[thm:Tasym\] for closed sets $A$ and $B$. Now let $A,B$ be arbitrary subsets of ${\mathbb}{T}$ satisfying \[eq:asymassumps\]. Let $\delta>0$ satisfy $$\rho+2\delta<c,\quad \delta<\frac13({\mu}(B)-\rho),\quad {\mu}(A)+{\mu}(B)+\rho+2\delta<\tfrac{1}{2}(1+{\mu}(A)+{\mu}(B)-2\delta).$$ Then, let $A_1',A_2'$ be closed subsets of $A,B$ respectively with $\mu(A_1')>{\mu}(A)-\delta$ and $\mu(A_2')> {\mu}(B)-\delta$, and let $A_1,A_2$ be the closed sets obtained by applying Lemma \[lem:clapprox\] with $A_1'\subset A$, $A_2'\subset B$, with $X$ being the finite set of integers $n$ such that $\mu(n\cdot A_1')\leq {\mu}(A)+\delta$. There is then $\rho'\leq \rho+2\delta$ such that $$\mu(A_1+A_2) = \mu(A_1)+\mu(A_2)+ \rho' < \tfrac{1}{2}\big(1+\mu(A_1)+\mu(A_2)\big),\quad \rho' < \min(\mu(A_1),\mu(A_2)).$$ Applying the result for closed sets we obtain closed intervals $I,J$, an open interval $K$, and a positive integer $n$ such that $n\cdot A\subset I+[-\tfrac{\delta}{2},\tfrac{\delta}{2}]$ and this closed interval has measure at most ${\mu}(A)+\rho+3\delta$, similarly $n\cdot B\subset J+[-\tfrac{\delta}{2},\tfrac{\delta}{2}]$ with $\mu(J+[-\tfrac{\delta}{2},\tfrac{\delta}{2}])\leq {\mu}(B)+\rho+3\delta$, and finally $n\cdot(A+B)\supset n\cdot(A_1+A_2)\supset K$ with $\mu(K)\geq \mu(A_1)+\mu(A_2)\geq {\mu}(A)+{\mu}(B)-2\delta$. Now letting $\delta \to 0$ in an argument similar to the one above for closed sets (taking a countable union of open intervals in the case of $K$), the result follows. In the case $A=B$, using Theorem \[thm:Zpsym\] rather than Theorem \[thm:Zptrio\] yields a better bound $c$. 
Following a strategy similar to the one used for Theorem \[thm:Ttrio\], we can reduce to the case of $A$ being a union of finitely many open intervals. We replace the condition $\rho<c$ from Theorem \[thm:Tasym\] by $\rho<\varepsilon\mu(A)$ with $\varepsilon<10^{-4}$ and use an argument similar to the proof of Proposition \[prop:SimpSet\] to obtain a positive integer $n$ and a closed interval $I$ such that $n\cdot A \subset I$. Now Theorem \[thm:Zpsym\] does not give information on the structure of $A+A$, so we need to proceed differently to find some interval $K$ included in $n\cdot (A+A)$. Write $\tilde{A}=n\cdot A\subset{\mathbb}{T}$. We have $\mu(\tilde{A})\geq\mu(A)$ and, since $\tilde{A}\subset I$, we have $$\mu(\tilde{A}+\tilde{A})\leq 2\mu(I)\leq 2(\mu(A+A)-\mu(A))\leq (2+2\varepsilon)\mu(A)<3\mu(A)\leq 3\mu(\tilde{A}).$$ We know that $\tilde{A}$ is included in an interval of length at most $\mu(A+A)-\mu(A)< 1/2$. The desired conclusion, i.e. that $\tilde{A}+\tilde{A}$ contains a large interval, is not affected by translating $\tilde{A}$ in ${\mathbb}{T}$, so we may suppose that $\tilde{A}\subset [0,1/2)$, where we identify ${\mathbb}{T}$ as a set with $[0,1)$. Then the sum $\tilde{A}+\tilde{A}$ behaves as a sum in ${\mathbb}{R}$, and so $\tilde{A}$ can be treated as a subset of ${\mathbb}{R}$ of doubling constant strictly less than $3$. Theorem 1 from [@dR] then ensures the existence of an interval $K\subset \tilde{A}+\tilde{A}$ of length at least $2\mu(\tilde{A})\geq 2\mu(A)$, which completes the proof. Given the above deductions of Theorems \[thm:Ttrio\], \[thm:Tasym\] and \[thm:Tsym\] from their counterparts in ${{\mathbb}{Z}_{p}}$, any improvement of the bounds $c$ in these discrete counterparts will immediately yield the same improvement in the continuous setting. In [@S-Z] Serra and Zémor give an example in ${{\mathbb}{Z}_{p}}$ to show that the condition $|A+A|<\tfrac{p-3}{2} + |A|$ in Theorem \[thm:Zpsym\] is necessary. 
We can adapt this to show that the condition ${\mu}(A+A)< \tfrac{1}{2}+{\mu}(A)$ is also necessary for Theorem \[thm:Tsym\] to hold, as follows. \[ex:S-Z\] Viewing ${\mathbb}{T}$ as $[0,1]$ with addition mod 1, consider the set $$A= \big(\tfrac{1}{4}-\delta,\;\tfrac{1}{2}\big] \cup \big(1-\delta,\; 1\big] \subset {\mathbb}{T}, \textrm{ for an arbitrary fixed }\delta\in(0,\tfrac{1}{8}).$$ We have $\mu(A)=\tfrac{1}{4}+2\delta$, and $A+A=\big(\tfrac{1}{2}-2\delta,\;1\big] \cup \big(1-2\delta,\;1\big]\cup\big(\tfrac{1}{4}-2\delta,\;\tfrac{1}{2}\big] =\big(\tfrac{1}{4}-2\delta,\;1\big]$. Hence $\mu(A+A)=\tfrac{3}{4}+2\delta=\tfrac{1}{2}+\mu(A)<3\mu(A)$. Moreover $\mu(A+A)=2\mu(A)+2\left(\frac18-\delta\right)$ and $\tfrac{1}{8}-\delta$ can be made arbitrarily small. However, for any positive integer $n$, we cannot include $n\cdot A$ in a closed interval $I$ of measure $\mu(A+A)-\mu(A)=\tfrac{1}{2}$. Indeed, this is clear for $n=1$, as $A$ is not contained in an interval of length $\tfrac{1}{2}=\mu(A+A)-\mu(A)$. For $n\geq 2$, note that $\mu(n\cdot A)\geq \mu(n\cdot (\tfrac{1}{4}-\delta,\tfrac{1}{2}])$, and this is at least $\mu(2\cdot (\tfrac{1}{4}-\delta,\tfrac{1}{2}])$ (in general, for any interval $J\subset {\mathbb}{T}$ and any integers $n\geq m >0$, we have $\mu(n\cdot J)\geq \mu(m\cdot J)$). Since $\mu\big(2\cdot (\tfrac{1}{4}-\delta,\tfrac{1}{2}]\big)=\mu\big((\tfrac{1}{2}-2\delta,1]\big) > \tfrac{1}{2}$, we must have $D_n(A)> \tfrac{1}{2}$. \[rem:BiluConj\] The conjecture of Bilu mentioned in the introduction, namely [@Bilu Conjecture 1.2], proposes (in its special case for ${\mathbb}{T}$) that if $A,B\subset {\mathbb}{T}$ with $\alpha={\mu}(A)\geq {\mu}(B)=\beta$ satisfy ${\mu}(A+B)<\min(\alpha+2\beta,1)$, then there exist closed intervals $I,J\subset{\mathbb}{T}$ and $n\in {\mathbb}{N}$ such that $n\cdot A\subset I$, $n\cdot B\subset J$, and $\mu(I)\leq {\mu}(A+B)-{\mu}(B)$, $\mu(J)\leq {\mu}(A+B)-{\mu}(A)$. 
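The measure computations in Example \[ex:S-Z\] can be double-checked on a discrete grid. The snippet below, with the arbitrary choices $\delta=1/20$ and grid size $N=2000$, verifies them up to the $O(1/N)$ error caused by discretizing the half-open endpoints; it is a sanity check of ours, not part of the argument.

```python
N, d = 2000, 100  # grid size; d = delta*N with delta = 1/20 in (0, 1/8)
# A = (1/4 - delta, 1/2] ∪ (1 - delta, 1] as residues i/N on the circle
# (the point 1 is the residue 0)
A = set(range(N // 4 - d + 1, N // 2 + 1)) | set(range(N - d + 1, N)) | {0}
S = {(a + b) % N for a in A for b in A}   # discretized A + A

assert len(A) == N // 4 + 2 * d                 # mu(A)   = 1/4 + 2*delta
assert abs(len(S) - (N // 2 + len(A))) <= 2     # mu(A+A) = 1/2 + mu(A), up to grid error
assert len(S) < 3 * len(A)                      # mu(A+A) < 3*mu(A)
```

The discretized sumset comes out one grid point short of $\mu(A+A)=\tfrac12+\mu(A)$ because the open left endpoints round inward; the tolerance of two grid points absorbs this.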
Bilu proved that this conjecture holds under the additional condition that $\alpha/\tau\leq \beta \leq \alpha \leq c(\tau)$, where $c$ is some positive constant depending on $\tau\geq 1$; see [@Bilu Theorem 1.4]. However, note that the conjecture itself does not hold for arbitrary $\alpha,\beta$. Indeed, Example \[ex:S-Z\] shows that the conjecture can fail for sets of measure greater than $1/4$. These counterexamples can be ruled out by adding a condition to the conjecture, for instance that ${\mu}(A+B)<\tfrac{1}{2}(1+{\mu}(A)+{\mu}(B))$. Thus, a plausible version of Bilu’s conjecture on ${\mathbb}{T}$, without a fixed upper restriction on ${\mu}(A)$, could be that Theorem \[thm:Tasym\] holds for every $\rho\in (0,1)$. In another direction, one may try to find the largest upper bound on ${\mu}(A)$ under which Bilu’s conjecture holds (given Example \[ex:S-Z\], this bound must be at most $1/4$). Application to $k$-sum-free sets in ${\mathbb}{T}$ {#sec:kfs} ================================================== A subset of an abelian group is said to be *$k$-sum-free* if it does not contain any triple $(x,y,z)$ solving the linear equation $x+y=kz$, where $k$ is a fixed positive integer.[^2] In the case $k=1$ the corresponding sets are known simply as sum-free sets, and their study dates back to work of Schur from 1916 [@Schur]. The case $k=2$ concerns sets avoiding 3-term arithmetic progressions, and this topic includes Roth’s theorem from 1953 [@Roth] as well as numerous related later works. Note that this case differs in nature from the other cases, in that this is the only value of $k$ for which the linear equation in question is *translation invariant*, meaning that if $(x,y,z)$ is a solution then so is $(x+t,y+t,z+t)$ for every fixed element $t$ in the group. 
For $k\geq 3$ the topic goes back at least to work of Erdős, who conjectured in particular that for large $n$ the odd numbers in $[n]$ form the unique $3$-sum-free set of maximum size. Chung and Goldwasser proved this conjecture, and made an analogous conjecture about the maximum size of $k$-sum-free subsets of $[n]$ for $k\geq 4$, which was proved by Baltz, Hegarty, Knape, Larsson and Schoen. Chung and Goldwasser also initiated the study of $k$-sum-free sets in the continuous setting. In particular, they determined the structure and measure of maximal $k$-sum-free Lebesgue measurable subsets of the interval $(0,1]$ for $k\geq 4$. They then made a conjecture concerning the structure and measure of maximal $3$-sum-free sets in this setting. Significant progress toward this conjecture was made by Matolcsi and Ruzsa, and the conjecture was then fully proved by Plagne and the second named author. Here we initiate the study of $k$-sum-free sets in ${\mathbb}{T}$ by considering the problem of estimating the following quantity: $$d_k({\mathbb}{T})=\sup\{\mu(A): A\textrm{ is a Haar measurable $k$-sum-free subset of }{\mathbb}{T}\}.$$ Note that $A$ is $k$-sum-free if and only if $(A+A)\cap k\cdot A=\emptyset$, and since by Raikov’s inequality we have $\mu(A+A)\geq 2\mu(A)$, it follows that $$\label{eq:CDbound} 3\mu(A)\leq \mu(A+A)+\mu(k\cdot A)= \mu((A+A)\cup k\cdot A)\leq 1, \;\textrm{ so }\mu(A)\leq 1/3.$$ Given this, the problem of determining $d_1({\mathbb}{T})$ is easily settled: in ${\mathbb}{T}$ viewed as $[0,1)$ with addition mod 1, the interval $(\tfrac{1}{3},\tfrac{2}{3})$ is a sum-free set of maximum measure $1/3$. For $k=2$, it follows from the above-mentioned invariance of the equation $x+y=2z$ that $d_2({\mathbb}{T})=0$ (in fact any set $A\subset {\mathbb}{T}$ of measure $\alpha>0$ must contain a positive measure $c(\alpha)$ of 3-term progressions; see for instance [@CS2 Theorem 1.4]). Let us now focus on $k\geq 3$. 
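As a quick sanity check (not from the paper), the fact that the middle-third interval is sum-free in ${\mathbb}{T}$ can be confirmed on a discrete grid; the grid size below is our own choice.

```python
N = 3000
# the open interval (1/3, 2/3) as grid residues i/N on the circle
A = set(range(N // 3 + 1, 2 * N // 3))
S = {(a + b) % N for a in A for b in A}   # discretized A + A

assert S.isdisjoint(A)          # (A + A) ∩ A = ∅, so A is sum-free
assert len(A) == N // 3 - 1     # measure 1/3, up to the two open endpoints
```

Indeed $(\tfrac13,\tfrac23)+(\tfrac13,\tfrac23)=(\tfrac23,\tfrac43)\bmod 1$, which misses $(\tfrac13,\tfrac23)$ entirely; the grid computation reproduces this.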
Here we can improve on \[eq:CDbound\] as follows. \[thm:ksfs-ub\] Fix any $\varepsilon>0$ for which Theorem \[thm:Tsym\] holds, and let $k\geq 3$ be an integer. Then $d_k({\mathbb}{T}) \leq \max\{\frac{1}{3+\varepsilon}, \frac{1+k\varepsilon}{k+2}\}$. The greatest value of $\varepsilon$ currently available here is the one provided by Serra and Zémor in [@S-Z], namely $\varepsilon=10^{-4}$. This gives us $d_k({\mathbb}{T}) \leq \tfrac{1}{3+10^{-4}}$ for all $k\geq 3$. We prove Theorem \[thm:ksfs-ub\] in several steps. For a set $X\subset {\mathbb}{T}$ and $n\in {\mathbb}{N}$, we denote by $n^{-1}X$ the set $\{t\in {\mathbb}{T}: n\,t \in X\}$. Note that $X\subset {\mathbb}{T}$ is $k$-sum-free if and only if $X\cap\, k^{-1}(X+X)=\emptyset$. The following lemma tells us that if $A$ is $k$-sum-free and has measure close to $1/3$ then for some $n\in {\mathbb}{N}$ we must have $n\cdot A$ contained efficiently in an interval $I$ that is *almost* $k$-sum-free, in the sense that $I\cap\, k^{-1}(I+I)$ has small measure. \[lem:alt\] Let $k\geq 3$ be an integer, let $A\subset {\mathbb}{T}$ be a $k$-sum-free Borel set, and let $\varepsilon\leq 10^{-4}$. Then either $\mu(A)\leq 1/(3+\varepsilon)$ or there exist a closed interval $I\subset {\mathbb}{T}$ and a positive integer $n$ such that $A\subset n^{-1}I$, $\mu(I)\leq \mu(A)(1+\varepsilon)$, and $\mu\big(I\cap\,k^{-1}(I+I)\big)\leq 2 \varepsilon\,\mu(I)$. If $\mu(A+A)\geq (2+\varepsilon)\mu(A)$, then arguing as in \[eq:CDbound\] we deduce that $\mu(A)\leq 1/(3+\varepsilon)$. We may therefore assume that $\mu(A+A)\leq (2+\varepsilon)\mu(A)$. Applying Theorem \[thm:Tsym\] with $\varepsilon$, we obtain an interval $I$ with $\mu(I)\leq \mu(A+A)-\mu(A)$ and $n\in {\mathbb}{N}$ such that $A\subset n^{-1}I$. 
Letting $B=n^{-1}I$ and using that the map $x\mapsto nx$ is measure-preserving, we have $\mu(B)=\mu(I)$, and so $$\label{eq:1st} \mu(B\setminus A)=\mu(I)-\mu(A)\leq \varepsilon\, \mu(A) \leq \varepsilon\, \mu(I).$$ Note also that, since for every set $X\subset {\mathbb}{T}$ we have $n^{-1}(X+X)=n^{-1}X+n^{-1}X$, we have $\mu(B+B)=\mu\big(n^{-1}(I+I)\big)=2\mu(I)$ and so $\mu(B+B)\leq \mu(A+A)-\mu(A)+(1+\varepsilon)\mu(A) \leq \mu(A+A)+\varepsilon\,\mu(A)$. Hence $$\label{eq:2nd} \mu\big((B+B)\setminus(A+A)\big) \leq \varepsilon\, \mu(I).$$ Writing $(B+B) = (A+A)\sqcup \big((B+B)\setminus(A+A)\big)$, we have $$B \cap \, k^{-1}(B+B)\; \subset \; \big[B \cap\, k^{-1}(A+A)\big]\, \sqcup \, k^{-1}\big[(B+B)\setminus \, (A+A)\big].$$ Writing $B= A \sqcup ( B \setminus A)$, and using that $A$ is $k$-sum-free, we have $B \cap\, k^{-1}(A+A) \, \subset \, B\setminus A$. Hence $$\label{eq:3rd} B \cap \, k^{-1}(B+B) \; \subset \; (B\setminus A)\, \cup \, k^{-1}\big[(B+B)\setminus(A+A)\big].$$ Combining \[eq:1st\], \[eq:2nd\], \[eq:3rd\], and the fact that $\mu\big(I \cap \, k^{-1}(I+I)\big)=\mu\big(B \cap \, k^{-1}(B+B)\big)$, the result follows. Given this lemma, our goal now is to obtain a useful upper bound on the measure of an almost-$k$-sum-free interval. First we observe that if an interval $I\subset {\mathbb}{T}$ is $k$-sum-free then $\mu(I)\leq 1/(k+2)$. Indeed, we must have $k\cdot I$ disjoint from $I+I$, which implies that $\mu(I+I)+\mu(k\cdot I)\leq 1$, which in turn implies (since then $\mu(I+I)$ and $\mu(k\cdot I)$ are both less than 1) that $\mu(I+I)=2\mu(I)$ and $\mu(k\cdot I)=k\mu(I)$, which implies our claim. Note that this upper bound $1/(k+2)$ is attained by the interval $I=[\tfrac{2}{k^2-4},\tfrac{k}{k^2-4})$, which is indeed $k$-sum-free (a simple calculation shows that $(I+I)^c=k\cdot I$). We now show that if an interval is almost $k$-sum-free, then its measure cannot be much larger than $1/(k+2)$. 
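The "simple calculation" showing $(I+I)^c=k\cdot I$ for $I=[\tfrac{2}{k^2-4},\tfrac{k}{k^2-4})$ can be reproduced with exact rational arithmetic, together with a brute-force grid check of $k$-sum-freeness for one value of $k$. The ranges and grid size below are our own choices.

```python
from fractions import Fraction as F

for k in range(3, 50):
    m = k * k - 4
    lo, hi = F(2, m), F(k, m)        # I = [lo, hi), half-open
    assert hi - lo == F(1, k + 2)    # mu(I) = 1/(k+2)
    # I + I = [2*lo, 2*hi) and k·I = [k*lo, k*hi) mod 1 tile the circle:
    assert k * lo == 2 * hi          # k·I starts exactly where I+I ends ...
    assert k * hi % 1 == 2 * lo      # ... and wraps around to where I+I starts

# brute-force check for k = 5 on a grid of step 1/N, N divisible by k^2 - 4 = 21
k, N = 5, 2100
I = range(2 * N // 21, 5 * N // 21)              # [2/21, 5/21) as residues
sums = {(x + y) % N for x in I for y in I}
assert sums.isdisjoint({k * z % N for z in I})   # no solution x + y = k*z mod 1
```

Since $k\cdot I$ starts where $I+I$ ends and wraps around exactly to where $I+I$ starts, the two sets partition ${\mathbb}{T}$, so the interval is $k$-sum-free of the extremal measure $1/(k+2)$.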
\[lem:int-estim\] Let $k$ be a positive integer, let $\delta\in [0,1)$, and let $I$ be a closed interval in ${\mathbb}{T}$ such that $\mu\big(I \cap \,k^{-1}(I+I) \big)\leq \delta\mu(I)$. Then $\mu(I)\leq \frac{1+k \delta / 2}{k+2}$. Since $\mu\big(I \cap \,k^{-1}(I+I)\big)<\mu(I)$, we must have $I+I\neq {\mathbb}{T}$. The sumset $I+I$ is then a closed interval of measure $2\mu(I)$, and $k^{-1}(I+I)$ is a union of $k$ copies of $I+I$, each copy shrunk by a factor of $1/k$, and the centers of the copies forming an arithmetic progression of difference $1/k$. The complement of $k^{-1}(I+I)$ consists of $k$ components, each being an open interval of measure $\tfrac{1- 2\mu(I)}{k}$. Let $j$ be the number of these components that have non-empty intersection with $I$. Then $I$ must cover $j-1$ of the intervals making up $k^{-1}(I+I)$, so we have $(j-1) \tfrac{2\mu(I)}{k} \leq \mu\big(I\cap\, k^{-1}(I+I) \big) \leq \delta \mu(I)$, whence $j\leq 1+ \tfrac{\delta k}{2}$. We therefore have $$\begin{aligned} \mu(I) & = & \mu\big(I \setminus \,k^{-1}(I+I)\big) + \mu\big(I \cap \,k^{-1}(I+I)\big)\\ &\leq & j\,\tfrac{1-2\mu(I)}{k} + \delta \mu(I) \leq (1+ \tfrac{\delta k}{2}) \tfrac{1-2\mu(I)}{k} + \delta \mu(I),\end{aligned}$$ whence $\mu(I) (1-\delta) k \leq (1+ \tfrac{\delta k}{2})(1-2\mu(I))$. After rearranging, we find that this inequality is equivalent to $\mu(I)\leq \frac{1}{2+k} + \frac{\delta}{2+4/k}$, and the result follows. Combining Lemma \[lem:alt\] with Lemma \[lem:int-estim\] we have that either $\mu(A)\leq \frac{1}{3+\varepsilon}$ or there exist an interval $I$ and $n\in {\mathbb}{N}$ such that $\mu(I)\leq \frac{1+k \varepsilon }{k+2}$ and $n\cdot A\subset I$, whence $\mu(A)\leq \mu(n\cdot A)\leq \mu(I)$, and the result follows. 
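The rearrangement at the end of the proof of Lemma \[lem:int-estim\], and the numerical consequence stated after Theorem \[thm:ksfs-ub\] for $\varepsilon=10^{-4}$, can be verified with exact rational arithmetic; the range of $k$ tested below is our own choice.

```python
from fractions import Fraction as F

eps = F(1, 10 ** 4)
delta = 2 * eps   # Lemma lem:alt supplies delta = 2*eps to Lemma lem:int-estim
for k in range(3, 500):
    # mu*(1-delta)*k <= (1 + delta*k/2)*(1 - 2*mu) rearranges to mu <= threshold:
    threshold = (1 + delta * k / 2) / ((1 - delta) * k + 2 + delta * k)
    assert threshold == (1 + delta * k / 2) / (k + 2)         # the lemma's bound
    assert threshold == F(1, k + 2) + delta / (2 + F(4, k))   # equivalent form
    # the max in Theorem thm:ksfs-ub is then attained by 1/(3+eps) for all k >= 3:
    assert (1 + k * eps) / (k + 2) <= 1 / (3 + eps)
```

The last assertion reflects that $(1+k\varepsilon)/(k+2)$ is decreasing in $k$ for $\varepsilon<\tfrac12$, so its largest value on $k\geq 3$ is already below $1/(3+\varepsilon)$, giving $d_k({\mathbb}{T})\leq \tfrac{1}{3+10^{-4}}$.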
\[rem:TtoZp\] There is an equivalence between determining $d_k({\mathbb}{T})$ and determining the quantity $d_k({\mathbb}{Z}_p)=\max\{\frac{|A|}{p}: A\subset {\mathbb}{Z}_p\textrm{ is $k$-sum-free}\}$ asymptotically as the prime $p$ tends to infinity. More precisely, it follows from [@CS Theorem 1.3] that $\lim_{p\to\infty} d_k({{\mathbb}{Z}_{p}})$ exists and equals $d_k({\mathbb}{T})$. Theorem \[thm:ksfs-ub\] therefore implies that this limit is at most $\max\{\frac{1}{3+\varepsilon}, \frac{1+k\varepsilon}{k+2}\}$. Application to sets of doubling less than 4 in ${\mathbb}{R}$ {#sec:dlt4} ============================================================= We shall write $\lambda$ for the inner Lebesgue measure on ${\mathbb}{R}$. For a bounded set $A\subset {\mathbb}{R}$ we denote by $\textrm{diam}(A)$ the diameter $\sup(A)-\inf(A)$. The main result of this section is the following theorem, which, for a bounded set $A\subset {\mathbb}{R}$ with doubling constant not much larger than 3, gives information on the structure of $A$ modulo $\textrm{diam}(A)$. \[thm:small\_doubling\_R\] Let $\varepsilon\in [0,1)$ be such that Theorem \[thm:Tsym\] holds. Let $A$ be a closed subset of $[0,1]$ satisfying $\lambda(A+A)\leq (3+\varepsilon)\lambda(A)$, $\lambda(A)\in \big(0,\frac{1}{2(1+\varepsilon)}\big)$, and $\textrm{diam}(A)=1$. Then there exists a positive integer $n\leq \frac{1+\varepsilon}{1-\varepsilon}$ such that $n\cdot A \bmod 1$ is included in a closed interval $I\subset {\mathbb}{T}$ with $\mu(I)\leq (1+\varepsilon)\lambda(A)$. Below, when we use the notation $\lambda$ together with a sumset, addition is meant to be in ${\mathbb}{R}$; when we use instead the notation $\mu$, addition is meant to be in ${\mathbb}{T}$. Any progress on the upper bound for $\varepsilon$ in Theorem \[thm:Tsym\] would yield progress in Theorem \[thm:small\_doubling\_R\]. 
In particular, by [@FMY Lemma 2] (see also [@Bilu Corollary 1.5]), we already know that for some small absolute constant $a_0$, if we add to Theorem \[thm:Tsym\] the assumption that $A\subset {\mathbb}{T}$ has ${\mu}(A)\leq a_0$, then the theorem holds for every $\varepsilon\in [0,1)$. This implies that for any set $A\subset [0,1]$ satisfying $\lambda(A) \leq a_0$ and $\lambda(A+A)\leq (3+\varepsilon)\lambda(A) < 4\lambda(A)$, there is a positive integer $n\leq \frac{1+\varepsilon}{1-\varepsilon}$ such that $n\cdot A \bmod 1$ is included in an interval $I\subset{\mathbb}{T}$ with $\mu(I)\leq (1+\varepsilon)\lambda(A)$. Furthermore, if Bilu’s conjecture [@Bilu Conjecture 1.2] holds in the symmetric case $A=B\subset{\mathbb}{T}$ with ${\mu}(A)\leq 1/4$, then Theorem \[thm:small\_doubling\_R\] holds with the condition $\lambda(A)\in \big(0,\frac{1}{2(1+\varepsilon)}\big)$ replaced by $\lambda(A)\in (0, 1/4)$, and any $\varepsilon<1$. Theorem \[thm:small\_doubling\_R\] can be generalized to any bounded set $A\subset {\mathbb}{R}$, provided we replace the assumption $\lambda(A)\in \big(0,\frac{1}{2(1+\varepsilon)}\big)$ with $\lambda(A)\in \big(0,\frac{\textrm{diam}(A)}{2(1+\varepsilon)}\big)$ and that the conclusion is stated modulo $\textrm{diam}(A)$ rather than modulo 1. If $\varepsilon<1/3$ (this is the case for the $\varepsilon$ for which we know that Theorem \[thm:Tsym\] holds), then $n=1$. This means that under the hypothesis of Theorem \[thm:small\_doubling\_R\] with $\varepsilon<1/3$, the set $A$ is included in a union $I_1\cup I_2$ of two intervals, $I_1$ being of the form $[0,a]$ and $I_2$ of the form $[1-b,1]$, with $a+b\leq \lambda(A+A)-\lambda(A)$. By our assumptions $A\subset [0,1]$ is closed with $0,1\in A$. If $\mu$ is the inner Haar measure on ${\mathbb}{T}$ and $\tilde{A}$ denotes $A\bmod 1$, we have $$\lambda(A+A)=\mu(\tilde{A}+\tilde{A})+\mu(\Sigma_2),$$ where $\Sigma_2=\{x\in[0,1) \,:\, x,x+1\in A+A\}$. 
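The bookkeeping identity just displayed can be illustrated on a toy discretization (the grid and sample set are our own choices): each residue class modulo $1$ meets $A+A\subset[0,2]$ once or twice, and $\Sigma_2$ counts exactly the classes met twice.

```python
M = 1000                                   # grid points per unit length
# a toy set approximating [0, 0.1] ∪ [0.9, 1], as integer grid points in [0, M)
A = set(range(0, M // 10)) | set(range(9 * M // 10, M))
S = {a + b for a in A for b in A}          # A + A as a subset of [0, 2]
S_mod = {s % M for s in S}                 # its image A~ + A~ in T = R/Z
Sigma2 = {x for x in range(M) if x in S and x + M in S}

# lambda(A+A) = mu(A~ + A~) + mu(Sigma_2), exactly, on the grid
assert len(S) == len(S_mod) + len(Sigma2)
```

For this sample set the sumset has three components, $[0,0.2]\cup[0.9,1.1]\cup[1.8,2]$; the middle and right components fold onto the circle, and $\Sigma_2$ collects the doubly covered residues.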
Since $0,1\in A$, we have that $A\setminus\{1\}$ is a subset of $\Sigma_2$, whence $\mu(\Sigma_2)\geq \mu(\tilde{A})=\lambda(A)$. Therefore $\lambda(A+A)\leq (3+\varepsilon)\lambda(A)$ implies that $\mu(\tilde{A}+\tilde{A})\leq (2+\varepsilon)\mu(\tilde{A})<\frac{1}{2}+\mu(\tilde{A})$. Theorem \[thm:Tsym\] applied to $\tilde{A}$ gives us a positive integer $n$ such that $n\cdot\tilde{A}$ is included in a closed interval $I\subset {\mathbb}{T}$ of length at most $(1+\varepsilon)\mu(\tilde{A})=(1+\varepsilon)\lambda(A)$. We thus have $\tilde{A}\subset n^{-1}I$, and $n^{-1}I$ viewed as a subset of $[0,1)$ is a disjoint union of intervals $\frac{i}{n}+J$ mod 1, $i=0,\ldots,n-1$, where $J$ is a closed interval in ${\mathbb}{T}$ with $n\cdot J=I$ and $J\cap [0,\frac{1}{n})\neq \emptyset$. (Note that $J$ viewed as a subset of $[0,1)$ could have a part in $[1-\frac{1}{n},1)$.) Therefore, there exist sets $A_0\subset [0,\frac{1}{n})$, $A_i\subset (-\frac{1}{n},\frac{1}{n})$ for $i\in [n-1]$, and $A_n\subset (-\frac{1}{n},0]$, such that $$A= \bigcup_{i=0}^{n}\left(\frac{i}{n}+A_i\right),\textrm{ and }\; \bigcup_{i=0}^{n} A_i \bmod 1 \,\subset J, \textrm{ so in particular } \lambda\Big(\bigcup_{i=0}^{n} A_i\Big)\leq (1+\varepsilon)\frac{\lambda(A)}{n}.$$ It remains to find an upper bound for $n$. We write $\alpha=\lambda(A)$, and $\alpha_i=\lambda(A_i)$ for $0\leq i\leq n$. Since $(1+\varepsilon)\alpha<\frac{1}{2}$, we have that $A+A$ is a disjoint union of subsets of ${\mathbb}{R}$ of the form $$A+A= \bigcup_{i=0}^{2n}\left(\frac{i}{n}+S_i\right) \quad \mbox{with}\quad S_i=\bigcup_{k,l\,:\, k+l=i}(A_k+A_l).$$ In particular $A_i+A_i\subset S_{2i}$ and $A_i+A_{i+1}\subset S_{2i+1}$. 
This yields $$\begin{aligned} \lambda(A+A)&= \sum_{i=0}^{2n}\lambda\left(S_i\right) = \sum_{i=0}^{n}\lambda\left(S_{2i}\right)+\sum_{i=0}^{n-1}\lambda\left(S_{2i+1}\right)\\ &\geq \sum_{i=0}^{n}2\alpha_i+\sum_{i=0}^{n-1}(\alpha_i+\alpha_{i+1})=4\alpha-(\alpha_0+\alpha_n).\end{aligned}$$ Now $\alpha_0 +\alpha_n \leq (1+\varepsilon)\alpha/n$, since mod 1 the sets $A_0\setminus\{0\}$ and $A_n$ are disjoint and their union is included in $J$. Hence $(3+\varepsilon)\alpha\geq \lambda(A+A)\geq 4\alpha-(\alpha_0+\alpha_n) \geq \alpha\left(4-\tfrac{1+\varepsilon}{n}\right)$, and this implies that $n\leq \frac{1+\varepsilon}{1-\varepsilon}$. In [@EGM], Eberhard, Green and Manners prove the following result (see [@EGM Theorem 6.2]). \[corEGM\] Let $A\subset[0,1]$ be an open set with $\lambda(A-A)\leq 4\lambda(A)-\delta$. Then for some constant $c>0$ depending on $\delta$ there is an interval $I$ of length $\lambda(I)\geq c$ such that $\lambda(A\cap I)\geq (\frac{1}{2}+\frac{\delta}{7})\lambda(I)$. Here $\lambda(A-A)$ can be replaced with $\lambda(A+A)$ (see Remark $(ii)$ after Theorem 6.2 in [@EGM]). Theorem \[thm:small\_doubling\_R\] above yields an effective version of this result when $\delta$ is close to $\lambda(A)$. \[cor:EGM\] Let $A\subset[0,1]$ be a non-empty closed set with $\lambda(A+A)\leq 4\lambda(A)-\delta$ and $\lambda(A) < \frac{\textrm{diam}(A)}{4}+\frac{\delta}{2}$, for some $\delta>0$. If $\delta>\lambda(A)(1-\varepsilon)$, with $\varepsilon$ such that Theorem \[thm:Tsym\] holds, then there is an interval $I$ with $\lambda(I)\geq \min(\delta/4,\delta^2)$ such that $\lambda(A\cap I)\geq (\frac{1}{2}+\frac{\delta}{4})\lambda(I)$. The size of $\delta$ is conditioned by Theorem \[thm:Tsym\]. If Bilu’s conjecture holds for sets $A$ with ${\mu}(A)\leq \tfrac{1}{4}$, then Corollary \[cor:EGM\] gives an effective version of [@EGM Theorem 6.2] for sets $A$ with $\lambda(A)\leq \frac{\textrm{diam}(A)}{4}$. 
We first prove the result assuming that $\textrm{diam}(A)=1$. We use the notation introduced in the proof of Theorem \[thm:small\_doubling\_R\]. Thus $A\subset[0,1]$ is a closed set with $0,1\in A$, with $\lambda(A+A)\leq 4\alpha-\delta$ and $\delta>\alpha(1-\varepsilon)$, where $\alpha=\lambda(A)$. First suppose that $\delta> \alpha$. Then $\lambda(A+A)\leq 4\lambda(A)-\delta < 3\lambda(A)$. Note that since $\lambda(A+A)\geq 2\alpha$, we have $\delta\leq 2\alpha$. Applying Theorem \[thm:small\_doubling\_R\] with $\varepsilon=0$, we obtain an interval $I\subset {\mathbb}{T}$ covering $A$ and with $\lambda(I)=\alpha$. Now as a subset of ${\mathbb}{R}$, the set $I$ is either an interval, in which case the conclusion holds, since $\lambda(I)\geq \alpha\geq \delta/2$ and $\lambda(A\cap I)=\lambda(I)$; or $I$ is a union of two intervals, one of which has measure at least $\lambda(I)/2\geq \delta/4$, and then this interval satisfies the desired conclusion. Let us now suppose that $\delta\leq \alpha$, and write $\delta=(1-\varepsilon)\alpha$. Then we have $\lambda(A+A)\leq (3+\varepsilon)\alpha$, and our assumption $\alpha<\frac{1}{4}+\frac{\delta}{2}$ also implies that $\alpha<\frac{1}{2(1+\varepsilon)}$. Therefore, by Theorem \[thm:small\_doubling\_R\] there exists a positive integer $n \leq \frac{1+\varepsilon}{1-\varepsilon}$ such that $$A= \bigcup_{i=0}^{n}\left(\frac{i}{n}+A_i\right) \quad \mbox{with}\quad \textrm{diam}_{{\mathbb}{T}}\left(\bigcup_{i=0}^n A_i\right)\leq (1+\varepsilon)\frac{\alpha}{n},$$ where for a set $B\subset {\mathbb}{T}$ we denote by $\textrm{diam}_{{\mathbb}{T}}(B)$ the infimum of the Haar measures of intervals in ${\mathbb}{T}$ that cover $B$. Writing $\tilde{A_i}=\begin{cases}A_i&\mbox{if }i\in[n-1]\\A_0\cup A_n&\mbox {if } i=0\end{cases}$, there exists $i\in[0,n-1]$ such that $\mu(\tilde{A_i})\geq \frac{\alpha}{n}$ and $\textrm{diam}_{{\mathbb}{T}}(\tilde{A_i})\leq (1+\varepsilon)\frac{\alpha}{n}$. There are now two cases. 
If $i\not=0$, then letting $I$ be the interval $[\inf(A_i),\sup(A_i)]$ in ${\mathbb}{R}$, we have $$\frac{\lambda(A\cap I)}{\lambda(I)}=\frac{\lambda(A_i)}{{\rm{diam}}(A_i)}\geq \frac{1}{1+\varepsilon}=\frac{1}{2-\delta/\alpha}\geq \frac12+\frac{\delta}{4\alpha}\geq \frac{1}{2}+\frac{\delta}{2},$$ where for the last inequality we used that $\alpha<\frac{1}{2(1+\varepsilon)}\leq \frac{1}{2}$. We also have $$\lambda(I)\geq \frac\alpha{n}\geq \alpha \frac{1-\varepsilon}{1+\varepsilon}=\frac{\delta}{2-\delta/\alpha}\geq\frac{\delta}{2}.$$ If $i=0$, so $\lambda(A_0)+\lambda(A_n)\geq \frac{\alpha}{n}$, then let $d_0={\rm{diam}}(A_0)$, $d_n={\rm{diam}}(A_n)$ and $\alpha_0=\lambda(A_0)$, $\alpha_n=\lambda(A_n)$. If $\tfrac{\alpha_i}{d_i}\geq \tfrac{1}{2}+\frac{\delta}{2}$ for both $i=0$ and $i=n$, then we choose $I=[0,d_0]$ if $d_0\geq d_n$, and $I=[-d_n,0]$ if $d_0\leq d_n$. We then have $\lambda(I)\geq \frac{\alpha}{2n}\geq \frac{\delta}{4}$, and $\lambda(A\cap I)=\tfrac{\alpha_i}{d_i} \lambda(I) \geq (\tfrac{1}{2}+\frac{\delta}{2})\lambda(I)$, so the desired conclusion holds. Otherwise, suppose that $\tfrac{\alpha_n}{d_n}< \tfrac{1}{2}+\frac{\delta}{2}$. Then $$\alpha_0 \geq \frac{\alpha}{n}-\alpha_n\geq \frac{\alpha}{n}-\left(\frac{1}{2}+\frac{\delta}2\right)d_n \geq \frac{\alpha}{n}-\left(\frac{1}{2}+\frac{\delta}{2}\right)\left((1+\varepsilon)\frac\alpha{n}-d_0\right).$$ Using that $\varepsilon=1-\delta/\alpha$, the last term above is seen to equal $$\begin{aligned} \frac{\alpha}{n}-\left(\frac{1}{2}+\frac{\delta}{2}\right)\left(\frac{2\alpha-\delta}{n}-d_0\right) & \geq & \left(\frac12+\frac{\delta}2\right)d_0+\frac{\delta}{2n}\left(1-2\alpha+\delta\right)\\ & > &\left(\frac12+\frac{\delta}2\right)d_0+\frac{\delta}{4n},\end{aligned}$$ where the last inequality used that $\alpha < \frac{1}{4}+\frac{\delta}{2}$. 
This implies on one hand that $\tfrac{\alpha_0}{d_0}\geq \tfrac12+\tfrac{\delta}{2}$, and on the other hand that $\alpha_0\geq \left(\frac12+\frac{\delta}{2}\right)\alpha_0+\frac{\delta}{4n}$, thus $\alpha_0\left(\frac12-\frac{\delta}{2}\right)\geq\frac{\delta}{4n}$, and so $\alpha_0 \geq \frac{\delta}{2n}\geq \frac{\delta}{2}\frac{1-\varepsilon}{1+\varepsilon}\geq\frac{\delta}{2}\frac{\delta}{2\alpha-\delta}\geq\delta^2$. Choosing $I=[0,d_0]$ yields the result. The case $\tfrac{\alpha_0}{d_0}< \tfrac12+\tfrac{\delta}{2}$ is similar. This completes the proof in the case $\textrm{diam}(A)=1$. Finally, if $\textrm{diam}(A)< 1$, we may rescale the set in ${\mathbb}{R}$ defining $B= \frac{1}{\textrm{diam}(A)} A$ (and translate if necessary so that we may assume that $0,1\in B$). Applying the previous case to $B$, with parameter $\delta/\textrm{diam}(A)$, we obtain an interval $I_B$ satisfying $\lambda(I_B)\geq \min\big( \delta/\textrm{diam}(A), (\delta/\textrm{diam}(A))^2\big)$ and $\lambda(B\cap I_B) \geq (\frac{1}{2}+\frac{\delta}{4 \,\textrm{diam}(A)}) \lambda(I_B)$. The interval $I=\textrm{diam}(A)\cdot I_B$ satisfies the desired conclusion. [99]{} A. Baltz, P. Hegarty, J. Knape, U. Larsson, T. Schoen, *The structure of maximum subsets of $\{1,\ldots,n\}$ with no solutions to $a+b=kc$*, Electron. J. Combin. **12** (2005), Paper No. R19, 16pp. Y. Bilu, *The $(\alpha + 2\beta)$-inequality on a torus*, J. Lond. Math. Soc. (2) **57** (1998), no. 3, 513–528. Y. Bilu, V. F. Lev, I. Z. Ruzsa, *Rectification principles in additive number theory*, Discrete Comput. Geom. **19** (1998), 343–353. T. F. Bloom, *A quantitative improvement for Roth’s theorem on arithmetic progressions*, J. Lond. Math. Soc. (2) **93** (2016), no. 3, 643–663. P. Candela, O. Sisask, *On the asymptotic maximal density of a set avoiding solutions to linear equations modulo a prime*, Acta Math. Hungar. [**132**]{} (2011), vol. 3, 223–243. P. Candela, O. 
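As a concrete illustration of Corollary \[cor:EGM\] (a toy example of ours, not part of the proof), one can compute $\lambda(A+A)$ for a finite union of intervals by merging the pairwise interval sums and then check the conclusion directly:

```python
def measure(intervals):
    # Lebesgue measure of a finite union of closed intervals [lo, hi]
    intervals = sorted(intervals)
    total, cur_lo, cur_hi = 0.0, *intervals[0]
    for lo, hi in intervals[1:]:
        if lo > cur_hi:
            total += cur_hi - cur_lo
            cur_lo, cur_hi = lo, hi
        else:
            cur_hi = max(cur_hi, hi)
    return total + cur_hi - cur_lo

A = [(0.0, 0.12), (0.88, 1.0)]                 # diam(A) = 1, lambda(A) = 0.24
alpha = measure(A)
sumset = [(a + c, b + d) for (a, b) in A for (c, d) in A]
delta = 4 * alpha - measure(sumset)            # here lambda(A+A) ~ 0.72, delta ~ 0.24
assert alpha < 0.25 + delta / 2                # hypothesis of the corollary
I = (0.0, 0.12)                                # candidate interval
I_len = I[1] - I[0]
assert I_len >= min(delta / 4, delta ** 2)
density = measure([(max(lo, I[0]), min(hi, I[1])) for (lo, hi) in A if lo < I[1]]) / I_len
assert density >= 0.5 + delta / 4              # A fills I completely in this example
print(round(alpha, 10), round(delta, 10), density)
```

Here the set $A$ and the candidate interval $I$ are our own choices; the script only verifies that the hypotheses and the conclusion of the corollary are simultaneously satisfied on this example.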
Sisask, *A removal lemma for linear configurations in subsets of the circle*, Proc. Edinburgh Math. Soc. **56** (2013) (3), 657–666. F. R. K. Chung, J. L. Goldwasser, *Integer sets containing no solutions to $x+y=3z$*, in: R.L. Graham and J. Nesetřil eds., The Mathematics of Paul Erdős, Springer, Berlin (1997), 218–227. F. R. K. Chung, J. L. Goldwasser, *Maximum subsets of $(0,1]$ with no solutions to $x+y=kz$*, Electron. J. Combin. **3** (1996), no. 1, Research Paper 1. E. Croot, V. F. Lev, P. P. Pach, *Progression-free sets in ${\mathbb}{Z}_n^4$ are exponentially small*, Ann. of Math. (2) **185** (2017), no. 1, 331–337. S. Eberhard, B. Green, F. Manners, *Sets of integers with no large sum-free subset*, Ann. of Math. (2) **180** (2014), no. 2, 621–652. J. S. Ellenberg, D. Gijswijt, *On large subsets of ${\mathbb}{F}_q^n$ with no three-term arithmetic progression*, Ann. of Math. (2) **185** (2017), no. 1, 339–343. G. A. Freiman, *The addition of finite sets I*, Izv. Vyss. Ucebn. Zaved. Matematika **6** (13) (1959), 202–213. G. A. Freiman, *Inverse problems in additive number theory. Addition of sets of residues modulo a prime*, Dokl. Akad. Nauk SSSR **141** (1961) 571–573. G. A. Freiman, A. A. Judin, D. A. Moskvin, *Inverse problems of additive number theory and local limit theorems for lattice random variables*, Number-theoretic studies in the Markov spectrum and in the structural theory of set addition (Russian), Kalinin. Gos. Univ., Moscow, 1973, 148–162. B. Green and I. Z. Ruzsa, *Sets with small sumset and rectification*, Bull. London Math. Soc. **38** (2006), no. 1, 43–52. D. J. Grynkiewicz, *Structural additive theory*, Developments in Mathematics **30**, Springer, Cham, 2013. Y. O. Hamidoune, O. Serra, G. Zémor, *On the critical pair theory in $\mathbb{Z}/p\mathbb{Z}$*, Acta Arithmetica **121** (2006), no. 2, 99–115. S. Lev, P. Y. Smeliansky, *On addition of two distinct sets of integers*, Acta Arith. **70** (1) (1995), 85–91. T. 
Łuczak, *On sum-free sets of natural numbers*, Combinatorics Week (Portuguese) (São Paulo, 1994), Resenhas 2 (1995), no. 2, 229–238. A. M. Macbeath, [*On the measure of sum sets, II. The sum-theorem for the torus*]{}, Math. Proc. Cambridge Phil. Soc. **49** n.1 (1953), 40–43. M. Matolcsi, I. Z. Ruzsa, *Sets with no solutions to* $x + y = 3z$, European J. Combin. **34** (2013), no. 8, 1411–1414. M. B. Nathanson, *Additive number theory: inverse problems and the geometry of sumsets*, Grad. Texts in Math. **165**, Springer-Verlag, New York, 1996. A. Plagne, A. de Roton, *Maximal sets with no solution to $x+y=3z$*, Combinatorica **36** (2016), no. 2, 229–248. D. A. Raikov, *On the addition of point-sets in the sense of Schnirelmann*, Rec. Math. \[Mat. Sbornik\] N.S. **5**(47), (1939), 425–440. Ø. J. R[ø]{}dseth, *On Freiman’s $2.4$-Theorem*, Skr. K. Nor. Vidensk. Selsk. (2006), no. 4, 11–18. K. F. Roth, *On certain sets of integers*, J. London Math. Soc. **28** (1953), 104–109. A. de Roton, *Small sumsets in ${\mathbb}{R}$: a continuous $3k-4$ theorem*, preprint (2016), arXiv:1605.04597, 22 pages. T. Sanders, *On Roth’s theorem on progressions*, Ann. of Math. (2) **174** (2011), no. 1, 619–636. I. Schur, *Über die Kongruenz $x^m+y^m\equiv z^m \pmod{p}$*, Jahresber. Deutsch. Math.-Verein. **25** (1916), 114–116. O. Serra, G. Zémor, *Large sets with small doubling modulo $p$ are well covered by an arithmetic progression*, Ann. Inst. Fourier **59** (2009), no. 5, 2043–2060. Y. Stanchescu, *On addition of two distinct sets of integers*, Acta Arith. **75** (2) (1996), 191–194. [^1]: Note that [@H-S-Z Conjecture 1] appeared before Conjecture \[conj:S-Z\], but in the case $A=B$ it was recognized only later as the likely optimal conjecture, in [@S-Z], thanks to an example given in that paper. [^2]: The term *$k$-sum-free set* is used for instance in .
These sets should not be confused with sets free of solutions to the equation $a_1+\cdots+a_k=b$, which have also been called $k$-sum-free sets (see [@Luczak]).
--- abstract: 'In this work we study a degenerate pseudo-parabolic system with cross diffusion describing the evolution of the densities of an unsaturated two-phase flow mixture with dynamic capillary pressure in a porous medium, with saturation-dependent relaxation parameter and a hypocoercive diffusion operator modeling cross diffusion. The equations are derived in a thermodynamically correct way from mass conservation laws. Global-in-time existence of weak solutions to the system in a bounded domain with equilibrium boundary conditions is shown. The main tools of the analysis are an entropy inequality and a crucial a priori bound which allows for controlling the degeneracy.' address: - 'Institute for Analysis and Scientific Computing, Vienna University of Technology, Wiedner Hauptstraße 8–10, 1040 Wien, Austria' - 'University of Zagreb, Faculty of Electrical Engineering and Computing, Unska 3, 10000 Zagreb, Croatia' - 'Institute for Analysis and Scientific Computing, Vienna University of Technology, Wiedner Hauptstraße 8–10, 1040 Wien, Austria' author: - 'Esther S. Daus' - 'Josipa-Pina Milišić' - Nicola Zamponi date: - - title: 'Global existence for a two-phase flow model with cross diffusion' --- [^1] Introduction ============ The problem of describing the transport of chemical mixtures in porous media is very important in many industrial applications. For a general overview on the modeling of multicomponent multiphase flows in porous media, we refer to [@BearBach90]. In this paper we consider a two-phase flow model with a wetting and a non-wetting phase (e.g. water and oil), where the non-wetting phase consists of a mixture of $n$ chemical components, including nonequilibrium effects concerning capillary pressure and cross-diffusion effects. The main result of this work is an existence analysis of the proposed model.
From a mathematical viewpoint, the transport equations for the mass densities form a degenerate pseudo-parabolic system of PDEs with cross-diffusion terms. The presence of the mixed-derivative third-order term, coming from the nonequilibrium capillary pressure law, in the form of a time derivative inside the diffusion operator, as well as the cross-diffusion terms, involving the chemical potentials, makes the analysis very demanding. Furthermore, the compactness of an approximate regularized system is obtained by applying the nonstandard compactness results of Dreyer *et al.* [@Drey17]. The modeling of nonequilibrium capillary effects in problems of enhancing oil and gas recovery from rocks was proposed by Barenblatt, Entov and Ryzhik in the classical book [@BER72], and has since been investigated by many researchers. In our work we follow the approach given by Hassanizadeh and Gray [@HG93], where the nonequilibrium capillary effects are given by a constitutive relationship between the non-wetting phase saturation and the capillary pressure. This relationship is characterized by the presence of the relaxation parameter, which depends on the non-wetting saturation as well. Concerning the mathematical analysis, the global-in-time existence of weak solutions for Richards’ equation with dynamic capillary pressure and constant relaxation parameter was shown by Mikelić [@Mik10]. The first existence result for the two-phase flow model with dynamic capillary pressure and saturation-dependent relaxation parameter was obtained by Cao and Pop in [@CP16]. We note that the existence theorem can be proved under certain relations between the orders of the zeros of the relative permeabilities and the relaxation parameter and the order of the singularities of the capillary pressure function.
In comparison to [@CP16], here we follow the approach given in [@JPM17], where it was shown that it is enough to analyze the case of the countercurrent imbibition flow instead of the full two-phase flow system. On the other hand, the analysis of a model describing the transport of a single-phase fluid mixture in porous media which also takes certain cross-diffusion effects into account was carried out in [@JMZ18]. The equations are derived in a thermodynamically consistent way, and global-in-time existence of weak solutions in a bounded domain with equilibrium boundary conditions, as well as the long-time behaviour, was proved with the help of the boundedness-by-entropy method [@BSW12; @Jue15; @Jue16]. The mathematical novelties rely on the complex structure of the equations and on the observation that the solution of the binary model satisfies an unexpected integral inequality leading to a minimum principle for this system. Our goal in this work is to combine the strategies of [@JPM17] and [@JMZ18], leading to a global-in-time existence result for a two-phase flow model with cross diffusion. Finally, to our knowledge, the uniqueness and the long-time behaviour of weak solutions for a two-phase flow model with saturation-dependent relaxation parameter and cross diffusion are still open problems. For a uniqueness result for a two-phase flow model with saturation-dependent relaxation parameter but without cross diffusion, we refer to [@CP15]. Model equations {#sec.model} =============== We consider an incompressible, isothermal fluid mixture with $n$ components in a domain $\Omega \subset {{\mathbb R}}^3$. We work in ${{\mathbb R}}^3$ for convenience only; the arguments can easily be adapted to an arbitrary space dimension $\Omega \subset {{\mathbb R}}^d$ with $d \geq 1$.
The evolution of this fluid mixture is governed by the transport equations for the [*single component mass densities*]{} $S_1(x,t),\ldots,S_n(x,t)$ in the following way $$\begin{aligned} \label{1} \partial_t S_i = & {\operatorname{div}}\left( \frac{S_i}{S} a(S) \nabla (p_c(S) + \partial_t \beta(S)) + \sum_{j=1}^n D_{ij}({\boldsymbol{S}}) \nabla \mu_j \right)\\ &\qquad i=1,\ldots,n,~~ x\in\Omega,~~t>0.\nonumber\end{aligned}$$ Here $S = \sum_{i=1}^n S_i$ is the [*total mass density*]{}, ${\boldsymbol{S}}= (S_1,\ldots,S_n)$ is the vector of the single component mass densities, $a(S)$ is the [*diffusion mobility*]{}, $p_c(S)$ represents the [*stationary capillary pressure*]{}, $\tau(S)\equiv \beta'(S)$ plays the role of a relaxation parameter, $D=(D_{ij}({\boldsymbol{S}}))_{i,j=1,\ldots,n}$ is the [*diffusion matrix*]{}, and the quantities $\mu_1,\ldots,\mu_n$, called [*chemical potentials*]{}, are defined in terms of $S_1,\ldots,S_n$ as follows $$\begin{aligned} \mu_i = \log \Big( \frac{S_i}{S} \Big) + \int_{1/2}^S\frac{\tau(\sigma)}{a(\sigma)}d\sigma \qquad i=1,\ldots,n. \label{ChemPot}\end{aligned}$$ The sum $p_c^{\textrm{dyn}}(S)\equiv p_c(S) + \partial_t \beta(S)$ is referred to as [*dynamic capillary pressure*]{} [@HG93]. The quantities $a(S)$, $\tau(S)$, $p_c'(S)$ are assumed to be positive for $0<S<1$, while the diffusion matrix $D({\boldsymbol{S}})$ is assumed to be positive semidefinite. Following the approach in [@JMZ18], we impose equilibrium boundary conditions $$\begin{aligned} S_i & = S_i^{\Gamma}\; \textrm{ on } \; \partial \Omega,\; t > 0,~~ i=1,\ldots,n, \label{BC}\end{aligned}$$ where $S_1^\Gamma,\ldots,S_n^\Gamma > 0$ are generic constants, as well as general initial conditions $$\begin{aligned} S_i(\cdot,0) & = S_i^0 \; \textrm{ in } \; \Omega,~~ i=1,\ldots,n. 
\label{IC}\end{aligned}$$ For consistency of , with the physics, we require the single component concentrations $S_1,\ldots,S_n$ to be positive and the total concentration $S$ to be smaller than 1; that is, we seek solutions ${\boldsymbol{S}}$ to , which take values in the set $${{\mathcal D}}= \left\{ {\boldsymbol{S}}\in {{\mathbb R}}^n : \; S_i > 0 \; \textrm{ for } \; i=1,\ldots,n, \;\; \sum_{j=1}^n S_j < 1 \right\}.$$ The chemical potentials $\mu_1,\ldots,\mu_n$ are the partial derivatives with respect to the species concentrations $S_1,\ldots,S_n$ of a free energy density function ${{\mathcal F}}$ satisfying $$\begin{aligned} \nonumber \mu_i &= \frac{\pa{{\mathcal F}}}{\pa S_i}({\boldsymbol{S}})\qquad i=1,\ldots,n,\\ {{\mathcal F}}({\boldsymbol{S}}) &= \sum_{i=1}^n S_i \log\frac{S_i}{S} + {\mathcal{E}}(S),\qquad {\mathcal{E}}(S) = \int_{1/2}^S\int_{1/2}^{S'}\frac{\tau(\sigma)}{a(\sigma)}d\sigma d S' . \label{Entropy}\end{aligned}$$ The [*thermodynamic pressure*]{} $p^{th}$ is given by the Gibbs-Duhem equation $$\begin{aligned} \label{Pth} p^{th}({\boldsymbol{S}}) &= \sum_{i=1}^n S_i\frac{\pa{{\mathcal F}}}{\pa S_i}({\boldsymbol{S}}) - {{\mathcal F}}({\boldsymbol{S}}).
\end{aligned}$$ The gradient of the thermodynamic pressure $p^{th}$ satisfies the simple relation $$\begin{aligned} \label{Pom} \sum_i S_i \nabla \mu_i = \nabla p^{th} = \frac{S\nabla\beta(S)}{a(S)} = \frac{S\tau(S)}{a(S)}\nabla S =\nabla \int_{1/2}^S \frac{\sigma \tau(\sigma)}{a(\sigma)}d\sigma.\end{aligned}$$ As a consequence of , by employing $\mu_i - \mu_i({\boldsymbol{S}}^\Gamma)$ as a test function in , one obtains the following [*entropy balance equation*]{}: $$\begin{aligned} \label{ei} &\frac{d}{dt}\int_\Omega\left(\tilde{{\mathcal F}}({\boldsymbol{S}}) +\frac{1}{2}|\nabla\beta(S)|^2 \right)dx\\ &\qquad = -\int_\Omega \beta'(S)p_c'(S)|\nabla S|^2 dx - \int_\Omega\sum_{i,j=1}^n D_{ij}({\boldsymbol{S}})\nabla\mu_i\cdot\nabla\mu_j dx \leq 0, \nonumber\end{aligned}$$ where the [*relative entropy density*]{} $\tilde{{\mathcal F}}$ is defined as $$\begin{aligned} \label{rel.entr} \tilde{{\mathcal F}}({\boldsymbol{S}}) = {{\mathcal F}}({\boldsymbol{S}}) - {{\mathcal F}}({\boldsymbol{S}}^\Gamma) - {\boldsymbol{\mu}}({\boldsymbol{S}}^\Gamma)\cdot ({\boldsymbol{S}}- {\boldsymbol{S}}^\Gamma). \end{aligned}$$ [Relations , easily imply $$\begin{aligned} \label{link} \exists C\in{{\mathbb R}}: \quad \sum_{i=1}^n S_i\frac{\pa{{\mathcal F}}}{\pa S_i}({\boldsymbol{S}}) - {{\mathcal F}}({\boldsymbol{S}}) = C + \int_{1/2}^S\frac{\sigma\tau(\sigma)}{a(\sigma)}d\sigma .\end{aligned}$$]{.nodecor} [Equation constitutes a necessary condition in order for the entropy balance equation to hold; without it is unclear how to handle the contribution of the nonstationary term $\pa_t\beta(S)$ in the dynamic capillary pressure $p^{dyn}$.
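The defining relation $\mu_i=\pa{{\mathcal F}}/\pa S_i$ can be sanity-checked numerically. The sketch below is purely illustrative: the placeholder mixing energy ${\mathcal{E}}(S)=S^2$ (our choice) stands in for the double integral in the definition of ${{\mathcal F}}$, and a central finite difference of ${{\mathcal F}}$ reproduces $\log(S_i/S)+{\mathcal{E}}'(S)$.

```python
import math

def free_energy(Svec, E=lambda S: S * S):
    # F(S) = sum_i S_i log(S_i/S) + E(S), with a placeholder mixing energy E
    S = sum(Svec)
    return sum(Si * math.log(Si / S) for Si in Svec) + E(S)

def mu(Svec, i, dE=lambda S: 2 * S):
    # expected chemical potential: mu_i = log(S_i/S) + E'(S)
    S = sum(Svec)
    return math.log(Svec[i] / S) + dE(S)

Svec = [0.15, 0.25, 0.35]          # a point of the domain D (all S_i > 0, sum < 1)
h = 1e-6
for i in range(len(Svec)):
    Sp = list(Svec); Sp[i] += h
    Sm = list(Svec); Sm[i] -= h
    fd = (free_energy(Sp) - free_energy(Sm)) / (2 * h)   # central difference dF/dS_i
    assert abs(fd - mu(Svec, i)) < 1e-6
print("mu_i = dF/dS_i verified numerically")
```

The same check goes through for any smooth choice of ${\mathcal{E}}$, since the logarithmic part of ${{\mathcal F}}$ is independent of it.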
In other words, is a constraint on the possible choices of free energies ${{\mathcal F}}$ which ensure that possesses an entropy structure.]{.nodecor} [Since is a linear nonhomogeneous equation, we can write any solution ${{\mathcal F}}$ to as ${{\mathcal F}}= {{\mathcal F}}_0 + {{\mathcal F}}_1$, where ${{\mathcal F}}_1$ is a specific solution to , while ${{\mathcal F}}_0$ is a generic solution to the corresponding linear homogeneous equation: $$\begin{aligned} \label{link.0} \sum_{i=1}^n S_i\frac{\pa{{\mathcal F}}_0}{\pa S_i}({\boldsymbol{S}}) - {{\mathcal F}}_0({\boldsymbol{S}}) = 0 .\end{aligned}$$ A simple ansatz ${{\mathcal F}}_1({\boldsymbol{S}})=\tilde{{\mathcal F}}_1(S)$ yields ${{\mathcal F}}({\boldsymbol{S}}) = {\mathcal{E}}(S)$ (up to additive constants). On the other hand, Euler’s theorem on homogeneous functions implies that is equivalent to the condition that ${{\mathcal F}}_0$ should be homogeneous of degree 1, *i.e.* ${{\mathcal F}}_0(\lambda{\boldsymbol{S}})=\lambda{{\mathcal F}}_0({\boldsymbol{S}})$ for every ${\boldsymbol{S}}\in{{\mathcal D}}$, $\lambda>0$. This condition has to be put together with the requirement that ${{\mathcal F}}$ has to be convex and the mapping ${\boldsymbol{S}}\in{{\mathcal D}}\mapsto (\mu_1,\ldots,\mu_n)\in{{\mathbb R}}^n$ globally invertible. A natural choice of ${{\mathcal F}}_0$ which fulfills all these requirements is ${{\mathcal F}}_0({\boldsymbol{S}}) = \sum_{i=1}^n S_i\log(S_i/S)$.]{.nodecor} Other quantities that will play a role in the analysis of are the [*relative chemical potentials:*]{} $$\mu_i^* = \mu_i - \frac{1}{n}\sum_{j=1}^n\mu_j , \qquad i=1,\ldots,n.$$ The concentrations $S_1,\ldots,S_n$ can be easily written in terms of the total concentration and the relative chemical potentials: $$\begin{aligned} S_i = S \frac{e^{\mu_i^*}}{\sum_{j=1}^n e^{\mu_j^*}},\quad i=1,\ldots,n. \label{Si.2}\end{aligned}$$ The structure of the paper is as follows. 
In Section \[sec.main\] the main result of the paper is stated and the state of the art for systems of the form is described. In Section \[sec.aux\] some auxiliary results are stated and proved. In Section \[Sec.Exis\] Theorem \[Thm.Main\] is proved. In the Appendix the derivation of the model is shown. Main result {#sec.main} =========== Throughout the paper we make the following assumptions: 1. The diffusion matrix $D=(D_{ij}({\boldsymbol{S}}))_{i,j=1}^n$ is symmetric and positive semidefinite (Onsager’s principle of thermodynamics). Moreover, constants $D_0$, $D_1 > 0$ exist such that $$\begin{aligned} D_0 \vert \Pi v \vert^2 \leq \sum_{i,j=1}^n D_{ij}({\boldsymbol{S}}) v_i v_j \leq D_1 \vert \Pi v\vert^2 \; \textrm{ for all } \; v \in {{\mathbb R}}^n,\; {\boldsymbol{S}}\in \mathcal{D},\end{aligned}$$ where $\Pi = I - l \otimes l$ is the orthogonal projection on the subspace of ${{\mathbb R}}^n$ orthogonal to $l = (1,\ldots,1)/\sqrt{n}$. 2. The diffusion mobility $a(S)$ is given by $$\begin{aligned} a(S) = \frac{\lambda_o(S) \lambda_w(S)}{\lambda_o(S) + \lambda_w(S)} = \frac{(1-S)^\lambda S^\gamma}{(1-S)^\lambda + S^\gamma}, \end{aligned}$$ for some constants $\lambda, \gamma > 0$. 3. The stationary capillary pressure $p_c(S)$ has the form $$\begin{aligned} p_c'(S) = \frac{1}{S^{\beta_1}} + \frac{1}{(1-S)^{\beta_2}}, \end{aligned}$$ for some constants $\beta_1, \beta_2>0$. 4. We assume that the relaxation parameter $\tau(S)$ is given by $$\begin{aligned} \tau(S) = \beta'(S) = \frac{S^\gamma}{S^\gamma + (1-S)^\lambda} \Big[ 1 + \frac{(1-S)^\lambda}{S^{\gamma_1}} \Big], \end{aligned}$$ for some constant $\gamma_1>0$. 5. The following algebraic relations are satisfied: $$\begin{aligned} 5 < \beta_1 \leq \gamma_1 < \gamma <\frac{1}{2} \beta_1 + \frac{5}{6}(\gamma_1 - 2), \quad 5 < \beta_2 \leq \lambda < 3\beta_2-10 . \end{aligned}$$ [In order to avoid technical difficulties, we use explicit forms for $a$, $p_c$ and $\tau$ as in [@JPM17]]{.nodecor}.
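Assumption 1 can be exercised on the simplest admissible example $D=D_0\Pi$ (a choice made purely for illustration): the quadratic form then equals $D_0|\Pi v|^2$, the fluxes ${\mathbf{J}}_i=-\sum_j D_{ij}\nabla\mu_j$ sum to zero, and $D$ only sees the projected potentials.

```python
import random

n = 4
# orthogonal projection Pi = I - l (x) l, with l = (1,...,1)/sqrt(n)
Pi = [[(1.0 if i == j else 0.0) - 1.0 / n for j in range(n)] for i in range(n)]
D0 = 0.7
D = [[D0 * Pi[i][j] for j in range(n)] for i in range(n)]   # symmetric PSD, D0 = D1

def apply(M, x):
    return [sum(M[i][j] * x[j] for j in range(n)) for i in range(n)]

random.seed(0)
v = [random.gauss(0, 1) for _ in range(n)]
grad_mu = [random.gauss(0, 1) for _ in range(n)]   # stands in for (nabla mu_j)_j

# the quadratic form v . D v equals D0 |Pi v|^2, so both hypocoercivity bounds hold
quad = sum(v[i] * apply(D, v)[i] for i in range(n))
Piv = apply(Pi, v)
assert abs(quad - D0 * sum(x * x for x in Piv)) < 1e-12

# fluxes J_i = -sum_j D_ij grad_mu_j sum to zero, and only see Pi grad_mu
J = [-x for x in apply(D, grad_mu)]
assert abs(sum(J)) < 1e-12
assert all(abs(a - b) < 1e-12 for a, b in zip(apply(D, grad_mu), apply(D, apply(Pi, grad_mu))))
print("hypocoercivity of D = D0 * Pi verified")
```

The identities used here, $\Pi^2=\Pi$ and $\Pi l=0$, are exactly what makes $D_0=D_1$ possible for this particular $D$; a general admissible $D$ only satisfies the two-sided bound.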
[We point out that the upper bound $$\sum_{i,j=1}^n D_{ij}({\boldsymbol{S}}) v_i v_j \leq D_1 \vert \Pi v\vert^2\qquad \mbox{for }v \in {{\mathbb R}}^n,~~ {\boldsymbol{S}}\in \mathcal{D},$$ is consistent with the fact that the diffusion fluxes ${\mathbf{J}}_{i} = -\sum_{j=1}^n D_{ij}({\boldsymbol{S}}) \nabla\mu_j$ ($i=1,\ldots,n$) sum up to zero: $\sum_{i=1}^N {\mathbf{J}}_i = 0$. On the other hand, the lower bound $$D_0 \vert \Pi v \vert^2 \leq \sum_{i,j=1}^n D_{ij}({\boldsymbol{S}}) v_i v_j\qquad \mbox{for }v \in {{\mathbb R}}^n,~~ {\boldsymbol{S}}\in \mathcal{D},$$ often referred to as [*hypocoercivity*]{}, is the strongest coercivity property that $D$ can satisfy under the constraint $\sum_{i=1}^N {\mathbf{J}}_i = 0$. As a consequence of this assumption, the diffusion fluxes ${\mathbf{J}}_{i}=-\sum_{j=1}^n D_{ij}({\boldsymbol{S}}) \nabla\mu_j$ only depend on the gradients of the relative chemical potentials: ${\mathbf{J}}_{i}=-\sum_{j=1}^n D_{ij}({\boldsymbol{S}}) \nabla\mu_j^*$.]{.nodecor} We now present our definition of weak solution to –. In the following, the symbol $\langle \cdot, \cdot\rangle$ represents the duality product between $H^{-1}(\Omega)$ and $H^{1}_0(\Omega)$. 
\[def.weaksol\] A function ${\boldsymbol{S}}: \Omega\times (0,\infty)\to \mathcal D$ is called a [*global-in-time weak solution*]{} to – if and only if the following properties are fulfilled: $$\begin{aligned} \nabla\beta(S),~~\pa_t\beta(S),~~ \sqrt{a(S)p_c'(S)}\,\nabla S,~~ \sqrt{a(S)}\nabla\pa_t\beta(S)\in L^2_{loc}(0,\infty; L^2(\Omega)),\\ \mbox{for }i=1,\ldots,n:\quad (\Pi\mu)_i\in L^2_{loc}(0,\infty; H^1(\Omega)),\\ \mbox{for }i=1,\ldots,n:\quad \pa_t S_i\in L^2_{loc}(0,\infty; H^{-1}(\Omega)),\end{aligned}$$ as well as the weak formulation of : $$\begin{aligned} \label{weak} \sum_{i=1}^n\int_0^T\langle \partial_t S_i, \phi_i\rangle dt &+ \int_0^T\int_\Omega \sum_{i=1}^n\frac{S_i}{S}a(S)\left(p_c'(S)\nabla S + \nabla \partial_t \beta(S)\right)\cdot \nabla \phi_i \, dx dt\\ \nonumber &+ \int_0^T\int_\Omega \sum_{i,j=1}^n D_{ij}({\boldsymbol{S}}) \nabla \mu_j \cdot \nabla \phi_i\, dx dt = 0\\ &\nonumber \qquad \forall \phi_1,\ldots,\phi_n\in L^2_{loc}(0,\infty; H_0^1(\Omega)),\end{aligned}$$ relation , the boundary conditions [^2], and the initial condition : $$\begin{aligned} S_i(\cdot,t)\to S_i^0\qquad \mbox{strongly in $H^{-1}(\Omega)$ as $t\to 0$.}\end{aligned}$$ The result we present in this paper is concerned with the global existence of weak solutions to –. \[Thm.Main\] Let $S_1^0,\ldots,S_n^0 : \Omega\to {{\mathbb R}}_+$ be measurable functions satisfying $$\min_{i=1,\ldots,n}\inf_\Omega S_i^0 > 0,\qquad \max_\Omega S^0 < 1,\qquad \beta(S^0)\in H^1(\Omega).$$ Assume that Assumptions hold. Then there exists a global-in-time weak solution ${\boldsymbol{S}}: \Omega \times (0,\infty) \to \mathcal{D}$ to –. Key idea of the proof {#key-idea-of-the-proof .unnumbered} --------------------- The proof of Thm. \[Thm.Main\] is based on the entropy method [@BSW12; @Jue15; @Jue16]. The starting point of the argument is the formulation of a time-discretized and regularized version of .
This approximate equation is stated in terms of the variables $w_i = \mu_i + \pa_t\beta(S)$, $i=1,\ldots,n$ (or rather a discretized version of it). One of the key ingredients of the proof is the entropy balance equation , which yields crucial gradient estimates. The other key tool employed in the proof is a result shown in [@Drey17], which allows one to prove compactness for the densities $S_1,\ldots,S_n$ if some bounds for the gradients of the relative chemical potentials $\nabla\mu_1^*,\ldots,\nabla\mu_n^*$ are known, together with compactness of the total density $S$. We point out that in the standard entropy method the approximate problem is formulated in terms of the “entropy variables” defined as partial derivatives of the mathematical entropy (or energy) density, which in the case considered here would be the functions $\mu_1,\ldots,\mu_n$ given by . However, this standard approach does not work in this setting: in fact, in order to obtain a crucial estimate for the dynamic capillary pressure, $\pa_t\beta(S)$ must be used as a test function in the weak formulation of , which would clash with the regularizing terms if the latter were written in terms of just $\mu_1,\ldots,\mu_n$. Auxiliary results {#sec.aux} ================= We present here some results which will be used in the proof of Thm. \[Thm.Main\]. Define the variable $\boldsymbol{w}$ as follows: $$\begin{aligned} {\boldsymbol{w}}= {\boldsymbol{\mu}}({\boldsymbol{S}}) - {\boldsymbol{\mu}}({\boldsymbol{S}}^\Gamma) + \Big( \frac{\beta(S)-\beta(S^{k-1})}{\kappa} \Big){\mathbf{1}}, \label{TotEntrVar}\end{aligned}$$ where we denoted ${\mathbf{1}}= (1,\ldots,1)$ and ${\boldsymbol{S}}= (S_1,\ldots,S_n)$.
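The inversion underlying such changes of variables can be illustrated numerically: as in (\[Si.2\]), centering the chemical potentials and applying a softmax recovers the ratios $S_i/S$. In the sketch below, $g(S)$ is a placeholder of ours for the common additive term $\int_{1/2}^S \tau/a\,d\sigma$; any choice of $g$ works for this round trip, since centering removes it.

```python
import math

def potentials(Svec, g=lambda S: math.log(S / (1 - S))):
    # mu_i = log(S_i / S) + g(S), with g a placeholder for the integral of tau/a
    S = sum(Svec)
    return [math.log(Si / S) + g(S) for Si in Svec]

def recover(S, mu):
    # S_i = S exp(mu_i^*) / sum_j exp(mu_j^*), where mu^* are the centered potentials
    mbar = sum(mu) / len(mu)
    w = [math.exp(m - mbar) for m in mu]
    return [S * wi / sum(w) for wi in w]

Svec = [0.1, 0.2, 0.3]                       # a point of D
back = recover(sum(Svec), potentials(Svec))
assert all(abs(a - b) < 1e-12 for a, b in zip(Svec, back))
print("round trip S -> mu -> mu* -> S verified")
```

Recovering the total density $S$ itself from $\boldsymbol{w}$ additionally requires the monotonicity of $\beta$ and of the additive term, which is the content of the invertibility lemma below.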
(Invertibility of ${\boldsymbol{S}}\mapsto {\boldsymbol{\mu}}$ and ${\boldsymbol{S}}\mapsto {\boldsymbol{w}}$)\[lemma.3\]\ The mappings $\Phi : {\boldsymbol{S}}\in{{\mathcal D}}\mapsto {\boldsymbol{\mu}}\in{{\mathbb R}}^n$, and $\Phi_\kappa : {\boldsymbol{S}}\in{{\mathcal D}}\mapsto {\boldsymbol{w}}\in{{\mathbb R}}^n$ are invertible, and their Jacobians $\Phi'$, $\Phi_\kappa'$ are uniformly positive definite in ${{\mathcal D}}$. We note that ${\displaystyle}\frac{\partial \mu_i}{\partial S_j} = \frac{\partial^2 {{\mathcal F}}}{\partial S_j \partial S_i}$. A direct calculation gives $$[{{\mathcal F}}'']_{ij} = \frac{\partial^2 {{\mathcal F}}}{\partial S_j \partial S_i} = \begin{cases} \frac{1}{1-S}, & i\neq j,\\ \frac{1}{S_i} + \frac{1}{1-S}, & i=j, \end{cases}$$ from which it follows that ${{\mathcal F}}''$ is uniformly positive definite in ${{\mathcal D}}$, *i.e.* ${{\mathcal F}}: {{\mathcal D}}\to{{\mathbb R}}$ is a differentiable, strictly convex mapping. As a consequence, its gradient $\Phi = {{\mathcal F}}' : {{\mathcal D}}\to{{\mathbb R}}^n$ is a monotone (and therefore injective) mapping. Its inverse $\Phi^{-1}$ can be explicitly computed: [$\Phi^{-1}({\boldsymbol{\mu}})_i = \frac{e^{\mu_i}}{1+\sum_{j=1}^n e^{\mu_j}}$, $i=1,\ldots,n$, ${\boldsymbol{\mu}}\in{{\mathbb R}}^n$.]{} Therefore $\Phi : {{\mathcal D}}\to{{\mathbb R}}^n$ is invertible. Moreover, since $\beta' \geq 0$, the Jacobian $\frac{\partial{\boldsymbol{w}}}{\partial{\boldsymbol{S}}}$ is symmetric and positive definite. Furthermore, $\lim_{{\boldsymbol{S}}\to\pa{{\mathcal D}}}|w({\boldsymbol{S}})| = \infty$. Using the Hadamard global inverse theorem [@RS15 Thm. 2.2], we conclude that $\Phi_\kappa : {{\mathcal D}}\to{{\mathbb R}}^n$ is invertible. \[lem.symm\] Let $f : [0,1]\to{{\mathbb R}}$ be a continuous function with $f'(S)>0$ for $S \in (0,1)$.
Given any ${\boldsymbol{w}}\in{{\mathbb R}}^n$, we denote by ${\boldsymbol{S}}={\boldsymbol{S}}({\boldsymbol{w}})\in\{{\boldsymbol{S}}\in (0,\infty)^n ~:~ \sum_{i=1}^n S_i < 1\}$ the only solution to $$w_i = \log(S_i) - \log(S) + f(S),~~ i=1,\ldots,n ,\qquad S\equiv \sum_{i=1}^n S_i .$$ Then the matrix $(M_{ij}({\boldsymbol{w}}))_{i,j=1,\ldots,n}=(S_i({\boldsymbol{w}})\frac{\pa S({\boldsymbol{w}})}{\pa w_j})_{i,j=1,\ldots,n}$ is symmetric and positive semidefinite for every ${\boldsymbol{w}}\in{{\mathbb R}}^n$. The definition of ${\boldsymbol{S}}$ implies $$\begin{aligned} \sum_{i=1}^n e^{w_i} = e^{f(S({\boldsymbol{w}}))}.\end{aligned}$$ Differentiating the above identity with respect to $w_j$ leads to $$\begin{aligned} e^{w_j} = F(S({\boldsymbol{w}}))\frac{\pa S({\boldsymbol{w}})}{\pa w_j},\qquad F(S)\equiv \frac{d}{dS}\left(e^{f(S)}\right) = e^{f(S)} f'(S).\end{aligned}$$ Since $f$ is strictly increasing, then $F>0$ in $(0,1)$. It follows $$\begin{aligned} M_{ij}({\boldsymbol{w}}) = \frac{S_i({\boldsymbol{w}})e^{w_j}}{F(S({\boldsymbol{w}}))} = \frac{e^{f(S({\boldsymbol{w}}))}S_i({\boldsymbol{w}})S_j({\boldsymbol{w}})}{S({\boldsymbol{w}})F(S({\boldsymbol{w}}))} ,\end{aligned}$$ which means that $M({\boldsymbol{w}})$ is symmetric and positive semidefinite for every ${\boldsymbol{w}}\in{{\mathbb R}}^n$. This finishes the proof. The following bound holds $$ p_c^\prime (S) \leq \frac{\tau(S)}{a(S)},\quad S\in (0,1). \label{coeff:ineq}$$ \[lemma:coeff:1\] Through simple calculations using Assumptions , (\[coeff:ineq\]) can be written as $$\begin{aligned} \frac{1}{S^{\beta_1}} + \frac{1}{(1-S)^{\beta_2}} \leq \frac{1}{S^{\gamma_1}} + \frac{1}{(1-S)^{\lambda}} . \end{aligned}$$ Since $S\in (0,1)$, the claim follows from the fact that $\beta_1\leq\gamma_1$, $\beta_2\leq\lambda$. 
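Both the identity $\tau/a = S^{-\gamma_1}+(1-S)^{-\lambda}$ used in the proof above and the bound (\[coeff:ineq\]) itself can be verified numerically for one admissible choice of exponents satisfying Assumption 5 ($\beta_1=\beta_2=\gamma_1=6$, $\gamma=6.2$, $\lambda=7$; these concrete values are our pick, made only for illustration):

```python
b1 = b2 = g1 = 6.0
g, lam = 6.2, 7.0                          # satisfies 5 < b1 <= g1 < g < b1/2 + 5*(g1-2)/6
                                           # and 5 < b2 <= lam < 3*b2 - 10

def a(S):   return (1 - S)**lam * S**g / ((1 - S)**lam + S**g)
def tau(S): return S**g / (S**g + (1 - S)**lam) * (1 + (1 - S)**lam / S**g1)
def pc1(S): return S**-b1 + (1 - S)**-b2   # the capillary pressure derivative p_c'(S)

ok = True
for k in range(1, 1000):
    S = k / 1000
    ratio = tau(S) / a(S)
    # tau/a collapses to S^{-g1} + (1-S)^{-lam}, as in the proof
    ok = ok and abs(ratio - (S**-g1 + (1 - S)**-lam)) <= 1e-9 * ratio
    # the bound p_c'(S) <= tau(S)/a(S) of the lemma
    ok = ok and pc1(S) <= ratio * (1 + 1e-12)
print(ok)  # True
```

The tiny relative tolerances only absorb floating-point rounding; the underlying inequalities hold exactly because $\beta_1\leq\gamma_1$, $\beta_2\leq\lambda$ and $S,1-S\in(0,1)$.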
The next result has been proved in [@JMZ18 Lemma 5]: \[lem.JMZ\] Let $\boldsymbol{\alpha}$, $\boldsymbol{\beta} \in {{\mathbb R}}^n$ be such that $|\boldsymbol{\alpha}| = |\boldsymbol{\beta}| = 1$. Then, for any $\boldsymbol{v} \in {{\mathbb R}}^n$ it holds that $$|\boldsymbol{\alpha} \cdot \boldsymbol{v}|^2 + |\boldsymbol{v} -(\boldsymbol{\beta} \cdot \boldsymbol{v})\boldsymbol{\beta}|^2 \geq \frac{1}{4}(\boldsymbol{\alpha} \cdot \boldsymbol{\beta})^2|\boldsymbol{v}|^2 .$$ [**Notation.**]{} Let ${{\mathbb R}}_+\equiv [0,\infty)$. For $x \in {{\mathbb R}}_+\times {{\mathbb R}}^{N-1}$, we denote $x = (x_0,\overline{x})$. \[lemma.13\] Let $\mathcal{R} : {{\mathbb R}}_+\times {{\mathbb R}}^{N-1}\to {{\mathbb R}}_+^N$ be a continuous and bounded mapping. Let $K \subset L^2(\Omega)$ be relatively compact. Let $\{ \phi_i \in C_c^\infty(\Omega;{{\mathbb R}}^N): i\in{{\mathbb N}}\}$ be dense in $L^2(\Omega;{{\mathbb R}}^N)$. Then, for every $\delta > 0$, there are $C(\delta) > 0$, $m(\delta) \in {{\mathbb N}}$ such that, for all $\boldsymbol{w}^1,\boldsymbol{w}^2 \in K\times H^1(\Omega; {{\mathbb R}}^{N-1})$ it holds $$\begin{aligned} \nonumber \| \mathcal{R}(\boldsymbol{w^1}) & - \mathcal{R}(\boldsymbol{w^2})\|_{L^2(\Omega)} \\ & \leq \delta \Big( 1 + \sum_{i=1,2} \|\overline{\boldsymbol{w}}^i\|_{H^1(\Omega)} \Big) + C(\delta) \sum_{i=1}^{m(\delta)} \Big| \int_\Omega \big(\mathcal{R}(\boldsymbol{w}^1) - \mathcal{R}(\boldsymbol{w}^2) \big)\cdot \phi_i dx \Big| . 
\label{Dreyer.1} \end{aligned}$$ Assume by contradiction that there exists $\delta_0>0$ such that, for every $m\in{{\mathbb N}}$, there exist $\boldsymbol{w}^{1,m}, \boldsymbol{w}^{2,m}\in K\times H^1(\Omega ; {{\mathbb R}}^{N-1})$ such that $$\begin{aligned} \|\mathcal{R}(\boldsymbol{w}^{1,m}) & - \mathcal{R}(\boldsymbol{w}^{2,m})\|_{L^2(\Omega)}\\ & > \delta_0\Big( 1 + \sum_{i=1,2} \|\overline{\boldsymbol{w}}^{i,m}\|_{H^{1}(\Omega)} \Big) + m \sum_{i=1}^{m} \Big| \int_\Omega \big(\mathcal{R}(\boldsymbol{w}^{1,m}) - \mathcal{R}(\boldsymbol{w}^{2,m}) \big)\cdot \phi_i dx \Big| .\end{aligned}$$ Since $\mathcal{R}({{\mathbb R}}_+\times {{\mathbb R}}^{N-1})$ is bounded, the sequence $(\overline{\boldsymbol{w}}^{i,m})_{m\in{{\mathbb N}}}$ is bounded in $H^1(\Omega ; {{\mathbb R}}^{N-1})$ and thus $\overline{\boldsymbol{w}}^{i,m}\rightharpoonup\overline{\boldsymbol{w}}^i$ weakly in $H^1(\Omega ; {{\mathbb R}}^{N-1})$ (as $m\to\infty$), for $i=1,2$. By a compact Sobolev embedding it holds that $\overline{\boldsymbol{w}}^{i,m}\to \overline{\boldsymbol{w}}^{i}$ strongly in $L^2(\Omega)$ and a.e. in $\Omega$ (up to a subsequence), for $i=1,2$. Moreover, the compactness of $K$ implies that $\boldsymbol{w}_0^{i,m}\to \boldsymbol{w}_0^i$ strongly in $L^2(\Omega)$ (up to a subsequence), for $i=1,2$. Therefore, $\boldsymbol{w}^{i,m}\to \boldsymbol{w}^{i}$ strongly in $L^2(\Omega ; {{\mathbb R}}^N)$ and a.e. in $\Omega$. It follows that $\mathcal{R}(\boldsymbol{w}^{i,m})\to \mathcal{R}(\boldsymbol{w}^i)$ strongly in $L^2(\Omega ; {{\mathbb R}}^N)$.
On the other hand, $$\begin{aligned} &\sum_{i=1}^{m} \Big| \int_\Omega \big(\mathcal{R}(\boldsymbol{w}^{1,m}) - \mathcal{R}(\boldsymbol{w}^{2,m}) \big)\cdot \phi_i dx \Big|\\ & \qquad \leq \frac{1}{m}\|\mathcal{R}(\boldsymbol{w}^{1,m}) - \mathcal{R}(\boldsymbol{w}^{2,m})\|_{L^2(\Omega)}\leq \frac{C}{m}\to 0\quad (m\to\infty),\end{aligned}$$ and so $$\begin{aligned} \int_\Omega \big(\mathcal{R}(\boldsymbol{w}^{1}) - \mathcal{R}(\boldsymbol{w}^{2}) \big)\cdot \phi_i dx = 0\quad\forall i\in{{\mathbb N}}.\end{aligned}$$ Since $(\phi_i)_{i\in{{\mathbb N}}}$ is dense in $L^2(\Omega;{{\mathbb R}}^N)$, this implies that $\mathcal{R}(\boldsymbol{w}^1) = \mathcal{R}(\boldsymbol{w}^2)$. But $$\begin{aligned} \|\mathcal{R}(\boldsymbol{w}^1)-\mathcal{R}(\boldsymbol{w}^2)\|_{L^2(\Omega)} = \lim_{m\to\infty}\|\mathcal{R}(\boldsymbol{w}^{1,m}) - \mathcal{R}(\boldsymbol{w}^{2,m}) \|_{L^2(\Omega)} \geq\delta_0 > 0,\end{aligned}$$ which is a contradiction. This finishes the proof. We recall the following remark, see [@Drey17]. For completeness and clarity, we give a full proof. \[remark.14\] If a subset $\{ u_{\eps} \}_{\eps\in (0,1]}$ of $C([0,T];L^2(\Omega))$ is relatively compact in $C([0,T];L^2(\Omega))$, then the set $\mathcal F \equiv \cup_{\eps\in (0,1]} \cup_{t\in[0,T]}\{u_\eps(t)\}$ is relatively compact in $L^2(\Omega)$. In this case, given any $f\in C^0({{\mathbb R}})$, the set $\mathcal F_f \equiv \cup_{\eps\in (0,1]} \cup_{t\in[0,T]}\{f(u_\eps(t))\}$ is relatively compact in $L^2(\Omega)$. Let $(u_{\eps_n}(t_n))_{n\in{{\mathbb N}}}$ be an arbitrary sequence of points of $\mathcal F$. The sequence $(u_{\eps_n})_{n\in{{\mathbb N}}}\subset C([0,T]; L^2(\Omega))$ is relatively compact in $C([0,T]; L^2(\Omega))$, therefore it is convergent up to a subsequence. Moreover, the sequence $(t_n)_{n\in{{\mathbb N}}}\subset [0,T]$ is convergent up to a subsequence, so w.l.o.g. we can write $u_{\eps_n}\to u$ strongly in $C([0,T]; L^2(\Omega))$ and $t_n\to t\in [0,T]$. 
It follows that $$\begin{aligned} \|u_{\eps_n}(t_n) - u(t)\|_{L^2(\Omega)} &\leq \|u_{\eps_n}(t_n) - u(t_n)\|_{L^2(\Omega)} + \|u(t_n) - u(t)\|_{L^2(\Omega)} \\ &\leq \|u_{\eps_n} - u\|_{C([0,T]; L^2(\Omega))} + \|u(t_n) - u(t)\|_{L^2(\Omega)} \quad\substack{\phantom{aaa} \\ \longrightarrow\\ n\to\infty}\quad 0 .\end{aligned}$$ Therefore $\mathcal{F}$ is relatively compact in $L^2(\Omega)$. In this case, given any $f\in C^0({{\mathbb R}})$, the relative compactness of $\mathcal{F}_f$ in $L^2(\Omega)$ is straightforward. This finishes the proof. Our main compactness tool is given in the following lemma (see Corollary 3.7 in [@Drey17]). \[lemma.15\] For $n \in {{\mathbb N}}$, let $w^n:[0,T] \to L^2(\Omega;{{\mathbb R}}_+\times {{\mathbb R}}^{N-1})$ be continuous. Assume that $K := \{ w_0^n(t)\in L^2(\Omega) : n\in{{\mathbb N}}, t \in [0,T] \}$ is relatively compact in $L^2(\Omega)$, and that $\overline{w}^n$ is bounded in $L^1((0,T); H^1(\Omega))$. Furthermore, let $\mathcal{R} : {{\mathbb R}}_+ \times {{\mathbb R}}^{N-1} \to {{\mathbb R}}_+^N$ be continuous and bounded. Then, $\mathcal{R}(w^n)$ is (up to a subsequence) strongly convergent in $L^1(\Omega\times (0,T))$. \[Dreyer.2\] Apply Lemma \[lemma.13\]. 
For every $\delta>0$ there exist $C(\delta)>0$, $m(\delta)\in{{\mathbb N}}$ such that, for every $n,n'\in{{\mathbb N}}$ it holds that $$\begin{aligned} &\|\mathcal{R}(w^n(t)) - \mathcal{R}(w^{n'}(t))\|_{L^2(\Omega)} \leq \delta(1 + \|\overline{w}^n(t)\|_{H^1} + \|\overline{w}^{n'}(t)\|_{H^1})\\ &\qquad +C(\delta)\sum_{i=1}^{m(\delta)}\left| \int_{\Omega}\left(\mathcal{R}(w^n(t)) - \mathcal{R}(w^{n'}(t))\right)\cdot\phi_i dx\right| .\end{aligned}$$ By integrating the above estimate in time and exploiting the boundedness of $\overline{w}^n$ in $L^1((0,T); H^1(\Omega))$, we deduce $$\begin{aligned} &\int_0^T\|\mathcal{R}(w^n(t)) - \mathcal{R}(w^{n'}(t))\|_{L^2(\Omega)}dt\\ &\leq \delta C +C(\delta)\sum_{i=1}^{m(\delta)}\int_0^T\left| \int_{\Omega}\left(\mathcal{R}(w^n(t)) - \mathcal{R}(w^{n'}(t))\right)\cdot\phi_i dx\right|dt .\end{aligned}$$ The boundedness of the mapping $\mathcal{R}$ implies that, up to subsequences, $\mathcal{R}(w^n(t))$ is weakly convergent in $L^2(\Omega)$ for a.e. $t\in [0,T]$, and so $$\begin{aligned} \int_{\Omega}\left(\mathcal{R}(w^n(t)) - \mathcal{R}(w^{n'}(t))\right)\cdot\phi_i dx\to 0\quad\mbox{as }n, n'\to\infty ,~~\mbox{a.e. }t\in [0,T], \quad i\in{{\mathbb N}}.\end{aligned}$$ Moreover, $$\begin{aligned} \left| \int_{\Omega}\left(\mathcal{R}(w^n(t)) - \mathcal{R}(w^{n'}(t))\right)\cdot\phi_i dx\right|\leq C\|\phi_i\|_{L^2(\Omega)},~~ \mbox{a.e. 
}t\in [0,T],~~ i\in{{\mathbb N}}.\end{aligned}$$ The dominated convergence theorem yields $$\begin{aligned} \int_0^T\left| \int_{\Omega}\left(\mathcal{R}(w^n(t)) - \mathcal{R}(w^{n'}(t))\right)\cdot\phi_i dx\right|dt\to 0\quad\mbox{as }n, n'\to\infty, ~~i\in{{\mathbb N}}.\end{aligned}$$ It follows that $\nu\in{{\mathbb N}}$ exists such that, for $n,n'\geq\nu$, $$\begin{aligned} \int_0^T\left| \int_{\Omega}\left(\mathcal{R}(w^n(t)) - \mathcal{R}(w^{n'}(t))\right)\cdot\phi_i dx\right|dt\leq\frac{\delta}{m(\delta)C(\delta)}, ~~1\leq i\leq m(\delta) .\end{aligned}$$ As a consequence, it holds that $$\begin{aligned} \int_0^T\|\mathcal{R}(w^n(t)) - \mathcal{R}(w^{n'}(t))\|_{L^2(\Omega)}dt\leq \delta C, \quad n,n'\geq\nu .\end{aligned}$$ In particular, $\mathcal{R}(w^n)$ is Cauchy (and therefore convergent) in $L^1(\Omega\times (0,T))$. This finishes the proof. Existence proof {#Sec.Exis} =============== The proof is divided into several steps. [**Step 1: discretization and regularization.**]{} Fix $T>0$. For $N\in {{\mathbb N}}$ we define $\kappa = T/N$, $t_k = \kappa k$ ($k=0,\ldots,N$), $S_{i}^{0} = S_{i,0}$ $(i=1,\ldots,n)$. 
Consider the implicit Euler discretization: $$\begin{aligned} \nonumber &\mbox{given ${{\boldsymbol{w}}^{k-1}}\in H^1_0(\Omega ; {{\mathbb R}}^n)$, find ${{\boldsymbol{w}}^k}\in H^1_0(\Omega; {{\mathbb R}}^n)$ such that:}\\ & \sum_{i=1}^n \int_\Omega \frac{S_i^k - S_i^{k-1}}{\kappa}\phi_i dx = -\sum_{i=1}^n \int_\Omega \frac{S_i^k}{S^k} a(S^k) p_c'(S^k) \nabla S^k \cdot \nabla \phi_i dx \nonumber\\ & \qquad - \sum_{i=1}^n \int_\Omega \frac{S_i^k}{S^k} a(S^k) \nabla \frac{\beta(S^k) - \beta(S^{k-1}) }{\kappa} \cdot \nabla \phi_i dx \nonumber \\ & \qquad - \int_\Omega \sum_{i,j=1}^n D_{ij}(S_1^k,\ldots,S_n^k)\nabla w_j^k \cdot \nabla \phi_i dx \nonumber\\ & \qquad - \eps \sum_{i=1}^n \int_\Omega \frac{S_i^k}{S^k} \nabla w_i^k\cdot \nabla \phi_i dx, \label{DisReg.1}\end{aligned}$$ for all $\phi_1,\ldots,\phi_n \in H^1_0(\Omega)$, where ${\boldsymbol{S}}^k : \Omega\to{{\mathbb R}}^n$ is (implicitly) defined by $$w_i^k = \log \Big( \frac{S_i^k}{S^k}\Big) + \frac{\beta(S^k) - \beta(S^{k-1})}{\kappa}, \qquad i=1,\ldots,n,$$ and we denoted $S^k = \sum_{i=1}^n S_i^k$. Here we assume that $S^{k-1}\in H^{1}(\Omega)$. 
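The structure of a single implicit Euler step can be illustrated on a scalar toy problem (a sketch of the time-stepping idea only: the stiff ODE $u' = -u^3$ and a bisection solver stand in for the coupled system and for the fixed-point machinery of the next steps; all names below are ours):

```python
def implicit_euler_step(u_prev, kappa, f, lo=-10.0, hi=10.0):
    """One implicit Euler step for u' = f(u): solve g(u) = u - u_prev - kappa*f(u) = 0.
    For decreasing f (here f(u) = -u**3), g is strictly increasing, so bisection works."""
    g = lambda u: u - u_prev - kappa * f(u)
    for _ in range(100):            # the bracket [lo, hi] is halved 100 times
        mid = 0.5 * (lo + hi)
        if g(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Analogue of Step 1: T = 1, kappa = T/N, march u' = -u^3 from u(0) = 1.
T, N_steps = 1.0, 100
kappa = T / N_steps
u = 1.0
for _ in range(N_steps):
    u = implicit_euler_step(u, kappa, lambda x: -x ** 3)
# The exact solution is u(t) = (1 + 2t)**(-1/2), so u(1) is about 0.577.
```

Each step only needs the previous iterate, mirroring the role of $S^{k-1}$ in the scheme above.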
[**Step 2: linearized approximated problem.**]{} Using the fact that $$\nabla S = \sum_{\ell=1}^n\frac{\pa S}{\pa w_\ell}{(w)}\na w_\ell,$$ equation can be rewritten as $$\begin{aligned} \nonumber & \sum_{i=1}^n \int_\Omega \frac{S_i^k - S_i^{k-1}}{\kappa}\phi_i dx = - \sum_{i=1}^n \int_\Omega \frac{S_i^k}{S^k} a(S^k) p_c'(S^k) \sum_{\ell=1}^n\frac{\pa S}{\pa w_\ell}{(w^k)}\na w_\ell^k\cdot \nabla \phi_i dx \nonumber\\ & \qquad - \frac{1}{\kappa}\sum_{i=1}^n \int_\Omega \frac{S_i^k}{S^k} a(S^k) \tau(S^k)\sum_{\ell=1}^n\frac{\pa S}{\pa w_\ell}{(w^k)}\na w_\ell^k\cdot \nabla \phi_i dx \nonumber\\ & \qquad + \frac{1}{\kappa}\sum_{i=1}^n\int_\Omega \frac{S_i^k}{S^k} a(S^k) \tau(S^{k-1})\nabla S^{k-1} \cdot \nabla \phi_i dx \nonumber \\ & \qquad - \int_\Omega \sum_{i,j=1}^n D_{ij}(S_1^k,\ldots,S_n^k)\nabla w_j^k \cdot \nabla \phi_i dx \nonumber\\ &\qquad - \eps \sum_{i=1}^n \int_\Omega \frac{S_i^k}{S^k} \nabla w_i^k\cdot \nabla \phi_i dx. \label{DisLin.2}\end{aligned}$$ Now, the linearized problem has the following form: $$\begin{aligned} \nonumber &\mbox{let ${\boldsymbol{w}}^*\in L^2(\Omega;{{\mathbb R}}^n)$ and $\sigma\in [0,1]$ be given, find ${\boldsymbol{w}}\in H^1_0(\Omega;{{\mathbb R}}^n)$ such that:}\\ & \sigma\sum_{i=1}^n \int_\Omega \frac{S_i^* - S_i^{k-1}}{\kappa}\phi_i dx \nonumber\\ &\quad = - \sum_{i=1}^n \int_\Omega \frac{S_i^*}{S^*} a(S^*) p_c'(S^*) \sum_{\ell=1}^n\frac{\pa S}{\pa w_\ell}(w^*)\na w_\ell\cdot \nabla \phi_i dx \nonumber\\ & \qquad - \frac{1}{\kappa}\sum_{i=1}^n \int_\Omega \frac{S_i^*}{S^*} a(S^*) \tau(S^*)\sum_{\ell=1}^n\frac{\pa S}{\pa w_\ell}(w^*)\na w_\ell\cdot \nabla \phi_i dx \nonumber\\ & \qquad + \frac{\sigma}{\kappa}\sum_{i=1}^n\int_\Omega \frac{S_i^*}{S^*} a(S^*) \tau(S^{k-1})\nabla S^{k-1} \cdot {\nabla \phi_i }dx \nonumber \\ & \qquad - \int_\Omega \sum_{i,j=1}^n D_{ij}(S_1^*,\ldots,S_n^*)\nabla w_j \cdot \nabla \phi_i dx \nonumber\\ &\qquad - \eps \sum_{i=1}^n \int_\Omega \frac{S_i^*}{S^*} \nabla w_i\cdot \nabla \phi_i dx, \label{DisRegLin.1}\end{aligned}$$ for 
all $\phi_i \in H_0^1(\Omega)$, where $S_i^*$ is defined by $$w_i^* = \log \Big( \frac{S_i^*}{S^*}\Big) + \frac{\beta(S^*) - \beta(S^{k-1})}{\kappa}, \qquad i=1,\ldots,n,$$ and we denoted $S^* = \sum_{i=1}^n S_i^*$. The above problem can be summarized as $$\begin{aligned} a({\boldsymbol{w}},{\boldsymbol{\phi}}) = \sigma F({\boldsymbol{\phi}}),\quad \forall {\boldsymbol{\phi}}\in H_0^1(\Omega;{{\mathbb R}}^n), \label{LaxMil_Eq} \end{aligned}$$ where $$\begin{aligned} \nonumber a({\boldsymbol{w}},{\boldsymbol{\phi}}) & = \sum_{i,\ell=1}^n \int_\Omega \frac{S_i^*}{S^*}\frac{\pa S}{\pa w_\ell}{(w^*)} a(S^*)\Big[ p_c'(S^*) + \frac{\tau(S^*)}{\kappa} \Big] \nabla w_\ell \cdot \nabla \phi_i dx \\ & + \sum_{i,j=1}^n \int_\Omega D_{ij}(S_1^*,\ldots,S_n^*) \nabla w_j \cdot \nabla \phi_i dx \nonumber \\ & {+} \eps \sum_{i=1}^n \int_\Omega \frac{S_i^*}{S^*} \nabla w_i\cdot \nabla \phi_i dx \label{bili_form}\end{aligned}$$ $$\begin{aligned} F({\boldsymbol{\phi}}) & = - \sum_{i=1}^n \int_\Omega \frac{S_i^* - S_i^{k-1}}{\kappa}\phi_i dx + \frac{1}{\kappa}\sum_{i=1}^n \int_\Omega \frac{S_i^*}{S^*} a(S^*) \tau(S^{k-1})\nabla S^{k-1} \cdot \nabla \phi_i dx. 
\label{Funct} \end{aligned}$$ It is easy to see that the functional $F$ is continuous, *i.e.* it holds $$|F({\boldsymbol{\phi}})| \leq C \|{\boldsymbol{\phi}}\|_{H^1(\Omega,{{\mathbb R}}^n)}.$$ The bilinear form can be written as: $$\begin{aligned} a({\boldsymbol{w}},{\boldsymbol{\phi}}) & = \sum_{i,j=1}^n \int_\Omega \alpha_{ij}(S_1^*,\ldots,S_n^*) \nabla {w_j} \cdot \nabla \phi_i dx + \eps \sum_{i=1}^n \int_\Omega \frac{S_i^*}{S^*} \nabla w_i\cdot \nabla \phi_i dx,\end{aligned}$$ with $$\alpha_{ij}(S_1^*,\ldots,S_n^*) = D_{ij}(S_1^*,\ldots,S_n^*) + S_i^* \frac{\pa S}{\pa {w_j}}(w^*) G(S^*),$$ where $$G(S^*) = \frac{a(S^*)}{S^*} \Big[ p_c'(S^*) + \frac{\tau(S^*)}{\kappa} \Big].$$ Thanks to Lemma \[lem.symm\] and the nonnegativity of $G(S^*)$: $$\begin{aligned} a({\boldsymbol{w}},{\boldsymbol{w}}) &\geq \sum_{i,j=1}^n \int_\Omega D_{ij}(S_1^*,\ldots,S_n^*) \nabla w_i \cdot \nabla w_j dx + \eps\sum_{i=1}^n\int_\Omega\frac{S_i^*}{S^*} |\nabla w_i|^2 dx .\end{aligned}$$ From Assumption **(H1)** we obtain $$\begin{aligned} & a({\boldsymbol{w}},{\boldsymbol{w}}) \\ &\geq \min\{D_0,\eps\}\left( \sum_{i,j=1}^n \int_\Omega \left(\delta_{ij}-\frac{1}{n}\right) \nabla w_i \cdot \nabla w_j dx + \sum_{i=1}^n \int_\Omega \left|\sqrt{\frac{S_i^*}{S^*}}\nabla w_i\right|^2 dx \right).\end{aligned}$$ Now we apply Lemma \[lem.JMZ\] and deduce $$\begin{aligned} \nonumber a({\boldsymbol{w}},{\boldsymbol{w}}) & \geq \frac{\min(D_0,\eps)}{4 n} \int_\Omega \Big( \sum_{i=1}^n \sqrt{\frac{S_i^*}{S^*}} \Big)^2 |\na {\boldsymbol{w}}|^2 dx . 
\nonumber \end{aligned}$$ Next, since $\Big( \sum_{i=1}^n \sqrt{\frac{S_i^*}{S^*}} \Big)^2 \geq 1$ (indeed, $\sqrt{x}\geq x$ for $x\in [0,1]$ and $\sum_{i=1}^n S_i^*/S^* = 1$), we conclude that the bilinear form $a({\boldsymbol{w}}, {\boldsymbol{w}})$ is coercive in $H^1_0(\Omega)$, *i.e.* $$\begin{aligned} \nonumber a({\boldsymbol{w}}, {\boldsymbol{w}}) &\geq\sum_{i,j=1}^n\int_\Omega D_{ij}(S_1^*,\ldots,S_n^*)\nabla w_i\cdot \nabla w_j dx + \eps\sum_{i=1}^n\int_\Omega\frac{S_i^*}{S^*}|\nabla w_i|^2 dx\\ \label{lin.lb} &\geq C(\eps) \|\nabla {\boldsymbol{w}}\|_{L^2(\Omega,{{\mathbb R}}^n)}^2 \geq C(\eps)\|{\boldsymbol{w}}\|_{H^1(\Omega,{{\mathbb R}}^n)}^2 ,\end{aligned}$$ the last inequality being a consequence of the Poincaré inequality. Therefore, by the Lax-Milgram lemma, we deduce the existence of a unique solution ${\boldsymbol{w}}\in H^1_0(\Omega;{{\mathbb R}}^n)$ to . \[Bound-nabla-mu\] We note that from the coercivity of the bilinear form $a({\boldsymbol{w}},{\boldsymbol{w}})$ it directly follows that the solution ${\boldsymbol{w}}\in H^1_0(\Omega)$ to the linearized problem satisfies $\| \nabla {\boldsymbol{w}}\|_{L^2(\Omega, {{\mathbb R}}^n)} \leq C(\eps)$. [**Step 3: solution of the nonlinear approximated problem.**]{} We reformulate as a fixed-point problem for a suitable operator and we solve it via the Leray-Schauder fixed-point theorem. [**Step 2**]{} allows us to define an operator $T: L^2(\Omega;{{\mathbb R}}^n)\times [0,1] \to L^2(\Omega;{{\mathbb R}}^n)$ in the following way: for ${\boldsymbol{w}}^{*} \in L^2(\Omega;{{\mathbb R}}^n)$, $\sigma \in [0,1]$, ${\boldsymbol{w}}= T({\boldsymbol{w}}^{*},\sigma) \in H^1_0(\Omega;{{\mathbb R}}^n)$ is the solution to . In a standard way we can show that the mapping $T$ is continuous. Moreover, $ T: L^2(\Omega;{{\mathbb R}}^n)\times [0,1] \to L^2(\Omega;{{\mathbb R}}^n)$ is compact due to the compact Sobolev embedding $H^1(\Omega;{{\mathbb R}}^n) \hookrightarrow L^2(\Omega;{{\mathbb R}}^n)$. Furthermore, it holds that $T(\cdot,0)\equiv 0$. 
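In finite dimensions the Lax-Milgram argument reduces to solving a symmetric positive definite linear system, and coercivity is exactly what makes iterative solvers such as conjugate gradients converge. A minimal sketch (our own illustration: the tridiagonal 1-D operator $-\Delta + \eps I$ with Dirichlet boundary stands in for the coercive form $a(\cdot,\cdot)$):

```python
def matvec(eps, x):
    """Apply the tridiagonal matrix of the 1-D operator -Delta + eps*I
    (Dirichlet boundary, unit grid spacing): a coercive, symmetric analogue of a(.,.)."""
    n = len(x)
    y = []
    for i in range(n):
        v = (2.0 + eps) * x[i]
        if i > 0:
            v -= x[i - 1]
        if i < n - 1:
            v -= x[i + 1]
        y.append(v)
    return y

def solve_cg(eps, f, tol=1e-12, maxit=1000):
    """Conjugate gradients for A w = f; convergence relies on A being SPD,
    i.e. on the discrete coercivity of the form."""
    x = [0.0] * len(f)
    r = list(f)                 # residual f - A x with x = 0
    p = list(r)
    rs = sum(ri * ri for ri in r)
    for _ in range(maxit):
        ap = matvec(eps, p)
        alpha = rs / sum(pi * api for pi, api in zip(p, ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol * tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

f = [1.0] * 20                  # a constant right-hand side, playing the role of sigma*F
w = solve_cg(0.1, f)
residual = max(abs(a - b) for a, b in zip(matvec(0.1, w), f))
```

Coercivity guarantees that the denominator $p^\top A p$ stays positive, so the iteration is well defined and terminates with a small residual.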
It remains to prove a uniform bound (with respect to $\sigma$) for all fixed points of $T(\cdot,\sigma)$ in $L^2(\Omega,{{\mathbb R}}^n)$. Let ${\boldsymbol{w}}\in L^2(\Omega,{{\mathbb R}}^n)$ be such a fixed point. Then ${\boldsymbol{w}}$ solves the linearized problem with the test function replaced by ${\boldsymbol{w}}$. We have $$\begin{aligned} C(\eps) \|{\boldsymbol{w}}\|_{H^1(\Omega,{{\mathbb R}}^n)}^2 \leq a({\boldsymbol{w}},{\boldsymbol{w}}) = \sigma F({\boldsymbol{w}}) \leq C \|{\boldsymbol{w}}\|_{L^2(\Omega,{{\mathbb R}}^n)},\end{aligned}$$ yielding an $H^1$ bound for ${\boldsymbol{w}}$, uniform in $\sigma$. Thanks to the Leray-Schauder fixed-point theorem we get the existence of a solution ${\boldsymbol{w}}\in H^1_0(\Omega;{{\mathbb R}}^n)$ to for $\sigma = 1$, that is, a solution of the nonlinear discrete problem. [**Step 4: uniform in $\kappa$ a-priori estimates.**]{} Let us choose $$\phi = {\boldsymbol{w}}^k = {\boldsymbol{\mu}}^k - {\boldsymbol{\mu}}({\boldsymbol{S}}^\Gamma) +\Big( \frac{\beta(S^k)-\beta(S^{k-1})}{\kappa} \Big) {\mathbf{1}}$$ in . Since $\mu_i^k = \partial_i {{\mathcal F}}({\boldsymbol{S}}^k)$ and ${{\mathcal F}}({\boldsymbol{S}})$ is convex, it follows that $$\sum_{i=1}^n (S_i^k - S_i^{k-1})(\mu_i^k - \mu_i({\boldsymbol{S}}^\Gamma)) \geq \tilde{{\mathcal F}}({\boldsymbol{S}}^k) - \tilde{{\mathcal F}}({\boldsymbol{S}}^{k-1}),$$ where $\tilde{{\mathcal F}}$ is the relative entropy density defined in . Moreover, the nonnegativity and boundedness of $\beta'$ allow us to write $$(S^k - S^{k-1})(\beta(S^k) - \beta(S^{k-1})) \geq C( \beta(S^k) - \beta(S^{k-1}) )^2,$$ where ${\displaystyle}C = \frac{1}{\max_{0 \leq S \leq 1} \beta'(S)}$. 
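The entropy estimates of Step 4 rest on a few elementary pointwise inequalities: the mean-value bound just stated, the square-completion bound $(a-b)a \geq \frac{1}{2}(a^2 - b^2)$, and Young's inequality $xy \leq \frac{3}{4}x^2 + \frac{1}{3}y^2$ (valid since $4\cdot\frac{3}{4}\cdot\frac{1}{3} = 1$), both used below. A numerical sanity check, with `beta_demo` an illustrative stand-in for $\beta$ (not the $\beta$ of the model):

```python
import math
import random

def beta_demo(s):
    """Illustrative stand-in for beta: beta_demo'(s) = 1 + 0.5*cos(s) lies in [0.5, 1.5]."""
    return s + 0.5 * math.sin(s)

MAX_DBETA = 1.5   # an upper bound for beta_demo'

random.seed(2)
for _ in range(1000):
    a, b = random.random(), random.random()
    d = beta_demo(a) - beta_demo(b)
    # mean-value bound: (a - b)(beta(a) - beta(b)) >= (beta(a) - beta(b))^2 / max beta'
    assert (a - b) * d >= d * d / MAX_DBETA - 1e-12
    # square completion: (a - b)*a - (a^2 - b^2)/2 = (a - b)^2/2 >= 0
    assert (a - b) * a >= 0.5 * (a * a - b * b) - 1e-12
    # Young with constants 3/4 and 1/3 (4ab = 1 is the borderline case)
    x, y = random.gauss(0.0, 1.0), random.gauss(0.0, 1.0)
    assert x * y <= 0.75 * x * x + y * y / 3.0 + 1e-12
```

These checks pass for any choice of `beta_demo` with a positive, bounded derivative, which is all the argument uses.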
In this way we obtain $$\begin{aligned} \nonumber &\frac{1}{\kappa}\int_\Omega \tilde{{\mathcal F}}({\boldsymbol{S}}^k) dx + C \int_\Omega\left(\frac{\beta(S^k)-\beta(S^{k-1})}{\kappa}\right)^2 dx\\ \nonumber &\qquad + \sum_{i=1}^n \int_\Omega \frac{S_i^k}{S^k}a(S^k) p_c'(S^k) \nabla S^k \cdot \nabla w_i^k dx \nonumber\\ & \qquad + \frac{1}{\kappa} \sum_{i=1}^n \int_\Omega \frac{S_i^k}{S^k} a(S^k) \big( \nabla \beta(S^k) - \nabla \beta(S^{k-1}) \big) \cdot \nabla w_i^k dx\nonumber \\ & \qquad + \sum_{i,j=1}^n \int_\Omega D_{ij}(S_1^k,\ldots,S_n^k)\nabla w_j^k \cdot \nabla w_i^k dx + {\eps \sum_{i=1}^n \int_{\Omega}\frac{S_i^k}{S^k} |\nabla w_i^k|^2 dx} \nonumber \\ & \qquad \qquad \leq \frac{1}{\kappa} \int_{\Omega} \tilde{{\mathcal F}}({\boldsymbol{S}}^{k-1}) dx. \label{TF1-1}\end{aligned}$$ Taking into account the previous estimate and Assumption **(H1)**, one gets $$\begin{aligned} \nonumber &\frac{1}{\kappa}\int_\Omega \tilde{{\mathcal F}}({\boldsymbol{S}}^k) dx + C \int_\Omega\left(\frac{\beta(S^k)-\beta(S^{k-1})}{\kappa}\right)^2 dx\\ \nonumber &\qquad + \sum_{i=1}^n \int_\Omega \frac{S_i^k}{S^k}a(S^k) p_c'(S^k) \nabla S^k \cdot \nabla w_i^k dx \nonumber\\ & \qquad + \frac{1}{\kappa} \sum_{i=1}^n \int_\Omega \frac{S_i^k}{S^k} a(S^k) \big( \nabla \beta(S^k) - \nabla \beta(S^{k-1}) \big) \cdot \nabla w_i^k dx\nonumber \\ &\qquad + C\|\Pi\na {{\boldsymbol{w}}^k}\|_{L^2(\Omega)}^2 \nonumber\\ & \qquad + C\eps\|{\boldsymbol{w}}^k\|_{H^1(\Omega)}^2 \leq \frac{1}{\kappa} \int_\Omega \tilde{{\mathcal F}}({\boldsymbol{S}}^{k-1}) dx. \label{TF1-2}\end{aligned}$$ Using the relation, we obtain $$\begin{aligned} \sum_{i=1}^n \int_\Omega \frac{S_i^k}{S^k}a(S^k) & p_c'(S^k) \nabla S^k \cdot \nabla \mu_i^k dx = \int_\Omega p_c'(S^k) \beta'(S^k) |\nabla S^k|^2 dx. 
\label{TF1-3}\end{aligned}$$ In this way we get: $$\begin{aligned} \nonumber \frac{1}{\kappa} \int_\Omega \tilde{{\mathcal F}}({\boldsymbol{S}}^k) dx & + C \int_\Omega\left(\frac{\beta(S^k)-\beta(S^{k-1})}{\kappa}\right)^2 dx + \int_\Omega p_c'(S^k) \beta'(S^k) |\nabla S^k|^2 dx \nonumber\\ & + \frac{1}{\kappa}\int_\Omega a(S^k) p_c'(S^k) \nabla S^k \cdot \big( \nabla\beta(S^k)- \nabla \beta(S^{k-1}) \big) dx \nonumber\\ & + \frac{1}{\kappa}\int_\Omega \big( \nabla\beta(S^k)- \nabla \beta(S^{k-1}) \big) \cdot \nabla \beta(S^k) \, dx \nonumber \\ & + \frac{1}{\kappa^2}\int_\Omega a(S^k) | \nabla \beta(S^k) - \nabla \beta(S^{k-1}) |^2 dx \nonumber \\ & + C\eps \sum_{i=1}^n \|w_i^k\|^2_{H^1(\Omega)} + C\sum_{i=1}^n\|\Pi\na \mu_i^k\|_{L^2(\Omega)}^2 \leq \frac{1}{\kappa} \int_\Omega \tilde{{\mathcal F}}({\boldsymbol{S}}^{k-1}) dx . \label{TF1-4}\end{aligned}$$ Next, using the fact that $ (a-b)a \geq \frac{1}{2} (a^2 - b^2),$ we obtain $$\begin{aligned} \int_\Omega \nabla \Big( \frac{\beta(S^k) - \beta(S^{k-1}) }{\kappa} \Big) \cdot \nabla \beta(S^k) dx \geq \frac{1}{2\kappa} \int_\Omega \Big( |\nabla \beta(S^k)|^2 - |\nabla \beta(S^{k-1})|^2 \Big) dx .\end{aligned}$$ We have: $$\begin{aligned} \nonumber &\frac{1}{\kappa} \int_\Omega \Big(\tilde{{\mathcal F}}({\boldsymbol{S}}^k) + \frac{1}{2} |\nabla \beta(S^k)|^2 \Big) dx + C \int_\Omega\left(\frac{\beta(S^k)-\beta(S^{k-1})}{\kappa}\right)^2 dx\\ & + \int_\Omega p_c'(S^k) \beta'(S^k) |\nabla S^k|^2 dx \nonumber\\ & + \frac{1}{\kappa} \int_\Omega a(S^k) p_c'(S^k) \nabla S^k \cdot \Big( \nabla\beta(S^k)-\nabla\beta(S^{k-1}) \Big) dx \nonumber \\ & + \frac{1}{\kappa^2} \int_\Omega a(S^k) \Big|\nabla\beta(S^k) - \nabla\beta(S^{k-1}) \Big|^2 dx \nonumber \\ & + C\eps \sum_{i=1}^n \|w_i^k\|^2_{H^1(\Omega)} + C \sum_{i=1}^n\|\Pi\na \mu_i^k\|_{L^2(\Omega)}^2\nonumber \\ &\qquad \leq \frac{1}{\kappa} \int_\Omega \Big( \tilde{{\mathcal F}}({\boldsymbol{S}}^{k-1}) + \frac{1}{2}|\nabla \beta(S^{k-1})|^2 \Big) dx. 
\label{TF1-5}\end{aligned}$$ Next, Young's inequality gives: $$\begin{aligned} \int_\Omega a(S^k) & p_c'(S^k) \nabla S^k \cdot \nabla\frac{\beta(S^k)-\beta(S^{k-1})}{\kappa} dx \nonumber\\ & \leq \frac{3}{4} \int_\Omega a(S^k) \Big| \nabla \frac{\beta(S^k) - \beta(S^{k-1})}{\kappa} \Big|^2 dx \\ &\qquad + \frac{1}{3} \int_\Omega a(S^k) (p_c'(S^k))^2 |\nabla S^k|^2 dx .\end{aligned}$$ In this way we get: $$\begin{aligned} \nonumber \frac{1}{\kappa} \int_\Omega \Big( & \tilde{{\mathcal F}}({\boldsymbol{S}}^k) + \frac{1}{2} |\nabla \beta(S^k)|^2 \Big) dx + C \int_\Omega\left(\frac{\beta(S^k)-\beta(S^{k-1})}{\kappa}\right)^2 dx \\ & + \int_\Omega p_c'(S^k) \beta'(S^k) |\nabla S^k|^2 dx + \frac{1}{4\kappa^2} \int_\Omega a(S^k) \Big|\nabla\beta(S^k) - \nabla\beta(S^{k-1}) \Big|^2 dx \nonumber \\ & + C\eps \sum_{i=1}^n \|w_i^k\|^2_{H^1(\Omega)} + C\sum_{i=1}^n\|\Pi\na \mu_i^k\|_{L^2(\Omega)}^2\nonumber \\ & \leq \frac{1}{\kappa} \int_\Omega \Big( \tilde{{\mathcal F}}({\boldsymbol{S}}^{k-1}) + \frac{1}{2}|\nabla \beta(S^{k-1})|^2 \Big) dx + \frac{1}{3} \int_\Omega a(S^k) (p_c'(S^k))^2 |\nabla S^k|^2 dx. \label{TF1-6}\end{aligned}$$ Thanks to Lemma \[lemma:coeff:1\], we can estimate the second integral on the right-hand side of by means of the third integral of the left-hand side of . In this way we get: $$\begin{aligned} \nonumber \frac{1}{\kappa} \int_\Omega & \Big(\tilde{{\mathcal F}}({\boldsymbol{S}}^k) + \frac{1}{2} |\nabla \beta(S^k)|^2 \Big) dx + C \int_\Omega\left(\frac{\beta(S^k)-\beta(S^{k-1})}{\kappa}\right)^2 dx \\ & + \frac{2}{3}\int_\Omega p_c'(S^k) \beta'(S^k) |\nabla S^k|^2 dx + \frac{1}{4\kappa^2} \int_\Omega a(S^k) \Big|\nabla\beta(S^k) - \nabla\beta(S^{k-1}) \Big|^2 dx \nonumber \\ & + C\eps \sum_{i=1}^n \|w_i^k\|^2_{H^1(\Omega)} + C\sum_{i=1}^n\|\Pi\na \mu_i^k\|_{L^2(\Omega)}^2\nonumber \\ & \leq \frac{1}{\kappa} \int_\Omega \Big(\tilde{{\mathcal F}}({\boldsymbol{S}}^{k-1}) + \frac{1}{2}|\nabla \beta(S^{k-1})|^2 \Big) dx. 
\label{TF1-8}\end{aligned}$$ Let us now introduce some new notation. Let us define the piecewise constant-in-time functions: $$\begin{aligned} {S^{(\kappa)}}_i(t) & = S^0_i\,\chi_{\{0\}}(t) + \sum_{j=1}^N S_i^j\,\chi_{(t_{j-1},t_j]}(t), \\ {\mu^{(\kappa)}}_i(t) &= \mu^0_i\,\chi_{\{0\}}(t) + \sum_{j=1}^N \mu_i^j\,\chi_{(t_{j-1},t_j]}(t), \end{aligned}$$ and let ${S^{(\kappa)}}= \sum_{i=1}^n {S^{(\kappa)}}_i$. We also define the discrete backward time derivative operator $D_\kappa$ as follows: for every function $f : Q_T\to {{\mathbb R}}$, $$D_\kappa f(x,t) = \frac{f(x,t)-f(x,t-\kappa)}{\kappa}\qquad x\in\Omega,\quad t\in [\kappa, T] .$$ The discretized-regularized system can be rewritten, in the new notation, as $$\begin{aligned} \nonumber \sum_{i=1}^n \int_0^T\int_\Omega \Big[ \big( D_\kappa {S^{(\kappa)}}_i \big)\phi_i &+ \frac{{S^{(\kappa)}}_i}{{S^{(\kappa)}}} \frac{a({S^{(\kappa)}})}{\tau({S^{(\kappa)}})} p_c'({S^{(\kappa)}})\nabla \beta({S^{(\kappa)}}) \cdot \nabla \phi_i \\ & + \frac{{S^{(\kappa)}}_i}{{S^{(\kappa)}}} a({S^{(\kappa)}}) \nabla D_\kappa \beta({S^{(\kappa)}}) \cdot \nabla \phi_i \Big] dx dt \nonumber\\ & + \int_0^T \int_\Omega \sum_{i,j=1}^n D_{ij}({S^{(\kappa)}}_1,\ldots,{S^{(\kappa)}}_n)\nabla \mu_j^{(\kappa)} \cdot \nabla \phi_i dx dt \nonumber\\ & + \eps \int_0^T {\int_\Omega}\sum_{i=1}^n \frac{{S^{(\kappa)}}_i}{{S^{(\kappa)}}} \nabla {w^{(\kappa)}}_i \cdot \nabla \phi_i dx dt = 0,\nonumber\\ &\forall \phi_1,\ldots,\phi_n\in L^2(0,T; H^1_0(\Omega)). 
\label{DisReg.tau.2.1} \end{aligned}$$ In the new notation, the entropy inequality reads as $$\begin{aligned} \nonumber &\sup_{t\in [0,T]}\int_\Omega (\tilde{{\mathcal F}}({S^{(\kappa)}})+ \frac{1}{2}|\na\beta({S^{(\kappa)}})|^2) dx \\ & + C \int_0^T\int_\Omega\left(D_\kappa\beta({S^{(\kappa)}})\right)^2 dx dt + \frac{2}{3}\int_0^T\int_\Omega \tau({S^{(\kappa)}}) p_c'({S^{(\kappa)}})|\nabla {S^{(\kappa)}}|^2 dx dt \nonumber\\ & + \frac{1}{4}\int_0^T\int_\Omega a({S^{(\kappa)}})\left|\nabla D_\kappa\beta({S^{(\kappa)}})\right|^2 dx dt + C\eps \sum_{i=1}^n \int_0^T\|{w^{(\kappa)}}_i\|^2_{H^1(\Omega)} dt\nonumber \\ & + C\int_0^T\|\Pi\na{\mu^{(\kappa)}}\|_{L^2(\Omega)}^2 dt \leq \int_\Omega (\tilde{{\mathcal F}}({\boldsymbol{S}}^{0}) + \frac{1}{2}|\na\beta(S^{0})|^2) dx . \label{dei}\end{aligned}$$ By using the lower bound ${\displaystyle}\sum_{i=1}^n S_i \log \frac{S_i}{S} \geq -m n S \geq -mn$, we obtain the following a priori estimates: \[prop.7\] There is a constant $C$, independent of $\kappa$ and $\varepsilon$, such that $$\begin{aligned} { \| \mathcal{E}({S^{(\kappa)}}) \|_{L^{\infty}(0,T;L^1(\Omega))} } & \leq C, \label{Apr.0}\\ \| \nabla \beta({S^{(\kappa)}}) \|_{L^{\infty}(0,T;L^2(\Omega))} & \leq C, \label{Apr.1}\\ \| D_\kappa \beta({S^{(\kappa)}}) \|_{L^{2}(0,T;L^2(\Omega))} & \leq C, \label{Apr.2}\\ \| \sqrt{\tau({S^{(\kappa)}}) p_c'({S^{(\kappa)}})} \nabla {S^{(\kappa)}}\|_{L^{2}(0,T;L^2(\Omega))} & \leq C, \label{Apr.3}\\ \| \sqrt{a({S^{(\kappa)}})} \nabla D_\kappa \beta({S^{(\kappa)}}) \|_{L^{2}(0,T;L^2(\Omega))} & \leq C, \label{Apr.4}\\ \| \sqrt{\varepsilon} w_i^{\kappa} \|_{L^{2}(0,T;H^1(\Omega))} & \leq C, \label{Apr.5}\\ \| \nabla(\Pi {\boldsymbol{\mu}}^{\kappa})_i \|_{L^2(0,T;L^2(\Omega))} &\leq C, \label{Apr.6}\end{aligned}$$ for $i=1,\ldots,n$. 
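The lower bound on the mixing entropy invoked before Proposition \[prop.7\] can be sanity-checked numerically: for the probability vector $x_i = S_i/S$ one has $\sum_i x_i \log x_i \geq -\log n \geq -n$, so the stated bound holds already with $m = 1$ (a check, not part of the proof):

```python
import math
import random

def mixing_entropy(svals):
    """sum_i S_i * log(S_i / S) with S = sum_i S_i (the term bounded below in the text)."""
    s = sum(svals)
    return sum(si * math.log(si / s) for si in svals if si > 0.0)

random.seed(4)
n = 5
for _ in range(2000):
    svals = [random.random() * 0.2 for _ in range(n)]   # keeps S = sum(svals) <= 1
    s = sum(svals)
    # entropy of the probability vector (S_i/S) is at most log n:
    assert mixing_entropy(svals) >= -s * math.log(n) - 1e-12
    # and -S*log(n) >= -n*S, i.e. the bound of the text with m = 1:
    assert -s * math.log(n) >= -n * s
```

The first assertion is Jensen's inequality for the entropy of a probability vector; the second is just $\log n \leq n$.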
By using the bound on the entropy function we obtain the following bounds: \[lemma:first-bound\] There is a constant $C$ independent of $\kappa$ and $\varepsilon$, such that $$\begin{aligned} \| ({S^{(\kappa)}})^{2-\gamma_1} \|_{L^{\infty}(0,T;L^1(\Omega))}+ \| (1-{S^{(\kappa)}})^{2-\lambda} \|_{L^{\infty}(0,T;L^1(\Omega))} & \leq C. \label{Apr.7} \end{aligned}$$ By using simple calculations, we get $$\begin{aligned} \mathcal{E}(S) &= \frac{1}{(\gamma_1-1)(\gamma_1-2)} \frac{1}{S^{\gamma_1-2}} + \frac{1}{(\lambda-1)(\lambda-2)} \frac{1}{(1-S)^{\lambda-2}} + \frac{1-S}{\lambda -1} + \frac{1}{\lambda-2}\\ &\geq C\left( \frac{1}{S^{\gamma_1-2}} + \frac{1}{(1-S)^{\lambda-2}}\right). \end{aligned}$$ The bound (\[Apr.7\]) now follows from (\[Apr.0\]). By using (\[Apr.3\]) we get the following bounds. \[lemma:second-bound\] Define the exponents $\alpha_1$ and $\alpha_2$ as follows: $$\alpha_1 = 1+(\gamma -\gamma_1 -\beta_1)/2 < 0,\quad \alpha_2 = 1-\beta_2/2 <0. \label{coeff:assum:4}$$ Then, there is a constant $C$, independent of $\kappa$ and $\varepsilon$, such that: $$\begin{aligned} \| ({S^{(\kappa)}})^{\alpha_1} \|_{L^{2}(0,T;L^6(\Omega))}+ \| (1-{S^{(\kappa)}})^{\alpha_2} \|_{L^{2}(0,T;L^6(\Omega))} & \leq C. \label{Apr.10} \end{aligned}$$ Let us denote, for notational simplicity, $S=S^{(\kappa)}$. Then, from (\[Apr.3\]) and Assumptions [**(H3)**]{}, [**(H4)**]{} we get $$\begin{aligned} \int_0^T\int_{\Omega} (S^\gamma + S^{\gamma -\gamma_1}(1-S)^\lambda )( S^{-\beta_1} + (1-S)^{-\beta_2}) |\nabla S|^2 dx dt \leq C, \end{aligned}$$ with the constant $C$ independent of $\kappa$ and $\varepsilon$. As a consequence $$\begin{aligned} \int_0^T\int_{\Omega} S^{\gamma -\gamma_1-\beta_1}(1-S)^\lambda |\nabla S|^2 dx dt + \int_0^T\int_{\Omega} S^\gamma (1-S)^{-\beta_2} |\nabla S|^2 dx dt \leq C. 
\end{aligned}$$ The inequality stated above implies the following bound for the functions $Z\equiv\min(S,1/2)$, $W\equiv\max(S,1/2)$: $$\begin{aligned} \int_0^T\int_{\Omega} Z^{\gamma -\gamma_1-\beta_1} |\nabla Z|^2 dx dt + \int_0^T\int_{\Omega} (1-W)^{-\beta_2} |\nabla W|^2 dx dt \leq C, \end{aligned}$$ which can be written as $$\begin{aligned} \int_0^T\int_{\Omega} |\nabla Z^{\alpha_1}|^2 dx dt + \int_0^T\int_{\Omega} |\nabla (1- W)^{\alpha_2}|^2 dx dt \leq C. \end{aligned}$$ The functions $Z^{\alpha_1}$ and $(1-W)^{\alpha_2}$ are in $L^2(0,T;L^2(\Omega))$ due to Assumption [**(H5)**]{} and Lemma \[lemma:first-bound\]. Indeed, Assumption [**(H5)**]{} implies that $2\alpha_1 \geq 2- \gamma_1$ and $2\alpha_2 \geq 2-\lambda$. We can then use the Sobolev embedding theorem to get the bound: $$\begin{aligned} \| Z^{\alpha_1} \|_{L^{2}(0,T;L^6(\Omega))}+ \| (1-W)^{\alpha_2} \|_{L^{2}(0,T;L^6(\Omega))} & \leq C. \end{aligned}$$ Due to (\[coeff:assum:4\]) these bounds hold also for the function $S$ instead of $Z$ and $W$. There exists $p>1$ such that $$\int_0^T \int_\Omega a(S^{(\kappa)})^{-p} dx dt \leq C, \label{Apr.11}$$ where the constant $C>0$ is independent of $\kappa$ and $\varepsilon$. \[lemma:third-bound\] To simplify the notation we will write $S=S^{(\kappa)}$. We first notice that $$\begin{aligned} a(S)^{-p} = \left[ \frac{1}{S^\gamma} + \frac{1}{(1-S)^\lambda}\right]^p \leq C \left[ \frac{1}{S^{p\gamma}} + \frac{1}{(1-S)^{p\lambda}}\right]. \end{aligned}$$ So it is sufficient to prove that $S^{-p\gamma}$ and $(1-S)^{-p\lambda}$ are uniformly bounded in $L^1(\Omega\times (0,T))$ for some $p>1$. It is clear that the integrability given by Lemma \[lemma:first-bound\] is not sufficient to prove the estimate (\[Apr.11\]). Therefore, we will combine the estimates from Lemmas \[lemma:second-bound\] and \[lemma:first-bound\] in order to obtain the integrability with the requested exponents. 
Assumptions [**(H5)**]{} on the parameters $\beta_1$, $\beta_2$, $\gamma$, $\gamma_1$ and $\lambda$ imply $$2<\beta_1\leq \gamma_1 <\gamma < \beta_1 + \gamma_1 -2, \quad 2 < \beta_2 \leq \lambda. \label{coeff:ineq:0}$$ We rewrite the expression $\int_\Omega S^{-\gamma p} dx$ using $ -\gamma p = \alpha_1 \Theta + (2-\gamma_1) \Theta_1$ and Hölder’s inequality: $$\begin{aligned} \int_\Omega S^{-\gamma p} dx & = \int_\Omega S^{\alpha_1 \Theta} \; S^{(2-\gamma_1)\Theta_1} dx \leq \Big( \int_\Omega S^{\alpha_1 \Theta p_1} dx \Big)^{\frac{1}{p_1}} \; \Big( \int_\Omega S^{(2-\gamma_1)\Theta_1 p_2 } dx \Big)^{\frac{1}{p_2}}.\end{aligned}$$ We take $p_1 = 6/\Theta$ and $p_2 = 6/(6-\Theta)$, $\Theta = 2$, $\Theta_1 = 2/3$ and we get $$\begin{aligned} \iint_{Q_T} S^{-\gamma p} dx dt &\leq \int_0^T \Big( \int_{\Omega} S^{6\alpha_1} dx \Big)^{1/3} dt \cdot \max_{0\leq t \leq T} \Big( \int_{\Omega} S^{2-\gamma_1} dx \Big)^{2/3}\\ &=\| S^{\alpha_1}\|_{L^2(0,T; L^6(\Omega))}^2 \| S^{2-\gamma_1}\|_{L^\infty(0,T; L^1(\Omega))}^{2/3}.\end{aligned}$$ Because of (\[Apr.10\]) and (\[Apr.7\]), the right-hand side is uniformly bounded. Condition $$\begin{aligned} p = - \frac{1}{\gamma} \Big( \frac{10}{3} + \gamma - \frac{5}{3} \gamma_1 - \beta_1 \Big) > 1 \end{aligned}$$ is equivalent to $$\begin{aligned} \gamma < \frac{1}{2} \beta_1 + \frac{5}{6}(\gamma_1 -2). \label{Est.n3.1}\end{aligned}$$ Now it is easy to see that (\[Est.n3.1\]) and the first inequality in (\[coeff:ineq:0\]) are together equivalent to the first inequality in Assumption [**(H5)**]{}. The second inequality in Assumption [**(H5)**]{} is treated in the same way. The calculations are given here for completeness. 
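The exponent bookkeeping above is mechanical and can be verified in exact rational arithmetic; the sketch below recomputes $p$ from the Hölder splitting, checks the equivalence with (\[Est.n3.1\]), and also checks the analogous equivalence for the $(1-S)$ factor treated next (the sample parameter values are hypothetical, chosen only to satisfy $2 < \beta_1 \leq \gamma_1 < \gamma$):

```python
from fractions import Fraction as F

THETA, THETA1 = 2, F(2, 3)   # the exponents Theta and Theta_1 of the splitting

def p_gamma(gamma, gamma1, beta1):
    """Recover p from -gamma*p = alpha1*Theta + (2 - gamma1)*Theta_1,
    with alpha1 = 1 + (gamma - gamma1 - beta1)/2."""
    alpha1 = 1 + F(1, 2) * (gamma - gamma1 - beta1)
    return -(alpha1 * THETA + (2 - gamma1) * THETA1) / gamma

def p_lambda(lam, beta2):
    """Same splitting for the (1 - S) factor, with alpha2 = 1 - beta2/2."""
    alpha2 = 1 - F(1, 2) * beta2
    return -(alpha2 * THETA + (2 - lam) * THETA1) / lam

# Hypothetical sample values with 2 < beta1 <= gamma1 < gamma:
gamma, gamma1, beta1 = F(31, 5), F(6), F(6)
p = p_gamma(gamma, gamma1, beta1)
# agrees with the closed form stated in the text:
assert p == -(F(10, 3) + gamma - F(5, 3) * gamma1 - beta1) / gamma
# and p > 1 is exactly the condition gamma < beta1/2 + (5/6)*(gamma1 - 2):
assert (p > 1) == (gamma < beta1 / 2 + F(5, 6) * (gamma1 - 2))
# the analogous equivalence p > 1 <=> lambda < 3*beta2 - 10:
for lam, beta2 in [(F(5), F(6)), (F(9), F(6)), (F(7), F(11, 2))]:
    assert (p_lambda(lam, beta2) > 1) == (lam < 3 * beta2 - 10)
```

Using `Fraction` keeps all comparisons exact, so the equivalences are tested without floating-point slack.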
We rewrite the expression $\int_\Omega (1-S)^{-\lambda p} dx$ using $ -\lambda p = \alpha_2 \Theta + (2-\lambda) \Theta_1$ and Hölder’s inequality: $$\begin{aligned} \int_\Omega (1-S)^{-\lambda p} dx & = \int_\Omega (1-S)^{\alpha_2 \Theta} \; (1-S)^{(2-\lambda)\Theta_1} dx\\ &\leq \Big( \int_\Omega (1-S)^{\alpha_2 \Theta p_1} dx \Big)^{\frac{1}{p_1}} \; \Big( \int_\Omega (1-S)^{(2-\lambda)\Theta_1 p_2 } dx \Big)^{\frac{1}{p_2}}.\end{aligned}$$ We take $p_1 = 6/\Theta$ and $p_2 = 6/(6-\Theta)$, $\Theta = 2$, $\Theta_1 = 2/3$ and obtain $$\begin{aligned} \iint_{Q_T} (1-S)^{-\lambda p} dx dt &\leq \int_0^T \Big( \int_{\Omega} (1-S)^{6\alpha_2} dx \Big)^{1/3} dt \cdot \max_{0\leq t \leq T} \Big( \int_{\Omega} (1-S)^{2-\lambda} dx \Big)^{2/3}\\ &=\| (1-S)^{\alpha_2}\|_{L^2(0,T; L^6(\Omega))}^2 \| (1-S)^{2-\lambda}\|_{L^\infty(0,T; L^1(\Omega))}^{2/3}.\end{aligned}$$ Because of (\[Apr.10\]) and (\[Apr.7\]), the right-hand side is uniformly bounded. Condition $$\begin{aligned} p = - \frac{1}{\lambda} \Big( \frac{4}{3} - \frac{2}{3} \lambda + 2 - \beta_2 \Big) > 1 \end{aligned}$$ is equivalent to $\lambda < 3\beta_2 -10$. It is now easy to see that this inequality, together with the second inequality in (\[coeff:ineq:0\]), is equivalent to the second inequality in Assumption [**(H5)**]{}. This concludes the proof of Lemma \[lemma:third-bound\]. \[prop.12\] There is an exponent $1<q<2$ such that $$\| \nabla D_\kappa \beta({S^{(\kappa)}})\|_{L^q(\Omega\times (0,T))} \leq C, \label{bound:3rdder}$$ where $C$ is a constant independent of $\kappa$ and $\varepsilon$. Let $p>1$ be as in Lemma \[lemma:third-bound\]. 
By choosing $q=2p/(1+p) \in (1,2)$, we get $$\begin{aligned} &\int_0^T \int_\Omega |\nabla D_\kappa \beta({S^{(\kappa)}})|^q dx dt \\ &= \int_0^T \int_\Omega {a({S^{(\kappa)}})}^{-q/2} {a({S^{(\kappa)}})}^{q/2}|\nabla D_\kappa \beta({S^{(\kappa)}})|^q dx dt\\ &\leq \left( \int_0^T \int_\Omega {a({S^{(\kappa)}})}^{-q/(2-q)} dx dt\right)^{(2-q)/2} \left( \int_0^T \int_\Omega {a({S^{(\kappa)}})} |\nabla D_\kappa \beta({S^{(\kappa)}})|^2 dx dt\right)^{q/2}\\ &= \left( \int_0^T \int_\Omega {a({S^{(\kappa)}})}^{-p} dx dt\right)^{(2-q)/2} \left( \int_0^T \int_\Omega {a({S^{(\kappa)}})} |\nabla D_\kappa \beta({S^{(\kappa)}})|^2 dx dt\right)^{q/2} . \end{aligned}$$ By using Lemma \[lemma:third-bound\] and the bound (\[Apr.4\]) we conclude the proof. Finally, from the discrete equation together with the bounds above, we get the following uniform bound for the discrete time derivative: $$\begin{aligned} \label{dicret.time.deriv} \|D_\kappa {S^{(\kappa)}}_i\|_{L^2(0,T; H^{-1}(\Omega))}\leq C_T.\end{aligned}$$ Passing to the limit when $\kappa \to 0$ ---------------------------------------- From the above estimates and Lemma \[lemma.3\] we get that $$\begin{aligned} \|\sqrt{\varepsilon}S_i^{(\kappa)}\|_{L^2(0,T; H^1(\Omega))} \leq C_T.\end{aligned}$$ From this and the bound on the discrete time derivative we get, by using the nonlinear version of the Aubin-Lions lemma [@CJL14], that $$\begin{aligned} S_i^{\varepsilon,\kappa} \to S_i^{\varepsilon} \quad \mbox{strongly in}~~ L^2(0,T; L^2(\Omega)).\end{aligned}$$ This strong convergence holds also in $L^q(0,T; L^q(\Omega))$ for any $q<\infty$. 
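The exponent $q = 2p/(1+p)$ used in the proof of Proposition \[prop.12\] can be checked the same way: for any $p>1$ it lies in $(1,2)$ and satisfies the conjugacy $q/(2-q) = p$ that matches the $a(S^{(\kappa)})^{-p}$ bound (a sanity check in exact arithmetic):

```python
from fractions import Fraction as F

def q_of(p):
    """The exponent q = 2p/(1 + p) chosen in the proof."""
    return 2 * p / (1 + p)

for p in [F(97, 93), F(3, 2), F(5), F(100)]:
    q = q_of(p)
    assert 1 < q < 2            # q lies in (1, 2) whenever p > 1
    assert q / (2 - q) == p     # Hölder conjugacy used for the a(S)^{-p} factor
```

The identity follows from $2 - q = 2/(1+p)$, so $q/(2-q) = p$ exactly.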
By using the bounds in Proposition \[prop.7\] and Proposition \[prop.12\], we obtain that the solution of satisfies $$\begin{aligned} \nonumber \sum_{i=1}^n \int_0^T \langle \pa_t S^{(\varepsilon)}_i, \phi_i\rangle dt &+\sum_{i=1}^n \int_0^T\int_\Omega \Big[\frac{S^{(\varepsilon)}_i}{S^{(\varepsilon)}} \frac{a(S^{(\varepsilon)})}{\tau(S^{(\varepsilon)})} p_c'(S^{(\varepsilon)})\nabla \beta(S^{(\varepsilon)}) \cdot \nabla \phi_i \\ & + \frac{S^{(\varepsilon)}_i}{S^{(\varepsilon)}} a(S^{(\varepsilon)}) \nabla D_\kappa \beta(S^{(\varepsilon)}) \cdot \nabla \phi_i \Big] dx dt \nonumber\\ & + \int_0^T \int_{\Omega} \sum_{i,j=1}^n D_{ij}(S_1^{(\varepsilon)},\ldots,S_n^{(\varepsilon)})\nabla \mu_j^{(\kappa)} \cdot \nabla \phi_i dx dt \nonumber\\ & + \eps \int_0^T \int_\Omega\sum_{i=1}^n \frac{S^{(\varepsilon)}_i}{S^{(\varepsilon)}} \nabla {w^{(\kappa)}}_i \cdot \nabla \phi_i dx dt = 0. \label{DisReg.tau.2.2} \end{aligned}$$ Thus, after taking the limit $\kappa\to 0$, holds with $S^{(\varepsilon)}$ in place of $S^{(\kappa)}$, *i.e.* $$\begin{aligned} \label{est.75new} \int_0^T \int_\Omega a(S^{(\varepsilon)})^{-p} dx dt \leq C.\end{aligned}$$ Also, estimates -, , hold with $S^{(\varepsilon)}$ in place of $S^{(\kappa)}$, *i.e.* $$\begin{aligned} { \| \mathcal{E}(S^{(\varepsilon)}) \|_{L^{\infty}(0,T;L^1(\Omega))} } & \leq C, \label{Apr.0.new}\\ \| \nabla \beta(S^{(\varepsilon)}) \|_{L^{\infty}(0,T;L^2(\Omega))} & \leq C, \label{Apr.1.new}\\ \| \pa_t \beta(S^{(\varepsilon)}) \|_{L^{2}(0,T;L^2(\Omega))} & \leq C, \label{Apr.2.new}\\ \| \sqrt{\tau(S^{(\varepsilon)}) p_c'(S^{(\varepsilon)})} \nabla S^{(\varepsilon)} \|_{L^{2}(0,T;L^2(\Omega))} & \leq C, \label{Apr.3.new}\\ \| \sqrt{a(S^{(\varepsilon)})} \nabla \pa_t \beta(S^{(\varepsilon)}) \|_{L^{2}(0,T;L^2(\Omega))} & \leq C, \label{Apr.4.new}\\ \| \sqrt{\varepsilon} w_i^{\varepsilon} \|_{L^{2}(0,T;H^1(\Omega))} & \leq C, \label{Apr.5.new}\\ \| \nabla(\Pi{\boldsymbol{\mu}}^{\varepsilon})_i \|_{L^2(0,T;L^2(\Omega))} &\leq C, 
\label{Apr.6.new}\\ \|\pa_t S_i^{(\eps)}\|_{L^2(0,T; H^{-1}(\Omega))} &\leq C, \label{9.1.19.dt}\\ \|\nabla\pa_t\beta(S^{(\eps)})\|_{L^q(\Omega\times (0,T))} &\leq C, \label{9.1.19.dd}\end{aligned}$$ for $i=1,\ldots,n$. Passing to the limit $\varepsilon \to 0$ ---------------------------------------- Now we define the continuous mapping $\mathcal{R}: {{\mathbb R}}_+\times{{\mathbb R}}^{n}\to{{\mathbb R}}^{n}$ as $$\mathcal{R}_i(w_0,\overline{w}) = w_0 \frac{e^{\overline{w}_i}}{\sum_{j=1}^n e^{\overline{w}_j}},\qquad w_0\geq 0,~~\overline{w}\in{{\mathbb R}}^{n},~~ i=1,\ldots,n.$$ It follows from that $S_i^{(\eps)} = \mathcal{R}_i( S^{(\eps)}, (\mu^*)^{(\eps)})$ for $i = 1,\ldots, n$. Lemma \[Dreyer.2\] implies that $S_i^{(\eps)}$ has a subsequence that is strongly convergent in $L^1(\Omega \times (0,T))$, for $i = 1,\ldots, n$. From the $L^\infty$ bounds for $S_i^{(\eps)}$ we conclude that, up to a subsequence, it holds that $$\begin{aligned} S_i^{(\varepsilon)} \to S_i \quad \mbox{strongly in }L^q(\Omega \times (0,T)) \quad \mbox{for all }q<\infty,~~ i = 1,\ldots, n.\end{aligned}$$ By using this convergence property as well as the bounds –, we are able to take the limit $\varepsilon \to 0$ in and obtain that ${\boldsymbol{S}}= (S_1,\ldots,S_n)$ is a weak solution to –. This finishes the proof of the theorem. Appendix {#appendix .unnumbered} ======== Derivation of the model {#derivation-of-the-model .unnumbered} ----------------------- We consider an isothermal, immiscible and incompressible two-phase flow of water and oil in a porous medium, where the oil consists of $n$ chemical components. Let us denote by $\mathcal{V}$ the representative elementary volume (REV), which consists of the solid part $\mathcal{V}_s$ and the pore space $\mathcal{V}_p$. The flow occurs in a porous domain $\mathcal{V}_p$ of volume $\Delta V_p$, where the porosity (the relative volume occupied by the pores) is denoted by $\Phi = \Delta V_p/\Delta V$. 
The saturations of the oil and water phase are given by $ S_\alpha = \Delta V_\alpha/\Delta V_p$, where $\Delta V_\alpha$ is the volume of the phase $\alpha$ with $\alpha=w,o$. Following [@BearBach90], a generalized Darcy law gives $$\begin{aligned} {\mathbf{u}}_w = -\lambda_w(S_o) k \nabla p_w,\quad {\mathbf{u}}_o = -\lambda_o(S_o) k \nabla p_o. \label{Darcy} \end{aligned}$$ Here the subscripts $w$ and $o$ correspond, respectively, to the water (wetting) and the oil (non-wetting) fluids, ${\mathbf{u}}_\alpha$ are the fluxes of the phases, $p_\alpha$ are their pressures, and $\lambda_\alpha$ are the phase mobilities. We assume that $\lambda_\alpha$ depend on the nonwetting-phase saturation $S_o$. Furthermore, $k$ is the absolute permeability of the porous medium, and the gravity effects are neglected for simplicity. The mass conservation laws for both phases have the form: $$\begin{aligned} \Phi \rho_o \frac{\partial S_o}{\partial t} + {\operatorname{div}}\big(\rho_o {\mathbf{u}}_o\big) = 0, \quad \Phi \rho_w \frac{\partial(1-S_o)}{\partial t} + {\operatorname{div}}\big(\rho_w {\mathbf{u}}_w\big) = 0, \label{eq-1-2}\end{aligned}$$ where $\Phi$ is the porosity of the medium. The model – has to be completed with the capillary pressure law which has the form $$p_o - p_w = p_c^{\textrm{dyn}},$$ where, following [@HG93], the capillary pressure–saturation relationship is given by $$\begin{aligned} p_c^{\textrm{dyn}} = p_c(S_o) + \tau(S_o) \frac{\partial S_o}{\partial t}. \label{HassGrey}\end{aligned}$$ Here, $ p_c(S_o)$ is the static capillary pressure function and $\tau(S_o)$ is the relaxation parameter. We assume that the non-wetting phase (oil) is a heterogeneous mixture of hydrocarbon compounds and we derive the mass conservation equation for each compound. 
More precisely, in the oil phase there are $n$ components whose [*mass concentrations $c_o^i$*]{}, *i.e.* the densities of the $i$-th component in the volume of the phase, are given by $ c_o^i = \Delta m_o^i/\Delta V_o$, where $\Delta m_o^i$ is the mass of the component $i$ in the oil-phase of the REV. The sum of the mass concentrations of all components is given by $$\begin{aligned} \sum_{i=1}^n c_o^i = \frac{\Delta m_o}{\Delta V_o} = \rho_o. \label{Der.1}\end{aligned}$$ Noting that $$\frac{\Delta m_o^i}{\Delta V} = \frac{\Delta V_p}{\Delta V} \frac{\Delta V_o}{\Delta V_p} \frac{\Delta m_o^i}{\Delta V_o}= \Phi S_o c_o^i,$$ the mass conservation equation for the component $i$ is given by $$\begin{aligned} \frac{\partial}{\partial t} \big( \Phi S_o c_o^i \big) + {\operatorname{div}}\big( c_o^i {\mathbf{u}}_{o,i}\big) = 0. \label{Conser.2} \end{aligned}$$ The component velocities ${\mathbf{u}}_{o,i}$ are related to the phase velocity ${\mathbf{u}}_o$ by the expression $$\begin{aligned} \rho_o {\mathbf{u}}_o = \sum_{i=1}^n c_o^i {\mathbf{u}}_{o,i}. \label{CompVel.1} \end{aligned}$$ The flux of each oil-phase component consists of the relative movement of the constituent $i$, which spreads due to random collisions between molecules of different types (diffusion), superimposed on the convective transport, *i.e.* $$\begin{aligned} c_o^i {\mathbf{u}}_{o,i} = {\mathbf{J}}_o^i + c_o^i {\mathbf{u}}_o. \label{Diff.1}\end{aligned}$$ Note that $\sum_{i=1}^n {\mathbf{J}}_o^i = {\mathbf{0}}$. Let us introduce the saturation of the component $i$ in the oil phase as $$S_o^i = \frac{\Delta V_o^i}{\Delta V_p}.$$ It is clear that $\sum_{i=1}^n S_o^i = S_o$. 
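The volume bookkeeping behind the factor $\Phi S_o c_o^i$ can be spot-checked with exact arithmetic; the volumes and masses below are arbitrary sample values, not data from the model.

```python
from fractions import Fraction as F

# Sample REV volumes and component masses (arbitrary values for illustration).
dV, dVp, dVo = F(10), F(4), F(3)      # total, pore and oil volumes
dm = [F(1), F(2)]                     # masses of two oil components

Phi, S_o = dVp / dV, dVo / dVp        # porosity and oil saturation
c = [m / dVo for m in dm]             # mass concentrations c_o^i

for i in range(len(dm)):
    # Delta m_o^i / Delta V = Phi * S_o * c_o^i, the chain used for (Conser.2)
    assert Phi * S_o * c[i] == dm[i] / dV
assert sum(c) == sum(dm) / dVo        # (Der.1): the c_o^i sum to rho_o
```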
Furthermore, we assume that each component $i$ of the mixture in the oil phase is [*incompressible*]{}, *i.e.* $$\Delta m_o^i = \rho_o^i \Delta V_o^i,\; \mbox{ where }\; \rho_o^i = \mbox{const.}$$ Now we have $$\begin{aligned} c_o^i = \frac{\Delta m_o^i}{\Delta V_o} = \frac{\rho_o^i\, \Delta V_o^i}{\Delta V_o} = \rho_o^i \frac{\Delta V_o^i/\Delta V_p}{\Delta V_o/\Delta V_p} = \rho_o^i \frac{S_o^i}{S_o}.\end{aligned}$$ Next, we make the assumption that the diffusion fluxes are proportional to the spatial gradients of suitable chemical potentials, *i.e.* $$\begin{aligned} {\mathbf{J}}_o^i := - \rho_o^i \sum_{j=1}^n D_{ij} (S_o^1,\ldots,S_o^n) \nabla \mu_j, \qquad i=1,\ldots,n, \label{Diff.Flux}\end{aligned}$$ where $\mu_j$ are given by using the notation $S \equiv S_o$. In this way, equation reads: $$\begin{aligned} \rho_o^i \frac{\partial}{\partial t} \Big( \Phi S_o \frac{S_o^i}{S_o} \Big) + {\operatorname{div}}\Big( \rho_o^i \frac{S_o^i}{S_o} {\mathbf{u}}_{o} - \rho_o^i \sum_{j=1}^n D_{ij} (S_o^1,\ldots,S_o^n) \nabla \mu_j \Big) = 0.\end{aligned}$$ Now, a simple calculation gives $$\begin{aligned} -\lambda_o k \nabla p_o = \frac{\lambda_o \lambda_w}{\lambda_o + \lambda_w} k \nabla ( p_w - p_o) + \frac{\lambda_o}{\lambda_o + \lambda_w}({\mathbf{u}}_o + {\mathbf{u}}_w).\end{aligned}$$ Furthermore, we assume that the total flow equals zero, *i.e.* ${\mathbf{u}}_o + {\mathbf{u}}_w = {\mathbf{0}}$, which gives $$\begin{aligned} \lambda_o(S_o) k \nabla p_o = a(S_o) k \nabla p_c^{\textrm{dyn}}.\end{aligned}$$ Here the diffusion mobility $a(S_o)$ is given by $$a(S_o) = \frac{\lambda_o(S_o) \lambda_w(S_o)}{\lambda_o(S_o) + \lambda_w(S_o)}.$$ In this way, we obtain the parabolic system of our interest $$\begin{aligned} \partial_t S_o^i - {\operatorname{div}}\left( \frac{S_o^i}{S_o} a(S_o) k \nabla p_c^{\textrm{dyn}} + \sum_{j=1}^n D_{ij}(S_o^1,\ldots,S_o^n) \nabla \mu_j \right) = 0, \label{Eq_1}\end{aligned}$$ where $i=1,\ldots,n$. 
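The "simple calculation" above is a purely algebraic identity in the Darcy fluxes; it can be verified with exact arithmetic, treating $k$ and the pressure gradients as scalars. All numerical values below are arbitrary samples.

```python
from fractions import Fraction as F

def darcy_identity(lam_o, lam_w, k, gpo, gpw):
    # Darcy fluxes u_alpha = -lambda_alpha * k * grad(p_alpha)
    u_o, u_w = -lam_o*k*gpo, -lam_w*k*gpw
    a = lam_o*lam_w / (lam_o + lam_w)          # diffusion mobility a(S_o)
    rhs = a*k*(gpw - gpo) + lam_o/(lam_o + lam_w)*(u_o + u_w)
    return rhs, -lam_o*k*gpo                   # both sides of the identity

for args in [(F(1), F(2), F(3), F(5), F(7)),
             (F(2, 3), F(1, 2), F(1), F(-1), F(4))]:
    rhs, lhs = darcy_identity(*args)
    assert rhs == lhs
# With zero total flow u_o + u_w = 0 the identity reduces to
# lambda_o * k * grad(p_o) = a * k * grad(p_c^dyn), as used above.
```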
Notice that is identical to with $k=1$ and $S_o$, $S_o^i$ replaced by $S$, $S_i$, respectively. . Theory of Fluid Flows Through Natural Rocks. Nedra Publishing House, Moscow, 1972. Reissued by Springer, 1996. . Introduction to modeling of transport phenomena in porous media. Kluwer Academic Publishers, 1990. . Nonlinear Poisson-Nernst-Planck equations for ion flux through confined geometries. [*Nonlinearity*]{} [**4**]{} (2012), 961–990. . A note on Aubin-Lions-Dubinskii lemmas. [*Acta Appl. Math.*]{} [**133**]{} (2014), 33–43. . Analysis of improved Nernst-Planck-Poisson models of compressible isothermal electrolytes. Part III: Compactness and convergence. [*WIAS Preprint*]{} No. 2397, 2017. . Uniqueness of weak solutions for a pseudo-parabolic equation modeling two phase flow in porous media. [*Appl. Math. Lett.*]{} [**46**]{} (2015), 25–30. . Degenerate two-phase flow model in porous media including dynamic effects in the capillary pressure: existence of a weak solution. [*J. Diff. Equ.*]{} [**260**]{} (2016), 2418–2456. . Macroscale continuum mechanics for multiphase porous-media flow including phases, interfaces, common lines and common points. [*Adv. Water Resources*]{} [**21**]{} (1998), 261–281. . Mechanics and thermodynamics of multiphase flow in porous media including interphase boundaries. [*Adv. Water Resources*]{} [**13**]{} (1990), 169–186. . Thermodynamic basis of capillary pressure in porous media. [*Water Resour. Res.*]{} [**29**]{} (1993), 3389–3405. . The boundedness-by-entropy method for cross-diffusion systems. [*Nonlinearity*]{} [**28**]{} (2015), 1963–2001. . [*Entropy Methods for Diffusive Partial Differential Equations*]{}. Springer Briefs in Mathematics, Springer, 2016. . Existence analysis of a single-phase flow mixture model with van der Waals pressure. To appear in [*SIAM J. Math. Anal.*]{} (2018), no. 1 (arXiv: 1612.04161). . 
A global existence result for the equations describing unsaturated flow in porous media with dynamic capillary pressure. [*J. Diff. Equ.*]{} [**248**]{} (2010), 1561–1577. . The unsaturated flow in porous media with dynamic capillary pressure. [*J. Diff. Equ.*]{} [**264**]{} (2018), 5629–5658. . On global inversion of homogeneous maps. [*Bull. Math. Sci.*]{} [**5**]{} (2015), 13–18. . Nonlinear Functional Analysis and its Applications. Volume II/B. Springer, New York, 1990. [^1]: The first and the third author acknowledge partial support from the Austrian Science Fund (FWF), grants P22108, P24304, W1245, P27352 and P30000. All three authors were partially supported by the bilateral project No. HR 04/2018 of the Austrian Exchange Service OeAD together with the Ministry of Science and Education of the Republic of Croatia MZO. [^2]: We point out that if $\beta(S)$ and $\mu_i^* = (\Pi\mu)_i$ belong to $L^2_{loc}(0,\infty; H^1(\Omega))$ for $i=1,\ldots,n$, then they admit trace on $\pa\Omega$, therefore also $S_1,\ldots,S_n$ admit trace on $\pa\Omega$ thanks to the invertibility of $S\mapsto\beta(S)$ and relation .
--- abstract: 'Retarded electromagnetic potentials are derived from Maxwell’s equations and the Lorenz condition. The difference found between these potentials and the conventional Liénard-Wiechert ones is explained by neglect, for the latter, of the motion-dependence of the effective charge density. The corresponding retarded fields of a point-like charge in arbitrary motion are compared with those given by the formulae of Heaviside, Feynman, Jefimenko and other authors. The fields of an accelerated charge given by the Feynman formulae are the same as those derived from the Liénard-Wiechert potentials, but not those given by the Jefimenko formulae. A mathematical error concerning partial space and time derivatives in the derivation of the Jefimenko equations is pointed out.' --- [**J.H.Field** ]{} [ Département de Physique Nucléaire et Corpusculaire, Université de Genève. 24, quai Ernest-Ansermet CH-1211 Genève 4.]{} [e-mail: john.field@cern.ch]{} Keywords: Special Relativity, Classical Electrodynamics. PACS 03.30+p 03.50.De ****[Introduction]{}**** ======================== The present paper is the fifth in a series written recently by the present author on relativistic classical electrodynamics (RCED). In the first of the papers  [@JHFRCED], all of the formulae of classical electromagnetism (CEM), up to relativistic corrections of O($\beta^2$), relating to intercharge forces, were derived from Hamilton’s Principle, assuming only Coulomb’s inverse-square force law of electrostatics and relativistic invariance. In the same paper it was shown that the intercharge force, mediated by the exchange of space-like virtual photons, is predicted by quantum electrodynamics (QED) to be instantaneous in the centre-of-mass frame of the interacting charges. 
Recently, convincing experimental evidence has been obtained [@Kohletal] for the non-retarded nature of ‘bound’ magnetic fields with $r^{-2}$ dependence (associated in QED with virtual photon exchange) in a modern version, probing small $r$ values, of the Hertz experiment [@Hertz] in which the electromagnetic waves associated with the propagation of real photons (fields with $r^{-1}$ dependence) were originally discovered. In two subsequent papers [@JHFRSKO; @JHFIND] the predictions of the RCED formulae for intercharge forces derived in Ref. [@JHFRCED] are compared with the predictions of the CEM (Heaviside) formulae [@Heaviside] for the force fields of a uniformly moving charge. Unlike the RCED formulae, the CEM ones correspond to a retarded interaction. If the latter are written in ‘present time’ form [@PPPT], they are found to differ from the RCED formulae by terms of O($\beta^2$). In the first paper [@JHFRSKO], it is shown that consistent results for small-angle Rutherford scattering in different inertial frames are obtained only for the RCED formulae and that stable, circular, Keplerian orbits of a system consisting of two equal and opposite charges are impossible for the case of the retarded CEM fields. The related ‘Torque Paradox’ of Jackson [@JackTP] is also resolved by use of the instantaneous RCED fields. The second paper [@JHFIND] considers electromagnetic induction in different reference frames. Again, consistent results are obtained only in the case of RCED fields. It is demonstrated that for a particular spatial configuration of a simple two-charge ‘magnet’ the Heaviside formula for the electric field predicts a vanishing induction effect in the case that the ‘magnet’ is in motion and the test coil is at rest. In Ref. [@JHFFT], the space-time transformation properties of the RCED and CEM force fields were studied in detail and compared with those that provide the classical description of the creation, propagation, and destruction of real photons. 
It was shown that in the relativistic theory longitudinal (with respect to the direction of motion of the source charge) electric fields contain covariance-breaking terms of O($\beta^2$). The electric Gauss Law and Electrodynamic (Ampère Law) Maxwell Equations are also modified by the addition of covariance-breaking terms of O($\beta^4$) and O($\beta^5$) respectively. The retarded fields are re-derived from the Maxwell Equations and the Lorenz condition and an error in the derivation of the retarded Liénard-Wiechert (LW) [@LW] potentials was pointed out. The argument leading to this conclusion —which implies that retarded fields given by the Heaviside formulae are erroneous for this trivial mathematical reason, as well as being inconsistent with QED— is recalled in Sections 2 and 3 below. The aim of the present paper is to present a more detailed discussion of retarded electromagnetic fields with a view to pointing out some of the mathematically erroneous statements on this subject that have appeared in classical research literature, text books and modern pedagogical literature. The correct relativistic formulae for the retarded fields of an accelerated charge have previously been derived in Ref. [@JHFFT]. These fields actually describe only the production and propagation of real photons whereas in text books and the pedagogical literature it is universally assumed that these fields describe both intercharge forces and radiative effects. Since the present paper is concerned only with the postulates and mathematical arguments used in different derivations of the retarded fields, the physical interpretation of the fields (in particular their relation to the quantum mechanical description of radiation), as discussed in Ref. [@JHFFT], is not considered. The structure of the paper is as follows. In the following section the retarded 4-vector potential is derived from inhomogeneous d’Alembert equations and the Lorenz condition. 
The reason for the difference between the potential so-obtained and the pre-relativistic LW potentials is explained. In Section 3 Feynman’s derivation of the LW potentials is recalled, where the ‘multiple counting’ committed also in the original derivations [@LW] is made particularly transparent. In Section 4 some erroneous ‘relativistic’ derivations of the LW potentials and the Heaviside formulae that are commonly presented in text books on classical electromagnetism are discussed. In Section 5 the retarded fields of a uniformly moving charge are considered and the ‘present time’ formulae for the retarded RCED fields are derived for comparison with the Heaviside formulae of CEM. In Section 6 a comparison is made between different formulae for the retarded fields of an accelerated charge that have appeared in text books and the pedagogical literature including the well-known ones of Feynman and Jefimenko. Section 7 contains a brief summary. ****[Derivation of retarded electromagnetic potentials from inhomogeneous d’Alembert equations]{}**** ===================================================================================================== As described in Ref. 
[@Jack1], retarded electromagnetic potentials may be derived from the Maxwell equations: $$\begin{aligned} \vec{\nabla} \cdot \vec{{{\rm E}}} & = & 4 \pi {{\rm J}}_0, \\ \vec{\nabla} \times \vec{{{\rm B}}} & - &\frac{1}{c} \frac{\partial \vec{{{\rm E}}}}{\partial t} = 4 \pi \vec{{{\rm J}}} \end{aligned}$$ and the Lorenz condition $$\vec{\nabla} \cdot \vec{{{\rm A}}} + \frac{1}{c}\frac{\partial {{\rm A}}_0}{\partial t} = 0$$ where the current density ${{\rm J}}$ is a 4-vector: $${{\rm J}}(\vec{x}_J(t),t) = ({{\rm J}}_0;\vec{{{\rm J}}}) \equiv (\gamma_u \rho^*; \gamma_u \vec{\beta}_u \rho^*) = \frac{u \rho^*}{c}.$$ The system of source charges is assumed to be at rest in the frame S$^*$, where the charge density is $\rho^*$, and to move with velocity $\vec{u} = c \vec{\beta}_u$ relative to the frame S in which the potential is defined. The 4-vector velocity of the charge system in this last frame is: $$u \equiv (c\gamma_u ; c\gamma_u \vec{\beta}_u )$$ where $$\beta_u \equiv \frac{u}{c},~~~\gamma_u \equiv \frac{1}{\sqrt{1-\beta_u^2}}.$$ The first step of the calculation is to use the Lorenz condition (2.3) to eliminate either $\vec{{{\rm J}}}$ or ${{\rm J}}_0$ from (2.1) and (2.2) to obtain the inhomogeneous d’Alembert equations: $$\begin{aligned} \nabla^2 {{\rm A}}_0 -\frac{1}{c^2}\frac{\partial^2 {{\rm A}}_0}{\partial t^2} & = & -4 \pi {{\rm J}}_0, \\ \nabla^2 \vec{{{\rm A}}} -\frac{1}{c^2}\frac{\partial^2 \vec{{{\rm A}}}}{\partial t^2} & = & -4 \pi \vec{{{\rm J}}}. \end{aligned}$$ These equations are readily solved by introducing appropriate Green functions [@Jack1]. 
The solutions give the retarded 4-vector potential: $${{\rm A}}_{\mu}^{ret}(\vec{x}_q,t) = \int dt' \int d^3 x_J(t') \frac{{{\rm J}}_{\mu}(\vec{x}_J(t'), t')} {|\vec{x}_q -\vec{x}_J(t')|} \delta(t'+\frac{|\vec{x}_q -\vec{x}_J(t')|}{c}-t).$$ Here $\vec{x}_q$ is the position and $t$ the time at which the potential is defined and $\vec{x}_J(t')$ specifies the position of the volume element $d^3 x_J(t')$ at the earlier time $t'$. The $\delta$-function ensures that the volume element lies on the backward light cone of the field point specified by $\vec{x}_q$, as required by causality, since the potentials give the classical description of the propagation, from the source to the field point, of real (on-shell) photons at speed $c$. This is a consequence of the wave-equation-like structure of the terms on the left sides of the d’Alembert equations. The solutions of the corresponding homogeneous d’Alembert equations are progressive waves with phase velocity $c$. In the special case of a single point-like source charge the current density in (2.8) is given by the expression: $${{\rm J}}^Q(\vec{x}_J(t'), t') = \frac{ Q u}{c} \delta (\vec{x}_J(t')-\vec{x}_Q(t'))$$ where $\vec{x}_Q(t')$ is the position of the charge at time $t'$. Inserting (2.9) in (2.8), and integrating over $\vec{x}_J$, gives $${{\rm A}}_{\mu}^{ret}(\vec{x}_q,t) = \frac{Q u_{\mu}}{c} \int dt' \frac{\delta(t'-t'_Q)} {r'}$$ where $$r' \equiv |\vec{x}_q-\vec{x}_Q(t')|,~~~ t'_Q \equiv t - \frac{|\vec{x}_q -\vec{x}_Q(t'_Q)|}{c} = \left. t - \frac{r'}{c} \right|_{t' = t'_Q}.$$ The retarded 4-vector potential is therefore: $$({{\rm A}}_0^{ret};\vec{{{\rm A}}}^{ret}) = \left( \left. \frac{Q \gamma_u}{r'} \right|_{t' = t'_Q}; \left. 
\frac{Q \gamma_u \vec{\beta}_u}{r'}\right|_{t' = t'_Q} \right).$$ A similar result to (2.12) is obtained for an extended distribution of charge in the case that its dimensions are much less than the separation between the average position of the source charge distribution, $\langle \vec{x}_J\rangle$, and the field point. In this case $\vec{x}_J$ may be replaced in the $\delta$-function and denominator of (2.8) by $\langle \vec{x}_J\rangle$, so that the factor $\langle r' \rangle \equiv |\vec{x}_q -\langle \vec{x}_J \rangle|$ in the denominator may be taken outside the $\vec{x}_J$ integral giving $$\begin{aligned} {{\rm A}}_{\mu}^{ret}(\vec{x}_q,t) & = & \int\frac{dt'}{\langle r' \rangle} \int d^3 x_J(t') {{\rm J}}_{\mu}(\vec{x}_J(t'),t') \delta(t'+\frac{|\vec{x}_q - \langle \vec{x}_J\rangle|}{c}-t) \nonumber \\ & = & \int\frac{dt'}{\langle r' \rangle}\frac{u_{\mu}}{c} \int d^3 x_J(t') \rho^*(\vec{x}_J(t'), t') \delta(t'+\frac{|\vec{x}_q -\langle \vec{x}_J\rangle|}{c}-t) \nonumber \\ & = & \int\frac{dt'}{\langle r' \rangle} \frac{u_{\mu} Q }{c} \delta(t'-\langle t'_J \rangle) \end{aligned}$$ where $Q$ is the total charge of the distribution: $$Q = \int \rho^*(\vec{x}_J(t'), t') d^3 x_J( t')$$ and $$\langle t'_J \rangle \equiv t - \frac{|\vec{x}_q -\langle \vec{x}_J \rangle|}{c} = t - \frac{\langle r' \rangle}{c}$$ giving the 4-vector potential: $$({{\rm A}}_0^{ret};\vec{{{\rm A}}}^{ret}) = \left( \left. \frac{Q \gamma_u}{\langle r' \rangle} \right|_{t' = \langle t'_J \rangle}; \left. 
\frac{Q \gamma_u \vec{\beta}_u}{ \langle r' \rangle}\right|_{t' = \langle t'_J \rangle} \right).$$ It is now of interest, in view of understanding the origin of the LW potentials, to recalculate the retarded potentials after inverting the order of the $t'$ and $\vec{x}_J(t')$ integrations in (2.8), so that: $${{\rm A}}^{ret}_{\mu}(\vec{x}_q,t) = \int d^3 x_J(t') \int dt' \frac{{{\rm J}}_{\mu}(\vec{x}_J(t'), t')} {|\vec{x}_q -\vec{x}_J(t')|} \delta(t'+\frac{|\vec{x}_q -\vec{x}_J(t')|}{c}-t).$$ Unlike in (2.10), where the insertion of the point-like charge current density (2.9) simply specifies, on integrating over $\vec{x}_J$, the value of $t'$ in the $\delta$-function to be $t'_Q$, as given by Eq.(2.11), the argument of the $\delta$-function in (2.17) has a more complicated dependence on $t'$: $$\delta [f(t')] = \frac{\delta(t'-t'_J)}{~~~~\left|\frac{\partial f(t')}{\partial t'}\right|_{t' = t'_J}}$$ where $t'_J$ is the solution of the equation $f(t')=0$ and $$f(t') \equiv t' +\frac{|\vec{x}_q -\vec{x}_J(t')|}{c}-t.$$ It follows from (2.19), and the definition of $t'_J$, that $$t'_J = t - \frac{|\vec{x}_q -\vec{x}_J(t'_J)|}{c}.$$ Differentiating (2.19) gives: $$\frac{\partial f(t')}{\partial t'} = 1-\hat{r}'_J \cdot \vec{\beta}_u$$ where: $$\hat{r}'_J = \frac{\vec{x}_q -\vec{x}_J(t_J')}{|\vec{x}_q -\vec{x}_J(t_J')|},~~~ \vec{\beta}_u = \frac{1}{c}\frac{d \vec{x}_J(t')}{d t'}$$ so that (2.17) may be written as $${{\rm A}}^{ret}_{\mu}(\vec{x}_q,t) = \int d^3 x_J(t') \int dt' \frac{{{\rm J}}_{\mu}(\vec{x}_J(t'), t')} {|\vec{x}_q -\vec{x}_J(t')|(1-\hat{r}'_J \cdot \vec{\beta}_u)} \delta(t'- t_J').$$ In performing the integral over $t'$, proper account must now be taken of the appropriate current density $J_{\mu}$ to be inserted in (2.23). The limits of the $t'$ integral are determined by the times at which the backward light cone of the field point coincides with the boundaries of the moving charge distribution. 
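The Jacobian (2.21) is straightforward to confirm numerically by comparing a finite-difference derivative of $f(t')$ with $1-\hat{r}'_J\cdot\vec{\beta}_u$ for a uniformly moving source; the trajectory and field point below are arbitrary sample values.

```python
import math

c = 1.0
x_q = (10.0, 2.0, 0.0)                         # field point (sample)
x0, v = (0.0, 0.0, 0.0), (0.5, 0.1, 0.0)       # x_J(t') = x0 + v*t' (sample)

def f(tp):
    # f(t') = t' + |x_q - x_J(t')|/c - t as in (2.19); the constant t drops
    # out of the derivative and is omitted here.
    r = [x_q[i] - (x0[i] + v[i]*tp) for i in range(3)]
    return tp + math.sqrt(sum(ri*ri for ri in r)) / c

tp, h = 3.0, 1e-6
numeric = (f(tp + h) - f(tp - h)) / (2*h)      # central-difference derivative

r = [x_q[i] - (x0[i] + v[i]*tp) for i in range(3)]
rn = math.sqrt(sum(ri*ri for ri in r))
analytic = 1 - sum((r[i]/rn) * (v[i]/c) for i in range(3))  # 1 - r_hat . beta
assert abs(numeric - analytic) < 1e-8
```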
This is illustrated in Fig.1 for a uniform block of charge DEFG, of trapezoidal shape, moving in the plane of the figure towards a distant field point, in this plane, far to the right. The segments AA’, BB’ and CC’ lie on the light front, LF, that coincides with the backward light cone of the field point. It is assumed that the latter is sufficiently far that LF may be approximated by a plane, with normal in the plane of the figure. The block of charge is moving with speed $u$ in the plane of the figure at angle $\theta$ to the direction of motion of LF. The light front starts to overlap the block of charge in the position AA’ and ceases to do so in the position CC’. The limits of the $t'$ integral in (2.23) for this case then correspond to the times when the front coincides with AA’ (lower limit) and with CC’ (upper limit). Inspection of Fig.1 shows that, during the time interval between these limits, the average value of the charge density, $\bar{\rho}$, is less than that when the distribution is at rest, $\rho^*$, by the ratio: $$\frac{\ell}{L} = \frac{{ \rm length~of~charge~distribution}}{{\rm length~of~light~cone~overlap~region}}.$$ If $\Delta t'$ is the time during which there is overlap between LF and the block of charge, the geometry of Fig.1 gives: $$L = u \Delta t' + \ell = \frac{c \Delta t'}{\cos \theta}$$ so that $$\frac{\ell}{L} = 1-\frac{u}{c} \cos \theta = 1-\hat{r}'_J \cdot \vec{\beta}_u.$$ It can be seen from Fig.1 that the same average charge density is obtained if the uniform block of charge is replaced by a point-like charge, $Q$, equal to the integrated charge of the block and placed at its centre, or if the moving uniform charge distribution is replaced by the fixed one MNOP with density $\bar{\rho}$. For a single point-like charge the appropriate current density in (2.23) is then given by (2.9). 
Modified by the charge density correction factor of (2.24) and (2.26), it becomes: $${{\rm J}}^Q(\vec{x}_J(t'), t') = (1-\hat{r}'_J \cdot \vec{\beta}_u) \frac{ Q u}{c} \delta (\vec{x}_J(t')-\vec{x}_Q(t')).$$ Inserting (2.27) in (2.23) and performing the integrals over $t'$ and $\vec{x}_J$ recovers the result of Eq.(2.12). The increased overlap time of the light front resulting from the motion of the block of charge is exactly compensated by the reduction of the average charge density resulting from the same motion. The incorrect LW potentials are given by taking into account the time-overlap correction factor but neglecting the corresponding change in the charge density. This gives, instead of (2.12), the potentials $$({{\rm A}}_0^{ret};\vec{{{\rm A}}}^{ret}) \equiv (\gamma_u {{\rm A}}_0({\rm LW})^{ret};\gamma_u\vec{{{\rm A}}}({\rm LW})^{ret}) = \left( \left. \frac{Q \gamma_u}{r'(1-\hat{r}'_J \cdot \vec{\beta}_u)} \right|_{t' = t'_Q}; \left. \frac{Q \gamma_u \vec{\beta}_u}{r'(1-\hat{r}'_J \cdot \vec{\beta}_u)}\right|_{t' = t'_Q} \right)$$ where ${{\rm A}}_0({\rm LW})^{ret}$ and $\vec{{{\rm A}}}({\rm LW})^{ret}$ are the Liénard and Wiechert potentials. This mistake in the original Liénard and Wiechert [@LW] calculations has been repeated in all text-book treatments of the subject of retarded potentials. Some examples may be found in Refs. [@PPLW; @RosserLW; @JackLW; @SchwartzLW; @GriffithsLW]. Inspection of (2.23) shows that neglect of the charge density correction factor of (2.27) in evaluating the potentials implies that they are under-estimated when the source is receding ($\hat{r}'_J \cdot \vec{\beta}_u < 0$) and over-estimated when it is approaching ($\hat{r}'_J \cdot \vec{\beta}_u > 0$). On the other hand, the $1/r'$ dependence of the potential implies that the potentials are greater (smaller) when $\hat{r}'_J \cdot \vec{\beta}_u < 0$ ($\hat{r}'_J \cdot \vec{\beta}_u > 0$). 
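The geometrical factor of (2.24)–(2.26) is easy to verify with exact arithmetic: with $L = u\Delta t' + \ell = c\Delta t'/\cos\theta$, the density ratio $\ell/L$ equals $1-(u/c)\cos\theta$ identically. The speeds, angles and times below are arbitrary sample values.

```python
from fractions import Fraction as F

def density_ratio(u, c, cos_th, dt):
    # Geometry of Fig.1, Eq.(2.25): L = c*dt/cos(theta) and ell = L - u*dt.
    L = c*dt / cos_th
    ell = L - u*dt
    return ell / L, 1 - (u/c)*cos_th

for u, c, cos_th, dt in [(F(3, 5), F(1), F(4, 5), F(2)),
                         (F(1, 2), F(1), F(1, 3), F(7))]:
    lhs, rhs = density_ratio(u, c, cos_th, dt)
    assert lhs == rhs      # ell/L = 1 - (u/c)*cos(theta), Eq.(2.26)
```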
It is shown in Section 5 below that for the ‘present time’ LW fields (Eqs.(4.14) and (4.15) below) the neglect of the charge density correction factor results in exact compensation of the $1/r'^2$ dependence so that the magnitudes of the fields are independent of the sign of $\hat{r}'_J \cdot \vec{\beta}_u$, as is the case for an instantaneous intercharge interaction. Note that the potentials on the right side of (2.28), derived by neglecting the density correction factor in (2.27), differ from the retarded LW potentials by an overall factor of $\gamma_u$. This factor will be commented on at the end of Section 4 below where alternative ‘relativistic’ derivations of the LW potentials are discussed. ****[Feynman’s derivation of the Liénard-Wiechert potentials]{}**** =================================================================== The erroneous nature of the retarded potentials found when the charge density correction factor of Eqs.(2.24) and (2.26) is neglected is made particularly clear by a careful examination of Feynman’s derivation [@FeynLW] of the LW potentials for the case of parallel motion of the source distribution and the light front, LF, corresponding to the backward light cone of the field point. Feynman’s analysis of the problem of retarded potentials is shown in Fig.2. A rectangular block of charge, of uniform density, moves towards the field point, which is sufficiently far to the right that the variation of $r'_J$ may, as in deriving Eq.(2.16) above, be neglected in evaluating the integral that gives the potential. The light front moves across the charge distribution, sampling it. Each element of charge which is crossed by LF gives a contribution to the potential. The depth of the block of charge is $\ell$ and LF moves over the distance $L$ while crossing the charge distribution. The front overlaps the charge distribution for a time interval $T$. 
The overlap distance, $L$, is divided into bins of width $w$ and the contribution to the potential of each bin is considered separately. In Fig.2 the dimensions and velocity $u$ are chosen so that: $$\ell = \frac{2 L}{5},~~~~~w = \frac{L}{5}.$$ It then follows that $u = 3c/5$. In this figure, the positions of the charge distribution and the front LF at times 0, $T/5$ and $2T/5$, respectively, are shown. In Figs.2b and 2d the front has crossed charge thicknesses of $0.4w$ and $0.8w$, respectively. The region crossed during the time $0 < t < T/5$ is shown by SW-NE[^1] diagonal cross-hatching, that crossed in $T/5< t < 2T/5$ by NW-SE diagonal cross-hatching. Thus the average charge density in each bin is reduced, in comparison with the situation when the charges are at rest, by 60$\%$. Integrating first over the time, as in (2.17), for each bin, then gives: $${{\rm A}}_{\mu} = \frac{u_{\mu} S}{c r'_J} \sum_{bins}w \bar{\rho} = \frac{u_{\mu} S L \bar{\rho}}{c r'_J}$$ where $S$ is the surface area of the charge distribution normal to its direction of motion and $\bar{\rho}$ is the average charge density. From the geometry of Fig.2a, $\bar{\rho} = 2 \rho^*/5$ where $\rho^*$ is the rest frame charge density. Since $L = 5 \ell /2$, (3.1) gives: $${{\rm A}}_{\mu} = \frac{u_{\mu} S \ell \rho^*}{c r'_J} = \frac{u_{\mu} Q}{c r'_J}$$ where $Q$ is the total charge in the block. Allowing for the propagation time delay of the light front with respect to the time of the field point, (3.2) agrees with Eq.(2.16) but not with the LW potentials in (2.28). The contributions to the integral given by the first two bins, according to Feynman’s original calculation [@FeynLW], are shown by the SW-NE and NW-SE diagonal hatching in Figs.2c and 2e respectively. The movement of the charge distribution is neglected, and with it the change in the effective charge density. Feynman’s result is given by replacing $\bar{\rho}$ in (3.1) by $\rho^*$, the density of the charge distribution at rest. 
This gives a result consistent with (2.28), but is evidently wrong, since charge elements are multiply counted during the passage of the light front. For example, a contribution to the integral is assigned proportional to the area of the cross-hatched region to the left of LF in Fig.2c for $t \le T/5$. However, inspection of Fig.2b, showing the actual geometrical configuration at $t = T/5$, shows that, because of the parallel motion of the charge distribution, LF has crossed only the fraction of the region in Fig.2c that is both shaded and cross-hatched, not the entire cross-hatched region. In fact, careful inspection of Fig. 21-6(c) of Ref. [@FeynLW] shows clearly that the contribution due to the passage of the light front over the first bin is overestimated. Only the region of the charge distribution to the left of the light front as shown in this figure has been sampled at this time, not the filled first bin of Fig. 21-6(b) of Ref. [@FeynLW]. ****[‘Relativistic’ derivations of the Liénard-Wiechert potentials and the electromagnetic fields of a uniformly moving charge]{}**** ===================================================================================================================================== As well as the derivation of the LW potentials by consideration of retardation effects, as in the original papers of Liénard and Wiechert, text books on classical electromagnetism contain alternative derivations, where no retardation effects are considered, but instead a relativistic ‘length contraction’ effect is invoked. For example, in Ref.
[@LLLW1], the temporal component ${{\rm A}}_0$ of the 4-vector electromagnetic potential is obtained by Lorentz-transformation into the frame S, where ${{\rm A}}_0$ is defined, from the frame S$^*$ in which the point-like source charge $Q$ is at rest: $${{\rm A}}_0 = \gamma_u {{\rm A}}_0^* = \gamma_u \frac{Q}{r^*}$$ where $$r \equiv |\vec{x}_q-\vec{x}_Q|,~~~r^* \equiv |\vec{x}_q^*-\vec{x}_Q^*|.$$ The vectors $\vec{x}_q$, $\vec{x}_Q$ ($\vec{x}_q^*$,$\vec{x}_Q^*$) give the position of the field point and the source charge, respectively, in the frames S (S$^*$). These coordinates are specified at a fixed time in the frame S; no retardation effects are considered. It is then assumed that the $x$-coordinate separations in the frames S and S$^*$ are related by the relativistic length contraction relation: $$x_q^*- x_Q^* = \frac{x_q- x_Q}{\sqrt{1-\frac{u^2}{c^2}}}$$ while the $y$ and $z$ separations are the same in both frames. It then follows from (4.2) and (4.3) that $$(r^*)^2 = \frac{(x_q- x_Q)^2+(1-\frac{u^2}{c^2})[(y_q- y_Q)^2+(z_q- z_Q)^2]}{1-\frac{u^2}{c^2}}.$$ Denoting by $\psi$ the angle between the vectors $\vec{x}_q-\vec{x}_Q$ and $\vec{u}$, (4.4) may be written as: $$\begin{aligned} (r^*)^2 & = & r^2 \frac{[\cos^2 \psi+(1-\frac{u^2}{c^2})\sin^2 \psi]}{1-\frac{u^2}{c^2}} \nonumber \\ & = & r^2 \frac{[1-\beta_u^2\sin^2 \psi]}{1-\frac{u^2}{c^2}}. \end{aligned}$$ Substituting $r^*$ from (4.5) in (4.1) then gives $${{\rm A}}_0 \equiv {{\rm A}}_0({\rm LW})^{PT} = \frac{Q}{r(1-\beta_u^2\sin^2 \psi )^{\frac{1}{2}}}.$$ This is the ‘present time’ (PT) formula [@PPPT] for the temporal component of the retarded LW potential ${{\rm A}}_0({\rm LW})^{ret}$ given in Eq.(2.28) above. All quantities in (4.6) are defined at the instant that the potential is specified.
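The chain of substitutions (4.1)-(4.6) can be verified numerically; the following sketch (an illustrative aid with arbitrarily chosen separations, not part of the original text) checks that the ‘length contraction’ ansatz (4.3) indeed turns the transformed Coulomb potential (4.1) into the ‘present time’ form (4.6):

```python
import math

# Check that gamma_u * Q / r*  (Eq. (4.1)), with r* given by the ansatz (4.3),
# equals the 'present time' form Q / (r sqrt(1 - beta^2 sin^2 psi)) of Eq. (4.6).
Q, beta = 1.0, 0.7
gamma = 1.0 / math.sqrt(1.0 - beta**2)

# arbitrary S-frame separations between the field point q and the source charge Q
dx, dy, dz = 0.3, 1.2, -0.5
r = math.sqrt(dx**2 + dy**2 + dz**2)
sin2psi = (dy**2 + dz**2) / r**2        # psi: angle between x_q - x_Q and u

# Eq. (4.3): x-separation 'contracted', y and z unchanged, giving Eq. (4.4)
r_star = math.sqrt((gamma * dx)**2 + dy**2 + dz**2)
A0 = gamma * Q / r_star                 # Eq. (4.1)

A0_PT = Q / (r * math.sqrt(1.0 - beta**2 * sin2psi))   # Eq. (4.6)
print(A0, A0_PT)                        # numerically equal
```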
The ‘present time’ form of the 3-vector potential $\vec{{{\rm A}}}$ is calculated, in a similar manner, to obtain $$\vec{{{\rm A}}} \equiv \vec{{{\rm A}}}({\rm LW})^{PT} = \frac{Q \vec{\beta}_u}{r(1-\beta_u^2\sin^2 \psi)^{\frac{1}{2}}}.$$ It is interesting to note that the $\gamma_u$ factor in (4.1), manifesting the 4-vector character of ${{\rm A}}$, is cancelled by a similar factor originating in the ‘length contraction’ effect of Eq.(4.3). A similar derivation of ${{\rm A}}_0({\rm LW})^{PT}$ may be found in Ref. [@PPLW1] where it is noted that the change of variables $$x_q^* = \frac{x_q}{\sqrt{1-\frac{u^2}{c^2}}},~~~ y_q^* = y_q,~~~ z_q^* = z_q$$ transforms the d’Alembert equation (2.6) into a Poisson equation, the solution of which is the Coulomb electrostatic potential $Q/r^*$. Expressing $r^*$ in terms of ($x_q$,$y_q$,$z_q$), neglecting a multiplicative factor $\gamma_u$ (which was cancelled in the derivation of Ref. [@LLLW1] by the similar factor in the numerator of the right side of (4.1)), the potential ${{\rm A}}_0({\rm LW})^{PT}$ is obtained. It is only mentioned at the end of the calculation that the purely mathematical transformations of Eqs.(4.8) should be interpreted as physical transformations predicted by the Lorentz transformation. Unlike in Ref. [@LLLW1], the scalar and vector potentials are treated in a non-relativistic manner, necessitating a (tacit) neglect of a factor $\gamma_u$ in order to recover the LW result. A ‘relativistic’ derivation of the ‘present time’ formulae for the electric and magnetic fields of a uniformly moving charge, by use of a similar ‘length contraction’ ansatz as in Refs.
[@LLLW1; @PPLW1] is found in Jackson’s book [@JackLW1][^2]. The conventional transformation laws of electric and magnetic fields between the frames S$^*$ and S: $${{\rm E}}_x = {{\rm E}}_x^*,~~~{{\rm E}}_y = \gamma_u( {{\rm E}}_y^*+\beta_u {{\rm B}}_z^*),~~~{{\rm B}}_z = \gamma_u( {{\rm B}}_z^*+\beta_u {{\rm E}}_y^*)$$ are used to transform the fields in the rest frame of the source charge: $${{\rm E}}_x^* = \frac{Q(x_q^*- x_Q^*)}{(r^*)^3},~~~{{\rm E}}_y^* = \frac{Q(y_q^*- y_Q^*)}{(r^*)^3},~~~{{\rm E}}_z^* = {{\rm B}}_x^*= {{\rm B}}_y^*= {{\rm B}}_z^*=0$$ into the frame S. Performing this transformation, and using (4.3) to express the result in terms of S frame coordinates[^3] gives $$\begin{aligned} {{\rm E}}_x & = & \frac{Q(x_q- x_Q)}{\gamma_u^2 \{(x_q- x_Q)^2+(1-\frac{u^2}{c^2})[(y_q- y_Q)^2+(z_q- z_Q)^2]\}^{\frac{3}{2}}} \nonumber \\ & = & \frac{Q \cos \psi}{\gamma_u^2 r^2(1-\beta_u^2\sin^2 \psi)^{\frac{3}{2}}}, \\ {{\rm E}}_y & = & \frac{Q(y_q- y_Q)}{\gamma_u^2 \{(x_q- x_Q)^2+(1-\frac{u^2}{c^2})[(y_q- y_Q)^2+(z_q- z_Q)^2]\}^{\frac{3}{2}}} \nonumber \\ & = & \frac{Q \sin \psi}{\gamma_u^2 r^2(1-\beta_u^2\sin^2 \psi)^{\frac{3}{2}}}, \\ {{\rm B}}_z & = & \beta_u {{\rm E}}_y = \frac{Q \beta_u \sin \psi}{\gamma_u^2 r^2(1-\beta_u^2\sin^2 \psi)^{\frac{3}{2}}}. \end{aligned}$$ Eqs.(4.11)-(4.13) may also be written in 3-vector notation as: $$\begin{aligned} \vec{{{\rm E}}} & \equiv & \vec{{{\rm E}}}({\rm H})^{PT} = \frac{Q \vec{r}}{\gamma_u^2 r^3(1-\beta_u^2\sin^2 \psi)^{\frac{3}{2}}}, \\ \vec{{{\rm B}}} & \equiv & \vec{{{\rm B}}}({\rm H} )^{PT} = \vec{\beta_u} \times \vec{{{\rm E}}}. \end{aligned}$$ The label ‘H’ stands for ‘Heaviside’ who first obtained these equations [@Heaviside] more than a decade before the advent of special relativity. They may also be obtained from the ‘present time’ potentials in (4.6) and (4.7) and the usual definitions of electric and magnetic fields in terms of derivatives of the 4-vector potential.
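As a numerical cross-check (illustrative only, not part of the original text), the transformation (4.9) applied to the rest-frame Coulomb field (4.10), with the coordinates related by the ansatz (4.3), reproduces the closed ‘present time’ forms (4.11)-(4.13):

```python
import math

Q, beta = 1.0, 0.6
gamma = 1.0 / math.sqrt(1.0 - beta**2)

dx, dy = 0.8, 1.1                      # S-frame separations; fields in the x-y plane
r = math.hypot(dx, dy)
cospsi, sinpsi = dx / r, dy / r
f3 = (1.0 - beta**2 * sinpsi**2) ** 1.5

# rest-frame coordinates via the ansatz (4.3), and the Coulomb field (4.10)
dxs, dys = gamma * dx, dy
rs = math.hypot(dxs, dys)
Exs, Eys = Q * dxs / rs**3, Q * dys / rs**3

# transformation (4.9) back to S (all B* components vanish in the rest frame)
Ex, Ey = Exs, gamma * Eys
Bz = gamma * beta * Eys                # magnetic field along z for motion along x

# closed 'present time' forms (4.11)-(4.13)
Ex_H = Q * cospsi / (gamma**2 * r**2 * f3)
Ey_H = Q * sinpsi / (gamma**2 * r**2 * f3)
print(Ex - Ex_H, Ey - Ey_H, Bz - beta * Ey)   # all numerically zero
```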
It is easy to show that the ‘length contraction’ ansatz of Eqs.(4.3) and (4.8) used to derive (4.14) and (4.15), as obtained from the retarded LW potential, but without invoking any retardation effect, is inconsistent with a fundamental reciprocity property of special relativity. This was stated in a concise way, and in a manner directly applicable to the problem considered here, by Pauli [@Pauli]: [The contraction of lengths at rest in S$^*$ is equal to that of lengths at rest in S and observed in S$^*$.]{} To make manifest the symmetry of the configurations in the frames S and S$^*$, that is the basis of the applicability of the above reciprocity postulate in the present case, a test charge $q$, at rest, is placed at the field point in S. As shown in Fig.3, the ‘length at rest in S$^*$’ is the separation, $r^*$, of the source and test charges in this frame (Fig.3a). Similarly the ‘length at rest in S’ is equal to $r$ (Fig.3b). However, in the case of the ‘length contraction’ ansatz of Eqs.(4.3) and (4.8), $r$ is also the contracted value of the length $r^*$ as observed in S, i.e. $$r = \alpha(u)r^*$$ where $\alpha(u)$ is some even function of the relative velocity $u$ of the frames S and S$^*$, with $\alpha(u) < 1$ for $u \ne 0$ and $\alpha(0) = 1$. The above reciprocity postulate states that also $$r^* = \alpha(u^*)r$$ where $\alpha(u) = \alpha(u^*)$. Combining (4.16) and (4.17), $$r = \alpha(u)r^* = \alpha(u)\alpha(u^*) r.$$ It follows that if $r \ne 0$ then $\alpha(u)\alpha(u^*) = \alpha(u)^2 = 1$, so that $\alpha(u) = 1$. This requires that $u = u^*= 0$, contradicting the initial hypothesis that S and S$^*$ are in relative motion. The existence of a ‘length contraction’ effect respecting Pauli’s reciprocity condition is therefore excluded by [*reductio ad absurdum*]{} (self-contradiction). The length contraction ansatz of (4.3) and (4.8) is therefore incompatible with the above stated reciprocity property of special relativity.
How this universally (until now) accepted length contraction effect results from a misinterpretation of the symbols in the space-time Lorentz transformation has been extensively discussed elsewhere [@JHFSR; @JHFSR1; @JHFSR2]. In conclusion, the ‘relativistic’ derivation of the field equations (4.14) and (4.15) neglects retardation effects and is in fact incompatible with (correctly interpreted [@JHFSR; @JHFSR1; @JHFSR2]) special relativity. That the same result is obtained using the incorrect LW potentials (derived by consideration of pre-relativistic retarded fields) must then be regarded, not as confirmation of the correctness of the formulae, but as purely fortuitous. The ‘present time’ formulae derived from the relativistically-correct retarded potentials in (2.12) are presented in the following section. An alternative ‘relativistic derivation’ of $ {{\rm A}}({\rm LW})^{PT}$, but also assuming retardation, was given by Landau and Lifshitz [@LLLW2]. The retardation condition (2.11) was used to write the temporal component of ${{\rm A}}$, in the rest frame of the point-like source charge as: $${{\rm A}}_0^* = \frac{Q}{c(t-t'_Q)}.$$ It was then noticed that the 4-vector: $${{\rm A}}\equiv \frac{Q u}{x^{ret} \cdot u}$$ where $$x^{ret} \equiv (c(t-t'_Q);\vec{x}_q-\vec{x}_Q(t'))$$ reduces to (4.19) in the rest frame of the source charge. The right side of (4.20) is precisely the retarded LW potential in (2.28) of a point-like charge. Although it is true that (4.20) gives (4.19) in the rest frame of the source charge, the same is true of the different 4-vector potential in (2.12). The relation (4.20) is, however, nothing more than a mathematical curiosity, lacking any physical motivation, whereas the potential in (2.12), equally consistent with (4.19), is the solution of the d’Alembert equations (derived from Maxwell’s equations and the Lorenz condition) for a point-like charge.
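Both statements made about Eq. (4.20) can be checked numerically. The sketch below (an illustrative aid, not part of the original text) evaluates the temporal component of (4.20) on the light cone, with a Minkowski scalar product of signature $(+,-,-,-)$:

```python
import math

c, Q = 1.0, 1.0

def A0_of(beta, rx, ry):
    """Temporal component of Eq. (4.20) for source motion along x,
    evaluated on the light cone, c(t - t'_Q) = r'."""
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    u4 = (gamma * c, gamma * beta * c, 0.0, 0.0)   # 4-velocity u
    rp = math.hypot(rx, ry)                        # r' = |x_q - x_Q(t'_Q)|
    x4 = (rp, rx, ry, 0.0)                         # x_ret of Eq. (4.21)
    dot = x4[0]*u4[0] - x4[1]*u4[1] - x4[2]*u4[2] - x4[3]*u4[3]
    return Q * u4[0] / dot

rx, ry = 0.9, 1.6
rp = math.hypot(rx, ry)

# rest frame: reduces to Eq. (4.19), with c(t - t'_Q) = r'
print(A0_of(0.0, rx, ry), Q / rp)

# moving frame: the retarded LW form Q/[r'(1 - r_hat'.beta_u)] of Eq. (2.28)
beta = 0.55
K = 1.0 - beta * rx / rp
print(A0_of(beta, rx, ry), Q / (rp * K))
```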
The physical meaning and method of derivation of the potential of (2.12), unlike that of (4.20), are therefore quite clear. That the retarded LW potentials and the associated fields could be derived in a ‘relativistic’ calculation in which retardation effects are completely neglected, whereas in the original derivations of Liénard, Wiechert and Heaviside, performed before the advent of special relativity, the (actually spurious) length contraction effect is neglected, should be serious cause for concern. This unease, however, seemed not to be shared by authors of text books, and the pedagogical literature, on classical electromagnetism, throughout the last century. There is now ample experimental verification of the predictions of correctly interpreted [@JHFSR; @JHFSR1; @JHFSR2] special relativity, and that retardation effects do occur in processes where real photons are radiated, so that the corresponding classical fields must also be retarded. The contradiction posed by the absence of one or the other of two essential, but different, physical phenomena in the two different derivations of the Heaviside formulae was clearly stated by Jefimenko [@JEFrr], but the obvious doubt shed by this on the correctness of the formulae and/or the derivations was passed over in silence. In fact, as demonstrated in the present paper, both the original 19th century derivations and the 20th century ‘relativistic’ ones are wrong: the former because the variation of the effective charge density of the moving charge distribution was not taken into account, the latter because the ‘length contraction’ effect on which they are based does not exist. It is proposed in the present paper that the correct relativistic retarded potentials of a point-like charge are those given above in Eq.(2.12). The corresponding electric and magnetic fields, for the case of a charge in uniform motion, are derived in the following section.
In Section 6 the retarded fields of accelerated charges are considered, and compared with those derived from the LW potentials as well as the well-known formulae of Feynman and Jefimenko and some others that have appeared in text books and the pedagogical literature. The RCED 4-vector potential and current differ from those of CEM by the multiplicative factor $\gamma_u$ (see Eq.(2.28) above). This leads to a breakdown of the Gauss law for the electric field of a moving charge [@JHFRSKO; @JHFFT] and covariance-breaking terms in the electrodynamic Maxwell equation (Ampère’s law) [@JHFFT]. In text books on CEM, the validity of the electric field Gauss law for both static and moving source charges is justified by invoking the relativistic length contraction effect of Eq.(4.3) [@PPLC]. If the charge density in a moving frame transforms as the temporal component of a 4-vector $\propto \gamma_u$, and a volume element transforms $\propto (\gamma_u)^{-1}$ due to relativistic length contraction, then the charge within the volume element is Lorentz invariant. Since however, as demonstrated above, the length contraction effect is spurious, the effective charge actually varies as $\gamma_u$, i.e. deviates from its static value as $(u/c)^2$ for small $u$. This effect has been experimentally observed in the vicinity of an electrically neutral superconducting magnet [@Edwards].
****[Retarded electric and magnetic fields of a point-like charge in uniform motion]{}**** ========================================================================================== The electric and magnetic fields corresponding to the 4-vector potential (2.12) are obtained by straightforward application of the definitions of electric and magnetic fields in terms of derivatives of the potential: $$\begin{aligned} \vec{{{\rm E}}} & \equiv & -\vec{\nabla} {{\rm A}}_0- \frac{1}{c}\frac{\partial \vec{{{\rm A}}}}{\partial t}, \\ \vec{{{\rm B}}} & \equiv & \vec{\nabla} \times \vec{{{\rm A}}} \end{aligned}$$ where, without loss of generality, it may be assumed that the electric field is confined to the $x$-$y$ plane, $$\vec{\nabla} \equiv \hat{\imath}\frac {\partial ~}{\partial x_q} + \hat{\jmath}\frac {\partial ~}{\partial y_q}.$$ Unit vectors along the $x$-, $y$- and $z$-axes are denoted as $\hat{\imath}$, $\hat{\jmath}$ and $\hat{k}$. To perform the calculation, the retardation condition $$t' = t-\frac{|\vec{x}_q-\vec{x}_Q(t')|}{c} = t-\frac{r'}{c}$$ must be used to express the derivatives with respect to $t$ in (5.1) in terms of $t'$, since the retarded position of the source charge is a function of $t'$, not of $t$. Assuming that $u$ is constant, (2.12) gives: $$\left. \frac{\partial {{\rm A}}_0}{\partial x_q}\right|_{t} = -\frac{Q \gamma_u}{(r')^2} \left. \frac{\partial r'}{\partial x_q}\right|_{t}.$$ Differentiating the geometrical relation: $$(r')^2 = [x_q-x_Q(t')]^2 + y_q^2$$ with respect to $x_q$ gives $$r'\left. \frac{\partial r'}{\partial x_q}\right|_{t} = (x_q-x_Q(t'))\left(1- \frac{d x_Q(t')}{d t'} \left. \frac{\partial t'}{\partial x_q}\right|_{t}\right).$$ Differentiating (5.3) with respect to $x_q$, $$\left.\frac{\partial t'}{\partial x_q}\right|_{t} = -\frac{1}{c} \left.\frac{\partial r'}{\partial x_q}\right|_{t}.$$ Combining (5.6) and (5.7), rearranging, and noting that $d x_Q(t')/dt' = c \beta_u$ gives $$\left.
\frac{\partial r'}{\partial x_q}\right|_{t} = \frac{x_q-x_Q(t')}{r'(1-\hat{r}' \cdot \vec{\beta}_u)}$$ where $ \hat{r}' \equiv \vec{r}'/r'$. Combining (5.4) and (5.8) $$\left. \frac{\partial {{\rm A}}_0}{\partial x_q}\right|_{t} = -\frac{Q \gamma_u [x_q-x_Q(t')]}{(1-\hat{r}' \cdot \vec{\beta}_u)(r')^3}.$$ An analogous relation is obtained for $\partial {{\cal A}}_0/ \partial y_q$ so that $$-\vec{\nabla}{{\rm A}}_0 = \frac{Q \gamma_u \vec{r}'}{(1-\hat{r}'\cdot \vec{\beta}_u)(r')^3}.$$ Considering now the second term on the right side of (5.1), (2.12) gives $$- \frac{1}{c}\frac{\partial \vec{{{\rm A}}}}{\partial t} = - \frac{\hat{\imath}}{c} \left. \frac{\partial {{\rm A}}_x}{\partial t}\right|_{x_q,y_q} = \frac{Q \gamma_u \vec{\beta}_u}{c(r')^2} \left. \frac{\partial r'}{\partial t'}\right|_{x_q,y_q} \left. \frac{\partial t'}{\partial t}\right|_{x_q,y_q}.$$ Differentiating (5.5) with respect to $t'$: $$r'\left. \frac{\partial r'}{\partial t'}\right|_{x_q,y_q} = -[x_q-x_Q(t')]\frac{d x_Q (t')}{d t'}$$ or $$\left. \frac{\partial r'}{\partial t'}\right|_{x_q,y_q} = -c \hat{r}' \cdot \vec{\beta}_u.$$ Differentiating (5.3) with respect to $t$: $$\left. \frac{\partial t'}{\partial t}\right|_{x_q,y_q} = 1-\frac{1}{c} \left. \frac{\partial r'}{\partial t'}\right|_{x_q,y_q} \times \left. \frac{\partial t'}{\partial t}\right|_{x_q,y_q}$$ Combining (5.13) and (5.14) and rearranging $$\left. \frac{\partial t'}{\partial t}\right|_{x_q,y_q} = \frac{1}{1-\hat{r}' \cdot \vec{\beta}_u}.$$ Combining (5.1),(5.10),(5.11),(5.13) and (5.15) gives, for the retarded RCED electric field: $$\vec{{{\rm E}}}({\rm RCED})^{ret} = \left.\frac{Q \gamma_u}{(1-\hat{r}'\cdot \vec{\beta}_u)}\left[ \frac{[\vec{r}'-\vec{\beta}_u (\vec{r}'\cdot \vec{\beta}_u)]}{(r')^3} \right]\right|_{t' = t'_Q}.$$ Since $\vec{{{\rm A}}} = \hat{\imath}\, {{\rm A}}_x$, $$\vec{\nabla} \times \vec{{{\rm A}}} = -\hat{k} \left.
\frac{\partial {{\rm A}}_x}{\partial y_q}\right|_{t} =\hat{k} \frac{Q \gamma_u \beta_u}{(r')^2} \left. \frac{\partial r'}{\partial y_q}\right|_{t}.$$ Similarly to (5.8) $$\left. \frac{\partial r'}{\partial y_q}\right|_{t} = \frac{y_q}{r'(1-\hat{r}' \cdot \vec{\beta}_u)}$$ so that $$\vec{{{\rm B}}}({\rm RCED})^{ret} = \vec{\nabla} \times \vec{{{\rm A}}} = \left.\frac{Q \gamma_u \vec{\beta}_u \times \vec{r}' } {(r')^3(1-\hat{r}'\cdot \vec{\beta}_u)}\right|_{t' = t'_Q} = \vec{\beta}_u \times \vec{{{\rm E}}}({\rm RCED})^{ret}.$$ Apart from the retarded time argument and an overall factor $1/(1-\hat{r}'\cdot \vec{\beta}_u)$ (the Jacobian of the transformation from $t$ to $t'$, see Eq.(5.15)), Eqs.(5.16) and (5.19) are the same as the formulae for the instantaneous force fields of RCED [@JHFRCED]. For comparison with the Heaviside formulae (4.14) and (4.15), which may be derived from the LW potentials, it is of interest to write $\vec{{{\rm E}}}({\rm RCED})^{ret}$ and $\vec{{{\rm B}}}({\rm RCED})^{ret}$ in the ‘present time’ form. Consider a point-like charge, $Q$, moving with speed $u$ along the $x$-axis (Fig.4). The field point at which the fields are to be specified is denoted by F, the present position of the charge by P and the retarded position, lying on the backward light cone of F, by R. If N is the foot of the normal to RF passing through P, the geometry of Fig.4 gives: $$\begin{aligned} {\rm NF} & = & r'-\beta_u r'\cos \psi' = r'(1-\hat{r}' \cdot \vec{\beta}_u) \nonumber \\ & = & \sqrt{r^2-\beta_u^2(r' \sin \psi')^2} =\sqrt{r^2-\beta_u^2(r \sin \psi)^2} \nonumber \\ & = & r(1- \beta_u^2 \sin^2 \psi)^{\frac{1}{2}} \equiv r f_u.
\end{aligned}$$ Solving the quadratic equation obtained by applying the cosine rule to the triangle RFP: $$(r')^2 = r^2 + \beta_u^2(r')^2 + 2 \beta_u r r' \cos \psi$$ gives the retarded separation between the source charge and field point, $r'$, in terms of the ‘present time’ parameters $r$ and $\psi$: $$r' = r\frac{(\beta_u \cos \psi + f_u)}{1-\beta_u^2}.$$ Also $$\sin \psi' = \frac{r \sin \psi}{r'} = \frac{(1-\beta_u^2) \sin\psi}{\beta_u \cos \psi + f_u}$$ and $$\begin{aligned} \cos \psi' & = & \frac{\beta_u f_u + \cos \psi}{\beta_u \cos \psi + f_u} \\ \hat{r}' & = & \hat{\imath} \cos \psi' + \hat{\jmath} \sin \psi' \\ \hat{r}' \cdot \vec{\beta}_u & = & \beta_u \cos \psi'. \end{aligned}$$ Eqs.(5.20)-(5.26) may now be used to express the retarded fields in terms of ‘present time’ coordinates: $$\begin{aligned} \vec{{{\rm E}}}({\rm RCED})^{PT} & = & \frac{Q (1-\beta_u^2)[ \hat{\imath}( \beta_u f_u + \cos \psi) + \hat{\jmath} \sin \psi]}{r^2 \gamma_u(\beta_u \cos \psi + f_u)^2 f_u}, \\ \vec{{{\rm B}}}({\rm RCED})^{PT} & = & \frac{Q \beta_u \hat{k} \sin \psi}{r^2 \gamma_u^3(\beta_u \cos \psi + f_u)^2 f_u} = \vec{\beta}_u \times \vec{{{\rm E}}}({\rm RCED})^{PT}.\end{aligned}$$ These expressions replace, in relativistic classical electrodynamics, the pre-relativistic Heaviside formulae (4.14) and (4.15). Important differences are that (5.27), unlike (4.14), is not radial (and in consequence does not respect Gauss’ Law) and does not revert to a radial Coulomb field when terms of O($\beta_u^2$) are neglected. The manifestly incorrect physical behaviour of the retarded electric field given by the Heaviside formula (4.14) is evident on comparing it with that given by (5.27). This is done in Figs. 5 and 6 which show curves of ${{\rm E}}_L^{PT}r^2/Q$ and ${{\rm E}}_T^{PT}r^2/Q$, respectively, as a function of $\psi$, where $\vec{{{\rm E}}}^{PT} = \hat{\imath}{{\rm E}}_L^{PT}+\hat{\jmath}{{\rm E}}_T^{PT}$, for different values of $\beta_u$, as given by (4.14) and (5.27).
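The geometrical relations (5.20)-(5.26) can be checked numerically. The sketch below (an illustrative aid, not part of the original text) reconstructs the configuration of Fig. 4 directly and compares it with the closed forms:

```python
import math

c, beta = 1.0, 0.8
r, psi = 1.7, math.radians(70.0)
fu = math.sqrt(1.0 - beta**2 * math.sin(psi)**2)

# closed forms (5.22)-(5.24)
rp = r * (beta * math.cos(psi) + fu) / (1.0 - beta**2)
sinp = (1.0 - beta**2) * math.sin(psi) / (beta * math.cos(psi) + fu)
cosp = (beta * fu + math.cos(psi)) / (beta * math.cos(psi) + fu)

# direct construction of Fig. 4: P at the origin, F at the present separation r,
# R a distance beta*r' behind P along the direction of motion (the x-axis),
# since the charge covers beta*r' during the light travel time r'/c
Fx, Fy = r * math.cos(psi), r * math.sin(psi)
Rx, Ry = -beta * rp, 0.0
rp_direct = math.hypot(Fx - Rx, Fy - Ry)
print(rp - rp_direct, sinp - Fy / rp_direct, cosp - (Fx - Rx) / rp_direct)
```

All three differences vanish, confirming that the closed forms place R on the backward light cone of F.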
Elementary physical considerations require that if $\psi < 90^{\circ}$ (i.e. the source charge is approaching the field point) the magnitude of the retarded field must be less than when $\psi > 90^{\circ}$ and the charge is receding from the field point. This is because in the former case the source charge was further from the field point than its present-time position when the retarded field was emitted, and closer to it in the latter case. Because the strength of the field is inversely proportional to the square of the source-field point separation at the time of emission of the retarded field, the magnitude of the field must be greater at an angle $\psi_+ = 90^{\circ} + \alpha$ than at an angle $\psi_- = 90^{\circ} - \alpha$ where $\alpha > 0$. The fields given by (5.27) demonstrate this property, whereas $\vec{{{\rm E}}}({\rm H})^{PT}$ given by (4.14) gives symmetric behaviour for ${{\rm E}}_T$: $${{\rm E}}_T({\rm H})^{PT}(\psi_+) = {{\rm E}}_T({\rm H})^{PT}(\psi_-)$$ and antisymmetric behaviour for ${{\rm E}}_L$: $${{\rm E}}_L({\rm H})^{PT}(\psi_+) = -{{\rm E}}_L({\rm H})^{PT}(\psi_-).$$ These symmetry properties are those of instantaneous [@JHFRCED; @JHFRSKO], not retarded, force fields[^4]. As explained in Section 2 above, the antisymmetry of the $E_L({\rm H})^{PT}$ curves in Fig. 5 about $\psi = 90^{\circ}$, and the symmetry of the $E_T({\rm H})^{PT}$ curves in Fig. 6 about $\psi = 90^{\circ}$, are a consequence of the neglect of the velocity dependence of the source charge density in deriving the LW potentials of Eq.(2.28).
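The symmetry properties (5.29) and (5.30) of the Heaviside field follow directly from (4.14) and may be illustrated with a short sketch (illustrative only, not part of the original text):

```python
import math

def heaviside_EL_ET(beta, psi, Q=1.0, r=1.0):
    """Longitudinal and transverse components of the Heaviside field (4.14)."""
    f3 = (1.0 - beta**2 * math.sin(psi)**2) ** 1.5
    E = Q * (1.0 - beta**2) / (r**2 * f3)   # |E| along r_hat; 1/gamma_u^2 = 1 - beta^2
    return E * math.cos(psi), E * math.sin(psi)

beta, alpha = 0.7, math.radians(25.0)
ELp, ETp = heaviside_EL_ET(beta, math.pi / 2 + alpha)   # psi_+ = 90 deg + alpha
ELm, ETm = heaviside_EL_ET(beta, math.pi / 2 - alpha)   # psi_- = 90 deg - alpha
print(ETp - ETm, ELp + ELm)   # Eqs. (5.29) and (5.30): both combinations vanish
```

Since $\sin\psi$ and the factor $f_u$ are unchanged under $\psi \rightarrow 180^{\circ} - \psi$ while $\cos\psi$ changes sign, the field magnitude of (4.14) is the same at $\psi_+$ and $\psi_-$, in conflict with the retardation argument above.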
****[Retarded electric and magnetic fields of an accelerated point-like charge: the RCED, Liénard-Wiechert, Feynman and Jefimenko equations]{}**** ================================================================================================================================================== The generalisation of the RCED formulae (5.16) and (5.19) to the case of a non-constant value of the source charge velocity $\vec{u}$ is straightforward. The details of the calculation may be found in Ref. [@JHFFT]. Including the terms generated by the acceleration of the source charge gives: $$\begin{aligned} \vec{{{\rm E}}}({\rm RCED})^{ret} & = & \left. \left\{\frac{Q \gamma_u}{K}\left[ \frac{[\hat{r}'-\vec{\beta}_u (\hat{r}'\cdot \vec{\beta}_u)]}{(r')^2} +\frac{[\gamma_u^2 \beta_u \dot{\beta}_u(\hat{r}'- \vec{\beta}_u)- \dot{\vec{\beta}_u}]} {c r'} \right]\right\}\right|_{t' = t'_Q}, \\ \vec{{{\rm B}}}({\rm RCED})^{ret} & = & \left. \left\{ \frac{Q \gamma_u (\vec{\beta}_u \times \hat{r}')}{K} \left[ \frac{1}{(r')^2} + \frac{\gamma_u^2 \dot{\beta}_u}{c \beta_u r'} \right]\right\}\right|_{t' = t'_Q} \end{aligned}$$ where $$K \equiv (1- \hat{r}' \cdot \vec{\beta}_u),~~~\dot{\beta}_u \equiv |\dot{\vec{\beta}_u}|.$$ It follows from (6.1) that $$\vec{\beta}_u \times \vec{{{\rm E}}}({\rm RCED})^{ret} = \left. \left\{ \frac{Q \gamma_u (\vec{\beta}_u \times \hat{r}')}{K} \left[ \frac{1}{(r')^2} + \frac{\gamma_u^2\beta_u \dot{\beta}_u}{c r'} \right] -\frac{Q\gamma_u(\vec{\beta}_u \times \dot{\vec{\beta}_u})}{Kcr'} \right\}\right|_{t' = t'_Q}$$ and $$\hat{r}'\times \vec{{{\rm E}}}({\rm RCED})^{ret} = \left. 
\left\{ \frac{Q \gamma_u (\vec{\beta}_u \times \hat{r}')}{K} \left[ \frac{\hat{r}'\cdot \vec{\beta}_u}{(r')^2} + \frac{\gamma_u^2\beta_u \dot{\beta}_u}{c r'} \right] -\frac{Q\gamma_u( \hat{r}' \times \dot{\vec{\beta}_u})}{Kcr'} \right\}\right|_{t' = t'_Q}.$$ The relation $\vec{{{\rm B}}}({\rm RCED})^{ret} = \vec{\beta}_u \times \vec{{{\rm E}}}({\rm RCED})^{ret}$ then holds only if $\dot{\beta}_u =0$, as in Eq.(5.19) above, while, in all cases, $\vec{{{\rm B}}}({\rm RCED})^{ret} \ne \hat{r}' \times \vec{{{\rm E}}}({\rm RCED})^{ret}$. These formulae may be compared with those derived by inserting the LW potentials of Eq.(2.28) into the defining equations (5.1) and (5.2) of the electric and magnetic fields, making use of the Jacobian of (5.15) to relate derivatives with respect to $t$ and $t'$. This calculation is given in Appendix A. It is found that: $$\begin{aligned} \vec{{{\rm E}}}({\rm LW})^{ret} & = & \left.\left\{\frac{Q}{K^3}\left[\frac{\hat{r}'- \vec{\beta}_u}{\gamma_u^2 (r')^2} + \frac{\hat{r}' \times [(\hat{r}'- \vec{\beta}_u) \times \dot{\vec{\beta}_u}]}{c r'}\ \right]\right\}\right|_{t' = t'_Q}, \\ \vec{{{\rm B}}}({\rm LW})^{ret} & = & \left. \left\{\frac{Q (\vec{\beta}_u \times \hat{r}')}{K^3}\left[\frac{1} {\gamma_u^2(r')^2} +\frac{\dot{\beta}_u(1-\hat{r}' \cdot \vec{\beta}_u) + \beta_u(\hat{r}' \cdot \dot{\vec{\beta}_u})}{c r' \beta_u}\right] \right\} \right|_{t' = t'_Q}. \end{aligned}$$ The markedly different angular dependence of these fields to that of the RCED fields of (6.1) and (6.2) may be noticed. Eq.(6.6) gives $$\vec{\beta}_u \times \vec{{{\rm E}}}({\rm LW})^{ret} = \left.
\left\{\frac{Q}{K^3} \left[(\vec{\beta}_u \times \hat{r}')\left(\frac{1}{\gamma_u^2(r')^2} +\frac{\hat{r}' \cdot \dot{\vec{\beta}_u}}{c r'}\right) \right] \right\} \right|_{t' = t'_Q},$$ $$\hat{r}' \times \vec{{{\rm E}}}({\rm LW})^{ret} = \vec{{{\rm B}}}({\rm LW})^{ret}.$$ The relation $\vec{{{\rm B}}}({\rm LW})^{ret} = \vec{\beta}_u \times \vec{{{\rm E}}}({\rm LW})^{ret}$ then holds only if $\dot{\beta}_u = 0$. The RCED retarded fields (6.1) and (6.2) are now compared with those obtained from formulae given by Feynman, Jefimenko and other authors. The consistency of the latter fields with the LW ones of (6.6) and (6.7) will also be considered. In the ‘Feynman Lectures on Physics’ compact formulae for the retarded fields of an accelerated point-like charge are given, but not derived [@FeynFMC1; @FeynFMC2]. In the notation of the present paper they are $$\begin{aligned} \vec{{{\rm E}}}({\rm Feyn})^{ret} & = & Q \left. \left[ \frac{\hat{r}'}{(r')^2}+\frac{r'}{c}\frac{d ~}{d t}\left( \frac{\hat{r}'}{(r')^2}\right) + \frac{1}{c^2} \frac{d^2 \hat{r}'}{d t^2} \right]\right|_{t' = t'_Q}, \\ \vec{{{\rm B}}}({\rm Feyn})^{ret} & = & \left. \hat{r}'\right|_{t' = t'_Q} \times \vec{{{\rm E}}}({\rm Feyn})^{ret}. \end{aligned}$$ Since (see Eq.(5.3)) $r'$ is a function of the retarded time $t'$, not of the present time $t$, it is necessary to introduce the Jacobian of Eq.(5.15) in order to evaluate the derivatives in Eq.(6.10). Although Feynman uses the symbol for a total time derivative rather than a partial one in Eq.(6.10), it is clear from the definitions of the fields in terms of potentials in (5.1) and (5.2) that the time derivatives should be understood as partial ones for a fixed position of the field point $\vec{x}_q$ as in (5.15). The straightforward but somewhat lengthy calculation, which is analogous to that shown in the previous section, leading to Eqs.(5.16) and (5.19), is presented in Appendix B.
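The relations (6.8) and (6.9) quoted for the LW fields can be checked numerically. The sketch below (an illustrative aid, not part of the original text) takes the acceleration collinear with the velocity, as in the one-dimensional motion considered in Section 5:

```python
import math

# Small 3-vector helpers (tuples) used throughout the sketch
def dot(u, v): return sum(a * b for a, b in zip(u, v))
def cross(u, v): return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])
def comb(a, u, b, v): return tuple(a*x + b*y for x, y in zip(u, v))  # a*u + b*v

Q, c, rp = 1.0, 1.0, 2.3
bv = (0.6, 0.0, 0.0)                       # beta_u
bd = (0.3, 0.0, 0.0)                       # beta_u_dot, parallel to beta_u
rhat = (0.48, 0.64, 0.6)                   # unit vector r_hat'
K = 1.0 - dot(rhat, bv)
inv_gamma2 = 1.0 - dot(bv, bv)             # 1/gamma_u^2

# Eq. (6.6)
rmb = comb(1.0, rhat, -1.0, bv)            # r_hat' - beta_u
E = comb(Q * inv_gamma2 / (K**3 * rp**2), rmb,
         Q / (K**3 * c * rp), cross(rhat, cross(rmb, bd)))

# Eq. (6.7)
beta, bdot = math.sqrt(dot(bv, bv)), math.sqrt(dot(bd, bd))
B = tuple((Q / K**3) * (inv_gamma2 / rp**2
          + (bdot * K + beta * dot(rhat, bd)) / (c * rp * beta)) * x
          for x in cross(bv, rhat))

rxE = cross(rhat, E)                       # Eq. (6.9): equals B
bxE_68 = tuple((Q / K**3) * (inv_gamma2 / rp**2 + dot(rhat, bd) / (c * rp)) * x
               for x in cross(bv, rhat))   # right side of Eq. (6.8)
print(max(abs(B[i] - rxE[i]) for i in range(3)),
      max(abs(cross(bv, E)[i] - bxE_68[i]) for i in range(3)))
```

For this collinear configuration $\vec{\beta}_u \times \dot{\vec{\beta}}_u = 0$ and both printed differences vanish.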
The following formula for the electric field is obtained: $$\begin{aligned} \vec{{{\rm E}}}({\rm Feyn})^{ret} & = & Q \left\{\frac{\hat{r}'}{(r')^2} + \frac{1}{K} \frac{[3 \hat{r}'(\hat{r}'\cdot \vec{\beta}_u)-\vec{\beta}_u]}{(r')^2}\right. \nonumber \\ & & + \frac{1}{K^2}\left[ \frac{\hat{r}'[3 (\hat{r}'\cdot \vec{\beta}_u)^2 -\beta_u^2] - 2 \vec{\beta}_u (\hat{r}'\cdot \vec{\beta}_u)}{(r')^2}+ \frac{[\hat{r}'(\hat{r}' \cdot \dot{\vec{\beta}_u}) - \dot{\vec{\beta}_u}]}{cr'}\right] \nonumber \\ & & + \left. \left. \frac{[\hat{r}'(\hat{r}'\cdot \vec{\beta}_u)- \vec{\beta}_u]}{K^3} \left[ \frac{(\hat{r}'\cdot \vec{\beta}_u)^2 -\beta_u^2}{(r')^2} +\frac{\hat{r}' \cdot \dot{\vec{\beta}_u}}{cr'} \right]\right\}\right|_{t' = t'_Q}. \end{aligned}$$ Collecting terms on the right side of (6.12) proportional to $\hat{r}'$, $\vec{\beta}_u$ and $\dot{\vec{\beta}_u}$ recovers, together with (6.11), the LW formulas (6.6) and (6.7). A formula for the retarded potentials, similar to Eq.(2.8) above, is obtained in Ref. [@JPS] by using Green functions to solve the inhomogeneous d’Alembert equations (2.6) and (2.7). However, subsequently, the usual mistake is made of neglecting the motion-dependence of the charge density, as explained in Section 2. After performing the spatial integration, instead of replacing $r'(t')$ in the argument of the $\delta$-function by $r'(t'_Q)$ (see Eq.(2.10) above), the appropriate retarded position of the point-like source charge at time $t$, the same functional dependence on $t'$ is assumed as before the spatial integration and the formula (2.18) is then used to transform the argument of the $\delta$-function, leading to the spurious retardation factor $1/K$ of the LW potentials. In this way the formula (19) of Ref. [@JPS] was obtained, which was then shown to give Feynman’s formula, Eq. (6.10), above. It was also correctly stated (but not demonstrated) that the same formula was equivalent to the LW field of Eq. (6.6) above.
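The statement that (6.12) collects into the LW result can be confirmed numerically; the sketch below (illustrative only, not part of the original text) evaluates (6.12) and (6.6) for arbitrary, non-collinear velocity and acceleration vectors:

```python
# Term-by-term evaluation of (6.12) compared with the LW field (6.6).
def dot(u, v): return sum(a * b for a, b in zip(u, v))
def cross(u, v): return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

Q, c, rp = 1.0, 1.0, 1.9
rhat = (2.0/3.0, -1.0/3.0, 2.0/3.0)        # unit vector r_hat'
bv = (0.30, -0.20, 0.10)                   # beta_u
bd = (-0.10, 0.25, 0.05)                   # beta_u_dot (not collinear with beta_u)
a, b2, rb = dot(rhat, bv), dot(bv, bv), dot(rhat, bd)
K = 1.0 - a

# Eq. (6.12), the four groups of terms in powers of 1/K
E_feyn = tuple(Q * (r / rp**2
               + (3*a*r - b) / (K * rp**2)
               + ((r*(3*a**2 - b2) - 2*a*b) / rp**2 + (r*rb - d) / (c*rp)) / K**2
               + ((a*r - b) / K**3) * ((a**2 - b2) / rp**2 + rb / (c*rp)))
               for r, b, d in zip(rhat, bv, bd))

# Eq. (6.6), with 1/gamma_u^2 = 1 - beta_u^2
rmb = tuple(r - b for r, b in zip(rhat, bv))
trip = cross(rhat, cross(rmb, bd))
E_lw = tuple((Q / K**3) * ((1.0 - b2) * rmb[i] / rp**2 + trip[i] / (c*rp))
             for i in range(3))

print(max(abs(E_feyn[i] - E_lw[i]) for i in range(3)))   # numerically zero
```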
The Jefimenko formulae for the fields of an accelerated charge distribution are  [@Jefimenko]: $$\begin{aligned} \vec{{{\rm E}}}({\rm Jefi})^{ret} & = & \int \left\{\hat{r}' \left[\frac{[\rho]}{(r')^2}+ \frac{1}{cr'} \frac{\partial[\rho]}{\partial t}\right]-\frac{1}{c^2r'} \frac{\partial[\vec{{{\it J}}}]}{\partial t}\right\} d^3 x_{{{\it J}}}, \\ \vec{{{\rm B}}}({\rm Jefi})^{ret} & = & \frac{1}{c}\int \left[\frac{[\vec{{{\it J}}}]}{(r')^2}+ \frac{1}{cr'} \frac{\partial[\vec{{{\it J}}}]}{\partial t}\right] \times \hat{r}' d^3 x_{{{\it J}}}. \end{aligned}$$ The square brackets around the charge density $\rho$ and the non-relativistic current density $\vec{{{\it J}}}$ indicate that they are evaluated at the retarded time $t' = t-r'/c$, as is also the spatial separation $r'$ of the current element from the field point. The volume element, $d^3 x_{{{\it J}}}$, is also specified at the retarded time $t'$. Important differences from the RCED, LW and Feynman formulae are that the time derivatives act only on the charge and current densities, not on $r'$, and that, as compared to the Feynman formula, only first order time derivatives appear. However a time derivative of $\vec{r}'$ is implicit in the definition of the current $\vec{{{\it J}}}$. In order to discuss the Jefimenko equations for the case of a point-like charge, it will be found convenient to explicitly impose the retardation condition by including an integration over the retarded time $t'$ together with the corresponding $\delta$-function as in Eq.(2.8). Indeed, much confusion about, and misinterpretation of, formulae for retarded fields result from not properly taking into account integrations over both space and time. This must be done to correctly describe the essential physical properties of the problem under consideration —a spatially extended distribution of charge[^5] in motion that is probed, in time, by the backward light cone of the field point. 
As will be seen, attempts to simplify formulae by omitting time integrals, and the associated $\delta$-functions, as in (6.13) and (6.14), and in many text-book treatments of retarded potentials, often lead to erroneous results. Specifying explicitly the retardation condition, (6.13) and (6.14) are written as: $$\begin{aligned} \vec{{{\rm E}}}({\rm Jefi})^{ret} & = & \int dt' \int \left\{ \left[\frac{[\rho(\vec{x}_{{{\it J}}}(t'),t')]} {(r')^2}+ \frac{1}{cr'} \frac{\partial[\rho(\vec{x}_{{{\it J}}}(t'),t')]}{\partial t}\right]\hat{r}'\right. \nonumber \\ & & \left. -\frac{1}{c^2r'} \frac{\partial[\vec{{{\it J}}}(\vec{x}_{{{\it J}}}(t'),t')]}{\partial t}\right\} \delta(t' +\frac{r'(t')}{c}-t) d^3 x_{{{\it J}}}(t'), \\ \vec{{{\rm B}}}({\rm Jefi})^{ret} & = & \int dt' \int \frac{1}{c} \left[\frac{[\vec{{{\it J}}}(\vec{x}_{{{\it J}}}(t'),t')]}{(r')^2} \right. \nonumber \\ & & \left. + \frac{1}{cr'} \frac{\partial[\vec{{{\it J}}}(\vec{x}_{{{\it J}}}(t'),t')]}{\partial t}\right] \times \hat{r}' \delta(t' +\frac{r'(t')}{c}-t) d^3 x_{{{\it J}}}(t'). \end{aligned}$$ A point-like charge $Q$ has, in non-relativistic approximation, the following charge and current densities: $$\begin{aligned} \rho_Q(\vec{x}_{{{\it J}}}(t'),t') = Q \delta(\vec{x}_{{{\it J}}}(t')-\vec{x} _Q(t')), \\ \vec{{{\it J}}}_Q(\vec{x}_{{{\it J}}}(t'),t') = Q \vec{u}(t')\delta(\vec{x}_{{{\it J}}}(t')-\vec{x} _Q(t')). \end{aligned}$$ Substituting (6.17) and (6.18) into (6.15) and (6.16) and performing the spatial integrations gives: $$\begin{aligned} \vec{{{\rm E}}}({\rm Jefi})^{ret} & = & Q \int dt' \left[\frac{\hat{r}'} {(r')^2} -\frac{1}{c^2r'} \frac{\partial \vec{u}(t')}{\partial t}\right] \delta(t'-t'_Q) \nonumber \\ & = & \left .\left . Q \left[\frac{\hat{r}'}{(r')^2} -\frac{1}{c^2r'}\frac{\partial t'} {\partial t}\right|_{\vec{x}_q} \frac{d \vec{u}(t')}{d t'}\right]\right|_{t'= t'_Q} \nonumber \\ & = & \left. 
Q \left[\frac{\hat{r}'}{(r')^2} -\frac{1}{K c^2r'} \frac{d\vec{u}(t')}{d t'}\right]\right|_{t' = t'_Q}, \\ \vec{{{\rm B}}}({\rm Jefi})^{ret} & = & Q \int dt' \left[\frac{\vec{u}(t')}{c (r')^2} + \frac{1}{c^2r'} \frac{\partial \vec{u}(t')}{\partial t}\right] \times \hat{r}' \delta(t'-t'_Q) \nonumber \\ & = & Q\left\{ \left[\frac{\vec{u}(t')}{ c (r')^2} \left. \left.+ \frac{1}{c^2r'}\frac{\partial t'}{\partial t}\right|_{\vec{x}_q} \frac{d \vec{u}(t')}{d t'}\right] \times \hat{r}' \right\} \right|_{t' = t'_Q} \nonumber \\ & = & Q\left\{ \left[\frac{\vec{u}(t')}{c (r')^2} \left.+ \frac{1}{Kc^2r'} \frac{d\vec{u}(t')}{d t'}\right] \times \hat{r}' \right\} \right|_{t'= t'_Q} \end{aligned}$$ where $$t'_Q \equiv t - \frac{r'(t'_Q)}{c}.$$ For a uniformly moving charge, in contrast to the RCED, LW and Feynman equations, the Jefimenko equations therefore predict the same (Coulombic) electric field as in the electrostatic case. In a paper on time-dependent generalisations of the Biot and Savart and Coulomb laws where the Jefimenko equations were extensively discussed [@GH], it was claimed that the Jefimenko, Liénard-Wiechert and Feynman formulae for the retarded fields are all consistent with each other. The arguments given in support of this conclusion are critically examined below, but first the claim of the authors of Ref. [@GH] to [*derive*]{} the Jefimenko equations from the defining formulae (5.1) and (5.2) of electric and magnetic fields and retarded potentials is considered. The assumed form of the potentials in Ref. [@GH], in the notation and choice of units of the present paper, is: $$\begin{aligned} {{\rm A}}_0 & = & \int \frac{[\rho]}{r'}d \tau, \\ \vec{{{\rm A}}} & = & \int \frac{[\vec{{{\it J}}}]}{r'}d \tau. \end{aligned}$$ The volume element $d \tau \equiv d^3x_{{{\it J}}}(t')$ and the quantities in square brackets, as well as the distance $r'$ between the volume element and the field point are all specified at the retarded time $t' = t-r'/c$. 
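The implicit definition (6.21) of the retarded time, and the factor $\partial t'/\partial t\vert_{\vec{x}_q} = 1/K$ used in passing to the last lines of (6.19) and (6.20), are easy to check numerically. The following sketch (standard-library Python only; the uniform trajectory $x_Q(t') = vt'$ and all numerical values are illustrative assumptions) solves $t'_Q = t - r'(t'_Q)/c$ by fixed-point iteration, which converges for $v < c$, and verifies the $1/K$ factor by a central finite difference:

```python
import math

def tp_ret(t, xq, yq, v, c=1.0):
    """Retarded time t'_Q solving t' = t - r'(t')/c for x_Q(t') = v*t'.

    The map t' -> t - r'(t')/c is a contraction for v < c, so plain
    fixed-point iteration converges."""
    tp = t
    for _ in range(200):
        tp = t - math.hypot(xq - v * tp, yq) / c
    return tp

t, xq, yq, v, c, h = 0.0, 1.5, 1.0, 0.7, 1.0, 1e-6
tQ = tp_ret(t, xq, yq, v)
rx, ry = xq - v * tQ, yq               # retarded separation vector r'
rp = math.hypot(rx, ry)
assert abs(tQ - (t - rp / c)) < 1e-12  # Eq. (6.21) is satisfied

K = 1.0 - (rx / rp) * (v / c)          # K = 1 - r_hat'.beta
dtp_dt = (tp_ret(t + h, xq, yq, v) - tp_ret(t - h, xq, yq, v)) / (2 * h)
assert abs(dtp_dt - 1.0 / K) < 1e-6    # dt'/dt at fixed x_q equals 1/K
```

The same iteration is applicable to any subluminal trajectory, not just uniform motion.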
Note that, unlike the solutions of the d’Alembert equations in (2.8), there is no integral over the retarded time in these equations and also they do not contain the $1/K$ factor of the LW potentials. Substitution of (6.22) and (6.23) into (5.1) gives[^6]: $$\vec{{{\rm E}}}({\rm GH1})^{ret} = -\int \left[ \frac{1}{r'}\vec{\nabla}[\rho] + [\rho]\vec{\nabla}\left(\frac{1}{r'}\right)+ \frac{1}{c^2 r'}\frac{\partial [\vec{{{\it J}}}]} {\partial t} + \frac{[\vec{{{\it J}}}]}{c^2}\frac{\partial ~} {\partial t}\left(\frac{1}{r'}\right)\right]d \tau.$$ This already disagrees with the corresponding equation (21) of Ref. [@GH], where the last term on the right side of (6.24) is omitted. Indeed, this term does not vanish but the last factor in it is: $$\left. \frac{\partial ~}{\partial t}\left(\frac{1}{r'}\right)\right|_{\vec{x}_q} = \left. \frac{\partial t'}{\partial t}\right|_{\vec{x}_q} \left.\frac{\partial ~} {\partial t'}\left(\frac{1}{r'}\right)\right|_{\vec{x}_q} = -\frac{1}{(r')^2} \left. \frac{\partial t'}{\partial t}\right|_{\vec{x}_q} \left.\frac{\partial r'} {\partial t'}\right|_{\vec{x}_q} = \frac{c(\hat{r'} \cdot \vec{\beta_u})}{K (r')^2}$$ where (5.13) and (5.15) have been used. This result is implicit in the later Eq.(44) of Ref. [@GH], so the omission of the term in Eq.(21) of this reference is hard to understand. The retarded density $[\rho]$ may be a function of the retarded time $t'$ and the position $\vec{x}_{{{\it J}}}(t')$ of the volume element $d \tau$, but does not depend on the position $\vec{x}_q$ of the field point. Therefore $\vec{\nabla}[\rho]$ vanishes. More formally[^7] $$\begin{aligned} \vec{\nabla}[\rho] & = & \left. \hat{\imath} \frac{\partial [\rho]}{\partial x_q}\right|_{t} +~.~.~. \nonumber \\ & = & \left. \hat{\imath} \frac{\partial t'}{\partial x_q}\right|_{t} \frac{d [\rho]}{d t'} +~.~.~. \nonumber \\ & = & \left. \left. \left. 
\hat{\imath} \frac{\partial t'}{\partial x_q}\right|_{t} \frac{\partial t} {\partial t'}\right|_{t} \frac{\partial [\rho]}{\partial t}\right|_{t} +~.~.~. \nonumber \\ & = & 0 \end{aligned}$$ to be compared with the relation $$\vec{\nabla}[\rho] = -\frac{1}{c}\frac{\partial [\rho]}{\partial t}\hat{r}'$$ given in Ref. [@GH]. The mathematical error leading to the incorrect equation (6.27) is a subtle one concerning the precise definitions of partial derivatives. Combining Eqs.(5.7) and (5.8) gives: $$\left. \frac{\partial t'}{\partial x_q}\right|_{t} = -\frac{1}{c}\frac{(x_q - x_Q)}{K r'}$$ so that the second line of (6.26) may be written as: $$\vec{\nabla}[\rho] = -\frac{\hat{\imath}}{c}\frac{(x_q - x_Q)}{K r'}\frac{d [\rho]}{d t'} +~.~.~.~~.$$ Now it seems plausible, in view of Eq.(5.15) above to make the substitution $$\frac{1}{K} \frac {d ~}{d t'} \rightarrow \frac{\partial ~}{\partial t}$$ in (6.29) thus yielding (6.27). But all spatial partial derivatives in (5.1) and hence in (6.26) and (6.29) are evaluated [*at constant*]{} $t$ whereas the operator relation of (6.30) is (see Eq.(5.15)) valid only [*at constant*]{} $\vec{x}_q$, and so is inapplicable in relation to derivatives with respect to $x_q$, $y_q$ or $z_q$. In fact the spurious relation (6.27) was also used by Jefimenko in the original derivation of Eq.(6.13) [@Jefimenko]. Maxwell’s equations and Eq.(6.23), called the ‘Vector Identity’ V-33 [@JefiVI], are used to obtain (6.19) from an integral vector identity: the ‘Vector wave field theorem’ V-31 [@JefiVI]. 
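The relation $\partial t'/\partial x_q\vert_t = -(x_q - x_Q)/(cKr')$ obtained above from Eqs.(5.7) and (5.8), on which the distinction between constant-$t$ and constant-$\vec{x}_q$ derivatives hinges, can be confirmed symbolically by implicit differentiation of the retardation condition. A minimal sketch with sympy, assuming the illustrative special case of uniform motion $x_Q(t') = vt'$ along the $x$-axis:

```python
import sympy as sp

t, tp, x, y, v, c = sp.symbols('t tp x y v c', positive=True)
rp = sp.sqrt((x - v*tp)**2 + y**2)       # r'(t') for x_Q(t') = v t'
F = c*(t - tp) - rp                      # retardation condition F = 0
# implicit differentiation at constant t: dt'/dx_q = -F_x / F_t'
dtp_dx = -sp.diff(F, x) / sp.diff(F, tp)
K = 1 - v*(x - v*tp)/(c*rp)              # K = 1 - r_hat'.beta (beta along x)
claimed = -(x - v*tp)/(c*K*rp)           # the quoted expression for dt'/dx_q
assert sp.simplify(dtp_dx - claimed) == 0
```

The same implicit-differentiation step, with $t'$ varied instead of $x_q$, reproduces $\partial t'/\partial t\vert_{\vec{x}_q} = 1/K$ of Eq.(5.15).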
The term containing $\vec{\nabla}(1/r')$ in Eq.(6.24) is the same, up to a constant multiplicative factor, as one which has been previously calculated in Section 5 (Eq.(5.10)) so that: $$-\vec{\nabla}\left(\frac{1}{r'}\right) = \frac{\hat{r}'}{K(r')^2}.$$ Combining (6.24), (6.25), (6.26) and (6.31) gives: $$\vec{{{\rm E}}}({\rm GH1})^{ret} = \int \left[\frac{[\rho]\hat{r}'-(\hat{r}'\cdot \beta_u)[\vec{{{\it J}}}]} {K(r')^2} - \frac{1}{c^2 r'}\frac{\partial [\vec{{{\it J}}}]} {\partial t}\right]d \tau.$$ Note that the first term on the right side of (6.32) differs from the corresponding one in Jefimenko’s formula by a factor $1/K$. This factor was missed in the calculation of Ref. [@GH]. In summary, the claimed-to-be-derived Jefimenko formula, Eq.(19) of Ref. [@GH], is missing a factor $1/K$ on the first term; the second term vanishes, and the fourth term in (6.24) (the second in (6.32)) is omitted. Indeed, only the last term of Eq.(19) of Ref. [@GH] is correct. Combining (5.2) and (6.23) gives: $$\vec{{{\rm B}}}({\rm GH1})^{ret} = \frac{1}{c}\int \left[\vec{\nabla} \times \frac{[\vec{{{\it J}}}]}{r'} - [\vec{{{\it J}}}] \times \vec{\nabla}\left(\frac{1}{r'}\right)\right]d \tau.$$ Choosing $[\vec{{{\it J}}}]$ parallel to the $x$-axis, $$\vec{\nabla} \times [\vec{{{\it J}}}] = -\hat{k}\left. \frac{\partial |\vec{{{\it J}}}|}{\partial y_q} \right |_t = -\hat{k}\left.\left.\left. \frac{\partial t'}{\partial y_q} \right |_t \frac{\partial t}{\partial t'} \right |_t \frac{\partial |\vec{{{\it J}}}|}{\partial t}\right |_t = 0$$ whereas the authors of Ref. [@GH] state that $$\vec{\nabla} \times [\vec{{{\it J}}}] = \frac{1}{c}\frac{\partial [\vec{{{\it J}}}]}{\partial t} \times \hat{r}'.$$ This results from a similar misuse of partial derivatives to that described above in connection with Eq.(6.27). 
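The result $-\vec{\nabla}(1/r') = \hat{r}'/(K(r')^2)$, in which the gradient is taken at constant $t$ and therefore picks up a retardation contribution through $t'$, can also be spot-checked numerically. A sketch in standard-library Python, assuming a uniformly moving charge; the velocity and field-point values are illustrative:

```python
import math

def tp_ret(t, xq, yq, v, c=1.0):
    """Retarded time for x_Q(t') = v*t', by fixed-point iteration."""
    tp = t
    for _ in range(200):
        tp = t - math.hypot(xq - v*tp, yq) / c
    return tp

def inv_rp(t, xq, yq, v):
    """1/r' evaluated at the retarded time, as a function of (t, x_q)."""
    tp = tp_ret(t, xq, yq, v)
    return 1.0 / math.hypot(xq - v*tp, yq)

t, xq, yq, v, c, h = 0.0, 1.0, 1.0, 0.6, 1.0, 1e-6
tp = tp_ret(t, xq, yq, v)
rx, ry = xq - v*tp, yq
rp = math.hypot(rx, ry)
K = 1.0 - v*rx/(c*rp)                        # K = 1 - r_hat'.beta
# numerical gradient of 1/r' at constant t
gx = (inv_rp(t, xq+h, yq, v) - inv_rp(t, xq-h, yq, v)) / (2*h)
gy = (inv_rp(t, xq, yq+h, v) - inv_rp(t, xq, yq-h, v)) / (2*h)
# -grad(1/r') = r_hat'/(K r'^2), component by component
assert abs(-gx - rx/(K*rp**3)) < 1e-5
assert abs(-gy - ry/(K*rp**3)) < 1e-5
```

Omitting the $t'$-dependence of $r'$ in the finite difference (i.e., differencing with $t'$ frozen) would instead give $\hat{r}'/(r')^2$, which is the source of the missing $1/K$ discussed above.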
Eqs.(6.33),(6.31) and (6.34) give $$\vec{{{\rm B}}}({\rm GH1})^{ret} = \int \frac{[\vec{{{\it J}}}] \times \hat{r}'}{c K (r')^2} d \tau$$ which differs from the Jefimenko equation (6.15) by an overall factor $1/K$ and the absence of the $\partial [\vec{{{\it J}}}]/\partial t$ term. Again, the ‘derivation’ of the Jefimenko equation in Ref. [@GH], based now on the incorrect formula (6.35), is fallacious. Jefimenko [@Jefimenko] also assumed this formula in order to derive the second term on the right side of (6.14). In Section IV of Ref. [@GH] it is claimed that the LW fields of Eqs.(6.4) and (6.5) are derived from the Jefimenko formulae. However, this derivation starts not from the Jefimenko formulae (6.13) and (6.14) but instead from the different equations [^8] (given here the label ‘GH2’): $$\begin{aligned} \vec{{{\rm E}}}({\rm GH2})^{ret} & = & \int \left[\frac{[\rho]\hat{r}'}{(r')^2}+ \frac{\partial ~}{\partial t}\left(\frac{[\rho]\hat{r}'}{cr'}\right) - \frac{\partial ~}{\partial t}\left(\frac{[\vec{{{\it J}}}]}{c^2r'}\right)\right] d^3 x_{{{\it J}}}, \\ \vec{{{\rm B}}}({\rm GH2})^{ret} & = & \int \left[\frac{[\vec{{{\it J}}}] \times \hat{r}'}{c(r')^2}+ \frac{\partial ~}{\partial t}\left(\frac{[\vec{{{\it J}}}] \times \hat{r}'}{c^2r'}\right)\right] d^3 x_{{{\it J}}} \end{aligned}$$ which differ from the Jefimenko equations in that the time derivatives operate not only on the charge and current densities but also on the reciprocal of the retarded source-field point separation $r'$ and the unit vector $\hat{r}'$. The authors of Ref. [@GH] introduce into Eqs.(6.37) and (6.38) point-like non-relativistic charge and current densities according to Eq.(6.17) and (6.18). Since the integration over $t'$ is omitted, it is then implicit in these equations that $t' = t'_Q$, where the fixed time, $t'_Q$, is the solution of Eq.(6.21), corresponding to a fixed position of the source charge for given values of $\vec{x}_q $ and $t$. 
It then follows that for point-like charges (6.37) and (6.38) are written as: $$\begin{aligned} \vec{{{\rm E}}}({\rm GH2})^{ret} & = & Q \int \left[\frac{\hat{r}'}{(r')^2}+ \frac{\partial ~}{\partial t}\left(\frac{\hat{r}'}{cr'}\right) - \frac{\partial ~}{\partial t}\left(\frac{\vec{\beta}_u}{c r'}\right)\right] \delta(\vec{x}_{{{\it J}}}(t'_Q)-\vec{x}_Q(t'_Q)) d^3 x_{{{\it J}}} \nonumber \\ & = & \left.Q \left[\frac{\hat{r}'}{(r')^2}+ \frac{\partial ~}{\partial t}\left(\frac{\hat{r}'}{cr'}\right) - \frac{\partial ~}{\partial t}\left(\frac{[\vec{\beta}_u]}{c r'}\right)\right] \right|_{t' = t'_Q}, \\ \vec{{{\rm B}}}({\rm GH2})^{ret} & = & Q \int \left[\frac{\vec{\beta}_u \times \hat{r}'}{(r')^2}+ \frac{\partial ~}{\partial t}\left(\frac{\vec{\beta}_u \times \hat{r}'}{c r'}\right)\right] \delta(\vec{x}_{{{\it J}}}(t'_Q)-\vec{x}_Q(t'_Q)) d^3 x_{{{\it J}}}\nonumber \\ & = & Q \left. \left\{ \left[\frac{\vec{\beta}_u\times \hat{r}'}{(r')^2}+ \frac{\partial ~}{\partial t}\left(\frac{\vec{\beta}_u\times \hat{r}'}{c r'}\right)\right]\right\}\right|_{t' = t'_Q}. \end{aligned}$$ However, these formulae are not the ones obtained from (6.37) and (6.38) in Ref. [@GH]. Instead, a change of variable is introduced into the $\delta$-functions in the first lines of (6.39) and (6.40): $$\vec{z}(t'_Q) \equiv \vec{x}_{{{\it J}}}(t'_Q)-\vec{x}_Q(t'_Q).$$ The Jacobian, $J$, relating the volume elements $d^3\vec{z}$ and $d^3\vec{x}_{{{\it J}}}$ according to $$d^3\vec{z} = J d^3\vec{x}_{{{\it J}}}$$ is introduced. It is then stated, without derivation, that $J = K$ where $K$ is the Jacobian relating $dt$ to $dt'$, as given by Eqs.(5.15) and (6.3) above. The $x$-component of (6.41) is $$z_x(t'_Q) = x_{{{\it J}}}(t'_Q)-x_Q(t'_Q).$$ Since $x_Q(t'_Q)$ is constant it follows from (6.43) that $d z_x = dx_{{{\it J}}}$. Similarly, $d z_y = dy_{{{\it J}}}$ and $d z_z = dz_{{{\it J}}}$ so that the Jacobian $J$ in (6.42) is unity, not $K$, as claimed in Ref. [@GH]. Since the authors of Ref. 
[@GH] nevertheless [*did*]{} insert a factor $1/K$ multiplying the $\delta$-functions in the first lines of (6.39) and (6.40), before performing the spatial integrations, the equations obtained were not those in the last lines of (6.39) and (6.40) but instead: $$\begin{aligned} \vec{{{\rm E}}}({\rm GH2})^{ret}_{J = K} & = & \left.Q \left[\frac{\hat{r}'}{K(r')^2}+ \frac{\partial ~}{\partial t}\left(\frac{\hat{r}'}{cKr'}\right) - \frac{\partial ~}{\partial t}\left(\frac{[\vec{\beta}_u]}{cK r'}\right)\right] \right|_{t' = t'_Q}, \\ \vec{{{\rm B}}}({\rm GH2})^{ret}_{J = K} & = & Q \left. \left\{ \left[\frac{\vec{\beta}_u \times \hat{r}'}{K(r')^2}+ \frac{\partial ~}{\partial t}\left(\frac{\vec{\beta}_u \times \hat{r}'}{cK r'}\right)\right]\right\}\right|_{t' = t'_Q} \end{aligned}$$ where the subscript ‘$J = K$’ distinguishes these equations from the formally correct ones (6.39) and (6.40), i.e. the correctly calculated point-like charge versions of the general equations (6.37) and (6.38), claimed to be the Jefimenko equations but actually given, without derivation, in Ref. [@GH]. After writing (6.44) and (6.45) (the equivalents, in gaussian units, of the MKS Eqs.(41) and (42) of Ref. [@GH]) it is stated that they are ‘essentially in the form made famous by Feynman’. In fact Eq. (6.44) is equivalent to Eq. (19) of Ref. [@JPS] on transforming the $t'$ derivatives in the latter into the $t$ derivatives of the former. It is correctly stated, but not demonstrated, in Ref. [@JPS] that their Eq. (19) is the same as the LW retarded electric field. The introduction of the factor $1/K$ inside the time derivatives in (6.44) and (6.45) is equivalent to replacing the retarded potentials in (6.22) and (6.23) by the LW potentials. It is shown in Ref. [@GH] that (6.44) is equivalent to the LW electric field of Eq. (6.6) and stated (without proof) that the magnetic field is given by the relation $\vec{{{\rm B}}}({\rm LW})^{ret} = \hat{r}' \times \vec{{{\rm E}}}({\rm LW})^{ret}$. 
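The relation $\vec{{\rm B}}({\rm LW})^{ret} = \hat{r}' \times \vec{{\rm E}}({\rm LW})^{ret}$, stated without proof in Ref. [@GH], is easy to spot-check numerically from the compact LW forms derived in Appendix A below ((A.16) and (A.26)), which assume 1-D motion with $\vec{\beta}_u$ and $\dot{\vec{\beta}}_u$ along the $x$-axis. A sketch with numpy; all numerical values are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
c, Q, rp = 1.0, 1.0, 2.0                       # units and distance: arbitrary
beta_u, betadot_u = 0.6, 0.3                   # 1-D motion along x, as in Appendix A
beta = np.array([beta_u, 0.0, 0.0])
betadot = np.array([betadot_u, 0.0, 0.0])
n = rng.normal(size=3)
n /= np.linalg.norm(n)                         # direction r_hat'
K = 1.0 - n.dot(beta)
inv_gamma2 = 1.0 - beta_u**2                   # 1/gamma_u^2
# E from (A.16), B from (A.26)
E = (Q / K**3) * ((n - beta) * inv_gamma2 / rp**2
                  + np.cross(n, np.cross(n - beta, betadot)) / (c * rp))
B = (Q * np.cross(beta, n) / K**3) * (inv_gamma2 / rp**2
      + (betadot_u * K + beta_u * n.dot(betadot)) / (c * rp * beta_u))
assert np.allclose(B, np.cross(n, E))          # B = r_hat' x E
```

Since the check passes for arbitrary $\hat{r}'$, it confirms the stated relation for the 1-D case treated in the appendix.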
Summarising the results obtained so far in this section: Ref. [@JPS] does demonstrate the consistency of the Feynman and LW formulae for retarded electric fields. The ‘derivation’ of the Jefimenko formulae from the defining equations (5.1) and (5.2) of the electric and magnetic fields and the retarded potentials (6.22) and (6.23) given in Ref. [@GH] is erroneous due to mathematical misinterpretation of spatial partial derivatives. The same remark applies to Jefimenko’s original derivation [@Jefimenko] of these equations. Eqs.(6.44) and (6.45) given in Ref. [@GH] are not the Jefimenko equations but are obtained from them by introducing an overall multiplicative factor $1/K$ in each term and allowing the time derivatives to act on all factors in the terms of the equation instead of uniquely on the charge and current densities as in the Jefimenko formulae. This is tantamount to replacing the potentials of (6.22) and (6.23) by the retarded LW potentials of Eq. (2.28). Eq.(6.44) does give the LW field of (6.6), as claimed in Ref. [@GH]. It was pointed out in Ref. [@McDonald] that an equation for the magnetic field identical to the Jefimenko equation (6.14) and a formula equivalent to the Jefimenko electric field, (6.13), had been given earlier in the second edition of the book ‘Classical Electricity and Magnetism’ by Panofsky and Phillips [@PPJE]. The equivalent electric field formula is: $$\vec{E}({\rm PP})^{ret} = \int \left\{\frac{\hat{r}'[\rho]}{(r')^2} + \frac{([\vec{{{\rm J}}}]\cdot \hat{r}')\hat{r}'+([\vec{{{\rm J}}}]\times \hat{r}')\times \hat{r}'}{c(r')^2} +\frac{([\dot{\vec{{{\rm J}}}}]\times \hat{r}')\times \hat{r}'}{c^2 r'}\right\} d^3 x_{{{\rm J}}}.$$ A calculation claiming to show the equivalence of (6.13) and (6.46) was given in Section II of Ref. 
[@McDonald]. The first step was to repeat the erroneous derivation of (6.13) from the defining equation (5.1) of the electric field and the non-relativistic potentials (6.22) and (6.23), previously given in Ref. [@GH] and discussed above. The term proportional to $\partial[\rho]/\partial t$ is manipulated to obtain Eq.(6.46). As shown above, this term actually vanishes. [**Summary**]{} Retarded potentials are derived from the inhomogeneous d’Alembert equations for electromagnetic potentials and the Lorenz condition. The potentials so obtained in Eq.(2.12) differ from the LW potentials of CEM. It is shown that the incorrect LW potentials result from neglect of the dependence of the effective density of a moving charge distribution on its speed. This point is made particularly clear by the careful re-examination of Feynman’s derivation of the LW potentials presented in Section 3. In Section 4, several ‘relativistic’ derivations of the LW potentials or the corresponding retarded fields given in text books are reviewed. It is shown that they all contain misapplications of special relativity, in particular by invoking a spurious ‘length contraction’ effect. In all of the relativistic derivations, retardation effects are neglected, whereas in the original 19th Century derivations of the LW potentials or the corresponding retarded fields, no relativistic effects are considered. There are therefore two independent, logically incompatible, and incorrect, derivations of retarded potentials and their associated fields. In Section 5 the retarded RCED fields of a uniformly moving charge are derived and expressed in ‘present time’ form. Except for an overall multiplicative factor $1/(1-\hat{r}' \cdot \vec{\beta}_u)$ and the retarded time argument, they are the same as the instantaneous force fields of RCED [@JHFRCED]. 
In Section 6 the consistency claimed in the pedagogical literature between various different formulae for the fields of an accelerated charge (LW, Feynman, Jefimenko) is considered. The Feynman formula for the retarded electric field of a charge in arbitrary motion is (as previously shown in Ref. [@JPS]) consistent with the LW field. The electric field of a uniformly moving charge given by the Jefimenko formula is found to be, unlike the CEM and RCED fields, Coulombic. The considerations of the present paper are of a primarily mathematical nature. The physical interpretation of retarded (radiation) and instantaneous (force) fields in RCED has been discussed in some detail previously [@JHFRCED; @JHFFT]. [**Appendix A**]{} In this appendix, retarded electric and magnetic fields are derived from the LW potentials as well as from the equations (6.44) and (6.45), equivalent to those given in Ref. [@GH] and claimed there to be the same as the LW fields. To derive the LW fields the potentials $$\begin{aligned} {{\rm A}}_0({\rm LW})^{ret} & = & \left.\frac{Q}{K r'}\right|_{t' = t'_Q}, \\ \vec{{{\rm A}}}({\rm LW})^{ret} & = & \left.\frac{Q \vec{\beta}_u}{K r'}\right|_{t' = t'_Q} \end{aligned}$$ where $K \equiv (1-\hat{r}' \cdot \vec{\beta}_u)$, are substituted into the defining equations (5.1) and (5.2) of electric and magnetic fields to give $$\begin{aligned} \vec{{{\rm E}}}({\rm LW})^{ret}& = & -\vec{\nabla} {{\rm A}}_0({\rm LW})^{ret} - \frac{1}{c} \frac{\partial \vec{{{\rm A}}}({\rm LW})^{ret}}{\partial t}, \\ \vec{{{\rm B}}}({\rm LW})^{ret} & = & \vec{\nabla} \times \vec{{{\rm A}}}({\rm LW})^{ret}. \end{aligned}$$ For simplicity, all labels, superscripts and subscripts on the fields and potentials are omitted in the following. 
Taking into account, by the chain rule, the contribution to the fields of each factor in the potentials, (A.3) and (A.4) give: $$\begin{aligned} \vec{{{\rm E}}} & = & -\frac{Q}{K}\vec{\nabla}\left(\frac{1}{r'}\right)- \frac{Q}{r'}\vec{\nabla}\left(\frac{1}{K}\right) -\frac{Q \vec{\beta}_u}{c K} \frac{\partial ~}{\partial t}\left(\frac{1}{r'}\right) -\frac{Q \vec{\beta}_u}{c r'} \frac{\partial ~}{\partial t}\left(\frac{1}{K}\right) -\frac{Q}{c K r'} \frac{\partial \vec{\beta}_u }{\partial t}, \\ \vec{{{\rm B}}} & = & -\frac{Q}{K}\vec{\beta}_u \times\vec{\nabla}\left(\frac{1}{r'}\right) -\frac{Q}{r'}\vec{\beta}_u \times\vec{\nabla}\left(\frac{1}{K}\right) + \frac{Q}{K r'}(\vec{\nabla} \times \vec{\beta}_u). \end{aligned}$$ In these and the following equations it is understood that all spatial partial derivatives hold $t$ constant and all temporal partial derivatives hold $\vec{x}_q$, the field position, constant. The derivatives in the successive terms on the right sides of these equations are now evaluated. The first term on the right side of (A.5) gives: $$\begin{aligned} -\frac{Q}{K}\vec{\nabla}\left(\frac{1}{r'}\right) & = & -\frac{ \hat{\imath} Q}{K} \frac{\partial~}{\partial x_q}\left(\frac{1}{r'}\right)+ .~.~.~. \nonumber \\ & = & \frac{ \hat{\imath} Q}{K(r')^2} \frac{\partial r'}{\partial x_q}+ .~.~.~. \nonumber \\ & = & \frac{ \hat{\imath} Q (x_q-x_Q)}{K^2(r')^3}+ .~.~.~. \nonumber \\ & = & \frac{Q \hat{r}'}{K^2(r')^2} \end{aligned}$$ where Eq.(5.8) has been used. 
Considering the second term on the right side of (A.5), $$\begin{aligned} -\frac{Q}{r'}\vec{\nabla}\left(\frac{1}{K}\right) & = & -\frac{ \hat{\imath} Q}{ r' K^2} \frac{\partial~}{\partial x_q}(\hat{r}' \cdot \vec{\beta}_u)+ .~.~.~ \nonumber \\ & = & -\frac{ \hat{\imath} Q}{ r' K^2} \left[\vec{\beta}_u \cdot \frac{\partial \hat{r}'}{\partial x_q} + \hat{r}' \cdot \frac{\partial \vec{\beta}_u }{\partial x_q}\right]+ .~.~.~ \nonumber \\ & = & -\frac{ \hat{\imath} Q}{ r' K^2} \left[\vec{\beta}_u \cdot \frac{\partial \hat{r}'}{\partial x_q} + \frac{\partial t'}{\partial x_q}(\hat{r}' \cdot \dot{\vec{\beta}_u})\right]+ .~.~.~~. \end{aligned}$$ Now, $$\begin{aligned} \frac{\partial \hat{r}'}{\partial x_q} & = & \frac{\partial ~}{\partial x_q} \left(\frac{\vec{r}'}{r'}\right) = \frac{1}{r'} \frac{\partial \vec{r}'}{\partial x_q} - \frac{\vec{r}'}{(r')^2} \frac{\partial r'}{\partial x_q}\nonumber \\ & = & \frac{\hat{\imath}}{r'}\left(1- \frac{d x_Q}{d t'}\frac{\partial t'}{\partial x_q}\right) - \frac{\hat{r}'}{r'} \frac{\partial r'}{\partial x_q}\nonumber \\ & = & \frac{\hat{\imath}}{r'} + \frac{\hat{\imath} \beta_u -\hat{r}'}{r'} \frac{\partial r'}{\partial x_q}\nonumber \\ & = & \frac{\hat{\imath}}{r'} + \frac{(\hat{\imath} \beta_u -\hat{r}')(x_q-x_Q)}{K(r')^2} \end{aligned}$$ where Eqs.(5.7) and (5.8) have been used, as well as the assumption that $\vec{u}$ is parallel to the $x$-axis. Substituting (A.9) in (A.8) and again using Eqs.(5.7) and (5.8) gives: $$\begin{aligned} -\frac{Q}{r'}\vec{\nabla}\left(\frac{1}{K}\right) & = & -\frac{Q \vec{\beta}_u}{K^2(r')^2} -Q \hat{\imath}\frac{[\beta_u^2-(\hat{r}' \cdot \vec{\beta}_u)]}{K^3(r')^2} \frac{(x_q-x_Q)}{r'}+ Q \hat{\imath}\frac{(\hat{r}' \cdot \dot{\vec{\beta}_u})}{c K^3r'} \frac{(x_q-x_Q)}{r'}+.~.~.~. \nonumber \\ & = & -\frac{Q \vec{\beta}_u}{K^2(r')^2} -Q\hat{r}'\frac{[\beta_u^2-(\hat{r}' \cdot \vec{\beta}_u)]}{K^3(r')^2} + Q \hat{r}'\frac{(\hat{r}' \cdot \dot{\vec{\beta}_u})}{c K^3r'}+.~.~.~.~~. 
\end{aligned}$$ The third term on the right side of (A.5) gives $$\begin{aligned} -\frac{Q \vec{\beta}_u}{c K} \frac{\partial ~}{\partial t}\left(\frac{1}{r'}\right) & = & -\frac{Q\vec{\beta}_u}{c K} \frac{\partial t'} {\partial t} \frac{\partial ~}{\partial t'}\left(\frac{1}{r'}\right) \nonumber \\ & = & \frac{Q \vec{\beta}_u}{c K^2(r')^2} \frac{\partial r'} {\partial t'} \nonumber \\ & = & -\frac{Q \vec{\beta}_u (\hat{r}' \cdot \vec{\beta}_u)}{K^2(r')^2} \end{aligned}$$ where (5.15) and (5.13) have been used. The fourth term on the right side of (A.5) gives $$-\frac{Q \vec{\beta}_u}{c r'} \frac{\partial ~}{\partial t}\left(\frac{1}{K}\right) = -\frac{Q \vec{\beta}_u}{c r'} \frac{\partial t'} {\partial t} \frac{\partial ~}{\partial t'}\left(\frac{1}{K}\right)$$ But $$\frac{\partial ~}{\partial t'}\left(\frac{1}{K}\right) = \frac{\partial ~}{\partial t'} \left(\frac{1}{1- (\hat{r}' \cdot \vec{\beta}_u)}\right) = \frac{1}{K^2}\frac{\partial(\hat{r}' \cdot \vec{\beta}_u)}{\partial t'} = \frac{1}{K^2}\left[\vec{\beta}_u \cdot \frac{\partial \hat{r}'}{\partial t'} +\hat{r}' \cdot \dot{\vec{\beta}_u}\right]$$ and also $$\frac{\partial \hat{r}'}{\partial t'} = \frac{\partial ~}{\partial t'}\left(\frac{\vec{r}'}{r'}\right) = -c\frac{\vec{\beta}_u}{r'}-\frac{\vec{r}'}{(r')^2}\frac{\partial r'}{\partial t'} = c\frac{[ \hat{r}'(\hat{r}' \cdot \vec{\beta}_u)-\vec{\beta}_u]}{r'}$$ where (5.13) has been used. 
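The result (A.14) for $\partial \hat{r}'/\partial t'$ holds for an arbitrary trajectory, since it uses only $\partial \vec{r}'/\partial t' = -c\vec{\beta}_u$ and (5.13). A finite-difference spot-check with numpy; the sample trajectory and evaluation point are illustrative assumptions:

```python
import numpy as np

c = 1.0
x_q = np.array([2.0, 1.0, 0.0])                          # fixed field point
xQ = lambda tp: np.array([0.4*tp + 0.1*tp**2, 0.0, 0.0]) # sample trajectory
vQ = lambda tp: np.array([0.4 + 0.2*tp, 0.0, 0.0])       # its velocity

def rhat(tp):
    """Unit vector r_hat' from the charge position at t' to the field point."""
    r = x_q - xQ(tp)
    return r / np.linalg.norm(r)

tp, h = 0.3, 1e-6
r = x_q - xQ(tp)
rp = np.linalg.norm(r)
n = r / rp
beta = vQ(tp) / c
lhs = (rhat(tp + h) - rhat(tp - h)) / (2 * h)            # numerical d r_hat'/dt'
rhs = c * (n * n.dot(beta) - beta) / rp                  # Eq. (A.14)
assert np.allclose(lhs, rhs, atol=1e-6)
```

The same differencing applied to $r'$ itself reproduces (5.13), $\partial r'/\partial t' = -c\,\hat{r}' \cdot \vec{\beta}_u$.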
(A.12)-(A.14) together with (5.15) then give $$-\frac{Q \vec{\beta}_u}{c r'} \frac{\partial ~}{\partial t}\left(\frac{1}{K}\right) = -\frac{Q \vec{\beta}_u}{K^3}\left\{ \frac{(\hat{r}' \cdot \vec{\beta}_u)^2-\beta_u^2}{(r')^2} + \frac{\hat{r}' \cdot \dot{\vec{\beta}_u}}{c r'}\right\}.$$ Collecting together (A.7), (A.10), (A.11), (A.15) and the last term of (A.5) gives, for the electric field derived from the LW potentials: $$\begin{aligned} \vec{{{\rm E}}} & = & \frac{Q}{K^2(r')^2}\left\{\hat{r}'- \vec{\beta}_u[1+ (\hat{r}' \cdot \vec{\beta}_u)] -\frac{\vec{\beta}_u[(\hat{r}' \cdot \vec{\beta}_u)^2-\beta_u^2]+\hat{r}' [ \hat{r}' \cdot \vec{\beta}_u-\beta_u^2]}{K}\right\} \nonumber \\ & & - \frac{Q}{K^2 c r'}\left[\dot{\vec{\beta}_u} +\frac{(\vec{\beta}_u -\hat{r}')\hat{r}' \cdot \dot{\vec{\beta}_u}}{K}\right] \nonumber \\ & = & \frac{Q}{K^3}\left[\frac{\hat{r}'-\vec{\beta}_u}{\gamma_u^2(r')^2} + \frac{\hat{r}' \times [(\hat{r}'-\vec{\beta}_u) \times\dot{\vec{\beta}_u}]}{c r'} \right]. \end{aligned}$$ The retarded LW magnetic field given by (A.6) is now calculated. Since $\vec{u}$ is assumed to be parallel to the $x$-axis, it follows that $$\begin{aligned} -\frac{Q}{K}\vec{\beta}_u \times\vec{\nabla}\left(\frac{1}{r'}\right) & = & -\frac{Q\hat{k} \beta_u}{K}\frac{\partial ~}{\partial y_q}\left(\frac{1}{r'}\right) \nonumber \\ & = & \frac{Q\hat{k} \beta_u}{K(r')^2}\frac{\partial r'}{\partial y_q} = \frac{Q\hat{k} \beta_u y_q}{K^2(r')^3} \nonumber \\ & = & \frac{Q(\vec{\beta}_u \times \hat{r}')}{K^2(r')^2}. 
\end{aligned}$$ Similarly $$\begin{aligned} -\frac{Q}{r'}\vec{\beta}_u \times\vec{\nabla}\left(\frac{1}{K}\right) & = & -\frac{Q\hat{k} \beta_u}{r'}\frac{\partial ~}{\partial y_q}\left(\frac{1}{K}\right) \nonumber \\ & = & -\frac{Q\hat{k} \beta_u}{K^2 r'}\frac{\partial (\hat{r}' \cdot \vec{\beta}_u)}{\partial y_q} \nonumber \\ & = & -\frac{Q \hat{k} \beta_u}{K^2r'} \left(\hat{r}' \cdot \frac{\partial \vec{\beta}_u}{\partial y_q}+\vec{\beta}_u \cdot \frac{\partial \hat{r}'}{\partial y_q} \right). \end{aligned}$$ Evaluating the first term in brackets on the right side of Eq.(A.18), $$\hat{r}' \cdot \frac{\partial \vec{\beta}_u}{\partial y_q} = \frac{\partial t'}{\partial y_q}(\hat{r}' \cdot \dot{\vec{\beta}_u}) = -\frac{y_q(\hat{r}' \cdot \dot{\vec{\beta}_u})}{cK r'}$$ where the relation $$\frac{\partial t'}{\partial y_q} = -\frac{1}{c} \frac{\partial r'}{\partial y_q}$$ given by differentiating the retardation condition $t' = t-r'/c$ as well as Eq.(5.18) have been used. The second term in brackets on the right side of (A.18) is $$\vec{\beta}_u \cdot \frac{\partial \hat{r}'}{\partial y_q} = \vec{\beta}_u \cdot \frac{\partial ~}{\partial y_q} \left(\frac{\vec{r}'}{r'}\right) = \vec{\beta}_u \cdot \left( \frac{1}{r'} \frac{\partial \vec{r}'}{\partial y_q}- \frac{\vec{r}'}{(r')^2 }\frac{\partial r'}{\partial y_q} \right).$$ Assuming, without loss of generality, that the vector $\vec{r}'$ is confined to the $x$-$y$ plane, $$\frac{\partial \vec{r}'}{\partial y_q} = -\hat{\imath}\frac{d x_Q}{d t'}\frac{\partial t'}{\partial y_q} + \hat{\jmath}.$$ Combining (A.21) and (A.22), again using (A.20) and (5.18), gives $$\vec{\beta}_u \cdot \frac{\partial \hat{r}'}{\partial y_q} = \frac{[\beta_u^2- \hat{r}' \cdot \vec{\beta}_u]y_q}{K(r')^2}.$$ Combining (A.18), (A.19) and (A.23), $$-\frac{Q}{r'}\vec{\beta}_u \times\vec{\nabla}\left(\frac{1}{K}\right) = \frac{Q \vec{\beta}_u \times \hat{r}'}{K^3}\left[ \frac{[ \hat{r}' \cdot \vec{\beta}_u- \beta_u^2]}{(r')^2} +\frac{\hat{r}' \cdot 
\dot{\vec{\beta}_u}}{cr'} \right].$$ The third term on the right side of (A.6) is $$\begin{aligned} \frac{Q}{K r'}(\vec{\nabla} \times \vec{\beta}_u) & = & -\frac{Q \hat{k}}{K r'} \frac{\partial \beta_u}{\partial y_q} = -\frac{Q \hat{k}}{K r'} \frac{\partial t'}{\partial y_q} \dot{\beta}_u \nonumber \\ & = & \frac{Q \hat{k}}{c K r'} \frac{\partial r'}{\partial y_q} \dot{\beta}_u = \frac{Q (\vec{\beta}_u \times \hat{r}')}{c K^2 r'\beta_u}\dot{\beta}_u \end{aligned}$$ where (A.20) and (5.18) have been used. Collecting together (A.17), (A.24) and (A.25), the magnetic field generated by the LW potentials is: $$\begin{aligned} \vec{{{\rm B}}} & = & \left. \left\{\frac{Q (\vec{\beta}_u \times \hat{r}')}{K^3}\left[\frac{K + \hat{r}' \cdot \vec{\beta}_u -\beta_u^2}{(r')^2} +\frac{K \dot{\beta}_u+ \beta_u(\hat{r}' \cdot \dot{\vec{\beta}_u})}{c r' \beta_u}\right] \right\} \right|_{t' = t'_Q}, \nonumber \\ & = & \left. \left\{\frac{Q (\vec{\beta}_u \times \hat{r}')}{K^3}\left[\frac{1}{\gamma_u^2(r')^2} +\frac{\dot{\beta}_u(1-\hat{r}' \cdot \vec{\beta}_u) + \beta_u(\hat{r}' \cdot \dot{\vec{\beta}_u})}{c r' \beta_u}\right] \right\} \right|_{t' = t'_Q}. \end{aligned}$$ (A.16) and (A.26) are the formulae (6.6) and (6.7) of the text. The consistency of the fields of Eqs.(6.44) and (6.45) with the LW fields of (6.6) and (6.7) claimed in Ref. [@GH] is now investigated. 
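The algebraic step that compresses the acceleration terms of the first line of (A.16) into the $\hat{r}' \times [(\hat{r}'-\vec{\beta}_u) \times \dot{\vec{\beta}}_u]$ form of the second line rests on the identity $\hat{r}' \times [(\hat{r}'-\vec{\beta}_u) \times \dot{\vec{\beta}}_u] = (\hat{r}'-\vec{\beta}_u)(\hat{r}' \cdot \dot{\vec{\beta}}_u) - K\dot{\vec{\beta}}_u$, valid for unit $\hat{r}'$. A symbolic sketch with sympy (the component symbols are illustrative placeholders):

```python
import sympy as sp

n1, n2, b1, b2, b3, a1, a2, a3 = sp.symbols('n1 n2 b1 b2 b3 a1 a2 a3', real=True)
n3 = sp.sqrt(1 - n1**2 - n2**2)           # enforce |n| = 1
n = sp.Matrix([n1, n2, n3])               # r_hat'
b = sp.Matrix([b1, b2, b3])               # beta_u
a = sp.Matrix([a1, a2, a3])               # d(beta_u)/dt'
K = 1 - n.dot(b)
# acceleration terms of the first and second lines of (A.16),
# with the common factor Q/(c r') dropped
line1 = -(a + (b - n) * n.dot(a) / K) / K**2
line2 = n.cross((n - b).cross(a)) / K**3
assert (line1 - line2).applyfunc(sp.simplify) == sp.zeros(3, 1)
```

The check requires no assumption on $\vec{\beta}_u$ or $\dot{\vec{\beta}}_u$, so the compression is valid for arbitrary (not only 1-D) motion.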
The equations analogous to (A.5) and (A.6) given by using the chain rule to expand the derivatives in (6.44) and (6.45) are: $$\begin{aligned} \vec{{{\rm E}}} & = & \frac{Q \hat{r}'}{K (r')^2} + \frac{Q \hat{r}'}{c r'} \frac{\partial ~}{\partial t}\left(\frac{1}{K}\right) + \frac{Q \hat{r}'}{c K} \frac{\partial ~}{\partial t}\left(\frac{1}{r'}\right)+ \frac{Q}{c K r'}\frac{\partial \hat{r}'}{\partial t} \nonumber \\ & & -\frac{Q \vec{\beta}_u}{c K} \frac{\partial ~}{\partial t}\left(\frac{1}{r'}\right) -\frac{Q \vec{\beta}_u}{c r'} \frac{\partial ~}{\partial t}\left(\frac{1}{K}\right) -\frac{Q}{c K r'} \frac{\partial \vec{\beta}_u }{\partial t}, \\ \vec{{{\rm B}}} & = & \frac{Q (\vec{\beta}_u \times \hat{r}')}{K (r')^2}+ \frac{Q(\vec{\beta}_u \times \hat{r}')}{c r'} \frac{\partial ~}{\partial t}\left(\frac{1}{K}\right)+ \frac{Q (\vec{\beta}_u \times \hat{r}')}{c K} \frac{\partial ~}{\partial t}\left(\frac{1}{r'}\right) \nonumber \\ & & +\frac{Q}{c K r'}\left(\vec{\beta}_u \times\frac{\partial \hat{r}'}{\partial t}\right)-\frac{Q}{c K r'} \left( \hat{r}' \times \frac{\partial \vec{\beta}_u }{\partial t}\right). \end{aligned}$$ Comparison with (A.5) and (A.6) shows that the last three terms in (A.5) and (A.27) (originating from the time derivative in (A.3)) are the same but all other terms in (A.27) and (A.28) differ from those in (A.5) and (A.6). Thus to compare (A.5) and (A.27) the derivatives in the second, third and fourth terms on the right of (A.27) must be evaluated, while to compare (A.6) and (A.28) all derivatives on the right side of (A.28) must be evaluated. This is readily done using, [*mutatis mutandis*]{}, the formulae obtained above. 
The second term in (A.27) is $$\frac{Q \hat{r}'}{c r'} \frac{\partial ~}{\partial t}\left(\frac{1}{K}\right) = \frac{Q \hat{r}'}{c K r'} \frac{\partial ~}{\partial t'}\left(\frac{1}{K}\right) = \frac{Q \hat{r}'}{K^3}\left\{ \frac{(\hat{r}' \cdot \vec{\beta}_u)^2-\beta_u^2}{(r')^2} + \frac{\hat{r}' \cdot \dot{\vec{\beta}_u}}{c r'}\right\}$$ by analogy with Eq.(A.15). The third term is $$\frac{Q \hat{r}'}{c K}\frac{\partial ~}{\partial t}\left(\frac{1}{r'}\right) = \frac{Q \hat{r}'}{c K^2}\frac{\partial ~}{\partial t'}\left(\frac{1}{r'}\right) = -\frac{Q \hat{r}'}{c K^2 (r')^2} \frac{\partial r'}{\partial t'} = \frac{Q \hat{r}' (\hat{r}' \cdot \vec{\beta}_u)}{K^2 (r')^2}$$ where (5.15) and (5.13) have been used. The fourth term is $$\frac{Q}{c K r'}\frac{\partial \hat{r}'}{\partial t} = \frac{Q}{c K^2 r'}\frac{\partial \hat{r}'}{\partial t'} =\frac{Q [ \hat{r}'(\hat{r}' \cdot \vec{\beta}_u)-\vec{\beta}_u]}{K^2(r')^2}.$$ Substituting (A.29)-(A.31) into (A.27), as well as the previously obtained terms, and performing some algebraic simplification, gives $$\vec{{{\rm E}}} = \frac{Q}{K^3}\left[\frac{\hat{r}'-\vec{\beta}_u}{\gamma_u^2(r')^2} + \frac{\hat{r}' \times [(\hat{r}'-\vec{\beta}_u) \times\dot{\vec{\beta}_u}]}{c r'} \right]$$ which is the LW field of Eq. (A.16). Noting that the first three terms of (A.28) differ from those of (A.27) by the replacement $\hat{r}' \rightarrow \vec{\beta}_u \times \hat{r}'$ and using the above results for the latter terms, gives, after algebraic simplification, the LW magnetic field of Eq. (6.7). [**Appendix B**]{} For clarity, the total time derivatives in (6.10) are replaced by the corresponding partial derivatives with respect to the present time, $t$, for a fixed value of the field point position $\vec{x}_q$. 
The second term on the right side of (6.10) contains the derivative: $$\frac{\partial~}{\partial t}\left(\frac{\hat{r}'}{(r')^2}\right) = \frac{\partial t'}{\partial t} \frac{\partial~}{\partial t'} \left(\frac{\vec{r}'}{(r')^3}\right) = \frac{\partial t'}{\partial t}\left[\frac{1}{(r')^3} \frac{\partial \vec{r}'}{\partial t'} -\frac{3 \vec{r}'}{(r')^4}\frac{\partial r'}{\partial t'}\right].$$ Since the vector $\vec{r}'$ is confined to the $x$-$y$ plane, $$\frac{\partial \vec{r}'}{\partial t'} = \hat{\imath} \frac{\partial (x_q-x_Q)}{\partial t'} + \hat{\jmath} \frac{\partial y_q}{\partial t'} = -c\vec{\beta}_u$$ since $\partial y_q/\partial t' = 0$ and $c \beta_u = d x_Q/dt'$. Equations (5.13), (5.15), (B.1) and (B.2) then give: $$\frac{r'}{c}\frac{\partial~}{\partial t}\left(\frac{\hat{r}'}{(r')^2}\right) = \frac{1}{1-\hat{r}' \cdot \vec{\beta}_u}\left[ \frac{3 \hat{r}'( \hat{r}' \cdot \vec{\beta}_u) - \vec{\beta}_u}{(r')^2}\right].$$ Considering now the last term on the right side of (6.10): $$\frac{\partial^2 \hat{r}'}{\partial t^2} = \frac{\partial t'}{\partial t}\frac{\partial~}{\partial t'} \left[\frac{\partial t'}{\partial t}\frac{\partial \hat{r}'}{\partial t'}\right] = \frac{\partial t'}{\partial t}\left[\frac{\partial \hat{r}'}{\partial t'} \frac{\partial~}{\partial t'} \left(\frac{\partial t'}{\partial t}\right)+ \frac{\partial t'}{\partial t} \frac{\partial^2 \hat{r}'}{\partial t'^2}\right]$$ where $$\begin{aligned} \frac{\partial~}{\partial t'}\left(\frac{\partial t'}{\partial t}\right) & = & \frac{\partial~}{\partial t'}\left(\frac{1}{1-\hat{r}' \cdot \vec{\beta}_u}\right) = \frac{1}{(1-\hat{r}' \cdot \vec{\beta}_u)^2} \frac{\partial(\hat{r}' \cdot \vec{\beta}_u)}{\partial t'} \nonumber \\ & = & \frac{1}{(1-\hat{r}' \cdot \vec{\beta}_u)^2}\left[ c \frac{[(\hat{r}' \cdot \vec{\beta}_u)^2 -\beta_u^2]} {r'}+ (\hat{r}' \cdot \dot{\vec{\beta}_u})\right] \end{aligned}$$ where (A.14) has been used.
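The chain-rule identities above can be spot-checked numerically. The sketch below (our own check, not part of the original derivation; the uniform-velocity trajectory, field point and step size are arbitrary choices) verifies (B.2) and (B.3) by central finite differences, using $\partial t'/\partial t = 1/(1-\hat{r}'\cdot\vec{\beta}_u)$:

```python
import numpy as np

c = 1.0
beta = np.array([0.3, 0.0])         # uniform charge velocity (x-direction)
x_q = np.array([2.0, 1.5])          # fixed field point in the x-y plane

def r_vec(tp):                      # separation vector r'(t') = x_q - x_Q(t')
    return x_q - np.array([c * beta[0] * tp, 0.0])

tp0, h = 0.0, 1e-5

# (B.2): d r'/dt' = -c beta_u
drdtp = (r_vec(tp0 + h) - r_vec(tp0 - h)) / (2 * h)
assert np.allclose(drdtp, -c * beta, atol=1e-8)

# (B.3): (r'/c) d/dt (rhat'/r'^2) = [3 rhat'(rhat'.beta_u) - beta_u] / (K r'^2),
# using d/dt = (1/K) d/dt' with K = 1 - rhat'.beta_u
def rhat_over_r2(tp):
    r = r_vec(tp)
    return r / np.linalg.norm(r)**3

r = r_vec(tp0); rn = np.linalg.norm(r); rhat = r / rn
K = 1.0 - rhat @ beta
lhs = (rn / c) / K * (rhat_over_r2(tp0 + h) - rhat_over_r2(tp0 - h)) / (2 * h)
rhs = (3 * rhat * (rhat @ beta) - beta) / (K * rn**2)
assert np.allclose(lhs, rhs, atol=1e-7)
print("(B.2) and (B.3) verified")
```

For uniform motion $\dot{\vec{\beta}}_u=0$, but neither (B.2) nor (B.3) contains an acceleration term, so this case exercises both relations fully.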
It also follows from (A.14) that $$\begin{aligned} \frac{\partial^2 \hat{r}'}{\partial t'^2} & = & - \frac{c[\hat{r}'(\hat{r}' \cdot \vec{\beta}_u)-\vec{\beta}_u]}{(r')^2}\frac{\partial r'}{\partial t'} +\frac{c}{r'}\left[\left(\frac{\partial \hat{r}'}{\partial t'}\right)(\hat{r}' \cdot \vec{\beta}_u) + \hat{r}'\left(\frac{\partial \hat{r}'}{\partial t'}\right) \cdot \vec{\beta}_u + \hat{r}'(\hat{r}' \cdot \dot{\vec{\beta}_u})- \dot{\vec{\beta}_u}\right] \nonumber \\ & = & c^2\left[\frac{\hat{r}'[3(\hat{r}' \cdot \vec{\beta}_u)^2-\beta_u^2] - 2\vec{\beta}_u(\hat{r}' \cdot \vec{\beta}_u)}{(r')^2}\right] + \frac{ c[\hat{r}'(\hat{r}' \cdot \dot{\vec{\beta}_u})-\dot{\vec{\beta}_u}]}{r'}. \end{aligned}$$ Combining (5.15), (A.14), (B.5) and (B.6) then gives $$\begin{aligned} \frac{1}{c^2} \frac{\partial^2 \hat{r}'}{\partial t^2} & = & \frac{1}{(1-\hat{r}' \cdot \vec{\beta}_u)} \left\{\frac{[\hat{r}'(\hat{r}'\cdot\vec{\beta}_u)- \vec{\beta}_u]}{(1-\hat{r}' \cdot \vec{\beta}_u)^2} \left[\frac{(\hat{r}'\cdot \vec{\beta}_u)^2-\beta_u^2}{(r')^2}+\frac{\hat{r}' \cdot \dot{\vec{\beta}_u}} {c r'}\right]\right. \nonumber \\ &+&\left. \frac{1}{(1-\hat{r}' \cdot \vec{\beta}_u)}\left[\frac{\hat{r}'[3(\hat{r}' \cdot \vec{\beta}_u)^2 -\beta_u^2] -2\vec{\beta}_u(\hat{r}'\cdot \vec{\beta}_u)}{(r')^2}+ \frac{\hat{r}'( \hat{r}' \cdot \dot{\vec{\beta}_u})- \dot{\vec{\beta}_u}}{c r'}\right]\right\}. \end{aligned}$$ Inserting (B.3) and (B.7) into (6.10) yields Eq.(6.12) of the text. [99]{} J.H.Field, Phys. Scr. [**74**]{} 702 (2006), http://xxx.lanl.gov/abs/physics/0501130. A.L.Kholmetskii [*et al.*]{} J. Appl. Phys. [**101**]{} 023532 (2007), http://xxx.lanl.gov/abs/physics/0601084v1. H.Hertz, Ann. der Physik [**XXXIV**]{} 551 (1888). J.H.Field, Int. J. Mod. Phys. A Vol 23 No 2 327 (2008); http://xxx.lanl.gov/abs/physics/0507150v3.
J.H.Field, ’Inter-charge forces in relativistic classical electrodynamics: electromagnetic induction in different reference frames’, http://xxx.lanl.gov/abs/physics/0511014v4. O.Heaviside, The Electrician, [**22**]{} 147 (1888), Philos. Mag. [**27**]{} 2324 (1889). W.H.Panofsky and M.Phillips, ‘Classical Electricity and Magnetism’, 2nd Edition (Addison-Wesley, Cambridge Mass, 1962) Section 19-2, P334. J.D.Jackson, Am. J. Phys. [**72**]{} 1484 (2004). J.H.Field, ‘Space-time transformation properties of inter-charge forces and dipole radiation: Breakdown of the classical field concept in relativistic electrodynamics’, http://xxx.lanl.gov/abs/physics/0604089v3. A.Liénard, L’Eclairage Electrique, [**16**]{} pp5, 53, 106 (1898); E.Wiechert, Archives Néerlandaises (2) [**5**]{} 459 (1900). J.D.Jackson, ‘Classical Electrodynamics’, Second Edition (John Wiley and Sons, New York, 1975), Section 6.6, P223. Ref [@PPPT], Section 19-1, P341. W.G.V.Rosser, ‘Classical Electromagnetism via Relativity’ (Butterworths, London 1968) Section 5.4, PXXX. Ref [@Jack1] Section 14.1, P654. M.Schwartz, ‘Principles of Electrodynamics’ (McGraw-Hill, New York 1972) Section 6.1, P213. D.J.Griffiths, ‘Introduction to Electrodynamics’, 2nd Edition (Prentice-Hall, Englewood Cliffs, NJ, 1989) Section 10.3, P429. R.P.Feynman, R.B.Leighton and M.Sands, ‘The Feynman Lectures on Physics’ (Addison-Wesley, Reading Massachusetts, 1963), ‘Electromagnetism I’ Ch 21-5. L.D.Landau and E.M.Lifshitz, ‘Classical Theory of Fields’, Translated by M.Hamermesh (Pergamon Press, Oxford, 1962), Section 38, P103. Ref [@RosserLW], Section 3.1 P29. Ref [@PPPT], Section 19-3, P347. Ref [@Jack1], Section 11.10, P552. E.M.Purcell, ‘Electricity and Magnetism’, Berkeley Physics Course, Vol. 2 (McGraw-Hill, New York 1963), Chapters 5 and 6. W.Pauli, ‘Relativitätstheorie’ (Springer, Berlin 2000); English translation, ‘Theory of Relativity’ (Pergamon Press, Oxford, 1958) Section 4, P11. J.H.
Field, Fundamental Journal of Modern Physics, Vol [**2**]{} Issue 2 139 (2011); arXiv pre-print http://xxx.lanl.gov/abs/1210.2270v1. J.H. Field, Fundamental Journal of Modern Physics, Vol [**4**]{} Issue 1-2 1 (2012); arXiv pre-print: http://xxx.lanl.gov/abs/1307.7962v1. J.H. Field, ’Space-time attributes of physical objects and the laws of space-time physics’ arXiv pre-print http://xxx.lanl.gov/abs/0809.4121v1. Ref [@LLLW1], Section 63, P184. O.D.Jefimenko, Am. J. Phys. [**63**]{} 267 (1995). Ref. [@PPPT], Chapter 18, Eq.(18-13), P325. W.F.Edwards, C.S.Kenyon and D.K.Lemon, Phys. Rev. [**D14**]{} 992 (1976). Ref. [@Jack1], Eq.(14.13), P657. Ref. [@LLLW1], Eq.(6.39), P187. R.P.Feynman, R.B.Leighton and M.Sands, ‘The Feynman Lectures on Physics’ (Addison-Wesley, Reading Massachusetts, 1963), Vol I, Section 28.1, Eq.(28.3). Ref [@FeynFMC1] ‘Electromagnetism I’ Section 21, Eq.(21.3). A.R.Janach, T.Padmanabhan and T.P.Singh, Am. J. Phys. [**63**]{} 267 (1995). O.D.Jefimenko, ‘Electricity and Magnetism’, 2nd Edition (Electret Scientific Company, W.Virginia, 1989) Section 15-7, Eqs.(15-7.5),(15-7.6). D.J.Griffiths and M.A.Heald, Am. J. Phys. [**59**]{} 111 (1991). Ref. [@Jefimenko], Section 2.16, P57. K.T.McDonald, Am. J. Phys. [**65**]{} 1074 (1997). Ref. [@PPPT], Section 14.3, P245. [^1]: The points of the compass: South-West (SW), North-East (NE), North-West (NW) and South-East (SE). [^2]: A similar derivation is found in the widely-used text book on Electricity and Magnetism by Purcell [@Purcell]. [^3]: Actually Jackson used a relativistic time dilatation equation equivalent to Eq.(4.3). [^4]: See, for example, the comparison of the ‘present time’ retarded LW fields with the instantaneous RCED fields in Figs. 2 and 3 of Ref. [@JHFRSKO]. [^5]: In the real world, consisting of an ensemble of identical point-like charged particles. [^6]: The field is labelled according to the initials of the authors, Griffiths and Heald, of Ref.
[@GH] [^7]: The ellipsis in (6.26) and subsequent equations indicates the contribution of the $x$- and $y$-components. [^8]: Eq. (6.37) is Eq. (38) of Ref. [@GH].
--- abstract: 'An ensemble of 2 $\times$ 2 pseudo-Hermitian random matrices is constructed that possesses real eigenvalues with level-spacing distribution exactly as for the Gaussian unitary ensemble found by Wigner. By a re-interpretation of Connes’ spectral interpretation of the zeros of the Riemann zeta function, we propose to enlarge the scope of the search for the Hamiltonian connected with the celebrated Riemann Hypothesis by suggesting that the Hamiltonian could also be [PT]{}-symmetric (or pseudo-Hermitian).' author: - | Zafar Ahmed and Sudhir R. Jain\ [*Nuclear Physics Division, Van de Graaff Building,*]{}\ [*Bhabha Atomic Research Centre, Trombay, Mumbai 400 085, India*]{} title: '**A pseudo-unitary ensemble of random matrices, PT-symmetry and the Riemann Hypothesis**' --- 0.5 truecm PACS Nos : 05.45.+b, 03.65.Ge The Riemann Hypothesis (RH) states that all the nontrivial zeros of the Riemann zeta function have the form $\frac{1}{2}+i\sigma_n$, lying on a line [@riemann]. This beautiful statement became related to mechanics through the conjecture of Hilbert and Polya; as a result, a search is on for a self-adjoint operator admitting real eigenvalues $\{\sigma _n\}$. Perhaps the most striking work in this direction is due to Connes [@connes], who constructed a classical eigenvalue problem with a Perron-Frobenius operator and presented a spectral interpretation of Riemann zeros. In the realm of quantum mechanics, observations on the trace formula have given some important insights and several Hamiltonians have been discussed [@bk1; @bk; @okubo; @others]. The connection of the RH with random matrix theory (RMT) is very deep. For the statistical description of level-sequences (or number-sequences) of nuclei, Wigner [@mehta] introduced the subject wherein Hamiltonian matrices were constructed keeping in mind the underlying symmetries possessed by a physical system.
Thus, an even-spin, time-reversal invariant system belongs to a Gaussian Orthogonal Ensemble (GOE) whereas a system violating time-reversal invariance (TRI) belongs to a Gaussian Unitary Ensemble (GUE). That the sequence $\{\sigma _n\}$ actually has a spectral interpretation first came out of the seminal work by Montgomery [@montgomery], who found the two-point correlation of Riemann zeros and obtained exactly the result known for GUE. Since then, higher-order correlations among Riemann zeros have also been shown to correspond to GUE; within GUE, it is known that the two-point correlation guarantees all the higher-order correlations as they are factorisable. Perhaps the most important single effort after Montgomery's work is the marathon numerics by Odlyzko [@odlyzko], who has decisively shown that the spacing distribution has exactly the GUE form. Due to these works, the Hamiltonians being searched for the RH are the ones where time-reversal invariance is broken [@bk; @okubo; @others]. Let us focus on two main points on which all the works rest. Firstly, the reality of eigenvalues of a Hermitian operator and completeness of solutions of the ensuing eigenvalue problem would guarantee that the RH holds true. Secondly, due to the mathematical and numerical works on correlations, it is expected that the Hamiltonian underlying the RH breaks TRI. In this paper, we construct a pseudo-unitary ensemble of random matrices which has the spacing distribution exactly as in GUE. Since these systems usually correspond to the physical situation where TRI and parity are not individually preserved, the finding presented below suggests that the Hamiltonian underlying the RH could also be pseudo-Hermitian. After this demonstration, we shall provide further reasons that attest to the above statement. A Hamiltonian [H]{} is called pseudo-Hermitian [@ph; @ptph] if $\beeta$[H]{}$\beeta ^{-1}$ = [H]{}$^{\dagger}$ for some metric $\beeta$.
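The defining relation is easy to probe numerically. The sketch below (our own toy example; the matrix entries and the diagonal metric, which anticipate the $2\times 2$ form studied further on, are arbitrary choices) checks pseudo-Hermiticity and the reality of the spectrum:

```python
import numpy as np

a, b, c, d, eps = 0.7, -0.4, 1.1, 0.5, 2.0     # arbitrary toy values
H = np.array([[a + b,              (c + 1j * d) / eps],
              [(c - 1j * d) * eps, a - b           ]])
eta = np.diag([eps, 1.0 / eps])                 # metric

# defining relation of pseudo-Hermiticity: eta H eta^{-1} = H^dagger
assert np.allclose(eta @ H @ np.linalg.inv(eta), H.conj().T)

# the eigenvalues are real: a +/- sqrt(b^2 + c^2 + d^2)
ev = np.linalg.eigvals(H)
expected = np.sort([a - np.sqrt(b**2 + c**2 + d**2),
                    a + np.sqrt(b**2 + c**2 + d**2)])
assert np.allclose(ev.imag, 0, atol=1e-9)
assert np.allclose(np.sort(ev.real), expected)
print("pseudo-Hermiticity and real spectrum verified")
```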
If $E_m$ and $E_n$ are two eigenvalues of [H]{}, it is known that [@ptph] $$(E^\ast_m-E_n)~ \langle \Psi^\ast_m |\beeta~\Psi_n \rangle =0,$$ implying that if the eigenvalues are real and different, the eigenstates are orthogonal as $\langle \Psi^\ast_m |\beeta \Psi_n \rangle =\epsilon_n \delta_{m,n}$. If an eigenvalue is complex it will have a zero pseudo-norm, $N_{\beeta}=\langle \Psi_n^\ast |\beeta \Psi_n\rangle =0.$ The vanishing of the pseudo-norm means that the eigenvector is null. If a Hamiltonian [H]{} is symmetric under the joint action of parity [P]{} : $x\rightarrow -x$ and time-reversal [T]{} : $i \rightarrow -i$, i.e., ([PT]{}) [H]{} ([PT]{})$^{-1}$=[H]{} [@pt], then the eigenvalues are real if the eigenstates $\Psi_n$ are also eigenstates of [PT]{}; otherwise the eigenvalues occur in complex conjugate pairs. For [PT]{}-symmetric Hamiltonians we have [@ahmed] $$(E^\ast_m-E_n)~ \langle \Psi^{\mbox{\helv PT}}_m |\Psi_n \rangle=0.$$ When the eigenvalues are real and distinct, the eigenstates are orthogonal as $\langle \Psi^{\mbox{\helv PT}}_m |\Psi_n\rangle = \epsilon_n \delta_{m,n}$. Remarkably, [PT]{}-symmetric Hamiltonians are found to be pseudo-Hermitian: [P H P]{}$^{-1}$ = [H]{}$^\dagger$. Pseudo-Hermiticity has been recast [@ptph] in terms of [PT]{}-symmetry. Given a pseudo-Hermitian Hamiltonian, one can construct generalized [P]{} and [T]{}. The operator [D]{}= $e^{i\mbox{\helv H}}$ is pseudo-unitary in accordance with [@aj] $$\mbox{\helv D}^{\dagger} = \beeta \mbox{\helv D}^{-1}\beeta ^{-1}.$$ The eigenvalues of [D]{} are either on the unit circle or of the type $|\lambda_1 \lambda_2|=1.$ It is only recently that a random matrix theory has been presented for a statistical study of pseudo-Hermitian Hamiltonians [@aj]. Only 2$\times$2 matrices have been studied.
To summarize briefly, two cases are found: one with a linear level repulsion (with a larger slope than that of the GOE) and the other where, as the spacing $s$ becomes small, the level-spacing distribution behaves as $\sim s\log \frac{1}{s}$ [@aj]. We call these ensembles [@cz] the Gaussian pseudo-orthogonal ensemble (GPOE) and the Gaussian pseudo-unitary ensemble (GPUE), respectively. The essence of these two results is that they show much weaker level-repulsion at small spacings than the known ensembles of Wigner and Dyson. Let us now consider the Hamiltonian matrix $$\begin{aligned} \mbox{\helv H} = \{\mbox{\helv H}_{ij}\}&= \left[\begin{array}{cc}a+b&(c+id)/\epsilon \\(c-id)\epsilon&a-b\end{array}\right],\end{aligned}$$ $a, b, c, d$ being real. This is pseudo-Hermitian with respect to the metric $$\begin{aligned} \beeta = \left[\begin{array}{cc}\epsilon&0\\0&1/\epsilon\end{array}\right]\end{aligned}$$ which gives rise to a positive definite pseudo-norm (1). It is due to this property that Hamiltonians such as (4) are called quasi-Hermitian (see Scholtz et al. in [@ph]). The eigenvalues of [H]{} are given by $$E_{\pm} = a \pm \sqrt{b^2+c^2+d^2}.$$ Consider that the matrix [H]{} is drawn from an ensemble of random matrices with the Gaussian distribution [@mehta] $$P(\mbox{\helv H}) = {\cal N} e^{- \frac{1}{2\sigma ^2}~tr~\mbox{\helv H}^{\dagger} \mbox{\helv H}}.$$ Accordingly, the joint probability distribution of $a, b, c, d$ is $$P(a, b, c, d) \sim \exp \left[-\frac{1}{\sigma ^2}\left(a^2+b^2+ \left(\epsilon ^2+\epsilon ^{-2}\right)\frac{(c^2+d^2)}{2}\right)\right].$$ We know that the three-parameter unitary matrix [U]{}$(\theta,\phi,\psi)$, $$\begin{aligned} \mbox{\helv U} = \left[\begin{array}{cc} e^{i\psi}\cos \theta &-\sin \theta e^{i\phi} \\ \sin \theta e^{-i\phi} &e^{-i\psi}\cos \theta \end{array}\right],\end{aligned}$$ constitutes a Lie group.
More importantly, the unitary matrix [U]{} can generate all the Hermitian $2 \times 2$ matrices of the general type ($\epsilon=1$ in (5)) with any arbitrary value (including zero) of $\psi$. Only two continuous parameters $(\theta,\phi)$ suffice for this purpose. Inspired by this, we construct the following matrix [D]{}: $$\begin{aligned} \mbox{\helv D} = \left[\begin{array}{cc}\cos \theta &-\sin \theta e^{i\phi}/\epsilon \\ \sin \theta e^{-i\phi}\epsilon &\cos \theta \end{array}\right], \end{aligned}$$ which is pseudo-unitary with respect to $\beeta$. By design, the [D]{} matrix generates all possible [H]{} of the type (4) via $$\mbox{\helv D} \mbox{~diag}(E_+,E_-)\mbox{\helv D}^{-1} = \mbox{\helv H},$$ which gives us the following relations: $$\begin{aligned} &~&a = \frac{E_++E_-}{2}, ~~ b = \frac{E_+-E_-}{2}\cos 2\theta , \nonumber \\ &~&c = \frac{E_+-E_-}{2}\sin 2\theta \cos \phi, ~~ d = \frac{E_+-E_-}{2}\sin 2\theta \sin \phi .\end{aligned}$$ Writing $\epsilon = e^{-\gamma}$, and calling $t=E_++E_-$, $s=E_+-E_-$, we have $P(a,b,c,d)$ going over to $$P_{\gamma}(s, t, \theta , \phi ) \sim \exp \left[-\frac{t^2}{4\sigma ^2}-\frac{s^2}{4\sigma ^2}\cos^2 2\theta-\frac{s^2} {\sigma ^2}\cosh 2\gamma \cos ^2\theta \sin ^2\theta \right]$$ via the Jacobian ${\cal J}=\frac{s^2 \sin 2\theta}{4}$. Next, integrating over $t, \theta $ and $\phi$, we obtain the un-normalised nearest-neighbour spacing distribution $$P_{\gamma}(s) \sim s\exp \left(-\frac{p^2s^2}{\sigma ^2} \right) \mbox{Erfi}\left(\frac{q}{2\sigma}s \right)$$ where $$\mbox{Erfi}(x)=\frac{x}{\sqrt{\pi}}\int_{-1}^{+1} dy\, e^{x^2 y^2}, \quad p=\sqrt{\cosh 2\gamma}/2, \quad q=\sqrt{\cosh 2\gamma-1}/2 .$$ We now write the normalized nearest-neighbour spacing distribution in terms of the dimensionless variable $x=\frac{s}{\langle s \rangle}$, where $\langle s \rangle$ is the mean level spacing.
With $$\begin{aligned} \alpha_{\gamma}=\frac{2}{\sqrt{\pi}}\left( 1 + \frac{\tanh ^{-1} (q/p) }{4pq} \right), \nonumber \\ P_{\gamma}(x) = \frac{\alpha _{\gamma }^2\cosh 2\gamma}{4 } ~x~ e^{-p^2\alpha^2_{\gamma}x^2} \left[ \frac{\mbox{Erfi}(\alpha_{\gamma} q x)}{q} \right].\end{aligned}$$ Note that the limiting ($\gamma \rightarrow 0$) value of the square-bracketed term is $\frac{2x\alpha_0}{\sqrt{\pi}}$ and $\alpha_{0}=\frac{4}{\sqrt{\pi}}$. For an arbitrarily small $\gamma$, the matrix [H]{} is pseudo-Hermitian. Actually, even for $\gamma$ as much as 1/2, the difference between $P_{\gamma}(x)$ and $P_{\mbox{\small GUE}}(x)= \frac{32 }{ \pi^2} x^2 e^\frac{-4x^2 }{ \pi}$ is hardly appreciable (see Fig. 1(a)). ![Plot of $P_{\gamma}(x)$ (solid line), eq. (16), for three values of $\gamma$. Dashed line denotes $P_{GUE}(x)$.](cz.ps){width="13.5cm" height="6cm"} Returning to the discussion of the RH, with this example-ensemble, the scope of the search for the Hamiltonian of the RH widens. The Hamiltonian in question relevant for the RH could be pseudo-Hermitian. Although the example presented is non-generic, so could the Riemann Hamiltonian be. There is nothing that suggests a generic nature of the Hamiltonian, particularly in the light of our illustrative example. Our suggestion is also well supported by the spectral interpretation of the Riemann zeros ensuing from Connes’ work [@connes]. According to Connes, the zeros form an absorption spectrum in the sense that the wavefunctions corresponding to the eigenvalues $\sigma _n$ are “zero”. We know that the eigenvalues of a pseudo-Hermitian operator are either real or complex-conjugate pairs. Thus, [*we suggest the possibility of the Riemann zeros $\{\frac{1}{2}\pm i\sigma _n\}$ to be the complex-conjugate-pair eigenvalues of an unknown pseudo-Hermitian operator where it would be automatically guaranteed that the eigenvectors are null.*]{} This is the central message of our paper.
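As an independent consistency check of eq. (16) (our own numerics; the grid sizes and the test values of $\gamma$ are arbitrary), one can verify that $P_{\gamma}(x)$ is normalized with unit mean spacing, and that for small $\gamma$ it approaches $P_{\mbox{\small GUE}}(x)$. Erfi is evaluated directly from its integral definition quoted above:

```python
import numpy as np

def trap(y, x):                     # simple trapezoid rule
    return float(np.sum((y[1:] + y[:-1]) / 2 * np.diff(x)))

def erfi(z):                        # Erfi(z) = (2/sqrt(pi)) int_0^z exp(u^2) du
    out = np.empty(len(z))
    for i, zi in enumerate(z):
        u = np.linspace(0.0, zi, 4001)
        out[i] = 2 / np.sqrt(np.pi) * trap(np.exp(u**2), u)
    return out

def P_gamma(x, g):                  # eq. (16), with alpha_gamma as defined above
    p = np.sqrt(np.cosh(2 * g)) / 2
    q = np.sqrt(np.cosh(2 * g) - 1) / 2
    alpha = 2 / np.sqrt(np.pi) * (1 + np.arctanh(q / p) / (4 * p * q))
    return (alpha**2 * np.cosh(2 * g) / 4) * x \
        * np.exp(-(p * alpha * x)**2) * erfi(alpha * q * x) / q

x = np.linspace(0.0, 8.0, 2001)
P = P_gamma(x, 0.5)
norm = trap(P, x)                   # normalization: should be 1
mean = trap(x * P, x)               # mean spacing:  should be 1
P_gue = 32 / np.pi**2 * x**2 * np.exp(-4 * x**2 / np.pi)
dev = np.max(np.abs(P_gamma(x, 0.05) - P_gue))   # small-gamma limit vs. GUE
print(norm, mean, dev)
```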
Let us demonstrate our point heuristically by taking a simplistic and trivially [PT]{}-symmetric Hamiltonian as $$\mbox{\helv H}_{\mbox{\helv PT}}=-i\mbox{\helv x p}.$$ Eigenvalues of the form $\frac{1}{2} \pm i t_n$ are supported by $\Psi_n(x)= N x^{-\frac{1}{2}\mp i t_n}$ such that $\Psi_n(\pm \infty)=0$ and [H]{}$_{\mbox{\helv PT}}\Psi_n=(\frac{1}{2}\pm it_n) \Psi_n(x)$. Very importantly, notice that the eigenvalues are complex conjugate pairs and the eigenfunctions of [H]{}$_{\mbox{\helv PT}}$ are not simultaneous eigenstates of the antilinear operator [PT]{}. As stated earlier, this situation is referred to as spontaneous breaking of [PT]{}-symmetry [@pt]. Check that [PT]{}$\Psi_n(x)=\Psi^\ast_n(-x) \ne c \Psi_n(x)$. Next, whether $\frac{1}{2}+it_n$ are bona fide discrete eigenvalues and whether $t_n$ would coincide with $\sigma_n$ (RZs) are of course the most crucial questions. Our simple Hamiltonian in (17) mimics the Hamiltonian of Berry and Keating [@bk1] $$\mbox{\helv H}_{\mbox{\small BK}} = \mbox{\helv xp}-\frac{i}{2}$$ which, in turn, has been inspired by the work of Connes. This is Hermitian and it also breaks time-reversal symmetry. Berry and Keating [@bk1; @bk] have studied the semiclassical trace formula for their Hamiltonian (18) [*vis-a-vis*]{} very interesting properties of $\zeta(z)$ and reported a shortcoming of (18) in this regard. They also speculated [@bk1] that the Hamiltonian (18), together with an extraordinary boundary condition on the wavefunction, would yield $\pm \sigma_n$ as eigenvalues. This boundary condition is unfortunately not known so far. Also, if $\sigma_n$ is an eigenvalue, apparently there is nothing to ensure that $-\sigma_n$ would also be an eigenvalue.
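The quoted eigenvalue relation for (17) can be verified directly: with $p=-i\,d/dx$ one has [H]{}$_{\mbox{\helv PT}}=-x\,d/dx$, so acting on $x^{-1/2-it_n}$ reproduces the eigenvalue $\frac{1}{2}+it_n$. A finite-difference spot-check (our own sketch; the value of $t_n$ and the sample interval are arbitrary):

```python
import numpy as np

t_n = 2.3                       # arbitrary test value of t_n
lam = 0.5 + 1j * t_n            # expected eigenvalue 1/2 + i t_n

def psi(x):                     # Psi_n(x) = x^(-1/2 - i t_n), normalization dropped
    return x ** (-0.5 - 1j * t_n)

x = np.linspace(1.0, 3.0, 21)
h = 1e-6
dpsi = (psi(x + h) - psi(x - h)) / (2 * h)   # central-difference derivative
H_psi = -x * dpsi                            # H_PT = -i x p = -x d/dx
assert np.allclose(H_psi, lam * psi(x), rtol=1e-6)
print("H_PT Psi_n = (1/2 + i t_n) Psi_n verified")
```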
When we diagonalize the matrix for [H]{}$_{\mbox{\small BK}}$ in the one-dimensional harmonic oscillator basis, using the creation and annihilation operators as: [x]{}=([a]{}+[a]{}$^\dagger)/ \sqrt{2}$ and [p]{}=i([a]{}$^\dagger $-[a]{})/ $\sqrt{2}$, we find that the eigenvalues depend very crucially upon the size of the basis (say $N$)! We thus conclude that [H]{}$_{\mbox{\small BK}}$ does not even possess a discrete spectrum. The same is the fate of our toy model [H]{}$_{\mbox{\helv PT}}$ (17); this, however, is only a heuristic observation. These simple findings are for the most ordinary boundary condition, where the eigenfunctions vanish at $\pm \infty$. The real part turns out to be $1/2$, and this would change as soon as the boundary conditions are disturbed. The classical analogue of the Hamiltonian (18) is known to be of scaling type (as x$\rightarrow $ K x, p$\rightarrow $ p/K), therefore the complex scaling of the coordinate cannot be employed to study its resonances. However, its canonically-transformed Hamiltonian, $H=(p^2-x^2)/2$, is very well-studied [@bv] for its resonances, and these are well-known to be $\pm i(n+\frac{1}{2})$ (with $\hbar=1$), showing no connection with $\sigma_n$. Okubo [@okubo] has considered the Hamiltonian $$\mbox{\helv H}_{\mbox{\small Okubo}} = - \mbox{\helv p}_x\mbox{\helv p}_y - (1-\beta )\mbox{\helv xp}_x - \beta \mbox{\helv yp}_y + \frac{i}{2}$$ that is both Hermitian and time-reversal breaking. It is in two-dimensional Euclidean space with boundary conditions on the eigenfunctions: $$\mbox{\helv H}_{\mbox{\small Okubo}}\psi (x,y) = \lambda \psi (x,y); ~~\psi (x,0) = 0,$$ and $\psi (x,y)$ rapidly decreasing at infinity. We have again constructed the Hamiltonian matrix for (19) in the harmonic oscillator basis and diagonalised the matrices to find the eigenvalues. In $x$ it is a H.O. and in $y$ it is a half-H.O. model, as per [@okubo].
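The truncated-basis computation for [H]{}$_{\mbox{\small BK}}$ described above can be sketched in a few lines (our own minimal illustration; the truncation sizes are arbitrary). In a basis cut off at $N$ states the canonical commutator $[x,p]=i$ fails in the last entry, the truncated matrix loses its Hermiticity, and even its trace depends on $N$, so no stable discrete spectrum can emerge:

```python
import numpy as np

def H_BK(N):
    # x and p in a harmonic-oscillator basis truncated at N states
    adag = np.diag(np.sqrt(np.arange(1, N)), -1)   # creation operator
    a = adag.T                                     # annihilation operator
    x = (a + adag) / np.sqrt(2)
    p = 1j * (adag - a) / np.sqrt(2)
    return x @ p - 0.5j * np.eye(N), x, p

for N in (40, 80):
    H, x, p = H_BK(N)
    comm = x @ p - p @ x
    # [x, p] = i holds only away from the truncation edge
    assert np.allclose(comm[:-1, :-1], 1j * np.eye(N - 1))
    # the truncated H_BK is no longer Hermitian, and its trace is -i N/2:
    # the spectrum shifts with the basis size instead of converging
    assert not np.allclose(H, H.conj().T)
    ev = np.linalg.eigvals(H)
    assert np.allclose(ev.sum(), -0.5j * N)
    print(N, ev.sum())
```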
We find that the eigenvalues are not stable with the size of the matrices, thus indicating that there is no discrete spectrum supported by the Hamiltonian. However, the possibility of the Riemann zeros corresponding to resonances remains with this Hamiltonian. It may again be non-trivial to investigate its resonances. The Hamiltonian suggested by Castro et al. [@Castro] is like [H]{}$_{\mbox{\small CGM}}=i$[x p + g(x)]{}, where [g(x)]{} is a special function and $x>0$. Once again, by finding the matrix elements employing the half-H.O. basis, we do not find a discrete spectrum, as the eigenvalues keep changing with the size, $N$, of the basis. We have found that the popular Hamiltonians in the context of the RH do not even possess a discrete spectrum. From the random matrix ensemble of pseudo-Hermitian matrices presented here exhibiting GUE statistics, and by a re-interpretation of Connes’ work, we have suggested that the Hamiltonian relevant to the RH could be pseudo-Hermitian. [99]{} B. Riemann, Monatsb. der Berliner Akad., p. 671 (1859). A. Connes, Selecta Math. New Ser. [**5**]{}, 29 (1999). M.V. Berry and J.P. Keating, “[H]{}=[xp]{} and the Riemann zeros”, in ‘Supersymmetry and trace formulae: Chaos and disorder’ (Eds. I.V. Lerner, J.P. Keating) (Plenum, N.Y., 1999) p. 355 ff. M. V. Berry and J. P. Keating, SIAM Rev. [**41**]{}, 236 (1999). S. Okubo, J. Phys. A [**31**]{}, 1049 (1998). H. Wu and D. W. L. Sprung, Phys. Rev. E[**48**]{}, 2595 (1993); B. L. Julia, Physica A[**203**]{}, 425 (1994); B. P. van Zyl and D. A. W. Hutchinson, Phys. Rev. E[**67**]{}, 066211 (2003). M. L. Mehta, “Random matrices”, Second enlarged edition (Academic Press, New York, 1991). H. L. Montgomery, Proc. Symp. Pure Math. [**24**]{}, 181 (1973). A. Odlyzko, AMS Contemporary Math. Series [**290**]{}, 139 (2001). C. M. Bender and S. Boettcher, Phys. Rev. Lett. [**80**]{}, 5243 (1998).\ G. Levai and M. Znojil, J. Phys. A : Math. Gen. [**33**]{}, 7165 (2000),\ Z. Ahmed, Phys. Lett.
A [**286**]{}, 231 (2001); ibid. [**287**]{}, 295 (2001).\ Z. Ahmed, Phys. Lett. A[**282**]{}, 343 (2001). Z. Ahmed and S. R. Jain, Phys. Rev. E[**67**]{}, 045106 (2003).\ Z. Ahmed and S. R. Jain, J. Phys. A[**36**]{}, 3349 (2003).\ Z. Ahmed, Phys. Lett. A [**308**]{}, 140 (2003).\ Z. Ahmed, Invited Talk delivered in II International Workshop on ‘Pseudo-Hermitian Hamiltonians in Physics’ at Prague, June 14-16, 2004 (to appear in Czech. J. Phys. 2004). quant-ph/0407154. R. Nevanlinna, Ann. Ac. Sci. Fenn. [**1**]{}, 108 (1952); [**163**]{}, 222 (1954).\ L.K. Pandit, Nuovo Cimento (Supplemento) [**11**]{}, 157 (1959).\ E.C.G. Sudarshan, Phys. Rev. [**123**]{}, 2183 (1961).\ M. A. Pease III, “Methods of Matrix Algebra” (Academic, New York, 1965).\ T.D. Lee and G.C. Wick, Nucl. Phys. B [**9**]{}, 209 (1969).\ F.G. Scholtz, H. B. Geyer and F.J.H. Hahne, Ann. Phys. [**213**]{}, 74 (1992).\ A. Mostafazadeh, J. Math. Phys. [**43**]{}, 3944 (2002).\ Z. Ahmed, Phys. Lett. A [**290**]{}, 19 (2001); ibid. [**294**]{}, 287 (2002).\ C. M. Bender, D. C. Brody and H. F. Jones, Phys. Rev. Lett. [**89**]{}, 270401 (2002).\ Z. Ahmed, Phys. Lett. A [**310**]{}, 139 (2003).\ A. Mostafazadeh, J. Math. Phys. [**47**]{}, 974 (2002).\ A. Solombrino, J. Math. Phys. [**43**]{}, 5439 (2002).\ Z. Ahmed, J. Phys. A : Math. Gen. [**36**]{}, 9719 (2003); ibid. 10325 (2003). G. Barton, Ann. Phys. (N.Y.) [**116**]{}, 322 (1986).\ N.L. Balazs and A. Voros, Ann. Phys. (N.Y.) [**199**]{}, 123 (1990).\ P. Gaspard, “Chaos, scattering, and statistical mechanics” (Cambridge University Press, Cambridge, 1998). C. Castro, A. Granik and J. Mahecha, hep-th/0107266.
--- abstract: 'A consistent description of a shear flow, the accompanying viscous heating, and the associated entropy balance is given in the framework of a deterministic dynamical system, where a multibaker dynamics drives two fields: the velocity and the temperature distributions. In an appropriate macroscopic limit their transport equations go over into the Navier-Stokes and the heat conduction equation of viscous flows. The inclusion of an artificial heat sink can stabilize steady states with constant temperatures. It mimics a thermostating algorithm used in non-equilibrium molecular-dynamics simulations.' author: - 'Tamás Tél [^1]' - 'Jürgen Vollmer [^2],' - 'László Mátyás [^3]' title: | Shear flow, viscous heating, and entropy balance\ from dynamical systems --- Introduction ============ In recent years, there has been increasing interest in modeling transport phenomena by low-dimensional, deterministic dynamical systems [@history; @Focus; @Gasp; @G; @TVB; @Gent; @VTB97; @GasKla98; @GD; @TG98; @MTV99; @GDG00; @VMT00]. Multibaker maps [@G; @TVB; @Gent; @VTB97; @GasKla98; @GD; @TG98; @MTV99; @GDG00; @VMT00] appear to be the simplest models of this approach. They provide an opportunity to derive the equations of non-equilibrium thermodynamics from an underlying dynamics *without* using the concept of particles. The strongly-chaotic mixing properties of these two-dimensional maps seem to be sufficient to ensure consistency with the entropy balance equation of thermodynamics, provided that a properly chosen coarse-grained entropy and a macroscopic limit are taken [@VTB97; @TG98; @MTV99]. Previous work successfully described the phenomena of diffusion [@G; @Gent], conduction in an external field [@TVB; @VTB97; @GD], chemical reactions [@GasKla98], thermal conduction [@TG98], and cross effects due to the simultaneous presence of an external field and heat conduction [@MTV99] by means of multibaker maps.
Not only stationary, but also transient states could be addressed [@VTB97; @GDG00; @MTV99]. Here we add to this list the phenomenon of shear flows and the accompanying viscous heating. The interest in this is to clarify how the shear rate enters the expression for the irreversible entropy production. After all, by the definition of local equilibrium, the macroscopic flow profile does not appear in the entropy balance. Thermodynamic averages contain only *deviations* from the average streaming velocity. We consider a viscous fluid driven by applying a shear forcing to two parallel walls of separation $L$ [@GM; @B97]. The fluid is assumed to be incompressible. The upper wall moves in the $y$-direction with a constant velocity $U$ (fig. \[fig:geometry\]a). We restrict ourselves to a discussion of (possibly non-stationary) laminar flow. Due to translation invariance the velocity $v$ is always directed in the $y$ direction, and it only depends on the $x$ coordinate. Similarly, the temperature $T$ is independent of the $y$ and $z$ coordinates. There is no external pressure gradient applied, and the internal pressure forces due to eventual temperature changes are expected to be negligible. For sufficiently long times the well-known linear velocity profile $v^*(x)=U x/L$ is approached as the asymptotic steady state. The spatial behavior of the temperature distribution $T(x,t)$ depends on the boundary conditions. Initially, $T$ typically increases in time due to the viscous heating. For an adiabatically closed fluid, the asymptotic distribution $T^*$ is spatially constant and grows linearly with time. Inclusion of a heat sink can remove this time dependence such that the asymptotic state becomes stationary. In a hydrodynamic setting the sink acts only at the boundaries of the system, but an artificial, spatially-uniform sink can also be implemented. It mimics the so-called SLLOD [@books] algorithm of non-equilibrium molecular dynamics.
A multibaker map for shear flow =============================== We model an intersection of the fluid along the $x$ axis by a multibaker map. It consists of $N+2$ cells of size unity times $a$ (fig. \[fig:geometry\]a). The cells $m=0$ and $N+1$ represent the boundary, and cells $m=1,...,N$ the bulk of the fluid, whose width is $Na=L$. In each cell there is a momentum-like variable $p$ defined besides the position. The multibaker dynamics advects two fields: the ’microscopic’ velocity $v(x,p;t)$ and the ’microscopic’ temperature $T(x,p;t)$. The appropriate averages $v_m$ and $T_m$ over cell $m$ (defined below) are called the coarse-grained fields. In the macroscopic limit they go over into the hydrodynamic velocity field and the thermodynamic temperature, respectively. The two-dimensional multibaker dynamics is defined as follows (fig. \[fig:geometry\]b). After each time unit $\tau$ every cell $m=1, \cdots, N$ is divided into three bands of heights $ag$, $as$ and $ag$, such that $g+s+g=1$. The outermost ones are mapped onto a column of height $a$ and width $g$ in cells $m+1$ and $m-1$, respectively. The middle one stays in cell $m$, where it is transformed into a column of width $s$. In all cases the area is preserved such that an initially constant phase-space density $\varrho$ remains constant in time, reflecting the incompressibility of the fluid. Hence, the density can be identified with the density of the fluid. The velocity and the temperature fields are advected by the multibaker dynamics. In contrast to the density $\varrho$ they typically evolve in time. Starting with a coarse-grained velocity distribution $v_m$ along the chain, the updated values $v'_m$ after time $\tau$ become $$\label{eq:vn} v'_{m} = v_{m} +g (v_{m-1}+v_{m+1}-2v_m ) .$$ This update expresses momentum conservation in the $x$-direction. A portion $s=1-2g$ of the original momentum (velocity) remains in cell $m$, and portions $g$ of the momenta (velocities) of the neighboring cells flow in.
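The relaxation of this velocity update to the linear shear profile can be checked directly. A minimal sketch (our own choices: boundary cells $m=0$ and $m=N+1$ held at $v=0$ and $v=U$, arbitrary parameter values):

```python
import numpy as np

N, g, U = 10, 0.2, 1.0
v = np.zeros(N + 2)                # cells 0..N+1; fluid initially at rest
v[N + 1] = U                       # moving wall

for _ in range(5000):              # iterate the velocity update
    v_new = v.copy()
    m = np.arange(1, N + 1)
    v_new[m] = v[m] + g * (v[m - 1] + v[m + 1] - 2 * v[m])
    v = v_new                      # boundary cells stay fixed

v_star = U * np.arange(N + 2) / (N + 1)   # discrete linear profile
assert np.allclose(v, v_star, atol=1e-8)
print(v)
```

The fixed point of the update with these boundary values is the discrete Laplace solution, i.e. the linear profile corresponding to $v^*(x)=Ux/L$; for $g<1/2$ the iteration converges to it.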
The temperature equation follows from the energy balance. The full energy $e_m$ of cell $m$ after coarse graining is the sum of the translational kinetic energy $v_m^2/2$ and a contribution proportional to the temperature $T_m$, $$e_m = \frac{v_m^2}{2} + C T_m \label{eq:em}$$ where $C$ is a constant. The update of energy is due to an in- and outflow of energy from the neighbors $$\begin{aligned} \label{eq:enprime} \frac{{(v'_m)}^2}{2} +C T'_m = (1-2g) \left( \frac{v_m^2}{2} + C T_m \right) + g \left( \frac{v_{m-1}^2}{2} + C T_{m-1} \right) + g \left( \frac{v_{m+1}^2}{2} + C T_{m+1} \right). \end{aligned}$$ The action of a thermostat can, however, lead to a change of this energy. This is modeled by introducing a local source term $q_m$ and multiplying the energy by a factor $[1+\tau q_m]$, i.e. by setting $e_m' \rightarrow e_m' \,[1+\tau q_m]$ after every update (\[eq:enprime\]). Rearranging eq. (\[eq:enprime\]) subject to this additional factor, and using relation (\[eq:em\]), one obtains $$\begin{aligned} \label{eq:Tpn} T'_m &=& \biggl\{ T_m + g (T_{m-1}+T_{m+1}-2 T_m) + \frac{g}{2 C} [ {(v_{m-1} - v_m)}^2+{(v_{m+1} - v_m)}^2 ] \biggr\} \; [1+\tau q_m] ,\end{aligned}$$ which can also be written in the form of the balance equation $$\frac{T_m' - T_m}{\tau} = Q_m - \frac{j_{m+1}^{(w)} - j_m^{(w)}}{a} \label{eq:Tbal}$$ with the discrete “heat” current $$j_m^{(w)} = - \frac{a^2}{\tau} \; g \, \frac{T_m - T_{m-1}}{a}$$ and the source term $$Q_m = \frac{q_m}{1+\tau q_m} T_m' + \frac{g}{2 \tau C} \left[ {(v_{m-1} - v_m)}^2+{(v_{m+1} - v_m)}^2 \right] .$$ The first contribution to the source reflects the action of thermostating, and the second one the effect of viscous heating of the fluid. In the absence of coarse graining, a repeated application of the chaotic, mixing multibaker dynamics leads to a fractal distribution of the temperature and the velocity fields. 
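As a sketch, the coupled velocity/temperature update can be implemented by exchanging the cell energies $e_m = v_m^2/2 + C T_m$ directly and then applying the thermostat factor. Periodic boundaries and all parameter values below are assumptions made for this illustration only (the text uses boundary cells instead). With $q_m = 0$ the exchange conserves the total energy exactly:

```python
# Sketch of the coupled velocity/temperature update via the cell energies
# e_m = v_m^2/2 + C T_m. Periodic boundaries and parameter values are
# illustrative assumptions, not taken from the text.
import math

C, g, tau = 1.0, 0.25, 0.01

def step(v, T, q):
    """One time step: exchange energies with rate g, apply the thermostat
    factor [1 + tau q_m], and recover the new temperatures."""
    n = len(v)
    e = [v[m] ** 2 / 2 + C * T[m] for m in range(n)]
    v_new, T_new = [], []
    for m in range(n):
        lo, hi = (m - 1) % n, (m + 1) % n
        vm = v[m] + g * (v[lo] + v[hi] - 2 * v[m])         # velocity update
        em = (1 - 2 * g) * e[m] + g * e[lo] + g * e[hi]    # energy exchange
        em *= 1 + tau * q[m]                               # thermostat
        v_new.append(vm)
        T_new.append((em - vm ** 2 / 2) / C)
    return v_new, T_new

n = 32
v = [math.sin(2 * math.pi * m / n) for m in range(n)]
T = [300.0 + 10.0 * math.cos(2 * math.pi * m / n) for m in range(n)]
q = [0.0] * n

v1, T1 = step(v, T, q)
e0 = sum(x ** 2 / 2 + C * y for x, y in zip(v, T))
e1 = sum(x ** 2 / 2 + C * y for x, y in zip(v1, T1))
assert abs(e1 - e0) < 1e-9 * abs(e0)   # q = 0: total energy conserved
```

Recovering $T'_m$ from the updated energy reproduces the temperature update above up to a term of order $g^2 (v_{m-1}+v_{m+1}-2v_m)^2$, which is negligible in the macroscopic limit.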
Similar to the treatment of diffusive mass and heat transport [@VTB97; @MTV99], the emergence and proliferation of these structures lies at the heart of a consistent dynamical-systems treatment of static and transient transport phenomena. Entropy and entropy balance =========================== The Gibbs entropy of a multibaker system is defined as the information-theoretic entropy, i.e. as the phase-space average of $-\ln(\varrho/\varrho^*)$. The reference density $\varrho^*$ in the single-particle phase space of the multibaker map is expected to depend on the temperature. We take the choice $\varrho^*=\varrho T^{\gamma}$, where $\gamma$ is a constant exponent. Since the density $\varrho$ is constant in space and time the Gibbs entropy of cell $m$ becomes[^4] $$S_m^{(G)} = { \vrho \gamma} \int_{\hbox{\tiny cell }m} \upd x\,\upd p \, \ln T(x,p) \label{eq:SmG}$$ The coarse-grained entropy is defined in an analogous way as $$S_m= a \gamma \vrho \ln T_m .$$ It is based on the cell-averaged, coarse-grained value of the temperature $T(x,p)$. As announced, the average (streaming) velocities $v_m$ do not enter the definition of the entropy. For the purpose of deriving the time evolution of entropies, it is useful to choose an initial condition with uniform densities in every cell. The coarse-grained and the Gibbs entropy then initially coincide. After one time step, however, the Gibbs entropy changes due to the fact that the $T$ field takes different values in the neighboring cells, viz. $$\begin{aligned} S_m^{(G)'} &=& {\gamma \vrho} a \; \left\{ g \; \ln[ T_{m-1} \; (1+\tau q_m) ] + (1-2g) \; \ln[ T_m \; (1+\tau q_m) ] + g \; \ln[ T_{m+1} \; (1+\tau q_m) ] \right\} \nonumber \\ & = & { \vrho \gamma} a \left\{\; \ln[ T_m \; (1+\tau q_m) ] + \, g \; \ln\frac{T_{m-1}}{T_m} - \, g \; \ln\frac{T_m}{T_{m+1}} \right\} . \label{eq:SmG'}\end{aligned}$$ On the other hand, the coarse-grained entropy after one time step is $$S'_m = {a \gamma \vrho} \ln T_m' . 
\label{eq:Sm'}$$ Since it only depends on averages in small volumes in the configuration space, the coarse-grained entropy is considered as the analogue of the thermodynamic entropy [@VTB97; @MTV99]. Its temporal change can be decomposed as $$\begin{aligned} \frac{\Delta S_m}{a \tau} & \equiv & \frac{S_m' - S_m}{a \tau} = \frac{{S_m^{(G)}}' - S_m^{(G)}}{a \tau} + \frac{( {S_m}'- {S_m^{(G)}}' ) - ( S_m - S_m^{(G)} )} {a \tau} , \label{eq:micbal}\end{aligned}$$ and information-theoretic arguments [@VTB97; @MTV99] lead one to identify $$\begin{aligned} \frac{\Delta_e S_m}{a \tau} &\equiv& \frac{{S_m^{(G)}}' - S_m^{(G)}}{a \tau} , \label{eq:DefDeSm} \\ \hbox{and}\qquad \frac{\Delta_i S_m}{a \tau} &\equiv& \frac{( {S_m}'- {S_m^{(G)}}' ) - ( S_m - S_m^{(G)} )} {a \tau} \label{eq:DefDiSm}\end{aligned}$$ with the entropy flux and the rate of entropy production, respectively. Note that the second term of the numerator of (\[eq:DefDiSm\]) vanishes due to the choice of initial conditions. Inserting eqs. (\[eq:SmG'\]) and (\[eq:Sm'\]) into (\[eq:DefDiSm\]), the rate of entropy production $\Delta_i S_m / (a \tau)$ per unit volume and time is found to be $$\begin{aligned} \frac{\Delta_i S_m}{a \tau} &=& \gamma \varrho \tau^{-1} \biggl[ \ln\left( \frac{T_m'}{T_m} (1+\tau q_m)^{-1} \right) - g \ln \frac{T_{m-1}}{T_m} - g \ln \frac{T_{m+1}}{T_m} \biggr] . \label{eq:DiSm}\end{aligned}$$ It does not depend explicitly on the source term $q_m$, but only on the values of the field $T$ in cell $m$ and its neighbors. Entropy production arises in this model from (i) an explicit evolution of the macroscopic temperature profile, and (ii) the mixing of regions with different local temperatures. The emergence of self-similar structure in the temperature distribution $T(x,p)$ can lead to a non-vanishing steady-state entropy production. Note that on the level of these discrete relations the entropy production does not yet explicitly contain the velocity distribution $v_m$. 
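Since $T'_m/(T_m(1+\tau q_m))$ equals the convex combination $(1-2g) + g\,T_{m-1}/T_m + g\,T_{m+1}/T_m$ plus the non-negative viscous term, concavity of the logarithm implies that this discrete entropy production is non-negative. A quick numerical check (with arbitrary, purely illustrative fields and parameters):

```python
# Sketch: the discrete entropy production is non-negative, since by
# concavity of ln,
#   ln(T'_m / (T_m (1+tau q_m))) >= g ln(T_{m-1}/T_m) + g ln(T_{m+1}/T_m).
import math, random

gamma, rho, C, g, tau = 1.0, 1.0, 1.0, 0.25, 0.01
random.seed(1)

n = 50
T = [random.uniform(100.0, 500.0) for _ in range(n)]
v = [random.uniform(-1.0, 1.0) for _ in range(n)]
q = [random.uniform(-5.0, 5.0) for _ in range(n)]

for m in range(n):
    lo, hi = (m - 1) % n, (m + 1) % n          # periodic indexing (assumption)
    # temperature update with viscous heating and thermostat factor
    visc = g / (2 * C) * ((v[lo] - v[m]) ** 2 + (v[hi] - v[m]) ** 2)
    T_new = (T[m] + g * (T[lo] + T[hi] - 2 * T[m]) + visc) * (1 + tau * q[m])
    # entropy production per unit volume and time
    sigma = gamma * rho / tau * (
        math.log(T_new / (T[m] * (1 + tau * q[m])))
        - g * math.log(T[lo] / T[m])
        - g * math.log(T[hi] / T[m])
    )
    assert sigma >= -1e-12   # non-negative up to round-off
```

The source term drops out of the logarithm, so the check holds for any thermostat strength for which $1+\tau q_m > 0$.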
The effect of shear flow arises only implicitly, from the splitting of the full kinetic energy into a translational and an irregular part, which depends on the implicit choice of a spatial resolution when writing (\[eq:em\]). In the Gibbs entropy the local velocity $v(x,p)$ enters implicitly through the definition (\[eq:em\]) of the temperature, while for the coarse-grained entropy only the averages on the cells ($v_m$) can enter. The entropy flux $ {\Delta_e S_m}/{(a \tau)} $ can be written as the sum of the negative discrete divergence of the entropy current $$j_m^{(s)} = - \frac{a \, g}{\tau} \; {\gamma \vrho} \; \ln\frac{T_{m+1}}{T_m}$$ and the flux $$\Phi^{(th)}_m = {\gamma \vrho} q_m$$ into the thermostat. The macroscopic limit ===================== Establishing a discrete momentum, energy and entropy balance is not sufficient to motivate the thermodynamic relevance of a dynamical systems model of transport. As argued in [@VTB97], full consistency can only be found in a continuum scaling limit (the *macroscopic limit*) where the time evolution equations have to coincide with the relations known from irreversible thermodynamics, irrespective of the detailed prescription of how to choose the discrete time and space units needed to define the local equilibrium. The macroscopic limit corresponds to $a\ll L$, $N \gg 1$, and $\tau$ much smaller than typical macroscopic time scales (for instance the viscous relaxation time). Formally it is defined as $ a,\tau \rightarrow 0 $ such that the spatial coordinate $ x = a m $ remains finite. Taking this limit will be indicated by the arrow $\rightarrow$. In order to find the macroscopic limit $ \partial_t v= \nu \, \partial_x^2 v $ of the coarse-grained velocity dynamics, it is required that $g a^2/\tau$ takes the finite value $\nu$ in the limit, $$g \frac{a^2}{\tau} \rightarrow \nu.$$ For a vanishing pressure gradient eq. (\[eq:vn\]) leads to the Navier-Stokes equation for the laminar shear flow [@GM], where $\nu$ is the *kinematic viscosity*. 
Considering the macroscopic limit of eq. (\[eq:Tbal\]) one obtains the temperature equation $$\label{eq:ptTmacr} \partial_t T = \nu \, \partial_x^2 T + \frac{\nu}{C} {(\partial_x v)}^2 + T q.$$ For $q=0$ this is exactly of the type known from hydrodynamics [@GM], where, however, the heat diffusion coefficient $\kappa$ appears in front of the second derivative. Hence, in the multibaker map $\nu$ also governs heat diffusion, $\kappa=\nu$. In the hydrodynamic expression, the coefficient of the term expressing viscous heating is $\nu/c_v$, where $c_v$ is the specific heat at constant volume. Thus, we have to identify the constant $C$ with the specific heat, as also expected from a physical interpretation of (\[eq:em\]). Evaluating the macroscopic limit of eq. (\[eq:micbal\]) one gets for the entropy density $S_m/a \rightarrow s$ $$\frac{\Delta S_m}{a \tau} \rightarrow \partial_t s = \sigma^{(irr)} - \nabla j^{(s)} + \Phi^{(th)} \label{eq:s-dot}$$ which coincides with the thermodynamic entropy balance [@GM] if $\Phi^{(th)}=0$ in the bulk. For the irreversible entropy production we find $$\frac{\Delta_i S_m}{a \tau} \rightarrow \sigma^{(irr)} = \gamma \nu \vrho \left( \frac{\partial_x T}{T} \right)^2 + \frac{\nu \vrho}{T} \left( \partial_x v \right)^2 .$$ It is consistent with thermodynamics if $ \nu \gamma \varrho$ corresponds to the heat conductivity $\lambda$ of the flow. Since in general $\lambda = \kappa \varrho c_v$, and since $\kappa=\nu$ in our case, we conclude that $\gamma$ also amounts to the specific heat in our model, $c_v=C=\gamma$. For the entropy current $j_m^{(s)}$ one finds in the macroscopic limit $$j_m^{(s)} \rightarrow j^{(s)} (x) = - \gamma \nu \vrho \frac{\partial_x T}{T} = - \lambda \frac{\partial_x T}{T} . \label{eq:jms-th}$$ This relation also fully agrees with its thermodynamic counterpart [@GM]. 
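For smooth fields one can check numerically that the discrete entropy production (with $q_m=0$) approaches the macroscopic expression for $\sigma^{(irr)}$ above. The sinusoidal test fields and parameter values below are illustrative assumptions; consistency requires $\gamma = C$:

```python
# Sketch: convergence of the discrete entropy production to the macroscopic
# form gamma nu rho (dT/dx / T)^2 + (nu rho / T)(dv/dx)^2 for smooth fields.
import math

gamma = C = 1.0          # consistency requires gamma = C (= c_v)
rho, g, nu = 1.0, 0.25, 1e-3
n = 128
a = 1.0 / n
tau = g * a ** 2 / nu    # keeps nu = g a^2 / tau fixed in the limit

T = [300.0 + 20.0 * math.sin(2 * math.pi * m * a) for m in range(n)]
v = [math.sin(2 * math.pi * m * a) for m in range(n)]

worst, smax = 0.0, 0.0
for m in range(n):
    lo, hi = (m - 1) % n, (m + 1) % n          # periodic grid (assumption)
    visc = g / (2 * C) * ((v[lo] - v[m]) ** 2 + (v[hi] - v[m]) ** 2)
    T_new = T[m] + g * (T[lo] + T[hi] - 2 * T[m]) + visc   # update, q = 0
    sigma_disc = gamma * rho / tau * (
        math.log(T_new / T[m])
        - g * math.log(T[lo] / T[m]) - g * math.log(T[hi] / T[m]))
    # macroscopic expression, using analytic derivatives of the test fields
    x = m * a
    dT = 20.0 * 2 * math.pi * math.cos(2 * math.pi * x)
    dv = 2 * math.pi * math.cos(2 * math.pi * x)
    sigma_cont = gamma * nu * rho * (dT / T[m]) ** 2 + nu * rho / T[m] * dv ** 2
    worst = max(worst, abs(sigma_disc - sigma_cont))
    smax = max(smax, sigma_cont)

assert worst < 0.02 * smax   # agreement up to discretization corrections
```

The residual difference shrinks with the cell size $a$, in line with the formal limit $a,\tau\to 0$ at fixed $\nu = g a^2/\tau$.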
The heat conductivity $\lambda=\kappa \varrho c_V=\nu \varrho\gamma$ appears in front of the logarithmic derivative ${\partial_x T}/{T}$ without any freely adjustable parameters, thus demonstrating the full consistency of the results with irreversible thermodynamics. Note that eqs. (\[eq:ptTmacr\])–(\[eq:jms-th\]) are valid at any instant of time. Conclusion ========== We have established a simple multibaker model that faithfully reproduces hydrodynamic and thermal properties of shear flows. The model is based on an area-preserving multibaker dynamics. No phase-space contraction was needed. Thermostating manifests itself in the appearance of a source term $q$ that only influences the heat equation and the entropy flux. By a proper choice of $q(x,t)$ one can stabilize any temperature profile $T^*(x)$ as a steady state. For instance the choice $q=-\nu {(\partial_x v)}^2/{(C T)}$ eliminates viscous heating, and ensures a spatially constant steady state $T^*$ for a uniformly thermostated system. Alternatively, a parabolic stationary temperature profile is obtained when the source terms only act at the boundaries in order to prescribe a constant temperature at the two ends. In either case, the source term $\Phi^{(th)}$ leads to an entropy flux into a thermostat, and hence requires a generalization of the local relations of classical irreversible thermodynamics. The main interest of the model lies in the light it sheds on the origin of viscous heating in deterministic models of transport. As in the SLLOD algorithm, it arises from the emergence of fractal structures. In contrast to that model, where they are due to the joint action of a driving force and a Gaussian thermostat introduced by heuristic arguments into the equations of motion, the present model also generates the structures for more typical physical settings of transport driven from the boundaries. 
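A discrete counterpart of the uniform thermostat can be sketched as follows: with the linear velocity profile and uniform temperature, choosing the factor $1+\tau q_m = T_m/X_m$, where $X_m$ denotes the curly bracket of the temperature update, removes exactly the viscously generated heat, so $T'_m = T_m$ (note that $q_m < 0$, i.e. heat is withdrawn). Parameter values are illustrative assumptions:

```python
# Sketch of a uniform thermostat that freezes a constant temperature:
# linear velocity profile, uniform T; the factor T_m / X_m applied to the
# bracket X_m of the temperature update leaves T unchanged.
C, g, tau, U, N = 1.0, 0.25, 0.01, 1.0, 20
v = [U * m / (N + 1) for m in range(N + 2)]   # steady linear shear profile
T = [300.0] * (N + 2)                         # uniform temperature

for m in range(1, N + 1):
    X = (T[m] + g * (T[m - 1] + T[m + 1] - 2 * T[m])
         + g / (2 * C) * ((v[m - 1] - v[m]) ** 2 + (v[m + 1] - v[m]) ** 2))
    q = (T[m] / X - 1) / tau      # q < 0: heat is withdrawn
    T_new = X * (1 + tau * q)
    assert q < 0
    assert abs(T_new - T[m]) < 1e-9
```

For smooth profiles and $a,\tau\to 0$ this prescription reduces to a source proportional to $-(\partial_x v)^2/(C T)$, i.e. a spatially uniform heat sink compensating the viscous heating.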
It identifies the structures as arising from the mixing of regions with different *local temperatures*, which proliferates exponentially to smaller and smaller scales in a driven system. In the present model one can explicitly follow the analog of an energy cascade in turbulence, where the kinetic energy of a macroscopic flow is distributed to finer and finer scales until it reaches the Kolmogorov scale, where it has to be considered as contributing to the non-directional motion and leads to viscous heating. It is exactly this mechanism that also leads to the appearance of the macroscopic shear rate in the expression of the irreversible entropy production. We would like to thank Burkhard Dünweg, Bob Dorfman, Denis Evans, and Garry Morris for illuminating discussions. Support from the Hungarian Science Foundation (OTKA Grant No. 032423) and the Deutsche Forschungsgemeinschaft is acknowledged. [99]{} Focus issue on [*Chaos and Irreversibility*]{}. [*Entropy production and transports in a conservative multibaker map*]{}, to appear. (reprinted: Dover, New-York, 1984). [^1]: E-mail: [^2]: E-mail: [^3]: E-mail: [^4]: The Boltzmann constant $k_B$ and an additive constant are suppressed here; see [@VMT00] for details.
--- abstract: 'Retrieval of live, user-broadcast video streams is an under-addressed and increasingly relevant challenge. The on-line nature of the problem requires temporal evaluation and the unforeseeable scope of potential queries motivates an approach which can accommodate arbitrary search queries. To account for the breadth of possible queries, we adopt a no-example approach to query retrieval, which uses a query’s semantic relatedness to pre-trained concept classifiers. To adapt to shifting video content, we propose memory pooling and memory welling methods that favor recent information over long past content. We identify two stream retrieval tasks, instantaneous retrieval at any particular time and continuous retrieval over a prolonged duration, and propose means for evaluating them. Three large scale video datasets are adapted to the challenge of stream retrieval. We report results for our search methods on the new stream retrieval tasks, as well as demonstrate their efficacy in a traditional, non-streaming video task.' bibliography: - 'bmvc2016.bib' title: Video Stream Retrieval of Unseen Queries using Semantic Memory ---
--- abstract: 'This paper introduces epistemic graphs as a generalization of the epistemic approach to probabilistic argumentation. In these graphs, an argument can be believed or disbelieved up to a given degree, thus providing a more fine–grained alternative to the standard Dung’s approaches when it comes to determining the status of a given argument. Furthermore, the flexibility of the epistemic approach allows us to both model the rationale behind the existing semantics as well as completely deviate from them when required. Epistemic graphs can model both attack and support as well as relations that are neither support nor attack. The way other arguments influence a given argument is expressed by the epistemic constraints that can restrict the belief we have in an argument with a varying degree of specificity. The fact that we can specify the rules under which arguments should be evaluated and we can include constraints between unrelated arguments permits the framework to be more context–sensitive. It also allows for better modelling of imperfect agents, which can be important in multi–agent applications.' author: - Anthony Hunter - 'Sylwia Polberg[^1]' - Matthias Thimm bibliography: - 'epistemicgraph.bib' title: | Epistemic Graphs for Representing and Reasoning\ with Positive and Negative Influences of Arguments --- [***Keywords—*** abstract argumentation, epistemic argumentation, bipolar argumentation ]{} Introduction {#sec:introduction} ============ In real-world situations, argumentation is pervaded by uncertainty. In monological argumentation, we might be uncertain about how much we believe an argument and how much this belief should influence the belief in other arguments. These issues are compounded when considering dialogical argumentation, where each participant might be uncertain about what other agents believe. 
In addition, there are further notions important for successful argumentation, such as the ability to take contextual information into account, to handle different perspectives that various agents can have about a given issue, or to model agents that are not perfectly rational reasoners or about whom we do not possess complete information. Our aim in this paper is to present a new formalism for argumentation that takes belief into account and tackles these challenges. Moreover, we want to have a formalism that will allow us to model how different agents reason with arguments. A key application we have in mind is an agent modelling another agent while participating in some form of discussion or debate. Hence, the modeller wants to understand the degree of belief in arguments by the other agent and reasons for it. He or she may then use the model to help choose their next move in the discussion or debate. In order to make our investigation more focused, we assume that we can make the following key assumptions for any scenario that we want to handle. Argument graph : We assume we have a set of arguments, and some relationships between these arguments. We will treat the arguments as abstract in this paper, but we can instantiate each argument with a textual description (and we will have examples of such instantiations) or a logical specification (for example as a deductive argument [@BesnardHunter2014]). We also assume that the arguments and relationships between them can be represented by a directed graph, and so each node denotes an argument, and each arc denotes a relationship between a pair of arguments. Belief in arguments : Belief in an argument can be conceptualized in a number of ways. In this paper, we focus on belief in an argument as a combination of the degree to which the premises and claims are believed to be true, and the degree to which the claim is believed to follow from those premises. 
Furthermore, we assume that belief in an argument can be modulated or influenced by other arguments. Additionally, we are interested in applications where we source arguments from the real world (e.g. arguments that arise in dialogues or discussions). This means that we will have arguments that are enthymemes (i.e. arguments with some of the premises and/or claim being implicit). This in turn means that different people may have a different belief assignment to an argument because it is an enthymeme and therefore they can decode it in different ways. **Requirements** {#requirements .unnumbered} ---------------- In this paper, we will focus on the following requirements for our new formalism. We will briefly delineate them first, and then motivate them through examples and a discussion of how meeting the requirements is of use. Some of these requirements have been satisfied by existing proposals in the literature but some are entirely new (e.g. modelling context-sensitivity, modelling different perspectives, modelling imperfect agents, and modelling incomplete graphs). Modelling fine–grained acceptability : Typical semantics for argumentation frameworks focus on judging whether an argument should be accepted or rejected. However, in practical applications, there might be uncertainty as to the degree an argument is accepted or rejected. Various studies, including [@Rahwan2011; @PolbergHunter17], show that a two–valued perspective may be insufficient for modelling people’s beliefs about arguments. Since the degree to which an argument is accepted (rejected) can be expressed by the degree to which the argument is believed (disbelieved) [@PolbergHunter17], we can see this requirement as stating that we should have a many–valued scale for belief in arguments. Recent interest in ranking-based semantics and the notion of argument strength also points to the need for the fine-grained requirement (see [@Bonzon16] for an overview). 
Modelling positive and negative relations between arguments : The notions of attack and support are clearly important aspects of argumentation, even though the formalization of the interaction between these two types of relationship is open to multiple interpretations and subject to some debate in the research community [@CayrolLS13; @BrewkaPW14; @Prakken14; @Polberg16; @PolbergHunter17; @CabrioVillata13; @KontarinisToni16; @RosenfeldKraus16]. Nevertheless, there are various studies showing the importance of support in real argumentation, such as works on argument mining [@CabrioVillata13] or dialogical argumentation [@PolbergHunter17]. Hence, this requirement is that we need to model how the beliefs in arguments can have a positive or negative influence on other arguments, and that the belief in an argument needs to take into account those influences. Modelling context–sensitivity : Consider two argumentation scenarios represented by the same directed graph, but with arguments instantiated with different textual descriptions. The belief that an individual has in these arguments may not be the same in these two scenarios. The way arguments and influences between them are evaluated can be affected by the actual content of the arguments and the problem domain in question [@Cerutti2014]. Two different instantiations can be interpreted differently by a single user depending on his or her knowledge or preferences [@Zeng2008]. Hence, this requirement is that the context (i.e. how arguments are instantiated) can affect the belief in arguments and their influence on other arguments. Modelling different perspectives : It is common for different people to perceive the same information in different ways. In argumentation, not only can a given graph be evaluated in various ways by different agents, but also its structure might not be uniformly perceived [@PolbergHunter17]. 
So if we have an argumentation scenario represented by a single directed graph, different people might have dissimilar beliefs in the individual arguments and in the influence the belief in one argument has on other arguments. Partly, this divergence of opinions may occur because of an argument being an enthymeme (i.e. an argument that only has some of its premises and claim represented explicitly), which every agent can decode differently [@Black:2012]. This disparity may also occur because of differing background knowledge and experience. So this requirement is that the participant (i.e. the agent judging the argument graph) can have belief in arguments and their influences that is different to other participants. Modelling imperfect agents : People can exhibit a number of imperfections such as errors in their background knowledge, errors in the way they analyze certain information, and biases in how they process information in general. So when judging a given argumentation scenario some people might make inappropriate or irrational judgments. This irrationality could be seen in terms of not adhering to argumentation semantics as well as in terms of reasoning fallacies or undesirable cognitive biases [@ogden2012health]. Since we want our formalism to be useful for real-world applications, we need the ability to model the imperfect agents in their assessment of belief in arguments and in their assessment of the influence between arguments. Modelling incomplete situations : An argumentation graph might not contain all of the arguments relevant to a given problem, in particular those that concern the agent(s). For example, a patient might withhold a certain embarrassing or private piece of information from the doctor, despite the fact that it can affect the diagnosis. However, this incomplete knowledge might also be a result of how the graph is obtained or updated. 
In dialogical argumentation, depending on the protocol used, an agent might not always be able to put forward all arguments relevant to the discussion. As a result, an agent may, for example, disbelieve an argument that is perceived by us as unattacked, even though the agent is privately aware of reasons to doubt the argument. Similarly, an agent can believe an argument despite it being attacked, simply because the graph does not contain the agent’s supporting arguments. Such a behaviour would violate the majority of the argumentation semantics available in the literature. Furthermore, graph incompleteness in combination with fine–grained acceptability means that we might know that an agent believes or disbelieves a particular argument, but cannot precisely state to what degree. We therefore need an approach that is more resilient to potential incompleteness of the possessed information. In the following examples, we consider simple scenarios where we might use monological argumentation to make sense of a situation, and possibly to make decisions. The examples highlight the value of implementing the above requirements. \[ex:trains\] Imagine we have two passengers on a train, Jack and Jill, travelling to work. Jack is using this particular connection regularly and has some experience with the vagaries of the service. Jill, however, uses this connection for the first time, and has an important meeting to attend and wants to be on time. Let us assume that their knowledge concerning whether the train is going to be late is represented in Figures \[fig:train\] and \[fig:train2\]. Let us first focus on Jack. Arguments ${{{\tt {B}}}}$, ${{{\tt {C}}}}$ and ${{{\tt {D}}}}$ are enthymemes, and so their claims are not explicit and can be decoded in a number of ways. Since Jack is a regular client we can assume that the missing claim for ${{{\tt {B}}}}$ (and ${{{\tt {C}}}}$) is “therefore the train will arrive a bit late”. 
He also has a live travel info app that says that this service will arrive on time. He has been using this app for a while and does not consider it reliable at all, and because of this experience he chooses to decode the claim of ${{{\tt {D}}}}$ as “the live travel info service predicts the train will be on time”. So, Jack does not decode the claim of ${{{\tt {D}}}}$ as “therefore the train will arrive on time”. Hence, he sees arguments ${{{\tt {B}}}}$ and ${{{\tt {C}}}}$ as attacking ${{{\tt {A}}}}$ and disregards the influence of ${{{\tt {D}}}}$. Thus, Jack’s belief in ${{{\tt {B}}}}$ and ${{{\tt {C}}}}$ suggests that ${{{\tt {A}}}}$ should be disbelieved, and the degree to which he disbelieves or believes ${{{\tt {A}}}}$ should be primarily affected by ${{{\tt {C}}}}$, i.e. his current perception of the service. At the same time, he is certain of his eyes, i.e. that the info service predicts that the train will be on time, but his belief in that argument does not affect ${{{\tt {A}}}}$. Let us now focus on Jill, who is new to the service. She heard from a fellow passenger that this train normally arrives a little bit late and chooses to decode the claim of ${{{\tt {B}}}}$ as “therefore the timetable is inaccurate”. This is the first time she has used this particular service, as she had only recently moved from a different town. She commuted by train before, but the connection she used from her previous town was a faster one. Therefore, she sees argument ${{{\tt {C}}}}$ as a comment on the new line when compared to the line she used before, not as a sign of problems happening right now on the train she has boarded. Thus, her claim for ${{{\tt {C}}}}$ is “therefore the tracks on this line must be in a worse condition than on the other line” and for her, arguments ${{{\tt {A}}}}$ and ${{{\tt {C}}}}$ are not particularly related. Finally, the live travel app she has been using has been very reliable in the past and she trusts it. 
She decodes the claim of ${{{\tt {D}}}}$ as “therefore, the train will be on time”. Therefore, as long as she believes ${{{\tt {D}}}}$ more than she believes the complaints of a random stranger on the train (i.e. argument ${{{\tt {B}}}}$), she will believe ${{{\tt {A}}}}$. The above example indicates how considering arguments, and beliefs in them, can be a useful part of sense-making and decision-making in monological argumentation. How we model the influence of positive and negative relations is an important part of this. Furthermore, we can see that there is context-sensitivity, in that how we interpret arguments (in particular how we decode enthymemes) and the relationships between them can affect this analysis. We can also see that it is reasonable for different agents to have different views on how to decode a given enthymeme, different views on the influence of one argument on another, and different views on how to take multiple relationships into account. Figure \[fig:smoking\] contains the following arguments: ${{{\tt {A}}}}$ = Giving up smoking will be good for your health; ${{{\tt {B}}}}$ = My appetite will increase and so I will put on too much weight; ${{{\tt {C}}}}$ = My anxiety will increase and so I will lose too much weight; ${{{\tt {D}}}}$ = My anxiety will increase and so I will have problems with working; ${{{\tt {E}}}}$ = You can join a healthy eating course to help you manage your weight; ${{{\tt {F}}}}$ = You can join a yoga class to help you relax, and thereby manage your anxiety; ${{{\tt {G}}}}$ = You can use online counseling services for anxiety associated with smoking cessation, and thereby manage your anxiety. The attack relations are: ${{{\tt {B}}}}$ and ${{{\tt {C}}}}$ attack each other; ${{{\tt {B}}}}$, ${{{\tt {C}}}}$ and ${{{\tt {D}}}}$ attack ${{{\tt {A}}}}$; ${{{\tt {E}}}}$ attacks ${{{\tt {B}}}}$; ${{{\tt {F}}}}$ and ${{{\tt {G}}}}$ attack both ${{{\tt {C}}}}$ and ${{{\tt {D}}}}$. \[ex:smoking\] Let us now assume that we have an artificial agent attempting to persuade the users Rachel, Robin and Morgan to stop smoking. The graph of the artificial agent is represented by Figure \[fig:smoking\]. The dialogue proceeds in turns and limits the ways the participants can respond. The artificial agent can state any of the arguments in the graph and the user is allowed to react in two ways. A user (be it Rachel, Robin or Morgan) can either select his/her counterargument from the list presented by the agent, or state how much (s)he agrees or disagrees with an argument presented by the agent. The user can end the dialogue at any time; the agent ends once there are no arguments to put forward or the user agreed to the desired arguments. After the dialogue is finished by any party, the participant is asked whether he or she agrees or disagrees with argument ${{{\tt {A}}}}$. If the participant agrees, the dialogue is marked as successful. Let us start with Rachel. The agent presents her with argument ${{{\tt {A}}}}$ in order to convince her to stop smoking and allows her to select from ${{{\tt {B}}}}$, ${{{\tt {C}}}}$ and ${{{\tt {D}}}}$ as her potential arguments. Rachel selects ${{{\tt {B}}}}$ and ${{{\tt {D}}}}$. In response to ${{{\tt {B}}}}$, the agent puts forward ${{{\tt {E}}}}$, and Rachel agrees. In response to ${{{\tt {D}}}}$, the agent decides to first put forward ${{{\tt {F}}}}$ based on the experience with previous users. Unfortunately, Rachel strongly disagrees and ends the discussion. The dialogue is marked as unsuccessful. 
The agent was not aware that Rachel uses a wheelchair and that yoga classes did not suit her requirements, and the conversation ended before ${{{\tt {G}}}}$ could have been put forward. Let us now consider Robin. The agent presents Robin with ${{{\tt {A}}}}$ and again allows ${{{\tt {B}}}}$, ${{{\tt {C}}}}$ and ${{{\tt {D}}}}$ to be selected as counterarguments. Robin is afraid of any weight changes associated with smoking cessation and selects both ${{{\tt {B}}}}$ and ${{{\tt {C}}}}$ despite the fact that they are conflicting. Consequently, any counterarguments put forward by the artificial agent can be seen as at the same time indirectly conflicting with and promoting ${{{\tt {A}}}}$. The agent puts forward ${{{\tt {E}}}}$ and ${{{\tt {F}}}}$, to which Robin moderately agrees, and the dialogue ends successfully. Finally, consider Morgan, who similarly to Rachel selects both ${{{\tt {B}}}}$ and ${{{\tt {D}}}}$. However, in reality, Morgan is more afraid of weight gain than anxiety affecting his work, but wants to discuss both issues. The agent proposes solutions and Morgan moderately agrees with ${{{\tt {E}}}}$, but somewhat disagrees with ${{{\tt {F}}}}$. The agent decides to follow up with ${{{\tt {G}}}}$, with which Morgan strongly disagrees. Nevertheless, the dialogue ends successfully due to the fact that Morgan’s more pressing issue was addressed. The above example indicates how beliefs in arguments and relations between them are important in dialogical argumentation. In particular, the same procedures applied to two agents expressing similar concerns can lead to different results based on the beliefs they have in arguments and their private knowledge. An agent not aware of another agent’s arguments or beliefs can put forward unacceptable arguments and fail to persuade a given party to do or not to do something. 
One also has to be ready to put forward arguments that, possibly due to behaviours of the other party that can be deemed not rational, might work against the agent’s goal. \[ex:context\] The work in [@Cerutti2014] has investigated the problem of reinstatement in argumentation using an instantiated theory and preferences. We draw attention to two scenarios considered in the study, concerning a weather forecast and a car purchase, where each comes in a basic (without the last sentence) and an extended (full text) version. The weather forecasting service of the broadcasting company AAA says that it will rain tomorrow. Meanwhile, the forecast service of the broadcasting company BBB says that it will be cloudy tomorrow but that it will not rain. It is also well known that the forecasting service of BBB is more accurate than the one of AAA. *However, yesterday the trustworthy newspaper CCC published an article which said that BBB has cut the resources for its weather forecasting service in the past months, thus making it less reliable than in the past.* You are planning to buy a second-hand car, and you go to a dealership with BBB, a mechanic who was recommended to you by a friend. The salesperson AAA shows you a car and says that it needs very little work done to it. BBB says it will require quite a lot of work, because in the past he had to fix several issues in a car of the same model. *While you are at the dealership, your friend calls you to tell you that he knows (beyond a shadow of a doubt) that BBB made unnecessary repairs to his car last month.* The formal representation of the basic (resp. extended) versions of these scenarios is equivalent (we refer to [@Cerutti2014; @haepilot] for more details). However, the findings show that they are not judged in the same way and suggest that the domain-dependent knowledge of the participants has affected their performance in the tasks.
This shows the importance of modelling context–sensitivity and allowing an agent to evaluate structurally equivalent graphs differently.

**State of the Art** {#state-of-the-art .unnumbered}
--------------------

There are various approaches that attempt to tackle some of the above requirements, as we will discuss in more detail in Section \[section:Comparison\]. However, no single approach is able to deal with all of them at the same time. We can find a number of proposals in computational models of argument, such as the postulates for argument weights, strengths or beliefs [@CayrolLS05; @CayrolLS05b; @LeiteMartins11; @Amgoud2013; @Amgoud16kr; @AmgoudBenNaim16a; @AmgoudBenNaim16b; @Rago16; @Bonzon16; @AmgoudBenNaim17; @AmgoudBNDV17; @Thimm:2012; @Hunter:2013; @Hunter:2014; @Costa-Pereira:2011], which offer a more fine–grained alternative to Dung’s approach. Some of these works also permit certain forms of support or positive influences on arguments [@CayrolLS05; @AmgoudBenNaim17; @AmgoudBenNaim16a; @Rago16; @LeiteMartins11]. Nevertheless, due to the way the influence aggregation methods are defined, it is difficult for these proposals to meet the requirements for modelling context–sensitivity, different perspectives or incomplete graphs. Certain flexibility is perhaps possible only with approaches that work with an initial scoring assignment, such as [@Rago16; @LeiteMartins11; @AmgoudBenNaim17; @AmgoudBNDV17], though dealing with imperfect agents still poses difficulties in these methods. In contrast to the above works, there are frameworks that allow us to specify locally the way one argument affects another, which promotes dealing with context–sensitivity, different perspectives and a wider range of relations between arguments. In particular, abstract dialectical frameworks (ADFs) [@BrewkaWoltran10; @BrewkaESWW13; @Polberg16] allow us to specify various ways in which the incoming support and attack can affect a given argument.
Unfortunately, this specification has certain restrictions, and dealing with incomplete scenarios and imperfect agents is not ideal. The semantics of ADFs are also primarily two and three–valued, and while a recent generalization allows for considering fine-grained acceptability [@Brewka18], it still suffers from the previously mentioned issues.

**Our Proposal** {#our-proposal .unnumbered}
----------------

We therefore believe there is a need to investigate argumentative approaches that would handle both attack and support relations, allow for fine–grained argument acceptability, and permit context–sensitivity, different perspectives, agents’ imperfections and incomplete knowledge about agents’ graphs. As a starting point for our research, we take the epistemic approach to probabilistic argumentation, which has already been shown to be potentially valuable in modelling agents in persuasion dialogues [@Hunter15ijcai; @Hunter16sum; @Hunter16ecai; @Hadoux16; @Hadoux17]. In order to address our requirements, we introduce epistemic graphs, a generalization of this approach. In these graphs, an argument can be believed or disbelieved to a given degree, and the way other arguments influence a given argument is expressed by epistemic constraints that can restrict the belief we have in an argument in a flexible way. Through the use of degrees of belief, epistemic graphs provide a more fine–grained alternative to classical Dung-style approaches when it comes to determining the status of a given argument. The flexibility of the epistemic approach allows us both to model the rationale behind the existing semantics and to deviate from them completely, thus giving us a more appropriate formalism for practical situations, including the modelling of imperfect agents. Epistemic graphs can model both attack and support as well as relations that are neither support nor attack, so far analyzed primarily in the context of abstract dialectical frameworks [@BrewkaESWW13].
The freedom in defining the constraints allows us to easily express various interpretations of support at the same time and without the need to transform them, which is usually necessary in other types of argumentation frameworks [@CayrolLS13; @PolbergOren14a; @Polberg16]. The fact that we can specify the rules under which arguments should be evaluated and that we can include constraints between unrelated arguments allows the framework to be more context–sensitive and more accommodating when it comes to dealing with imperfect agents. Additionally, the ability to leave certain relations unspecified lets us deal with cases in which the system has insufficient knowledge about the situation. In this paper, we make the following contributions:

- a syntax and semantics for a logical language for constraints that is appropriate for argumentation;
- a proof theory for reasoning with these constraints, so that we can determine whether a set of constraints is consistent, whether it is minimal, and whether one constraint implies another;
- a definition of epistemic graphs, a study of their properties and an analysis of how they can be used to capture different kinds of argumentation scenarios; and
- a set of tools for analysing the relationship between the graphical structure and the constraints contained in the graph, together with an example of how they can be harnessed in practical applications.

In this paper, we do not consider how we can obtain the probability distribution or the constraints. However, other works with crowdsourced data show how we can obtain beliefs in arguments and relations between them [@Cerutti2014; @HunterPolberg17; @PolbergHunter17]. From such crowdsourced data, we believe that it is entirely feasible to develop machine learning techniques for generating constraints. We leave the learning of constraints from data to future work.
**Outline of the Paper** {#outline-of-the-paper .unnumbered}
------------------------

We proceed as follows: Section \[section:Preliminaries\] reviews the background that we require; Section \[section:epistemiclanguage\] introduces the syntax and semantics for the language we require for specifying epistemic graphs; Section \[section:ReasoningConstraints\] presents the proof theory for reasoning with statements in this language; Section \[sec:epistemicgraphs\] introduces epistemic graphs and considers how they can be used for analysing different kinds of scenarios; Section \[section:Comparison\] compares our work to related state-of-the-art formalisms; and Section \[section:Discussion\] discusses our contribution and considers future work.

Preliminaries {#section:Preliminaries}
=============

In its simplest form, an argument graph is a directed graph in which nodes represent arguments and arcs represent relations. In conflict–based graphs, such as the ones created by Dung [@Dung95], arcs stand for attacks. In graphs such as those in [@AmgoudBenNaim16a], arcs are supports, while in bipolar graphs they can be either supports or attacks [@CayrolLS13; @BoellaGTV10; @OrenNorman08; @NouiouaRisch11; @PolbergOren14a]. In some frameworks, such as abstract dialectical frameworks, an arc may also represent a dependence relation in case it cannot be strictly classified as either supporting or attacking [@BrewkaWoltran10; @BrewkaESWW13; @Polberg16]. Argument graphs can be extended in various ways in order to account for additional preferences, recursive relations, group relations[^2] and more. For an overview, we refer the reader to [@BrewkaPW14]. We will also discuss some of these structures further in Section \[section:Comparison\]. For now, we will focus on introducing the notation we will use throughout the text.
By an argument graph we will understand a directed graph, and we will use a labelling function that assigns to every arc a label representing its nature – supporting, attacking, or dependent, where dependency is understood as a relation that is neither positive nor negative. Hence, unless stated otherwise, we will assume we are working with a label set $\Omega = \{+,-,*\}$, which can be adjusted if needed. Given that many argumentation graphs allow two arguments to be connected in more than one way, we allow a single arc to possess more than one label: Let ${{\cal G}}= (V, A)$, where $A \subseteq V \times V$, be a directed graph. A [**labelled graph**]{} is a tuple $X = ({{\cal G}},{{\cal L}})$ where ${{\cal L}}: A \rightarrow 2^\Omega$ is a labelling function and $\Omega$ is a set of possible labels. $X$ is **fully labelled** iff for every $\alpha \in A$, ${{\cal L}}(\alpha) \neq \emptyset$. $X$ is **uni-labelled** iff for every $\alpha \in A$, $\lvert {{\cal L}}(\alpha) \rvert = 1$. Unless stated otherwise, from now on we assume that we are working with fully labelled graphs. With ${{\sf Nodes}}({{\cal G}})$ we denote the set of nodes $V$ in the graph ${{\cal G}}$ and with ${{\sf Arcs}}({{\cal G}})$ we denote the set of arcs $A$ in ${{\cal G}}$. For a graph ${{\cal G}}$ and a node ${{{\tt {B}}}}\in {{\sf Nodes}}({{\cal G}})$, the parents of ${{{\tt {B}}}}$ are ${{\sf Parent}}({{{\tt {B}}}}) = \{ {{{\tt {A}}}}\mid ({{{\tt {A}}}},{{{\tt {B}}}}) \in {{\sf Arcs}}({{\cal G}}) \}$. With ${{\cal L}}^x({{\cal G}}) = \{ \alpha \in {{\sf Arcs}}({{\cal G}}) \mid x \in {{\cal L}}(\alpha)\}$ we denote the set of relations labelled with $x$ by ${{\cal L}}$, where $x \in \{+,*,-\}$. In a similar fashion, by ${{\sf Parent}}^x({{{\tt {B}}}}) = \{ {{{\tt {A}}}}\mid ({{{\tt {A}}}},{{{\tt {B}}}}) \in {{\sf Arcs}}({{\cal G}}) \land x \in {{\cal L}}(({{{\tt {A}}}}, {{{\tt {B}}}})) \}$ we will denote the set of parents of an argument ${{{\tt {B}}}}$ s.t.
the relation between the two is labelled with $x$ by ${{\cal L}}$. On an arc from a parent to the target, a positive label denotes a positive influence, a negative label denotes a negative influence, and a star label denotes an influence that is neither strictly positive nor negative. If ${{\cal L}}$ assigns only the $-$ label to every arc in a graph, then the graph is a conflict–based argument graph, and if ${{\cal L}}$ assigns $+$ or $-$ (or both) to every arc in a graph, then the graph is a bipolar argument graph [@NouiouaRisch11; @CayrolLS13; @PolbergOren14a]. Following the analysis in [@PolbergHunter17], a graph making use of all three labels will be referred to as tripolar. In Figure \[fig:smoking\] we can see an example of a conflict–based argument graph, Figure \[fig:train\] shows an example of a bipolar argument graph, and Figure \[fig:tripolar\] an example of a tripolar one. In the last case, we can observe that while ${{{\tt {E}}}}$ and ${{{\tt {F}}}}$ are necessary for ${{{\tt {A}}}}$, only one of them can be accepted at a time in order for ${{{\tt {A}}}}$ to be accepted, as having both of them would lead to rejecting the argument. This mutual exclusivity requirement for ${{{\tt {A}}}}$ is neither an attacking nor a supporting relation, and thus it is classified as a dependency. A given argument graph is evaluated with the use of semantics, which are meant to represent what can be considered “reasonable”. The most basic type of semantics – the extension–based ones – associates a given graph with sets of arguments, called extensions, formed from acceptable arguments. A more refined version, the labeling–based semantics, tells us whether an argument is accepted, rejected, or neither [@Caminada:2009; @Baroni:2011; @BrewkaESWW13].
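The labelled-graph notation above can be made concrete in code. The following is a minimal Python sketch; the class, the method names, and the example graph are ours and serve only as an illustration.

```python
# A sketch of labelled graphs: arcs carry non-empty subsets of the label set Omega.
OMEGA = {"+", "-", "*"}

class LabelledGraph:
    """Directed graph whose arcs map to subsets of OMEGA."""

    def __init__(self, nodes, labelling):
        # labelling: dict mapping arcs (A, B) in V x V to subsets of OMEGA
        self.nodes = set(nodes)
        self.labelling = {arc: set(ls) for arc, ls in labelling.items()}

    def is_fully_labelled(self):
        # every arc carries at least one label
        return all(len(ls) > 0 for ls in self.labelling.values())

    def is_uni_labelled(self):
        # every arc carries exactly one label
        return all(len(ls) == 1 for ls in self.labelling.values())

    def parents(self, b):
        # Parent(B): all A with an arc (A, B)
        return {a for (a, t) in self.labelling if t == b}

    def parents_with_label(self, b, x):
        # Parent^x(B): parents of B whose arc to B carries label x
        return {a for (a, t), ls in self.labelling.items() if t == b and x in ls}

# A hypothetical tripolar graph: C attacks A, B supports A, D is a dependency of A.
g = LabelledGraph(
    nodes={"A", "B", "C", "D"},
    labelling={("C", "A"): {"-"}, ("B", "A"): {"+"}, ("D", "A"): {"*"}},
)
```

Since every arc in this example carries exactly one label, the graph is both fully and uni-labelled; a conflict-based graph would use only the $-$ label on every arc.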
However, when it comes to some applications such as user modelling, these two and three–valued perspectives can be insufficient to express the extent to which the user agrees or disagrees with a given argument [@Rahwan2011; @PolbergHunter17]. Consequently, a variety of weighted, ranking–based and probabilistic approaches have been proposed [@Amgoud2013; @AmgoudBenNaim16b; @AmgoudBenNaim16a; @Amgoud16kr; @Rago16; @Bonzon16; @AmgoudBenNaim17; @AmgoudBNDV17; @Hunter:2013; @Hunter15ijcai; @Hunter16sum; @Hunter16ecai; @Hadoux16; @Hadoux17; @Pu2014; @BesnardHunter01; @CayrolLS05; @CayrolLS05b; @LeiteMartins11]. We will discuss some of these approaches further in Section \[section:Comparison\] and refer interested readers to the listed papers for a more in-depth analysis.

Epistemic Language {#section:epistemiclanguage}
==================

In the introduction, we have discussed the value of being able to model beliefs in arguments, various types of relations between arguments, context–sensitivity, and more. Our proposal, capable of meeting the postulated requirements, comes in the form of epistemic graphs, which can be equipped with particular formulae specifying the beliefs in arguments and the interplay between them. In this section, we will focus on providing the language for these formulae. We describe its syntax and semantics here, and introduce an appropriate proof system later, in Section \[section:ReasoningConstraints\].

Syntax and Semantics
--------------------

The epistemic language consists of Boolean combinations of inequalities involving statements about probabilities of formulae built out of arguments. Throughout the section, we will assume that we have a directed graph ${{\cal G}}$. The building block of an epistemic formula is a statement “probability of $\alpha$”, where $\alpha$ is a propositional formula on arguments (further referred to as a term).
We can then speak about additions and subtractions of probabilities of such terms (further referred to as operational formulae). Comparing them to actual numerical values through equalities and inequalities forms epistemic atoms, which can then be joined into epistemic formulae through negation, disjunction, conjunction, etc. Let us now formally introduce the language: The epistemic language based on ${{\cal G}}$ is defined as follows:

- a [**term**]{} is a Boolean combination of arguments. We use $\lor$, $\land$ and $\neg$ as connectives in the usual way, and can derive secondary connectives, such as implication $\rightarrow$, in the standard manner. ${{\sf Terms}}({{\cal G}})$ denotes all the terms that can be formed from the arguments in ${{\cal G}}$.
- an [**operational formula**]{} is of the form ${p}(\alpha_1) \star_1 \ldots \star_{k-1} {p}(\alpha_{k})$ where all $\alpha_i \in {{\sf Terms}}({{\cal G}})$ and $\star_j \in \{+,-\}$. ${{\sf OFormulae}}({{\cal G}})$ denotes all possible operational formulae of ${{\cal G}}$ and we read ${p}(\alpha)$ as “probability of $\alpha$”.
- an [**epistemic atom**]{} is of the form $f {\#}x$ where ${\#}\in {\{=, \neq, \geq, \leq,>,<\}}$, $x \in [0,1]$ and $f \in {{\sf OFormulae}}({{\cal G}})$.
- an [**epistemic formula**]{} is a Boolean combination of epistemic atoms. ${{\sf EFormulae}}({{\cal G}})$ denotes the set of all possible epistemic formulae of ${{\cal G}}$.

For $\alpha \in {{\sf Terms}}({{\cal G}})$, ${{\sf Args}}(\alpha)$ denotes the set of all arguments appearing in $\alpha$ and for a set of terms $\Gamma \subseteq {{\sf Terms}}({{\cal G}})$, ${{\sf Args}}(\Gamma)$ denotes the set of all arguments appearing in $\Gamma$. Given a formula $\psi \in {{\sf EFormulae}}({{\cal G}})$, let ${{\sf FTerms}}(\psi)$ denote the set of terms appearing in $\psi$ and let ${{\sf FArgs}}(\psi) = {{\sf Args}}({{\sf FTerms}}(\psi))$ be the set of arguments appearing in $\psi$.
With ${\sf Num}(\psi)$ we denote the collection of all numerical values $x$ appearing in $\psi$. For an operational formula $f = {p}(\alpha_1) \star_1 \ldots \star_{k-1} {p}(\alpha_{k})$, ${{\sf AOp}}(f) = (\star_1, \star_2, \ldots, \star_{k-1})$ denotes the, possibly empty, sequence of arithmetic operators appearing in $f$. By abuse of notation, by ${{\sf AOp}}(\varphi)$ for an epistemic atom $\varphi$ we will understand the sequence of operators of the operational formula of $\varphi$. Assume a graph ${{\cal G}}$ s.t. $\{{{{\tt {A}}}},{{{\tt {B}}}},{{{\tt {C}}}},{{{\tt {D}}}}\} \subseteq {{\sf Nodes}}({{\cal G}})$. $\psi {:}{p}({{{\tt {A}}}}\land {{{\tt {B}}}}) - {p}({{{\tt {C}}}}) - {p}({{{\tt {D}}}}) > 0$ is an example of an epistemic formula on ${{\cal G}}$. The terms of that formula are ${{\sf FTerms}}(\psi) = \{{{{\tt {A}}}}\land {{{\tt {B}}}}, {{{\tt {C}}}}, {{{\tt {D}}}}\}$, the arguments appearing in them are ${{\sf FArgs}}(\psi) = \{{{{\tt {A}}}}, {{{\tt {B}}}},{{{\tt {C}}}},{{{\tt {D}}}}\}$. The sequence of operators of $\psi$ is ${{\sf AOp}}(\psi) = (-,-)$. Finally, in this case, ${\sf Num}(\psi) = \{0\}$. Having defined the syntax of our language, let us now focus on its semantics, which comes in the form of belief distributions. A [**belief distribution**]{} on arguments is a function ${P}: 2^{{{\sf Nodes}}({{\cal G}})} \rightarrow [0,1]$ s.t. $\sum_{\Gamma \subseteq {{\sf Nodes}}({{\cal G}})} {P}(\Gamma) = 1$. With ${{\sf Dist}}({{\cal G}})$ we denote the set of all belief distributions on ${{\sf Nodes}}({{\cal G}})$. Each $\Gamma \subseteq {{\sf Nodes}}({{\cal G}})$ corresponds to an interpretation of arguments. We say that $\Gamma$ *satisfies an argument* ${{{\tt {A}}}}$ and write $\Gamma \models {{{\tt {A}}}}$ iff ${{{\tt {A}}}}\in \Gamma$. Essentially $\models$ is a classical satisfaction relation and can be extended to complex terms as usual.
For instance, $\Gamma \models \neg\alpha$ iff $\Gamma \not\models \alpha$, and $\Gamma \models \alpha \land \beta$ iff $\Gamma \models \alpha$ and $\Gamma \models \beta$. For each graph ${{\cal G}}$, we assume an ordering over the arguments $\langle {{{\tt {A}}}}_1,\ldots,{{{\tt {A}}}}_n \rangle$ so that we can encode each model by a binary number: for a model $X$, if the i-th argument is in $X$, then the i-th digit is 1, otherwise it is 0. For example, for $\langle {{{\tt {A}}}},{{{\tt {B}}}},{{{\tt {C}}}}\rangle$, the model $\{{{{\tt {A}}}},{{{\tt {C}}}}\}$ is represented by 101. The [**probability of a term**]{} is defined as the sum of the probabilities (beliefs) of its models: $${P}(\alpha) = \sum_{\Gamma \subseteq {{\sf Nodes}}({{\cal G}}) \mbox{ s.t. } \Gamma \models \alpha} {P}(\Gamma).$$ We say that an agent believes a term $\alpha$ to some degree if ${P}(\alpha) > 0.5$, disbelieves $\alpha$ to some degree if ${P}(\alpha) < 0.5$, and neither believes nor disbelieves $\alpha$ when ${P}(\alpha) = 0.5$. Please note that in this notation, ${P}({{{\tt {A}}}})$ stands for the probability of the simple term ${{{\tt {A}}}}$ (i.e. the sum of the probabilities of all sets containing ${{{\tt {A}}}}$), which is different from ${P}(\{{{{\tt {A}}}}\})$, i.e. the probability assigned to the set $\{{{{\tt {A}}}}\}$. \[def:sat:valued\] Let $\varphi$ be an epistemic atom ${p}(\alpha_1) \star_1 \ldots \star_{k-1} {p}(\alpha_{k}) {\#}b$. The [**satisfying distributions**]{} of $\varphi$ are defined as ${{\sf Sat}}(\varphi) = \{{P}\in {{\sf Dist}}({{\cal G}}) \mid {P}(\alpha_1) \star_1 \ldots \star_{k-1} {P}(\alpha_{k}) {\#}b\}$. The set of satisfying distributions for a given epistemic formula is defined as follows, where $\phi$ and $\psi$ are epistemic formulae:

- ${{\sf Sat}}(\phi\land\psi) = {{\sf Sat}}(\phi) \cap {{\sf Sat}}(\psi)$;
- ${{\sf Sat}}(\phi\lor\psi) = {{\sf Sat}}(\phi) \cup {{\sf Sat}}(\psi)$; and
- ${{\sf Sat}}(\neg\phi) = {{\sf Sat}}(\top) \setminus {{\sf Sat}}(\phi)$.
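The definitions of belief distributions, term probabilities, and satisfying distributions can be illustrated with a short sketch. All function names are ours, and the distribution below is hypothetical; in particular, the placement of the leftover mass on the empty set is our own assumption.

```python
from itertools import combinations

# A belief distribution assigns mass to every subset of nodes, summing to 1;
# P(alpha) sums the mass of the models (subsets) satisfying the term alpha.

def subsets(nodes):
    nodes = sorted(nodes)
    return [frozenset(c) for r in range(len(nodes) + 1)
            for c in combinations(nodes, r)]

def prob(P, term):
    # term is a predicate on a model Gamma (a set of arguments)
    return sum(mass for gamma, mass in P.items() if term(gamma))

nodes = {"A", "B", "C", "D"}
# A hypothetical distribution with P(A and B) = 0.7, P(C) = 0.1, P(D) = 0.1:
# mass sits on {A,B}, {C}, {D}, and (by assumption) the remaining 0.1 on the empty set.
P1 = {gamma: 0.0 for gamma in subsets(nodes)}
P1[frozenset({"A", "B"})] = 0.7
P1[frozenset({"C"})] = 0.1
P1[frozenset({"D"})] = 0.1
P1[frozenset()] = 0.1

# Evaluate the epistemic atom p(A and B) - p(C) - p(D) > 0 under P1:
value = (prob(P1, lambda m: "A" in m and "B" in m)
         - prob(P1, lambda m: "C" in m)
         - prob(P1, lambda m: "D" in m))
assert value > 0  # P1 satisfies the atom, so P1 belongs to its Sat set
```

Note that `prob(P1, lambda m: "A" in m)` computes the probability of the simple term ${{{\tt {A}}}}$, which differs from the single entry `P1[frozenset({"A"})]`, mirroring the distinction between ${P}({{{\tt {A}}}})$ and ${P}(\{{{{\tt {A}}}}\})$ made above.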
For a set of epistemic formulae $\Phi = \{ \phi_1,\ldots, \phi_n\}$, the set of satisfying distributions is ${{\sf Sat}}(\Phi)$ = ${{\sf Sat}}(\phi_1) \cap \ldots \cap {{\sf Sat}}(\phi_n)$. Consider a graph with nodes $\{{{{\tt {A}}}},{{{\tt {B}}}},{{{\tt {C}}}},{{{\tt {D}}}}\}$ and the formula $\psi {:}{p}({{{\tt {A}}}}\land {{{\tt {B}}}}) - {p}({{{\tt {C}}}}) - {p}({{{\tt {D}}}}) \! > \! 0 \, \land \, {p}({{{\tt {D}}}}) \!> \!0$. A probability distribution ${P}_1$ with ${P}_1({{{\tt {A}}}}\wedge {{{\tt {B}}}}) =0.7$, ${P}_1({{{\tt {C}}}}) = 0.1$ and ${P}_1({{{\tt {D}}}}) = 0.1$ is in ${{\sf Sat}}(\psi)$. However, a distribution ${P}_2$ with ${P}_2({{{\tt {A}}}}\wedge {{{\tt {B}}}}) = 0$ cannot satisfy $\psi$ and so ${P}_2 \notin {{\sf Sat}}(\psi)$.

Restricted Language
-------------------

The full power of the epistemic language, while useful in various scenarios, may be redundant in others. For instance, one of the most commonly employed tools in opinion surveys is the Likert scale, which typically admits from 5 to 11 possible answer options. Consequently, we would also like to consider a restricted epistemic language, i.e. one where the sets of values that the probability function can take and that can appear as numerical values in the formulae are fixed and finite. We start by defining the restricted value set, which has to be closed under addition and subtraction (assuming the resulting value is still in the $[0,1]$ interval). We can then create subsets of this set according to a given inequality and “threshold” value, as well as sequences of values that can be seen as satisfying a given arithmetical formula: A finite set of rational numbers from the unit interval $\Pi$ is a [**restricted value set**]{} iff for any $x, y \in \Pi$ it holds that if $x+y \leq 1$, then $x + y \in \Pi$, and if $x-y \geq 0$, then $x-y \in \Pi$.
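The closure condition in this definition can be checked mechanically. A sketch using exact rationals to avoid floating-point artifacts follows; the function name and the two sample sets are ours.

```python
from fractions import Fraction

# Check whether a finite set of rationals in [0,1] is a restricted value set:
# for all x, y in the set, x+y must be present whenever x+y <= 1,
# and x-y must be present whenever x-y >= 0.

def is_restricted_value_set(values):
    pi = {Fraction(v) for v in values}
    if not all(0 <= v <= 1 for v in pi):
        return False
    for x in pi:
        for y in pi:
            if x + y <= 1 and x + y not in pi:
                return False
            if x - y >= 0 and x - y not in pi:
                return False
    return True

# {0, 0.3, 1} fails: 0.3 + 0.3 = 0.6 is missing (as is 1 - 0.3 = 0.7).
assert not is_restricted_value_set(["0", "0.3", "1"])
# The multiples of 0.2 in [0,1] are closed under both operations.
assert is_restricted_value_set(["0", "0.2", "0.4", "0.6", "0.8", "1"])
```

Using `Fraction` rather than `float` matters here: the membership tests must be exact, and decimal arithmetic on floats would produce spurious failures.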
For $\Pi \neq \emptyset$, with $\Pi^x_{{\#}} = \{ y \in \Pi \mid y {\#}x \}$ we denote the subset of $\Pi$ obtained according to the value $x$ and relationship ${\#}\in {\{=, \neq, \geq, \leq,>,<\}}$. The [**combination set**]{} for a nonempty restricted value set $\Pi$ and a sequence of arithmetic operations $(*_1, \ldots, *_k)$ where $*_i \in \{+,-\}$ and $k\geq 0$ is defined as: $$\Pi^{x,(*_1, \ldots, *_k)}_{{\#}} = \begin{cases} \{(v) \mid v \in \Pi^{x}_{{\#}} \}& k=0 \\ \{(v_1, \ldots, v_{k+1}) \mid v_i \in \Pi, \, v_1 *_1 \ldots *_k v_{k+1} {\#}x\} & \mbox{ otherwise } \\ \end{cases}$$ Let $\Pi_1 = \{ 0, 0.5, 0.75, 1 \}$. We can observe that it is not a restricted value set, since $0.75-0.5 = 0.25$ is missing from $\Pi_1$. Its modification, $\Pi_2 = \{ 0, 0.25, 0.5, 0.75, 1 \}$, is a restricted value set. Similarly, it is easy to show that $\Pi_3 = \{0, \frac{1}{3}, \frac{2}{3}, \frac{3}{3}\}$ and $\Pi_4 = \{0, \frac{2}{5}, \frac{4}{5}\}$ are also restricted value sets. The subsets of $\Pi_2$ for $x = 0.25$ under various inequalities are as follows: ${\Pi_2}^x_{>} = \{ 0.5,0.75,1\}$, ${\Pi_2}^x_{<} = \{ 0\}$, ${\Pi_2}^x_{\geq} = \{0.25,0.5,0.75,1\}$, ${\Pi_2}^x_{\leq} = \{ 0, 0.25\}$, ${\Pi_2}^x_{\neq} = \{0, 0.5,0.75,1\}$, and ${\Pi_2}^x_{=} = \{ 0.25\}$. Assume we have a restricted value set $\Pi_5 = \{0,0.5,1\}$, a sequence of operations $(+, -)$, an operator $=$ and a value $x=1$. In order to find the appropriate combination set, we are simply looking for triples of values $(\tau_1,\tau_2,\tau_3)$ with $\tau_i \in \Pi_5$ s.t. $\tau_1+\tau_2-\tau_3 = 1$. By collecting such combinations of values from $\Pi_5$, we obtain six possible value sequences, i.e. ${\Pi_5}^{1,(+,-)}_{=} = \{ ( 0 , 1 , 0 )$, $( 0.5 , 0.5 , 0 )$, $( 0.5 , 1 , 0.5 )$, $( 1 , 0 , 0 )$, $( 1 , 0.5 , 0.5 )$, $( 1 , 1 , 1 ) \}$. On the basis of a given restricted value set, we can now constrain our approach both in a syntactic and in a semantic way: Let $\Pi$ be a restricted value set.
An epistemic formula $\psi \in {{\sf EFormulae}}({{\cal G}})$ is [**restricted**]{} w.r.t. $\Pi$ iff ${\sf Num}(\psi) \subseteq \Pi$. Let ${{\sf EFormulae}}({{\cal G}},\Pi)$ denote this set of restricted epistemic formulae. Let $\Pi$ be a restricted value set. A probability distribution ${P}\in {{\sf Dist}}({{\cal G}})$ is [**restricted**]{} w.r.t. $\Pi$ iff for every $X \subseteq {{\sf Nodes}}({{\cal G}})$, ${P}(X) \in \Pi$ and for every argument ${{{\tt {A}}}}\in {{\sf Nodes}}({{\cal G}})$, ${P}({{{\tt {A}}}}) \in \Pi$. Let ${{\sf Dist}}({{\cal G}}, \Pi)$ denote the set of restricted distributions of ${{\cal G}}$. \[def:restricteddist\] Let $\Pi$ be a restricted value set. For $\psi \in {{\sf EFormulae}}({{\cal G}},\Pi)$, the [**restricted satisfying distribution**]{} w.r.t. $\Pi$, denoted ${{\sf Sat}}(\psi,\Pi)$, is $${{\sf Sat}}(\psi,\Pi) = {{\sf Sat}}(\psi) \cap {{\sf Dist}}({{\cal G}},\Pi)$$ Due to the properties of $\cap$, $\cup$ and $\setminus$, we can observe that restricted satisfying distributions can be manipulated similarly to the unrestricted ones, i.e. the following hold for formulae $\psi$ and $\phi$: - ${{\sf Sat}}(\phi\land\psi,\Pi) = {{\sf Sat}}(\phi,\Pi) \cap {{\sf Sat}}(\psi,\Pi)$; - ${{\sf Sat}}(\phi\lor\psi,\Pi) = {{\sf Sat}}(\phi,\Pi) \cup {{\sf Sat}}(\psi,\Pi)$; and - ${{\sf Sat}}(\neg\phi,\Pi) = {{\sf Sat}}(\top,\Pi) \setminus {{\sf Sat}}(\phi,\Pi)$. Let $\Pi = \{ 0, 0.5, 1 \}$. In the epistemic language restricted w.r.t. $\Pi$, we can only have atoms of the form $\beta {\#}0$, $\beta {\#}0.5$, and $\beta {\#}1$, where $\beta \in {{\sf OFormulae}}({{\cal G}})$ and ${\#}\in {\{=, \neq, \geq, \leq,>,<\}}$. From these atoms we compose epistemic formulae using the Boolean connectives. Let us assume we have a formula ${p}({{{\tt {A}}}}) + {p}({{{\tt {B}}}}) \leq 0.5$ on a graph s.t. $\{{{{\tt {A}}}}, {{{\tt {B}}}}\} = {{\sf Nodes}}({{\cal G}})$. We can create three restricted satisfying distributions, namely ${P}_1$ s.t. 
${P}_1(00) = 1$, ${P}_1(10) = 0$, ${P}_1(01) = 0$ and ${P}_1(11) = 0$, ${P}_2$ s.t. ${P}_2(00) = 0.5$, ${P}_2(10) = 0.5$, ${P}_2(01) = 0$ and ${P}_2(11) = 0$, and ${P}_3$ s.t. ${P}_3(00) = 0.5$, ${P}_3(10) = 0$, ${P}_3(01) = 0.5$ and ${P}_3(11) = 0$. We can observe that depending on the graph and the restricted value set, it might not be possible to create a restricted distribution. For example, we can consider the set $\{0, 0.9\}$. Although it meets the restricted value set requirements, there is no way to combine $0$ and $0.9$ by addition or subtraction so that the masses add up to $1$. This means that it is not possible to define a distribution with this set. Thus, it makes sense to consider also a stronger version of $\Pi$ that prevents such scenarios: Let $\Pi$ be a restricted value set. $\Pi$ is [**reasonable**]{} iff for every graph ${{\cal G}}$ s.t. ${{\sf Nodes}}({{\cal G}}) \neq \emptyset$, ${{\sf Dist}}({{\cal G}}, \Pi) \neq \emptyset$. The following simple properties allow us to easily detect reasonable restricted sets: [lemma]{}[reasonablerestricted]{} \[lemma:reasonablerestricted\] The following hold: - If $\Pi$ is a nonempty restricted value set, then $0 \in \Pi$. - If $\Pi$ is a reasonable restricted value set, then $1 \in \Pi$. - A restricted value set $\Pi$ is reasonable iff $1 \in \Pi$. It can happen that the combination sets or value subsets of $\Pi$ are empty. However, as we can see, this occurs only if particular conditions are met: [proposition]{}[restrnonempty]{} \[restrnonempty\] Let $\Pi$ be a nonempty restricted value set, $x \in \Pi$ a value, ${\#}\in {\{=, \neq, \geq, \leq,>,<\}}$ an inequality, and $(*_1, \ldots, *_k)$ a sequence of operators where $*_i \in \{+,-\}$ and $k\geq 0$. Let $max(\Pi)$ denote the maximal value of $\Pi$. The following hold: - $\Pi^x_{\#}= \emptyset$ if and only if: 1. $\Pi = \{0\}$ and ${\#}= \neq$, or 2. ${\#}$ is $>$ and $x = max(\Pi)$, or 3. ${\#}$ is $<$ and $x = 0$.
- $\Pi^{x,(*_1, \ldots, *_k)}_{{\#}} = \emptyset$ if and only if: 1. $k=0$ and $\Pi^x_{\#}= \emptyset$, or 2. $k >0$, ${\#}$ is $>$, $x = max(\Pi)$ and for no $*_i$, $*_i = +$, or 3. $k >0$, ${\#}$ is $>$ and $\Pi = \{0\}$, or 4. $k >0$, ${\#}$ is $<$, $x = 0$ and for no $*_i$, $*_i = -$, or 5. $k >0$, ${\#}$ is $<$ and $\Pi = \{0\}$, or 6. $k >0$, ${\#}$ is $\neq$ and $\Pi = \{0\}$. The restricted language is appropriate for applications where a restricted set of belief values is available. For instance, it could be used when the beliefs in arguments are obtained from surveys using the Likert scale. When we consider the proof theory for constraints, the restricted language also has advantages if we want to harness automated reasoning with the logical statements.

Distribution Disjunctive Normal Form
------------------------------------

In propositional logic, we often analyze formulae in various normal forms due to their useful properties. Traditional forms include the negation normal form (NNF), conjunctive normal form (CNF) and disjunctive normal form (DNF). Given that epistemic formulae extend propositional logic, they can also be transformed into various normal forms if we look at epistemic atoms as propositions. In principle, for every formula $\varphi$ we can find at least one formula $\varphi'$ that is in NNF, CNF or DNF and s.t. ${{\sf Sat}}(\varphi) = {{\sf Sat}}(\varphi')$. However, further notions can be introduced once we take the meaning of the atoms into account. In this section we introduce a normal form for epistemic formulae from which it is easy to read whether and how a given formula can be satisfied. Let us start by observing that for every probability distribution, we can create an epistemic formula describing precisely that distribution. As we may remember, a probability distribution maps sets of arguments to probabilities. For every such set, we can create a term (i.e.
a propositional formula over arguments) describing it, where arguments contained in the set appear as positive literals and those not in the set appear as negative literals. This brings us to the notion of argument complete terms: Let $\langle {{{\tt {A}}}}_1,\ldots,{{{\tt {A}}}}_n \rangle$ be the order of arguments in ${{\cal G}}$ and $\varphi \in {{\sf Terms}}({{\cal G}})$ a term. Then $\varphi$ is $\textbf{argument complete}$ iff it is of the form $\alpha_1 \land \ldots \land \alpha_n$, where $\alpha_i = {{{\tt {A}}}}_i$ or $\alpha_i = \neg {{{\tt {A}}}}_i$. With ${{\sf AComplete}}({{\cal G}}) = \{c_1, \ldots, c_j\}$ we denote the set of all complete terms on ${{\cal G}}$, where $j = 2^n$. Let us consider a graph with arguments ${{{\tt {A}}}}$, ${{{\tt {B}}}}$ and ${{{\tt {C}}}}$ and ordering $\langle {{{\tt {A}}}}, {{{\tt {B}}}}, {{{\tt {C}}}}\rangle$. We can create the following argument complete terms: $\neg {{{\tt {A}}}}\land \neg {{{\tt {B}}}}\land \neg {{{\tt {C}}}}$, ${{{\tt {A}}}}\land \neg {{{\tt {B}}}}\land \neg {{{\tt {C}}}}$, $\neg {{{\tt {A}}}}\land {{{\tt {B}}}}\land \neg {{{\tt {C}}}}$, $\neg {{{\tt {A}}}}\land \neg {{{\tt {B}}}}\land {{{\tt {C}}}}$, $ {{{\tt {A}}}}\land {{{\tt {B}}}}\land \neg {{{\tt {C}}}}$, $\neg {{{\tt {A}}}}\land {{{\tt {B}}}}\land {{{\tt {C}}}}$, $ {{{\tt {A}}}}\land \neg {{{\tt {B}}}}\land {{{\tt {C}}}}$, and $ {{{\tt {A}}}}\land {{{\tt {B}}}}\land {{{\tt {C}}}}$. By using atoms containing only complete terms, we can create a formula describing precisely one distribution: Let ${P}\in {{\sf Dist}}({{\cal G}})$ be a probability distribution and ${{\sf AComplete}}({{\cal G}}) = \{c_1, \ldots, c_j\}$ the collection of all argument complete terms for ${{\cal G}}$. The **epistemic formula associated with** ${P}$ is $\varphi^{P}= {p}(c_1) = x_1 \land {p}(c_2) = x_2 \land \ldots \land {p}(c_j) = x_j$, where $x_i = {P}(c_i)$. 
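The construction of argument complete terms and of the formula associated with a distribution can be sketched as follows. Terms are rendered as plain strings and all helper names are ours; this is an illustration of the definitions, not an implementation from the paper.

```python
from itertools import product

def acomplete(ordering):
    # All 2^n conjunctions in which every argument occurs, positively or negated.
    terms = []
    for signs in product([False, True], repeat=len(ordering)):
        lits = [a if s else "!" + a for a, s in zip(ordering, signs)]
        terms.append(" & ".join(lits))
    return terms

def associated_formula(ordering, P):
    # P maps frozensets of arguments to probabilities; each complete term c_i
    # has exactly one model, so the formula pins down p(c_i) = P(c_i).
    conjuncts = []
    for signs in product([False, True], repeat=len(ordering)):
        gamma = frozenset(a for a, s in zip(ordering, signs) if s)
        lits = [a if s else "!" + a for a, s in zip(ordering, signs)]
        conjuncts.append("p(" + " & ".join(lits) + ") = " + str(P.get(gamma, 0)))
    return " and ".join(conjuncts)

# A hypothetical distribution putting all mass on the single model {A}.
P1 = {frozenset({"A"}): 1}
formula = associated_formula(["A", "B"], P1)
# formula contains the conjunct "p(A & !B) = 1" and zero for the other three terms
```

Since every subset of the nodes is the unique model of exactly one argument complete term, fixing the value of each such term determines the whole distribution, which is the content of Proposition \[satdistrdnf\].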
[proposition]{}[satdistrdnf]{} \[satdistrdnf\] Let ${P}\in {{\sf Dist}}({{\cal G}})$ be a probability distribution and $\varphi^{P}$ its associated epistemic formula. Then $\{{P}\} = {{\sf Sat}}(\varphi^{P})$. \[ex:ddnftabs\] Assume we have a graph s.t. $\{{{{\tt {A}}}}, {{{\tt {B}}}}\} = {{\sf Nodes}}({{\cal G}})$. Below we have tabulated some of the possible distributions for our graph and their associated formulae. $$\begin{array}{c|cccc|c} & \emptyset & \{{{{\tt {A}}}}\} & \{{{{\tt {B}}}}\} & \{{{{\tt {A}}}},{{{\tt {B}}}}\} & \varphi^{P_i} \\ \hline {P}_1 & 0 & 1 & 0 & 0 & {p}(\neg {{{\tt {A}}}}\land \neg {{{\tt {B}}}}) = 0 \land {p}( {{{\tt {A}}}}\land \neg {{{\tt {B}}}}) = 1 \land {p}(\neg {{{\tt {A}}}}\land {{{\tt {B}}}}) = 0 \land {p}( {{{\tt {A}}}}\land {{{\tt {B}}}}) = 0 \\ {P}_2 & 0 & 0 & 1 & 0 & {p}(\neg {{{\tt {A}}}}\land \neg {{{\tt {B}}}}) = 0 \land {p}( {{{\tt {A}}}}\land \neg {{{\tt {B}}}}) = 0 \land {p}(\neg {{{\tt {A}}}}\land {{{\tt {B}}}}) = 1 \land {p}( {{{\tt {A}}}}\land {{{\tt {B}}}}) = 0 \\ {P}_3 & 0 & 0 & 0 & 1 & {p}(\neg {{{\tt {A}}}}\land \neg {{{\tt {B}}}}) = 0 \land {p}( {{{\tt {A}}}}\land \neg {{{\tt {B}}}}) = 0 \land {p}(\neg {{{\tt {A}}}}\land {{{\tt {B}}}}) = 0 \land {p}( {{{\tt {A}}}}\land {{{\tt {B}}}}) = 1 \\ {P}_4 & 0 & 0.5 & 0 & 0.5 & {p}(\neg {{{\tt {A}}}}\land \neg {{{\tt {B}}}}) = 0 \land {p}( {{{\tt {A}}}}\land \neg {{{\tt {B}}}}) = 0.5 \land {p}(\neg {{{\tt {A}}}}\land {{{\tt {B}}}}) = 0 \land {p}( {{{\tt {A}}}}\land {{{\tt {B}}}}) = 0.5 \\ {P}_5 & 0 & 0 & 0.5 & 0.5 & {p}(\neg {{{\tt {A}}}}\land \neg {{{\tt {B}}}}) = 0 \land {p}( {{{\tt {A}}}}\land \neg {{{\tt {B}}}}) = 0 \land {p}(\neg {{{\tt {A}}}}\land {{{\tt {B}}}}) = 0.5 \land {p}( {{{\tt {A}}}}\land {{{\tt {B}}}}) = 0.5 \\ {P}_6 & 0 & 0.5 & 0.5 & 0 & {p}(\neg {{{\tt {A}}}}\land \neg {{{\tt {B}}}}) = 0 \land {p}( {{{\tt {A}}}}\land \neg {{{\tt {B}}}}) = 0.5 \land {p}(\neg {{{\tt {A}}}}\land {{{\tt {B}}}}) = 0.5 \land {p}( {{{\tt {A}}}}\land {{{\tt {B}}}}) = 0 \\
{P}_7 & 0.1 & 0.3 & 0.2 & 0.4 & {p}(\neg {{{\tt {A}}}}\land \neg {{{\tt {B}}}}) = 0.1 \land {p}( {{{\tt {A}}}}\land \neg {{{\tt {B}}}}) = 0.3 \land {p}(\neg {{{\tt {A}}}}\land {{{\tt {B}}}}) = 0.2 \land {p}( {{{\tt {A}}}}\land {{{\tt {B}}}}) = 0.4 \\ \end{array}$$ Consequently, for every epistemic formula $\varphi$, we can create a semantically equivalent formula $\varphi'$ that is built from the formulae associated with the distributions satisfying $\varphi$. We refer to this new formula as the distribution disjunctive normal form. Given that an epistemic formula can potentially be satisfied by infinitely many distributions, we only consider this form in the context of restricted reasoning. Let $\Pi$ be a reasonable restricted value set, $\psi \in {{\sf EFormulae}}({{\cal G}}, \Pi)$ be a restricted epistemic formula and $\{{P}_1,\ldots,{P}_n\} = {{\sf Sat}}(\psi, \Pi)$ the set of distributions satisfying $\psi$ under $\Pi$. The **distribution disjunctive normal form** (abbreviated DDNF) of $\psi$ is $\bot$ iff ${{\sf Sat}}(\psi, \Pi) = \emptyset$, and $\varphi^{{P}_1} \lor \varphi^{{P}_2} \lor \ldots \lor \varphi^{{P}_n}$ otherwise, where $\varphi^{{P}_i}$ is the epistemic formula associated with ${P}_i$. [proposition]{}[satdistrdnfform]{} \[satdistrdnfform\] Let $\Pi$ be a reasonable restricted value set, $\psi \in {{\sf EFormulae}}({{\cal G}}, \Pi)$ be a restricted epistemic formula and $\varphi$ its distribution disjunctive normal form. Then ${{\sf Sat}}(\psi, \Pi) = {{\sf Sat}}(\varphi, \Pi)$. \[ex:ddnf1\] Let us continue Example \[ex:ddnftabs\] and assume we have an epistemic atom ${p}({{{\tt {A}}}}\lor {{{\tt {B}}}}) > 0.5$ and a reasonable restricted value set $\Pi = \{0, 0.5, 1\}$.
Distributions ${P}_1$ to ${P}_6$ are the restricted satisfying distributions of our formula and the DDNF associated with ${p}({{{\tt {A}}}}\lor {{{\tt {B}}}}) > 0.5$ is $\varphi^{{P}_1} \lor \varphi^{{P}_2} \lor \varphi^{{P}_3} \lor \varphi^{{P}_4} \lor \varphi^{{P}_5} \lor \varphi^{{P}_6}$. We will harness the DDNF when we provide correctness results for the consequence relation for the epistemic language in Section \[section:consequencerelation\]. Reasoning with the Epistemic Language {#section:ReasoningConstraints} ===================================== Previously, we have considered the syntax and semantics of our epistemic language. However, we have not yet explained how two epistemic formulae can be related based on their satisfying distributions, or what can be logically inferred from a given formula. We would like to address this here by first introducing the notion of epistemic entailment and then by providing a consequence relation, with the latter primarily focused on the restricted language. From now on, unless stated otherwise, we will assume that the argumentation framework we are dealing with is finite and nonempty (i.e. the set of arguments in the graph is finite and nonempty). Epistemic Entailment -------------------- Let us start with the unrestricted epistemic entailment relation, which is defined in the following manner: Let $\{\phi_1,\ldots,\phi_n\} \subseteq {{\sf EFormulae}}({{\cal G}})$ be a set of epistemic formulae, and $\psi \in {{\sf EFormulae}}({{\cal G}})$ be an epistemic formula. The [**epistemic entailment relation**]{}, denoted $\VDash$, is defined as follows. $$\{\phi_1,\ldots,\phi_n\}\VDash\psi \mbox{ iff } {{\sf Sat}}(\{\phi_1,\ldots,\phi_n\}) \subseteq {{\sf Sat}}(\psi)$$ The following are some instances of epistemic entailment. 
- $\{ {p}({{{\tt {A}}}}) < 0.2 \} \VDash {p}({{{\tt {A}}}}) < 0.3$ - $\{ {p}({{{\tt {A}}}}) < 0.2 \} \VDash {p}({{{\tt {A}}}}\land {{{\tt {B}}}}) < 0.2$ - $\{ {p}({{{\tt {A}}}}) < 0.9, {p}({{{\tt {A}}}}) > 0.7 \} \VDash {p}({{{\tt {A}}}}) \geq 0.7 \land \neg ({p}({{{\tt {A}}}}) > 0.9)$ Let us now focus on reasoning in the restricted scenario, which can be defined similarly to the standard epistemic entailment through the use of restricted satisfying distributions: Let $\Pi$ be a restricted value set, $\{\phi_1,\ldots,\phi_n\} \subseteq {{\sf EFormulae}}({{\cal G}},\Pi)$ a set of epistemic formulae, and $\psi \in {{\sf EFormulae}}({{\cal G}})$ an epistemic formula. The [**restricted epistemic entailment relation**]{} w.r.t. $\Pi$, denoted $\VDash_{\Pi}$, is defined as follows. $$\{\phi_1,\ldots,\phi_n\}\VDash_{\Pi} \psi \mbox{ iff } {{\sf Sat}}(\{\phi_1,\ldots,\phi_n\},\Pi) \subseteq {{\sf Sat}}(\psi,\Pi)$$ Consider $\Pi = \{ 0, 0.25, 0.5, 0.75, 1 \}$ and restricted epistemic formulae ${p}({{{\tt {A}}}}) + {p}(\neg {{{\tt {B}}}}) \leq 1$ and ${p}({{{\tt {A}}}}) + {p}(\neg {{{\tt {B}}}}) \leq 0.75$. It holds that $$\{ {p}({{{\tt {A}}}}) + {p}(\neg {{{\tt {B}}}}) \leq 0.75 \} \VDash_{\Pi} {p}({{{\tt {A}}}}) + {p}(\neg {{{\tt {B}}}}) \leq 1$$ Let us now discuss how the restricted satisfying distributions and the restricted entailment are related to the unrestricted versions. First of all, by Definition \[def:restricteddist\], we can observe that every restricted satisfying distribution for an epistemic formula is also a satisfying distribution. Thus, we can easily show that epistemic entailment implies restricted entailment: [proposition]{}[temp]{} \[temp1\] Let $\Pi$ be a restricted value set, $\Phi \subseteq {{\sf EFormulae}}({{\cal G}},\Pi)$ a set of epistemic formulae, and $\psi \in {{\sf EFormulae}}({{\cal G}})$ an epistemic formula. If $\Phi \VDash \psi$ then $\Phi \VDash_{\Pi} \psi$. 
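Since both $\Pi$ and the set of arguments are finite, restricted entailment can be decided by brute force: enumerate every distribution whose state probabilities are drawn from $\Pi$ and sum to 1, and check the inclusion of satisfying sets. A minimal sketch follows, with formulae represented as Python predicates over a distribution; this encoding is ours, not a representation used in the paper. Exact rational arithmetic avoids floating-point comparison issues.

```python
from fractions import Fraction
from itertools import product

def restricted_distributions(n_args, pi):
    """All distributions over the 2^n_args states whose individual
    values are drawn from the restricted set pi and sum to 1."""
    states = list(product([False, True], repeat=n_args))
    for values in product(pi, repeat=len(states)):
        if sum(values) == 1:
            yield dict(zip(states, values))

def entails(premises, conclusion, n_args, pi):
    """Phi |=_Pi psi: every restricted distribution satisfying all
    premises also satisfies the conclusion."""
    return all(conclusion(P)
               for P in restricted_distributions(n_args, pi)
               if all(phi(P) for phi in premises))

# p(A): total mass of the states in which the first argument holds.
def p_A(P):
    return sum(v for s, v in P.items() if s[0])

PI = [Fraction(i, 4) for i in range(5)]   # {0, 0.25, 0.5, 0.75, 1}
# Mirroring the first entailment example: { p(A) < 0.2 } |=_PI p(A) < 0.3.
assert entails([lambda P: p_A(P) < Fraction(1, 5)],
               lambda P: p_A(P) < Fraction(3, 10), 2, PI)
```

The enumeration is exponential in the number of arguments and in $|\Pi|$, so this is a specification-level check rather than a practical reasoner; constraint satisfaction techniques, discussed later, are the intended route for implementations.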
In principle, we can observe that a “less” restricted entailment implies a “more” restricted one: [proposition]{}[tempmore]{} \[temp2\] Let $\Pi_1 \subseteq \Pi_2$ be restricted value sets, $\Phi \subseteq {{\sf EFormulae}}({{\cal G}},\Pi_1)$ a set of epistemic formulae, and $\psi \in {{\sf EFormulae}}({{\cal G}})$ an epistemic formula. If $\Phi \VDash_{\Pi_2} \psi$ then $\Phi \VDash_{\Pi_1} \psi$. Note that it does not necessarily hold that if one formula follows from another in the restricted setting, then it also follows in the unrestricted one, as illustrated below: Consider two formulae $\varphi_1: {p}({{{\tt {A}}}}) \neq 0.5$ and $\varphi_2: {p}({{{\tt {A}}}}) = 0 \lor {p}({{{\tt {A}}}}) = 1$ and a reasonable restricted set $\Pi = \{0, 0.5, 1\}$. We can observe that ${{\sf Sat}}(\varphi_1, \Pi) = {{\sf Sat}}(\varphi_2, \Pi)$ and therefore $\{\varphi_1\} \VDash_{\Pi} \varphi_2$. However, in the unrestricted case we can consider a probability distribution ${P}$ s.t. ${P}({{{\tt {A}}}}) = 0.9$ in order to show that ${{\sf Sat}}(\varphi_1) \nsubseteq {{\sf Sat}}(\varphi_2)$. We can observe that this issue would have been bypassed if, instead of $\Pi$, we had considered the set $\Pi_2 = \{0, 0.25, 0.5, 0.75, 1\}$, for which $\{\varphi_1\} \nVDash_{\Pi_2} \varphi_2$. Consequently, although restricted entailment does not in general imply unrestricted entailment, for a given set of formulae it is possible to find such a $\Pi$ for which this property holds. The reason that an inference from the restricted entailment relation is not necessarily an inference from the unrestricted entailment relation is that the restricted case contains more information. The set $\Pi$ is extra information that restricts the possible assignments for the probability distribution. Indeed, it could equivalently be represented as a set of formulae that could be added to the left-hand side of the unrestricted entailment relation.
This is analogous to the use of explicit formulae on the domain in order to formalise the closed world assumption in predicate logic [@Reiter1978]. Consequence Relation {#section:consequencerelation} -------------------- In order to provide a proof theoretic counterpart to the entailment relation, we present a consequence relation in this subsection. For this, we will focus on the restricted language. The advantage of having a consequence relation is that we can now obtain inferences from a set of epistemic formulae. This means we can for instance determine if one constraint is implied by another, and whether there is redundancy in a set of constraints (i.e. whether the set is not minimal). More generally, we will see that both the entailment and consequence relations are important in examining properties of epistemic graphs as covered in Section \[sec:epistemicgraphs\]. Before we present the epistemic proof system, we introduce some subsidiary definitions associated with the arithmetic nature of operational formulae. Although they are not limited to restricted formulae only, we prefer to have them at hand due to the fact that we will be using them in our epistemic proof system: Let $f_1 {:}{p}(\alpha_1) *_1 {p}(\alpha_2) *_2 \ldots *_{m-1} {p}(\alpha_m)$, and $f_2 {:}{p}(\beta_1) \star_1 {p}(\beta_2) \star_2 \ldots \star_{l-1} {p}(\beta_l)$, where $\alpha_i, \beta_i \in {{\sf Terms}}({{\cal G}})$ and $*_i, \star_i \in \{+, -\}$, be operational formulae. $f_1 \succeq_{su} f_2$ denotes the [**subject inequality relation**]{} that holds when $f_2$ is obtained from $f_1$ by logical weakening of an element ${p}(\alpha_i)$ of $f_1$ to ${p}(\alpha'_i)$ where $\{\alpha_i\}\vdash\alpha'_i$, and all other elements are the same in $f_1$ and $f_2$. 
Additionally: - with $f_1 \succeq_{su}^+ f_2$ we denote the case where $f_1 \succeq_{su} f_2$ and either $i=1$ or $*_{i-1} = +$ - with $f_1 \succeq_{su}^- f_2$ we denote the case where $f_1 \succeq_{su} f_2$, $i > 1$ and $*_{i-1} = -$ Let $\varphi_1 = f_1 {\#}x$ and $\varphi_2 = f_2 {\#}x$, where ${\#}\in {\{=, \neq, \geq, \leq,>,<\}}$ and $x \in [0,1]$, be epistemic atoms. We say that $\varphi_1 \succeq_{su} \varphi_2$ iff $f_1 \succeq_{su} f_2$ and with $\varphi_1 \succeq_{su}^+ \varphi_2$ (resp. $\varphi_1 \succeq_{su}^- \varphi_2$) we denote the case where $f_1 \succeq_{su}^+ f_2$ (resp. $f_1 \succeq_{su}^- f_2$). The following illustrate the subject inequality relation. $$\begin{array}{c} {p}({{{\tt {B}}}}) - {p}({{{\tt {A}}}}\land {{{\tt {C}}}}) > x \succeq^+_{su} {p}({{{\tt {B}}}}\vee {{{\tt {D}}}}) -{p}({{{\tt {A}}}}\land {{{\tt {C}}}}) >x \\ {p}({{{\tt {B}}}}) - {p}({{{\tt {A}}}}\land {{{\tt {C}}}}\land {{{\tt {E}}}}) < x \succeq^-_{su} {p}({{{\tt {B}}}}) - {p}({{{\tt {A}}}}\land {{{\tt {C}}}}) < x \\ \end{array}$$ [proposition]{}[propformsat]{} \[prop:formsat\] For epistemic atoms $\varphi_1= f_1 {\#}x$ and $\varphi_2 = f_2 {\#}x$ in ${{\sf EFormulae}}({{\cal G}},\Pi)$, the following hold: - if $\varphi_1 \succeq_{su}^+ \varphi_2$ and ${\#}\in \{> ,\geq\}$ then ${{\sf Sat}}(\varphi_1) \subseteq {{\sf Sat}}(\varphi_2)$, and if ${\#}\in \{<, \leq\}$, then ${{\sf Sat}}(\varphi_2) \subseteq {{\sf Sat}}(\varphi_1)$ - if $\varphi_1 \succeq_{su}^- \varphi_2$ and ${\#}\in \{< ,\leq\}$ then ${{\sf Sat}}(\varphi_1) \subseteq {{\sf Sat}}(\varphi_2)$, and if ${\#}\in \{>, \geq\}$, then ${{\sf Sat}}(\varphi_2) \subseteq {{\sf Sat}}(\varphi_1)$ We can now introduce the proof rule system for the epistemic formulae. The basic rules grasp the primitive properties of probabilities, i.e. that any probability is in the unit interval, and that probabilities of $\top$ and $\bot$ are respectively $1$ and $0$. 
The probabilistic rule allows us to express the probability of a conjunction (disjunction) of two argument terms through the probabilities of these terms. The subject rules capture the behaviour of epistemic formulae that are connected through the subject inequality relation. The enumeration rules allow us to transform any inequality into a formula using only equality under the given restricted set $\Pi$. However, given the results of Proposition \[restrnonempty\], in some cases it can happen that the appropriate subsets of $\Pi$ are empty. Thus, wherever applicable, we make it clear that the resulting formula should be seen as falsity. Finally, the propositional rules capture how the reasoning extends classical propositional logic. Let ${\#}\in {\{=, \neq, \geq, \leq,>,<\}}$ and $x \in [0,1]$. Let $\Pi$ be a restricted value set, and $\Pi^{x,(*_1,\ldots,*_{k-1})}_{{\#}}$ be the combination set of $\Pi$ obtained according to the value $x$, the relationship ${\#}$ and the sequence of arithmetic operations $(*_1,\ldots,*_{k-1})$. Also let $f_1 {:}{p}(\alpha_1) *_1 {p}(\alpha_2) *_2 \ldots *_{k-1} {p}(\alpha_k)$ and $f_2 {:}{p}(\beta_1) \star_1 {p}(\beta_2) \star_2 \ldots \star_{l-1} {p}(\beta_l)$, where $k,l \geq 1$, $\alpha_i,\beta_i \in {{\sf Terms}}({{\cal G}})$ and $\star_j,*_i \in \{+ ,-\}$, be operational formulae. The [**restricted epistemic consequence relation**]{}, denoted $\Vdash_\Pi$, is defined as follows, where $\vdash$ is the propositional consequence relation, $\Phi \subseteq {{\sf EFormulae}}({{\cal G}},\Pi)$, and $\phi,\psi \in {{\sf EFormulae}}({{\cal G}},\Pi)$.
The following proof rules are the [**basic rules**]{}: $$\begin{aligned} (B1) \hspace{1mm} \Phi\Vdash_\Pi {p}(\alpha) \geq 0 & \mbox{ iff } \Phi\Vdash_\Pi \top & (B2) \hspace{1mm} \Phi\Vdash_\Pi {p}(\alpha) \leq 1 & \mbox{ iff } \Phi\Vdash_\Pi \top \\ (B3) \hspace{1mm} \Phi\Vdash_\Pi {p}(\top) = 1 & \mbox{ iff } \Phi\Vdash_\Pi \top & (B4) \hspace{1mm} \Phi\Vdash_\Pi {p}(\bot) = 0 & \mbox{ iff } \Phi\Vdash_\Pi \top \end{aligned}$$ The following rule is the [**probabilistic rule**]{}: $$\begin{array}{rl} (PR1) & \Phi\Vdash_\Pi {p}(\alpha \lor \beta) - {p}(\alpha) - {p}(\beta) + {p}(\alpha \land \beta) = 0 \\ \end{array}$$ The following proof rules are the [**subject rules**]{}. $$\begin{array}{rl} (S1) & \Phi\Vdash_\Pi f_1 > x \mbox{ and } f_1 \succeq_{su}^+ f_2 \mbox{ implies } \Phi\Vdash_\Pi f_2 > x \\ (S2) & \Phi\Vdash_\Pi f_1 \geq x \mbox{ and } f_1 \succeq_{su}^+ f_2 \mbox{ implies } \Phi\Vdash_\Pi f_2 \geq x \\ (S3) & \Phi\Vdash_\Pi f_1 < x \mbox{ and } f_1 \succeq_{su}^- f_2 \mbox{ implies } \Phi\Vdash_\Pi f_2 < x \\ (S4) & \Phi\Vdash_\Pi f_1 \leq x \mbox{ and } f_1 \succeq_{su}^- f_2 \mbox{ implies } \Phi\Vdash_\Pi f_2 \leq x \\ (S5) & \Phi\Vdash_\Pi f_2 < x \mbox{ and } f_1 \succeq_{su}^+ f_2 \mbox{ implies } \Phi\Vdash_\Pi f_1 < x \\ (S6) & \Phi\Vdash_\Pi f_2 \leq x \mbox{ and } f_1 \succeq_{su}^+ f_2 \mbox{ implies } \Phi\Vdash_\Pi f_1 \leq x \\ (S7) & \Phi\Vdash_\Pi f_2 > x \mbox{ and } f_1 \succeq_{su}^- f_2 \mbox{ implies } \Phi\Vdash_\Pi f_1 > x \\ (S8) & \Phi\Vdash_\Pi f_2 \geq x \mbox{ and } f_1 \succeq_{su}^- f_2 \mbox{ implies } \Phi\Vdash_\Pi f_1 \geq x \\ \end{array}$$ The next rules are the [**enumeration rules**]{}. 
$$\begin{aligned} (E1) \hspace{1mm} \Phi\Vdash_\Pi f_1 {\#}x & \mbox{ iff } (\Phi\Vdash_\Pi \bigvee_{(v_1,\ldots,v_k) \in \Pi^{x, {{\sf AOp}}(f_1)}_{{\#}}} ({p}(\alpha_1) = v_1 \land {p}(\alpha_2) = v_2 \land \ldots \land {p}(\alpha_k)= v_k) \\ &\mbox{ if } \Pi^{x, {{\sf AOp}}(f_1)}_{\#}\neq \emptyset \mbox{ and } \Phi \Vdash_\Pi \bot \mbox { otherwise}) \\ (E2) \hspace{1mm} \Phi\Vdash_\Pi f_1 > x & \mbox{ iff } \Phi\Vdash_\Pi \neg(\bigvee_{(v_1,\ldots,v_k) \in \Pi^{x, {{\sf AOp}}(f_1)}_{\leq}} ({p}(\alpha_1) = v_1 \land {p}(\alpha_2) = v_2 \land \ldots \land {p}(\alpha_k)= v_k)) \\ (E3)\hspace{1mm}\Phi\Vdash_\Pi f_1 \geq x & \mbox{ iff } (\Phi\Vdash_\Pi \neg(\bigvee_{(v_1,\ldots,v_k) \in \Pi^{x, {{\sf AOp}}(f_1)}_{<}} ({p}(\alpha_1) = v_1 \land {p}(\alpha_2) = v_2 \land \ldots \land {p}(\alpha_k)= v_k))\\ &\mbox{ if } \Pi^{x, {{\sf AOp}}(f_1)}_{<} \neq \emptyset \mbox{ and } \Phi \Vdash_\Pi \neg(\bot) \mbox { otherwise}) \\ (E4) \hspace{1mm} \Phi\Vdash_\Pi f_1 < x & \mbox{ iff } \Phi\Vdash_\Pi \neg(\bigvee_{(v_1,\ldots,v_k) \in \Pi^{x, {{\sf AOp}}(f_1)}_{\geq}} ({p}(\alpha_1) = v_1 \land {p}(\alpha_2) = v_2 \land \ldots \land {p}(\alpha_k)= v_k)) \\ (E5) \hspace{1mm} \Phi\Vdash_\Pi f_1 \leq x & \mbox{ iff } (\Phi\Vdash_\Pi \neg(\bigvee_{(v_1,\ldots,v_k) \in \Pi^{x,{{\sf AOp}}(f_1)}_{>}} ({p}(\alpha_1) = v_1 \land {p}(\alpha_2) = v_2 \land \ldots \land {p}(\alpha_k)= v_k))\\ &\mbox{ if } \Pi^{x, {{\sf AOp}}(f_1)}_{>} \neq \emptyset \mbox{ and } \Phi \Vdash_\Pi \neg(\bot) \mbox { otherwise}) \\ \end{aligned}$$ The following proof rules are the [**propositional rules**]{}. 
$$\begin{array}{rl} (P1) & \Phi\Vdash_\Pi\phi_1 \mbox{ and } \ldots \mbox{ and }\Phi\Vdash_\Pi \phi_n \mbox{ and } n \geq 1 \mbox{ and } \{\phi_1,\ldots,\phi_n\}\vdash\psi \mbox{ implies } \Phi\Vdash_\Pi \psi \\ (P2) & \mbox{ if } \Phi\vdash \varphi \mbox{ then } \Phi\Vdash_\Pi \varphi \end{array}$$ For $\Pi = \{0, 0.2, 0.4, 0.6, 0.8, 1.0\}$, the following illustrate the restricted epistemic consequence relation. - $\{ {p}({{{\tt {A}}}}) + {p}({{{\tt {B}}}}) \leq 1, {p}({{{\tt {A}}}}) - {p}({{{\tt {B}}}}) \geq 1 \} \Vdash_\Pi {p}({{{\tt {A}}}}) + {p}({{{\tt {B}}}}) = 1$ - $\{ {p}({{{\tt {A}}}}) > 0.8, {p}({{{\tt {A}}}}) > 0.5 \rightarrow {p}({{{\tt {B}}}}) > 0.5 \} \Vdash_\Pi {p}({{{\tt {B}}}}) > 0.5$ - $\{ {p}({{{\tt {C}}}}) > 0.5 \rightarrow {p}({{{\tt {B}}}}) > 0.5 \land {p}({{{\tt {A}}}}) > 0.5, {p}({{{\tt {B}}}}) > 0.5 \rightarrow {p}({{{\tt {A}}}}) \leq 0.5 \} \Vdash_\Pi \neg ({p}({{{\tt {C}}}}) > 0.5)$ - $\{ {p}({{{\tt {A}}}}) > 0.6 \} \Vdash_\Pi {p}({{{\tt {A}}}}) = 0.8 \vee {p}({{{\tt {A}}}}) = 1$ We can use the epistemic consequence relation to infer relationships between unconnected nodes as illustrated next. \[ex:reasoning:2\] For the following graph, consider the formulae ${{\cal C}}= \{ {p}({{{\tt {C}}}}) > 0.5 \rightarrow {p}({{{\tt {B}}}}) > 0.5, {p}({{{\tt {B}}}}) > 0.5 \rightarrow {p}({{{\tt {A}}}}) \leq 0.5 \}$. From ${{\cal C}}$, we can infer ${p}({{{\tt {C}}}}) > 0.5 \rightarrow {p}({{{\tt {A}}}}) \leq 0.5$. (Figure: a graph with a positive edge from ${{{\tt {C}}}}$ to ${{{\tt {B}}}}$ and a negative edge from ${{{\tt {B}}}}$ to ${{{\tt {A}}}}$.) The following is a correctness result showing that the restricted epistemic consequence relation is sound with respect to the restricted epistemic entailment relation. [proposition]{}[restrictedvalsound]{} \[prop:restrictedvalsound\] Let $\Pi$ be a restricted value set.
For $\Phi \subseteq {{\sf EFormulae}}({{\cal G}},\Pi)$, and $\psi \in {{\sf EFormulae}}({{\cal G}},\Pi)$, if $\Phi \Vdash_\Pi \psi$ then $\Phi\VDash_{\Pi}\psi$. However, as is often the case, completeness is somewhat more difficult to show. We may recall that for every probability distribution, we can create an epistemic formula describing precisely that distribution. From the disjunction of such formulae, we have created the distribution disjunctive normal form (DDNF) of every formula, whose models are identical with those of the original formula. The challenge of the completeness proof is therefore to show that the DDNF of a given formula is equivalent to it not only semantically, but also syntactically. This can be achieved by first transforming every term into a disjunction of argument complete terms, then splitting each epistemic atom into further atoms s.t. every one of them contains precisely one complete term through the use of the probabilistic rules. The probabilities of the complete terms that are not yet present can be inferred from the ones that are, and we can use all of this to show the syntactic equivalence of the epistemic formula and its DDNF: [proposition]{}[valdistributiondnf]{} \[valdistributiondnf\] Let $\Pi$ be a reasonable restricted value set, $\Phi \subseteq {{\sf EFormulae}}({{\cal G}},\Pi)$ a set of epistemic formulae and $\psi \in {{\sf EFormulae}}({{\cal G}},\Pi)$ an epistemic formula. Then $\Phi \Vdash_\Pi \psi$ iff $\Phi \Vdash_\Pi \varphi$, where $\varphi$ is the distribution disjunctive normal form of $\psi$. The ability to transform any formula into its DDNF both semantically and syntactically, along with the previous soundness results, brings us to the final correctness result for our system: [proposition]{}[restrictedvalsoundcomp]{} \[prop:restrictedvalsoundcomp\] Let $\Pi$ be a restricted value set.
For $\Phi \subseteq {{\sf EFormulae}}({{\cal G}},\Pi)$, and $\psi \in {{\sf EFormulae}}({{\cal G}},\Pi)$, $\Phi \Vdash_\Pi \psi$ iff $\Phi\VDash_{\Pi}\psi$. In addition, the following property can be shown, which indicates that we can develop algorithms for automated reasoning based on proof by contradiction: [proposition]{}[negval]{} \[prop:negval\] Let $\Pi$ be a reasonable restricted value set. For $\Phi \subseteq {{\sf EFormulae}}({{\cal G}},\Pi)$ and $\psi \in {{\sf EFormulae}}({{\cal G}},\Pi)$, $\Phi\Vdash_\Pi \psi \mbox{ iff } \Phi \cup \{\neg \psi\} \Vdash_{\Pi} \bot$. We can also observe that for a finite set of rational numbers from the unit interval $\Pi$, representing and reasoning with the restricted epistemic language w.r.t. $\Pi$ is equivalent to propositional logic. We show this via the next two lemmas. [lemma]{}[lemmavformula]{} \[lemma:vformula\] Let $\Pi$ be a reasonable restricted value set. There is a set of propositional formulae $\Omega$ with $\Lambda \subseteq \Omega$, and there is a function $f: {{\sf EFormulae}}({{\cal G}},\Pi) \rightarrow \Omega$ s.t. for each $\{\phi_1,\ldots,\phi_n\} \subseteq {{\sf EFormulae}}({{\cal G}},\Pi)$, and for each $\psi \in {{\sf EFormulae}}({{\cal G}},\Pi)$, $$\{\phi_1,\ldots,\phi_n\} \Vdash_\Pi \psi \mbox{ iff } \{f(\phi_1),\ldots,f(\phi_n)\} \cup \Lambda \vdash f(\psi)$$ [lemma]{}[restrictedequivalencetwo]{} \[lemma:2\] Let $\Omega$ be a propositional language composed from a set of atoms and the usual definitions for the Boolean connectives. There is a restricted epistemic language ${{\sf EFormulae}}({{\cal G}},\Pi)$ where $\Pi = \{ 0,1 \}$ and there is a function $g: \Omega \rightarrow {{\sf EFormulae}}({{\cal G}},\Pi)$ s.t. 
for each set of propositional formulae $\{\alpha_1,\ldots,\alpha_n\} \subseteq \Omega$ and for each propositional formula $\beta \in \Omega$, $$\{\alpha_1,\ldots,\alpha_n\} \vdash \beta \mbox{ iff } \{g(\alpha_1),\ldots,g(\alpha_n)\} \Vdash_\Pi g(\beta)$$ From Lemma \[lemma:vformula\] and Lemma \[lemma:2\], we obtain the following result. This means that whatever can be represented or inferred in the restricted epistemic language can be represented or inferred in the classical propositional language and vice versa. [proposition]{}[propclassequivalenttwo]{} The restricted epistemic language with the restricted epistemic consequence relation is equivalent to the classical propositional language with the classical propositional consequence relation. The restricted language (where the values for the inequalities are restricted to a finite set of values from the unit interval) allows for inequalities to be rewritten as a disjunction of equalities. This then allows for an epistemic consequence relation to be defined as a conservative extension of the classical propositional consequence relation. The advantage of this restricted version is that it can be easily implemented using *constraint satisfaction* techniques [@Dechter2003; @Rossi:2006; @Tsang1993]. These allow for a declarative representation of constraints and provide sophisticated methods for determining solutions. For some applications, such as user modelling in persuasion dialogues, having a restricted set of values (such as one corresponding to a Likert scale) would offer a sufficiently rich framework.

Closure
-------

Last but not least, we define the notion of epistemic closure, which will become particularly useful in the analysis of relation coverage and labelings in Sections \[sec:RelationCoverage\] and \[sec:consistentlabel\]. To put it simply, closure produces the set of all formulae derivable from a given set: Let $\Phi \subseteq {{\sf EFormulae}}({{\cal G}})$.
The [**epistemic closure function**]{} is defined as follows. $${{\sf Closure}}(\Phi) = \{ \psi \mid \Phi\VDash \psi \}$$ We can observe that closure can produce infinitely many formulae that, depending on how we intend to use it, can be seen as redundant. For example, from a formula ${p}({{{\tt {A}}}}) > 0.5$ we can derive ${p}({{{\tt {A}}}})>y$ for every real number $y \in [0,0.5]$. Consequently, in many cases it makes sense to focus on closure w.r.t. a given reasonable restricted set of values $\Pi$: Let $\Pi$ be a reasonable restricted value set, and let $\Phi \subseteq {{\sf EFormulae}}({{\cal G}},\Pi)$. The [**restricted epistemic closure function**]{} is defined as follows. $${{\sf Closure}}(\Phi,\Pi) = \{ \psi \mid \Phi\VDash_\Pi \psi \}$$ Given the soundness and completeness results for our proof systems, we can observe that closure can also be defined using $\Vdash_\Pi$. The closure function is monotonic in the set of formulae and, by Proposition \[temp2\], antitone in the value set (i.e. if $\Phi\subseteq\Phi'$ and $\Pi'\subseteq\Pi$, then ${{\sf Closure}}(\Phi,\Pi) \subseteq {{\sf Closure}}(\Phi',\Pi')$). Let us consider the reasonable restricted value set $\Pi = \{0, 0.1, 0.2, \ldots, 0.9, 1\}$ and the set of formulae $\Phi = \{{p}({{{\tt {A}}}}) < 0.5, ({p}({{{\tt {B}}}}) > 0.5 \land {p}({{{\tt {A}}}}) > 0.4) \rightarrow {p}({{{\tt {C}}}}) > 0.6, {p}({{{\tt {C}}}}) = 1 \rightarrow {p}({{{\tt {B}}}}) = 0.9\}$. We can observe that $\Phi \VDash_\Pi {p}({{{\tt {A}}}}) \leq x$ for $x \in \{0.5, 0.6, \ldots, 1\}$, thus these formulae belong to the (both restricted and standard) closure of $\Phi$. On the other hand, the formula ${p}({{{\tt {A}}}}) = 0.7$ does not. The formula ${p}({{{\tt {A}}}}) < 0.5 \land ({p}({{{\tt {B}}}}) \leq 0.5 \lor {p}({{{\tt {A}}}}) \leq 0.4 \lor {p}({{{\tt {C}}}}) > 0.6) \land ({p}({{{\tt {C}}}}) < 1 \lor {p}({{{\tt {B}}}}) = 0.9)$ also belongs to the closure. The formula ${p}({{{\tt {B}}}}) = 0.8 \land {p}({{{\tt {C}}}}) = 0.2$ does not.
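Membership in the restricted closure is decidable by enumerating the finitely many distributions over $\Pi$, since ${{\sf Closure}}(\Phi,\Pi)$ contains exactly the formulae entailed by $\Phi$ under $\Pi$. Below is a minimal single-argument sketch; formulae are encoded as Python predicates over a distribution, which is our own encoding rather than the paper's.

```python
from fractions import Fraction
from itertools import product

PI = [Fraction(i, 10) for i in range(11)]   # {0, 0.1, ..., 1}

def in_closure(premises, candidate, n_states, pi):
    """candidate is in Closure(premises, pi) iff premises |=_pi candidate."""
    models = [values
              for values in product(pi, repeat=n_states)
              if sum(values) == 1 and all(phi(values) for phi in premises)]
    return all(candidate(P) for P in models)

# One argument A; state 0 is "A false", state 1 is "A true", so p(A) = P[1].
p_A = lambda P: P[1]
phi = [lambda P: p_A(P) < Fraction(1, 2)]

# p(A) <= 0.5 is in the restricted closure of {p(A) < 0.5}; p(A) = 0.7 is not.
assert in_closure(phi, lambda P: p_A(P) <= Fraction(1, 2), 2, PI)
assert not in_closure(phi, lambda P: p_A(P) == Fraction(7, 10), 2, PI)
```

Note that up to semantic equivalence the restricted closure is a finite object, since every formula's meaning is a subset of the finite set of restricted distributions; only syntactically is it infinite.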
We will use the closure function in the next section when we consider properties of epistemic graphs in terms of their constraints. Epistemic Graphs {#sec:epistemicgraphs} ================ In the introduction, we have discussed the value of being able to model beliefs in arguments, various types of relations between arguments, context–sensitivity, and more. Our proposal, capable of meeting the postulated requirements, comes in the form of epistemic graphs, which are labelled graphs equipped with particular formulae specifying the beliefs in arguments and the interplay between them. In this section we formalize the idea of epistemic graphs: we explain how constraints can be specified and interpreted, define epistemic semantics and provide an example of how our proposal can be used in practical applications. Our aim in this section is to provide a general representation formalism for epistemic probabilistic argumentation. Although we will, at times, discuss reasoning methods and introduce concepts that may help in implementing a working system based on our formalism, our focus will be on the conceptual level. In general, reasoning with epistemic constraints can be seen as a special case of *constraint satisfaction problems* [@Dechter2003; @Rossi:2006; @Tsang1993] (CSP, as mentioned earlier) and CSP software could be used to implement our proposal. We will point to general concepts from the CSP literature when appropriate, but leave a deeper discussion of the implementation issues for future work. An epistemic graph is, to put it simply, a labelled graph equipped with a set of epistemic constraints, which are defined as epistemic formulae that contain at least one argument. This restriction is to exclude constraints that operate only on truth values and are simply redundant. Nevertheless, we note that it is optional and can be lifted if desired. An [**epistemic constraint**]{} is an epistemic formula $\psi \in {{\sf EFormulae}}({{\cal G}})$ s.t. 
${{\sf FArgs}}(\psi) \neq \emptyset$. An [**epistemic graph**]{} is a tuple $({{\cal G}},{{\cal L}},{{\cal C}})$ where $({{\cal G}},{{\cal L}})$ is a labelled graph, and ${{\cal C}}\subseteq {{\sf EFormulae}}({{\cal G}})$ is a set of epistemic constraints associated with the graph. We will say that an epistemic graph is consistent iff its set of constraints is consistent. Please note that the graph (and its labelling, which we will discuss in Section \[sec:consistentlabel\]) is not necessarily induced by the constraints and therefore contains additional information. The actual direction of the edges in the graph is also not derivable from ${{\cal C}}$. For example, if we had two arguments ${{{\tt {A}}}}$ and ${{{\tt {B}}}}$ connected by an edge, a constraint of the form ${p}({{{\tt {A}}}}) < 0.5 \lor {p}({{{\tt {B}}}}) < 0.5$ would not tell us the direction of this edge. While, for the sake of readability, we may use implications that reflect the directions of the edges, the syntactical features of the constraints should in general not be treated as cues for the graph structure. The constraints may also involve unrelated arguments, similarly to [@CosteMarquisDM06]. We will now consider some examples of epistemic graphs.
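Such a tuple is straightforward to encode in software. The following minimal Python sketch is our own container type, not part of the formalism: the labelling is stored as labelled edges, and the constraints are kept as opaque formula strings. It encodes the party example discussed below.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EpistemicGraph:
    """A labelled graph with epistemic constraints: each edge is a
    (source, target, label) triple with labels such as '+' or '-',
    and constraints are epistemic formulae (plain strings here)."""
    nodes: frozenset
    edges: tuple          # ((source, target, label), ...)
    constraints: tuple    # epistemic formulae over the nodes

    def labelled(self, label):
        """All (source, target) pairs carrying the given label."""
        return {(s, t) for s, t, l in self.edges if l == label}

# The party example: B, C, D support A; E attacks B, C, D; F attacks E.
party = EpistemicGraph(
    nodes=frozenset({"A", "B", "C", "D", "E", "F"}),
    edges=(("B", "A", "+"), ("C", "A", "+"), ("D", "A", "+"),
           ("E", "B", "-"), ("E", "C", "-"), ("E", "D", "-"),
           ("F", "E", "-")),
    constraints=("(p(B) > 0.5 | p(C) > 0.5) & p(D) > 0.5 -> p(A) > 0.5",),
)
assert party.labelled("-") == {("E", "B"), ("E", "C"), ("E", "D"), ("F", "E")}
```

Storing constraints separately from the edges reflects the point above: the graph structure carries information that is not derivable from ${{\cal C}}$.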
(Figure \[fig:party\]: an epistemic graph with arguments ${{{\tt {A}}}}$ = Throw a party, ${{{\tt {B}}}}$ = Buy fizzy drinks, ${{{\tt {C}}}}$ = Buy fruit juices, ${{{\tt {D}}}}$ = Buy chips and salty snacks, ${{{\tt {E}}}}$ = Constrained budget, and ${{{\tt {F}}}}$ = Found forgotten stash; there are positive edges from ${{{\tt {B}}}}$, ${{{\tt {C}}}}$ and ${{{\tt {D}}}}$ to ${{{\tt {A}}}}$, and negative edges from ${{{\tt {E}}}}$ to ${{{\tt {B}}}}$, ${{{\tt {C}}}}$ and ${{{\tt {D}}}}$, and from ${{{\tt {F}}}}$ to ${{{\tt {E}}}}$.) \[ex:party\] Let us consider an example in which Mary and Jane are organizing a small party at the student dormitory. Although the guests will bring some beer, Mary and Jane need to buy some non–alcoholic drinks and snacks. This can be represented with arguments ${{{\tt {A}}}}$, ${{{\tt {B}}}}$, ${{{\tt {C}}}}$ and ${{{\tt {D}}}}$ as seen in Figure \[fig:party\] and expressed with the following constraints: - $\varphi_1 {:}({p}({{{\tt {B}}}}) >0.5 \lor {p}({{{\tt {C}}}}) >0.5) \land {p}({{{\tt {D}}}}) >0.5 \rightarrow {p}({{{\tt {A}}}}) > 0.5$ - $\varphi_2 {:}({p}({{{\tt {B}}}}) <0.5 \land {p}({{{\tt {C}}}}) <0.5) \lor {p}({{{\tt {D}}}}) <0.5 \rightarrow {p}({{{\tt {A}}}}) <0.5$ We can observe that ${{{\tt {B}}}}$, ${{{\tt {C}}}}$ and ${{{\tt {D}}}}$ are supporters of ${{{\tt {A}}}}$ in the sense that the acceptance of ${{{\tt {A}}}}$ requires the acceptance of ${{{\tt {D}}}}$ together with that of ${{{\tt {B}}}}$ or ${{{\tt {C}}}}$. Let us assume that Mary and Jane realize that their budget is somewhat limited.
We could create a constraint stating that at least one of the items has to be rejected: - $\varphi_3 {:}{p}({{{\tt {B}}}}) <0.5 \lor {p}({{{\tt {C}}}}) <0.5 \lor {p}({{{\tt {D}}}}) < 0.5$ However, instead of this, we can also decide to represent the budget limitations as an argument ${{{\tt {E}}}}$ and replace $\varphi_3$ with $\varphi'_3$: - $\varphi'_3 {:}{p}({{{\tt {E}}}}) >0.5 \rightarrow {p}({{{\tt {B}}}}) <0.5 \lor {p}({{{\tt {C}}}}) <0.5 \lor {p}({{{\tt {D}}}}) < 0.5$ - $\varphi'_4 {:}{p}({{{\tt {E}}}}) >0.5$ We can observe that in this case, the relation between ${{{\tt {E}}}}$ and ${{{\tt {B}}}}$, ${{{\tt {C}}}}$ and ${{{\tt {D}}}}$ is more attacking, in the sense that acceptance of ${{{\tt {E}}}}$ leads to the rejection of at least one of ${{{\tt {B}}}}$, ${{{\tt {C}}}}$ and ${{{\tt {D}}}}$. Although the former solution is more concise, the latter also has its benefits. Let us assume that Mary now finds some spare money in her backpack and they can afford to buy all of the items. Thus, we add argument ${{{\tt {F}}}}$, and the constraint $\varphi'_4$ will need to be replaced: - $\varphi''_4 {:}{p}({{{\tt {F}}}}) >0.5 \rightarrow {p}({{{\tt {E}}}}) <0.5$ - $\varphi''_5 {:}{p}({{{\tt {F}}}}) >0.5$ Clearly, the relation between ${{{\tt {F}}}}$ and ${{{\tt {E}}}}$ is conflicting. Let us consider the graph depicted in Figure \[fig:matura\]. Given the rules in his country, Mark has written the matura exam (national exam after high school allowing a person to apply to a university) and can now register for up to two universities that interest him. He will be accepted or rejected once the exam results are in.
We create the following constraints expressing what Mark plans to do: - If Mark strongly disbelieves that he will get good grades, he will apply only to his 4th choice university: ${p}({{{\tt {A}}}}) \leq 0.2 \rightarrow {p}({{{\tt {B}}}}) < 0.5 \land {p}({{{\tt {C}}}}) < 0.5 \land {p}({{{\tt {D}}}}) < 0.5 \land {p}({{{\tt {E}}}}) >0.5$ - If Mark moderately does not believe that he will get good grades, he will apply only to his 3rd and 4th choice universities: ${p}({{{\tt {A}}}}) > 0.2 \land {p}({{{\tt {A}}}}) \leq 0.5 \rightarrow {p}({{{\tt {B}}}}) < 0.5 \land {p}({{{\tt {C}}}}) < 0.5 \land {p}({{{\tt {D}}}}) > 0.5 \land {p}({{{\tt {E}}}}) >0.5$ - If Mark moderately believes his grades will be good, he will apply only to his 2nd and 3rd choice universities: ${p}({{{\tt {A}}}}) > 0.5 \land {p}({{{\tt {A}}}}) < 0.8 \rightarrow {p}({{{\tt {B}}}}) < 0.5 \land {p}({{{\tt {C}}}}) > 0.5 \land {p}({{{\tt {D}}}}) > 0.5 \land {p}({{{\tt {E}}}}) < 0.5$ - If Mark strongly believes he will get good grades, he will apply only to his 1st and 2nd choice universities: ${p}({{{\tt {A}}}}) \geq 0.8 \rightarrow {p}({{{\tt {B}}}}) > 0.5 \land {p}({{{\tt {C}}}}) > 0.5 \land {p}({{{\tt {D}}}}) < 0.5 \land {p}({{{\tt {E}}}}) < 0.5$ We can consider the relation between ${{{\tt {A}}}}$ and ${{{\tt {E}}}}$ to be conflicting, as once ${{{\tt {A}}}}$ is believed we disbelieve ${{{\tt {E}}}}$. Given that believing ${{{\tt {A}}}}$ (to a sufficiently high degree) also leads to believing ${{{\tt {B}}}}$ and ${{{\tt {C}}}}$, the relations between these arguments can be seen as supporting. However, the interaction between ${{{\tt {A}}}}$ and ${{{\tt {D}}}}$ cannot be clearly classified as supporting or attacking, given that as the belief in ${{{\tt {A}}}}$ increases, ${{{\tt {D}}}}$ can be disbelieved, believed, and then disbelieved again. Epistemic graphs are therefore quite flexible in representing various restrictions on beliefs. 
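The piecewise behaviour described above can be made concrete with a small computational sketch. The following Python fragment is our own illustration (the function name and the reading of "believed" as a probability strictly above 0.5 are assumptions for the sketch, not part of the formalism); it encodes Mark's four rules and exposes the non-monotone behaviour of ${{{\tt {D}}}}$:

```python
# Hypothetical helper: given Mark's belief p_a in getting good grades
# (argument A), return the set of universities he applies to, i.e. the
# arguments among B..E that the constraints force to be believed
# (probability > 0.5).
def university_choices(p_a):
    if p_a <= 0.2:                 # strong disbelief: only the 4th choice
        return {"E"}
    elif p_a <= 0.5:               # moderate disbelief: 3rd and 4th choices
        return {"D", "E"}
    elif p_a < 0.8:                # moderate belief: 2nd and 3rd choices
        return {"C", "D"}
    else:                          # strong belief: 1st and 2nd choices
        return {"B", "C"}

# D is believed for mid-range values of p_a but disbelieved at both
# extremes, so the A-D interaction is neither purely supporting nor
# purely attacking.
assert "D" in university_choices(0.3) and "D" not in university_choices(0.9)
```

For instance, ${{{\tt {D}}}}$ is disbelieved at $p({{{\tt {A}}}}) = 0.1$, believed at $0.3$, and disbelieved again at $0.9$, matching the observation that no monotone attack or support reading recovers its behaviour.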
However, given the freedom we have in defining constraints, we can create epistemic graphs in which the constraints do not really reflect the structure of the graph and vice versa. Moreover, a probability distribution satisfying our requirements may be further refined in various ways, independently of the graph in question. Thus, in the next section we would like to explore testing if and how the graph structure can be reflected by the constraints and introduce various types of epistemic semantics. Coverage {#sec:CoverageAnalysis} -------- Previously, we have stated that it is not necessary for the constraints to account for all arguments and all the relations between them. While the ability to operate on a not fully defined framework is valuable from the practical point of view, for example when dealing with limited knowledge about an opponent during a dialogue, having a graph in which the constraints cover all possible scenarios has undeniable benefits. In this section we will therefore focus on notions that can be used to measure if, and to what degree, arguments and relations between them are accounted for by the constraints. We will consider possible means of using this information in Section \[sec:intercoh\]. The general idea of verifying coverage relies on modulating beliefs in certain arguments and observing whether it results in particular behaviours in the arguments we are interested in. A key notion here is that of a constraint combination, which we will use as a "modulating" component: \[def:constraintcomb\] Let $F = \{{{{\tt {A}}}}_1, \ldots, {{{\tt {A}}}}_m\} \subseteq {{\sf Nodes}}({{\cal G}})$ be a set of arguments. An **exact constraint combination** for $F$ is a set ${\cal CC}^F = \{{p}({{{\tt {A}}}}_1) = x_1, {p}({{{\tt {A}}}}_2) = x_2, \ldots, {p}({{{\tt {A}}}}_m) = x_m \}$, where $x_1, \ldots, x_m \in [0,1]$.
A **soft constraint combination** for $F$ is a set ${\cal CC}^F = \{{p}({{{\tt {A}}}}_1) {\#}_1 x_1, {p}({{{\tt {A}}}}_2) {\#}_2 x_2, \ldots, {p}({{{\tt {A}}}}_m) {\#}_m x_m \}$, where $x_1, \ldots, x_m \in [0,1]$ and ${\#}_1, \ldots, {\#}_m \in {\{=, \neq, \geq, \leq,>,<\}}$. With ${\cal CC}^F\rvert_{G}$ for $G \subseteq {{\sf Nodes}}({{\cal G}})$ we denote the subset of ${\cal CC}^F$ that consists of all and only constraints of ${\cal CC}^F$ that are on arguments contained in $F \cap G$. Verifying if and how the belief in an argument changes given the beliefs in other arguments can pose certain challenges depending on how the set of constraints is defined. Amending the set of constraints with the above combinations might lead to inconsistencies coming from the fact that the arguments in the combinations themselves are interrelated or because the set of constraints already affects the belief in one of the arguments in the combination by default. Furthermore, we need to take into account the fact that the set of constraints associated with the graph might not be consistent to start with. In the following sections we will work under the assumption that we are dealing with a graph s.t. the associated set of constraints is satisfiable; for a discussion on inconsistent constraints we refer to [@HunterPT2018Arxiv]. ### Argument Coverage On its own, an argument can be assigned any probability value from $[0,1]$. One of the purposes of the constraints is - as the name suggests - to constrain the range of values that an argument may take, for example by the values assigned to its parents. Coverage means that there is at least one value for the degree of belief of an argument that cannot be assigned, be it straight from the constraints or under certain assumptions concerning the beliefs in other arguments, cf. general constraint propagation to restrict the domain of variables [@Dechter2003].
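As an illustrative data-structure sketch (our own, not part of the formalism), a soft constraint combination can be stored as a mapping from argument names to operator–value pairs, with the restriction ${\cal CC}^F\rvert_{G}$ reduced to a filtering step:

```python
import operator

# A soft constraint combination for F = {A, B, C}: argument -> (op, value).
# The operators are the comparison symbols allowed by the definition.
OPS = {"=": operator.eq, "!=": operator.ne, ">=": operator.ge,
       "<=": operator.le, ">": operator.gt, "<": operator.lt}

cc_F = {"A": ("=", 1.0), "B": (">", 0.5), "C": ("<=", 0.2)}

def restrict(cc, G):
    """CC^F restricted to G: keep only constraints on arguments in F ∩ G."""
    return {arg: c for arg, c in cc.items() if arg in G}

def holds(cc, assignment):
    """Check a probability assignment (argument -> value) against cc."""
    return all(OPS[op](assignment[arg], v) for arg, (op, v) in cc.items())

assert restrict(cc_F, {"A", "C", "D"}) == {"A": ("=", 1.0), "C": ("<=", 0.2)}
assert holds(cc_F, {"A": 1.0, "B": 0.7, "C": 0.1})
```

Exact constraint combinations are then simply the special case where every operator is `=`.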
The most basic form of coverage is the default coverage, where we can find a degree of belief that an argument cannot take straightforwardly from the constraints and without imposing additional assumptions: \[def:defcov\] Let $X = ({{\cal G}}, {{\cal L}}, {{\cal C}})$ be a consistent epistemic graph. We say that an argument ${{{\tt {A}}}}\in {{\sf Nodes}}({{\cal G}})$ is **default covered in $X$** if there is a value $x \in [0,1]$ s.t. ${{\cal C}}\VDash {p}({{{\tt {A}}}}) \neq x$. \[ex:coverage1\] Let us consider the graph depicted in Figure \[fig:defcov\] and the associated set of constraints ${{\cal C}}$: $$\{{p}({{{\tt {A}}}}) > 0.5, {p}({{{\tt {A}}}}) > 0.5 \rightarrow {p}({{{\tt {B}}}}) < 0.5, ({p}({{{\tt {B}}}}) < 0.5 \land {p}({{{\tt {C}}}}) > 0.5) \rightarrow {p}({{{\tt {D}}}}) \leq 0.5, {p}({{{\tt {C}}}}) \leq 0.5 \rightarrow {p}({{{\tt {D}}}}) > 0.5\}$$ In this case, we can observe that both ${{{\tt {A}}}}$ and ${{{\tt {B}}}}$ are covered by default. For example, ${{\cal C}}\VDash {p}({{{\tt {A}}}}) \neq 0.5$ and ${{\cal C}}\VDash {p}({{{\tt {B}}}}) \neq 0.5$. This comes from the fact that the belief in ${{{\tt {A}}}}$ is restricted from the very beginning and from it we can derive the restrictions for ${{{\tt {B}}}}$. However, arguments ${{{\tt {C}}}}$ and ${{{\tt {D}}}}$ are not default covered. Although they are constrained and, for example, it cannot be the case that they are both believed or both disbelieved at the same time, for every belief value $x \in [0,1]$ we can still find a probability distribution ${P}$ s.t. ${P}({{{\tt {C}}}}) = x$ (resp. ${P}({{{\tt {D}}}}) = x$). 
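This kind of entailment check can be approximated by brute force. The sketch below is our own illustration: it enumerates per-argument marginal values on a coarse grid rather than full probability distributions, which suffices for threshold constraints like these, and confirms the coverage claims of the example:

```python
from itertools import product

# Constraints of the example, read as predicates over the marginal
# probabilities (a, b, c, d) of arguments A, B, C, D. A coarse grid
# stands in for [0,1].
GRID = [i / 10 for i in range(11)]

def satisfies(a, b, c, d):
    return (a > 0.5
            and (b < 0.5 if a > 0.5 else True)
            and (d <= 0.5 if (b < 0.5 and c > 0.5) else True)
            and (d > 0.5 if c <= 0.5 else True))

models = [m for m in product(GRID, repeat=4) if satisfies(*m)]

# Default coverage: no model assigns 0.5 to A or to B ...
assert all(a != 0.5 and b != 0.5 for a, b, c, d in models)
# ... while C and D each attain every grid value in some model,
# so neither of them is default covered.
assert all(any(c == x for _, _, c, _ in models) for x in GRID)
assert all(any(d == x for _, _, _, d in models) for x in GRID)
```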
[Figure \[fig:defcov\]: arguments ${{{\tt {A}}}}$, ${{{\tt {B}}}}$, ${{{\tt {C}}}}$ and ${{{\tt {D}}}}$, with a negative edge from ${{{\tt {A}}}}$ to ${{{\tt {B}}}}$, a positive edge from ${{{\tt {B}}}}$ to ${{{\tt {D}}}}$, and mutual negative edges between ${{{\tt {C}}}}$ and ${{{\tt {D}}}}$.] [Figure \[fig:inconsistentcombination\]: arguments ${{{\tt {A}}}}$, ${{{\tt {B}}}}$ and ${{{\tt {C}}}}$, with negative edges from ${{{\tt {B}}}}$ to ${{{\tt {A}}}}$, from ${{{\tt {C}}}}$ to ${{{\tt {A}}}}$, and from ${{{\tt {B}}}}$ to ${{{\tt {C}}}}$.] The above example also shows that in some cases, the default coverage may be too restrictive. Although neither ${{{\tt {C}}}}$ nor ${{{\tt {D}}}}$ is default covered, the belief we have in one restricts the belief we have in the other. Thus, our intuition is that some form of coverage should exist. In our case, every level of belief we had in ${{{\tt {C}}}}$ constrained ${{{\tt {D}}}}$ and vice versa. However, even weaker forms may be considered: \[ex:coverage2\] Let us consider the framework depicted in Figure \[fig:inconsistentcombination\] and the following set of constraints ${{\cal C}}$: $$\{\varphi_1 {:}{p}({{{\tt {B}}}}) >0.5 \rightarrow {p}({{{\tt {C}}}}) \leq 0.5, \varphi_2 {:}({p}({{{\tt {B}}}}) >0.5 \land {p}({{{\tt {C}}}}) \geq 0.5) \rightarrow {p}({{{\tt {A}}}}) <0.5\}$$ Let us analyze how the belief in ${{{\tt {A}}}}$ is constrained in the graph. Our intuition is that some coverage does exist.
In particular, we can observe that if ${{{\tt {B}}}}$ is believed and ${{{\tt {C}}}}$ is not disbelieved, then ${{{\tt {A}}}}$ is disbelieved and thus there are some probabilities it cannot take in this context. However, if this condition is not satisfied, then ${{{\tt {A}}}}$ can take on any probability. Thus, the coverage is, in a sense, “partial”. We therefore introduce the additional notions of coverage below. Given the fact that the constraints can occur between unrelated arguments and that for certain types of relations the belief in an argument is more affected by the arguments it is targeting rather than by those that are its parents, we allow for testing coverage against an arbitrary set of arguments. We say that an argument is partially covered by a set of arguments $F$ if we can find a belief assignment for $F$ that respects the existing constraints and leads to our argument not being able to take on some values. Full coverage states that every appropriate belief assignment for $F$ should lead to the argument not taking on some values. \[def:coverage\] Let $X = ({{\cal G}}, {{\cal L}}, {{\cal C}})$ be a consistent epistemic graph, ${{{\tt {A}}}}\in {{\sf Nodes}}({{\cal G}})$ an argument and $F \subseteq {{\sf Nodes}}({{\cal G}})\setminus\{{{{\tt {A}}}}\}$ a set of arguments. We say that ${{{\tt {A}}}}$ is: - **partially covered by $F$** in $X$ if there exists a constraint combination ${\cal CC}^{F}$ and a value $x \in [0,1]$ s.t. ${\cal CC}^{F} \cup {{\cal C}}\not\VDash \bot$ and ${\cal CC}^{F} \cup {{\cal C}}\VDash {p}({{{\tt {A}}}}) \neq x$ - **fully covered by $F$** in $X$ if for every constraint combination ${\cal CC}^{F}$ s.t. ${\cal CC}^{F} \cup {{\cal C}}\not\VDash \bot$, there exists a value $x \in [0,1]$ s.t. ${\cal CC}^{F} \cup {{\cal C}}\VDash {p}({{{\tt {A}}}}) \neq x$ We note that for a graph that possesses a consistent set of constraints, for every set of arguments $F$ we can find a constraint combination ${\cal CC}^{F}$ for $F$ s.t. 
${\cal CC}^{F} \cup {{\cal C}}$ is consistent (see also Definition \[def:constraintcomb\]). It is also worth noting that for $F = \emptyset$, the definitions of partial, full and default coverage coincide. The set ${\cal CC}^{F}$ is also called an *eliminating explanation* [@VanBeek:2006]. We can observe that in the above definition, we exclude the effect an argument may have on itself (i.e. the set $F$ cannot contain the argument in question). While this has clear technical benefits, we also observe that constraints representing directly self–attacking and self–supporting arguments either provide default coverage or no coverage at all. Let us consider a simple graph with an argument ${{{\tt {A}}}}$ s.t. ${{{\tt {A}}}}$ is a self–attacker, which can be represented with constraints ${p}({{{\tt {A}}}}) >0.5 \rightarrow {p}({{{\tt {A}}}}) < 0.5$ (i.e. if ${{{\tt {A}}}}$ is believed, then ${{{\tt {A}}}}$ is disbelieved) and ${p}({{{\tt {A}}}}) < 0.5 \rightarrow {p}({{{\tt {A}}}}) >0.5$ (i.e. if ${{{\tt {A}}}}$ is disbelieved, then its attackee (and/or attacker) ${{{\tt {A}}}}$ is believed). From this we can infer that ${p}({{{\tt {A}}}}) = 0.5$, which provides default coverage. Performing a similar analysis for a self–supporter (i.e. if ${{{\tt {A}}}}$ is believed, then ${{{\tt {A}}}}$ is believed and if ${{{\tt {A}}}}$ is disbelieved, then ${{{\tt {A}}}}$ is disbelieved) leads to a tautology constraint and provides no coverage at all. \[ex:coveragecont\] Let us consider the graph from Example \[ex:coverage1\] and look at arguments ${{{\tt {C}}}}$ and ${{{\tt {D}}}}$. We can start by analyzing whether arguments ${{{\tt {A}}}}$ and ${{{\tt {B}}}}$ provide any coverage for them. We can see that any constraint combination $\{{p}({{{\tt {A}}}}) = x, {p}({{{\tt {B}}}}) = y\}$ for these two arguments that is consistent with the existing formulae is such that $x \in (0.5,1]$ and $y \in [0, 0.5)$. Nevertheless, there is no value $z \in [0,1]$ s.t.
the union of our constraint combination and the original set of constraints entails ${p}({{{\tt {C}}}}) \neq z$ or ${p}({{{\tt {D}}}}) \neq z$. Consequently, these arguments provide no coverage (be it full or partial), which is in accordance with our intuition. Let us therefore consider constraint combinations on ${{{\tt {C}}}}$ and analyze the argument ${{{\tt {D}}}}$. We can observe that any set $\{{p}({{{\tt {C}}}}) = v\}$ for $v \in [0,1]$ is consistent with ${{\cal C}}$. For $v \in [0, 0.5]$, we can observe that ${{\cal C}}\cup \{{p}({{{\tt {C}}}}) = v\} \VDash {p}({{{\tt {D}}}}) > 0.5$. Thus, for example, ${{\cal C}}\cup \{{p}({{{\tt {C}}}}) = v\} \VDash {p}({{{\tt {D}}}}) \neq 0$. For $v \in (0.5, 1]$, we can observe that ${{\cal C}}\cup \{{p}({{{\tt {C}}}}) = v\} \VDash {p}({{{\tt {D}}}}) \leq 0.5$. Therefore, for example, ${{\cal C}}\cup \{{p}({{{\tt {C}}}}) = v\} \VDash {p}({{{\tt {D}}}}) \neq 1$. Hence, we can argue that ${{{\tt {D}}}}$ is both partially and fully covered by $\{{{{\tt {C}}}}\}$ (and, as a result, also by sets containing ${{{\tt {C}}}}$). Similar arguments can be made for showing that ${{{\tt {C}}}}$ is partially and fully covered by $\{{{{\tt {D}}}}\}$. \[ex:counterexfullpart\] Let us come back to Example \[ex:coverage2\] and check whether argument ${{{\tt {A}}}}$ is covered by the set $\{{{{\tt {B}}}},{{{\tt {C}}}}\}$. We can observe that all constraint combinations $\{{p}({{{\tt {B}}}}) = x, {p}({{{\tt {C}}}}) = y\}$ are consistent with ${{\cal C}}$ as long as either $x \leq 0.5$ or $y \leq 0.5$. We can observe that $\{{p}({{{\tt {B}}}}) = 1, {p}({{{\tt {C}}}}) = 0.5\} \cup {{\cal C}}\VDash {p}({{{\tt {A}}}}) < 0.5$. Thus, for example, $\{{p}({{{\tt {B}}}}) = 1, {p}({{{\tt {C}}}}) = 0.5\} \cup {{\cal C}}\VDash {p}({{{\tt {A}}}}) \neq 1$, and we have at least partial coverage. However, if we consider $\{{p}({{{\tt {B}}}}) = 0.5, {p}({{{\tt {C}}}}) = 0.5\}$, then ${{{\tt {A}}}}$ can be assigned any belief from $[0,1]$. 
In other words, there is no value $z \in [0,1]$ s.t. $\{{p}({{{\tt {B}}}}) = 0.5, {p}({{{\tt {C}}}}) = 0.5\} \cup {{\cal C}}\VDash {p}({{{\tt {A}}}}) \neq z$. Thus, the coverage is not full. In the above partial and full versions of the coverage, we needed to select the arguments against which we wanted to check whether the belief in an argument is restricted or not. For some applications, this extra information might be unnecessary, and thus we can consider the arbitrary versions of partial and full coverage, i.e. ones in which the actual set $F$ is not important as long as at least one exists. Let $X = ({{\cal G}}, {{\cal L}}, {{\cal C}})$ be a consistent epistemic graph. An argument ${{{\tt {A}}}}\in {{\sf Nodes}}({{\cal G}})$ has **arbitrary full/partial coverage** iff there exists a set of arguments $F \subseteq {{\sf Nodes}}({{\cal G}})\setminus \{{{{\tt {A}}}}\}$ s.t. ${{{\tt {A}}}}$ is fully or partially covered w.r.t. $F$. The following relationships between the various forms of coverage can be shown straightforwardly: [proposition]{}[conscoverage]{} Let $X = ({{\cal G}}, {{\cal L}}, {{\cal C}})$ be a consistent epistemic graph, ${{{\tt {A}}}}\in {{\sf Nodes}}({{\cal G}})$ be an argument and $F = {{\sf Nodes}}({{\cal G}})\setminus\{{{{\tt {A}}}}\}$ be a set of arguments. The following hold: - If ${{{\tt {A}}}}$ is default covered in $X$, then it is partially and fully covered w.r.t. any set of arguments $G \subseteq {{\sf Nodes}}({{\cal G}})\setminus\{{{{\tt {A}}}}\}$, but not necessarily vice versa - If ${{{\tt {A}}}}$ is fully covered in $X$ w.r.t. $F$, then it is partially covered in $X$ w.r.t. 
$F$, but not necessarily vice versa Finally, we can observe that for epistemic graphs whose constraints have the same satisfying distributions, the coverage analysis leads to the same results: [proposition]{}[conscoverageequiv]{} \[conscoverageequiv\] Let $X = ({{\cal G}}, {{\cal L}}, {{\cal C}})$ and $X' = ({{\cal G}}', {{\cal L}}', {{\cal C}}')$ be consistent epistemic graphs s.t. ${{\sf Sat}}({{\cal C}}) = {{\sf Sat}}({{\cal C}}')$. An argument ${{{\tt {A}}}}\in {{\sf Nodes}}({{\cal G}})$ is default (partially, fully) covered in $X$ (and w.r.t. $F \subseteq {{\sf Nodes}}({{\cal G}}) \setminus \{{{{\tt {A}}}}\}$) iff it is default (partially, fully) covered in $X'$ (w.r.t. $F$). This result further highlights the need for contrasting information in the graph with the information in the constraints, which we will address in Section \[sec:intercoh\]. ### Relation Coverage {#sec:RelationCoverage} In the previous section we have discussed properties concerning whether an argument is sufficiently covered by the constraints. However, it also makes sense to check whether every relation is covered by the constraints. For example, we can consider an argument ${{{\tt {A}}}}$ and its parents ${{{\tt {B}}}}$ and ${{{\tt {C}}}}$. It is possible that the constraints are defined in a way that only ${{{\tt {B}}}}$ has an actual effect on ${{{\tt {A}}}}$. Thus, the relation between ${{{\tt {C}}}}$ and ${{{\tt {A}}}}$ might have no real impact, despite the fact that ${{{\tt {A}}}}$ may be fully covered in the graph. Hence, we also test for the effectiveness of a given relation, which is understood as the ability of the source to change the belief restrictions on the target argument. We therefore introduce the following definition, which simply states that there is a point at which changing the belief of the source of a relation will lead to a change in the belief we have in the target.
In order to be able to look at effectiveness of explicit as well as implicit relations, we do not limit ourselves only to those mentioned in ${{\sf Arcs}}({{\cal G}})$: Let $X = ({{\cal G}}, {{\cal L}}, {{\cal C}})$ be a consistent epistemic graph, $F \subseteq {{\sf Nodes}}({{\cal G}}) \setminus \{{{{\tt {B}}}}\}$ and $G = F \setminus \{{{{\tt {A}}}}\}$ be sets of arguments. The relation represented by $({{{\tt {A}}}}, {{{\tt {B}}}})\in {{\sf Nodes}}({{\cal G}}) \times {{\sf Nodes}}({{\cal G}})$ is: - **effective w.r.t. $F$** if there exists a constraint combination ${\cal CC}^{F}$ and values $x,y \in [0,1]$ s.t. - ${{\cal C}}\cup {\cal CC}^{F} \nVDash \bot$, and - ${{\cal C}}\cup {\cal CC}^F\rvert_{G} \cup \{{p}({{{\tt {A}}}}) = y\} \nVDash \bot$, and - at least one of the following conditions holds: - ${{\cal C}}\cup {\cal CC}^{F} \nVDash {p}({{{\tt {B}}}}) \neq x$ and ${{\cal C}}\cup {\cal CC}^F\rvert_{G} \cup \{{p}({{{\tt {A}}}}) = y\} \VDash {p}({{{\tt {B}}}}) \neq x$, or - ${{\cal C}}\cup {\cal CC}^{F} \VDash {p}({{{\tt {B}}}}) \neq x$ and ${{\cal C}}\cup {\cal CC}^F\rvert_{G} \cup \{{p}({{{\tt {A}}}}) = y\} \nVDash {p}({{{\tt {B}}}}) \neq x$. - **strongly effective w.r.t. $F$** if for every constraint combination ${\cal CC}^{F}$ s.t. ${{\cal C}}\cup {\cal CC}^{F} \nVDash \bot$, there exist values $x,y \in [0,1]$ s.t. ${{\cal C}}\cup {\cal CC}^F\rvert_{G} \cup \{{p}({{{\tt {A}}}}) = y\} \nVDash \bot$, and at least one of the following conditions holds: - ${{\cal C}}\cup {\cal CC}^{F} \nVDash {p}({{{\tt {B}}}}) \neq x$ and ${{\cal C}}\cup {\cal CC}^F\rvert_{G} \cup \{{p}({{{\tt {A}}}}) = y\} \VDash {p}({{{\tt {B}}}}) \neq x$, or - ${{\cal C}}\cup {\cal CC}^{F} \VDash {p}({{{\tt {B}}}}) \neq x$ and ${{\cal C}}\cup {\cal CC}^F\rvert_{G} \cup \{{p}({{{\tt {A}}}}) = y\} \nVDash {p}({{{\tt {B}}}}) \neq x$. 
\[ex:effectivenesscounterex\] Let us consider a simple set of constraints ${{\cal C}}= \{ {p}({{{\tt {A}}}}) > 0.5 \rightarrow {p}({{{\tt {B}}}}) \leq 0.5\}$ and analyze the impact ${{{\tt {A}}}}$ has on ${{{\tt {B}}}}$. For this analysis, we assume $F = \{{{{\tt {A}}}}\}$ and $G = \emptyset$. Consequently, we will focus on analyzing what we can conclude from ${{\cal C}}\cup \{{p}({{{\tt {A}}}}) = z\}$ for selected values of $z \in [0,1]$. We can observe that for every value of $z$, ${{\cal C}}\cup \{{p}({{{\tt {A}}}}) = z \} \not\VDash \bot$. Consequently, the first two conditions of effectiveness are easily satisfied. Let $z = 1$ and ${\cal CC}^{F} = \{{p}({{{\tt {A}}}}) = 1 \}$. It holds that ${{\cal C}}\cup \{{p}({{{\tt {A}}}}) = 1 \} \VDash {p}({{{\tt {B}}}}) \leq 0.5$. Thus, for example, ${{\cal C}}\cup \{{p}({{{\tt {A}}}}) = 1 \} \VDash {p}({{{\tt {B}}}}) \neq 1$. However, if we set the probability of ${{{\tt {A}}}}$ to $0$, then ${{{\tt {B}}}}$ is allowed to take on any probability. Since ${\cal CC}^{G} = \emptyset$, it suffices to show that ${{\cal C}}\cup \emptyset \cup \{{p}({{{\tt {A}}}}) = 0 \} \not\VDash {p}({{{\tt {B}}}}) \neq 1$. Consequently, the third condition of effectiveness is also satisfied, and the $({{{\tt {A}}}}, {{{\tt {B}}}})$ relation is effective. In a similar fashion, we can show that it is strongly effective. \[ex:effectivenesscounterex2\] Let us now consider the following set of constraints ${{\cal C}}$: $$\{\varphi_1 {:}{p}({{{\tt {A}}}}) > 0.5 \rightarrow {p}({{{\tt {B}}}}) >0.5, \, \varphi_2 {:}{p}({{{\tt {C}}}}) >0.5 \rightarrow {p}({{{\tt {B}}}}) > 0.9\}$$ We can analyze how ${{{\tt {A}}}}$ and ${{{\tt {C}}}}$ affect ${{{\tt {B}}}}$ and consider the constraint combinations on $F=\{{{{\tt {A}}}},{{{\tt {C}}}}\}$. We observe that both $({{{\tt {A}}}}, {{{\tt {B}}}})$ and $({{{\tt {C}}}}, {{{\tt {B}}}})$ are effective w.r.t. $\{{{{\tt {A}}}},{{{\tt {C}}}}\}$. 
For instance, in case of ${{{\tt {A}}}}$, we can take the constraint combinations ${\cal CC}^{F} = \{{p}({{{\tt {A}}}}) = 0, {p}({{{\tt {C}}}}) = 0\}$ and ${\cal CC}^{G} = \{{p}({{{\tt {C}}}}) = 0\}$ to see that ${{\cal C}}\cup {\cal CC}^{F} \nVDash {p}({{{\tt {B}}}}) \neq 0.4$ and ${{\cal C}}\cup {\cal CC}^{G} \cup \{{p}({{{\tt {A}}}}) = 0.7\} \VDash {p}({{{\tt {B}}}}) \neq 0.4$. Similar analysis can be carried out for ${{{\tt {C}}}}$. We can also observe that $({{{\tt {C}}}}, {{{\tt {B}}}})$ is strongly effective w.r.t. $F$. Let ${\cal CC}^{F} = \{{p}({{{\tt {A}}}}) = x, {p}({{{\tt {C}}}}) = y\}$ be an arbitrary constraint combination. If $x \in [0,1]$ and $y \leq 0.5$, then we can take ${\cal CC}^{G} = \{{p}({{{\tt {A}}}}) = x\}$ and $\{{p}({{{\tt {C}}}}) = 1\}$ to observe that ${\cal CC}^{F} \cup {{\cal C}}\not\VDash {p}({{{\tt {B}}}}) \neq 0.9$ and ${\cal CC}^{G} \cup \{ {p}({{{\tt {C}}}}) =1\} \cup {{\cal C}}\VDash {p}({{{\tt {B}}}}) \neq 0.9$. If $x \in [0,1]$ and $y > 0.5$, then we can take $\{{p}({{{\tt {C}}}}) = 0\}$ to show that ${\cal CC}^{F} \cup {{\cal C}}\VDash {p}({{{\tt {B}}}}) \neq 0.9$ and ${\cal CC}^{G} \cup \{{p}({{{\tt {C}}}}) = 0\} \cup {{\cal C}}\not\VDash {p}({{{\tt {B}}}}) \neq 0.9$. Hence, in all cases, modifying the belief associated with ${{{\tt {C}}}}$ changes the restrictions on ${{{\tt {B}}}}$. We note that unlike $({{{\tt {C}}}},{{{\tt {B}}}})$, $({{{\tt {A}}}},{{{\tt {B}}}})$ is not strongly effective w.r.t. $F$. For example, we can consider the combination ${\cal CC}^{F} = \{{p}({{{\tt {A}}}}) = 0.6, {p}({{{\tt {C}}}}) = 0.6\}$ for ${{{\tt {A}}}}$. In this case, ${\cal CC}^{G} = \{{p}({{{\tt {C}}}}) = 0.6\}$. We can observe that ${\cal CC}^{G} \cup {{\cal C}}\VDash {p}({{{\tt {B}}}}) > 0.9$ and no matter the value of $x \in [0,1]$, adding $\{{p}({{{\tt {A}}}}) = x\}$ to our premises will not change the restrictions on ${{{\tt {B}}}}$.
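The basic effectiveness test of Example \[ex:effectivenesscounterex\] can be replayed mechanically. The sketch below is our own illustration, again enumerating marginal probabilities on a grid; it checks the two entailments that witnessed the effectiveness of $({{{\tt {A}}}}, {{{\tt {B}}}})$ under ${{\cal C}}= \{ {p}({{{\tt {A}}}}) > 0.5 \rightarrow {p}({{{\tt {B}}}}) \leq 0.5\}$:

```python
from itertools import product

GRID = [i / 10 for i in range(11)]

def models(fixed_a=None):
    """Grid models of C = {p(A) > 0.5 -> p(B) <= 0.5}, optionally with
    the exact constraint combination {p(A) = fixed_a} added."""
    return [(a, b) for a, b in product(GRID, repeat=2)
            if (b <= 0.5 if a > 0.5 else True)
            and (fixed_a is None or a == fixed_a)]

def entails_b_neq(fixed_a, x):
    """Does C plus {p(A) = fixed_a} entail p(B) != x (on the grid)?"""
    ms = models(fixed_a)
    return bool(ms) and all(b != x for _, b in ms)

# Fixing p(A) = 1 activates the constraint and rules out p(B) = 1,
# while fixing p(A) = 0 leaves p(B) unrestricted: (A, B) is effective.
assert entails_b_neq(1.0, 1.0)
assert not entails_b_neq(0.0, 1.0)
```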
The above definition of effectiveness is in fact a rather demanding one in the sense that even though there might exist a constraint from which we can see how two arguments are connected, other constraints in the graph might make it impossible for it to ever become "active", so to speak. For example, coverage, such as the default one, can interfere with detecting the effectiveness of a given relation. Let us consider the following scenario: \[ex:effectivegap\] Let us look at the following set of constraints ${{\cal C}}$ and assume that $\{({{{\tt {B}}}},{{{\tt {A}}}}),({{{\tt {C}}}}, {{{\tt {A}}}})\} = {{\sf Arcs}}({{\cal G}})$: $$\{\varphi_1 {:}{p}({{{\tt {B}}}}) \leq 0.5 \land {p}({{{\tt {C}}}}) < 0.5, \, \varphi_2 {:}({p}({{{\tt {B}}}}) \leq 0.5 \land {p}({{{\tt {C}}}}) < 0.5) \rightarrow {p}({{{\tt {A}}}}) <0.5\}$$ We can observe that even though ${{{\tt {B}}}}$ and ${{{\tt {C}}}}$ are not default covered, ${{{\tt {A}}}}$ is. In particular, ${{\cal C}}\VDash {p}({{{\tt {A}}}}) < 0.5$. In other words, no constraint combination on $\{{{{\tt {B}}}},{{{\tt {C}}}}\}$ that is consistent with ${{\cal C}}$ will affect the restrictions on the probability of ${{{\tt {A}}}}$. Hence, the $({{{\tt {B}}}},{{{\tt {A}}}})$ and $({{{\tt {C}}}}, {{{\tt {A}}}})$ relations will not be considered effective. Given this, we can consider a weaker form of effectiveness, where the impact of other constraints may be disregarded. To achieve this, we test effectiveness not against the set of constraints ${{\cal C}}$, but against any consistent set of constraints derivable from it: \[def:effectiveness\] Let $X = ({{\cal G}}, {{\cal L}}, {{\cal C}})$ be a consistent epistemic graph, $Z \subseteq {{\sf Closure}}({{\cal C}})$ be a consistent set of epistemic constraints, $F \subseteq {{\sf Nodes}}({{\cal G}}) \setminus \{{{{\tt {B}}}}\}$ and $G = F \setminus \{{{{\tt {A}}}}\}$ be sets of arguments.
Then $({{{\tt {A}}}}, {{{\tt {B}}}}) \in {{\sf Nodes}}({{\cal G}}) \times {{\sf Nodes}}({{\cal G}})$ is: - **semi–effective w.r.t. $(Z,F)$** if there exist a constraint combination ${\cal CC}^{F}$ and values $x,y \in [0,1]$ s.t. - $Z \cup {\cal CC}^{F} \nVDash \bot$, and - $Z \cup {\cal CC}^F\rvert_{G} \cup \{{p}({{{\tt {A}}}}) = y\} \nVDash \bot$, and - at least one of the following conditions holds: - $Z \cup {\cal CC}^{F} \nVDash {p}({{{\tt {B}}}}) \neq x$ and $Z \cup {\cal CC}^F\rvert_{G} \cup \{{p}({{{\tt {A}}}}) = y\} \VDash {p}({{{\tt {B}}}}) \neq x$, or - $Z \cup {\cal CC}^{F} \VDash {p}({{{\tt {B}}}}) \neq x$ and $Z \cup {\cal CC}^F\rvert_{G} \cup \{{p}({{{\tt {A}}}}) = y\} \nVDash {p}({{{\tt {B}}}}) \neq x$. - **strongly semi–effective w.r.t. $(Z,F)$** if for every constraint combination ${\cal CC}^{F}$ s.t. $Z \cup {\cal CC}^{F} \nVDash \bot$, there exist values $x,y \in [0,1]$ s.t. $Z \cup {\cal CC}^F\rvert_{G} \cup \{{p}({{{\tt {A}}}}) = y\} \nVDash \bot$ and at least one of the following conditions holds: - $Z \cup {\cal CC}^{F} \nVDash {p}({{{\tt {B}}}}) \neq x$ and $Z \cup {\cal CC}^F\rvert_{G} \cup \{{p}({{{\tt {A}}}}) = y\} \VDash {p}({{{\tt {B}}}}) \neq x$, or - $Z \cup {\cal CC}^{F} \VDash {p}({{{\tt {B}}}}) \neq x$ and $Z \cup {\cal CC}^F\rvert_{G} \cup \{{p}({{{\tt {A}}}}) = y\} \nVDash {p}({{{\tt {B}}}}) \neq x$. \[ex:semieffectiveness\] Let us come back to Example \[ex:effectivegap\]. We could have observed that the $({{{\tt {B}}}}, {{{\tt {A}}}})$ and $({{{\tt {C}}}},{{{\tt {A}}}})$ relations were not effective. Let us take $Z = \{({p}({{{\tt {B}}}}) \leq 0.5 \land {p}({{{\tt {C}}}}) < 0.5) \rightarrow {p}({{{\tt {A}}}}) <0.5\}$ and $F = \{{{{\tt {B}}}},{{{\tt {C}}}}\}$. It is easy to verify that $Z \subseteq {{\sf Closure}}({{\cal C}})$. We observe that any constraint combination on the set $\{{{{\tt {B}}}}, {{{\tt {C}}}}\}$ is consistent with $Z$, i.e. 
for any $x,y \in [0,1]$, $\{{p}({{{\tt {B}}}}) = x, {p}({{{\tt {C}}}}) = y\} \cup Z \not\VDash \bot$. We observe that if $x \leq 0.5$ and $y < 0.5$, then $\{{p}({{{\tt {B}}}}) = x, {p}({{{\tt {C}}}}) = y\} \cup Z \VDash {p}({{{\tt {A}}}}) < 0.5$. Hence, for example, $\{{p}({{{\tt {B}}}}) = x, {p}({{{\tt {C}}}}) = y\} \cup Z \VDash {p}({{{\tt {A}}}})\neq 1$. If we change either $x$ or $y$ in a way that $x > 0.5$ or $y \geq 0.5$, then ${{{\tt {A}}}}$ can take on any probability. Thus, for such new $x'$ or $y'$, $\{{p}({{{\tt {B}}}}) = x', {p}({{{\tt {C}}}}) = y\} \cup Z \not\VDash {p}({{{\tt {A}}}})\neq 1$ and $\{{p}({{{\tt {B}}}}) = x, {p}({{{\tt {C}}}}) = y'\} \cup Z \not\VDash {p}({{{\tt {A}}}})\neq 1$. Hence, the relations are semi–effective w.r.t $(Z, F)$, even though they were not effective w.r.t. $F$. They are unfortunately not strongly semi–effective w.r.t. $(Z, F)$. For example, if we took a constraint combination $\{{p}({{{\tt {B}}}}) = 1, {p}({{{\tt {C}}}}) = 1\}$, altering the assignment for ${{{\tt {B}}}}$ (${{{\tt {C}}}}$ respectively) would not change the restrictions on ${{{\tt {A}}}}$. Similarly as we did in case of arbitrary argument coverage, we can speak of arbitrary (semi-) effectiveness as long as a suitable $(Z, F)$ pair exists. The following connections can be drawn between all of these forms of effectiveness: [proposition]{}[effectiveness]{} \[prop:effectiveness\] Let $X = ({{\cal G}}, {{\cal L}}, {{\cal C}})$ be a consistent epistemic graph, $Z \subseteq {{\sf Closure}}({{\cal C}})$ be a consistent set of epistemic constraints, $F \subseteq {{\sf Nodes}}({{\cal G}}) \setminus \{{{{\tt {B}}}}\}$ and $G = F \setminus \{{{{\tt {A}}}}\}$ be sets of arguments. Let $({{{\tt {A}}}}, {{{\tt {B}}}}) \in {{\sf Nodes}}({{\cal G}})\times {{\sf Nodes}}({{\cal G}})$. The following hold: - If $({{{\tt {A}}}}, {{{\tt {B}}}})$ is strongly effective w.r.t. $F$, then it is effective w.r.t. 
$F$, but not necessarily vice versa - If $({{{\tt {A}}}}, {{{\tt {B}}}})$ is strongly semi–effective w.r.t. $(Z,F)$, then it is semi–effective w.r.t. $(Z, F)$, but not necessarily vice versa - If $({{{\tt {A}}}}, {{{\tt {B}}}})$ is effective w.r.t. $F$, then it is semi–effective w.r.t. $({{\cal C}}, F)$ and vice versa - If $({{{\tt {A}}}}, {{{\tt {B}}}})$ is strongly effective w.r.t. $F$, then it is strongly semi–effective w.r.t. $({{\cal C}}, F)$ and vice versa - If $Z \neq {{\cal C}}$ and $({{{\tt {A}}}}, {{{\tt {B}}}})$ is semi–effective w.r.t. $(Z, F)$, then it is not necessarily effective w.r.t. $F$ - If $Z \neq {{\cal C}}$ and $({{{\tt {A}}}}, {{{\tt {B}}}})$ is strongly semi–effective w.r.t. $(Z, F)$, then it is not necessarily strongly effective w.r.t. $F$ Relation Types {#sec:consistentlabel} -------------- Labellings are useful for indicating the kind of influence one argument has on another. In epistemic graphs, the labels can be either provided during the instantiation process or, similarly as in the bipolar abstract dialectical frameworks, derived from the constraints[^3]. This however begs the question whether the way a relation is labelled is really consistent with the way it is described by the constraints. While taking the labelling as input has the benefit of being informed by the method that has instantiated the graph from a given knowledge base, the derivation approach offers more understanding of the real impact a given relation has on the arguments connected to it. By this we understand that determining edge types during instantiation is typically a very "local" process in which, for instance, we check whether the conclusions of two arguments are contradictory or not, or if the conclusion of one is a premise of another. This often ignores the presence of other arguments.
For example, it is perfectly possible for an argument ${{{\tt {A}}}}$ that is locally a supporter of ${{{\tt {B}}}}$ to also support an attacker ${{{\tt {C}}}}$ of ${{{\tt {B}}}}$, and thus have a negative influence on ${{{\tt {B}}}}$ from a more “global” standpoint. In this section we will focus on analyzing what constraints are telling us about relations between arguments. Inferring the type of a relation we are dealing with based on how the parent affects the target is not as trivial as one may think. For instance, even in the case of attack relations, we have binary attack, group attack, attacks as defined in ADFs or attacks as weakening relations, to list a few [@Dung95; @CayrolLS05b; @Caminada:2009; @BrewkaESWW13; @BrewkaPW14]. Acceptance of an attacker can lead to disbelieving the target, decreasing the belief in the target, or - in the presence of e.g. overpowering supporters as in ADFs - have no effect at all. While there is a general consensus among argumentation formalisms that an attack should not have positive effects, the notion of a negative effect is still very broad. Since epistemic graphs are expressive enough to model all of these behaviours, it is therefore valuable to study them in this setting. As we can observe, even a simple attack can lead to various behaviours, and since some of them may overlap with possible behaviours of supporting relations, one has to be careful when judging a relation by the effect it has. Additionally, even though two arguments can appear to be positively or negatively related on their own, taking into account the effects of other arguments in the graph might also bring to light other behaviours. Certain works on argument frameworks introduce the notions of indirect relations [@inproc:prudent; @inproc:careful; @CayrolLS13]. For example, one argument can support another, but at the same time attack another of its supporters, thus serving as an indirect attacker.
It can therefore happen that depending on the context in which we look at two arguments, the perception of the relation between them changes: \[ex:locglob\] Let us consider the following scenario with arguments ${{{\tt {A}}}}$, ${{{\tt {B}}}}$, ${{{\tt {C}}}}$ and ${{{\tt {D}}}}$ where ${{{\tt {B}}}}$ and ${{{\tt {C}}}}$ group support ${{{\tt {A}}}}$ s.t. at least one of ${{{\tt {B}}}}$ and ${{{\tt {C}}}}$ needs to be believed in order to believe ${{{\tt {A}}}}$, ${{{\tt {B}}}}$ supports ${{{\tt {D}}}}$ s.t. believing ${{{\tt {B}}}}$ implies believing ${{{\tt {D}}}}$, and ${{{\tt {D}}}}$ attacks ${{{\tt {A}}}}$. This can be depicted with the graph in Figure \[fig:supportex\] and expressed with the following set of constraints ${{\cal C}}$: - $\varphi_1 {:}{p}({{{\tt {A}}}}) >0.5 \rightarrow {p}({{{\tt {B}}}}) >0.5 \lor {p}({{{\tt {C}}}}) > 0.5$ - $\varphi_2 {:}({p}({{{\tt {D}}}}) <0.5 \land ({p}({{{\tt {B}}}}) >0.5 \lor {p}({{{\tt {C}}}}) > 0.5)) \rightarrow {p}({{{\tt {A}}}}) >0.5$ - $\varphi_3 {:}{p}({{{\tt {B}}}}) >0.5 \rightarrow {p}({{{\tt {D}}}}) >0.5$ - $\varphi_4 {:}{p}({{{\tt {D}}}}) >0.5 \rightarrow {p}({{{\tt {A}}}}) < 0.5$ If we were to decide on the nature of the ${{{\tt {B}}}}$–${{{\tt {A}}}}$ relation only from the constraints concerning both of them (i.e. constraints $\varphi_1$ and $\varphi_2$), then the supporting relation becomes quite apparent. However, if we were to take into account the interactions expressed in constraints $\varphi_1$ to $\varphi_4$, then we would observe that believing ${{{\tt {B}}}}$ implies believing ${{{\tt {D}}}}$ and thus disbelieving ${{{\tt {A}}}}$, which is hardly a positive influence.
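The global effect just described can be checked mechanically. The following sketch is our own illustration (not code from the paper); it discretizes beliefs to the grid $\{0, 0.5, 1\}$ as a simplifying assumption, enumerates the assignments satisfying $\varphi_1$–$\varphi_4$, and confirms that in every surviving assignment where ${{{\tt {B}}}}$ is believed, ${{{\tt {A}}}}$ is disbelieved:

```python
from itertools import product

# Constraints phi_1..phi_4 from Example [ex:locglob], encoded as Boolean
# tests on a coarse belief grid (an assumption made for illustration only).
def satisfies(a, b, c, d):
    phi1 = (not a > 0.5) or (b > 0.5 or c > 0.5)
    phi2 = (not (d < 0.5 and (b > 0.5 or c > 0.5))) or a > 0.5
    phi3 = (not b > 0.5) or d > 0.5
    phi4 = (not d > 0.5) or a < 0.5
    return phi1 and phi2 and phi3 and phi4

models = [m for m in product([0, 0.5, 1], repeat=4) if satisfies(*m)]
# In every assignment where B is believed, A is disbelieved...
assert all(a < 0.5 for (a, b, c, d) in models if b > 0.5)
# ...and such assignments exist, so the implication is not vacuous.
assert any(b > 0.5 for (a, b, c, d) in models)
```

The chain driving the result is exactly the one in the text: $\varphi_3$ forces ${{{\tt {D}}}}$ to be believed, and $\varphi_4$ then forces ${{{\tt {A}}}}$ to be disbelieved.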
[Figure \[fig:supportex\]: the graph for Example \[ex:locglob\], with edges ${{{\tt {B}}}}\rightarrow {{{\tt {A}}}}$ labelled $+$, ${{{\tt {C}}}}\rightarrow {{{\tt {A}}}}$ labelled $+$, ${{{\tt {B}}}}\rightarrow {{{\tt {D}}}}$ labelled $+$, and ${{{\tt {D}}}}\rightarrow {{{\tt {A}}}}$ labelled $-$.]

[Figure \[fig:supportex29\]: a graph with edges ${{{\tt {A}}}}\rightarrow {{{\tt {B}}}}$ labelled $-$ and ${{{\tt {C}}}}\rightarrow {{{\tt {B}}}}$ labelled $+$.]

\[ex:29\] Let us consider the following scenario with the graph from Figure \[fig:supportex29\] and where we know that if ${{{\tt {A}}}}$ is believed, then unless ${{{\tt {C}}}}$ is believed, ${{{\tt {B}}}}$ is disbelieved. Thus, ${{{\tt {A}}}}$ carries out an attack that can be overruled by the support from ${{{\tt {C}}}}$[^4]. We can create the constraint ${p}({{{\tt {A}}}}) >0.5 \land {p}({{{\tt {C}}}}) \leq 0.5 \rightarrow {p}({{{\tt {B}}}}) <0.5$ to reflect this. The interplay between ${{{\tt {A}}}}$ and ${{{\tt {C}}}}$ shows that despite the fact that ${{{\tt {A}}}}$ has primarily a negative effect on ${{{\tt {B}}}}$, believing ${{{\tt {A}}}}$ might not always imply disbelieving ${{{\tt {B}}}}$ due to the presence of other arguments. Therefore, as we can see, both negative and positive relations can be interpreted in various ways, and their actual influence can change depending on the context in which they are analyzed. Hence, rather than forcing an attack to have a negative effect, we interpret it as a relation not having a positive effect and support as not having a negative effect. In this respect, our approach is similar to the one in abstract dialectical frameworks [@BrewkaESWW13], which as seen in [@Polberg16] subsumes a wide range of existing methods. However, as motivated by Example \[ex:locglob\], we should additionally distinguish between local and global influence, the difference between them being whether all or some (parts of) constraints are taken into account.
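The overruled attack of Example \[ex:29\] can be illustrated in the same discretized style. Below, `phi` is our own encoding of the constraint as a Boolean test (a sketch, not code from the paper):

```python
# Constraint from Example [ex:29]:
# p(A) > 0.5 and p(C) <= 0.5 implies p(B) < 0.5.
def phi(a, b, c):
    return (not (a > 0.5 and c <= 0.5)) or b < 0.5

vals = [0, 0.5, 1]
# With C disbelieved, believing A forces disbelieving B:
assert all(b < 0.5 for b in vals if phi(1, b, 0))
# With C believed, believing A leaves B unconstrained:
assert any(b > 0.5 for b in vals if phi(1, b, 1))
```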
What we would also like to observe is that selecting the constraints against which the relations should be tested is not necessarily an objective process. Let us again look at Example \[ex:locglob\]: \[ex:locglob2\] Let us come back to the graph depicted in Figure \[fig:supportex\] and analyzed in Example \[ex:locglob\]. In order to test the local impact that ${{{\tt {B}}}}$ has on ${{{\tt {A}}}}$, our intuition would be to focus on constraints $\varphi_1$ and $\varphi_2$. Let us however consider replacing $\varphi_3$ and $\varphi_4$ with an equivalent constraint $({p}({{{\tt {B}}}}) >0.5 \rightarrow {p}({{{\tt {D}}}}) >0.5) \land ({p}({{{\tt {D}}}}) >0.5 \rightarrow {p}({{{\tt {A}}}}) < 0.5)$. We observe that this replacement does not affect the satisfiability of our set. From the new constraint we can also infer another constraint $\varphi' {:}{p}({{{\tt {B}}}}) >0.5 \land {p}({{{\tt {D}}}}) >0.5 \rightarrow {p}({{{\tt {A}}}}) <0.5$. Again, adding it to the constraint set in no way affects the satisfying distributions. However, this constraint can be interpreted as a group attack on ${{{\tt {A}}}}$ by ${{{\tt {B}}}}$ and ${{{\tt {D}}}}$, and if we were to check the local impact that ${{{\tt {B}}}}$ has on ${{{\tt {A}}}}$, the intuition would be to take it into consideration. Consequently, despite the logical equivalence of the original and the modified sets of constraints, the perception of the relations stemming from them might not be the same. Thus, similarly as in the case of relation coverage, determining the nature of a given relation depends on the constraints that we choose to analyze. Likewise, we will focus on graphs with consistent constraints, and refer to [@HunterPT2018Arxiv] for a discussion on handling the inconsistent ones. Let us first consider a simplified approach, which primarily focuses on arguments being believed, disbelieved or neither.
We assume that a relation we want to investigate is at least semi-effective; those that are not will be referred to as unspecified. Semi-effective relations can later be deemed attacking, supporting, dependent or subtle. Attack means that a target argument that is not believed remains as such when the source is believed. In other words, we want to avoid situations in which believing the source would lead to believing the target. Support can be defined in a similar fashion. A dependent relation is seen as neither supporting nor attacking, and one that is both is referred to as subtle[^5]: \[def:relations\] Let $X = ({{\cal G}}, {{\cal L}}, {{\cal C}})$ be a consistent epistemic graph. Let $Z \subseteq {{\sf Closure}}({{\cal C}})$ be a consistent set of epistemic constraints, $F \subseteq {{\sf Nodes}}({{\cal G}}) \setminus \{{{{\tt {B}}}}\}$ and $G = F \setminus \{{{{\tt {A}}}}\}$ be sets of arguments. The relation represented by $({{{\tt {A}}}}, {{{\tt {B}}}})\in {{\sf Nodes}}({{\cal G}}) \times {{\sf Nodes}}({{\cal G}})$ is: - **attacking w.r.t. $(Z,F)$** if it is semi–effective w.r.t. $(Z,F)$ and for every constraint combination ${\cal CC}^{F}$ s.t. $Z \cup {\cal CC}^{F} \not\VDash \bot$ and $Z \cup {\cal CC}^{F}\rvert_G \cup \{{p}({{{\tt {A}}}}) >0.5\} \not\VDash \bot$, if $Z \cup {\cal CC}^{F} \VDash {p}({{{\tt {B}}}}) \leq 0.5$ then $Z \cup {\cal CC}^{F}\rvert_G \cup \{{p}({{{\tt {A}}}}) >0.5\} \VDash {p}({{{\tt {B}}}}) \leq 0.5$. - **supporting w.r.t. $(Z,F)$** if it is semi–effective w.r.t. $(Z,F)$ and for every constraint combination ${\cal CC}^{F}$ s.t. $Z \cup {\cal CC}^{F} \not\VDash \bot$ and $Z \cup {\cal CC}^{F}\rvert_G \cup \{{p}({{{\tt {A}}}}) >0.5\} \not\VDash \bot$, if $Z \cup {\cal CC}^{F} \VDash {p}({{{\tt {B}}}}) \geq 0.5$ then $Z \cup {\cal CC}^{F}\rvert_G \cup \{{p}({{{\tt {A}}}}) >0.5\} \VDash {p}({{{\tt {B}}}}) \geq 0.5$. - **dependent w.r.t. $(Z,F)$** if it is semi–effective but neither attacking nor supporting w.r.t. $(Z,F)$ - **subtle w.r.t.
$(Z,F)$** if it is semi–effective and both attacking and supporting w.r.t. $(Z,F)$ - **unspecified w.r.t. $(Z,F)$** if it is not semi–effective w.r.t. $(Z,F)$ Depending on the choice of constraints and arguments that we use for testing, it can happen that a relation is seen as supporting or attacking due to vacuous truth. For example, we may never find an appropriate combination s.t. $Z \cup {\cal CC}^{F} \VDash {p}({{{\tt {B}}}}) \geq 0.5$ ($Z \cup {\cal CC}^{F} \VDash {p}({{{\tt {B}}}}) \leq 0.5$), or we cannot find constraint combinations that would be consistent with $Z$. Consequently, we can also consider the following strengthening: Let $X = ({{\cal G}}, {{\cal L}}, {{\cal C}})$ be a consistent epistemic graph, $Z \subseteq {{\sf Closure}}({{\cal C}})$ be a consistent set of epistemic constraints, $F \subseteq {{\sf Nodes}}({{\cal G}})\setminus \{{{{\tt {B}}}}\}$ and $G = F \setminus \{{{{\tt {A}}}}\}$ be sets of arguments. Then a supporting (resp. attacking, dependent, subtle) w.r.t. $(Z, F)$ relation $({{{\tt {A}}}}, {{{\tt {B}}}})$ is **strong** w.r.t. $(Z, F)$ if: - for every constraint combination ${\cal CC}^{F}$ it holds that $Z \cup {\cal CC}^{F} \not\VDash \bot$ and $Z \cup {\cal CC}^{F}\rvert_G \cup \{{p}({{{\tt {A}}}}) >0.5\} \not\VDash \bot$ - and there is at least one constraint combination ${\cal CC}^{F}$ s.t. $Z \cup {\cal CC}^{F} \VDash {p}({{{\tt {B}}}}) \geq 0.5$ (resp. $Z \cup {\cal CC}^{F} \VDash {p}({{{\tt {B}}}}) \leq 0.5$ or both). \[ex:rellabel\] Let us consider relation $({{{\tt {B}}}},{{{\tt {A}}}})$. We can observe that ${{\cal C}}\VDash \varphi_1 \land \varphi_2$ and $\{\varphi_1 \land \varphi_2 \} \VDash ({p}({{{\tt {A}}}}) \leq 0.5 \lor {p}({{{\tt {B}}}}) > 0.5 \lor {p}({{{\tt {C}}}}) > 0.5) \land ({p}({{{\tt {A}}}}) > 0.5 \lor {p}({{{\tt {B}}}}) \leq 0.5 \lor {p}({{{\tt {D}}}}) \geq 0.5)$. 
Let us therefore take $Z = \{ ({p}({{{\tt {A}}}}) \leq 0.5 \lor {p}({{{\tt {B}}}}) > 0.5 \lor {p}({{{\tt {C}}}}) > 0.5) \land ({p}({{{\tt {A}}}}) > 0.5 \lor {p}({{{\tt {B}}}}) \leq 0.5 \lor {p}({{{\tt {D}}}}) \geq 0.5)\}$ and $F = \{{{{\tt {B}}}}, {{{\tt {C}}}}, {{{\tt {D}}}}\}$ as our parameters for testing the nature of $({{{\tt {B}}}},{{{\tt {A}}}})$. We can observe that if we take the sets $W = \{{p}({{{\tt {B}}}}) = 0, {p}({{{\tt {C}}}}) = 0, {p}({{{\tt {D}}}}) = 0\}$ and $W' = \{{p}({{{\tt {B}}}}) =1, {p}({{{\tt {C}}}}) = 0, {p}({{{\tt {D}}}}) = 0\}$, then $Z \cup W \VDash {p}({{{\tt {A}}}}) \neq 1$ and $Z \cup W' \not\VDash {p}({{{\tt {A}}}}) \neq 1$. Thus, the $({{{\tt {B}}}}, {{{\tt {A}}}})$ relation is semi–effective w.r.t. $(Z, F)$. Furthermore, for all values $y_1, y_2, y_3 \in [0,1]$, $Z \cup \{{p}({{{\tt {B}}}}) =y_1, {p}({{{\tt {C}}}}) = y_2, {p}({{{\tt {D}}}}) = y_3\} \not\VDash \bot$ and $Z \cup \{{p}({{{\tt {C}}}}) =y_2, {p}({{{\tt {D}}}}) = y_3\} \cup \{ {p}({{{\tt {B}}}}) > 0.5\} \not\VDash \bot$. We can also observe that if $y_1 >0.5$ and $y_3<0.5$, then $Z \cup \{{p}({{{\tt {B}}}}) =y_1, {p}({{{\tt {C}}}}) = y_2, {p}({{{\tt {D}}}}) = y_3\} \VDash {p}({{{\tt {A}}}}) > 0.5$, and if $y_1 \leq 0.5$ and $y_2 \leq 0.5$, then $Z \cup \{{p}({{{\tt {B}}}}) =y_1, {p}({{{\tt {C}}}}) = y_2, {p}({{{\tt {D}}}}) = y_3\} \VDash {p}({{{\tt {A}}}}) \leq 0.5$. Otherwise, any probability can be assigned to ${{{\tt {A}}}}$. Hence, for support, we only need to consider the first case, and amending the set of constraints with ${p}({{{\tt {B}}}}) > 0.5$ will not change the outcome. Thus, the $({{{\tt {B}}}}, {{{\tt {A}}}})$ relation is strongly supporting w.r.t. $(Z, F)$. We can consider the second case and amend the constraints in the same way to see that the relation is not attacking. Let us now take into account all of the constraints and assume $Z = {{\cal C}}$.
We can observe that if $F$ is left the way it is, the $({{{\tt {B}}}}, {{{\tt {A}}}})$ relation is in fact unspecified. This is due to the fact that once the values for ${{{\tt {C}}}}$ and ${{{\tt {D}}}}$ are set, modifying the value of ${{{\tt {B}}}}$ either leads to inconsistency (caused by $\varphi_3$) or does not change anything anymore. We can therefore reduce the set $F$ to $\{{{{\tt {B}}}}, {{{\tt {C}}}}\}$. At this point, we observe that for every $y_1, y_2 \in [0,1]$, the set $W = \{{p}({{{\tt {B}}}}) = y_1, {p}({{{\tt {C}}}}) = y_2\}$ is consistent with $Z$. Furthermore, for $y_1 > 0.5$, $Z \cup W \VDash {p}({{{\tt {A}}}}) < 0.5$, for $y_1 \leq 0.5$ and $y_2 \leq 0.5$, $Z \cup W \VDash {p}({{{\tt {A}}}}) \leq 0.5$, and for $y_1 \leq 0.5$ and $y_2 > 0.5$, ${{{\tt {A}}}}$ can take any probability. We can therefore show that $({{{\tt {B}}}}, {{{\tt {A}}}})$ is strongly attacking w.r.t. $(Z, F)$. However, since we cannot derive ${p}({{{\tt {A}}}}) \geq 0.5$, it is also supporting and subtle, even though not strongly. A summary of our analysis, as well as of some other relations, can be seen in Table \[tab:inducedlabels\].
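The grid-style reasoning in Example \[ex:rellabel\] can be mechanized. The sketch below is our own illustration (the grid $\{0, 0.25, 0.5, 0.75, 1\}$ is an assumption made for finiteness); it checks the two entailments claimed for the first choice of $Z$ with $F = \{{{{\tt {B}}}}, {{{\tt {C}}}}, {{{\tt {D}}}}\}$:

```python
from itertools import product

GRID = [0, 0.25, 0.5, 0.75, 1]

# Z from Example [ex:rellabel], encoded as a Boolean test on marginals.
def Z(a, b, c, d):
    return ((a <= 0.5 or b > 0.5 or c > 0.5)
            and (a > 0.5 or b <= 0.5 or d >= 0.5))

def allowed_a(b, c, d):
    """Values of p(A) on the grid consistent with Z and the given combination."""
    return [a for a in GRID if Z(a, b, c, d)]

for b, c, d in product(GRID, repeat=3):
    if b > 0.5 and d < 0.5:       # claimed: Z forces p(A) > 0.5
        assert all(a > 0.5 for a in allowed_a(b, c, d))
    if b <= 0.5 and c <= 0.5:     # claimed: Z forces p(A) <= 0.5
        assert all(a <= 0.5 for a in allowed_a(b, c, d))
```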
[|c|c|P[6cm]{}|c|c|]{} Relation & Label & $Z$ & $F$ & Type\ $({{{\tt {B}}}}, {{{\tt {A}}}})$ & + & [$\{({p}({{{\tt {A}}}}) \leq 0.5 \lor {p}({{{\tt {B}}}}) > 0.5 \lor {p}({{{\tt {C}}}}) > 0.5) \land ({p}({{{\tt {A}}}}) > 0.5 \lor {p}({{{\tt {B}}}}) \leq 0.5 \lor {p}({{{\tt {D}}}}) \geq 0.5)\}$]{}& $\{{{{\tt {B}}}},{{{\tt {C}}}},{{{\tt {D}}}}\}$ & (strongly) supporting\ $({{{\tt {B}}}}, {{{\tt {A}}}})$ & + & ${{\cal C}}$ & $\{{{{\tt {B}}}},{{{\tt {C}}}},{{{\tt {D}}}}\}$ & unspecified\ $({{{\tt {B}}}}, {{{\tt {A}}}})$ & + & ${{\cal C}}$ & $\{{{{\tt {B}}}},{{{\tt {C}}}}\}$ & (strongly) attacking\ $({{{\tt {B}}}}, {{{\tt {A}}}})$ & + & ${{\cal C}}$ & $\{{{{\tt {B}}}},{{{\tt {C}}}}\}$ & supporting\ $({{{\tt {B}}}}, {{{\tt {A}}}})$ & + & ${{\cal C}}$ & $\{{{{\tt {B}}}},{{{\tt {C}}}}\}$ & subtle\ $({{{\tt {C}}}}, {{{\tt {A}}}})$ & + & [$\{({p}({{{\tt {A}}}}) \leq 0.5 \lor {p}({{{\tt {B}}}}) > 0.5 \lor {p}({{{\tt {C}}}}) > 0.5) \land ({p}({{{\tt {A}}}}) > 0.5 \lor {p}({{{\tt {C}}}}) \leq 0.5 \lor {p}({{{\tt {D}}}}) \geq 0.5)\}$]{}& $\{{{{\tt {B}}}},{{{\tt {C}}}},{{{\tt {D}}}}\} $ &(strongly) supporting\ $({{{\tt {D}}}}, {{{\tt {A}}}})$ & - & [$\{({p}({{{\tt {D}}}}) <0.5 \land ({p}({{{\tt {B}}}}) >0.5 \lor {p}({{{\tt {C}}}}) > 0.5)) \rightarrow {p}({{{\tt {A}}}}) >0.5, {p}({{{\tt {D}}}}) >0.5 \rightarrow {p}({{{\tt {A}}}}) < 0.5\}$]{}& $\{{{{\tt {B}}}},{{{\tt {C}}}},{{{\tt {D}}}}\}$ &(strongly) attacking\ $({{{\tt {D}}}}, {{{\tt {A}}}})$ & - & & & (strongly) supporting\ $({{{\tt {D}}}}, {{{\tt {A}}}})$ & - & & & attacking\ $({{{\tt {D}}}}, {{{\tt {A}}}})$ & - & & & subtle\ In the previous definition we have dealt with positive and negative relations in a more ternary manner, i.e. it only mattered whether the parent and the target were believed, and not up to what degree. Thus, we can also use more refined methods, coming in the form of positive and negative monotony. In other words, assuming beliefs in other relevant arguments remain unchanged, a higher belief in one argument will ensure that there is a higher (resp. lower) belief in the other argument. Let $X = ({{\cal G}}, {{\cal L}}, {{\cal C}})$ be a consistent epistemic graph. Let $Z \subseteq {{\sf Closure}}({{\cal C}})$ be a consistent set of epistemic constraints and $F \subseteq {{\sf Nodes}}({{\cal G}}) \setminus \{{{{\tt {B}}}}\}$ be a set of arguments.
The relation represented by $({{{\tt {A}}}}, {{{\tt {B}}}})\in {{\sf Nodes}}({{\cal G}}) \times {{\sf Nodes}}({{\cal G}})$ is: - **positive monotonic** w.r.t. $(Z,F)$ if for every ${P}, {P}' \in {{\sf Sat}}(Z)$ s.t. - ${P}({{{\tt {A}}}}) > {P}'({{{\tt {A}}}})$, and - for all ${{{\tt {C}}}}\in F$, if ${{{\tt {C}}}}\neq {{{\tt {A}}}}$ and ${{{\tt {C}}}}\neq {{{\tt {B}}}}$ then ${P}({{{\tt {C}}}}) = {P}'({{{\tt {C}}}})$, it holds that ${P}({{{\tt {B}}}}) > {P}'({{{\tt {B}}}})$. - **negative monotonic** w.r.t. $(Z,F)$ if for every ${P}, {P}' \in {{\sf Sat}}(Z)$ s.t. - ${P}({{{\tt {A}}}}) > {P}'({{{\tt {A}}}})$, and - for all ${{{\tt {C}}}}\in F$, if ${{{\tt {C}}}}\neq {{{\tt {A}}}}$ and ${{{\tt {C}}}}\neq {{{\tt {B}}}}$ then ${P}({{{\tt {C}}}}) = {P}'({{{\tt {C}}}})$, it holds that ${P}({{{\tt {B}}}}) < {P}'({{{\tt {B}}}})$. - **non–monotonic dependent** w.r.t. $(Z,F)$ if it is neither positive nor negative monotonic As previously, we will call a relation arbitrary positive (negative) monotonic or non-monotonic dependent if a suitable pair $(Z, F)$ can be found. \[ex:monoton\] If we look at Example \[ex:rellabel\] once more, we can observe that w.r.t. the previously analyzed $(Z,F)$ pairs, all of the relations are non–monotonic dependent. The constraints, while they can specify whether the target argument should be believed or not, are not specific enough to state the precise degree of this belief. Instead, let us now consider a simple graph $(\{{{{\tt {A}}}},{{{\tt {B}}}},{{{\tt {C}}}}\}, \{ ({{{\tt {B}}}},{{{\tt {A}}}}), ({{{\tt {C}}}},{{{\tt {A}}}})\})$ where the $({{{\tt {B}}}},{{{\tt {A}}}})$ relation is labelled with $+$ and the $({{{\tt {C}}}}, {{{\tt {A}}}})$ relation is labelled with $-$, and we have a single constraint $\varphi {:}{p}({{{\tt {C}}}}) + {p}({{{\tt {A}}}}) - {p}({{{\tt {B}}}}) = 1$. Let us focus on the $({{{\tt {B}}}},{{{\tt {A}}}})$ relation and assume an arbitrary probability distribution ${P}$ satisfying our constraint.
Let ${P}({{{\tt {C}}}}) = x$. Then, ${P}({{{\tt {A}}}}) = y + {P}({{{\tt {B}}}})$ for $y = 1-x$. We can thus show that any increase in the belief in ${{{\tt {B}}}}$ will result in a proportional increase in the belief in ${{{\tt {A}}}}$ and that this relation is positive monotonic w.r.t. $(\{\varphi\}, \{{{{\tt {B}}}},{{{\tt {C}}}}\})$. In a similar fashion we can show $({{{\tt {C}}}},{{{\tt {A}}}})$ to be negative monotonic under the same parameters. We would like to stress that while epistemic graphs are standard directed graphs and not hypergraphs (i.e. edges join only two arguments), the relations between arguments do not need to be binary. By this we understand that it can happen that an argument is impacted only by the joint presence of more than one other argument. This is reflected in the way our relation-specific notions are defined. For instance, Definition \[def:effectiveness\] demands that certain conditions are met at least once, not that they are met all the time, and Definition \[def:relations\] distinguishes attacking and supporting relations as not having a given effect instead of having one. To exemplify, forcing a supporting relation to always have an explicit positive effect would mean that it can impact a target argument on its own, without the presence of other arguments. This is the binary interpretation that is not demanded here. Our notions reflect similar concepts from ADFs, which have been shown to subsume a wide range of non-binary relations despite the fact that link type analysis is done between pairs of arguments [@Polberg16]. Internal Graph Coherence {#sec:intercoh} ------------------------ In the previous sections we have considered what information about arguments and relations between them we can extract from the constraints associated with the graph. Comparing this information with what is presented in the graph can provide us with insight into the completeness and internal coherence of the graph.
There are many ways in which we could determine whether the coverage and labeling consistency of the graph are “good” and we intend to explore this more in the future. Currently, we focus on the following notions, though please note that the list is by no means exhaustive: Let $X = ({{\cal G}}, {{\cal L}}, {{\cal C}})$ be an epistemic graph. Let ${\sf DirectRels}({{{\tt {A}}}}) = \{ {{{\tt {B}}}}\mid {{{\tt {B}}}}\in {{\sf Parent}}({{{\tt {A}}}})$ or ${{{\tt {A}}}}\in {{\sf Parent}}({{{\tt {B}}}})\}$ be the set of arguments directly connected to an argument ${{{\tt {A}}}}\in {{\sf Nodes}}({{\cal G}})$ in ${{\cal G}}$. Let ${{\sf Arcs}}^*({{\cal G}}) = \{ ({{{\tt {A}}}}, {{{\tt {B}}}}) \mid $ there exists an undirected path from ${{{\tt {A}}}}$ to ${{{\tt {B}}}}$ in ${{\cal G}}\}$ denote the set of pairs of all arguments connected in the graph. We say that $X$ is: - **bounded** if every argument is default or arbitrary fully covered - **entry-bounded** if every argument is default or arbitrary fully covered apart from possibly arguments ${{{\tt {A}}}}$ s.t. ${{\sf Parent}}({{{\tt {A}}}}) = \emptyset$ - **directly connected** if every relation in ${{\sf Arcs}}({{\cal G}})$ is arbitrary semi-effective - **indirectly connected** if every relation $({{{\tt {A}}}},{{{\tt {B}}}}) \in {{\sf Arcs}}^*({{\cal G}})$ is arbitrary semi-effective - **hidden connected** if there exists an arbitrary semi-effective relation $({{{\tt {A}}}},{{{\tt {B}}}}) \notin {{\sf Arcs}}^*({{\cal G}})$ - **locally connected** if for every ${{{\tt {A}}}}$, there exists a consistent set $Z \subseteq {{\sf Closure}}({{\cal C}})$ s.t. ${{{\tt {A}}}}$ is fully covered w.r.t. $(Z, {\sf DirectRels}({{{\tt {A}}}}) \setminus \{{{{\tt {A}}}}\})$ and for every ${{{\tt {B}}}}\in {\sf DirectRels}({{{\tt {A}}}}) \setminus \{{{{\tt {A}}}}\}$, $({{{\tt {B}}}},{{{\tt {A}}}})$ or $({{{\tt {A}}}}, {{{\tt {B}}}})$ is semi–effective w.r.t. $(Z, {\sf DirectRels}({{{\tt {A}}}}) \setminus \{{{{\tt {A}}}}\})$.
A bounded graph is an epistemic graph we would expect to obtain through translating various existing argumentation frameworks under their standard semantics (see also Section \[section:Comparison\]). The purpose of an entry-bounded graph is to represent situations in which the internal reasoning of the graph is stated, but the actual resulting beliefs in a distribution depend on the “input” beliefs provided by the user. Connectedness of a graph contrasts the relation coverage deduced from constraints with the existence of connections within the graph. In particular, we can distinguish hidden connections, which may reflect a user that is providing constraints that are not reflected by the structure of the graph. Let us now focus on comparing the nature of a relation induced from the constraints and the nature defined by the labeling. The presented definitions could be used to verify whether the relation labels are in some way reflected by the constraints or, if possible, to assign labels to relations when they are missing. A possible - though not the only - way to do so is shown in the definition below. Let $X = ({{\cal G}}, {{\cal L}}, {{\cal C}})$ be a consistent epistemic graph. 
We say that ${{\cal L}}$ is **(strongly) consistent** if for every $({{{\tt {A}}}}, {{{\tt {B}}}}) \in {{\sf Arcs}}({{\cal G}})$, the following holds: - if $+ \in {{\cal L}}(({{{\tt {A}}}}, {{{\tt {B}}}}))$, then $({{{\tt {A}}}},{{{\tt {B}}}})$ is arbitrary (strongly) supporting - if $- \in {{\cal L}}(({{{\tt {A}}}}, {{{\tt {B}}}}))$, then $({{{\tt {A}}}},{{{\tt {B}}}})$ is arbitrary (strongly) attacking - if $* \in {{\cal L}}(({{{\tt {A}}}}, {{{\tt {B}}}}))$, then $({{{\tt {A}}}},{{{\tt {B}}}})$ is arbitrary (strongly) dependent - if ${{\cal L}}(({{{\tt {A}}}}, {{{\tt {B}}}})) = \emptyset$, then $({{{\tt {A}}}},{{{\tt {B}}}})$ is arbitrary unspecified We say that ${{\cal L}}$ is **monotonic consistent** if for every $({{{\tt {A}}}}, {{{\tt {B}}}}) \in {{\sf Arcs}}({{\cal G}})$, the following holds: - if $+ \in {{\cal L}}(({{{\tt {A}}}}, {{{\tt {B}}}}))$, then $({{{\tt {A}}}},{{{\tt {B}}}})$ is arbitrary positive monotonic - if $- \in {{\cal L}}(({{{\tt {A}}}}, {{{\tt {B}}}}))$, then $({{{\tt {A}}}},{{{\tt {B}}}})$ is arbitrary negative monotonic - if $* \in {{\cal L}}(({{{\tt {A}}}}, {{{\tt {B}}}}))$, then $({{{\tt {A}}}},{{{\tt {B}}}})$ is arbitrary non–monotonic dependent In this case, we could either use the set $\{+,-\}$ to denote subtle relations, or introduce a new label in order to avoid ambiguity. We also observe that in practice, every relation can be arbitrary unspecified, given that one can decide to test relations against the empty set of constraints. These approaches can be further refined in the future by putting restrictions on how the $Z$ and $F$ sets are chosen, imposing a certain ranking on the relations (for example, if a relation is seen as strongly supporting and not strongly attacking, strong support could take precedence) and/or making the label conditions even stronger (for example, we can demand that ${{\cal L}}(({{{\tt {A}}}}, {{{\tt {B}}}})) = \emptyset$ iff $({{{\tt {A}}}}, {{{\tt {B}}}})$ is unspecified w.r.t. every $(Z, F)$ pair).
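Once the arbitrary types of the relations have been determined, checking label consistency is a simple matching exercise. The sketch below uses hypothetical helpers and data of our own (the types are taken from the analysis summarized in Table \[tab:inducedlabels\]); it is an illustration, not code from the paper:

```python
# Which arbitrary relation type each label demands, per the definition above.
REQUIRED = {"+": "supporting", "-": "attacking", "*": "dependent"}

def labels_consistent(labels, types):
    """labels: edge -> set of labels; types: edge -> set of arbitrary types."""
    return all(REQUIRED[l] in types[e] for e, ls in labels.items() for l in ls)

labels = {("B", "A"): {"+"}, ("C", "A"): {"+"}, ("D", "A"): {"-"}}
types = {("B", "A"): {"supporting", "attacking", "subtle"},
         ("C", "A"): {"supporting"},
         ("D", "A"): {"attacking", "supporting", "subtle"}}
assert labels_consistent(labels, types)
```

Note that a relation may have several arbitrary types (e.g. a subtle relation is both supporting and attacking), which is why `types` maps each edge to a set rather than a single value.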
We can consider the analysis performed in Example \[ex:rellabel\] to show that the labeling proposed for the graph from Example \[ex:locglob\] is strongly consistent with the analyzed set of constraints. The same analysis also shows that it is not the only possible consistent labeling. Following the analysis in Example \[ex:monoton\], we can also argue that a labeling that assigns $*$ to every relation would be more adequate based on the monotonicity analysis. We can observe that the labeling for the graph from Example \[ex:monoton\] is monotonic consistent with the assumed constraint. The “quality” of our epistemic graph, particularly in terms of boundedness, can affect our choice of how the graph is evaluated. For instance, the less we know about the graph, the more risky credulous reasoning becomes. In turn, connectedness can affect the choice of arguments in applications such as persuasive dialogues. It can be used both to highlight ineffective relations that should not be taken into consideration as system moves and to expose incomplete relations that, if used by the user, could lead to situations in which the system cannot decide what to do next. We will come back to this in Section \[section:CaseStudy\]. Unfortunately, there is also the issue of inconsistent relation labelings. Natural language arguments are often enthymemes and the labels we would obtain through instantiating the graph are not necessarily the ones that the users would recognize. The fact that the personal views or knowledge of a given agent affects their decoding of the graph has already been observed in [@PolbergHunter17]. Recovering consistency of the relation labelings is, however, not trivial. While one can investigate the constraints and override the graph labeling to force consistency, it is unclear what methods would be optimal for allowing information in the graph to override or delete the information in the constraints.
Furthermore, one could also consider cases where both parts of the graph and the constraints are sacrificed, as well as where neither, and the existence of inconsistency becomes an additional piece of information that we take advantage of. The actual chosen strategy, as well as the criteria by which it is judged, can depend on the application and the methods with which epistemic graphs are combined. We can consider criteria such as accuracy or popularity when determining whether the graph or constraints (or possibly both) should be adjusted. For instance, argument and relation mining methods are of varying accuracy, and we can expect that the epistemic constraint mining methods will perform differently depending on the quality and amount of available data. Given conflicting graphs and constraints, one can therefore select which one to prioritize and which one to adjust depending on how much we can trust them to be an accurate representation of, for example, the reasoning patterns and knowledge of (sets of) agents. Concerning popularity, we observe that the graphs and the constraints extracted from a single source of knowledge can differ between agents performing the extraction. One can therefore sacrifice parts of graphs or constraints leading to inconsistency depending on how unpopular or unlikely they are w.r.t. the given population. There are also situations in which keeping both the graph and the constraints, despite the issues between them, could be informative. An argument graph instantiated with a given structured argumentation approach from, for instance, a legal text, can be viewed as a normative representation of a given problem. It can later be augmented with the constraints of a given agent, which offer a more subjective representation. The dichotomy between the two could be a source of information of its own and, for instance, serve as a measure of how reliable or reasonable a given agent is. 
Favouring either of the perspectives could also be used as guidance as to whether the graph or the constraints should be sacrificed in the face of contradiction. These are only a few possible scenarios, and we leave investigating consistency retrieving strategies for future work. Epistemic Semantics {#sec:epsem} ------------------- Epistemic graphs offer us a number of ways in which we can decide how much a given argument should be believed or disbelieved depending on the remaining arguments. Evaluating the graph and deciding what probabilities should be assigned to (sets of) arguments is the role of the epistemic semantics: Let $X = ({{\cal G}}, {{\cal L}}, {{\cal C}})$ be an epistemic graph. An **epistemic semantics** associates $X$ with a set ${\cal R} \subseteq {{\sf Dist}}({{\cal G}})$, where ${{\sf Dist}}({{\cal G}})$ is the set of all belief distributions over ${{\cal G}}$. Although the main aim of the epistemic semantics is to select those probability distributions that satisfy our requirements, one can also enforce additional restrictions for refining the sets of acceptable distributions, on which we will focus in this section. First of all, the simplest possible semantics is the one that associates a given graph with the set of distributions satisfying its constraints: For an epistemic graph $({{\cal G}}, {{\cal L}}, {{\cal C}})$, a distribution ${P}\in {{\sf Dist}}({{\cal G}})$ meets the **satisfaction semantics** iff ${P}\in {{\sf Sat}}({{\cal C}})$. Given that an inconsistent graph is not particularly interesting, we will aim at specifying epistemic graphs that have consistent constraints. However, we would like to note that this may not always be possible and that inconsistency does not necessarily mean that the constraints are not rational. For example, the stable semantics [@CosteMarquis:2007] for argumentation graphs does not always produce extensions, and this is a result of the restrictive nature of this semantics.
We can therefore expect that epistemic graphs aiming to emulate this may have inconsistent sets of constraints. Various properties which can be quite useful concern minimizing or maximizing certain aspects of a distribution. Similarly as in other types of argumentation semantics, we can aim to maximize or minimize the set of arguments that are believed up to any degree, disbelieved up to any degree or undecided. We can also consider the information ordering, such as the one used in [@BrewkaESWW13], which maximizes or minimizes belief and disbelief together. We can therefore introduce the following means of comparing distributions: Let $X = ({{\cal G}}, {{\cal L}}, {{\cal C}})$ be an epistemic graph and ${P}, {P}' \in {{\sf Dist}}({{\cal G}})$ be probability distributions. We say that: - ${P}\lesssim_A {P}'$ iff $\{{{{\tt {A}}}}\mid {P}({{{\tt {A}}}}) >0.5 \} \subseteq \{{{{\tt {A}}}}\mid {P}'({{{\tt {A}}}}) > 0.5 \}$ - ${P}\lesssim_R {P}'$ iff $\{{{{\tt {A}}}}\mid {P}({{{\tt {A}}}}) <0.5 \} \subseteq \{{{{\tt {A}}}}\mid {P}'({{{\tt {A}}}}) < 0.5 \}$ - ${P}\lesssim_U {P}'$ iff $\{{{{\tt {A}}}}\mid {P}({{{\tt {A}}}}) =0.5 \} \subseteq \{{{{\tt {A}}}}\mid {P}'({{{\tt {A}}}}) = 0.5 \}$ - ${P}\lesssim_I {P}'$ iff $\{{{{\tt {A}}}}\mid {P}({{{\tt {A}}}}) >0.5 \} \subseteq \{{{{\tt {A}}}}\mid {P}'({{{\tt {A}}}}) > 0.5 \}$ and $\{{{{\tt {A}}}}\mid {P}({{{\tt {A}}}}) <0.5 \} \subseteq \{{{{\tt {A}}}}\mid {P}'({{{\tt {A}}}}) < 0.5 \}$ We will refer to these orderings as acceptance, rejection, undecided and information orderings. These approaches can be further refined to take the actual degrees into account as well. For example, in some scenarios a distribution s.t. ${P}({{{\tt {A}}}}) = {P}({{{\tt {B}}}}) = 1$ and ${P}({{{\tt {C}}}}) = 0.49$ might be preferable to one s.t. ${P}({{{\tt {A}}}}) = {P}({{{\tt {B}}}}) = {P}({{{\tt {C}}}}) = 0.51$, even if the actual number of believed arguments is smaller. 
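The four orderings above can be sketched directly on argument marginals. The function names below are our own (not an API from the paper), and the two distributions are invented for illustration:

```python
# Partition the arguments by belief status under a distribution P,
# given as a mapping from argument names to marginal probabilities.
def believed(P):    return {a for a, p in P.items() if p > 0.5}
def disbelieved(P): return {a for a, p in P.items() if p < 0.5}
def undecided(P):   return {a for a, p in P.items() if p == 0.5}

# The acceptance, rejection, undecided and information orderings.
def leq_A(P, Q): return believed(P) <= believed(Q)
def leq_R(P, Q): return disbelieved(P) <= disbelieved(Q)
def leq_U(P, Q): return undecided(P) <= undecided(Q)
def leq_I(P, Q): return leq_A(P, Q) and leq_R(P, Q)

P1 = {"A": 0.9, "B": 0.5, "C": 0.2}
P2 = {"A": 0.8, "B": 0.7, "C": 0.1}
assert leq_A(P1, P2) and leq_R(P1, P2) and leq_I(P1, P2)
assert not leq_U(P1, P2)  # P1 leaves B undecided while P2 does not
```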
Thus, we also consider the belief maximizing and minimizing approaches, based on the notion of entropy: For a probability distribution ${P}$, the **entropy** $H({P})$ of ${P}$ is defined as $$\begin{aligned} H({P}) = -\sum_{\Gamma\subseteq {{\sf Nodes}}({{\cal G}})} {P}(\Gamma) \log {P}(\Gamma)\end{aligned}$$ with $0 \log 0 =0$. The entropy measures the amount of indeterminateness of a probability distribution ${P}$. A probability function ${P}_{1}$ that describes absolutely certain knowledge, i.e. ${P}_{1}(\Gamma)=1$ for some $\Gamma\subseteq {{\sf Nodes}}({{\cal G}})$ and ${P}_{1}(\Gamma')=0$ for every other $\Gamma'\subseteq {{\sf Nodes}}({{\cal G}})$, yields minimal entropy $H({P}_{1})=0$. The uniform probability function ${P}_{0}$ with ${P}_{0}(\Gamma)=\frac{1}{|2^{{{\sf Nodes}}({{\cal G}})}|}$ for every $\Gamma\subseteq {{\sf Nodes}}({{\cal G}})$ yields maximal entropy $H({P}_{0})= -\log \sfrac{1}{|2^{{{\sf Nodes}}({{\cal G}})}|}$. Hence, entropy is minimal when we are completely certain about the possible world, and entropy is maximal when we are completely uncertain about the possible world. Let $X = ({{\cal G}}, {{\cal L}}, {{\cal C}})$ be an epistemic graph and ${P}, {P}' \in {{\sf Dist}}({{\cal G}})$ be probability distributions. We say that ${P}\lesssim_B {P}'$ iff $H({P}') \leq H({P})$. We will refer to the above as the belief ordering. Given that the purpose of an epistemic semantics is to grasp various optional properties whenever and however they are needed, a new semantics can be defined “on top” of a previous semantics, such as the satisfaction semantics. We can therefore propose the following, parameterized definition: \[def:paramsem\] Let $({{\cal G}}, {{\cal L}}, {{\cal C}})$ be an epistemic graph and ${\cal R}$ the set of distributions associated with it according to a given semantics $\sigma$. Let $v \in \{A,R,U,I,B\}$ denote acceptance, rejection, undecided, information or belief.
A distribution ${P}\in {{\sf Dist}}({{\cal G}})$ meets the **$\sigma{\mhyphen}v$ maximizing (minimizing) semantics** iff ${P}\in {\cal R}$ and ${P}$ is maximal (minimal) w.r.t. $\lesssim_v$ among the elements of ${\cal R}$. There are, of course, additional properties we may want to impose in order to refine the distributions produced by the constraints associated with a framework. In particular, we may want to limit the values that the distribution may take. With some exceptions, most of these restrictions can be expressed as straightforward constraints in the epistemic graphs. However, one has to observe that in a sense, they are completely independent of the underlying structure of the graph. Thus, we believe it is more appropriate to view them as additional, optional properties: A distribution ${P}$ is:

- **minimal** iff for every ${{{\tt {A}}}}\in {{\sf Nodes}}({{\cal G}})$, ${P}({{{\tt {A}}}}) = 0$
- **maximal** iff for every ${{{\tt {A}}}}\in {{\sf Nodes}}({{\cal G}})$, ${P}({{{\tt {A}}}}) = 1$
- **neutral** iff for every ${{{\tt {A}}}}\in {{\sf Nodes}}({{\cal G}})$, ${P}({{{\tt {A}}}}) = 0.5$
- **ternary** iff for every ${{{\tt {A}}}}\in {{\sf Nodes}}({{\cal G}})$, ${P}({{{\tt {A}}}}) \in \{0, 0.5, 1\}$
- **non–neutral**[^6] iff for every ${{{\tt {A}}}}\in {{\sf Nodes}}({{\cal G}})$, ${P}({{{\tt {A}}}}) \neq 0.5$
- **n–valued** iff $\left\vert{ \{ x \mid \exists {{{\tt {A}}}}\in {{\sf Nodes}}({{\cal G}}), \, {P}({{{\tt {A}}}}) = x\} }\right\vert = n$

We can therefore observe that there are various ways of refining probability distributions. We have proposed a number of ways we can minimize or maximize different aspects of the distributions, and it is possible that on certain epistemic graphs they will coincide. However, as the following examples will show, all of the methods are in principle distinct.
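The notions above can be illustrated with a small Python sketch (ours; representing distributions as dictionaries over frozensets is an implementation choice, not the paper's formalism): beliefs in single arguments are marginals of a distribution over subsets of arguments, and entropy vanishes on point masses while peaking on the uniform distribution.

```python
import math

# A belief distribution maps subsets of arguments to probabilities.
# The belief in a single argument is the total probability of the
# subsets containing it.

def marginal(P, a):
    return sum(x for s, x in P.items() if a in s)

def entropy(P):
    # with the convention 0 log 0 = 0
    return -sum(x * math.log(x) for x in P.values() if x > 0)

def is_ternary(P, args):
    return all(marginal(P, a) in (0.0, 0.5, 1.0) for a in args)

# Certain knowledge: a point mass on {A, B}; entropy 0.
P1 = {frozenset({"A", "B"}): 1.0}
# Uniform distribution over the four subsets of {A, B}; entropy log 4.
P0 = {frozenset(s): 0.25 for s in [(), ("A",), ("B",), ("A", "B")]}

print(marginal(P0, "A"))  # 0.5
print(entropy(P1) == 0)   # True
```

Both example distributions are ternary in the sense above, since every marginal lies in $\{0, 0.5, 1\}$.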
*(Figure \[fig:episemex5\]: a labelled graph with arguments ${{{\tt {A}}}}$, ${{{\tt {B}}}}$, ${{{\tt {C}}}}$, ${{{\tt {D}}}}$ and ${{{\tt {E}}}}$; negative edges from ${{{\tt {A}}}}$ to ${{{\tt {B}}}}$, from ${{{\tt {C}}}}$ to ${{{\tt {B}}}}$, from ${{{\tt {D}}}}$ to ${{{\tt {E}}}}$, from ${{{\tt {C}}}}$ to ${{{\tt {D}}}}$ and from ${{{\tt {D}}}}$ to ${{{\tt {C}}}}$; a positive edge from ${{{\tt {C}}}}$ to ${{{\tt {E}}}}$.)*

\[ex:episem5\] Let us consider the graph depicted in Figure \[fig:episemex5\] and the following set of constraints ${{\cal C}}$: $$\begin{gathered} \{ {p}({{{\tt {A}}}}) >0.5,\, {p}({{{\tt {B}}}}) + {p}({{{\tt {A}}}})\leq 1 \land {p}({{{\tt {B}}}})+ {p}({{{\tt {C}}}})\leq 1, \, {p}({{{\tt {C}}}}) \neq 0.5, \, \\ {p}({{{\tt {C}}}}) + {p}({{{\tt {D}}}})= 1, \, ({p}({{{\tt {C}}}}) > 0.5 \land {p}({{{\tt {D}}}}) < 0.5) \rightarrow {p}({{{\tt {E}}}}) = 0.5\}\end{gathered}$$ The types of ternary satisfying distributions of our graph are listed in Table \[tab:episemtab2\]. We also include an analysis of which of them are additionally maximizing or minimizing according to a given criterion. We observe that although only the beliefs in individual arguments are listed, each pattern is produced by a single distribution. The patterns set out by ${P}_2$ and ${P}_4$ describe distributions that assign probability $1$ to the set formed of the arguments believed with degree $1$ and $0$ to the rest (e.g. in ${P}_2$, the probability of $\{{{{\tt {A}}}},{{{\tt {D}}}}\}$ would be $1$ and $0$ for everything else). In turn, for ${P}_1$ and ${P}_3$, we would assign probability $0.5$ to the set of arguments that are not disbelieved, and additional $0.5$ to the set of arguments that are believed (e.g.
for ${P}_1$, $\{{{{\tt {A}}}},{{{\tt {C}}}},{{{\tt {E}}}}\}$ and $\{{{{\tt {A}}}},{{{\tt {C}}}}\}$ would be assigned $0.5$ and all other sets would be assigned $0$). This allows us to easily verify belief maximizing and minimizing patterns.

|         | ${{{\tt {A}}}}$ | ${{{\tt {B}}}}$ | ${{{\tt {C}}}}$ | ${{{\tt {D}}}}$ | ${{{\tt {E}}}}$ | max $\lesssim_A$ | max $\lesssim_R$ | max $\lesssim_I$ | max $\lesssim_U$ | max $\lesssim_B$ | min $\lesssim_A$ | min $\lesssim_R$ | min $\lesssim_I$ | min $\lesssim_U$ | min $\lesssim_B$ |
|---------|-----|-----|-----|-----|-----|------|------|------|------|------|------|------|------|------|------|
| ${P}_1$ | 1 | 0 | 1 | 0 | 0.5 | [$\checkmark$]{} | [$\checkmark$]{} | [$\checkmark$]{} | [$\checkmark$]{} | [$\times$]{} | [$\checkmark$]{} | [$\checkmark$]{} | [$\checkmark$]{} | [$\times$]{} | [$\checkmark$]{} |
| ${P}_2$ | 1 | 0 | 0 | 1 | 0 | [$\times$]{} | [$\checkmark$]{} | [$\checkmark$]{} | [$\times$]{} | [$\checkmark$]{} | [$\checkmark$]{} | [$\times$]{} | [$\times$]{} | [$\checkmark$]{} | [$\times$]{} |
| ${P}_3$ | 1 | 0 | 0 | 1 | 0.5 | [$\times$]{} | [$\times$]{} | [$\times$]{} | [$\checkmark$]{} | [$\times$]{} | [$\checkmark$]{} | [$\checkmark$]{} | [$\checkmark$]{} | [$\times$]{} | [$\checkmark$]{} |
| ${P}_4$ | 1 | 0 | 0 | 1 | 1 | [$\checkmark$]{} | [$\times$]{} | [$\checkmark$]{} | [$\times$]{} | [$\checkmark$]{} | [$\times$]{} | [$\checkmark$]{} | [$\times$]{} | [$\checkmark$]{} | [$\times$]{} |

: Types of probability distributions meeting the ternary and satisfaction semantics from Example \[ex:episem5\] and their conformity to given maximizing (left group) and minimizing (right group) semantics.[]{data-label="tab:episemtab2"}

*(Figure \[fig:af3\]: a labelled graph with arguments ${{{\tt {A}}}}$ to ${{{\tt {E}}}}$; negative edges from ${{{\tt {A}}}}$ to ${{{\tt {B}}}}$, from ${{{\tt {C}}}}$ to ${{{\tt {B}}}}$, from ${{{\tt {D}}}}$ to ${{{\tt {E}}}}$, from ${{{\tt {C}}}}$ to ${{{\tt {D}}}}$ and from ${{{\tt {D}}}}$ to ${{{\tt {C}}}}$; a negative self-loop on ${{{\tt {E}}}}$.)*

Consider the graph from Figure \[fig:af3\] and the following set of constraints ${{\cal C}}$:

- $\varphi_1 {:}{p}({{{\tt {A}}}}) > 0.5$
- $\varphi_2 {:}({p}({{{\tt {B}}}}) > 0.5 \leftrightarrow ({p}({{{\tt {A}}}}) < 0.5 \land {p}({{{\tt {C}}}}) < 0.5)) \land ({p}({{{\tt {B}}}}) < 0.5 \leftrightarrow ({p}({{{\tt {A}}}}) > 0.5 \lor {p}({{{\tt {C}}}}) > 0.5))$
- $\varphi_3 {:}({p}({{{\tt {C}}}}) > 0.5 \leftrightarrow {p}({{{\tt {D}}}}) < 0.5) \land ({p}({{{\tt {C}}}}) < 0.5 \leftrightarrow {p}({{{\tt {D}}}}) > 0.5)$
- $\varphi_4 {:}({p}({{{\tt {E}}}}) > 0.5 \leftrightarrow ({p}({{{\tt {E}}}}) < 0.5 \land {p}({{{\tt {D}}}}) < 0.5)) \land ({p}({{{\tt {E}}}}) < 0.5 \leftrightarrow ({p}({{{\tt {E}}}}) > 0.5 \lor {p}({{{\tt {D}}}}) > 0.5))$
- $\varphi_5 {:}{p}({{{\tt {C}}}}) > 0.5 \lor {p}({{{\tt {D}}}}) > 0.5$

We obtain two patterns for ternary satisfying distributions, namely ${P}_1$ s.t. ${P}_1({{{\tt {A}}}}) = {P}_1({{{\tt {C}}}}) = 1$, ${P}_1({{{\tt {B}}}}) = {P}_1({{{\tt {D}}}}) = 0$, ${P}_1({{{\tt {E}}}}) = 0.5$, and ${P}_2$ s.t. ${P}_2({{{\tt {A}}}}) = {P}_2({{{\tt {D}}}}) =1$, ${P}_2({{{\tt {B}}}}) = {P}_2({{{\tt {C}}}}) ={P}_2({{{\tt {E}}}})=0$. Both describe distributions that are also information minimizing, but only ${P}_1$ fits the undecided maximizing requirements.
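The two patterns can also be verified mechanically at the level of marginal beliefs; the following Python check (ours, not part of the formal development) encodes $\varphi_1$–$\varphi_5$, rendering biconditionals as Boolean equality:

```python
# Constraints phi1-phi5 over marginal beliefs; "<->" becomes ==.

def satisfies(p):
    phi1 = p["A"] > 0.5
    phi2 = ((p["B"] > 0.5) == (p["A"] < 0.5 and p["C"] < 0.5)) and \
           ((p["B"] < 0.5) == (p["A"] > 0.5 or p["C"] > 0.5))
    phi3 = ((p["C"] > 0.5) == (p["D"] < 0.5)) and \
           ((p["C"] < 0.5) == (p["D"] > 0.5))
    phi4 = ((p["E"] > 0.5) == (p["E"] < 0.5 and p["D"] < 0.5)) and \
           ((p["E"] < 0.5) == (p["E"] > 0.5 or p["D"] > 0.5))
    phi5 = p["C"] > 0.5 or p["D"] > 0.5
    return phi1 and phi2 and phi3 and phi4 and phi5

P1 = {"A": 1, "B": 0, "C": 1, "D": 0, "E": 0.5}
P2 = {"A": 1, "B": 0, "C": 0, "D": 1, "E": 0}
print(satisfies(P1), satisfies(P2))  # True True
```

Note, for instance, that $\varphi_4$ rules out ${p}({{{\tt {E}}}}) > 0.5$ outright (it would require ${p}({{{\tt {E}}}}) < 0.5$ at the same time), reflecting the self-attack on ${{{\tt {E}}}}$.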
*(Figure \[fig:episemex2\]: a graph with arguments ${{{\tt {A}}}}$ and ${{{\tt {B}}}}$ and a single negative self-loop on ${{{\tt {B}}}}$.)*

\[ex:episem2\] Consider the graph depicted in Figure \[fig:episemex2\] and the following set of constraints ${{\cal C}}= \{{p}({{{\tt {A}}}}) \geq 0.5$, ${p}({{{\tt {B}}}})+{p}(\neg {{{\tt {B}}}}) = 1\}$. For this graph, the distribution ${P}'_{1}$ defined via: $$\begin{aligned} {P}'_{1}(\emptyset) & = 0 & {P}'_{1}(\{{{{\tt {A}}}}\}) & = 0.5 & {P}'_{1}(\{{{{\tt {B}}}}\}) & = 0 & {P}'_{1}(\{{{{\tt {A}}}},{{{\tt {B}}}}\}) & = 0.5\end{aligned}$$ satisfies the constraints and is both undecided minimizing (only ${{{\tt {B}}}}$ is undecided) and belief maximizing. However, the distribution ${P}'_{2}$ defined via: $$\begin{aligned} {P}'_{2}(\emptyset) & = 0.5 & {P}'_{2}(\{{{{\tt {A}}}}\}) & = 0 & {P}'_{2}(\{{{{\tt {B}}}}\}) & = 0 & {P}'_{2}(\{{{{\tt {A}}}},{{{\tt {B}}}}\}) & = 0.5\end{aligned}$$ is belief maximizing but not undecided minimizing (both ${{{\tt {A}}}}$ and ${{{\tt {B}}}}$ are undecided).

Case Study {#section:CaseStudy}
----------

In order to illustrate how epistemic graphs could be acquired for an application, we consider using them as domain models in persuasive dialogue systems. Recent developments in computational argumentation are leading to a new generation of persuasion technologies [@Hunter16comma]. An automated persuasion system (APS) is a system that can engage in a dialogue with a user (the persuadee) in order to convince the persuadee to do (or not do) some action or to believe (or not believe) something. The system achieves this by putting forward arguments that have a high chance of influencing the persuadee.
In real-world persuasion, in particular in applications such as behaviour change, presenting convincing arguments, and presenting counterarguments to the user’s arguments, is critically important. For example, for a doctor to persuade a patient to drink less alcohol, that doctor has to give good arguments why it is better for the patient to drink less, and how (s)he can achieve this. Two important features of an APS are the domain model and the user model, which are closely related, and together are harnessed by the APS strategy for optimizing the choice of move in a persuasion dialogue. Domain model : This contains the arguments that can be presented in the dialogue by the system, and it also contains the arguments that the user may entertain. Some arguments will attack other arguments, and some arguments will support other arguments. As we will see, the domain model can be represented by an epistemic graph. User model : This contains information about the user that can be utilized by the system in order to choose the most beneficial actions. The information in the user model is what the system believes is true about that user. A key dimension that we consider in the user model is the beliefs that the user may have in the arguments, and as the dialogue proceeds, the model can be updated [@Hunter15ijcai] based on the results of the queries and of the arguments posited. By using an epistemic graph to represent the domain model, and a probability distribution over arguments to represent the user model, we can have a tight coupling of the two kinds of model. Furthermore, the probability distribution can be harnessed directly in a decision-theoretic approach to optimize the choice of move [@Hadoux17]. To illustrate the use of epistemic graphs for domain/user modelling, we consider a case study in behaviour change. The aim of this behaviour change application is to persuade users to book a regular dental check-up.
Assume we have the graph presented in Figure \[figure:dental\] and that through, for instance, crowdsourcing data, we have learned which constraints should be associated with a given user profile. The obtained domain model(s) can now be used in automated persuasion systems, and we assume we are now dealing with a user of such a system whose profile led to the selection of the following constraints in order to describe his or her behaviour:

1. \[abcd\] This constraint states that if ${{{\tt {B}}}}$ is believed or ${{{\tt {C}}}}$ is disbelieved or ${{{\tt {D}}}}$ is disbelieved, then ${{{\tt {A}}}}$ is believed and vice versa: $$({p}({{{\tt {B}}}}) > 0.5 \lor {p}({{{\tt {C}}}}) < 0.5 \lor {p}({{{\tt {D}}}}) < 0.5) \leftrightarrow {p}({{{\tt {A}}}}) > 0.5$$
2. \[ab\] This constraint states that if ${{{\tt {B}}}}$ is at least moderately believed then ${{{\tt {A}}}}$ is strongly believed, and if ${{{\tt {B}}}}$ is at least strongly believed then ${{{\tt {A}}}}$ is completely believed: $$({p}({{{\tt {B}}}}) > 0.65 \rightarrow {p}({{{\tt {A}}}}) > 0.8) \land ({p}({{{\tt {B}}}}) > 0.8 \rightarrow {p}({{{\tt {A}}}}) = 1)$$
3. \[ad\] This constraint states that if ${{{\tt {D}}}}$ is strongly disbelieved then ${{{\tt {A}}}}$ is at least moderately believed: $${p}({{{\tt {D}}}}) < 0.2 \rightarrow {p}({{{\tt {A}}}}) > 0.65$$
4. \[bf\] This constraint states that if ${{{\tt {F}}}}$ is believed then ${{{\tt {B}}}}$ is at least moderately believed, and if ${{{\tt {F}}}}$ is disbelieved, then so is ${{{\tt {B}}}}$: $$({p}({{{\tt {F}}}}) > 0.5 \rightarrow {p}({{{\tt {B}}}}) > 0.65) \land ({p}({{{\tt {F}}}}) < 0.5 \rightarrow {p}({{{\tt {B}}}}) < 0.5)$$
5. \[cg\] This constraint states that disbelief in ${{{\tt {C}}}}$ is proportional to belief in ${{{\tt {G}}}}$: $${p}({{{\tt {G}}}}) + {p}({{{\tt {C}}}}) \leq 1$$

We can use these constraints together with the epistemic graph and probability distribution over the subsets of arguments to model the agent in a persuasion dialogue.
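A sketch (ours) of checking constraints \[abcd\]–\[cg\] against a candidate belief assignment; the values used below match the initial beliefs ${P}_0$ and the optimistic prediction for the ${{{\tt {F}}}}$ move discussed later in the case study:

```python
# Constraints (1)-(5) of the user profile; implication a -> b is
# encoded as (not a) or b, the biconditional as Boolean equality.

def imp(a, b):
    return (not a) or b

def profile_ok(p):
    c1 = (p["B"] > 0.5 or p["C"] < 0.5 or p["D"] < 0.5) == (p["A"] > 0.5)
    c2 = imp(p["B"] > 0.65, p["A"] > 0.8) and imp(p["B"] > 0.8, p["A"] == 1)
    c3 = imp(p["D"] < 0.2, p["A"] > 0.65)
    c4 = imp(p["F"] > 0.5, p["B"] > 0.65) and imp(p["F"] < 0.5, p["B"] < 0.5)
    c5 = p["G"] + p["C"] <= 1
    return all([c1, c2, c3, c4, c5])

P_0 = {"A": 0.3, "B": 0.4, "C": 0.7, "D": 0.6, "E": 0.7,
       "F": 0.45, "G": 0.2, "H": 0.4, "I": 0.3}
P_opt = {**P_0, "A": 0.85, "B": 0.7, "F": 0.8}
print(profile_ok(P_0), profile_ok(P_opt))  # True True
```

Both assignments satisfy all five constraints; in particular, raising the belief in ${{{\tt {F}}}}$ to $0.8$ forces ${{{\tt {B}}}}$ above $0.65$ (constraint \[bf\]) and hence ${{{\tt {A}}}}$ above $0.8$ (constraint \[ab\]).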
We assume that we want to persuade the agent to believe argument ${{{\tt {A}}}}$, the more the better. The initial belief distribution for such applications can be obtained through crowdsourcing data about various participants, and for the purpose of our example we assume that a suitable distribution ${P}_0$ denoting the initial belief that we think the agent has in the arguments has been obtained. Given ${P}_0$, and the need to get a change in belief in ${{{\tt {A}}}}$ so that it is believed, we can use constraints \[abcd\], \[ab\] and \[ad\] as a guide. In other words, we can either increase the belief in ${{{\tt {B}}}}$ or decrease the belief in ${{{\tt {C}}}}$ or ${{{\tt {D}}}}$. We observe that using ${{{\tt {B}}}}$ can lead to the biggest increase in belief in ${{{\tt {A}}}}$. The effect of using ${{{\tt {D}}}}$ may be smaller and ${{{\tt {C}}}}$ the smallest. We observe that none of these arguments (and none of their parents) are default covered. We can thus see this user as being flexible and open to a discussion. If, for instance, ${{{\tt {F}}}}$ had been default covered by a constraint ${p}({{{\tt {F}}}}) < 0.5$, then ${{{\tt {B}}}}$ would have been default covered as well and putting forward ${{{\tt {B}}}}$ could be seen as an ineffective move. We therefore have three options to explore, and analyzing them is valuable due to the fact that prolonged argument exchanges significantly decrease the chances of changing an opinion [@TanNDNML16]. Consequently, exhausting all possible routes may yield negative results, and a persuasion system will need to be able to optimize the choice of dialogue moves. Option ${{{\tt {B}}}}$ : By looking at the graph and the labels, we may expect that the increase in belief in ${{{\tt {B}}}}$ may be achieved by increasing the belief in ${{{\tt {F}}}}$ and/or decreasing the belief in ${{{\tt {E}}}}$. 
However, we observe that even though ${{{\tt {E}}}}$ is stated to be an attacker of ${{{\tt {B}}}}$, analysis of the constraints tells us that it cannot be anything other than unspecified and that the labeling is inconsistent. This can potentially be the result of how the user was profiled and what constraints for the graph have been created for the profile (s)he fitted. By analyzing the constraints, we can observe that increasing the belief in ${{{\tt {B}}}}$ can only be done by using argument ${{{\tt {F}}}}$ (constraint \[bf\]). We can therefore choose to rely on the information in the graph or the information in the constraints. An optimistic system, which selects the easier and more favourable options, would in this case assume that the learned constraints accurately describe the user. It could, for instance, take ${P}_{opt}$ as the predicted probability distribution and determine that using ${{{\tt {F}}}}$ is a good move[^7]. A more pessimistic system, which is convinced that if something can go wrong, it will, can consider the learned data to be incomplete. Thus, ${{{\tt {E}}}}$ can be seen as a potential attacker, despite being unspecified. The system could take ${P}_{pes}$ as the predicted distribution and decide not to proceed with ${{{\tt {F}}}}$ due to the potential chance of failure[^8].
|             | ${{{\tt {A}}}}$ | ${{{\tt {B}}}}$ | ${{{\tt {C}}}}$ | ${{{\tt {D}}}}$ | ${{{\tt {E}}}}$ | ${{{\tt {F}}}}$ | ${{{\tt {G}}}}$ | ${{{\tt {H}}}}$ | ${{\tt {I}}}$ |
|-------------|-------|-------|-------|-------|-------|--------|-------|-------|-------|
| ${P}_0$     | $0.3$ | $0.4$ | $0.7$ | $0.6$ | $0.7$ | $0.45$ | $0.2$ | $0.4$ | $0.3$ |
| ${P}_{opt}$ | $0.85$ | $0.7$ | $0.7$ | $0.6$ | $0.7$ | $0.8$ | $0.2$ | $0.4$ | $0.3$ |
| ${P}_{pes}$ | $0.3$ | $0.45$ | $0.7$ | $0.6$ | $0.7$ | $0.8$ | $0.2$ | $0.4$ | $0.3$ |

Option ${{{\tt {D}}}}$ : Making argument ${{{\tt {D}}}}$ disbelieved will cause ${{{\tt {A}}}}$ to be believed and, based on the information in the graph, we can suspect that it can be done through increasing the belief in ${{{\tt {H}}}}$ and/or ${{\tt {I}}}$. However, once more we can observe that the graph labeling is not consistent with the constraints and the impact of ${{{\tt {H}}}}$ and ${{\tt {I}}}$ on ${{{\tt {D}}}}$ is in fact unspecified. We can therefore again choose to rely either on the graph or on the constraints. In a similar fashion as before, an optimistic system could decide to carry on with presenting either ${{{\tt {H}}}}$ or ${{\tt {I}}}$ and hope to observe a decrease in ${{{\tt {D}}}}$. A possible predicted distribution ${P}_{opt}$ associated with presenting ${{\tt {I}}}$ is seen in the table below. A pessimistic system may choose to abandon this dialogue line due to its potential ineffectiveness (see predicted distribution ${P}_{pes}$).
|             | ${{{\tt {A}}}}$ | ${{{\tt {B}}}}$ | ${{{\tt {C}}}}$ | ${{{\tt {D}}}}$ | ${{{\tt {E}}}}$ | ${{{\tt {F}}}}$ | ${{{\tt {G}}}}$ | ${{{\tt {H}}}}$ | ${{\tt {I}}}$ |
|-------------|-------|-------|-------|-------|-------|--------|-------|-------|-------|
| ${P}_0$     | $0.3$ | $0.4$ | $0.7$ | $0.6$ | $0.7$ | $0.45$ | $0.2$ | $0.4$ | $0.3$ |
| ${P}_{opt}$ | $0.7$ | $0.4$ | $0.7$ | $0.1$ | $0.7$ | $0.45$ | $0.2$ | $0.4$ | $0.9$ |
| ${P}_{pes}$ | $0.3$ | $0.4$ | $0.7$ | $0.6$ | $0.7$ | $0.45$ | $0.2$ | $0.4$ | $0.9$ |

Option ${{{\tt {C}}}}$ : We can observe that ${{{\tt {G}}}}$ is an attacker of ${{{\tt {C}}}}$ both in the graph and in the constraints. While increasing the belief in ${{{\tt {G}}}}$ (and thus decreasing the belief in ${{{\tt {C}}}}$) yields the smallest gain in ${{{\tt {A}}}}$, it may be seen as the safest way to go given the contrast between the constraints and the graph. Consequently, both the optimistic and the pessimistic systems can have similar predictions on this route.

|             | ${{{\tt {A}}}}$ | ${{{\tt {B}}}}$ | ${{{\tt {C}}}}$ | ${{{\tt {D}}}}$ | ${{{\tt {E}}}}$ | ${{{\tt {F}}}}$ | ${{{\tt {G}}}}$ | ${{{\tt {H}}}}$ | ${{\tt {I}}}$ |
|-------------|-------|-------|-------|-------|-------|--------|-------|-------|-------|
| ${P}_0$     | $0.3$ | $0.4$ | $0.7$ | $0.6$ | $0.7$ | $0.45$ | $0.2$ | $0.4$ | $0.3$ |
| ${P}_{opt}$ | $0.55$ | $0.4$ | $0.4$ | $0.6$ | $0.7$ | $0.45$ | $0.6$ | $0.4$ | $0.3$ |
| ${P}_{pes}$ | $0.55$ | $0.4$ | $0.4$ | $0.6$ | $0.7$ | $0.45$ | $0.6$ | $0.4$ | $0.3$ |

Note that we do not consider here how the precise value is picked for the update of the belief in each argument. We direct the interested reader to our work on updates in epistemic graphs [@HunterPP18] and other relevant materials [@Hunter15ijcai; @HunterPotyka17]. We also do not explicitly consider the issue of verifying predicted distributions, but note that it can be done by, for instance, querying the user during a dialogue [@Hunter15ijcai].
The above case study illustrates how the framework in this paper can be incorporated in a user model, and then used to guide the choice of moves in a persuasion dialogue.

Related Work {#section:Comparison}
============

The constraints in epistemic graphs quite naturally generalize the epistemic postulates [@Thimm:2012; @Hunter:2013; @Hunter:2014; @PolbergHunter17]. Given the fact that in the epistemic graphs we can decide whether a given property should hold for a particular argument or not, the desired postulate needs to be repeated for every element of the framework. Nevertheless, the general method is straightforward, and using our approach we can elevate the classical postulates for conflict–based frameworks to a much more general setting. In this section we will focus on describing in more detail further argumentation approaches which satisfy at least some of the requirements we have stated in the introduction, and consider some other relevant works.

Weighted and Ranking–Based Semantics
------------------------------------

There is a wide array of computational models of argument that allow for modelling argument weights or strengths [@Amgoud2013; @AmgoudBenNaim16b; @AmgoudBenNaim16a; @AmgoudBenNaim17; @Amgoud16kr; @AmgoudBNDV17; @Bonzon16; @CayrolLS05; @CayrolLS05b; @LeiteMartins11; @Rago16; @Costa-Pereira:2011], which offer a more fine–grained alternative to Dung’s approach. Some of these works also permit certain forms of support or positive influences on arguments [@CayrolLS05; @AmgoudBenNaim17; @AmgoudBenNaim16a; @Rago16; @LeiteMartins11]. Given certain structural similarities between these approaches and epistemic semantics, it is therefore natural to compare them. Although in both cases what we receive can be seen as “assigning numbers from $[0,1]$” to arguments (either as a side or end product), probabilities in the epistemic approach are interpreted as belief, while weights remain abstract and open to a number of possible instantiations.
The meaning that is assigned to the values is derived from the structure of the graph, and comparing weightings or rankings between different graphs can distort the picture. For instance, given one dense and one sparse graph, it is possible that the highest grade achieved by any argument in the former graph is the same as the lowest grade achieved in the latter graph. Given the comparative grades w.r.t. other arguments in the graph, we can therefore make different judgments about the arguments, which has both negative and positive aspects. In turn, the belief and disbelief interpretation of probabilities would more uniformly point to a decision. We also need to note that many of the postulates set out in the weighted and ranking–based methods are, by design, counter–intuitive in the epistemic approach, even though they can be perfectly applicable in other scenarios. We can for instance consider the principles from [@AmgoudBenNaim17]. *(Bi-variate) Independence* states that the ranking between two arguments should be independent of any other argument that is not connected to either of them, and any hidden connected epistemic graph would violate this. The same holds for *(Bi-variate) Directionality*, which forces the rank of a given argument to depend only on arguments connected to it through a directed path. *(Bi-variate) Equivalence* would tell us that the strength of the argument depends only on the strength of its parents and not arguments that it attacks or supports. This is also clearly not something we intend to force in epistemic graphs. All further postulates, such as how an increase or decrease in beliefs in attackers (or supporters) should be matched with an appropriate decrease or increase in the belief of the target argument, can, but do not have to, be satisfied by a given graph. This can be caused either by the constraints themselves simply not adhering to a given axiom on purpose, or by the constraints being possibly not very specific.
Already a simple formula such as ${p}({{{\tt {A}}}}) > 0.5 \rightarrow {p}({{{\tt {B}}}}) \leq 0.5$, which embodies one of the core concepts of the classical epistemic approach [@Thimm:2012; @Hunter:2013; @Hunter:2014], violates what is referred to as *Weakening* and *Strengthening*. The list continues; however, it should not be taken as a criticism of the weighted or epistemic approaches, but only as a highlight of striking conceptual differences. Another major difference between the epistemic graphs and the weighted or ranking semantics is that in the latter, the patterns set out by the semantics have to be global, which leads to side effects not desirable in the epistemic approach. In particular, two arguments supported and attacked by the same sets of arguments will need to be assigned the same value (assuming their initial weights or weights assigned to relations are similar, if applicable). In other words, it would be contrary to the intuitions of the weighted approach to have e.g. an attack relation $({{{\tt {A}}}}, {{{\tt {B}}}})$ described with a constraint ${p}({{{\tt {A}}}}) + {p}({{{\tt {B}}}}) = 1$ co-exist in the same graph with another attack relation $({{{\tt {C}}}},{{{\tt {D}}}})$ described through ${p}({{{\tt {C}}}}) > 0.5 \leftrightarrow {p}({{{\tt {D}}}}) \leq 0.5$. The first constraint is more specific than the second one and describes the attack relation more closely. Although this is generally a desirable thing, it might not be realistic. For instance, when sourcing epistemic graphs and their constraints from participants, we have no guarantees that every argument and relation in the graph will be described with the same quality and consistency. Forcing a uniform modelling would make us either create specific constraints even for parts of the graph for which the data does not support this, or create general constraints even for parts where a better description is available.
Epistemic graphs aim to bypass this by allowing varying quality of constraints to be used. Another property of the weighted and ranking semantics (and that is not enforced in the epistemic approach) is that given the values of the parents, a single value of the target is returned. This may be a restriction if we want the flexibility to express a margin of error or vagueness. Depending on how the constraints are defined in the epistemic approach, we can force the target to take on a single probability as well as allow it any value from a given range. Consequently, we have a certain form of control over specificity in the epistemic graphs. A more relaxed approach can be useful in modelling imperfect agents or incomplete situations, and such tasks can pose certain difficulties to the weighted semantics. In conclusion, we can observe that despite certain high–level similarities, there are significant differences between the weighted and epistemic approaches. Although one can argue that it is possible to represent certain weighting functions as constraints and the other way around, particularly if multiplication or division were allowed in the latter, we would either obtain constraints that violate the meaning of epistemic probabilities or semantics that do not conform to the required axioms.

Abstract Dialectical Frameworks {#sec:ef-adf}
-------------------------------

Epistemic graphs share certain similarities with abstract dialectical frameworks (ADFs) [@Strass13a; @Strass13; @BrewkaESWW13; @StrassWallner15; @thesis:polberg; @Polberg16; @puhrer15], particularly when it comes to their ability to express a wide range of relations between arguments. Before we compare the two approaches, we briefly review ADFs and some of their semantics[^9].
\[def:funcadf\] An **abstract dialectical framework** (ADF) is a tuple $({{\cal G}}, {{\cal L}}, {{\cal AC}})$, where $({{\cal G}}, {{\cal L}})$ is a labelled graph and ${{\cal AC}}= \{{{\cal AC}}_{{{{\tt {A}}}}}\mid {{\cal AC}}_{{{{\tt {A}}}}}$ is a propositional formula over ${{\sf Parent}}({{{\tt {A}}}})\}_{{{{\tt {A}}}}\in {{\sf Nodes}}({{\cal G}})}$ is a set of **acceptance conditions**. In the labeling–based semantics for ADFs, we use three–valued interpretations which assign truth values $\{{{\bf t}}, {{\bf f}}, {{\bf u}}\}$ to arguments and which are compared according to the precision (information) ordering: ${{\bf u}}\leq_i {{\bf t}}$ and ${{\bf u}}\leq_i {{\bf f}}$. The pair $(\{\textbf{t}, \textbf{f}, \textbf{u}\}, \leq_i)$ forms a complete meet–semilattice with the meet operation $\sqcap$ assigning values in the following way: $\textbf{t} \, \sqcap \, \textbf{t} = \textbf{t}$, $\textbf{f} \, \sqcap \, \textbf{f} = \textbf{f}$ and $\textbf{u}$ in all other cases. These notions can be easily extended to interpretations. For two interpretations $v$ and $v'$ on ${{\sf Nodes}}({{\cal G}})$, $v \leq_i v'$ iff for every argument ${{{\tt {A}}}}\in {{\sf Nodes}}({{\cal G}})$, $v({{{\tt {A}}}}) \leq_i v'({{{\tt {A}}}})$. In the case $v$ is three–valued and $v'$ two–valued (i.e. contains no ${{\bf u}}$ mappings), we say that $v'$ extends $v$[^10]. The set of all two–valued interpretations extending $v$ is denoted $\lbrack v \rbrack_2$. Given an acceptance condition ${{\cal AC}}_{{{{\tt {A}}}}}$ for an argument ${{{\tt {A}}}}\in {{\sf Nodes}}({{\cal G}})$ and an interpretation $v$, we define the shorthand $v({{\cal AC}}_{{{{\tt {A}}}}})$ as the value of ${{\cal AC}}_{{{{\tt {A}}}}}$ for $v^{{\bf t}}\cap {{\sf Parent}}({{{\tt {A}}}})$. Let $D=({{\cal G}}, {{\cal L}}, {{\cal AC}})$ be an ADF and $v$ a three-valued interpretation on ${{\sf Nodes}}({{\cal G}})$. The **three–valued characteristic operator** of $D$ is a function s.t.
$\Gamma(v) = v'$ where $v'({{{\tt {A}}}}) = \bigsqcap_{w \in \lbrack v \rbrack_2} w({{\cal AC}}_{{{{\tt {A}}}}})$ for ${{{\tt {A}}}}\in {{\sf Nodes}}({{\cal G}})$. An interpretation $v$ is:

- an **admissible** labeling iff $v \leq_i \Gamma(v)$.
- a **complete** labeling iff $v = \Gamma (v)$.
- a **preferred** labeling iff it is a $\leq_i$–maximal admissible labeling.
- a **grounded** labeling iff it is the least fixpoint of $\Gamma$.

[Figure \[fig:grd\]: an ADF with arguments ${{{\tt {A}}}}$ to ${{{\tt {E}}}}$ and acceptance conditions ${{\cal AC}}_{{{{\tt {A}}}}} = {{{\tt {E}}}}$, ${{\cal AC}}_{{{{\tt {B}}}}} = {{{\tt {D}}}}\lor (\neg {{{\tt {C}}}}\land {{{\tt {E}}}})$, ${{\cal AC}}_{{{{\tt {C}}}}} = \neg {{{\tt {E}}}}$, ${{\cal AC}}_{{{{\tt {D}}}}} = \neg {{{\tt {A}}}}\lor \neg {{{\tt {E}}}}$, ${{\cal AC}}_{{{{\tt {E}}}}} = {{{\tt {A}}}}\land {{{\tt {B}}}}$.]

          ${{{\tt {A}}}}$   ${{{\tt {B}}}}$   ${{{\tt {C}}}}$   ${{{\tt {D}}}}$   ${{{\tt {E}}}}$   CMP                PREF               GRD
  ------- ----------------- ----------------- ----------------- ----------------- ----------------- ------------------ ------------------ ------------------
  $v_1$   ${{\bf u}}$       ${{\bf u}}$       ${{\bf u}}$       ${{\bf u}}$       ${{\bf u}}$       [$\checkmark$]{}   [$\times$]{}       [$\checkmark$]{}
  $v_2$   ${{\bf t}}$       ${{\bf t}}$       ${{\bf f}}$       ${{\bf f}}$       ${{\bf t}}$       [$\checkmark$]{}   [$\checkmark$]{}   [$\times$]{}
  $v_3$   ${{\bf f}}$       ${{\bf t}}$       ${{\bf t}}$       ${{\bf t}}$       ${{\bf f}}$       [$\checkmark$]{}   [$\checkmark$]{}   [$\times$]{}

  : Labelings of the ADF depicted in Figure \[fig:grd\].[]{data-label="tab:adflab"}

\[ex:adflab\] The admissible, complete, preferred and grounded labelings of the ADF depicted in Figure \[fig:grd\] are visible in Table \[tab:adflab\].

Having an acceptance condition for each node is similar in spirit to having constraints for epistemic graphs. Furthermore, both frameworks can handle relations that are positive, negative, or neither. However, there are some fundamental differences between ADFs and epistemic graphs. The acceptance conditions can tell us whether an argument is accepted or rejected based on the acceptance of its parents. In contrast, the epistemic constraints can produce probability assignments in the unit interval that depend on the degrees of belief in other arguments, which offers a much more fine–grained perspective. It also allows epistemic graphs to easily handle some forms of support, such as abstract or deductive support, which are normally too weak to be expressed in ADFs or require certain translations [@Polberg16; @Polberg17]. The constraints also allow us to define a range of values that an argument may take on in given circumstances, as well as a single particular value, and thus offer more flexibility in modelling the acceptability of an argument. Furthermore, in epistemic graphs the constraints are assigned per graph, not per argument. We can handle situations where the belief in one argument might depend not just on its parents, but also on other arguments, for reasons known only to the agent, without necessarily forcing edges in the graph to be modified. Additionally, the completeness of acceptance conditions is obligatory in ADFs, while the completeness of epistemic constraints is optional and requiring it should be motivated by a given application. This control may be useful in user modelling, where we are not yet sure how a given argument and its associated relations are perceived by the user.
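For concreteness, the three–valued characteristic operator recalled above can be sketched in a few lines of Python. This is a minimal illustration of ours (the function names and the encoding of acceptance conditions as Boolean functions are our own choices); it uses the ADF of Figure \[fig:grd\] and checks that the interpretations of Table \[tab:adflab\] are fixpoints of $\Gamma$, i.e. complete labelings.

```python
from itertools import product

U, T, F = "u", "t", "f"  # the three truth values

def two_valued_extensions(v):
    """Yield all two-valued interpretations extending v (u -> t or f)."""
    unknown = [a for a in v if v[a] == U]
    for bits in product([True, False], repeat=len(unknown)):
        w = {a: v[a] == T for a in v if v[a] != U}
        w.update(dict(zip(unknown, bits)))
        yield w

def gamma(v, conditions):
    """Three-valued characteristic operator: meet (consensus) over [v]_2."""
    out = {}
    for a, cond in conditions.items():
        vals = {cond(w) for w in two_valued_extensions(v)}
        out[a] = (T if vals == {True} else F) if len(vals) == 1 else U
    return out

# Acceptance conditions of Figure [fig:grd], encoded as Boolean functions.
conds = {
    "A": lambda w: w["E"],
    "B": lambda w: w["D"] or (not w["C"] and w["E"]),
    "C": lambda w: not w["E"],
    "D": lambda w: not w["A"] or not w["E"],
    "E": lambda w: w["A"] and w["B"],
}

v1 = dict.fromkeys("ABCDE", U)                  # the all-undecided labeling
v2 = {"A": T, "B": T, "C": F, "D": F, "E": T}
v3 = {"A": F, "B": T, "C": T, "D": T, "E": F}
```

Applying `gamma` to `v1`, `v2` and `v3` returns each interpretation unchanged, confirming the three complete labelings listed in Table \[tab:adflab\].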
These differences show that epistemic graphs are quite distinct from ADFs. Nevertheless, it is possible for epistemic graphs to model ADFs. We will show how this can be achieved based on an example. \[ex:adfepigraph\] Let us come back to the ADF from Example \[ex:adflab\] and Figure \[fig:grd\]. We will now show how acceptance conditions can be transformed into constraints s.t. the labelings extracted from the probabilistic distributions correspond to the ADF labelings under a given semantics. Let us focus on argument ${{{\tt {E}}}}$. If we were to create a truth table for its condition ${{{\tt {A}}}}\land {{{\tt {B}}}}$, we would observe that if ${{{\tt {E}}}}$ is to be accepted, then ${{{\tt {A}}}}$ and ${{{\tt {B}}}}$ have to be true, and if ${{{\tt {E}}}}$ is to be rejected, then ${{{\tt {A}}}}$ or ${{{\tt {B}}}}$ has to be false. Taking into account the nature of the complete semantics, this rather straightforwardly translates to the following constraints: argument ${{{\tt {E}}}}$ is believed iff ${{{\tt {A}}}}$ and ${{{\tt {B}}}}$ are believed; and, argument ${{{\tt {E}}}}$ is disbelieved iff either ${{{\tt {A}}}}$ or ${{{\tt {B}}}}$ is disbelieved. What is therefore happening is that for every (propositional) acceptance condition we create two constraints that are the epistemic adaptations of the formulas ${\sf X} \leftrightarrow {{\cal AC}}_{\sf X}$ and $\neg {\sf X} \leftrightarrow \neg {{\cal AC}}_{\sf X}$, where, assuming that the consequent is in a form without nested negations, a positive literal ${\sf Z}$ is transformed into an epistemic atom ${p}({\sf Z}) > 0.5$ and a negative literal $\neg {\sf Z}$ becomes ${p}({\sf Z}) < 0.5$. 
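The literal-level translation just described can be sketched mechanically. The following fragment is our own illustrative code (names such as `emit` and the tuple encoding of formulas are assumptions, not part of the original proposal): it renders a propositional acceptance condition and its negation as epistemic constraints, pushing negation inwards via De Morgan's laws.

```python
def emit(f, neg=False):
    """Epistemic rendering of formula f (or of its negation if neg).

    Formulas are strings (positive literals) or tuples:
    ("not", g), ("and", g1, g2, ...), ("or", g1, g2, ...).
    """
    if isinstance(f, str):                      # a literal
        return "p(%s) %s 0.5" % (f, "<" if neg else ">")
    op = f[0]
    if op == "not":
        return emit(f[1], not neg)
    # under negation, conjunction and disjunction swap (De Morgan)
    sep = {("and", False): " and ", ("and", True): " or ",
           ("or", False): " or ", ("or", True): " and "}[(op, neg)]
    return "(" + sep.join(emit(g, neg) for g in f[1:]) + ")"

def constraints(x, ac):
    """The two constraints X <-> AC_X and not-X <-> not-AC_X."""
    believed = "%s iff %s" % (emit(x), emit(ac))
    disbelieved = "%s iff %s" % (emit(x, True), emit(ac, True))
    return believed, disbelieved
```

For instance, `constraints("E", ("and", "A", "B"))` yields the pair of constraints for argument ${{{\tt {E}}}}$ given above, and `constraints("C", ("not", "E"))` yields those for ${{{\tt {C}}}}$.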
We can gather such constraints for all arguments into a set ${{\cal C}}$:

- $({p}({{{\tt {A}}}}) > 0.5 \leftrightarrow {p}({{{\tt {E}}}}) >0.5) \land ({p}({{{\tt {A}}}}) < 0.5 \leftrightarrow {p}({{{\tt {E}}}}) <0.5)$
- $({p}({{{\tt {B}}}}) > 0.5 \leftrightarrow {p}({{{\tt {D}}}}) >0.5 \lor ({p}({{{\tt {C}}}}) <0.5 \land {p}({{{\tt {E}}}}) >0.5)) \land ({p}({{{\tt {B}}}}) < 0.5 \leftrightarrow {p}({{{\tt {D}}}}) < 0.5 \land ({p}({{{\tt {C}}}}) >0.5 \lor {p}({{{\tt {E}}}}) <0.5))$
- $({p}({{{\tt {C}}}}) > 0.5 \leftrightarrow {p}({{{\tt {E}}}}) <0.5) \land ({p}({{{\tt {C}}}}) < 0.5 \leftrightarrow {p}({{{\tt {E}}}}) >0.5)$
- $({p}({{{\tt {D}}}}) > 0.5 \leftrightarrow {p}({{{\tt {A}}}}) <0.5 \lor {p}({{{\tt {E}}}}) < 0.5) \land ({p}({{{\tt {D}}}}) < 0.5 \leftrightarrow {p}({{{\tt {A}}}}) >0.5 \land {p}({{{\tt {E}}}}) >0.5)$
- $({p}({{{\tt {E}}}}) > 0.5 \leftrightarrow {p}({{{\tt {A}}}}) >0.5 \land {p}({{{\tt {B}}}}) > 0.5) \land ({p}({{{\tt {E}}}}) < 0.5 \leftrightarrow {p}({{{\tt {A}}}}) <0.5 \lor {p}({{{\tt {B}}}}) < 0.5)$

The ternary satisfying distributions of this set are visible in Table \[tab:epiadflab\]. We observe that by transforming the distributions into labelings that map believed arguments to ${{\bf t}}$, disbelieved ones to ${{\bf f}}$ and the remaining ones to ${{\bf u}}$, we retrieve the complete labelings of our ADF. By considering distributions maximal or minimal w.r.t. $\lesssim_I$, we can retrieve the preferred or grounded labelings respectively.

            ${P}({{{\tt {A}}}})$   ${P}({{{\tt {B}}}})$   ${P}({{{\tt {C}}}})$   ${P}({{{\tt {D}}}})$   ${P}({{{\tt {E}}}})$   ${{\sf Sat}}({{\cal C}})$   Max. $\lesssim_I$   Min. $\lesssim_I$
  --------- ---------------------- ---------------------- ---------------------- ---------------------- ---------------------- --------------------------- ------------------- -------------------
  ${P}_1$   $0.5$                  $0.5$                  $0.5$                  $0.5$                  $0.5$                  [$\checkmark$]{}            [$\times$]{}        [$\checkmark$]{}
  ${P}_2$   $1$                    $1$                    $0$                    $0$                    $1$                    [$\checkmark$]{}            [$\checkmark$]{}    [$\times$]{}
  ${P}_3$   $0$                    $1$                    $1$                    $1$                    $0$                    [$\checkmark$]{}            [$\checkmark$]{}    [$\times$]{}

  : Ternary satisfying and information maximizing/minimizing distributions for the epistemic graph from Example \[ex:adfepigraph\].[]{data-label="tab:epiadflab"}

This example shows us that it is possible for epistemic graphs to handle ADFs under the labeling–based semantics, even though providing a full translation for any type of condition may be more involved than the presented approach. Given the fact that ADFs can subsume a number of different frameworks [@Polberg17], it is also possible for epistemic graphs to express many more approaches to argumentation than we recall here.

There are certain generalizations of ADFs that are relevant in the context of our work. In [@PolbergDoder14], a probabilistic version of ADFs has been introduced. However, this work follows the constellation interpretation of probability, not the epistemic one, which leads to significantly different modelling [@Hunter:2013; @PolbergHT17]. In a recent work [@Brewka18], a new version of weighted ADFs has been proposed, in which conditions no longer map subsets of parents of a given argument to $in$ or $out$, but take values assigned to the parents and state a specific value that should be assigned to the target. These values can be abstract entities with some form of ordering between them, as well as numbers from the $[0,1]$ interval. The information ordering present in the original ADFs is then adapted accordingly, and the definitions of the existing operator–based semantics (admissible, grounded, preferred, complete) remain unchanged.
Despite certain possible overlaps, weighted ADFs are incomparable to epistemic graphs. On the one hand, as in the original ADFs, condition completeness and the limitation of conditions to depend only on the parents of a given argument are enforced. Furthermore, unlike epistemic constraints, weighted acceptance conditions are very specific in the sense that a given combination of values assigned to the parents of a given argument leads to a precise, defined outcome. Therefore, a constraint stating that if the attacker is believed, then the attackee should be disbelieved (we can formalize it e.g. as ${p}({{{\tt {A}}}}) > 0.5 \rightarrow {p}({{{\tt {B}}}}) < 0.5$), cannot be conveniently expressed in weighted ADFs. This is due to the fact that the belief in the target is meant to be a function of the beliefs in the source, while in epistemic graphs a more general relation is permitted. Consequently, there are properties expressible with epistemic graphs, but not with weighted ADFs. On the other hand, weighted ADFs are not specialized for handling probabilities, and can therefore take as input further unspecified values, not only numbers. Thus, we can construct scenarios handled by weighted ADFs, but not by epistemic graphs. Additionally, even if values from the $[0,1]$ interval are considered, for computational reasons they are amended with a special element indicating that a given value is undefined, and the interpretation of this element differs from that of neither agreeing nor disagreeing in the epistemic proposal.

Constrained Argumentation Frameworks {#sec:caf}
------------------------------------

Our proposal shares certain similarities with constrained argumentation frameworks [@CosteMarquisDM06], which permit external requirements among unrelated arguments to be imposed on the framework.
Such a constraint represents certain restrictions that (for reasons unknown to the abstract system) are considered desirable by, for example, the user, and which are not necessarily reflected by the structure of the graph. Although this approach has only been analyzed in the context of attack–based graphs, certain positive relationships between arguments could potentially be simulated through the use of the propositional formulae representing the external requirements. Nevertheless, this modelling is targeted mainly at two–valued semantics, and thus the framework does not deal with fine–grained acceptability.

Let $PROP_S$ be a propositional language defined in the usual inductive way from a set $S$ of propositional symbols, the Boolean constants $\top$, $\bot$ and the connectives $\neg, \land, \lor, \leftrightarrow$ and $\rightarrow$. A **constrained argumentation framework** is a tuple $({{\cal G}}, {{\cal L}}, {\sf PC})$ where $({{\cal G}}, {{\cal L}})$ is a labelled graph s.t. ${{\cal L}}$ assigns only $-$ to all edges, and ${\sf PC}$ is a propositional formula from $PROP_{{{\sf Nodes}}({{\cal G}})}$. The semantics of the framework are primarily defined in terms of sets of arguments that, along with meeting the classical extension-based semantics [@Dung95], satisfy the external constraint. Such classical semantics can be easily retrieved by the epistemic postulates [@PolbergHunter17; @Baroni:2011], which themselves are straightforwardly generalized by epistemic graphs. The propositional formula ${\sf PC}$ can also be straightforwardly mapped to an epistemic constraint. We will therefore consider an example showing how constrained argumentation frameworks can be expressed within epistemic graphs.
[Figure \[fig:cafx\]: a labelled graph with arguments ${{{\tt {A}}}}$ to ${{{\tt {E}}}}$, all edges labelled $-$: ${{{\tt {A}}}}$ attacks ${{{\tt {B}}}}$, ${{{\tt {C}}}}$ attacks ${{{\tt {B}}}}$, ${{{\tt {C}}}}$ and ${{{\tt {D}}}}$ attack each other, ${{{\tt {D}}}}$ attacks ${{{\tt {E}}}}$, and ${{{\tt {E}}}}$ attacks itself.]

\[ex:cafx\] Consider the graph depicted in Figure \[fig:cafx\] and augmented with the constraint ${\sf PC} = \neg {{{\tt {A}}}}\lor {{{\tt {D}}}}$. The admissible extensions (i.e. extensions in which no arguments attack each other and every argument attacking an argument in the set is itself attacked by an element of the set) of the graph (without the constraint) are $\{{{{\tt {A}}}},{{{\tt {C}}}}\}$, $\{{{{\tt {A}}}}, {{{\tt {D}}}}\}$, $\{{{{\tt {A}}}}\}$, $\{{{{\tt {C}}}}\}$, $\{{{{\tt {D}}}}\}$ and $\emptyset$. Once the constraint is applied, the sets $\{{{{\tt {A}}}}, {{{\tt {C}}}}\}$ and $\{{{{\tt {A}}}}\}$ have to be removed. The preferred extensions (i.e. maximal admissible extensions) of the graph are initially $\{{{{\tt {A}}}}, {{{\tt {C}}}}\}$ and $\{{{{\tt {A}}}},{{{\tt {D}}}}\}$. However, if we take the constraint into account, we in fact receive $\{{{{\tt {C}}}}\}$ and $\{{{{\tt {A}}}}, {{{\tt {D}}}}\}$.
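The extensions listed above can be verified mechanically. The following brute-force sketch is our own illustrative code (the names `admissible_sets`, `pc`, etc. are assumptions): it enumerates the admissible sets of the graph from Figure \[fig:cafx\] and filters them by the constraint ${\sf PC} = \neg {{{\tt {A}}}}\lor {{{\tt {D}}}}$.

```python
from itertools import combinations

ARGS = "ABCDE"
ATTACKS = {("A", "B"), ("C", "B"), ("C", "D"),
           ("D", "C"), ("D", "E"), ("E", "E")}

def conflict_free(S):
    return all((a, b) not in ATTACKS for a in S for b in S)

def defends(S, a):
    """Every attacker of a is attacked by some member of S."""
    attackers = {x for x in ARGS if (x, a) in ATTACKS}
    return all(any((s, x) in ATTACKS for s in S) for x in attackers)

def admissible_sets():
    subsets = [set(c) for r in range(len(ARGS) + 1)
               for c in combinations(ARGS, r)]
    return [S for S in subsets
            if conflict_free(S) and all(defends(S, a) for a in S)]

def pc(S):  # the constraint PC = not-A or D, read over set membership
    return "A" not in S or "D" in S

adm = admissible_sets()
constrained = [S for S in adm if pc(S)]
# preferred extensions under the constraint: maximal constrained sets
preferred = [S for S in constrained
             if not any(S < T for T in constrained)]
```

Running this reproduces the sets given in the example: six admissible extensions, of which $\{{{{\tt {A}}}}\}$ and $\{{{{\tt {A}}}},{{{\tt {C}}}}\}$ are removed by the constraint, leaving $\{{{{\tt {C}}}}\}$ and $\{{{{\tt {A}}}},{{{\tt {D}}}}\}$ as the maximal ones.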
Following the method from [@PolbergHunter17], we now create the following set of constraints ${{\cal C}}$: - ${p}({{{\tt {A}}}}) \geq 0.5$ - $({p}({{{\tt {B}}}}) >0.5 \rightarrow ({p}({{{\tt {A}}}}) < 0.5 \land {p}({{{\tt {C}}}}) < 0.5)) \land ({p}({{{\tt {B}}}}) <0.5 \rightarrow ({p}({{{\tt {A}}}}) > 0.5 \lor {p}({{{\tt {C}}}}) > 0.5))$ - $({p}({{{\tt {C}}}}) >0.5 \leftrightarrow {p}({{{\tt {D}}}}) < 0.5) \land ({p}({{{\tt {C}}}}) <0.5 \leftrightarrow {p}({{{\tt {D}}}}) > 0.5)$ - $({p}({{{\tt {E}}}}) > 0.5 \rightarrow ({p}({{{\tt {D}}}}) < 0.5 \land {p}({{{\tt {E}}}}) <0.5)) \land ({p}({{{\tt {E}}}}) <0.5 \rightarrow ({p}({{{\tt {D}}}}) > 0.5 \lor {p}({{{\tt {E}}}}) >0.5))$ In Table \[tab:constrlabx\] we have listed all the ternary distributions satisfying ${{\cal C}}$. It is easy to see that the sets of believed arguments obtained from these distributions coincide with the admissible extensions of our labelled graph. The epistemic representation of the ${\sf PC}$ constraint is ${p}({{{\tt {A}}}}) \leq 0.5 \lor {p}({{{\tt {D}}}}) > 0.5$. By adding it to the set ${{\cal C}}$, we obtain the constraint set ${{\cal C}}'$, which excludes distributions ${P}_{2}$, ${P}_{3}$, ${P}_{8}$ and ${P}_9$ that corresponded to extensions $\{{{{\tt {A}}}}\}$ and $\{{{{\tt {A}}}},{{{\tt {C}}}}\}$. By enforcing information maximality along with the ${{\cal C}}$ and ${{\cal C}}'$ constraints, we obtain either distributions ${P}_{9}$ and ${P}_{13}$ or ${P}_4$ and ${P}_{13}$, which are associated with the desired preferred extensions. 
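The distributions listed in Table \[tab:constrlabx\] can likewise be recovered by brute force over the ternary values $\{0, 0.5, 1\}$. The encoding below of ${{\cal C}}$ and ${{\cal C}}'$ as Boolean tests is our own sketch (function names are ours):

```python
from itertools import product

def b(P, x): return P[x] > 0.5   # x is believed
def d(P, x): return P[x] < 0.5   # x is disbelieved

def implies(p, q): return (not p) or q

def sat_C(P):
    """The constraint set C from the example."""
    return (P["A"] >= 0.5
            and implies(b(P, "B"), d(P, "A") and d(P, "C"))
            and implies(d(P, "B"), b(P, "A") or b(P, "C"))
            and b(P, "C") == d(P, "D") and d(P, "C") == b(P, "D")
            and implies(b(P, "E"), d(P, "D") and d(P, "E"))
            and implies(d(P, "E"), b(P, "D") or b(P, "E")))

def sat_Cprime(P):
    """C' = C plus the epistemic form of PC = not-A or D."""
    return sat_C(P) and (P["A"] <= 0.5 or P["D"] > 0.5)

ternary = [dict(zip("ABCDE", v)) for v in product([0, 0.5, 1], repeat=5)]
models_C = [P for P in ternary if sat_C(P)]
models_Cp = [P for P in ternary if sat_Cprime(P)]
# the sets of believed arguments coincide with the admissible extensions
believed = {frozenset(x for x in "ABCDE" if b(P, x)) for P in models_C}
```

The enumeration yields the thirteen distributions satisfying ${{\cal C}}$ and the nine satisfying ${{\cal C}}'$ shown in the table.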
               ${P}({{{\tt {A}}}})$   ${P}({{{\tt {B}}}})$   ${P}({{{\tt {C}}}})$   ${P}({{{\tt {D}}}})$   ${P}({{{\tt {E}}}})$   ${{\sf Sat}}({{\cal C}})$   ${{\sf Sat}}({{\cal C}}')$   Max. $\lesssim_I$ (${{\cal C}}$)   Max. $\lesssim_I$ (${{\cal C}}')$
  ------------ ---------------------- ---------------------- ---------------------- ---------------------- ---------------------- --------------------------- ---------------------------- ---------------------------------- -----------------------------------
  ${P}_1$      $0.5$                  $0.5$                  $0.5$                  $0.5$                  $0.5$                  [$\checkmark$]{}            [$\checkmark$]{}             [$\times$]{}                       [$\times$]{}
  ${P}_2$      $1$                    $0.5$                  $0.5$                  $0.5$                  $0.5$                  [$\checkmark$]{}            [$\times$]{}                 [$\times$]{}                       [$\times$]{}
  ${P}_3$      $1$                    $0$                    $0.5$                  $0.5$                  $0.5$                  [$\checkmark$]{}            [$\times$]{}                 [$\times$]{}                       [$\times$]{}
  ${P}_4$      $0.5$                  $0$                    $1$                    $0$                    $0.5$                  [$\checkmark$]{}            [$\checkmark$]{}             [$\times$]{}                       [$\checkmark$]{}
  ${P}_5$      $0.5$                  $0.5$                  $1$                    $0$                    $0.5$                  [$\checkmark$]{}            [$\checkmark$]{}             [$\times$]{}                       [$\times$]{}
  ${P}_6$      $0.5$                  $0.5$                  $0$                    $1$                    $0.5$                  [$\checkmark$]{}            [$\checkmark$]{}             [$\times$]{}                       [$\times$]{}
  ${P}_7$      $0.5$                  $0.5$                  $0$                    $1$                    $0$                    [$\checkmark$]{}            [$\checkmark$]{}             [$\times$]{}                       [$\times$]{}
  ${P}_8$      $1$                    $0.5$                  $1$                    $0$                    $0.5$                  [$\checkmark$]{}            [$\times$]{}                 [$\times$]{}                       [$\times$]{}
  ${P}_9$      $1$                    $0$                    $1$                    $0$                    $0.5$                  [$\checkmark$]{}            [$\times$]{}                 [$\checkmark$]{}                   [$\times$]{}
  ${P}_{10}$   $1$                    $0.5$                  $0$                    $1$                    $0.5$                  [$\checkmark$]{}            [$\checkmark$]{}             [$\times$]{}                       [$\times$]{}
  ${P}_{11}$   $1$                    $0.5$                  $0$                    $1$                    $0$                    [$\checkmark$]{}            [$\checkmark$]{}             [$\times$]{}                       [$\times$]{}
  ${P}_{12}$   $1$                    $0$                    $0$                    $1$                    $0.5$                  [$\checkmark$]{}            [$\checkmark$]{}             [$\times$]{}                       [$\times$]{}
  ${P}_{13}$   $1$                    $0$                    $0$                    $1$                    $0$                    [$\checkmark$]{}            [$\checkmark$]{}             [$\checkmark$]{}                   [$\checkmark$]{}

  : Ternary satisfying and information maximizing distributions associated with the sets of constraints ${{\cal C}}$ and ${{\cal C}}'$ from Example \[ex:cafx\].[]{data-label="tab:constrlabx"}

Other Approaches
----------------

We conclude our comparison with related work by considering two general approaches in knowledge representation and reasoning.

### Constraint Satisfaction Problem

Constraint programming [@Dechter2003; @Rossi:2006; @Tsang1993] is a general problem solving paradigm for modelling and solving hard search problems. In essence, our approach comprises a series of constraints that take both probabilistic and argumentative aspects into account. Thus, the problems we discussed can be captured as constraint satisfaction problems and existing constraint programming solvers can be used to solve these problems. Constraint programming is a general approach and we already noted that there are some general notions in the constraint programming literature that subsume some of our specific notions (such as *eliminating explanations*, cf. Definition \[def:coverage\]). However, our work provides the first proposal for how to turn some aspects of representing and reasoning with beliefs in arguments into a constraint satisfaction problem. We provide the language for constraints over belief in arguments, the entailment and consequence relations, and epistemic semantics. A concrete formalisation and implementation of our approach using constraint programming technology is part of our current work. As a first step in this direction, we have considered performing updates in sub-classes of epistemic graphs as linear optimization problems [@HunterPP18].

### Bayesian Networks

A Bayesian network (or a causal probabilistic network) is an acyclic directed graph in which each node denotes a random variable (a variable that can be instantiated with an element from some set of events) and each arc denotes the causal influence of one random variable on another [@Pearl2000; @Barber2012]. Random variables can be used to represent propositions that are either “true” or “false”.
For example, if the random variable is car-battery-is-flat, then it can be instantiated with the event car-battery-is-flat, or the event $\neg\mbox{\tt car\mbox{-}battery\mbox{-}is\mbox{-}flat}$. A key advantage of a Bayesian network is the use of independence assumptions that can be derived from the graph structure. These independence assumptions allow the joint probability distribution for the random variables in a graph to be decomposed into a number of smaller joint probability distributions. This makes the acquisition and use of probabilistic information much more efficient. Superficially, there are some similarities between Bayesian networks and epistemic graphs. Both have a graphical representation of the influence of one node on another, where the nodes can be used to represent statements. Furthermore, the influences of one node on another can change the belief in the target node, and this change can be either positive or negative. However, Bayesian networks and epistemic graphs are significantly different in their underlying representation and in the way they work, as we clarify here:

1. A Bayesian network is used with a single probability distribution, whereas the constraints associated with an epistemic graph allow for multiple probability distributions that satisfy the constraints;
2. A Bayesian network updates a random variable by having it take on a specific instantiation, so that there is no longer any doubt about that instantiation (e.g. in the case of a random variable being updated by taking on the value “true”, there is no longer any doubt or uncertainty about the value of the random variable being “true”), whereas with epistemic graphs, if the belief in a node is updated, it can be of any value in the unit interval (e.g. for an argument ${{{\tt {A}}}}$ that is currently believed to degree $0.7$, we may choose to update it to degree $0.3$), and so this means epistemic graphs can reflect uncertainty in updating; and
3.
A Bayesian network propagates updates by conditioning, which is a specific kind of constraint (e.g. for a graph with two variables $\alpha$ and $\beta$, after updating $\alpha$, the propagated belief in $\beta$ is ${P}(\beta\mid\alpha)$), whereas the framework for epistemic graphs provides a rich language for specifying a wide variety of constraints between the two variables.

The motivations for Bayesian networks and epistemic graphs are also different. Bayesian networks are for modelling normative reasoning (i.e. they model how we should reason with a set of random variables with a given set of influences between them). In contrast, epistemic graphs are for modelling non-normative reasoning, and are intended to reflect how people may choose to reason with the uncertainty concerning arguments. So with epistemic graphs, we may model how some people may regard the relative belief in a set of arguments, but it does not mean that they are correct in any normative sense; rather, it is just a way of modelling their perspective or behaviour. There are also some proposals for capturing aspects of Bayesian networks in argumentation, such as qualitative probabilistic reasoning [@Parsons2004; @Timmer2017]. As with Bayesian networks, they are also concerned with capturing aspects of normative reasoning, and so have a different aim to epistemic graphs.

Discussion {#section:Discussion}
==========

In this paper, we have generalized the epistemic approach to probabilistic argumentation by introducing the notion of epistemic graphs, which define how arguments influence each other through the use of epistemic constraints. We provided an extensive study of the properties of such graphs and exemplified their potential use in practical applications. We have also created a proof theory for reasoning with the constraints that is both sound and complete, and analyzed various ways in which the constraints can affect arguments and relations between them.
We have also compared our research to other relevant works in argumentation, CSP and Bayesian networks. Our proposal meets the requirements postulated in the introduction:

Modelling fine–grained acceptability
: Epistemic graphs can express the varying degrees of belief we have in arguments, and these beliefs can be harnessed and restricted through the use of epistemic constraints, as seen in Section \[sec:epistemicgraphs\]. The beliefs can be easily associated with the traditional notions of acceptance and rejection of arguments [@PolbergHunter17] and, in contrast to more abstract forms of scoring and ranking arguments, provide a clearer meaning of the values associated with arguments.

Modelling positive and negative relations between arguments
: With epistemic graphs, we can model various types of relations between arguments, including positive, negative or mixed ones, as studied in Section \[sec:consistentlabel\]. Furthermore, they can also handle relations marked as group or binary (for example, two attackers need to be believed in order for the target to be disbelieved, versus at least one attacker needs to be believed for the target to be disbelieved). Finally, in our analysis of the nature of various relations, we have also discussed how the views on the influence one argument has over another change depending on whether a local or a global perspective is taken into account.

Modelling context–sensitivity
: Two structurally similar graphs can be assigned different sets of epistemic constraints. An agent is allowed to have different opinions on similar graphs and adopt them according to his or her needs, be it caused by the actual content of the arguments, the agent’s preferences or knowledge, or the way the agent understands the arguments. Thus, there is no requirement for the same graphs to be evaluated in the same fashion under the same epistemic semantics.
For example, we can easily create two different sets of constraints for the two scenarios considered in Example \[ex:context\], despite the fact that their formal representations are equivalent. Epistemic graphs can also deal with restrictions that are not necessarily reflected in the structure of the graph.

Modelling different perspectives
: Agents do not need to adhere to a uniform perspective on a given problem. They can perceive arguments and the relations between them differently, and thus find different arguments believable or not, as seen in Example \[ex:trains\]. Furthermore, even agents sharing some similarities in their views can respond differently when put in the same situation. Such behaviour could be observed in Example \[ex:smoking\], and it would not be problematic to create constraints that handle rejecting certain arguments differently.

Modelling imperfect agents
: The freedom in defining constraints and beliefs in arguments allows agents to express their views freely, independently of whether they are deemed rational or are strongly affected by cognitive biases. For example, two logically conflicting arguments do not need to be accompanied by constraints reflecting this conflict. Furthermore, agents do not necessarily need to adhere to various types of semantics [@PolbergHunter17], and epistemic constraints could be used to grasp their views more accurately.

Modelling incomplete graphs
: An argument graph might not reflect all the knowledge an agent has that is relevant to a given problem, as seen in Example \[ex:smoking\]. Consequently, a given argument can be believed or disbelieved without any apparent justification, as seen in Section \[section:CaseStudy\]. It is however not difficult to create constraints stating that a given argument should be assigned a particular score.
It is also possible to not create any constraints at all if it is not known how an agent views the interactions between arguments, and thus provide no coverage to arguments or relations, as seen in Section \[sec:CoverageAnalysis\].

Although our analysis of epistemic graphs is extensive, there are still various topics to be considered. The currently proposed epistemic graph semantics can be further refined in order to take into account the additional information contained in the structure of the graph, but not in the constraints. We could, for example, consider a coverage–based family of semantics, where the status assigned to a given argument can depend on the level of coverage it possesses. Another issue we want to explore concerns how the constraints can be obtained. Crowdsourcing opinions on arguments is a popular method for obtaining data [@Cerutti2014; @HunterPolberg17; @PolbergHunter17]. Such data concerning beliefs in arguments and whether arguments are seen as related could be analyzed with, for example, machine learning techniques, in order to construct appropriate constraints.

In the future we would like to explore the use of epistemic graphs for practical applications, in particular for computational persuasion. Applying the existing epistemic approach to modelling a persuadee’s beliefs in arguments has produced methods for updating beliefs during a dialogue [@Hunter15ijcai; @Hunter16sum; @HunterPotyka17], efficient representation and reasoning with the probabilistic user model [@Hadoux16], modelling uncertainty in belief distributions [@Hunter16ecai], learning belief distributions [@HadouxHunter18], and harnessing decision rules for optimizing the choice of arguments based on the user model [@Hadoux17]. These methods can be further developed in the context of epistemic graphs in order to provide a theoretically well-understood and computationally viable framework for applications such as behaviour change.
For a preliminary investigation on how to update beliefs in epistemic graphs, please see [@HunterPP18].

The epistemic approach is not the only form of probabilistic argumentation. Another popular method relies on constellation probabilities [@Li:2011; @Hunter:2013; @Fazzinga:2015], in which we consider a number of argument graphs, each one having a probability of being the “real graph”. Incorporating constellation probabilities in epistemic graphs would, for example, allow for a more refined handling of agents whose argument graphs are not complete but have a chance of containing certain arguments. Furthermore, it is also possible to allow epistemic constraints to express beliefs in arguments as well as in the relations between them, similarly to what is done in [@PolbergHT17]. Consequently, further developments of epistemic graphs are an interesting topic for future work. Finally, we will also investigate algorithms and implementations aimed at handling epistemic graphs. This can be done by devising dedicated solutions as well as by introducing appropriate translations to, for example, propositional logic, as indicated by the results in Section \[section:ReasoningConstraints\]. Further possibilities concern employing SMT solvers or constraint logic programming.

Acknowledgments {#acknowledgments .unnumbered}
===============

Sylwia Polberg and Anthony Hunter were supported by EPSRC Project EP/N008294/1 “Framework for Computational Persuasion”. Matthias Thimm was partially supported by the Deutsche Forschungsgemeinschaft (grant KE 1686/3-1).

[^1]: Corresponding author. E–mail: sylwia.polberg at gmail.com. University College London, Department of Computer Science, 66-72 Gower Street, London WC1E 6EA, United Kingdom.\
© 2019. This manuscript version is made available under the CC-BY-NC-ND 4.0 license <http://creativecommons.org/licenses/by-nc-nd/4.0/>\
The published version of this manuscript is available at <https://doi.org/10.1016/j.artint.2020.103236>.
[^2]: Frameworks with recursive relations are represented as generalizations of directed graphs where edges point at other edges. Frameworks with group relations are often represented by B–graphs, i.e. directed hypergraphs where the head of an edge is a single node.

[^3]: We note that we refer only to edge labels here, not edges in general. Epistemic graphs are structurally different from ADFs and deriving the graph from the constraints is generally not possible (we refer to the beginning of Section \[sec:epistemicgraphs\] and to Section \[sec:ef-adf\] for additional details).

[^4]: This is an example of how extended argumentation frameworks can be modelled [@Polberg17].

[^5]: In frameworks such as ADFs [@BrewkaESWW13], a relation that is both attacking and supporting is redundant and can be safely removed from the graph [@Polberg17]. In our case, such relations more closely correspond to unspecified ones due to their lack of effectiveness. Epistemic graphs are more fine-grained than ADFs and relations that are both positive and negative might not be redundant.

[^6]: Please note that this property was previously referred to as *binary* [@PolbergHunter17].

[^7]: We note that ${P}_{opt}$ is one of many possible distributions that could be picked to satisfy the assumed constraints.

[^8]: We note that ${P}_{pes}$ is one of many possible distributions that could be picked to satisfy the assumed constraints.

[^9]: We note that we use the propositional representation of ADFs [@BrewkaESWW13].

[^10]: This means that the elements mapped originally to ${{\bf u}}$ are now assigned either ${{\bf t}}$ or ${{\bf f}}$.
---
abstract: |
    We consider the creation conditions of diverse hierarchical trees both analytically and numerically. A connection between the probabilities to create hierarchical levels and the probability to associate these levels into a united structure is studied. We argue that a consistent probabilistic picture requires the use of a deformed algebra. Our consideration is based on the study of the main types of hierarchical trees, among which the regular and degenerate ones are studied analytically, while the creation probabilities of Fibonacci, scale-free and arbitrary trees are determined numerically.

    probability, hierarchical tree, deformation

    02.50.-r, 89.75.-k, 89.75.Fb
author:
- 'A.I. Olemskoi, S.S. Borysov, I.A. Shuda'
date: 'Received February 18, 2011'
title: Analytical and numerical study of the probability of hierarchical tree creation
---

Introduction {#Sec.1}
============

As has been shown for diverse systems, ranging from the World Wide Web [@15a] to biological [@16a] and social [@20a] networks, real networks are governed by strict organizing principles displaying the following properties: i) most networks have a high degree of clustering; ii) many networks have been found to be scale-free [@23a], which means that the probability distribution over node degrees, the degree of a node being its number of links to neighbors, follows a power law.
A formal basis of the theory of hierarchical structures is represented by the fact that hierarchically constrained objects are related to an ultrametric space whose geometrical image is the Cayley tree, with nodes and branches corresponding to elementary cells and their links [@Rammal]. One of the first theoretical pictures [@Huberman] was devoted to the diffusion process on either uniformly or randomly multifurcating trees. The subsequent study of hierarchical structures has shown [@JETP_Let2000] that their evolution reduces to an anomalous diffusion process in ultrametric space that arrives at a steady-state distribution over hierarchical levels, which represents the Tsallis power law inherent to non-extensive systems [@Tsallis]. The principal peculiarity of the Tsallis statistics is known to be governed by a deformed algebra [@Borges]. This paper briefly presents the results of our study of the creation conditions of a vast variety of hierarchical trees on the basis of methods initially developed within the quantum calculus [@QC]. An extended version of our analysis is published elsewhere [@OBS]. The outline of the paper is as follows. In section \[Sec.2\], we discuss the statistical peculiarities of the picture of hierarchical structure creation to demonstrate that the effective energies of hierarchical levels remain additive, while the set of corresponding probabilities becomes both non-additive and non-multiplicative due to the coupling between different levels. Further consideration is based on an analytical and numerical study of the main types of hierarchical trees in section \[Sec.4.0\]. Section \[Sec.8\] is devoted to the discussion of the obtained results.
Statistical peculiarities of hierarchical ensembles {#Sec.2} =================================================== As pointed out above, the stationary creation probability of the $l$-th hierarchical level takes the Tsallis form $$p_l=p_0\exp_q\left(-\frac{\varepsilon_l}{\Delta}\right),\qquad \exp_q(x):= \left[1+(1-q)x\right]_+^{1\over 1-q},\qquad [y]_+\equiv \max(0,y). \label{1}$$ Here, $p_0$ is the top-level probability fixed by the normalization condition, $q\geqslant 0$ is a deformation parameter, $\varepsilon_l$ is an effective energy of level $l$, and $\Delta$ is an effective temperature. Although the energy is a key concept of the network optimization theory, it is not always possible to match its value to a given graph. However, based on heuristic ideas, it is always possible to attach an effective value of energy to some phenomenological parameter. Also, for our purpose it is convenient to consider the nodes of the hierarchical tree as particles of a statistical ensemble, while its edges represent couplings between these particles. In contrast to the statistical theory of complex networks [@Newman], the hierarchical systems under our consideration cannot simultaneously display the properties of additivity of effective energies and multiplicativity of the related probabilities. The cornerstone of our approach is that the creation of a hierarchical structure does not break the law of energy conservation, so that the energies $\varepsilon_l$ remain additive: $$\epsilon_n:=\sum_{l=0}^{n}\varepsilon_l. \label{2}$$ Within the statistical theory of random networks [@Newman], the effective energies $\varepsilon_l$ reduce to a constant for the microcanonical ensemble and are fixed by the set of particular probabilities $p_l$ according to the relation $\varepsilon_l=-\Delta\ln(p_l)+{\rm const}$ for both canonical and grand canonical ensembles with an effective temperature $\Delta$.
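As a quick numerical sketch (ours, not part of the paper), the Tsallis $q$-exponential of equation (\[1\]) can be coded directly from its definition; the function name and sample values are chosen purely for illustration:

```python
import math

def exp_q(x, q):
    """Tsallis q-exponential: [1 + (1-q)*x]_+ ** (1/(1-q)); ordinary exp at q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return math.exp(x)
    base = max(1.0 + (1.0 - q) * x, 0.0)
    if base == 0.0:
        # cutoff [.]_+ reached: exp_q vanishes for q < 1 and diverges for q > 1
        return 0.0 if q < 1.0 else math.inf
    return base ** (1.0 / (1.0 - q))

# For q near 1 the q-exponential reduces to the ordinary exponential ...
print(exp_q(-0.5, 1.0001), math.exp(-0.5))   # both ≈ 0.6065
# ... while for q > 1 it decays as a power law instead of exponentially.
print(exp_q(-10.0, 1.5))                     # (1 + 0.5*10)**(-2) = 1/36
```

The power-law tail at $q>1$ is what distinguishes the distribution (\[1\]) from an ordinary Boltzmann factor.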
On the other hand, due to the coupling between different levels, the hierarchy essentially deforms the corresponding probabilities $p_l$, which become non-multiplicative. Indeed, the probability $P_n$ to create an $n$-level hierarchical structure is connected with the total energy $\epsilon_n$ by means of the relation $\epsilon_n=-\Delta\ln_q(P_n)$ with the deformed logarithm $\ln_q(x):=\left(x^{1-q}-1\right)/(1-q)$. Then, the condition (\[2\]) leads to the additivity of these logarithms: $\ln_q(P_n)=\sum_{l=0}^{n}\ln_q(p_l)$, and one obtains the probability relation $$P_n:=p_0\otimes_q p_1\otimes_q p_2\otimes_q\dots\otimes_q p_n, \label{5}$$ where the deformed product is defined as $x\otimes_q y:=\left[x^{1-q}+y^{1-q}-1\right]_+^{1\over 1-q}$. Thus, in contrast to ordinary statistical systems, the creation probability $P_n$ of a hierarchical structure equals the [*deformed*]{} product of the specific probabilities $p_l$ related to the levels $l=0,1,\dots,n$. The above law of deformed multiplicativity determines the probabilities $p_l$ to create a set of hierarchical levels simultaneously. Another problem emerges when we consider the connection between the creation probability of a given hierarchical level $l$ and that of each node at this level. For simplicity, let us consider a regular tree whose nodes multifurcate to generate a set of $N_l$ nodes, each determined with the inherent probability $\pi=p_0/N_l$, where $p_0$ is their top magnitude fixed by normalization. If one assumed additivity of the node probabilities, one would arrive at a total probability of realizing level $l$ that is independent of the level number: $p_l:=N_l\pi=p_0$. Since the probability $p_l$ to create a hierarchical level decays with the level number $l$, we are forced again to replace the trivial additive connection of the level probability $p_l$ with the node value $\pi$ by a deformed sum.
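The additivity of deformed logarithms behind relation (\[5\]) is easy to check numerically. The following sketch (ours; the probability values are arbitrary test numbers) iterates the deformed product and compares $\ln_q(P_n)$ with $\sum_l \ln_q(p_l)$:

```python
def ln_q(x, q):
    """Deformed logarithm: ln_q(x) = (x**(1-q) - 1)/(1-q)."""
    return (x ** (1.0 - q) - 1.0) / (1.0 - q)

def otimes_q(x, y, q):
    """Deformed product: [x**(1-q) + y**(1-q) - 1]_+ ** (1/(1-q))."""
    return max(x ** (1.0 - q) + y ** (1.0 - q) - 1.0, 0.0) ** (1.0 / (1.0 - q))

q, p = 1.5, [0.5, 0.3, 0.2]
P = p[0]
for pl in p[1:]:               # iterate relation (5)
    P = otimes_q(P, pl, q)

# Deformed logarithms are additive: ln_q(P_n) = sum_l ln_q(p_l)
print(ln_q(P, q), sum(ln_q(pl, q) for pl in p))   # both ≈ -4.952
```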
Finally, since the creation probabilities of the hierarchical levels go beyond probabilities related to non-hierarchical structures, the standard normalization condition based on the use of the usual sum is broken as well. With the growth of the difference $|q-1|$, the probability (\[1\]) increases at arbitrary values of the energy $\varepsilon_l$ with respect to the non-deformed value related to the parameter $q=1$. On the other hand, the deformed sum $x\oplus_q y:=x+y+(1-q)xy$ decreases with the growth of the parameter $q>1$. As a result, one can anticipate that a self-consistent probabilistic picture of hierarchical ensembles is reached if one proposes the normalization condition $$p_0\oplus_q p_1\oplus_q\dots\oplus_q p_n=1,\qquad q>1 \label{1a}$$ that is deformed so as to fix the top level probability $p_0$. Taking into account the above statements, one obtains an explicit form of the creation probability of a hierarchical structure [@OBS] $$P_n=\exp_q\left[\frac{\sum_{l=0}^{n}p_l^{1-q}-(n+1)}{1-q}\right] =\left(\sum_{l=0}^{n}p_l^{1-q}-n\right)_+^{\frac{1}{1-q}}. \label{11}$$ Relation (\[11\]) implies that the creation probability decreases with the growth of the hierarchical tree, in accordance with the difference equation $$P_{n-1}^{1-q}-P_n^{1-q}=1-p_n^{1-q}. \label{13}$$ In the non-deformed limit $q\to 1$, relations (\[5\]) and (\[11\]) reduce to the ordinary rule $P_n=\prod_{l=0}^n p_l$ (respectively, equation (\[13\]) reads $P_{n}/P_{n-1}=p_n$), while at $q=2$ the creation probability (\[11\]) takes a maximal value. According to equation (\[11\]), the subsequent step in the definition of the creation probability $P_n$ of a hierarchical structure is the determination of the set of probabilities $\{p_l\}_0^n$ related to different hierarchical levels.
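To see that the closed form (\[11\]) agrees with the iterated deformed product (\[5\]) and with the difference equation (\[13\]), one can run a small consistency check (our illustration; the values of $q$ and $p_l$ are arbitrary):

```python
def P_closed(p, q):
    """Closed form (11): P_n = (sum_l p_l**(1-q) - n)_+ ** (1/(1-q))."""
    n = len(p) - 1
    base = max(sum(pl ** (1.0 - q) for pl in p) - n, 0.0)
    return base ** (1.0 / (1.0 - q))

def P_product(p, q):
    """Iterated deformed product, relation (5)."""
    P = p[0]
    for pl in p[1:]:
        P = max(P ** (1.0 - q) + pl ** (1.0 - q) - 1.0, 0.0) ** (1.0 / (1.0 - q))
    return P

q, p = 1.5, [0.5, 0.3, 0.2]
print(P_closed(p, q), P_product(p, q))           # both ≈ 0.08276

# Difference equation (13): P_{n-1}**(1-q) - P_n**(1-q) = 1 - p_n**(1-q)
lhs = P_closed(p[:-1], q) ** (1.0 - q) - P_closed(p, q) ** (1.0 - q)
rhs = 1.0 - p[-1] ** (1.0 - q)
print(lhs, rhs)                                  # both ≈ -1.2361
```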
Level probabilities for different hierarchical trees {#Sec.4.0} ==================================================== First, we consider a regular tree whose nodes multifurcate at level $l$ with constant branching index $b>1$ to generate a set of $N_l=b^l$ nodes determined with the inherent probability $\pi=p_0/N_l=p_0b^{-l}$. Within a naive proposition, one could assume additivity of the node probabilities to arrive at the total probability of realizing level $l$ being $p_l:=N_l\pi=p_0$. Thus, within the condition of additivity of the node probabilities, the related values $p_l=p_0=(n+1)^{-1}$ for all levels appear to be independent of their numbers $l=0,1,\dots,n$. To avoid this trivial situation, we propose to replace the common additive connection of the level probability with the node value $\pi$ by the deformed one. Such a deformation leads to the required level distribution in the binomial form [@OBS] $$\label{aa} p_l=\frac{[1+(1-q)b^{-l}]_+^{b^l}-1}{1-q}p_0\,.$$ This probability increases with the growing number $l$ of the hierarchical level at $q<1$ and decays at $q>1$. From the physical point of view, the creation probability of a lower hierarchical level should be less than that for higher levels, so that one ought to conclude that only the case $q>1$ is meaningful. Characteristically, the form of this distribution depends very weakly on both the deformation parameter $q$ and the branching index $b$, excluding the domain $2-q\ll1$, where the probability does not decay as fast at small values of the branching index $b$. For a large branching index $b\gg1$ or level number $l$, the dependence $p_l$ decreases exponentially fast to reach the minimum value $$p_\infty=\frac{{\rm e}^{1-q}-1}{1-q}p_0=p_0\ln_q{\rm e}. \label{c}$$ There is a distinctive feature in the behavior of the regular hierarchical tree near the limit value $q=2$, where the dependence (\[aa\]) has no singularity.
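A short numerical sketch (ours; the parameter values are illustrative) confirms the stated behavior of distribution (\[aa\]): for $q>1$ the level probability decays monotonically and approaches the limit value (\[c\]) already after a few levels:

```python
import math

def level_profile(l, b, q):
    """p_l / p_0 from equation (aa) for a regular tree with branching index b."""
    base = max(1.0 + (1.0 - q) * b ** (-l), 0.0)
    return (base ** (b ** l) - 1.0) / (1.0 - q)

b, q = 2, 1.9
profile = [level_profile(l, b, q) for l in range(8)]
limit = (math.exp(1.0 - q) - 1.0) / (1.0 - q)   # equation (c): p_inf / p_0
# The profile starts at 1 and decays monotonically towards the limit value (c).
print(profile[0], round(profile[-1], 3), round(limit, 3))
```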
This feature is corroborated by the dependence of the top level probability $p_0$ on the deformation parameter. This probability increases monotonically with growing $q$ to reach sharply the limit value $p_0=1$ at the point $q=2$. Obviously, this means an anomalous increase of the probabilities $p_l$ for the whole set of hierarchical levels. Though, within the domain $2-q\ll1$, the ordinary normalization condition $\sum_{l=0}^n p_l=1$ is violated appreciably, the definition of the deformed sum shows that the deformed normalization condition (\[1a\]) can be recovered as the parameter $q$ grows. However, beyond the border $q=2$, this condition is not satisfied at all. As a result, we arrive at the conclusion that the physically meaningful values of the deformation parameter are concentrated within the domain $q\in[1,2]$. The difference between regular and degenerate trees is that [*all*]{} nodes multifurcate at each level in the former case, while [*only one*]{} node branches in the latter. In this sense, the degenerate tree can be considered as an antipode of the regular one to be studied analytically. Taking into account this peculiarity, the creation probability of the $l$th hierarchical level takes the form [@OBS] $$p_l=\frac{\left[1+(1-q)b^{-l}\right] \prod_{m=1}^l\Bigl[1+(1-q)b^{-m}\Bigr]^{b-1}-1}{1-q}p_0\,. \label{S6}$$ Similarly to the case of the regular tree, this distribution decays exponentially fast to the limit probability $p_\infty$ determined by equation (\[c\]). Above, we have considered two conceptual examples of hierarchical trees with self-similar structure, i.e., regular and degenerate trees. By contrast, a scale-free tree has a rather random structure, but the probability distribution over hierarchical levels tends to the power-law form inherent in self-similar statistical systems.
In this case, the probability distribution over tree levels is determined by the discrete difference equation [@JETP_Let2000] $$p_{l+1}-p_l=-p_l^q/\Delta,\qquad l=0,1,\dots,n \label{12a}$$ accompanied by the deformed normalization condition (\[1a\]) ($\Delta$ being a distribution dispersion). In figure \[fig1\] we compare the probability distributions over hierarchical levels of scale-free, regular, degenerate and Fibonacci trees at different values of the deformation parameter. \(a) (b)\ ![Probability distributions over hierarchical levels for scale-free, regular, Fibonacci and degenerate trees (curves 1-4, respectively) at $\Delta=2$, $b=2$, $n=10$ and $q=1.9$ (a) and $q=1.9999$ (b).[]{data-label="fig1"}](fig1a "fig:"){width="70mm"} ![Probability distributions over hierarchical levels for scale-free, regular, Fibonacci and degenerate trees (curves 1-4, respectively) at $\Delta=2$, $b=2$, $n=10$ and $q=1.9$ (a) and $q=1.9999$ (b).[]{data-label="fig1"}](fig1b "fig:"){width="70mm"}\ As is seen, at all $q$-values the forms of these distributions are actually similar for all the above trees excluding the scale-free one. In the latter case, the level probability decays to zero as a power law, whereas there is a non-zero limit value (\[c\]) for the regular trees.
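Equation (\[12a\]) can be iterated directly. The following sketch (ours; the values of $p_0$, $\Delta$ and $q$ are illustrative rather than fixed by the normalization condition (\[1a\])) exhibits the monotonic decay of the scale-free level probabilities:

```python
def scale_free_levels(p0, q, delta, n):
    """Iterate equation (12a): p_{l+1} = p_l - p_l**q / Delta."""
    p = [p0]
    for _ in range(n):
        p.append(p[-1] - p[-1] ** q / delta)
    return p

p = scale_free_levels(0.5, 1.9, 2.0, 10)
# The level probability decays monotonically towards zero (a power-law tail),
# in contrast to the finite limit (c) reached by the regular tree.
print(p[0], p[-1])
```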
In accordance with such a behavior, the creation probabilities depicted in figure \[fig2\] decay faster for the scale-free tree than in the case of the regular and degenerate ones. \(a) (b)\ ![Creation probabilities of scale-free, regular, Fibonacci and degenerate hierarchical trees (curves 1-4, respectively) as functions of the total level number at $\Delta=2$, $b=2$ and $q=1.9$ (a) and $q=1.9999$ (b).[]{data-label="fig2"}](fig2a "fig:"){width="70mm"} ![Creation probabilities of scale-free, regular, Fibonacci and degenerate hierarchical trees (curves 1-4, respectively) as functions of the total level number at $\Delta=2$, $b=2$ and $q=1.9$ (a) and $q=1.9999$ (b).[]{data-label="fig2"}](fig2b "fig:"){width="70mm"}\ Characteristically, this difference appears only within the domain $2-q\ll1$ of the deformation parameter variation. Finally, let us consider two examples of arbitrary trees: the former concerns the Fibonacci tree (the number of nodes at each of its levels equals a Fibonacci number), while the latter relates to the schematic evolution tree shown in figure \[fig3.pdf\]a (in the latter case, the nodes identify essential stages in the evolution of life; e.g., the human is situated on the 24th level). \(a) (b)\ ![(a) Schematic representation of the evolution tree (from Ref. [@Science]). (b) Creation probability of the evolution tree vs. the level number at $q = 1.0001, 1.1, 1.2, 1.3, 1.4, 1.5, 1.7, 1.9, 1.9999$ (curves 1-9, respectively).[]{data-label="fig3.pdf"}](fig3a "fig:"){width="70mm"} ![(a) Schematic representation of the evolution tree (from Ref. [@Science]). (b) Creation probability of the evolution tree vs. the level number at $q = 1.0001, 1.1, 1.2, 1.3, 1.4, 1.5, 1.7, 1.9, 1.9999$ (curves 1-9, respectively).[]{data-label="fig3.pdf"}](fig3b "fig:"){width="70mm"}\
Using the approach developed for the node and level probabilities obeying the normalization condition, we show that the probability distributions of the Fibonacci tree depicted in figures \[fig1\], \[fig2\] do not actually differ from the related dependencies for both regular and degenerate trees. As concerns the evolution tree, its probability distributions (figure \[fig3.pdf\]b) show that the presence of stopped branches (such as the two rightmost ones in figure \[fig3.pdf\]a) considerably decreases the creation probability of new hierarchical levels. In particular, the probability of human appearance takes values greater than $10^{-4}$ only at the deformation parameter $q=1.9999$. Concluding remarks {#Sec.8} ================== To avoid ambiguities, it is worthwhile to stress that our consideration concerns the probabilistic picture of the creation of the hierarchical trees themselves rather than hierarchical phenomena and processes evolving on these trees (for example, hierarchically constrained statistical ensembles [@13a], diffusion processes on multifurcating trees [@Huberman], et cetera). The principal peculiarity of the probabilistic picture elaborated here is the distinction between deformed and non-deformed quantities. Thus, the effective energies of hierarchical levels in equation (\[2\]) are non-deformed quantities because the creation of hierarchical structures does not break the conservation law of the energy, which is an additive value. Moreover, the node probabilities are determined using non-deformed relations because these probabilities relate to the configuration of the hierarchical tree itself (in other words, they are determined for geometrical, rather than probabilistic, reasons). At the same time, the appearance of hierarchy essentially deforms the probability relations (\[5\])–(\[11\]) due to the coupling between the level probabilities $p_l$.
Similarly, the definition of these probabilities through the corresponding node values is based on the use of a deformed summation. Making use of the deformed algebra leads to an increase of the probabilities $p_l$ for the whole set of hierarchical levels, acquiring an anomalous character near the point $q=2$. The deformed normalization condition (\[1a\]) is fulfilled only at $q\leqslant 2$, while it is broken beyond the limit $q=2$. As a result, taking into account the fact that the creation probability of a lower hierarchical level should be less than the one for higher levels, the physically meaningful values of the deformation parameter belong to the domain $q\in[1,2]$. [00]{} Albert R., Jeong H., Barabási A.-L., Nature, 1999, [**401**]{}, 130. Jeong H. et al., Nature, 2000, [**407**]{}, 651. Newman M.E.J., Proc. Nat. Acad. Sci. U.S.A., 2001, [**98**]{}, 404. Barabási A.-L., Albert R., Science, 1999, [**286**]{}, 509. Rammal R., Toulouse G., Virasoro M.A., Rev. Mod. Phys., 1986, [**58**]{}, 765. Bachas C.P., Huberman B.A., Phys. Rev. Lett., 1986, [**57**]{}, 1965. Olemskoi A.I., JETP Letters, 2000, [**71**]{}, 285. Tsallis C., Introduction to Nonextensive Statistical Mechanics – Approaching a Complex World. Springer, New York, 2009. Borges E.P., Physica A, 2004, [**340**]{}, 95. Kac V., Cheung P., Quantum Calculus. Springer, New York, 2002. Olemskoi A., Borysov S., Shuda I., J. Phys. Stud., 2011, in press. Newman M.E.J., SIAM Review, 2003, [**45**]{}, 167. Sugden A.M., Jasny B.R., Culotta E., Pennisi E., Science, 2003, [**300**]{}, 1691. Olemskoi A.I., Ostrik V.I., Kokhan S.V., Physica A, 2009, [**388**]{}, 609.
--- author: - | Falk Bruckmann\ Friedrich Schiller University Jena\ Theoretisch-Physikalisches Institut\ Max-Wien-Platz 1, D-07743 Jena\ title: Hopf defects as seeds for monopole loops --- Introduction ============ Topological objects are prominent examples of non-perturbative effects in quantum field theories. For Yang-Mills theories these are instantons, magnetic monopoles (and center vortices), respectively. While the former are intimately connected to the gauge invariant topological density and responsible for chiral symmetry breaking [@schaefer:98], the others are visible only after an Abelian (center) gauge fixing [@thooft:81a; @deldebbio:97; @alexandrou:00a] and are supposed to be responsible for confinement. Since both physical effects take place below the same critical temperature [@kogut:83], a relation between instantons and monopoles is highly desirable but still not fully understood. The first result in this direction is due to Rossi [@rossi:79]: a static ’t Hooft-Polyakov monopole can be built out of an array of instantons placed along the time axis. A similar construction exists for the caloron [@kraan:98a]. To see how instantons are built from monopoles, we take the point of view of Abelian projections (for a detailed prescription see [@bruckmann:00c]). An Abelian gauge is a partial gauge fixing leaving the maximal Abelian subgroup[^1] untouched. The required gauge transformation is best described by the diagonalisation of an ‘auxiliary Higgs field’ $\phi$ in the adjoint representation. Defects occur where this field vanishes, i.e. where the diagonalisation becomes ambiguous. Since this means solving three equations, generic defects in four dimensions form lines. Moreover, the normalised Higgs field $n=\phi/|\phi|$ perpendicular to those lines is generically a hedgehog. It can be diagonalised only at the expense of introducing a singular gauge field, the Dirac monopole. By charge conservation, defects form closed lines, i.e. loops.
Topological arguments enforce the existence of defects for every configuration with non-vanishing instanton number on the four-sphere[^2]. The topological properties of the defects necessary to generate an instanton number are highly non-trivial for general Abelian gauges [@jahn:00]. However, the relation between instantons and [*static*]{} monopoles in the Polyakov gauge is well understood [@weiss:81; @griesshammer:97; @reinhardt:97b; @ford:98; @jahn:98; @ford:99a; @ford:99b]. Much less is known analytically about defects in the Laplacian Abelian gauge (LAG) [@vandersijs:97; @vandersijs:98; @vandersijs:99]. The Higgs field of the LAG is defined as the lowest eigenvector of the gauge covariant Laplacian in the background of $A$, $-\D^2[A]\phi=E_0\phi$. The LAG has turned out to be ‘very useful’ on the lattice in the sense that it shares Abelian dominance with the Maximal Abelian Gauge (MAG) [@ilgenfritz:00] but does not suffer from a severe Gribov problem [@bali:96; @deforcrand:01b]. Monopole loops have been observed for instanton backgrounds in the LAG by numerical means [@reinhardt:00; @deforcrand:01a]. A fully analytical treatment, however, is very difficult. So far this was only possible for the ’t Hooft instanton [@bruckmann:01a] (and the meron [@reinhardt:00]), which is highly symmetric and thus non-generic; the defect degenerates to a point and is localised at the instanton core (see below). The present work is the first step towards an analytical investigation of generic configurations in the LAG. By breaking the high symmetry, we show that for configurations near the ’t Hooft instanton the defect manifold becomes a loop (even a circle, Section \[first\_section\]). Furthermore, the associated hedgehog is twisted once along the loop (the simplest possibility to account for the instanton number, Section \[second\_section\]). As a by-product, the picture of Hopf defects as ‘monopole loops with vanishing radius’ is proven.
From Hopf defects to monopole loops {#first_section} =================================== The ground state of the $SU(2)$ covariant Laplacian in the background of a ’t Hooft instanton in regular gauge is of the form [@bruckmann:01a], $$\begin{aligned} \label{phi_instanton} \phi=f(r)n_{\rm H},\qquad f(r)\stackrel{r\rightarrow 0}{\longrightarrow}r^2,\end{aligned}$$ where $n_{\rm H}$ is the standard Hopf map[^3] [@hopf:31; @nakahara:90; @dubrovin:85] $$\begin{aligned} \label{hopf_map} n_{\rm H}\equiv\left(\begin{array}{c} 2(\hat{x}_1\hat{x}_3+\hat{x}_2\hat{x}_4) \\ 2(\hat{x}_2\hat{x}_3-\hat{x}_1\hat{x}_4) \\ \hat{x}_1^2+\hat{x}_2^2-\hat{x}_3^2-\hat{x}_4^2 \end{array}\right),\qquad\hat{x}_\mu\equiv x_\mu/r\end{aligned}$$ This Higgs field $\phi$ vanishes quadratically[^4] at the instanton core – the origin – where a pointlike defect is located. We conjecture that this behaviour is not a feature of the particular Abelian gauge chosen, but rather a matter of symmetry. The ’t Hooft instanton is spherically symmetric (in a proper definition involving gauge transformations); hence any Abelian gauge which does not break the rotational symmetry $SO(4)$ enforces the Higgs field to be spherically symmetric as well. Monopole loops would break this symmetry, while pointlike defects (as well as $S^3$ defect manifolds) do not. On the other hand it is very easy to verify that the Higgs field (\[hopf\_map\]) has the right topological behaviour. Living in an associated bundle it must have the same boundary conditions[^5] as the gauge field, $$\begin{aligned} r\rightarrow\infty:\qquad A\rightarrow ig\d g^\dagger,\quad n\rightarrow g\,\mbox{const}\,g^\dagger\end{aligned}$$ For the instanton in regular gauge we have $g=h\equiv\hat{x}_4\Eins_2+i\hat{x}_a\sigma_a$. It follows that $n$, being a mapping from $S^3$ (in coordinate space) to $S^2$ (in color space), must have a Hopf invariant equal to the instanton number (equal to the winding of $h$). 
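As a small consistency check (ours, not part of the paper), one can verify numerically that the standard Hopf map (\[hopf\_map\]) indeed sends every direction in $\mathbb{R}^4$ to a unit vector in color space:

```python
import math, random

def hopf_map(x):
    """Standard Hopf map n_H, evaluated on a nonzero 4-vector x (hats via x/r)."""
    r = math.sqrt(sum(xi * xi for xi in x))
    x1, x2, x3, x4 = (xi / r for xi in x)
    return (2 * (x1 * x3 + x2 * x4),
            2 * (x2 * x3 - x1 * x4),
            x1 ** 2 + x2 ** 2 - x3 ** 2 - x4 ** 2)

# n_H maps S^3 (directions in coordinate space) to S^2 (unit vectors in color space).
for _ in range(100):
    x = [random.uniform(-1, 1) for _ in range(4)]
    n = hopf_map(x)
    assert abs(sum(c * c for c in n) - 1.0) < 1e-9
print("n_H lies on the unit two-sphere for all sampled directions")
```

The identity $|n_{\rm H}|^2 = 4(\hat{x}_1^2+\hat{x}_2^2)(\hat{x}_3^2+\hat{x}_4^2) + (\hat{x}_1^2+\hat{x}_2^2-\hat{x}_3^2-\hat{x}_4^2)^2 = 1$ is what the loop confirms.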
$n_{\rm H}=h\,\sigma_3/2\,h^\dagger$ is just the prototype mapping with Hopf invariant one. Let us now [*slightly perturb*]{} the ’t Hooft instanton, $A=A_{\rm inst}+\lambda\delta A$, with perturbation parameter $\lambda$. The usual Schrödinger perturbation theory for the change of the ground state, $\phi=\phi_{\rm inst}+\lambda\delta\phi$, requires access to all eigenvalues and eigenfunctions of $-\D^2[A_{\rm inst}]$. These are not known analytically. But if perturbation theory is valid, the size of the defect manifold (i.e. the size of the expected monopole loop) is small. Therefore we can restrict ourselves to the vicinity of the origin. There we can Taylor expand $\delta\phi$; for our purposes even the lowest order approximation is sufficient. Thus the Higgs field of a generic configuration $A$ *close to the instanton* (in orbit space) and *near the origin* (in coordinate space) is (cf. (\[phi\_instanton\])), $$\begin{aligned} \label{phi_perturbed_1} \phi=\phi_{\rm inst}+\lambda\delta\phi= r^2\,n_{\rm H}+R^2\:\mbox{const}\end{aligned}$$ where we have introduced a radius parameter $R$, since the Higgs field in our convention is of dimension (length)$^2$. Without loss of generality we specialise to a perturbation pointing in the third color direction, $$\begin{aligned} \label{phi_perturbed_2} \phi=r^2\,n_{\rm H}-R^2\left(\begin{array}{c} 0\\0\\1 \end{array}\right)\end{aligned}$$ (all other cases can be obtained by rotations). A straightforward calculation shows that the zeros of $\phi$ are then on the [*circle*]{} $C:\:\:x_1^2+x_2^2=R^2,\:x_3=x_4=0$. Its size scales with the perturbation parameter $R=\sqrt{\lambda}$ (see (\[phi\_perturbed\_1\])). The perturbation has [*enlarged the defect manifold from a point to a loop*]{}, thereby breaking the spherical symmetry. Such a picture was conjectured in [@brower:97b] for instantons in the MAG, but there the formation of the loop is suppressed by the gauge fixing functional [@bruckmann:00a].
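The statement that the zeros of (\[phi\_perturbed\_2\]) form the circle $C$ can also be checked directly. In this sketch (ours) we evaluate $\phi = r^2 n_{\rm H} - R^2(0,0,1)^{\rm T}$, written out in Cartesian components (the factors $r^2$ cancel the normalisation $\hat{x}=x/r$), on and off the circle:

```python
import math

def phi(x, R):
    """Perturbed Higgs field: r^2 * n_H - R^2 * (0, 0, 1)^T, in Cartesian form."""
    x1, x2, x3, x4 = x
    return (2 * (x1 * x3 + x2 * x4),
            2 * (x2 * x3 - x1 * x4),
            x1 ** 2 + x2 ** 2 - x3 ** 2 - x4 ** 2 - R ** 2)

R = 0.5
# On the circle C: x1^2 + x2^2 = R^2, x3 = x4 = 0, all three components vanish.
for k in range(8):
    t = 2 * math.pi * k / 8
    v = phi((R * math.cos(t), R * math.sin(t), 0.0, 0.0), R)
    assert max(abs(c) for c in v) < 1e-12
# Away from C the field is nonzero, e.g. at the origin (the unperturbed defect).
print(phi((0.0, 0.0, 0.0, 0.0), R))  # third component equals -R^2
```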
Notice that the deformed Higgs field $\phi$ is now on a different orbit, since we changed its zeros which are gauge invariant. Its global properties, however, remain the same (as we will also see in the next section) because we performed only a local perturbation around the origin. Monopole charge and twist {#second_section} ========================= To proceed further we introduce polar angles for both coordinate space $\mathbb{R}^4$, $$\begin{aligned} x=(r_{12}\cos\varphi_{12}, r_{12}\sin\varphi_{12}, r_{34}\cos\varphi_{34}, r_{34}\sin\varphi_{34}), \: r_{12}=r\cos\vartheta,\,r_{34}=r\sin\vartheta\end{aligned}$$ and color space $\mathbb{R}^3$ resp. $S^2$, $$\begin{aligned} n=\left(\begin{array}{c} \sin\beta\cos\alpha \\ \sin\beta\sin\alpha \\ \cos\beta \end{array}\right).\end{aligned}$$ The Hopf map (\[hopf\_map\]) is given by assigning $$\begin{aligned} \label{angles_hopf} \alpha_{\rm H}=\varphi_{12}-\varphi_{34},\qquad \beta_{\rm H}=2\vartheta=\arctan\frac{2r_{12}r_{34}}{r_{12}^2-r_{34}^2}\end{aligned}$$ while the perturbation (\[phi\_perturbed\_2\]) corresponds to a deformation of $\beta$, $$\begin{aligned} \label{angles_monopole} \alpha=\varphi_{12}-\varphi_{34},\qquad \beta=\arctan\frac{r^2\sin(2\vartheta)}{r^2\cos(2\vartheta)-R^2} =\arctan\frac{2r_{12}r_{34}} {r_{12}^2-r_{34}^2-R^2}\end{aligned}$$ It turns out that this Higgs field perfectly agrees with the one considered in [@brower:97b; @jahn:00], $\beta=\vartheta_++\vartheta_-,\: \tan\vartheta_{\pm}=r_{34}/(r_{12}\pm R)$. From Fig. \[process\] it is obvious that both Higgs fields (\[angles\_hopf\]) and (\[angles\_monopole\]) agree in their global ($r\rightarrow\infty$) properties. For the local properties of the new field it is important to notice that on the loop $C$ both angles $\alpha$ and $\beta$ are singular. In a vicinity perpendicular to the loop they take on all values $\alpha\in[0,2\pi],\:\beta\in[0,\pi]$. 
Put differently, the normalised field $n$ is homotopic to the hedgehog: on any two-sphere perpendicular to the loop it covers the whole two-sphere in color space exactly once. Viewed as a mapping $n:\:S^2\rightarrow S^2$ it has winding number one. From the ’t Hooft-Polyakov monopole it is well-known that such a Higgs field cannot be diagonalised[^6] smoothly [@arafune:75]. One may nevertheless diagonalise $n$ by a singular gauge transformation; in this way the Dirac monopole appears in the gauge field. The relevant gauge transformation is $$\begin{aligned} \label{gauge_transformation} g=e^{i\gamma\sigma_3/2}e^{i\beta\sigma_2/2}e^{i\alpha\sigma_3/2}\end{aligned}$$ The residual $U(1)$-freedom (of rotations around the third color axis) is encoded in $\gamma$. We choose $$\begin{aligned} \gamma=\varphi_{12}+\varphi_{34}\end{aligned}$$ for which $g$ is singular on the disc $D:\:\:r_{12}\leq R,\:r_{34}=0$, spanned by the loop $C$. The gauge transformation (\[gauge\_transformation\]) induces an inhomogeneous term, the Abelian part of which is $$\begin{aligned} a\equiv(i\Omega\d\Omega^{-1})_3= \d\gamma+\cos\beta\d\alpha\end{aligned}$$ One can easily compute the Abelian field strength, $$\begin{aligned} f&\equiv&\d a=f^{\rm reg}+f^{\rm sing},\\ f^{\rm reg}&=&-\sin\beta\,\d\beta\wedge\d\alpha,\\ f^{\rm sing}&=&(1-\cos\beta)\,\d^2\varphi_{34} =4\pi\theta(R-r_{12})\delta(r_{34})\d x_3\wedge\d x_4\end{aligned}$$ The regular part $f^{\rm reg}$ is just the Coulombic magnetic field, while the singular part $f^{\rm sing}$ is the set of all Dirac strings filling the disc $D$, called the Dirac sheet [@dirac:48]. We use the latter to identify the monopoles as endpoints of Dirac strings. The magnetic current is $$\begin{aligned} k\equiv *\d f^{\rm sing}=4\pi\delta(r_{12}-R)\delta(r_{34})\d\varphi_{12}\end{aligned}$$ The angle $\alpha$ not only depends on $\varphi_{34}$ (which gives the hedgehog) but also on the world-line coordinate $\varphi_{12}$.
This means that the monopole is ‘twisted’ [@taubes:84a] once: the Higgs field $n$ rotates once around the third axis in isospace while moving along the loop. Our Higgs field is such that the complicated relation in [@jahn:00] reduces to $$\begin{aligned} \mbox{Hopf invariant}=\mbox{magnetic charge}\times\mbox{twist},\qquad 1=1\times 1.\end{aligned}$$ Gradually ‘switching off’ the perturbation $\lambda\delta\phi$, one can see that [*a Hopf defect emerges when a twisted monopole loop is shrunk to vanishing radius*]{}. The Dirac sheet $D$ degenerates to a point, too. Notice that the two-spheres used to measure the magnetic charge are incapable of detecting the Hopf defect. Instead one has to pass to three-spheres surrounding a point in four dimensions. Conclusions =========== We have investigated the Laplacian Abelian Gauge in the vicinity of the ’t Hooft instanton by means of Schrödinger perturbation theory. While the spherically symmetric instanton is related to a pointlike defect, a generic (constant) perturbation induces a monopole loop with unit charge and twist (cf. (\[angles\_monopole\]) and its interpretation). Together these topological quantities give rise to the Hopf invariant, which reflects the instanton number of the background gauge field. Extending the correlation between the defect manifold and the instanton core in the unperturbed case, the instanton density of the new background is supposed to be localised on a circle [@garciaperez:00; @hansen:xx]. Our result implies that for isolated, highly symmetric instantons the monopole loops are very small. Such small loops could be missed in lattice simulations of the Abelian and monopole string tension. Since isolated instantons are not sufficient for confinement, the defects induced by them cannot be the whole story. Bringing more instantons and anti-instantons close to each other, monopole loops start to spread out [@brower:97b; @reinhardt:00].
Percolation is achieved if there are monopole loops extending over the whole space-time as verified in lattice simulations [@bornyakov:92; @ivanenko:93]. Further analytic approaches are necessary to better understand this mechanism. [^1]: We will restrict ourselves to the gauge group $SU(2)$, where the maximal Abelian subgroup is simply $U(1)$ embedded in terms of diagonal matrices. [^2]: Such a strong statement does not hold for the four-torus as is plausible from the existence of [*Abelian instantons*]{} [@thooft:81c]. [^3]: being the projection in the Hopf bundle describing the Dirac monopole [@ryder:80] [^4]: in agreement with lattice simulations [@deforcrand:xx] [^5]: in the bundle language the same transition functions [^6]: or brought to ‘unitary gauge’
Low dimensional (1D or 2D) spin S=$\frac{1}{2}$ materials are critical systems displaying a variety of phenomena in crystals. These range from spin gap behavior in CaV$_4$O$_9$[@taniguchi] to anomalous spin physics in the one-dimensional spin chains and ladders.[@dagrice] Many of these characteristics depend crucially on specific structural or chemical bonding features.[@wepcavo] The magnetic coupling is particularly sensitive, with simple nearest neighbor (nn) exchange varying from large and antiferromagnetic (AF) to small and ferromagnetic (FM) when the metal-oxygen-metal angle $\phi$ varies from 180$^{\circ}$ to 90$^{\circ}$. Weak intrachain couplings and competing exchange couplings can lead to frustration, magnetic ordering, spin gap behavior or spin-Peierls phase formation. Copper oxides play a very important role due to the various possibilities of linking their fundamental unit, an (often slightly distorted) CuO$_4$ square. Three general types of arrangement can be found in these systems, classified in terms of the oxygen squares sharing corners (as in the high T$_c$ planar compounds and Sr$_2$CuO$_3$), edges (as in CuGeO$_3$, La$_6$Ca$_8$Cu$_{24}$O$_{41}$, and Li$_2$CuO$_2$) [@edgeshare], or both (as in SrCuO$_2$). In edge-sharing CuGeO$_3$, for example, there is moderately strong AF nn coupling (J$\approx$150 K),[@cugeo3] but a spin-Peierls transition occurs only at 14 K. The typical example of corner-sharing chains is Sr$_2$CuO$_3$, where the system orders antiferromagnetically[@sr2cuo3] with a low transition temperature of 5 K and a small induced magnetic moment (0.06 $\mu_B$) in spite of very large interactions between ions (J$\approx$2200 K). Despite considerable progress, a clear understanding of the magnetic behavior of the Cu$^{2+}$ ion in several regimes is still lacking.
Here we report a new aspect of spin behavior in edge-sharing systems revealed by spin-polarized local density approximation (LDA) studies of a 1D S=$\frac{1}{2}$ system Li$_2$CuO$_2$. In this compound neutron scattering[@structure] indicates three dimensional AF ordering at 9 K arising from the antialignment of FM chains. The experimental moment of 0.9 $\mu_B$ per cell was attributed completely to the Cu ions. Based upon experience in the undoped two-dimensional cuprates ([*viz.*]{} La$_2$CuO$_4$), where LDA is unable to obtain any moment whatsoever on the Cu ion,[@rmp] it might seem that LDA is unlikely to produce a magnetic Cu ion or an insulating system. Due to this problem with corner-sharing Cu-O planes, very few spin-polarized calculations on cuprate compounds have been reported. However, we find that LDA predicts Cu in Li$_2$CuO$_2$ to be robustly magnetic, allowing us to obtain the relative energies and electronic properties of FM chains with both AF and FM coupling between chains, and of the AF chain, with the unpolarized (PM) system taken as reference. The oxygen ions play a fundamental role in the band dispersion and the magnetism, and carry a magnetic moment approaching 0.2 $\mu_B$ per atom, the largest O moment yet reported. Previous reports of O moments lie in the 0.02-0.10 $\mu_B$ range.[@LSMO] Li$_2$CuO$_2$ is orthorhombic and belongs to the category of edge-sharing compounds, with one-dimensional CuO$_2$ ribbons carrying the Cu chains along the $b$ axis, arrayed in a body-centered fashion in the $a-c$ plane (Fig. \[Fig1\]). The Cu-O-Cu angle $\phi$ = 94$^{\circ}$ is intermediate between those of the two other edge-sharing compounds with very different characteristics: CuGeO$_3$ (spin-Peierls, $\phi = 99^{\circ}$) and La$_6$Ca$_8$Cu$_{24}$O$_{41}$ (FM, $\phi = 91^{\circ}$).
Distances between two Cu ions, 2.86 Å along the chain, 3.65 Å in the $a-b$ plane and 5.23 Å in the diagonal direction, do not reflect the relative coupling strengths, as we explain below. Calculations were done using the linearized augmented plane wave (LAPW) method,[@wien] which makes no shape approximations for the density or potential. The sphere radii used in fixing the LAPW basis were chosen to be 2.00 a.u. for Cu and Li and 1.65 for O. Local orbitals (Cu $3p$; Li $1s$; O $2s$) were added to the basis set for extra flexibility and to allow semicore states to be treated within the same energy window as the band states. The plane wave cutoff corresponded to an energy of 23.5 Ry, resulting in 640 LAPWs per formula unit. Self-consistency was carried out on k-point meshes of 512 points in the Brillouin zone for the compounds which need a single unit cell calculation, and 256 points when we considered an AF arrangement for the chains. The paramagnetic system has an odd number of electrons per unit cell and thus is metallic. There is a single band in the range of 1 eV around the Fermi level split off from the rest of the $p-d$ band complex, similar to what was found in the other CuO$_2$ edge-sharing compounds CuGeO$_3$ [@matth] and NaCuO$_2$.[@djs] This isolated band shows up as two bands in Fig. \[Fig2\], where a doubled cell with two chains has been used for comparison with the AF bands (see below). The analysis of the partial density of states shows that only Cu d$_{yz}$-O $p_{\sigma}$ states (the $p_y \pm p_z$ combination directed towards the Cu site) are present in this band. The geometry of this CuO$_2$ edge-sharing chain leads to a simple description of the important band, which we expect (and find) to involve an antibonding combination of Cu $d$ and O $p$ orbitals.
The atomic orbital basis in a primitive cell can be chosen as the Cu $d$ orbitals, the $\sigma$-type O $p_{\sigma}$ orbitals on each of the [*four*]{} neighboring O atoms which strongly overlap the $d_{yz}$ orbital, and half of the out-of-plane $p_x$ orbitals, which are non-bonding and lie well below E$_F$ (as do all $d$ orbitals except $d_{yz}$). The $p_{\pi}$ orbitals in the $y-z$ plane of the ribbon are $p_{\sigma}$ with respect to a neighboring Cu and belong to the next unit cell. The $d_{yz}$ and the four $p_{\sigma}$ orbitals can be decomposed into five hybridized combinations: one bonding combination ${\cal D}_{yz}$ and one antibonding combination ${\cal D}_{yz}^*$ of $d_{yz}$ with the combination of the four $p_{\sigma}$ orbitals of d$_{yz}$ symmetry, and three other $p_{\sigma}$-only combinations of lower symmetry. ${\cal D}_{yz}$ and ${\cal D}_{yz}^*$ are split strongly, leaving ${\cal D}_{yz}^*$ at the Fermi level and ${\cal D}_{yz}$ 5 eV below, as shown in the density of states plot in Fig. \[Fig3\]. The others lie around 4 eV below E$_F$ and are not of interest. The general behavior of the coupled $d_{yz}-p_{\sigma}$ cluster can be modelled with $\varepsilon_d=-1.5, \varepsilon_p=-4, (dp{\sigma})=\pm 1.15, (pp{\sigma})= 0.25$ (all in eV). For these parameters the ${\cal D}_{yz}^*$ density is 70% on the Cu and 30% on the four O ions. The active orbital ${\cal D}_{yz}^*$, shown (schematically) as $| {\cal D}_{yz}^* |^2$ on next nearest neighbors in Fig. \[Fig4\], is an effective $d_{yz}$-type orbital centered on each Cu ion but extending strongly to the neighboring O sites. Symmetry allows direct ${\cal D}{\cal D}\pi$ overlap, and therefore a hopping amplitude $t_{\pi}$ along the chain. Due to its parentage, however, it is clear that the main contribution to the overlap arises from the O ion region.
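These numbers can be checked by diagonalizing the five-orbital cluster directly. The sketch below is our own minimal reconstruction, not the authors' calculation; the phase convention that makes all four $d$-$p$ matrix elements equal, with a positive O-O hopping around the square, is an assumption chosen to reproduce the quoted symmetry sectors.

```python
import numpy as np

# Five-orbital cluster: Cu d_yz (index 0) plus four O p_sigma orbitals (1-4).
# Orbital phase convention is our assumption: the geometric +/- signs of
# (dp sigma) are absorbed so that all Cu-O elements are equal.
eps_d, eps_p = -1.5, -4.0     # on-site energies (eV)
dps, pps = 1.15, 0.25         # (dp sigma) and (pp sigma) hoppings (eV)

H = np.diag([eps_d, eps_p, eps_p, eps_p, eps_p])
for i in range(1, 5):
    H[0, i] = H[i, 0] = dps   # Cu-O hybridization
    j = 1 + (i % 4)           # cyclically adjacent O around the square
    H[i, j] = H[j, i] = pps   # O-O hopping

E, V = np.linalg.eigh(H)      # eigenvalues in ascending order
cu_weight = V[0, -1] ** 2     # Cu fraction of the topmost (antibonding) state
print(E[-1] - E[0], cu_weight)
```

With these inputs the top level sits essentially at E$_F$ with roughly 70% Cu and 30% O weight, its bonding partner about 5 eV below, and the three remaining $p_{\sigma}$-only combinations near $-4$ to $-4.5$ eV, matching the description above.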
If the O quadrilateral were perfectly square ($\phi=90^{\circ}$) the O $p_{\sigma}$ orbitals directed toward the two neighboring Cu ions would be precisely $p_y \pm p_z$. These combinations are orthogonal, so $t_{\pi}$ reduces to direct d-d overlap and will be very small. When the Cu-O-Cu angle is not exactly 90$^{\circ}$, the $p_{\sigma}$ orbitals are no longer orthogonal and the overlap (and $t_{\pi}$) increases. The effective Hamiltonian therefore reduces to a single orbital (${\cal D}_{yz}^*$) per cell. The dispersion of the ${\cal D}_{yz}^*$ band cannot be fit simply by nn hopping $t_{\pi}=t_1$ along the chain and $t^{\prime}_1$ between neighboring chains (in roughly the $\hat x \pm \hat z$ directions), but requires as well both next nearest neighbor (nnn) hopping terms $t_2$ and $t^{\prime}_2$. Consideration of the ${\cal D}_{yz}^*$ orbitals on second neighbors indicates why this is so: Cu-O-O-Cu coupling along the ribbon becomes important because of O-O coupling and because the nn hopping is so small, and similarly for interchain hopping along the diagonal. The manner of second neighbor overlap along the chain is clear in Fig. \[Fig4\]. The values $$\begin{aligned} nn: ~~~~ t_1&=&-63 ~meV ~ , ~~ t_1^{\prime}=-16 ~meV, \\ nnn:~~ t_2&=&-94 ~meV ~ , ~~ t_2^{\prime}=~~44 ~meV, \end{aligned}$$ (note that the second-neighbor values are larger than the nearest-neighbor values) provide an excellent fit to the dispersion, except near the ($\pi,\pi,k_z$) line (not shown) where some additional interaction would be required to obtain a perfect fit. This emergence of an effective single-band system with a simple, strongly hybridized orbital is one of the cleanest in Cu-O systems.
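For orientation, the fitted hoppings can be inserted into a minimal one-dimensional parametrization of the chain dispersion, and inverted through the superexchange expression J = 4$t^2$/U against the measured bound $\vert$J$\vert \le$ 0.4 meV discussed in the text. The sketch below is ours; the simple two-cosine form and the neglect of the interchain terms $t_1^{\prime}$, $t_2^{\prime}$ are assumptions, not the full three-dimensional fit.

```python
import numpy as np

# Intrachain dispersion keeping only the chain hoppings t1 (nn) and t2 (nnn);
# the two-cosine form is our assumption, and the interchain terms are omitted.
t1, t2 = -0.063, -0.094        # eV, fitted values quoted in the text

k = np.linspace(-np.pi, np.pi, 201)               # k*b across the 1D zone
eps = 2 * t1 * np.cos(k) + 2 * t2 * np.cos(2 * k)
# Because |t2| > |t1|, the band is far from a simple cosine: its maximum
# sits at an interior k rather than at the zone center or boundary.

# Inverting J = 4 t^2 / U with the measured bound |J| <= 0.4 meV gives the
# unphysically large on-site repulsions noted in the text (tens of eV).
J_max = 0.4e-3                 # eV
U1 = 4 * t1**2 / J_max         # estimate from t1
U2 = 4 * t2**2 / J_max         # estimate from t2
print(eps.min(), eps.max(), U1, U2)
```

With these numbers U comes out between roughly 40 and 90 eV, an order of magnitude above any physical on-site repulsion, in line with the conclusion that the simple superexchange estimate fails here.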
This emergence implies that the (ferro- or antiferro-) magnetism of this compound should be interpreted in terms of the extended ${\cal D}_{yz}^*$ orbital instead of the Cu $d_{yz}$ orbital; likewise, correlation effects must involve an on-site repulsion U$_{\cal D}$ rather than U$_d$ and therefore should be smaller than might have been anticipated. Similarly, the exchange splitting of this band will be significantly different from that of the other $d$ orbitals. The spin wave dispersion curves have been measured by Boehm [*et al.*]{}[@Jli2cuo2] Their fit requires important nnn exchange couplings, consistent with our finding that nnn hopping is essential to account for the dispersion. Although our hopping parameters are small, the exchange constants they obtain are smaller still, $\vert$J$\vert \le$0.4 meV. Application of the superexchange expression J=4$t^2$/U leads to unphysically large estimates of U$\approx$70-100 eV, indicating that the exchange coupling requires more careful consideration. Relative to the paramagnetic system, the FM state gains 70 meV/f.u. AF chains lead to a slight further lowering of 1.5 meV/f.u., but the phase with oppositely aligned FM chains is lower still (consistent with observation), 80 meV/f.u. below the unpolarized state. The densities of states for these phases are shown in Fig. \[Fig2\], revealing an insulating gap of 0.32 eV. As usual, correlation effects will widen the gap, but we are not aware of any published data. The moment of 0.92 $\mu_B$ per CuO$_2$ unit is roughly 60% on Cu and 40% on O ions. This transfer of magnetic moment from a transition metal ion to a ligand ion (almost 0.2 $\mu_B$ on each O) is to our knowledge the strongest yet reported in a transition metal oxide compound.[@k2ircl6] Previous reports of moments lie in the 0.02-0.10 $\mu_B$ range.[@LSMO] To illustrate the importance of the O sites, we display the exchange potential $V_{\uparrow}-V_{\downarrow}$ for the ferromagnetic and antiferromagnetic chains in Fig. \[Fig5\].
There is a clear similarity, especially for the AF chain, to the ${\cal D}_{yz}^*$ density in Fig. \[Fig4\]. Two other features should be noted: the exchange potential on O is comparable to that on Cu, and the exchange potential is of predominantly one sign over the entire ${\cal D}_{yz}^*$ orbital for both FM and AF ordered chains. Unlike the strong dependence of the O moment on the magnetic order, the size of the moment on the Cu ion itself (0.50-0.55 $\mu_B$) is essentially independent of the overall magnetic ordering. Antialignment of FM chains leads to the insulating band structure shown in Fig. 2(c). As expected, the two bands are described well by the eigenvalues of the system $$\begin{aligned} \left( \begin{array}{cc} t_{1,1}(k) + \frac{1}{2}\Delta & t_{1,2}(k) \\ t_{2,1}(k) & t_{2,2}(k) - \frac{1}{2}\Delta \\ \end{array} \right),\end{aligned}$$ where $t_{1,1}=t_{2,2}$ contains all intrachain hopping, $t_{1,2}= t_{2,1}$ contains the interchain hopping given above, and $\Delta$=0.8 eV is the exchange splitting evident in Fig. 2(c). Since $\Delta$ is much greater than all of the hopping amplitudes and in fact similar to the PM bandwidth, the exchange coupling between the antialigned chains results in very narrow bands split by $\Delta$. Although the chains are nominally fully polarized, the coupling reduces the net moment to $0.92 \mu_B$/f.u. This total moment is in excellent agreement with neutron scattering results,[@structure] although the moment there was attributed solely to Cu. This is a remarkably large value for a quasi-1D S=$\frac{1}{2}$ chain, where quantum fluctuations should be large. The strong nnn coupling that we have uncovered accounts for the observation qualitatively: the coupling between the ${\cal D}^*_{yz}$ effective orbitals, and therefore between the spins, is really three dimensional, hence quantum fluctuation effects are vastly reduced. Recently two similar compounds with one-dimensional chains have been reported.
Both Sr$_{0.73}$CuO$_2$ and Ca$_{0.85}$CuO$_2$ have CuO$_2$ ribbons and order magnetically.[@srcuo2] Comparison with Li$_2$CuO$_2$ is difficult, however, because the strongly differing doping level and the structural disorder can lead to large changes in the magnetic behavior. Recent work on Ca$_{2+x}$Y$_{2-x}$Cu$_5$O$_{10}$[@hayashi] is suggestive, since susceptibility measurements are interpreted as indicating that each extra doped hole creates a (non-magnetic) Zhang-Rice singlet. Our results for Li$_2$CuO$_2$ suggest that for the edge-sharing chains each extra hole occupies a ${\cal D}_{yz}^*$ orbital and therefore is weakly coupled to neighboring Cu ions. In this case the Cu ion and the four neighboring p$_\sigma$ orbitals would be non-magnetic, rather than having a Cu spin compensated by neighboring O spins. Further experiments will be necessary to test our picture. To summarize, we have found that LDA provides a consistent picture of the magnetic properties and insulating character of the quasi-one-dimensional antiferromagnet Li$_2$CuO$_2$. Due to the formation of a strongly hybridized, and energetically isolated, combination of $d_{yz}$ and $p_{\sigma}$ orbitals, a large moment is transferred to the O ions. A simple single-band system results, but one in which second-neighbor coupling exceeds nearest-neighbor coupling and the electronic and magnetic behavior is three dimensional. We acknowledge stimulating conversations with G. Shirane and a preprint of Ref. from A. Zheludev. This research was supported by Office of Naval Research Grant No. N00014-97-1-0956. Corresponding author: Warren E. Pickett, pickett@physics.ucdavis.edu. S. Taniguchi [*et al.*]{}, J. Phys. Soc. Jpn. [**64**]{}, 2758 (1995). E. Dagotto and T. M. Rice, Science [**271**]{}, 618 (1996). W. E. Pickett, Phys. Rev. Lett. [**79**]{}, 1746 (1997). Y. Mizuno [*et al.*]{}, Phys. Rev. B[**57**]{}, 5326 (1998). J. Riera and A. Dobry, Phys. Rev.
B[**51**]{}, 16098 (1995); G. Castilla, S. Chakravarty, and V. J. Emery, Phys. Rev. Lett. [**75**]{}, 1823 (1995). K. M. Kojima [*et al.*]{}, Phys. Rev. Lett. [**78**]{}, 1787 (1997). F. Sapina [*et al.*]{}, Solid State Commun. [**74**]{}, 779 (1990). W. E. Pickett, Rev. Mod. Phys. [**61**]{}, 433 (1989). J. Pierre, B. Gillon, L. Pinsard, and A. Revcolevschi, Europhys. Lett. [**42**]{}, 85 (1998) report $\approx$ 0.1 $\mu_B$ in La$_{0.8}$Sr$_{0.2}$MnO$_3$. W. E. Pickett and D. J. Singh, Phys. Rev. B[**53**]{}, 146 (1996) report a very similar calculated value for La$_{0.67}$Ca$_{0.33}$MnO$_3$. M. Matsuda [*et al.*]{}, Phys. Rev. B[**57**]{}, 11467 (1998) report 0.02 $\mu_B$ on O on FM chains in La$_5$Ca$_9$Cu$_{24}$O$_{41}$. P. Blaha, K. Schwarz, and J. Luitz, WIEN97, Vienna University of Technology, 1997. Improved and updated version of the original copyrighted WIEN code, which was published by P. Blaha, K. Schwarz, P. Sorantin, and S. B. Trickey, Comput. Phys. Commun. [**59**]{}, 399 (1990). L. F. Mattheiss, Phys. Rev. B[**49**]{}, 14050 (1994). D. J. Singh, Phys. Rev. B[**49**]{}, 1580 (1994). M. Boehm [*et al.*]{}, “Competing exchange interactions in Li$_2$CuO$_2$,” preprint. A moment of 0.15 $\mu_B$ on the apical Cl ions in K$_2$IrCl$_6$ has been reported by J. W. Lynn, G. Shirane, and M. Blume, Phys. Rev. Lett. [**37**]{}, 154 (1976). A. Shengelaya [*et al.*]{}, Phys. Rev. Lett. [**80**]{}, 3626 (1998); J. Karpinski [*et al.*]{}, Physica C [**274**]{}, 99 (1997). J. Dolinsek [*et al.*]{}, Phys. Rev. B[**57**]{}, 7798 (1998). A. Hayashi, B. Batlogg, and R. J. Cava, preprint.
--- abstract: 'Considerable recent theoretical and experimental efforts have been devoted to the study of quantum criticality and novel phases of antiferromagnetic heavy-fermion metals. In particular, quantum phase transitions have been discovered in heavy-fermion compounds with geometrical frustration. These developments have motivated us to study the competition between the RKKY and Kondo interactions on the Shastry-Sutherland lattice. We determine the zero-temperature phase diagram as a function of magnetic frustration and Kondo coupling within a slave-fermion approach. Pertinent phases include the valence bond solid and heavy Fermi liquid. In the presence of antiferromagnetic order, our zero-temperature phase diagram is remarkably similar to the global phase diagram proposed earlier based on general grounds. We discuss the implications of our results for the experiments on Yb$_2$Pt$_2$Pb and related compounds.' author: - 'J. H. Pixley' - Rong Yu - Qimiao Si title: | Quantum phases of the Shastry-Sutherland Kondo lattice:\ implications for the global phase diagram of heavy fermion metals --- Geometrical frustration in insulating quantum antiferromagnets can lead to a variety of quantum phases, such as valence bond solids (VBS) and quantum spin liquids [@Balents]. Recent studies have discovered intriguing properties in a growing list of metallic systems with local magnetic moments residing on frustrated lattices. In these heavy fermion compounds, the interplay of Kondo screening and magnetic frustration may give rise to entirely new ground states and quantum phase transitions [@Si]. For example, the compounds Yb$_2$Pt$_2$Pb [@Kim] and CePd$_{1-x}$Ni$_x$Al [@Fritsch] have spin-$1/2$ local moments located on the Shastry-Sutherland and Kagome lattices respectively. Likewise, both YbAgGe [@Morosan] and YbAl$_3$C$_3$ [@Khalyavin] feature triangular lattices. 
All these compounds show an enhanced specific heat coefficient, implying a large effective mass and the presence of the Kondo effect. General theoretical considerations of the competition between Kondo and RKKY interactions have led to a proposal for the global phase diagram of heavy fermion metals as a function of frustration or quantum fluctuations ($G$), and the Kondo coupling ($J_K$) [@Si2; @Coleman]; see Fig. \[fig:model\](a). This phase diagram incorporates not only antiferromagnetic (AF) order, but also the physics of Kondo destruction [@LCP; @JPCM; @Senthil]. From the Kondo-destroyed antiferromagnetic phase (AF$_S$), the transition to the heavy Fermi liquid phase (P$_L$) could take place directly (type I), via the spin-density-wave phase (AF$_L$) (type II), or through the Kondo-destroyed paramagnetic phase (P$_S$) (type III). The heavy fermion compounds CeCu$_{6-x}$Au$_x$, YbRh$_2$Si$_2$ and CeRhIn$_5$ have shown strong evidence for realizing the type I transition [@Schroder; @Friedeman; @Shishido; @Park06]. Ce$_3$Pd$_{20}$Si$_6$, which is cubic and therefore would have a smaller $G$, has properties consistent with a type II transition [@Custers12]. Geometrical frustration is expected to enhance the quantum fluctuation parameter $G$, raising the prospect of realizing a type III transition. There is a recent surge of heavy-fermion materials that appear to be suitable for exploring this large-$G$ portion of the global phase diagram. In particular, Yb$_2$Pt$_2$Pb and its homologues such as Ce$_2$Pt$_2$Pb [@Kim], featuring the geometrically-frustrated Shastry-Sutherland lattice, may involve an intermediate VBS P$_S$ phase. ![ (color online). (a) Proposed global phase diagram of heavy fermion metals. Here, AF$_S$ and AF$_L$ refer to antiferromagnetic states without or with static Kondo screening. P$_L$ is the paramagnetic heavy Fermi liquid, and P$_S$ refers to a paramagnetic phase without static Kondo screening. Adapted from Ref. [@Si2].
We have sketched the proposed trajectory of Yb$_2$Pt$_2$Pb under magnetic field tuning. (b) Shastry-Sutherland lattice, denoting the Heisenberg exchange couplings $J_1$ on all the horizontal and vertical bonds, and $J_2$ along the diagonals. The unit cell is the dashed square, containing four sites $A,B,C,D$. (c) The bond singlet parameters. []{data-label="fig:model"}](fig1.eps){height="2.0in"} In this work, we study the effect of frustration on the Kondo-Heisenberg model by considering it on the Shastry-Sutherland lattice (SSL) [@Shastry], as illustrated in Fig. \[fig:model\](b). The Hamiltonian is defined as $$H = \sum_{(i,j),\sigma}t_{ij} ( c_{i\sigma}^{\dag}c_{j\sigma} + \mathrm{h.c.}) + J_K \sum_i {\bf S}_i\cdot{\bf s}_i^c + \sum_{(i,j)}J_{ij}{\bf S}_i\cdot{\bf S}_j \label{eqn:ham}$$ where $(i,j)$ denote the nearest neighbors (NN) and next nearest neighbors (NNN) on the SSL as shown in Fig. \[fig:model\](b). The NN and NNN tight binding parameters for the conduction electrons, denoted by $c_{i\sigma}$, are $t_1$ and $t_2$, respectively. The spins of the conduction electrons are ${\bf s}_i^c = c_{i\alpha}^{\dag}(\bm{\sigma}_{\alpha\beta}/2)c_{i\beta}$ at site $i$, where $\bm{\sigma}_{\alpha\beta}$ are the Pauli spin matrices. They are coupled to spin-$1/2$ local moments, ${\bf S}_i$, through an antiferromagnetic Kondo coupling $J_K$. We have explicitly included the RKKY interactions, incorporating $J_1$ and $J_2$, the NN and NNN terms respectively \[Fig. \[fig:model\](b)\]. The degree of frustration is measured by the ratio $G=J_2/J_1$. We represent the local moments using fermionic spinons [@Auerbach], $f_{i\sigma}$ such that ${\bf S}_i = f_{i\alpha}^{\dag} (\bm{\sigma}_{\alpha\beta}/2) f_{i\beta}$ with a constraint $\sum_{\sigma}f_{i\sigma}^{\dag}f_{i\sigma}=1$ at each lattice site. The spin-$1/2$ Heisenberg model on the SSL was extensively studied (e.g., Refs. [@Shastry; @Chung; @Lauchli; @Miyahara]). 
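To make the lattice geometry entering the Hamiltonian concrete, the two bond sets (NN bonds carrying $J_1$, diagonal dimers carrying $J_2$) can be generated programmatically. The sketch below is our own illustration; the choice of which plaquettes carry a diagonal, and with which orientation, is one convention among the symmetry-equivalent ones.

```python
# Shastry-Sutherland connectivity on an L x L square lattice (periodic, L even):
# J1 on all nearest-neighbor bonds, J2 on one diagonal of every other plaquette,
# with alternating orientation. The plaquette convention below is one choice.
def ssl_bonds(L):
    nn, diag = [], []
    for x in range(L):
        for y in range(L):
            nn.append(((x, y), ((x + 1) % L, y)))
            nn.append(((x, y), (x, (y + 1) % L)))
            if x % 2 == 0 and y % 2 == 0:   # "/" diagonal in this plaquette
                diag.append((((x + 1) % L, y), (x, (y + 1) % L)))
            if x % 2 == 1 and y % 2 == 1:   # "\" diagonal in this plaquette
                diag.append(((x, y), ((x + 1) % L, (y + 1) % L)))
    return nn, diag

nn, diag = ssl_bonds(4)
print(len(nn), len(diag))   # 2*L^2 nn bonds, L^2/2 diagonal dimers
```

The defining feature of the SSL is visible in the output: every site belongs to exactly one $J_2$ dimer (the orthogonal-dimer covering), while the $J_1$ bonds form the full square-lattice network.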
For $J_2/J_1>2$, the model possesses an exact VBS ground state, where singlets form across each disconnected diagonal bond [@Shastry]. For small $J_2/J_1$, in contrast, the model has an AF ground state [@Chung; @Lauchli]. The transition between these two states has not been completely determined [@Chung; @Lauchli]. The model in the presence of Kondo coupling was studied in some detail by Bernhard [*et al.*]{} [@Bernhard], and was also discussed qualitatively [@Coleman]. As we will discuss, our work here reports the first complete analysis of the relevant phases, and this is essential both for realizing the global phase diagram and for shedding light on the experimentally observed partial-Kondo-screening (PKS) phase. *Large-$N$ limit*: Generalizing the spin symmetry from $SU(2)$ to $SU(N)$, we arrive at $H = \sum_{(i,j),\sigma}t_{ij} (c_{i\sigma}^{\dag}c_{j\sigma} + \mathrm{h.c.}) - (J_K/N) \sum_i :B_i^{\dag}B_i: -\sum_{(i,j)}(J_{ij}/N):D_{ij}^{\dag}D_{ij}:$ where $B_i = \sum_{\sigma}c_{i\sigma}^{\dag}f_{i\sigma}$ and $D_{ij} = \sum_{\sigma}f_{i\sigma}^{\dag}f_{j\sigma}$. The sum now runs over $\sigma = 1,\dots,N$, and the constraint becomes $\sum_{\sigma}f_{i\sigma}^{\dag}f_{i\sigma}=N/2$. We have also used $:\dots:$ to denote normal ordering. The large-$N$ mean field Hamiltonian can be expressed as: $$\begin{aligned} H_{MF} &=& E-\sum_{(i,j),\sigma}( Q_{ij}^*f^{\dag}_{i\sigma}f_{j\sigma} + \mathrm{h.c.} ) +\sum_{i,\sigma}\lambda_if^{\dag}_{i\sigma}f_{i\sigma} \nonumber \\ &+&\sum_{(i,j),\sigma}t_{ij} (c^{\dag}_{i\sigma}c_{j\sigma} + \mathrm{h.c.}) -\sum_{i,\sigma}(b_i^*c^{\dag}_{i\sigma} f_{i\sigma} + \mathrm{h.c.}). \nonumber \\ \label{eqn:Hmf}\end{aligned}$$ We have used a Hubbard-Stratonovich transformation decoupling $B_i$ and $D_{ij}$ in the Kondo singlet and resonating valence bond (RVB) channels respectively [@Senthil; @Coleman2], and the constraint is enforced by $\lambda_i$. The constant term is $E/N=\sum_i\left(|b_i|^2/J_K-\lambda_i/2\right)+ \sum_{(i,j)}|Q_{ij}|^2/J_{ij}$.
The Kondo parameter $N b_i=J_K\langle B_i \rangle$ can be taken to be real by absorbing its phase into the constraint field $\lambda_i$ [@Read], whereas the RVB parameters $N Q_{ij}=J_{ij}\langle D_{ij} \rangle$ are in general complex. We solve Eq. (\[eqn:Hmf\]) by using a four-site unit cell, where each site is labeled by $i\rightarrow({\bf r},X)$, with $X=A,B,C,D$ marking the sublattice \[see Fig. \[fig:model\](b)\], and ${\bf r}$ specifying a unit cell. We introduce Fourier transforms per sublattice [@Dodds] as $c_{{\bf r} X \sigma} = 1/\sqrt{N_u}\sum_{{\bf k}}e^{-i{\bf k}\cdot({\bf r}+\bm{\delta}_X)}c_{{\bf k} X \sigma}$, where $\bm{\delta}_X$ points to each sublattice $X$ from sublattice $A$. Keeping the full generality of the four-site unit cell, we introduce sublattice dependent Kondo parameters and constraint fields $b_X,\lambda_X$, and use ten complex RVB parameters $Q_{ij}$ as shown in Fig. \[fig:model\](c). These parameters are determined by solving the saddle-point equations self-consistently (see supplementary material [@supp]). We consider the metallic case $0<n_c<1$, where $n_c=\frac{1}{4N_u}\sum_{i,\sigma} \langle c^\dagger_{i\sigma} c_{i\sigma}\rangle$ is the filling of the conduction band. The zero temperature phase diagram is shown in Fig. \[fig:phase-diagram\]. Without loss of generality, we have chosen $t_1/t_2=1.0$ and $n_c=0.5$ for Figs. \[fig:phase-diagram\] and \[fig:b-structure\]. For small Kondo coupling and a large $J_2/J_1$ ratio, a VBS ground state arises for which only $Q_{x+y}=Q_{x-y}$ are nonzero. The singlet bonds are the same as in the pure Heisenberg model on the Shastry-Sutherland lattice at large $J_2/J_1$, and we label it as SSL-VBS. This solution does not break any symmetry of the SSL. Keeping $J_K/t_1$ small and decreasing $J_2/J_1$, we find a first order transition at $J_2/J_1=1$ from the SSL-VBS to a plaquette VBS (P-VBS) ground state where only $Q_{x2}=Q_{x4}=Q_{y1}=Q_{y2}$ are nonzero.
The P-VBS ground state breaks a reflection symmetry about either of the diagonal bonds in the SSL. It is degenerate with the conventional VBS on the square lattice with only $Q_{x1}=Q_{x4}$ being nonzero. ![(color online). Large-$N$ phase diagram as a function of frustration ($J_2/J_1$) and Kondo coupling ($J_K/t_1$), for a metallic filling $n_c=0.5$. The phases are described in the main text. The solid lines represent first-order transitions, and the dashed lines surrounding the grey area locate the boundaries of the intermediate phases that exhibit partial Kondo screening (PKS).[]{data-label="fig:phase-diagram"}](fig2.eps){height="2.in"} For a large Kondo coupling we find a heavy Fermi liquid (HFL) ground state, which has a nonzero Kondo parameter $b_A=b_B=b_C=b_D$. The singlet bond parameters are also nonzero: $Q_{xi}=Q_{yi}$ for $i=1-4$, and $Q_{x+y}=Q_{x-y}$. We also obtain $\lambda_A=\lambda_B=\lambda_C=\lambda_D$, so the solution does not break any symmetry of the SSL. Here, we find that each $Q_{ij}$ acquires a finite phase, $Q_{ij}=|Q_{ij}|e^{i\phi_{ij}}$. Correspondingly, we define a gauge independent flux through the triangular and square plaquettes as $\Phi_{\triangle}=\sum_{\triangle} \phi_{ij}(\mathrm{mod}\,\,2\pi)$ and $\Phi_{\square}=\sum_{\square} \phi_{ij}(\mathrm{mod}\,\,2\pi)$, respectively, where the summation is over the bonds around a plaquette. For the range of fillings $0<n_c \lesssim 0.75$, we find $\Phi_{\triangle}=\pi$ and $\Phi_{\square}=0$, whereas for $0.75\lesssim n_c<1$ we obtain $\Phi_{\triangle}=0$ and $\Phi_{\square}=0$. The finite flux through each triangular plaquette is a consequence of the spinons acquiring a finite kinetic energy from their hybridization with the conduction-electron band; we can therefore consider this as a hybridization induced flux phase. However, even though the flux through each triangular plaquette is $\pi$, the total flux through each square plaquette is still zero (mod.
$2\pi$); the flux does not affect the electronic band structure in the HFL phase. We now turn to the transitions among the two VBS phases and the HFL phase. Restricting the solution to these three states, we obtain the phase boundary in Fig. \[fig:phase-diagram\] and the mean field parameters shown in Fig. \[fig:b-structure\](a). Unexpectedly, when considering the general solution we find a number of intermediate states that break the lattice symmetry, in the region shown as the grey shaded area in Fig. \[fig:phase-diagram\]. In some cases, for example in the intermediate region between the SSL-VBS phase and the HFL phase, we find a PKS state: some (half) of the moments in the unit cell are still locked into valence bonds, while the other spins are Kondo screened. This is discussed in detail in the supplementary material [@supp]. Tuning $t_1/t_2$ only affects the location of the phase boundary; a smaller ratio of $t_1/t_2$ makes the transition between each VBS phase and the HFL phase occur for smaller values of $J_K/t_1$. *Magnetism at $N=2$*: We now incorporate long range AF order into our approach. To do so we decouple the Heisenberg term into two distinct channels, but we no longer have access to the large-$N$ limit and are restricted to $N=2$. In keeping with the generalized procedure of Hubbard-Stratonovich decouplings [@NG.book], and similar to Ref. [@Senthil], we rewrite the Heisenberg term in Eq. (\[eqn:ham\]) as follows: $J_{ij}{\bf S}_i\cdot{\bf S}_j =xJ_{ij}{\bf S}_i\cdot{\bf S}_j+(1-x)J_{ij}{\bf S}_i\cdot{\bf S}_j$; the term proportional to $x$ is treated within the RVB decoupling described previously. The additional term is decoupled in terms of Néel order: $(1-x)J_{ij}{\bf S}_i\cdot{\bf S}_j \rightarrow (1-x)J_{ij}(2{\bf M}_i\cdot{\bf S}_j -{\bf M}_i\cdot{\bf M}_j)$, where ${\bf M}_i=\langle {\bf S}_i \rangle$. We consider the Néel ground state with an ordering wave vector ${\bf Q}=(\pi/a,\pi/a)$.
This AF order corresponds to ${\bf M}_A = {\bf M}_D = -{\bf M}_C =-{\bf M}_B={\bf M}$ within the four-site unit cell. In the absence of a Kondo coupling, $J_K=0$, the phase diagram of the Heisenberg model as a function of $x$ and $J_2/J_1$ is presented in the supplementary material [@supp]. The phase diagram at $J_K=0$ provides the physical basis for choosing the parameter $x$. Within our fermionic representation, classical AF order arises at $x=0$, corresponding to a fully ordered magnetic moment $|{\bf M}|=1/2$. The ground state wave function of the true quantum AF state is known to contain considerable RVB correlations [@Dagatto], suggesting a choice of $x>1/2$. Indeed, for the nearest neighbor Heisenberg model ($J_K=0$ and $J_2=0$) we find a quantum AF phase as a self-consistent solution for $x$ in the range $0.67 \leq x < 0.8125$, which has a lower free energy than the classical Néel state ($Q_{xi}=Q_{yi}=0$). This state is a local minimum of the free energy, and is taken as a candidate for the true Néel ground state. The quantum AF phase has finite RVB parameters $Q_{xi}=Q_{yi}$ for $i=1-4$, which reduce the ordered moment to $|{\bf M}|<1/2$. Incorporating fluctuations further will reduce the free energy even more, making the AF phase the true ground state in the limit $J_2/J_1\ll1$. Here we present the results for $x=0.7$. The phases and the overall profile of the phase diagram are not sensitive to the choice of $x$ in the range $0.67 \leq x < 0.8125$, as discussed in the supplementary material [@supp]. The resulting phase diagram is given in Fig. \[fig:M-pd\], for parameters $n_c=0.5$ and $t_1/t_2=1$. We have restricted the solutions to states that do not break any lattice symmetries. For small $J_K/t_1$ and tuning the ratio of $J_2/J_1$, we find a first order transition from the AF phase to the SSL-VBS phase.
For small $J_2/J_1$, and tuning the Kondo coupling, the AF phase has a continuous transition [@note] into a spin density wave (SDW) phase characterized by the onset of Kondo screening: $b_A=b_B=b_C=b_D$ increases continuously from zero with nonzero values of ${\bf M}$, $Q_{xi}=Q_{yi}$ for $i=1-4$ and $Q_{x+y}=Q_{x-y}$. Upon increasing $J_K$ further, there is a first order transition from the SDW phase into the HFL phase with ${\bf M}=0$. Our results demonstrate a rich interplay between Kondo and RKKY interactions. In addition to the AF order and its suppression, there is also the competition between the Kondo effect and VBS order in the magnetically-disordered region. In the notation of Fig. \[fig:model\](a), we associate Kondo hybridization ($b\ne 0$) with a large Fermi surface (subscript $L$) and Kondo destruction ($b =0$) with a small Fermi surface (subscript $S$). The phase diagram we have calculated, Fig. \[fig:M-pd\], represents a remarkable realization of the global phase diagram that had been advanced on qualitative considerations [@Si2; @Coleman]. It will be instructive to study Kondo lattice models in other geometrically frustrated cases, for example Kagome lattices (pertinent to CePd$_{1-x}$Ni$_x$Al [@Fritsch]) and triangular lattices (relevant to YbAgGe [@Morosan] and YbAl$_3$C$_3$ [@Khalyavin]), and explore the generality of the global phase diagram. Compared to those cases, the Kondo model on the SSL has the main advantage that the magnetically frustrated regime is accessible by a large-$N$ approach. Several remarks are in order. First, in the phase diagram of Fig. \[fig:M-pd\], we find a *line* of direct transitions from AF$_S$ to P$_L$. However, whether this is a line of transitions or a single point is sensitive to the model parameters in our approach and for $x=0.75$ we find the transition collapsing to a single point (see the supplementary material [@supp]). 
It is important to consider how further quantum fluctuations will affect the topology of the phase diagram in Fig. \[fig:M-pd\]. Recently, insights have been gained from calculations on a quantum impurity model incorporating local quantum fluctuations [@Nica]; within an extended dynamical mean field context [@LCP], the results of Ref. [@Nica] imply this direct transition to be a line in the phase diagram. Second, due to an even number of spins per unit cell, the spinon bands are either empty or completely full [@Coleman]. Hence the volume of the Fermi surface will not change when the system goes from the SSL-VBS to the HFL phase. Nonetheless, the topology of the Fermi surfaces reflects the incorporation ($L$) or absence ($S$) of the Kondo resonances in the Fermi volume and can be different for the two cases. We show the Fermi surfaces for $n_c=0.5$ in the supplementary material [@supp]. Third, it is instructive to compare our results to those of Ref. . Where there is overlap, the results of that work and ours are largely consistent. We are able to draw substantially new implications by studying the competition among all the phases pertinent to the global phase diagram, including the $AF_L$ phase. Furthermore, our work has also uncovered PKS phases. In a similar vein, we note that the Shastry-Sutherland Kondo lattice was also considered in Ref. , with a particular focus on possible superconducting pairings. The implications of our study for superconductivity are an intriguing issue, but are beyond the scope of the present work. ![Phase diagram of the Shastry-Sutherland Kondo lattice incorporating magnetic order for a metallic filling $n_c=0.5$. Thin (thick) lines represent first order (continuous [@note]) transitions.[]{data-label="fig:M-pd"}](fig4.eps){height="2.in"} The systematic nature of our results is important not only for generating insights into the global phase diagram, but also for drawing implications for experiments in heavy-fermion metals. 
Our phase diagram in Fig. \[fig:M-pd\] opens up a trajectory from the AF$_S$ to the HFL phase via a sequence of quantum phase transitions that passes through a VBS, $P_S$ phase. This result has implications for Yb$_2$Pt$_2$Pb, where experiments [@Kim] appear to have realized such a sequence of transitions \[see figure \[fig:model\](a)\]. In addition, we have provided evidence for intermediate, partially Kondo screened phases [*in metallic cases.*]{} This type of phase has also been discussed in a variational quantum Monte Carlo approach in Kondo insulator settings  [@Motome]. In this regard, it is intriguing that experiments on the metallic CePd$_{1-x}$Ni$_x$Al [@Fritsch] have suggested that the frustration in this material is not large enough to yield a spin liquid, but instead leads to a ground state where some of the magnetic moments form long range AF order, while the others are completely screened by the Kondo effect. In conclusion, we have studied the global phase diagram in the prototypical geometrically-frustrated Shastry-Sutherland Kondo lattice. Our work represents the first concrete calculation in which all four phases, AF$_S$, AF$_L$, P$_S$, and P$_L$ appear in a single zero-temperature phase diagram. Our results have elucidated the rich variety of quantum phases and their transitions in heavy-fermion metals, and provide new insights into the puzzling experimental observations recently made in geometrically frustrated heavy fermion metals. *Acknowledgements*: We would like to acknowledge useful discussions with D. T. Adroja, M. Aronson, C. H. Chung, P. Coleman, S. Kirchner, A. Nevidomskyy, and E. Nica. This work was supported in part by the NSF Grant No. DMR-1309531, the Robert A. Welch Foundation Grant No. C-1411, the Alexander von Humboldt Foundation and the East-DeMarco fellowship (JHP). 
The majority of the calculations have been performed on the Shared University Grid at Rice funded by NSF under Grant EIA-0216467, and a partnership between Rice University, Sun Microsystems, and Sigma Solutions, Inc. Q. S. also acknowledges the hospitality of the Karlsruhe Institute of Technology, the Aspen Center for Physics (NSF Grant No. 1066293), and the Institute of Physics of Chinese Academy of Sciences. [1]{} L. Balents, Nature [**464**]{}, 199 (2010). Q. Si and F. Steglich, Science [**329**]{}, 1161 (2010). M. S. Kim and M. C. Aronson, Phys. Rev. Lett. [**110**]{}, 017201 (2013); M. S. Kim, M. C. Bennet, and M. C. Aronson, Phys. Rev. B [**77**]{}, 144425 (2008); M. S. Kim and M. C. Aronson, J. Phys.: Condens. Matter [**23**]{}, 164204 (2011). V. Fritsch, N. Bagrets, G. Goll, W. Kittler, M. J. Wolf, K. Grube, C.-L. Huang, and H. v. Löhneysen, arXiv:1301.6062 (2013). J. K. Dong, Y. Tokiwa, S. L. Bud’ko, P. C. Canfield, and P. Gegenwart, Phys. Rev. Lett. [**110**]{}, 176402 (2013); S. L. Bud’ko, E. Morosan, P. C. Canfield, Phys. Rev. B [**71**]{}, 054408 (2005). D. D. Khalyavin, D. T. Adroja, P. Manuel, A. Daoud-Aladine, M. Kosaka, K. Kondo, K. A. McEwen, J. H. Pixley and Q. Si, Phys. Rev. B [**87**]{}, 220406(R) (2013). Q. Si, Physica B [**378**]{}, 23 (2006); Phys. Status Solidi B [**247**]{}, 476 (2010). P. Coleman and A. Nevidomskyy, J. Low. Temp. Phys. [**161**]{}, 182 (2010). Q. Si, S. Rabello, K. Ingersent and J. L. Smith, Nature [**413**]{}, 804 (2001). P. Coleman, C. P[é]{}pin, Q. Si, and R. Ramazashvili, J. Phys.: Conden. Matt. [**13**]{}, R723-R738 (2001). T. Senthil, M. Vojta, and S. Sachdev, Phys. Rev. B [**69**]{}, 035111 (2004). A. Schröder, G. Aeppli, R. Coldea, M. Adams, O. Stockert, H. v. Löhneysen, E. Bucher, R. Ramazashvili, and P. Coleman, Nature [**407**]{}, 351 (2000). S. Friedemann, N. Oeschler, S. Wirth, C. Krellner, C. Geibel, F. Steglich, S. Paschen, S. Kirchner, and Q. Si, PNAS [**107**]{}, 14547-14551 (2010). H. Shishido, R. 
Settai, H. Harima and Y. Ōnuki, J. Phys. Soc. Jpn. [**74**]{}, 1103 (2005). T. Park, F. Ronning, H. Q. Yuan, M. B. Salamon, R. Movshovich, J. L. Sarrao and J. D. Thompson, Nature [**440**]{}, 65 (2006). J. Custers, K.-A. Lorenzer, M. Müller, A. Prokofiev, A. Sidorenko, H. Winkler, A. M. Strydom, Y. Shimura, T. Sakakibara, R. Yu, Q. Si, and S. Paschen, Nature Materials [**11**]{}, 189 (2012). B. S. Shastry and B. Sutherland, Physica B [**108**]{}, 1069 (1981). D. P. Arovas and A. Auerbach, Phys. Rev. B [**38**]{}, 316 (1988). C. H. Chung, J. B. Marston and S. Sachdev, Phys. Rev. B [**64**]{}, 134407 (2001). A. Lauchli, S. Wessel, and M. Sigrist, Phys. Rev. B [**66**]{}, 014401 (2002). S. Miyahara and K. Ueda, J. Phys.: Condens. Matter [**15**]{}, R327 (2003). B. H. Bernhard, B. Coqblin, and C. Lacroix, Phys. Rev. B [**83**]{}, 214427 (2011). P. Coleman and N. Andrei, J. Phys.: Condens. Matter [**1**]{}, 4057 (1989). N. Read, D. M. Newns and S. Doniach, Phys. Rev. B [**30**]{}, 3841 (1984). S. Furukawa, T. Dodds and Y. B. Kim, Phys. Rev. B [**84**]{}, 054432 (2011). See Supplemental Material. J. W. Negele and H. Orland, *Quantum Many-Particle Systems*, Westview Press (1998). A. Auerbach, *Interacting electrons and quantum magnetism*, Springer (1994). E. Dagotto and A. Moreo, Phys. Rev. B [**38**]{}, 5087(R) (1988). We are not able to discern between a continuous and a very weakly first order transition. E. M. Nica, K. Ingersent, J. X. Zhu and Qimiao Si, Phys. Rev. B [**88**]{}, 014414 (2013). Y. Motome, K. Nakamikawa, Y. Yamaji and M. Udagawa, Phys. Rev. Lett. [**105**]{}, 036403 (2010). [Supplementary Material ]{}\ Quantum phases of the Shastry-Sutherland Kondo lattice:\ implications for the global phase diagram of heavy fermion metals {#supplementary-material-quantum-phases-of-the-shastry-sutherland-kondo-lattice-implications-for-the-global-phase-diagram-of-heavy-fermion-metals .unnumbered} ================================================================= [J. H. 
Pixley, Rong Yu and Qimiao Si ]{} Numerical Method to Solve the Self-Consistent Equations ======================================================= In this work, we minimize the ground-state energy with respect to the self-consistent parameters $Q_{ij}$, $b_X$, and $\lambda_X$ by solving the coupled non-linear equations. To solve the equations efficiently, we apply Broyden’s mixing method [@Broyden65; @NumRecp07; @Johnson88], which has been widely used in density functional theory (DFT) [@MarksLuke08; @Baran], in dynamical mean-field theory [@Zitko09], and in other contexts of correlated electron problems [@Yunoki08; @Yu13]. In general, the method solves the self-consistent equations $\boldsymbol{X}=\boldsymbol{F}[\boldsymbol{X}]$ iteratively with a set of initial values of the parameters $\boldsymbol{X}^{(0)}_{\mathrm{in}}$. At the $m$-th iteration, a new set of the parameters is obtained via $$\boldsymbol{X}^{(m)}_{\mathrm{out}} = \boldsymbol{F}[\boldsymbol{X}^{(m)}_{\mathrm{in}}].$$ The input of the next iteration is constructed from $$\boldsymbol{X}^{(m+1)}_{\mathrm{in}} = \boldsymbol{X}^{(m)}_{\mathrm{in}} - \boldsymbol{B}^{(m)} (\boldsymbol{X}^{(m)}_{\mathrm{out}} - \boldsymbol{X}^{(m)}_{\mathrm{in}}),$$ where $\boldsymbol{B}^{(m)}$ is an approximation to the inverse Jacobian matrix of the non-linear equations at the $m$-th iteration. Broyden’s method provides a scheme to update the approximate inverse Jacobian $\boldsymbol{B}^{(m)}$ at each iteration such that good convergence is obtained [@Broyden65; @NumRecp07; @Johnson88]. To achieve the global minimum of the ground-state energy, we have performed a random search of initial conditions in the parameter space. The typical number of the $\boldsymbol{X}^{(0)}_{\mathrm{in}}$ configurations used in the calculation is $10^3$. For each given initial condition, a Broyden mixing is performed; once convergence is reached, the corresponding ground-state energy is recorded. 
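The iteration just described can be sketched in a few lines. The following is a minimal, self-contained Python implementation of Broyden's method for a generic fixed-point problem $\boldsymbol{X}=\boldsymbol{F}[\boldsymbol{X}]$; the function names and the scalar test problem are illustrative, not the actual mean-field equations of this work:

```python
import numpy as np

def broyden_fixed_point(F, x0, alpha=0.3, tol=1e-12, max_iter=200):
    """Solve x = F(x) by Broyden's method.

    B approximates the inverse Jacobian of g(x) = F(x) - x; the initial
    choice B0 = -alpha * I makes the first step identical to plain
    linear mixing, x <- x + alpha * (F(x) - x).
    """
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    B = -alpha * np.eye(x.size)
    g = F(x) - x
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        s = -B @ g                 # x_in^(m+1) = x_in^(m) - B^(m) (x_out - x_in)
        x_new = x + s
        g_new = F(x_new) - x_new
        y = g_new - g
        By = B @ y
        denom = s @ By
        if abs(denom) > 1e-30:     # rank-one (Sherman-Morrison) update of B
            B += np.outer(s - By, s @ B) / denom
        x, g = x_new, g_new
    return x

# Illustrative use: the scalar self-consistency x = cos(x)
sol = broyden_fixed_point(np.cos, 1.0)   # -> approximately 0.7390851
```

For the ground-state search described above, one would wrap the self-consistency equations for $Q_{ij}$, $b_X$, and $\lambda_X$ in `F`, restart from many random initial conditions, and keep the converged solution with the lowest energy.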
The global minimum of the ground-state energy is then obtained by comparing the energies for different configurations. Intermediate Phases and the Effect of Model Parameters ====================================================== Here we present the full large-$N$ phase diagram for the metallic SS Kondo lattice, focusing in particular on the intermediate phases sketched in Fig. 2 of the main text. We consider tight binding parameters $t_1=t_2=1$, and a conduction electron filling $n_c=0.5$. We find a series of intermediate states that break the lattice symmetry within the unit cell. In particular we find four intermediate states labelled with different colors in Fig. \[fig:N-pd\]: blue (1), green (2), red (3) and grey (4). We first consider the blue region (1), which is the most physically relevant intermediate phase because it occurs in the transition region between the SSL-VBS and HFL phases. Here, we find a phase with *partial Kondo screening* defined as $b_A=b_D> 0$, $b_B=b_C=0$, with $Q_{x-y} \gg Q_{x+y} > 0$. The local moments on sublattices $A$ and $D$ are screened by the Kondo effect, whereas those on the $B$ and $C$ sublattices are locked into a VBS singlet. Turning next to the green region (2), we find a square plaquette RVB phase with Kondo screening, defined by $b_A=b_D>b_B=b_C>0$, $Q_{x-y}>Q_{x+y}>0$, $Q_{x1}=Q_{y1}=Q_{x3}=Q_{y2}$ and $Q_{x2}=Q_{y3}=Q_{x4}=Q_{y4}$. In this phase, plaquette valence bonds form on each square plaquette that contains a $J_2$ bond. The red region (3) describes a phase where the RVB parameters are in a “kite” phase, and the local moments are screened. It is defined by $b_A=b_D>b_B=b_C$, $Q_{x+y}\neq Q_{x-y}$, $Q_{x1}=Q_{y1}$, $Q_{x2}=Q_{y3}$, $Q_{x3}=Q_{y2}$, and $Q_{x4}=Q_{y4}$. Lastly, we come to the grey region (4), which corresponds to a spin-Peierls phase with partial Kondo screening. Here, $b_A=b_B \neq 0$, $b_C=b_D=0$ with $Q_{x3}\neq0$; all the other parameters vanish. 
![Full large-$N$ phase diagram of the SS Kondo lattice, for $t_2=t_1$ and $n_c=0.5$. The phases are described in the text. []{data-label="fig:N-pd"}](figS1.eps){height="2.0in"} ![Phase diagram of the SS Kondo lattice incorporating magnetic order using $x=0.75$, while keeping $t_2=t_1$ (a) and for $x=0.70$ with $t_2=0$ (b). In both cases, the conduction electron filling, $n_c=0.5$, is unchanged from before. The thin (thick) lines represent first order (continuous) transitions.[]{data-label="fig:N-pd2"}](figS2a.eps){height="2.0in"} ![Phase diagram of the SS Kondo lattice incorporating magnetic order using $x=0.75$, while keeping $t_2=t_1$ (a) and for $x=0.70$ with $t_2=0$ (b). In both cases, the conduction electron filling, $n_c=0.5$, is unchanged from before. The thin (thick) lines represent first order (continuous) transitions.[]{data-label="fig:N-pd2"}](figS2b.eps){height="2.0in"} We now turn to the effect of changing model parameters on the global phase diagram. We first consider choosing a different value of $x=0.75$ (see the main text for the definition of $x$) while keeping $t_1/t_2=1$ and $n_c=0.5$. This choice of $x$ is still in the range over which a quantum antiferromagnetic phase arises (see below, Fig. \[fig:M-pd\]). Separately, we consider keeping $x=0.70$ but setting $t_2=0$, with $t_1/J_1=4.0$ and $n_c=0.5$; this allows us to study the effect of the conduction-electron band dispersion ([*cf.*]{} Figs. \[fig:t2=0\_JK=0\] and \[fig:t2=0\_JK=2\]). Interestingly, for both cases we find that the line of transitions between AF$_S$ and HFL collapses to a point. Therefore, whether the transition between AF$_S$ and HFL is a line or a single point is a question that needs to be addressed beyond the mean field level, for example within an extended dynamical mean field approach. The key point, however, is that the overall profile of the global phase diagram is robust against these changes of parameters. ![(a) Fermi Surface corresponding to the band structure in Fig. 
3(b) of the main text, in the VBS phase. Here, $J_2/J_1=2$, $J_K/t_1=2.1$. Since the spinon bands are gapped, this corresponds to the band structure of the conduction electron dispersion alone, defined on the SSL with $t_1/t_2=1$ and $n_c=0.5$; (b) Fermi Surface corresponding to the band structure in Fig. 3(c) of the main text, in the HFL phase. The bare parameters are the same as in (a). Note that the Fermi volume does not change from (a) to (b), due to the even number of spins per unit cell as described in the main text.[]{data-label="fig:N-pd3"}](figS4a.eps){height="2.0in"} ![(a) Fermi Surface corresponding to the band structure in Fig. 3(b) of the main text, in the VBS phase. Here, $J_2/J_1=2$, $J_K/t_1=2.1$. Since the spinon bands are gapped, this corresponds to the band structure of the conduction electron dispersion alone, defined on the SSL with $t_1/t_2=1$ and $n_c=0.5$; (b) Fermi Surface corresponding to the band structure in Fig. 3(c) of the main text, in the HFL phase. The bare parameters are the same as in (a). Note that the Fermi volume does not change from (a) to (b), due to the even number of spins per unit cell as described in the main text.[]{data-label="fig:N-pd3"}](figS4b.eps){height="2.0in"} To explore further the robustness of our results against the change of the conduction-electron dispersion, we also discuss the effect of an additional tight binding parameter $t_3$, which connects every next nearest neighbor that is not connected via $t_2$. Just like tuning $t_2/t_1$ away from $1$, any finite $t_3$ will introduce a curvature to the region of the flat band; this flat portion existed for $t_2/t_1=1$ (and $t_3=0$), along the $k_x=k_y$ direction in the Brillouin zone and away from the Fermi energy, as shown in Fig. 3(b) in the main text. We find that tuning the ratio of $t_3/t_1$ can change the degree to which the intermediate PKS phases occur. 
The region of the intermediate phases that break the lattice symmetry within the unit cell narrows for increasing $t_3/t_1$, and can even be completely eliminated for a large $t_3/t_1$ ratio. Again, the overall profile of the global phase diagram is robust against the change of $t_3/t_1$. We close this subsection by showing the Fermi surfaces of the SSL-VBS and HFL phases, both for the case $t_2=t_1$ and $n_c=0.5$ (Figs. \[fig:N-pd3\](a) and (b), respectively) and for $t_2=0$ and $n_c=0.5$ (Figs. \[fig:t2=0\_JK=0\] and \[fig:t2=0\_JK=2\], respectively). ![(a) Band structure in the VBS phase when the diagonal hopping vanishes, $t_2/t_1=0$, and for $J_2/J_1=1.6$, $J_K/t_1=0$ and $n_c=0.5$. The parameter $t_1=1.0$ sets the unit of energy. The blue lines are the gapped spinon bands. The Fermi energy is at $\epsilon_k/t_1=0$; (b) The corresponding Fermi surface. []{data-label="fig:t2=0_JK=0"}](figS5a.eps){height="2.0in"} ![(a) Band structure in the VBS phase when the diagonal hopping vanishes, $t_2/t_1=0$, and for $J_2/J_1=1.6$, $J_K/t_1=0$ and $n_c=0.5$. The parameter $t_1=1.0$ sets the unit of energy. The blue lines are the gapped spinon bands. The Fermi energy is at $\epsilon_k/t_1=0$; (b) The corresponding Fermi surface. []{data-label="fig:t2=0_JK=0"}](figS5b.eps){height="2.0in"} ![(a) Band structure in the HFL phase, for parameters as in Fig. \[fig:t2=0\_JK=0\]. The Fermi energy is also at $\epsilon_k/t_1=0$; (b) The corresponding Fermi surface. []{data-label="fig:t2=0_JK=2"}](figS6a.eps){height="2.0in"} ![(a) Band structure in the HFL phase, for parameters as in Fig. \[fig:t2=0\_JK=0\]. The Fermi energy is also at $\epsilon_k/t_1=0$; (b) The corresponding Fermi surface. 
[]{data-label="fig:t2=0_JK=2"}](figS6b.eps){height="2.0in"} Magnetic Phase Diagram ====================== Finally, we discuss the Heisenberg model in the absence of any Kondo coupling in our approach, when both the RVB correlations and Néel order are incorporated and the self consistent solutions with the lowest free energy are determined. The region for the candidate quantum Néel phase, as described in the main text, is also considered as a self consistent solution. The resulting phase diagram is shown in Fig. \[fig:M-pd\], where both the P-VBS and SSL-VBS phases have been described in the main text. The classical Néel state (Classical AFM) is defined as ${\bf M}_A = {\bf M}_D= -{\bf M}_B = -{\bf M}_C>0$, with all the other parameters equal to zero. Here the order parameter retains the full classical moment. We find the candidate quantum Néel phase, with ${\bf M}_A = {\bf M}_D= -{\bf M}_B = -{\bf M}_C>0$ and $Q_{xi}=Q_{yi}\neq0$, $i=1-4$, to be a self consistent solution only in the range $0.67 \leq x < 0.8125$. The nonzero RVB singlet parameters cause a reduced value for the ordered moment. This is shown as the magenta region in Fig. \[fig:M-pd\], where the boundary of the region at a finite ratio of $J_2/J_1$ marks the transition to the SSL-VBS phase. For a judicious choice of $x$ in the range $0.6<x<0.67$ we find that the transition from the classical AFM phase to the SSL-VBS phase passes through an intermediate P-VBS phase, similar to what is found in exact diagonalization studies (see reference \[20\] in the main text). We remark that for $x=0.5$, which treats Néel and VBS order on an equal footing, we find the location of the transition from the classical AFM phase to the SSL-VBS phase to be $(J_2/J_1)_c =1.35$, which is close to the value obtained from a variety of other methods (see reference \[21\] in the main text). ![The phase diagram of the Heisenberg model in the slave-fermion approach. 
The parameter $x$ and the phases are described in the main text.[]{data-label="fig:M-pd"}](figS7.eps){height="2.0in"} [1]{} C. G. Broyden, Math. Comput. [**19**]{}, 577 (1965). W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery, [*Numerical Recipes: The Art of Scientific Computing*]{}, 3rd ed. (Cambridge University Press, New York, 2007). D. D. Johnson, Phys. Rev. B [**38**]{}, 12807 (1988). L. D. Marks and D. R. Luke, Phys. Rev. B [**78**]{}, 075114 (2008). A. Baran [*et al.*]{}, Phys. Rev. C [**78**]{}, 014318 (2008). R. Žitko, Phys. Rev. B [**80**]{}, 125125 (2009). S. Yunoki, E. Dagotto, S. Costamagna, and J. A. Riera, Phys. Rev. B [**78**]{}, 024405 (2008). R. Yu, P. Goswami, Q. Si, P. Nikolic, and J.-X. Zhu, Nat. Commun. [**4**]{}, 2783 (2013).
--- abstract: 'The fully analytical solution for isothermal Bondi accretion on a massive black hole (MBH) at the center of two-component Jaffe (1983) galaxy models is presented. In a previous work we provided the analytical expressions for the critical accretion parameter and the radial profile of the Mach number in the case of accretion on a MBH at the center of a spherically symmetric one-component Jaffe galaxy model. Here we apply this solution to galaxy models where both the stellar and total mass density distributions are described by the Jaffe profile, with different scale-lengths and masses, and to which a central MBH is added. For such galaxy models all the relevant stellar dynamical properties can also be derived analytically (Ciotti & Ziaee Lorzad 2018). In these new models the hydrodynamical and stellar dynamical properties are linked by imposing that the gas temperature is proportional to the virial temperature of the galaxy stellar component. The formulae that are provided allow one to evaluate all flow properties, and are therefore useful for estimates of the scale-radius and the mass flow rate when modeling accretion on massive black holes at the center of galaxies. As an application, we quantify the departure from the true mass accretion rate of estimates obtained using the gas properties at various distances from the MBH, under the hypothesis of classical Bondi accretion.' author: - 'Luca Ciotti$^{\star}$ and Silvia Pellegrini' title: 'Isothermal Bondi accretion in two-component Jaffe galaxies with a central black hole' --- Introduction ============ Observational and numerical investigations of accretion on massive black holes (hereafter MBH) at the center of galaxies often lack the resolution to follow gas transport down to the parsec scale. 
In these cases, the [*classical*]{} Bondi (1952) solution for spherically-symmetric, steady accretion of a spatially infinite gas distribution onto a central point mass is then commonly adopted; this is the standard reference for estimates of the accretion radius (i.e., the sonic radius), and the mass accretion rate (see, e.g., Rafferty et al. 2006; Sijacki et al. 2007; Di Matteo et al. 2008; Gallo et al. 2010; Pellegrini 2010; Barai et al. 2011; Bu et al. 2013; Cao 2016; Volonteri et al. 2015; Choi et al. 2017; Park et al. 2017; Beckmann et al. 2018; Ramírez-Velasquez et al. 2018; Barai et al. 2018). Indeed, even though highly idealized, during phases of moderate accretion (in the “maintenance” mode) the problem can be considered almost steady, and Bondi accretion could provide a reliable approximation of the real situation (e.g., Barai et al. 2012, Ciotti & Ostriker 2012). However, leaving aside the validity of the fundamental assumptions of spherical symmetry, stationarity, and optical thinness, two major problems affect the direct application of the classical Bondi solution, namely the facts that 1) the boundary values of density and temperature of the accreting gas should be evaluated at infinity, and 2) in a galaxy, the gas experiences the gravitational effects of the galaxy itself (stars plus dark matter) all the way down to the central MBH, and the MBH gravity becomes dominant only in the very central regions, inside the so-called “sphere-of-influence”. The solution commonly adopted in numerical and observational applications to alleviate these problems is to use values of the gas density and temperature “sufficiently near” the MBH, thus assuming that the galaxy effects are negligible. Of course, as the density and temperature of the accreting gas change along the pathlines, the predictions of classical Bondi accretion will also change when based on the density and temperature measured at a finite distance from the MBH. 
It is therefore of great interest to be able to quantify the systematic effects on the accretion radius and the mass accretion rate obtained from the classical Bondi solution, due to measurements taken at finite distance from the MBH, and under the effects of the galaxy potential well. A first step towards a quantitative analysis of this problem was carried out in Korol et al. (2016, hereafter KCP16), where the Bondi problem was generalized to the case of mass accretion at the center of galaxies, including also the effect of electron scattering on the accreting gas. KCP16 then calculated the deviation from the true values of the estimates of the Bondi radius and of the mass accretion rate, due to adopting as boundary values for the density and temperature those at a finite distance from the MBH, and assuming the validity of the classical Bondi accretion solution. In the specific case of Hernquist (1990) galaxies, KCP16 obtained the analytical expression of the critical accretion parameter, as a function of the galaxy properties and of the gas polytropic index $\gamma$. However, even for this quite exceptional case, the radial profiles of the hydrodynamical variables remained to be determined numerically. Following KCP16, Ciotti & Pellegrini (2017, hereafter CP17) showed that the whole accretion solution can be given in an analytical way (in terms of the Lambert-Euler $W$-function) for the [*isothermal*]{} accretion in Jaffe (1983) and Hernquist galaxy models with central MBHs. This means that for these models not only is it possible to express the critical accretion parameter analytically, but also the whole radial profile of the Mach number (and hence of all the hydrodynamical functions) can be explicitly written. To the best of our knowledge, CP17 provided the first fully analytical solution of the accretion problem on a MBH at the center of a galaxy. 
The galaxy models used in KCP16 and CP17, i.e., the Hernquist and Jaffe models, are not only relevant because for them it is possible to solve the accretion problem, but also because of their numerous applications in Stellar Dynamics. In fact, these models belong to the family of the so-called $\gamma$-models (Dehnen 1993, Tremaine et al. 1994) and are known to reproduce very well the radial trend of the stellar density distribution of real elliptical galaxies; at the same time, their simplicity allows for analytical studies of one and two-component galaxy models (e.g., Carollo et al. 1995, Ciotti et al. 1996, Ciotti 1999). In particular, Ciotti & Ziaee Lorzad (2018, hereafter CZ18), expanding on a previous study by Ciotti et al. (2009), presented spherically symmetric two-component galaxy models (hereafter JJ models), where the [*stellar*]{} and [*total*]{} mass density distributions are both described by the Jaffe profile, with different scale-lengths and masses, and a MBH is added at the center. The orbital structure of the stellar component is described by the Osipkov-Merritt anisotropy (Merritt 1985). Moreover, the dark matter halo (resulting from the difference between the total and the stellar distributions) can reproduce remarkably well the Navarro et al. (1997; hereafter NFW) profile, over a very large radial range, and down to the center. Among other properties, for the JJ models the solution of the Jeans equations and the relevant global quantities entering the Virial Theorem can be expressed analytically. Therefore, the JJ models offer the [*unique*]{} opportunity to have a simple yet realistic family of galaxy models with a central MBH, allowing both for the fully analytical solution of the Bondi (isothermal) accretion problem [*and*]{} for the fully analytical solution of the Jeans equations; all this then permits a simple joint study of stellar dynamics and fluid dynamics without resorting to ad-hoc numerical codes. This paper is organized as follows. 
In Section 2 we recall the main properties of the Jaffe isothermal accretion solution, and in Section 3 we list the main properties of the JJ models. In Section 4 we show how the structural and dynamical properties of the stellar and dark matter components can be linked to the parameters appearing in the accretion solution. In Section 5 we examine the departure of the estimate of the mass accretion rate from the true value, when the estimate is obtained using as boundary values for the density and temperature those at points along the solution, at finite distance from the MBH. The main conclusions are summarized in Section 6. Isothermal Bondi Accretion in Jaffe Galaxies with a Central MBH, and with electron scattering {#sec:class} ============================================================================================= Following KCP16, and in particular the full treatment of the isothermal case in CP17, we briefly recall here the main properties of isothermal Bondi accretion, in the potential of a Jaffe galaxy hosting a MBH at its center. We begin with the classical Bondi case. The classical Bondi solution ---------------------------- In the classical Bondi problem, the gas is perfect, has a spatially infinite distribution, and is accreting onto a MBH, of mass ${M_{\rm BH}}$. The gas density and pressure are linked by the polytropic relation $$p = {k_{\rm B} \rho T\over <\mu>{m_{\rm p}}} = {p_{\infty}}{\tilde\rho}^{\gamma},\quad {\tilde\rho}\equiv{\rho\over{\rho_{\infty}}},$$ where $\gamma$ is the polytropic index ($\gamma =1$ in the isothermal case), ${m_{\rm p}}$ is the proton mass, $<\mu>$ is the mean molecular weight, $k_{\rm B}$ is the Boltzmann constant, and ${p_{\infty}}$ and ${\rho_{\infty}}$ are respectively the gas pressure and density at infinity. The sound speed is ${c_{\rm s}}=\sqrt{\gamma p/\rho}$, and of course in the isothermal case it is constant, ${c_{\rm s}}={c_{\infty}}$, its value at infinity. 
The time-independent continuity equation is: $$4 \pi r^2 \rho(r) v(r)= {\dot M_{\rm B}},$$ where $v(r)$ is the modulus of the gas radial velocity, and ${\dot M_{\rm B}}$ is the time-independent accretion rate on the MBH. An important scalelength of the problem, the so-called Bondi radius, is naturally defined as $${r_{\rm B}}\equiv {G{M_{\rm BH}}\over{c_{\infty}}^2}:$$ we stress that the Bondi radius remains defined by the equation above [*independently*]{} of the presence of the galactic potential. After introducing the normalized quantities $$x\equiv {r\over{r_{\rm B}}},\quad {{\cal M}}(r)={v(r)\over{c_{\rm s}}(r)},$$ where ${{\cal M}}$ is the Mach number, eq. (2) determines the accretion rate for assigned ${M_{\rm BH}}$ and boundary conditions: $$x^2 {\tilde\rho}(x){{\cal M}}(x) ={{\dot M_{\rm B}}\over 4\pi{r_{\rm B}}^2{\rho_{\infty}}{c_{\infty}}}\equiv\lambda,$$ where $\lambda$ is the dimensionless accretion parameter. In the isothermal case, the classical Bondi problem (e.g., KCP16) reduces to the solution of the following system: $$\begin{cases} g({{\cal M}})= f(x) -\Lambda,\qquad\Lambda \equiv \ln\lambda, \\ \\ \displaystyle{g({{\cal M}})\equiv {{{\cal M}}^2\over 2}-\ln{{\cal M}},} \\ \\ \displaystyle{f(x)\equiv {1\over x}+2\ln x}. \end{cases}$$ As is well known, $\Lambda$ cannot be chosen arbitrarily; in fact, both $g({{\cal M}})$ and $f(x)$ have a minimum, and $$\begin{cases} \displaystyle{{g_{\rm min}}={1\over 2}, \qquad\qquad \quad {\rm for}\quad\quad {{\cal M}}_{\rm min}=1,} \\ \\ \displaystyle{{f_{\rm min}}= 2-2\ln 2, \qquad {\rm for} \quad\quad {x_{\rm min}}={1\over 2}.} \end{cases}$$ Solutions of eq. (6) exist only for ${g_{\rm min}}\le {f_{\rm min}}-\Lambda$, i.e., for $\Lambda\leq{\Lambda_{\rm cr}}\equiv{f_{\rm min}}-{g_{\rm min}}$, which in turn is equivalent to the condition $$\lambda \leq {\lambda_{\rm cr}}= {{\rm e}^{3/2}\over 4}.$$ Along the [*critical solutions*]{}, i.e., the solutions of eq. 
(6) for $\lambda ={\lambda_{\rm cr}}$, ${x_{\rm min}}$ marks the position of the [*sonic point*]{}, i.e., ${{\cal M}}({x_{\rm min}}) =1$. For $\lambda <{\lambda_{\rm cr}}$, instead, two regular subcritical solutions exist, one everywhere subsonic and one everywhere supersonic, with their respective maximum and minimum values of ${{\cal M}}(x)$ reached at ${x_{\rm min}}$. Summarizing, the solution of the classical Bondi problem requires determining ${x_{\rm min}}$ and so ${\lambda_{\rm cr}}$, and possibly obtaining the radial profile ${{\cal M}}(x)$ for given $\lambda \le {\lambda_{\rm cr}}$ (see, e.g., Bondi 1952; Frank, King & Raine 1992, KCP16, CP17). Once $\lambda$ is assigned and ${{\cal M}}(x)$ is known, all functions involved in the accretion problem are known from eqs. (5) and (1): for example, along the critical solution, $${\tilde\rho}(x)={{\lambda_{\rm cr}}\over x^2{{\cal M}}(x)},$$ while the modulus of the accretion velocity in the isothermal case is $v(r)={c_{\infty}}{{\cal M}}(x)$.

Isothermal Bondi accretion (with electron scattering) in Jaffe galaxies
-----------------------------------------------------------------------

The classical Bondi problem can be generalized by including the effects of radiation pressure due to electron scattering, and the additional gravitational field of the host galaxy. In fact, the accretion flow can be affected by the emission of radiation near the MBH, which exerts an outward pressure (see, e.g., Mościbrodzka & Proga 2013 for a study of the irradiation effects on the flow). In the optically thin case the radiation feedback is implemented as a reduction of the gravitational force of the MBH, by the factor $$\chi\equiv 1- {L\over{L_{\rm Edd}}},$$ where $L$ is the accretion luminosity, ${L_{\rm Edd}}=4\pi c G {M_{\rm BH}}{m_{\rm p}}/\sigma_{\rm T}$ is the Eddington luminosity, and $\sigma_{\rm T}=6.65 \times 10^{-25}$cm$^2$ is the Thomson cross section.
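As a quick numerical check of the classical critical value recalled in Sect. 2.1, the minima of $g$ and $f$ and the resulting ${\lambda_{\rm cr}}$ can be evaluated in a few lines (a Python sketch; the function names are ours, not from the paper):

```python
import math

# Classical isothermal Bondi problem: g(M) = f(x) - Lambda, with
# g(M) = M^2/2 - ln M and f(x) = 1/x + 2 ln x; solutions exist only
# for Lambda <= f_min - g_min, i.e. lambda <= lambda_cr.

def g(mach):
    return 0.5 * mach**2 - math.log(mach)

def f(x):
    return 1.0 / x + 2.0 * math.log(x)

g_min = g(1.0)   # minimum of g, attained at M = 1
f_min = f(0.5)   # minimum of f, attained at x_min = 1/2
lambda_cr = math.exp(f_min - g_min)

print(lambda_cr, math.e**1.5 / 4)   # both equal e^{3/2}/4 ≈ 1.1204
```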
Note that the maximum luminosity remains equal to ${L_{\rm Edd}}$ as defined above even in the presence of the potential of the host galaxy. As shown in KCP16 and CP17 (see also Lusso & Ciotti 2011), for the Bondi problem on an isolated MBH the critical value ${\lambda_{\rm cr}}$ and the mass accretion rate modified by electron scattering can be calculated analytically, with the new critical parameter given by $\chi^2{\lambda_{\rm cr}}$. The more general problem of Bondi accretion with electron scattering in the potential of a galaxy hosting a central MBH was addressed in KCP16 and CP17. In particular, it was shown that there is an analytical expression for ${x_{\rm min}}$ in the isothermal case in Jaffe galaxies with a central MBH, and in the generic polytropic case for Hernquist galaxies with a central MBH; thus, in both cases it is possible to determine, also in the presence of radiation pressure, the value of the critical accretion parameter (that now we call ${\lambda_{\rm t}}$). For Hernquist galaxies, the polytropic problem leads to the solution of a cubic equation, producing one or two sonic points (depending on the specific choice of the galactic parameters), while in the isothermal Jaffe case the relevant equation is quadratic, and only one sonic point exists, independently of the galaxy parameters. In addition, CP17 showed that isothermal Bondi accretion cannot be realized in Hernquist galaxies with ${M_{\rm BH}}=0$ (or $\chi =0$), while it is possible in a subset of Jaffe galaxies (provided a simple inequality is satisfied among the galaxy parameters). Summarizing, since ${{\cal M}}$ is also given analytically in the isothermal case, a fully analytical solution exists for isothermal accretion on MBHs at the center of Hernquist and Jaffe galaxies.
However, due to the complications of Bondi accretion in Hernquist galaxies, and given that two-component JJ models with a total Jaffe potential and a central MBH have been recently presented (CZ18), in the rest of the paper we restrict to the case of two-component Jaffe galaxies. Of course, the existence of the analytical accretion solution for the one-component Hernquist model with central MBH guarantees that a similar analysis could be done for the two-component Hernquist analogues of JJ models. In the remainder of the Section we recall the main properties of isothermal Bondi accretion in a Jaffe total potential with a central MBH (CP17); in Section 4, these will be used to address the problem of accretion in JJ models. The gravitational potential of a Jaffe density distribution of total mass ${M_{\rm g}}$ and scale-length ${r_{\rm g}}$ is given by: $${\Phi_{\rm g}}={G{M_{\rm g}}\over{r_{\rm g}}}\ln {r\over r+{r_{\rm g}}},$$ and, with the introduction of the two parameters: $${{\cal R}}\equiv {{M_{\rm g}}\over{M_{\rm BH}}},\quad \xi \equiv {{r_{\rm g}}\over{r_{\rm B}}},$$ the function $f$ in eq. (6) becomes: $$f = {\chi\over x} - {{{\cal R}}\over\xi}\ln {x\over x+\xi} + 2\ln x,\quad x\equiv{r\over{r_{\rm B}}}.$$ Note how, for ${{\cal R}}\to 0$ (or $\xi\to\infty$), the galaxy contribution vanishes, and the problem reduces to the classical case. In the limit of $\chi=0$ ($L={L_{\rm Edd}}$)[^1], radiation pressure cancels exactly the MBH gravitational field, and the problem describes accretion in the potential of the galaxy only, without electron scattering and MBH. When $\chi=1$ ($L=0$), the radiation pressure has no effect on the accretion flow.
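The generalized $f(x)$ of eq. (13) is straightforward to code; a minimal Python sketch (the keyword defaults are our own choices) also verifies that the galaxy term vanishes for ${{\cal R}}\to 0$:

```python
import math

def f_jaffe(x, chi=1.0, R=0.0, xi=1.0):
    """f(x) of eq. (13) for isothermal accretion, with electron
    scattering, in a Jaffe galaxy hosting a central MBH:
    chi = 1 - L/L_Edd, R = M_g/M_BH, xi = r_g/r_B."""
    return chi / x - (R / xi) * math.log(x / (x + xi)) + 2.0 * math.log(x)

# R -> 0: the galaxy term vanishes and the classical
# f(x) = 1/x + 2 ln x is recovered.
x = 0.7
print(f_jaffe(x, chi=1.0, R=0.0) - (1.0 / x + 2.0 * math.log(x)))  # -> 0.0
```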
The presence of the galaxy potential and electron scattering changes the accretion rate, which (in the critical case) we now indicate as $${\dot M_{\rm t}}= 4 \pi {r_{\rm B}}^2 {\lambda_{\rm t}}{\rho_{\infty}}{c_{\infty}}={{\lambda_{\rm t}}\over{\lambda_{\rm cr}}}{\dot M_{\rm B}},$$ where again ${\dot M_{\rm B}}$ is the classical critical Bondi accretion rate for the same chosen boundary conditions ${\rho_{\infty}}$ and ${c_{\infty}}$ in eq. (5), and ${\lambda_{\rm t}}$ is the critical accretion parameter of the new problem. From the same arguments presented in Sect. 2.1, ${\lambda_{\rm t}}$ is known once the absolute minimum ${f_{\rm min}}(\chi,{{\cal R}}, \xi)$ is known; this, in turn, requires the determination of ${x_{\rm min}}(\chi,{{\cal R}},\xi)$, while the function $g({{\cal M}})$ is unaffected by the addition of the galaxy potential. As shown in CP17 for the Jaffe galaxy, the position of the only minimum of $f$ (corresponding to the sonic radius of the critical solution) in eq. (13) is given by: $${x_{\rm min}}\equiv {{r_{\rm min}}\over{r_{\rm B}}}= {{{\cal R}}+ \chi -2\xi + \sqrt{({{\cal R}}+\chi -2\xi)^2 + 8\chi\xi}\over 4},$$ and then one can evaluate $ {f_{\rm min}}=f({x_{\rm min}})$ and $\ln{\lambda_{\rm t}}= {f_{\rm min}}- {g_{\rm min}}$. In the peculiar case of $\chi =0$, a solution of the accretion problem is possible only for ${{\cal R}}\geq 2\xi$, with: $${x_{\rm min}}={{{\cal R}}-2\xi\over 2},\quad {\lambda_{\rm t}}={{{\cal R}}^2\over 4\sqrt{e}}\left(1-{2\xi\over{{\cal R}}}\right)^{2-{{\cal R}}/\xi}.$$ When ${{\cal R}}\to2\xi$, then ${x_{\rm min}}\to 0$ (the sonic point is at the origin), ${f_{\rm min}}\to 2\ln\xi$, and finally ${\lambda_{\rm t}}\to \xi^2/\sqrt{e}$. Note that the $\chi=0$ case can [*also*]{} be interpreted as the case of a null MBH mass, and the formulae in eq.
(16) can be used provided the dependence of ${{\cal R}}$, $\xi$ and ${r_{\rm B}}$ on ${M_{\rm BH}}$ is factored out, and simplified before considering the limit for ${M_{\rm BH}}\to 0$. Thus, when ${M_{\rm BH}}=0$ the condition for the existence of the solution, and the position of the sonic radius, are: $${{{\cal R}}\over 2\xi}={G{M_{\rm g}}\over 2{r_{\rm g}}{c_{\infty}}^2}\geq 1,\quad {{r_{\rm min}}\over{r_{\rm g}}}={G{M_{\rm g}}\over 2{c_{\infty}}^2{r_{\rm g}}}-1.$$ Moreover, even though ${\lambda_{\rm t}}$ in eq. (16) diverges for ${M_{\rm BH}}\to 0$, the accretion rate given in eq. (14) remains finite also in this case, with a value that can be easily calculated in closed form in terms of the galaxy parameters. The radial trend of the Mach number for the critical accretion solution of eq. (6) with $f$ in eq. (13), and $\lambda ={\lambda_{\rm t}}$, is given by eq. (35) in CP17, that is: $${{\cal M}}^2= - \begin{cases} \displaystyle{ W\left(0, - {\rm e}^{-2f }{\lambda_{\rm t}}^2\right), \quad x\geq {x_{\rm min}}, } \\ \\ \displaystyle{ W\left(-1, - {\rm e}^{-2f}{\lambda_{\rm t}}^2\right), \quad\quad 0< x\leq {x_{\rm min}}, } \end{cases}$$ where $W$ is the Lambert-Euler function, and its relevant properties are given in CP17 (Appendix A, see also Appendix B in CZ18). Note that the subcritical ($\lambda<{\lambda_{\rm t}}$) solutions are obtained by using $W(0,z)$ for the subsonic branch, and $W(-1,z)$ for the supersonic branch. It is useful to recall that from the expansion for $x\to 0^+$ of the supersonic branch of $W(-1,z)$ in eq. (18), one has that for $\chi >0$, ${{\cal M}}(x)\sim {\sqrt{2\chi}}x^{-1/2}$, while for $\chi =0$, ${{\cal M}}(x)\sim 2{\sqrt{(1-{{\cal R}}/2\xi)\ln(x/{x_{\rm min}})}}$ (provided that ${\cal R} \geq 2\xi$); moreover, the expansion of ${{\cal M}}(x)$ for $x \to \infty$ along the solution with vanishing Mach number at infinity gives ${{\cal M}}(x)\sim {\lambda_{\rm t}}e^{-(\chi +{{\cal R}})/x}/x^2$. 
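The sonic point of eq. (15) and the resulting critical parameter ${\lambda_{\rm t}}$ can be sketched as follows (Python; instead of the Lambert-Euler function we solve $g({{\cal M}})=f(x)-\ln{\lambda_{\rm t}}$ by bisection, an equivalent route for the subsonic branch; parameter values are illustrative):

```python
import math

def x_min_jaffe(chi, R, xi):
    # eq. (15): the single minimum of f, i.e. the sonic radius in units of r_B
    a = R + chi - 2.0 * xi
    return (a + math.sqrt(a * a + 8.0 * chi * xi)) / 4.0

def f(x, chi, R, xi):
    # eq. (13)
    return chi / x - (R / xi) * math.log(x / (x + xi)) + 2.0 * math.log(x)

def lambda_t(chi, R, xi):
    # ln(lambda_t) = f_min - g_min, with g_min = 1/2
    return math.exp(f(x_min_jaffe(chi, R, xi), chi, R, xi) - 0.5)

def mach_subsonic(x, chi, R, xi):
    # solve g(M) = f(x) - ln(lambda_t) on the subsonic branch 0 < M <= 1
    # (valid for x >= x_min along the critical solution)
    rhs = f(x, chi, R, xi) - math.log(lambda_t(chi, R, xi))
    lo, hi = 1e-12, 1.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if 0.5 * mid * mid - math.log(mid) > rhs:
            lo = mid   # g decreases monotonically toward M = 1 on this branch
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(x_min_jaffe(1.0, 0.0, 5.0))   # -> 0.5: classical sonic point for R -> 0
print(mach_subsonic(x_min_jaffe(1.0, 100.0, 10.0), 1.0, 100.0, 10.0))   # ~ 1
```

The first check reproduces the classical ${x_{\rm min}}=1/2$ of Sect. 2.1; the second verifies that the critical solution is exactly sonic at ${x_{\rm min}}$.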
Once the Mach number radial profile is known, the density profile of the accreting gas is obtained from the analogue of eq. (9), with $${\tilde\rho}(x)={{\lambda_{\rm t}}\over x^2{{\cal M}}(x)}.$$

The two-component galaxy models
===============================

We now extend the results in Section 2.1, pertinent to isothermal accretion in the one-component Jaffe model, to the family of two-component JJ models presented in CZ18. These models are characterized by a [*total*]{} density distribution (stars plus dark matter) ${\rho_{\rm g}}$ described by a Jaffe profile of total mass ${M_{\rm g}}$ and scale-length ${r_{\rm g}}$; the stellar density distribution ${\rho_*}$ is also described by a Jaffe profile of stellar mass ${M_*}$, and scale radius ${r_*}$. The velocity dispersion anisotropy of the stellar component is described by the Osipkov–Merritt formula (Merritt 1985), and a MBH is added at the center of the galaxy. Remarkably, almost all stellar dynamical properties of JJ models with a central MBH can be expressed by analytical functions. The accretion solution of CP17 for a MBH at the center of a Jaffe potential fully applies to JJ models, which are the first family of two-component galaxy models with a central MBH where both the fluidodynamics of accretion and the dynamics of the galaxy can be described in an analytical way. In practice, for isothermal accretion in the JJ models, we have the unique opportunity to compare easily the dynamical properties of the stellar component of the galaxy (velocity dispersion, MBH sphere of influence, etc.) with the accretion flow properties (the sonic radius, the Bondi radius, the critical accretion parameter, the Mach number profile, etc.). We take here an important step further, and fix the constant gas temperature ${T_{\infty}}$ by using the virial temperature of the stellar component.
In this way JJ models not only provide a more realistic potential well for accretion, but also determine the accretion temperature itself, yielding a natural “closure” for the problem and fully constraining the solution. In order to link the properties of JJ models to the general solution given in CP17, in the following two Sections we introduce the properties of JJ models that are relevant for the present study.

Structure of the JJ models
--------------------------

The density distribution of the stellar component of JJ galaxies is $${\rho_*}(r)={{M_*}{r_*}\over 4\pi r^2({r_*}+ r)^2}= {{\rho_{\rm n}}\over s^2(1+s)^2},\quad s\equiv {r\over{r_*}},$$ where ${M_*}$ is the total stellar mass and ${r_*}$ is the scale-length; the effective radius ${R_{\rm e}}$ of the stellar profile is ${R_{\rm e}}\simeq 0.7447{r_*}$. We adopt ${M_*}$ and ${r_*}$ as the mass and length scales, and define $${\rho_{\rm n}}\equiv{{M_*}\over 4\pi{r_*}^3},\quad {\Psi_{\rm n}}\equiv{G{M_*}\over{r_*}},\quad \mu\equiv{{M_{\rm BH}}\over{M_*}},$$ as the density and potential scales; the last parameter measures the MBH-to-galaxy stellar mass ratio. After introducing the structural parameters (cf. eq. 12) $${{\cal R}_{\rm g}}\equiv {{M_{\rm g}}\over{M_*}},\quad {\xi_{\rm g}}\equiv {{r_{\rm g}}\over{r_*}}$$ we can give the [*total*]{} density distribution (stars plus dark matter), which is also described by a Jaffe profile of scale-length ${r_{\rm g}}$ and total mass ${M_{\rm g}}={M_*}+{M_{\rm {DM}}}$: $${\rho_{\rm g}}(r)= {{\rho_{\rm n}}{{\cal R}_{\rm g}}{\xi_{\rm g}}\over s^2({\xi_{\rm g}}+s)^2}.$$ From the request that the dark halo has a non-negative total mass ${M_{\rm {DM}}}$, it follows that ${{\cal R}_{\rm g}}\geq 1$ (see eq. 22).
The cumulative mass within the sphere of radius $r$, and the associated gravitational potential, are given by $${M_{\rm g}}(r)={ {M_*}{{\cal R}_{\rm g}}s\over {\xi_{\rm g}}+s},\quad {\Phi_{\rm g}}(r)= {{\Psi_{\rm n}}{{\cal R}_{\rm g}}\over{\xi_{\rm g}}}\ln {s\over {\xi_{\rm g}}+s},$$ and the analogous quantities for the stellar component are obtained from eq. (24) for ${{\cal R}_{\rm g}}={\xi_{\rm g}}=1$. It also follows that the half-mass (spatial) radius of the total mass profile is ${r_{\rm g}}$, and it is ${r_*}$ for the stellar mass. The density distribution ${\rho_{\rm DM}}$ of the dark halo is given by $${\rho_{\rm DM}}(r)={{\rho_{\rm n}}\over s^2} \left[{{{\cal R}_{\rm g}}{\xi_{\rm g}}\over ({\xi_{\rm g}}+s)^2} - {1\over (1+s)^2} \right],$$ so that ${\rho_{\rm DM}}$ of JJ models [*is not a Jaffe profile, unless the stellar and total length scales are equal*]{} (${\xi_{\rm g}}=1$); the total mass associated with ${\rho_{\rm DM}}$ is ${M_{\rm {DM}}}={M_*}({{\cal R}_{\rm g}}-1)$. The request of a non-negative ${M_{\rm {DM}}}$ does not prevent the possibility of an unphysical, [*locally negative*]{} dark matter density, for an arbitrary choice of ${{\cal R}_{\rm g}}$ and ${\xi_{\rm g}}$. In fact, CZ18 showed that the condition to have ${\rho_{\rm DM}}\ge 0$ at all radii is $${{\cal R}_{\rm g}}\ge{{\mathcal{R}}_{\rm m}}\equiv\max\left({1\over {\xi_{\rm g}}},{\xi_{\rm g}}\right).$$ This condition also implies a monotonically decreasing ${\rho_{\rm DM}}$, with important dynamical consequences. A dark halo of a model with ${{\cal R}_{\rm g}}={{\mathcal{R}}_{\rm m}}$ is called a [*minimum halo*]{} model.
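The positivity condition of eq. (26) is easy to verify numerically; in this Python sketch (the grid and parameter values are arbitrary choices of ours) the minimum-halo density is non-negative everywhere, while any smaller ${{\cal R}_{\rm g}}$ produces a locally negative halo:

```python
def rho_dm(s, Rg, xig):
    # eq. (25) in units of rho_n: dark matter density of JJ models
    return (Rg * xig / (xig + s) ** 2 - 1.0 / (1.0 + s) ** 2) / s ** 2

xig = 3.0
Rm = max(1.0 / xig, xig)   # eq. (26): minimum-halo value of R_g

grid = [10.0 ** (0.01 * i - 3.0) for i in range(601)]   # s from 1e-3 to 1e3

# R_g = R_m: the halo density is non-negative everywhere ...
print(min(rho_dm(s, Rm, xig) for s in grid) >= 0.0)       # -> True
# ... R_g < R_m: the halo density becomes negative near the center
print(min(rho_dm(s, 0.9 * Rm, xig) for s in grid) < 0.0)  # -> True
```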
In the following we use the parameter $\alpha$ to measure how much larger ${{\cal R}_{\rm g}}$ is than the minimum halo value, for assigned ${\xi_{\rm g}}$: $${{\cal R}_{\rm g}}=\alpha {{\mathcal{R}}_{\rm m}},\quad \alpha\geq 1.$$ The special value ${\xi_{\rm g}}=1$ corresponds to the minimum value ${{\mathcal{R}}_{\rm m}}=1$, i.e., to stellar and total densities that are proportional; in this case, eq. (22) shows that for $\alpha=1$ there is no dark matter, and one recovers the one-component Jaffe model used in CP17. The properties of the dark halo profile as a function of ${\xi_{\rm g}}$ and ${{\cal R}_{\rm g}}$ are fully discussed in CZ18. We stress that [*everywhere in this paper we restrict to a dark profile more diffuse than the stellar mass*]{}, which is obtained for ${\xi_{\rm g}}\geq 1$; this choice corresponds to the common expectation for real galaxies[^2]. From eqs. (26) and (27), then, in the following we always have that ${{\cal R}_{\rm g}}=\alpha{\xi_{\rm g}}$. It can be of interest for applications to evaluate the dark matter fraction within a sphere of chosen radius. This amount is easily calculated from eq. (24) as: $${{M_{\rm {DM}}}(r)\over{M_{\rm g}}(r)} = 1 - {{\xi_{\rm g}}+s \over \alpha {\xi_{\rm g}}(1+s)},$$ where ${M_{\rm {DM}}}(r)={M_{\rm g}}(r)-{M_*}(r)$. Figure 1 shows the ratio between the dark mass and the total mass within a sphere of radius $r={R_{\rm e}}$, as a function of ${\xi_{\rm g}}$, for various values of $\alpha$: the minimum halo case ($\alpha =1$), and two cases (blue and red curves) with ${{\cal R}_{\rm g}}>{{\mathcal{R}}_{\rm m}}$. Dark matter fractions below unity are easily obtained, with low fractions ($<0.4$) for the minimum halo case. These low values agree with those required by the modeling of the dynamical properties of nearby early-type galaxies, which indicate that the dark mass within ${R_{\rm e}}$ is lower than the stellar mass (e.g., Cappellari et al. 2015).
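Eq. (28) makes such estimates immediate; a minimal Python sketch (parameter values are illustrative):

```python
def dm_fraction(s, alpha, xig):
    # eq. (28): dark-to-total mass ratio within the radius r = s * r_*
    return 1.0 - (xig + s) / (alpha * xig * (1.0 + s))

s_e = 0.7447   # R_e / r_* for the Jaffe stellar profile (eq. 20)

print(dm_fraction(s_e, 1.0, 1.0))   # one-component limit -> 0.0
print(dm_fraction(s_e, 1.0, 4.0))   # minimum halo, xi_g = 4: ≈ 0.32
```

The second value is below $0.4$, consistent with the low minimum-halo fractions quoted in the text.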
![Ratio between the dark mass and the total mass of JJ models (eq. 28), within a sphere of radius $r={R_{\rm e}}\simeq 0.75{r_*}$, as a function of ${\xi_{\rm g}}$, for the minimum halo case ($\alpha =1$, black), and two non-minimum halo cases ($\alpha=2$, blue; and $\alpha=3$, red). The dark mass fraction is zero for the one-component (stellar) model, obtained for ${\xi_{\rm g}}=1$ and $\alpha=1$.[]{data-label=""}](f1 "fig:"){height="55.00000%" width="55.00000%"}

Concerning the radial profile of ${\rho_{\rm DM}}$: for $r\to\infty$, eq. (25) shows that ${\rho_{\rm DM}}\sim ({{\cal R}_{\rm g}}{\xi_{\rm g}}-1){\rho_*}\propto r^{-4}$, and so the densities of the dark matter and of the stars in the outer regions are proportional. For $r\to 0$, in non-minimum-halo models ${\rho_{\rm DM}}\sim ({{\cal R}_{\rm g}}/{\xi_{\rm g}}-1){\rho_*}\propto r^{-2}$, while in the minimum-halo models ${\rho_{\rm DM}}\sim 2(1-1/{\xi_{\rm g}}) s {\rho_*}\propto r^{-1}$, so that these latter models are centrally baryon-dominated, being ${\rho_*}\propto r^{-2}$. It is interesting to compare the dark halo profile of JJ models with the NFW profile, of total mass ${M_{\rm {DM}}}$, that we rewrite for $r <{r_{\rm t}}$ (the so-called truncation radius) as: $${\rho_{\rm NFW}}(r)={{\rho_{\rm n}}({{\cal R}_{\rm g}}-1)\over f(c)s({\xi_{\rm NFW}}+s)^2},\quad f(c)=\ln(1+c) -{c\over 1+c},$$ where ${\xi_{\rm NFW}}\equiv r_{\rm NFW}/{r_*}$ is the NFW scale-length $r_{\rm NFW}$ in units of ${r_*}$, and $c\equiv{r_{\rm t}}/r_{\rm NFW}$. From the considerations above about the behavior of ${\rho_{\rm DM}}(r)$ at small and large radii, it follows that ${\rho_{\rm DM}}$ and ${\rho_{\rm NFW}}$ at small and large radii cannot in general be similar.
However, in the minimum-halo case, near the center ${\rho_{\rm DM}}\propto r^{-1}$, and it can be proven that ${\rho_{\rm DM}}$ and ${\rho_{\rm NFW}}$ can be made identical for $r\to 0$ by fixing $${\xi_{\rm NFW}}=\sqrt{{{\xi_{\rm g}}\over 2f(c)}}$$ in eq. (29). Therefore, once a specific JJ minimum halo model is considered, eqs. (29)-(30) allow one to determine the NFW profile that has the same total mass and central density profile as ${\rho_{\rm DM}}$. It turns out that the dark halo profiles of JJ minimum halo models are surprisingly well approximated by the NFW profile over a very large radial range, for realistic values of ${\xi_{\rm NFW}}$ and $c$ (CZ18).

Dynamics of the JJ models
-------------------------

CZ18 present and discuss the analytical solutions of the Jeans equations and all the dynamical properties of Osipkov-Merritt anisotropic JJ models with a central MBH of mass ${M_{\rm BH}}$, where the total gravitational potential is $$\Phi_{\rm T} (r)={\Phi_{\rm g}}(r)- {{\Psi_{\rm n}}\mu\over s}.$$ The radial component of the [*stellar*]{} velocity dispersion tensor is given by ${\sigma_{\rm r}}^2 (r)={\sigma_{\rm g}}^2 (r)+{\sigma_{\rm BH}}^2(r)$, where ${\sigma_{\rm g}}$ indicates the contribution due to ${\Phi_{\rm g}}(r)$, and ${\sigma_{\rm BH}}$ the contribution due to the MBH. Under the assumption of Osipkov-Merritt anisotropy, ${\sigma_{\rm g}}(0)$ is coincident with that of the isotropic case (except for purely radial orbits), independently of the anisotropy radius ${r_{\rm a}}>0$, and is given by: $${\sigma_{\rm g}}^2 (0)= {{\Psi_{\rm n}}{{\cal R}_{\rm g}}\over 2{\xi_{\rm g}}}={{\Psi_{\rm n}}\alpha\over 2},$$ where in the last identity we use eq. (27), and restrict to ${\xi_{\rm g}}\geq 1$. Note that the value of ${\sigma_{\rm g}}(0)$ is therefore [*independent*]{} of ${\xi_{\rm g}}$, for ${\xi_{\rm g}}\geq 1$, and, in the minimum halo model, it is [*coincident*]{} with that of the one-component (purely stellar) Jaffe model.
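The central matching of eq. (30) can be verified directly; in this Python sketch (the values of ${\xi_{\rm g}}$ and $c$ are arbitrary choices of ours) the ratio of the two densities tends to unity at small radii:

```python
import math

def f_c(c):
    # f(c) of eq. (29)
    return math.log(1.0 + c) - c / (1.0 + c)

def rho_dm_minhalo(s, xig):
    # eq. (25) with R_g = xi_g (minimum halo, xi_g >= 1), in units of rho_n
    return (xig * xig / (xig + s) ** 2 - 1.0 / (1.0 + s) ** 2) / s ** 2

def rho_nfw(s, xig, c):
    # eq. (29) with R_g = xi_g and xi_NFW fixed by eq. (30)
    xi_nfw = math.sqrt(xig / (2.0 * f_c(c)))
    return (xig - 1.0) / (f_c(c) * s * (xi_nfw + s) ** 2)

# For r -> 0 the two profiles become identical:
xig, c = 4.0, 10.0
for s in (1e-2, 1e-4, 1e-6):
    print(rho_dm_minhalo(s, xig) / rho_nfw(s, xig, c))   # ratio -> 1
```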
The leading term of the MBH contribution to ${\sigma_{\rm r}}$ near the center (except for the case of purely radial orbits, ${r_{\rm a}}=0$) coincides with the isotropic case independently of the Osipkov-Merritt anisotropy radius, with $${\sigma_{\rm BH}}^2 (r) \sim {{\Psi_{\rm n}}\mu\over 3 s};$$ at variance with ${\sigma_{\rm g}}(r)$, it diverges as $\mu /r$ for $r\to 0$. Therefore, in the presence of the central MBH, ${\sigma_{\rm r}}$ is dominated by its contribution, and so is the projected velocity dispersion (${\sigma_{\rm gp}}$). In order to relate the models to observed quantities, it is helpful to consider the [*projected*]{} velocity dispersion in the central regions. CZ18 show that for ${r_{\rm a}}>0$ $${\sigma_{\rm gp}}(0)={\sigma_{\rm g}}(0)$$ while, independently of the value of ${r_{\rm a}}\geq 0$ $${\sigma_{\rm BHp}}^2 (R)\sim {2{\Psi_{\rm n}}\mu\over 3\pi}{{r_*}\over R},$$ where $R$ is the radius in the projection plane. The two equations above determine a fiducial value for the radius (${R_{\rm inf}}$) of the so-called [*sphere of influence*]{}. We operationally define ${R_{\rm inf}}$ as the distance from the center in the projection plane where the (galaxy plus MBH) projected velocity dispersion ${\sigma_{\rm p}}(R)$ equals a chosen fraction $\epsilon$ of the projected velocity dispersion of the galaxy in absence of the MBH. In practice, as ${R_{\rm inf}}\ll{r_*}$, and in JJ models without MBH the velocity dispersion profile flattens to a constant value, we define $${\sigma_{\rm p}}(R)\simeq \sqrt{{\sigma_{\rm BHp}}^2 ({R_{\rm inf}})+{\sigma_{\rm gp}}^2 (0)}=(1+\epsilon){\sigma_{\rm gp}}(0),$$ and from eqs. (34)-(35) one has: $${{R_{\rm inf}}\over{r_*}}= {4{\xi_{\rm g}}\mu\over 3\pi{{\cal R}_{\rm g}}\epsilon(2+\epsilon)}= {4\mu\over 3\pi\alpha\epsilon(2+\epsilon)},$$ where the last identity holds for ${\xi_{\rm g}}\geq 1$. For realistic values of the parameters, ${R_{\rm inf}}$ is of the order of a few pc (see Sect.
5.1 for a more quantitative discussion). As anticipated in the Introduction, a reasonable estimate of the gas temperature, supported by observations (e.g., Pellegrini 2011), is given by the stellar virial temperature ${T_{\rm V}}=<\mu>{m_{\rm p}}{\sigma_{\rm V}}^2/3$. The definition of ${\sigma_{\rm V}}$ comes from the virial theorem, which, for the stellar component, reads: $$2{K_*}=-{W_{\rm *g}}-{W_{\rm *BH}},$$ where ${K_*}\equiv {M_*}{\sigma_{\rm V}}^2/2$ is the total kinetic energy of the stars, and $${W_{\rm *g}}= - 4\pi G\int_0^{\infty} r{\rho_*}(r){M_{\rm g}}(r)dr$$ is the interaction energy of the stars with the total gravitational field of the galaxy (stars plus dark matter), and finally $${W_{\rm *BH}}=-4\pi G {M_{\rm BH}}\int_0^\infty r {\rho_*}(r) dr$$ is the interaction energy of the stars with the central MBH. Note that ${W_{\rm *BH}}$ diverges, because the stellar density profile diverges near the origin as $ r^{-2}$; instead, ${W_{\rm *BH}}$ converges for $\gamma$ models with $0\le \gamma<2$. Since we will use ${K_*}$ to evaluate the gas temperature over the whole body of the galaxy (where the MBH effect is negligible), we only consider ${W_{\rm *g}}$ in the determination of ${\sigma_{\rm V}}$. Therefore ${\sigma_{\rm V}}^2\equiv -{W_{\rm *g}}/{M_*}$, where from CZ18: $${W_{\rm *g}}= -{\Psi_{\rm n}}{M_*}{{\cal R}_{\rm g}}{{\widetilde W}_{\rm g}},\quad {{\widetilde W}_{\rm g}}={{\xi_{\rm g}}-1-\ln {\xi_{\rm g}}\over ({\xi_{\rm g}}-1)^2},$$ and ${{\widetilde W}_{\rm g}}(1) =1/2$. It follows that ${\sigma_{\rm V}}^2={\Psi_{\rm n}}{{\cal R}_{\rm g}}{{\widetilde W}_{\rm g}}={\Psi_{\rm n}}\alpha{{\cal F}_{\rm g}}({\xi_{\rm g}})$, where we introduced the function ${{\cal F}_{\rm g}}({\xi_{\rm g}})\equiv{\xi_{\rm g}}{{\widetilde W}_{\rm g}}$. Note that ${{\cal F}_{\rm g}}({\xi_{\rm g}})$ increases from ${{\cal F}_{\rm g}}(1)=1/2$ to ${{\cal F}_{\rm g}}(\infty)=1$. In practice, at fixed $\alpha$ and increasing ${\xi_{\rm g}}$, eq.
(27) dictates that ${{\cal R}_{\rm g}}$ increases to arbitrarily large values, but since ${{\cal F}_{\rm g}}\to 1$, then ${W_{\rm *g}}$ and ${\sigma_{\rm V}}$ (and so the mass-weighted squared escape velocity) remain limited. Physically, this is due to the fact that more massive halos are necessarily more and more extended, due to the request for positivity in eq. (26), with a compensating effect on the depth of the total potential. Moreover, from eqs. (32) and (34) it follows that ${\sigma_{\rm V}}^2=2{{\cal F}_{\rm g}}({\xi_{\rm g}}){\sigma_{\rm gp}}^2(0)$, so that in JJ models without MBH, ${\sigma_{\rm V}}$ is just proportional to the stellar central projected velocity dispersion, and the proportionality constant is a function of ${\xi_{\rm g}}$ [*only*]{}, with ${\sigma_{\rm V}}={\sigma_{\rm gp}}(0)$ for ${\xi_{\rm g}}=1$.

  Symbol                 Description
  ---------------------- ---------------------------------------------------------------
  Galaxy structure:
  ${M_{\rm g}}$          total mass
  ${M_*}$                stellar mass
  ${M_{\rm BH}}$         central MBH mass
  ${r_{\rm g}}$          total density scale-length
  ${r_*}$                stellar density scale-length
  ${\sigma_{\rm V}}$     stellar virial velocity dispersion
  ${T_{\rm V}}$          stellar virial temperature
  $\mu$                  ${M_{\rm BH}}/{M_*}$
  ${\xi_{\rm g}}$        ${r_{\rm g}}/{r_*}$
  ${{\cal R}_{\rm g}}$   ${M_{\rm g}}/{M_*}$ ($=\alpha{\xi_{\rm g}}$, ${\xi_{\rm g}}\geq 1$, $\alpha\geq 1$)
  Accretion flow:
  ${T_{\infty}}$         temperature ($=\beta{T_{\rm V}}$, $\beta >0$)
  ${c_{\infty}}$         sound velocity
  ${r_{\rm B}}$          Bondi radius
  ${r_{\rm min}}$        sonic radius
  ${\dot M_{\rm t}}$     mass accretion rate
  ${{\cal M}}$           Mach number
  ${\lambda_{\rm t}}$    critical accretion parameter
  ${{\cal R}}$           ${M_{\rm g}}/{M_{\rm BH}}$
  $\xi$                  ${r_{\rm g}}/{r_{\rm B}}$
  ${x_{\rm min}}$        ${r_{\rm min}}/{r_{\rm B}}$
  ${\beta_{\rm c}}$      critical $\beta$ ($=3/(2{{\cal F}_{\rm g}})$)

  : List of parameters \[tab:params\]

Linking stellar dynamics to fluidodynamics
==========================================

In the general solution of CP17, once
that the parameters $\cal R$ and $\xi$ in eq. (12) are assigned, and the Jaffe structural scales ${M_{\rm g}}$ and ${r_{\rm g}}$ characterizing the [*total*]{} galactic potential are chosen, the gas temperature ${T_{\infty}}$ remains fixed, because $\xi ={r_{\rm g}}/{r_{\rm B}}$. Therefore, generic values of $\xi$ can easily correspond to unrealistic values of the gas temperature. Clearly, JJ models offer an interesting possibility: as the dynamical properties of the stellar component of the galaxy can be analytically calculated once the total potential (due to stars, dark matter and central MBH) is assigned, the virial theorem for the stellar component can be used to compute the virial “temperature” ${T_{\rm V}}$ of the stars, a realistic proxy for the gas temperature ${T_{\infty}}$; then, the CP17 solution for the accreting gas in the total Jaffe potential of given ${T_{\rm V}}$ can be built. In practice, the idea is to self-consistently “close” the model, determining a fiducial value for the gas temperature as a function of the galaxy model hosting accretion. In this approach, the steps to build an accretion solution are: 1) choose ${M_*}$, ${r_*}$, $\mu$, ${{\cal R}_{\rm g}}$ and ${\xi_{\rm g}}$ for a realistic galactic model; 2) obtain the virial temperature ${T_{\rm V}}$ of the stellar component; 3) derive ${{\cal R}}$ and $\xi$ to be used in the Bondi problem, and construct the corresponding CP17 solution. Suppose the galaxy parameters in the first step are given. Then, for assigned ${{\cal R}_{\rm g}}$, ${\xi_{\rm g}}$ and $\mu$, the parameter ${{\cal R}}$ in the accretion solution is obtained from eqs. (12) and (21)-(22) as $${{\cal R}}={{{\cal R}_{\rm g}}\over\mu} = {\alpha{\xi_{\rm g}}\over\mu},$$ where the last expression depends on the fact that we restrict to the case ${\xi_{\rm g}}\geq 1$. Since $\mu\approx 10^{-3}$, and ${{\cal R}_{\rm g}}$ is expected (say) in the range $1\div 20$, the ${{\cal R}}$ values fall in the range $10^3\div 10^4$ (see also Sect.
5.1 for a more quantitative discussion).

![image](f2a.pdf){height="50.00000%" width="50.00000%"} ![image](f2c.pdf){height="50.00000%" width="50.00000%"} ![image](f2b.pdf){height="50.00000%" width="50.00000%"} ![image](f2d.pdf){height="50.00000%" width="50.00000%"}

From eq. (12), the second parameter ($\xi$) characterizing the accretion solution requires the computation of the Bondi radius ${r_{\rm B}}$, and then of the gas temperature ${T_{\infty}}$; we choose ${T_{\infty}}=\beta{T_{\rm V}}$, with $\beta >0$ a dimensionless parameter. Thus the isothermal sound speed is given by: $${c_{\infty}}^2={k_{\rm B} {T_{\infty}}\over <\mu> {m_{\rm p}}}={\beta{\sigma_{\rm V}}^2\over 3},$$ and from eqs. (3) and (41) the Bondi radius reads: $${{r_{\rm B}}\over{r_*}}={3\mu\over \alpha\beta{{\cal F}_{\rm g}}},\quad {{\cal F}_{\rm g}}\equiv{\xi_{\rm g}}{{\widetilde W}_{\rm g}}({\xi_{\rm g}}).$$ From the behavior of the function ${{\cal F}_{\rm g}}({\xi_{\rm g}})$, it follows that at fixed $\alpha$ and $\beta$ one has: $${3\mu\over\alpha\beta}\leq{{r_{\rm B}}\over{r_*}}\leq {6\mu\over\alpha\beta},$$ where the lower limit is obtained for ${\xi_{\rm g}}\to\infty$, and the upper limit for ${\xi_{\rm g}}=1$; in this latter case, $\alpha =1$ gives the value of ${r_{\rm B}}/{r_*}$ in a one-component (stellar) Jaffe galaxy. As expected, ${r_{\rm B}}$ decreases for increasing $\alpha$, $\beta$ and ${\xi_{\rm g}}$, i.e., for increasing ${T_{\infty}}$. Figure 2 (top left panel) shows the trend of ${r_{\rm B}}/{r_*}$ with ${\xi_{\rm g}}$, in the minimum halo case ($\alpha =1$), and for $\alpha=2$ and $3$, when $\beta=1$ and $\mu=0.002$. Therefore, in a real galaxy with ${r_*}$ of the order of a few kpc, and with a gas temperature of the order of ${T_{\rm V}}$, ${r_{\rm B}}$ is of the order of tens of pc (see also Sect. 5.1 for a more quantitative discussion). As additional information on ${r_{\rm B}}$, in Fig.
2 (top right panel) we show the trend of ${r_{\rm B}}/{R_{\rm inf}}$ with ${\xi_{\rm g}}$, for $\beta =1$; note that from eqs. (37) and (44), the ratio is independent of $\alpha$ and $\mu$, so that only one curve is plotted; higher values of $\beta$ correspond to smaller values of ${r_{\rm B}}$. Using eqs. (44) and (12), we finally obtain the expression $$\xi ={\alpha\beta{\xi_{\rm g}}{{\cal F}_{\rm g}}\over 3\mu}={{{\cal R}}\beta{{\cal F}_{\rm g}}\over 3}.$$ It follows that $\xi \to \alpha\beta/(6\mu)$ for ${\xi_{\rm g}}\to 1$, while it grows without bound for ${\xi_{\rm g}}\to\infty$, as $\xi\sim \alpha\beta{\xi_{\rm g}}/(3\mu)$. In practice, at variance with the general cases in KCP16 and CP17, now ${{\cal R}}$ and $\xi$ are linked, and increasing values of ${{\cal R}}$ correspond to increasing values of $\xi$. The list of all parameters introduced in this work is given in Table 1. Figure 2 (bottom left panel) shows the trend of $\xi $ with ${\xi_{\rm g}}$, in the minimum halo case ($\alpha =1$), and for $\alpha=2$ and $3$, and $\beta =1$; ${r_{\rm B}}$ is of the order of $10^{-3}{r_{\rm g}}$; higher values of $\beta$ correspond to larger values of $\xi$. Having obtained the expressions for ${{\cal R}}$ and $\xi$ as a function of the model parameters, a few considerations are in order. The first is that in JJ models isothermal accretion is [*always*]{} possible in the absence of a central MBH when $\beta =1$, because the accretion condition in eq. (17) is automatically satisfied by the virial temperature of the stellar component, when ${T_{\infty}}={T_{\rm V}}$. By allowing for ${T_{\infty}}>{T_{\rm V}}$, it is easy to show that Bondi isothermal accretion in the [*absence*]{} of a central MBH (or when $\chi =0$) is possible in JJ models only for gas temperatures lower than a critical value, i.e. only for $\beta \leq {\beta_{\rm c}}\equiv 3/(2{{\cal F}_{\rm g}})$.
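The three-step closure described above (choose the galaxy parameters, compute the virial temperature, derive ${{\cal R}}$ and $\xi$) can be collected in a single routine (a Python sketch; parameter values are illustrative):

```python
import math

def accretion_parameters(mu, alpha, xig, beta, chi=1.0):
    """Map the JJ-model parameters (mu, alpha, xi_g) and the temperature
    parameter beta = T_inf/T_V onto the accretion parameters of CP17."""
    # eq. (41): F_g = xi_g * W_g(xi_g), with F_g(1) = 1/2
    Fg = 0.5 if xig == 1.0 else xig * (xig - 1.0 - math.log(xig)) / (xig - 1.0) ** 2
    R = alpha * xig / mu                             # eq. (42), for xi_g >= 1
    rB_over_rstar = 3.0 * mu / (alpha * beta * Fg)   # eq. (44)
    xi = R * beta * Fg / 3.0                         # eq. (46)
    beta_c = 1.5 / Fg                                # critical temperature parameter
    # eq. (15): sonic point in units of r_B
    a = R + chi - 2.0 * xi
    x_min = (a + math.sqrt(a * a + 8.0 * chi * xi)) / 4.0
    return R, xi, rB_over_rstar, beta_c, x_min

R, xi, rB, beta_c, x_min = accretion_parameters(mu=0.002, alpha=1.0,
                                                xig=1.0, beta=1.0)
print(R, xi, rB, beta_c)   # R = 500, beta_c = 3 for this choice
```

For ${\xi_{\rm g}}=1$ the routine reproduces the limits quoted in the text: ${r_{\rm B}}/{r_*}=6\mu/(\alpha\beta)$ and $\xi=\alpha\beta/(6\mu)$.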
From the behavior of ${{\cal F}_{\rm g}}$ it follows that $3/2\leq{\beta_{\rm c}}\leq 3$, where the lower limit corresponds to ${\xi_{\rm g}}\to\infty$ and the upper limit to ${\xi_{\rm g}}=1$. The second is that from eq. (46) the ratio ${{\cal R}}/\xi =2{\beta_{\rm c}}/\beta$ appearing in the definition of $f(x)$ in eq. (13) depends on $\beta$ and ${\xi_{\rm g}}$ [*only*]{}, and it shows that for very large values of $\beta$ the problem reduces to the classical Bondi accretion. The critical value ${\beta_{\rm c}}$ plays an important role also in models [*with*]{} a central MBH, determining a particular temperature at which there is a sudden transition in the position of the sonic radius. In fact, the location of ${r_{\rm min}}$ in terms of the scale-length ${r_*}$, is given by $${{r_{\rm min}}\over{r_*}}={x_{\rm min}}(\chi,{{\cal R}},\xi) \, {{r_{\rm B}}\over{r_*}},$$ where ${x_{\rm min}}$ is given by eq. (15), and can be easily computed analytically once ${{\cal R}}$ and $\xi$ are determined. Figure 2 (bottom right panel) shows ${r_{\rm min}}/{r_*}$ as a function of ${\xi_{\rm g}}$, for $\alpha =1$ and different values of $\beta$. The most relevant feature is the considerable variation in the position of ${r_{\rm min}}$, from very external to very internal regions, as $\beta$ increases. Equation (47) shows that ${r_{\rm min}}$ is determined by the combined behavior of two functions, namely ${x_{\rm min}}$ and ${r_{\rm B}}/{r_*}$; we now focus on ${x_{\rm min}}$, having already established that ${r_{\rm B}}\propto 1/\beta$, so that the large variation of ${r_{\rm min}}$ can only in part be due to the dependence of ${r_{\rm B}}$ on $\beta$. Instead, from eqs.
(15), (42) and (46), for ${{\cal R}}\to\infty$ and fixed[^3] $\beta$, ${x_{\rm min}}$ is given by: $${x_{\rm min}}\sim \begin{cases} \displaystyle{{{{\cal R}}\; (1- \beta/{\beta_{\rm c}})\over 2},\quad \beta<{\beta_{\rm c}},}\\ \displaystyle{ {\sqrt{\chi{{\cal R}}}\over 2},\quad \beta = {\beta_{\rm c}},}\\ \displaystyle{{\chi \over 2(1 -{\beta_{\rm c}}/\beta)},\quad \beta >{\beta_{\rm c}}.} \end{cases}$$ Note that the limit ${{\cal R}}\to\infty$ describes models with increasing $\alpha$ at fixed $\mu$ and ${\xi_{\rm g}}$, or with increasing ${\xi_{\rm g}}$ at fixed $\alpha$ and $\mu$, or a vanishing MBH mass at fixed $\alpha$ and ${\xi_{\rm g}}$. Since in the present models $\alpha\geq 1$, ${\xi_{\rm g}}\geq 1$ and $\mu=0.002$, ${{\cal R}}$ is quite large, and the asymptotic trends in eq. (48) already provide a reasonable approximation of the true behavior, an approximation that becomes increasingly accurate for large values of $\alpha$ and ${\xi_{\rm g}}$, and small values of $\mu$. Of course, an independent check of the first two expressions above can be obtained by recovering them from the exact eq. (16) (pertinent to a Jaffe galaxy with ${M_{\rm BH}}=0$), by using eqs. (42) and (46) for vanishing MBH mass, i.e., $\mu\to 0$ and ${{\cal R}}\to\infty$, and $\beta\leq{\beta_{\rm c}}$. Qualitatively, eq. (48) shows that for $\beta <{\beta_{\rm c}}$, ${x_{\rm min}}$ increases proportionally to ${{\cal R}}$. Instead, for $\beta >{\beta_{\rm c}}$, ${x_{\rm min}}$ is independent of ${{\cal R}}$, and for very large values of the gas temperature it tends to $\chi/2$, the limit value of classical Bondi accretion with electron scattering (e.g., CP17, eq. 25). As Fig. 3 shows, even a moderate increase in the gas temperature produces a sudden decrease in the value of ${x_{\rm min}}$. A numerical investigation of polytropic accretion in JJ models shows that ${x_{\rm min}}$ suddenly drops to values $\la 1$ as $\gamma$ increases with respect to the isothermal case.
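The sudden transition across $\beta={\beta_{\rm c}}$ described by eq. (48) can be sketched as a small piecewise function; the values ${{\cal R}}=10^4$, $\chi=1$ and ${\beta_{\rm c}}\simeq 1.69$ used below are illustrative (${\beta_{\rm c}}\simeq 1.7$ corresponds to ${\xi_{\rm g}}=20$ in Fig. 3).

```python
import math

# Asymptotic sonic-point position x_min = r_min / r_B of eq. (48),
# valid for R -> infinity at fixed beta; beta_c is the critical
# temperature parameter defined in the text.
def xmin_asymptotic(R, chi, beta, beta_c):
    if beta < beta_c:
        return R * (1.0 - beta / beta_c) / 2.0
    if beta == beta_c:
        return math.sqrt(chi * R) / 2.0
    return chi / (2.0 * (1.0 - beta_c / beta))

# Illustration of the sudden transition (R = 1e4, chi = 1, beta_c ~ 1.69):
R, chi, beta_c = 1.0e4, 1.0, 1.69
cold = xmin_asymptotic(R, chi, 1.0, beta_c)      # beta < beta_c: ~2e3
crit = xmin_asymptotic(R, chi, beta_c, beta_c)   # beta = beta_c: 50
hot = xmin_asymptotic(R, chi, 3.0, beta_c)       # beta > beta_c: ~1.1
print(cold, crit, hot)
```

The drop of the sonic point from thousands of Bondi radii to $x_{\rm min}=O(1)$ as $\beta$ crosses ${\beta_{\rm c}}$ is immediate.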
This behavior is reminiscent of the sudden transition of ${x_{\rm min}}$ from external to internal regions in Hernquist galaxies, discussed in CP17; in this case the transition is due to the existence, for $\gamma >1$, of [*two*]{} minima for the polytropic function $f(x)$ of the Jaffe potential (as obtained by inserting eq. 11 into eq. 47 of KCP16). In the Hernquist case, the two minima can be present also in the isothermal case (CP17, Appendix B.2), while for the Jaffe potential in the isothermal case there is only one minimum, given in eq. (15). It is now easy to explain the behavior of the curves in Fig. 2 (bottom right panel), where several cases of eq. (47) are plotted. For example, from the first identity in eq. (48) and eqs. (42)-(44), eq. (47) predicts that for ${{\cal R}}\to\infty$ and $\beta<{\beta_{\rm c}}$, we have ${r_{\rm min}}/{r_*}\sim{\xi_{\rm g}}({\beta_{\rm c}}/\beta -1)$, so that for ${\xi_{\rm g}}\to \infty$ and $\beta = 1$, one finds ${r_{\rm min}}/{r_*}\sim{\xi_{\rm g}}/2$, while for ${\xi_{\rm g}}=1$ and large values of $\alpha/\mu$, one has ${r_{\rm min}}\approx 2{r_*}$. For ${T_{\infty}}$ corresponding to the range $3/2\leq\beta\leq 3$, there is a transition value of ${\xi_{\rm g}}$ such that, for larger ${\xi_{\rm g}}$, ${\beta_{\rm c}}$ drops below the adopted $\beta$, and the third expression in eq. (48) applies. In these cases the sonic radius moves to the central regions, with ${r_{\rm min}}/{r_*}\sim \chi\mu/[\alpha (\beta/{\beta_{\rm c}}-1)]$. ![Position of ${x_{\rm min}}$ as a function of the temperature parameter $\beta$, for $\mu=0.002$ and for three minimum halo models ($\alpha =1$), with ${\xi_{\rm g}}=1,3,20$; these correspond to critical values of the temperature given by ${\beta_{\rm c}}\simeq 3, 2.2, 1.7$, respectively (solid dots). The values of ${x_{\rm min}}$ are reproduced remarkably well by the asymptotic expressions in eq.
(48).[]{data-label=""}](f3.pdf "fig:"){height="55.00000%" width="55.00000%"}\ Finally, Fig. 4 shows the trend of ${\lambda_{\rm t}}$ as a function of ${\xi_{\rm g}}$, for $\alpha =1,2,3$ and, when $\alpha =1$, for different gas temperatures as determined by the $\beta$ value (dotted lines). In analogy with eq. (48), an asymptotic analysis shows that in the limit of ${{\cal R}}\to\infty$, $${\lambda_{\rm t}}\sim \begin{cases} \displaystyle{{{{\cal R}}^2\;(1- \beta/{\beta_{\rm c}})^{2-{2{\beta_{\rm c}}\over \beta}}\over 4\sqrt{e}}, \quad \beta<{\beta_{\rm c}},}\\ \displaystyle{{{{\cal R}}^2\over 4\sqrt{e}},\quad \beta = {\beta_{\rm c}},} \end{cases}$$ and for simplicity we do not report the expression of ${\lambda_{\rm t}}$ for $\beta>{\beta_{\rm c}}$, which, however, can be easily calculated. For fixed ${{\cal R}}$ and $\chi$, very large $\beta$ correspond to ${\lambda_{\rm t}}\sim\chi^2{\lambda_{\rm cr}}$, in accordance with the classical case (KCP16, CP17). Equation (49) nicely explains the values and the trend of ${\lambda_{\rm t}}$ with ${\xi_{\rm g}}$ and $\alpha$, in particular the almost perfect proportionality of ${\lambda_{\rm t}}$ to $\alpha^2{\xi_{\rm g}}^2$. From eq. (14), this implies that, for the same boundary conditions, the true accretion rate ${\dot M_{\rm t}}$ for increasing ${\xi_{\rm g}}$ becomes much larger than ${\dot M_{\rm B}}$, the rate in the sole presence of the MBH. ![The critical accretion parameter ${\lambda_{\rm t}}$ as a function of ${\xi_{\rm g}}$, for the minimum halo case $\alpha=1$ (black), and $\alpha = 2,3$ (blue and red), and $\chi =1$ and $\beta=1$. The dotted curves refer to $\alpha =1$ and three different values of $\beta$.[]{data-label=""}](f4.pdf "fig:"){height="55.00000%" width="55.00000%"}\

Two applications
================

We present here two applications of the results above.
The first is a practical illustration of how to determine the main parameters describing the galactic structure, and the gas accretion in it, for JJ models (see Table 1). We will see how reasonable values for the main structural properties can be obtained, so that realistic galaxy models can be built. The second application considers the deviation from the true value of the mass accretion rate when the rate is estimated within the framework of classical Bondi theory, using the density along the accretion solution in JJ galaxies. ![image](f5a){height="55.00000%" width="55.00000%"} ![image](f5b){height="55.00000%" width="55.00000%"} ![image](f5c){height="55.00000%" width="55.00000%"} ![image](f5d){height="55.00000%" width="55.00000%"}

Accretion in realistic galaxy models
------------------------------------

Here we show how to build JJ galaxy models with the main observed properties of real galaxies, and how to derive the corresponding parameters for isothermal accretion. The first step consists of the determination of the stellar component of the JJ galaxy. This is done via the choice of two main galaxy properties, for example the effective radius ${R_{\rm e}}$ and the stellar projected velocity dispersion ${\sigma_{\rm gp}}(0)$. For JJ models ${\sigma_{\rm gp}}(0)={\sigma_{\rm g}}(0)$, and ${\sigma_{\rm g}}(r)$ is quite flat at the center; thus ${\sigma_{\rm gp}}(0)$ is very close to the emission-weighted projected stellar velocity dispersion within a small fraction of ${R_{\rm e}}$ (as typically given by observations). For the chosen ${R_{\rm e}}$ and ${\sigma_{\rm gp}}(0)$, one then has ${r_*}\simeq 1.34 {R_{\rm e}}$ \[see below eq. (20)\], and ${M_*}$ follows from eqs. (32) and (21) once a value of $\alpha\geq 1$ is fixed. The central MBH enters the problem via the choice of $\mu$, which here we take to be $\mu=2\times 10^{-3}$ (e.g., Kormendy & Ho 2013). Then the radius of influence ${R_{\rm inf}}$ can be evaluated from eq. (37).
As an example, for a choice of ${R_{\rm e}}=5$ kpc and ${\sigma_{\rm gp}}(0)=210$ km s$^{-1}$, one has ${r_*}\simeq 6.7$ kpc, $\alpha{M_*}\simeq 1.38\times 10^{11}$ M$_{\odot}$, and ${R_{\rm inf}}\simeq 4.6/\alpha$ pc, for a fiducial $\epsilon=0.5$. The second step consists of the determination of the parameters ${{\cal R}_{\rm g}}$ and ${\xi_{\rm g}}$ that fix the total density distribution of the galaxy, and in particular its total potential. Since ${{\cal R}_{\rm g}}= \alpha {\xi_{\rm g}}$, we can fix only one of ${{\cal R}_{\rm g}}$ and ${\xi_{\rm g}}$ independently. It may be convenient to fix ${\xi_{\rm g}}$, for the following reason. A detailed dynamical modeling of stellar kinematical data for galaxies of the local universe has shown that the dark matter fraction within ${R_{\rm e}}$ is low (e.g., Cappellari 2016). To match this result, one can use eq. (28), which relates ${\xi_{\rm g}}$ and the dark matter fraction within any radius $r$; thus, for the desired (low) value of the dark matter fraction, ${\xi_{\rm g}}$ is determined. Figure 1 shows that the ratio ${M_{\rm {DM}}}({R_{\rm e}})/{M_{\rm g}}({R_{\rm e}})$ is always in the range determined by the dynamical modeling, for $\alpha=1$. The figure also shows that, for ${\xi_{\rm g}}\ga 5$, the dark matter fraction within $r={R_{\rm e}}$ is quite independent of ${\xi_{\rm g}}$. This means that a certain freedom remains in the choice of ${\xi_{\rm g}}$, and we can further constrain it from other considerations. If $\alpha=1$, we may require that the scale-length of the dark halo ($r_{\rm NFW}$) is larger than that of the stars (i.e., ${\xi_{\rm NFW}}>1$). For example, a value of the concentration parameter $c\simeq 10$, as predicted for galaxies of the local universe (e.g., Dutton & Macciò 2014), gives ${\xi_{\rm NFW}}=2.6$ for ${\xi_{\rm g}}=20$ \[eq. (30)\].
Finally, one recovers the stellar virial velocity dispersion ${\sigma_{\rm V}}^2=2 {{\cal F}_{\rm g}}({\xi_{\rm g}}) {\sigma_{\rm gp}}^2(0)$, and then ${T_{\rm V}}$ from ${\sigma_{\rm V}}^2$. Since ${{\cal F}_{\rm g}}({\xi_{\rm g}})$ varies only by a factor of two as ${\xi_{\rm g}}$ ranges from 1 to $\infty$, ${T_{\rm V}}$ in turn varies at most by a factor of two. For ${\xi_{\rm g}}=20$, one has ${\sigma_{\rm V}}\simeq 280$ km s$^{-1}$ and ${T_{\rm V}}\simeq 2.0\times 10^6$ K (for $\langle\mu\rangle =0.6$). Having completely determined the galaxy structure with the choice of two observed quantities, ${R_{\rm e}}$ and ${\sigma_{\rm gp}}(0)$, and three parameters ($\mu$, $\alpha$, ${\xi_{\rm g}}$), the next step consists of the determination of the accretion properties. These are all fixed, once the galaxy structure is fixed; only the gas temperature needs to be chosen. The first parameter describing accretion is ${{\cal R}}$, obtained from eq. (42). The second parameter is $\xi$, which comes from eq. (46), once the parameter $\beta$ is fixed in eq. (43). With the choice of this last parameter, i.e., with the choice of ${T_{\infty}}$, all the accretion properties are finally determined analytically, in particular the gas sound speed ${c_{\infty}}$ \[eq. (43)\], the Bondi radius ${r_{\rm B}}$ \[eqs. (3) and (44)\], the sonic radius ${r_{\rm min}}$ \[eqs. (15) and (47)\], the critical accretion parameter ${\lambda_{\rm t}}$ \[as described below eq. (15)\], and finally the Mach number profile ${{\cal M}}$ \[eq. (18)\]. As an example, for the galaxy model considered, for $\alpha=1$, $\beta =1$ and ${\xi_{\rm g}}=20$, one has ${{\cal R}}=10^4$, ${r_{\rm B}}\simeq 45$ pc, $\xi\simeq 2.96\times 10^3$, ${r_{\rm min}}\simeq 93$ kpc, and ${\lambda_{\rm t}}\simeq 5.23\times 10^7$.
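The numbers quoted above can be checked with a few lines of arithmetic. Two inputs are assumptions, not given explicitly in this section: the value ${{\cal F}_{\rm g}}(20)\simeq 0.888$ (inferred from the quoted $\xi\simeq 2.96\times 10^3$) and the relation ${T_{\rm V}}=\langle\mu\rangle m_{\rm p}{\sigma_{\rm V}}^2/(3 k_{\rm B})$, chosen so as to reproduce the quoted virial temperature.

```python
import math

# Numerical check of the worked example: R_e = 5 kpc, sigma_gp(0) = 210 km/s,
# alpha = 1, beta = 1, xi_g = 20, mu = 0.002. F_g = 0.888 and the T_V
# relation below are assumptions (see lead-in).
m_p, k_B = 1.6726e-27, 1.380649e-23         # SI units

r_star = 1.34 * 5.0                         # ~6.7 kpc (see below eq. 20)
F_g = 0.888
sigma_V = math.sqrt(2.0 * F_g) * 210.0e3    # sigma_V^2 = 2 F_g sigma_gp(0)^2, ~280 km/s
T_V = 0.6 * m_p * sigma_V**2 / (3.0 * k_B)  # ~2e6 K for <mu> = 0.6

R = 1.0 * 20.0 / 0.002                      # R = alpha xi_g / mu = 1e4
beta_c = 3.0 / (2.0 * F_g)
# beta < beta_c branch of the asymptotic eq. (49), with beta = 1:
lam_t = R**2 * (1.0 - 1.0 / beta_c)**(2.0 - 2.0 * beta_c) / (4.0 * math.sqrt(math.e))
print(r_star, sigma_V / 1e3, T_V, lam_t)
```

The asymptotic eq. (49) already reproduces the quoted ${\lambda_{\rm t}}\simeq 5.23\times 10^7$ to better than a percent, and makes the proportionality ${\lambda_{\rm t}}\propto\alpha^2{\xi_{\rm g}}^2$ at fixed $\beta$ and ${\xi_{\rm g}}$-dependent ${\beta_{\rm c}}$ explicit.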
Instead, changing only the gas temperature to $\beta =2$, one has ${{\cal R}}$ unchanged, ${r_{\rm B}}\simeq 23$ pc, $\xi\simeq 5.91\times 10^3$, ${r_{\rm min}}\simeq 73$ pc, and ${\lambda_{\rm t}}\simeq 2.85\times 10^6$. Figure 5 shows the Mach number profiles for accretion onto an MBH (the classic Bondi problem), and onto an MBH at the center of a JJ model, for the minimum halo case and three values of $\beta$. For the three galaxy models the top axis gives the radial scale in terms of $r/{r_*}$. Again it is apparent how a modest increase in the gas temperature produces a dramatic decrease of ${r_{\rm min}}$. Figure 6 shows a comparison between the gas velocity profile and the stellar velocity dispersion profile, for the JJ models in Fig. 5. Notice that near the center ${\sigma_{\rm BH}}\sim r^{-1/2}$ and ${{\cal M}}\sim r^{-1/2}$, so that their ratio is a constant; it can be easily shown that this constant is ${\sqrt{6\chi}}$, independently of $\alpha$, $\beta$, ${\xi_{\rm g}}$, ${{\cal R}_{\rm g}}$ and $\mu$. In principle, then, the value of ${\sigma_{\rm BH}}$ close to the center of a galaxy is a proxy for the (isothermal) gas inflow velocity. ![Accretion velocity profile for the gas (dotted) and isotropic stellar velocity dispersion profile (solid), both normalized to $\sqrt{{\Psi_{\rm n}}}$, for the minimum halo model with ${{\cal R}_{\rm g}}={\xi_{\rm g}}=20$. The accretion solutions correspond to $\beta = 1,2,3$ and are given by the black, blue and red dotted lines, respectively. The horizontal dotted lines mark the corresponding values of ${c_{\infty}}/\sqrt{{\Psi_{\rm n}}}$. For each $\beta$ the intersection between the accretion velocity and the sound velocity marks the sonic point (bottom right panel in Fig.
2, solid dots).[]{data-label=""}](f6 "fig:"){height="55.00000%" width="55.00000%"}

The bias in estimates of the mass accretion rate {#sec:biasclass}
------------------------------------------------

We investigate here the use of the classical Bondi solution in problems involving accretion onto MBHs residing at the center of galaxies. This use is common in the interpretation of observational results, in numerical investigations, or in cosmological simulations (see Sect. 1). In many such studies, when the instrumental resolution is limited, or the numerical resolution is inadequate, an estimate of the mass accretion rate is derived using the classical Bondi solution, taking values of temperature and density measured at some finite distance from the MBH. This procedure clearly produces an estimate that can depart from the true value, even when assuming that accretion fulfills the hypotheses of the Bondi model (stationarity, spherical symmetry, etc.). KCP16 developed the analytical setup of the problem for generic polytropic accretion, with the inclusion of the effects of radiation pressure and of a galactic potential; they also investigated numerically the size of the deviation for Hernquist galaxies. A detailed exploration of the deviation for isothermal accretion in one-component Jaffe and Hernquist galaxies was presented in CP17. Here we consider the more realistic case of two-component JJ models in the isothermal regime, exploiting their fully analytical character. We first consider the deviation of the estimate of the mass accretion rate for the classical Bondi solution, when taking values of temperature and density measured at some finite distance from the MBH. For assigned values of ${\rho_{\infty}}$, ${T_{\infty}}$, $\gamma$ and ${M_{\rm BH}}$, the Bondi radius ${r_{\rm B}}$ and the critical accretion rate ${\dot M_{\rm B}}$ are given by eq. (3) and by eq. (5) with $\lambda={\lambda_{\rm cr}}$.
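As a concrete preview of the size of this effect in the classical isothermal case, the density ratio along the Bondi solution can be evaluated numerically. The sketch below assumes the standard Lambert-W form of the classical isothermal Mach-number profile (the form used in CP17); it is an illustration, not a reproduction of the paper's equations.

```python
import numpy as np
from scipy.special import lambertw

# Classical isothermal Bondi accretion (chi = 1, no radiation pressure):
# the Mach profile satisfies M^2 exp(-M^2) = (lam_cr^2 / x^4) exp(-2/x),
# with the W_0 branch outside the sonic point (x > 1/2) and the
# W_{-1} branch inside; lam_cr = e^{3/2}/4 is the critical eigenvalue.
lam_cr = np.exp(1.5) / 4.0

def mach(x):
    arg = -(lam_cr**2 / x**4) * np.exp(-2.0 / x)
    branch = 0 if x > 0.5 else -1
    return np.sqrt(-lambertw(arg, branch).real)

def bias(x):
    # Mdot_e / Mdot_B = lam_cr / (x^2 M(x)): the density ratio along
    # the solution, evaluated at the measurement radius x = r / r_B.
    return lam_cr / (x**2 * mach(x))

print(bias(0.1), bias(1.0), bias(10.0))   # overestimate grows toward the center
```

The ratio tends to unity for measurements taken far outside the Bondi radius and grows steeply inward, as described below.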
If one inserts in these equations the values of $\rho (r)$ and $T(r)$ at a finite distance $r$ from the MBH, taken along the classical Bondi solution, and considers them as “proxies” for ${\rho_{\infty}}$ and ${T_{\infty}}$, then an [*estimated*]{} value for the accretion radius (${r_{\rm e}}$) and mass accretion rate (${\dot M_{\rm e}}$) is obtained: $${r_{\rm e}}(r)\equiv {G{M_{\rm BH}}\over{c_{\rm s}}^2(r)},\quad {\dot M_{\rm e}}(r) \equiv 4 \pi {r_{\rm e}}^2(r) {\lambda_{\rm cr}}\rho(r){c_{\rm s}}(r).$$ The question is how much ${r_{\rm e}}$ and ${\dot M_{\rm e}}$ depart from the true values ${r_{\rm B}}$ and ${\dot M_{\rm B}}$, as a function of $r$. In the isothermal case the sound speed is constant, with ${c_{\rm s}}(r) ={c_{\infty}}$, and then ${r_{\rm e}}(r)={r_{\rm B}}$, independently of the distance from the center at which the temperature is evaluated. Then ${\dot M_{\rm e}}(r) = 4 \pi {r_{\rm B}}^2 {\lambda_{\rm cr}}\rho(r){c_{\infty}}$; at infinity, ${\dot M_{\rm e}}= {\dot M_{\rm B}}$. At finite radii, instead, $${{\dot M_{\rm e}}(r)\over {\dot M_{\rm B}}} ={\tilde\rho}(x)={{\lambda_{\rm cr}}\over x^2 {{\cal M}}(x)},$$ where the last identity comes from eq. (9), and ${{\cal M}}(x)$ is given in eq. (19) of CP17. The deviation of ${\dot M_{\rm e}}$ from ${\dot M_{\rm B}}$ is then just given by ${{\tilde\rho}(x)}$ at the radius $r$ where the “measurement” is taken. Thus, ${\dot M_{\rm e}}$ gives an overestimate of ${\dot M_{\rm B}}$, and this overestimate becomes larger for decreasing $x$ (see Fig. 1 in KCP16, and Fig. 4 in CP17). ![image](f7a){height="55.00000%" width="55.00000%"} ![image](f7b){height="55.00000%" width="55.00000%"}\ In the presence of a galaxy, the departure of ${\dot M_{\rm e}}(r)$ of eq.
(50) from the true mass accretion rate ${\dot M_{\rm t}}=4 \pi {r_{\rm B}}^2 {\lambda_{\rm t}}{\rho_{\infty}}{c_{\infty}}$, is: $${{\dot M_{\rm e}}(r)\over{\dot M_{\rm t}}}={{\lambda_{\rm cr}}{\tilde\rho}(x)\over{\lambda_{\rm t}}}={{\lambda_{\rm cr}}\over x^2{{\cal M}}(x)},$$ where $\rho (r)$ is taken along the solution for accretion within the potential of the galaxy[^4], the last identity comes from eq. (19), and ${{\cal M}}(x)$ is given in eq. (18). Figure 7 (left panel) shows the trend of ${\dot M_{\rm e}}/{\dot M_{\rm t}}$ with $r$. One sees that the use of $\rho(r)$ instead of ${\rho_{\infty}}$ leads to an overestimate for $r$ taken in the central regions, while ${\dot M_{\rm e}}(r)$ becomes an [*underestimate*]{} for $r\ga$ a few$\times 10^{-3}{r_*}$. The radius marking the transition from the region in which there is an overestimate to that in which there is an underestimate depends on the specific galaxy model. An important consequence of the results in Fig. 7 is that, for example, in numerical simulations not resolving ${r_{\rm B}}$, ${\dot M_{\rm e}}$ should be boosted by a large factor to approximate the true accretion rate ${\dot M_{\rm t}}$. Moreover, since ${\dot M_{\rm e}}/{\dot M_{\rm t}}$ varies steeply with $r$, the required “boost factor” in turn also depends steeply on $r$. In the right panel of Fig. 7 we also show the bias measured at the Bondi radius as a function of ${\xi_{\rm g}}$. The panel indicates an underestimate by roughly a factor of 5. It is instructive to find the reason for the trend of ${\dot M_{\rm e}}$ near the center and at large radii. From eq.
(52) and the expansion of ${{\cal M}}$ for $x\to 0^+$ and for $x \to \infty$, one has: $${{\dot M_{\rm e}}(r)\over{\dot M_{\rm t}}}\sim {{\lambda_{\rm cr}}\over\sqrt{2\chi}x^{3/2}}, \quad x \to 0^+,$$ and $${{\dot M_{\rm e}}(r)\over{\dot M_{\rm t}}}\sim {{\lambda_{\rm cr}}\over{\lambda_{\rm t}}}, \quad x \to \infty.$$ Therefore near the center ${\dot M_{\rm e}}/{\dot M_{\rm t}}\sim r^{-3/2}$, while at large radii, as in general ${\lambda_{\rm t}}\gg{\lambda_{\rm cr}}$ (see Fig. 4), ${\dot M_{\rm e}}/{\dot M_{\rm t}}$ becomes very small.

Summary and conclusions
=======================

The classical Bondi accretion theory is the tool commonly adopted in many investigations where an estimate of the accretion radius and the mass accretion rate is needed. In this paper, extending the results of previous works (KCP16, CP17), we focus on the case of isothermal accretion in two-component galaxies with a central MBH, and with radiation pressure contributed by electron scattering in the optically thin regime. In CP17 it was shown that the radial profile of the Mach number, and the critical eigenvalue of the isothermal accretion problem, can be expressed analytically in Jaffe and Hernquist potentials with a central MBH. Here we adopt the two-component JJ galaxy models presented in CZ18. These are made of a Jaffe stellar component plus a dark halo such that the total density is also described by a Jaffe profile; all the relevant dynamical properties of JJ models, including the solution of the Jeans equations for the stellar component, can be expressed analytically. Therefore, the results of CP17 and CZ18 provide the opportunity to build a family of two-component galaxy models where all the accretion properties can be given analytically, and then explored in detail, with no need to resort to numerical studies. The main results of this work can be summarized as follows.
1\) The parameters describing accretion in the hydrodynamical solution of CP17 (${{\cal R}}$ and $\xi$) have been linked to the galaxy structure. In particular, it is assumed that the isothermal gas has a temperature ${T_{\infty}}$ proportional to the virial temperature of the stellar component, ${T_{\rm V}}$. Then, simple formulas are derived relating the galactic properties (such as the effective radius ${R_{\rm e}}$ and the radius of influence of the MBH, ${R_{\rm inf}}$) to those describing accretion (such as the Bondi radius ${r_{\rm B}}$ and the sonic radius ${r_{\rm min}}$). The critical accretion parameter ${\lambda_{\rm t}}$ is also expressed as a function of the galactic properties. 2\) For realistic galaxy structures, ${r_{\rm B}}$ is of the order of a few$\times 10^{-3}{r_*}$, and ${R_{\rm inf}}$ is of the order of $\simeq 0.1{r_{\rm B}}$. For ${T_{\infty}}={T_{\rm V}}$, the sonic radius ${r_{\rm min}}$ is of the order of a few ${R_{\rm e}}$. For moderately higher values of ${T_{\infty}}$, ${r_{\rm min}}$ suddenly drops to radii within ${r_{\rm B}}$. The same also happens for a small increase of the polytropic index above unity, and this behavior is reminiscent of the similar jump shown by ${r_{\rm min}}$ in Hernquist models, as discussed in CP17. As a consequence, accretion in JJ models can switch from being supersonic over almost the whole galaxy to being everywhere subsonic, except for $r\la{r_{\rm B}}$. An explanation for this phenomenon is given. 3\) As for the isothermal accretion in one-component Jaffe models, the determination of the critical accretion parameter involves the solution of a quadratic equation, and there is only one sonic point for any choice of the parameters describing the galaxy. In the presence of the galaxy, ${\lambda_{\rm t}}$ is several orders of magnitude larger than without the galaxy.
It is found that Bondi accretion in JJ models in the absence of a central MBH (or when $\chi =0$) is possible provided that ${T_{\infty}}$ is lower than a critical value, for which we derive an explicit formula. This critical value depends only on ${\xi_{\rm g}}$, and is in the range $3/2\leq{T_{\infty}}/{T_{\rm V}}\leq 3$. It also determines the jump in ${r_{\rm min}}$ in models with a central MBH. 4\) We provide a few examples of accretion in realistic galaxy models, and present the resulting Mach number profiles, together with the trends of the accretion velocity and of the isotropic stellar velocity dispersion profiles. 5\) We finally examine the problem of the deviation from the true value ${\dot M_{\rm t}}$ of an estimate of the mass accretion rate ${\dot M_{\rm e}}(r)$ obtained by adopting the classical Bondi solution for accretion onto an MBH, in which the gas density and temperature at some finite distance from the center are inserted as proxies for their values at infinity. The size of the departure of ${\dot M_{\rm e}}(r)$ from ${\dot M_{\rm t}}$, which is determined by the presence of the galaxy, is given as a function of the distance $r$ from the center. ${\dot M_{\rm e}}(r)$ [*overestimates*]{} ${\dot M_{\rm t}}$ if the gas density is taken in the very central regions, and [*underestimates*]{} ${\dot M_{\rm t}}$ if it is taken outside a few Bondi radii. This shows how sensitive the determination of a physically based value for the so-called “boost factor” adopted in simulations is to the model parameters, and that in general a universally valid prescription is impossible. [99]{} Barai, P., Proga, D., & Nagamine, K. 2011, MNRAS, 418, 591 Barai, P., Proga, D., & Nagamine, K. 2012, MNRAS, 424, 728 Barai, P., de Gouveia Dal Pino, E.M. 2018, MNRAS in press (arXiv:1807.04768) Beckmann, R. S., Slyz, A., Devriendt, J. 2018, MNRAS 478, 995 Bondi, H., 1952, MNRAS 112, 195 Bu, D.-F., Yuan, F., Wu, M., Cuadra, J. 2013, MNRAS 434, 1692 Cappellari, M., Romanowsky, A. J., Brodie, J.
P., et al. 2015, ApJL, 804, L21 Cappellari, M. 2016, ARA&A 54, 597 Cao, X. 2016 ApJ 833, 30 Carollo, M., van der Marel, R., & de Zeeuw, P.T. 1995, MNRAS 276, 1131 Choi, E., Ostriker, J.P.O., Naab, T., et al. 2017, MNRAS 844, 31 Ciotti, L. 1999, ApJ, 520, 574 Ciotti, L., Ostriker, J. P. 2012, in [*Hot Interstellar Matter in Elliptical Galaxies*]{}, Vol. 378, eds. D.-W. Kim and S. Pellegrini (New York: Springer), 83 Ciotti, L., Lanzoni, B., & Renzini, A. 1996, MNRAS 282, 1 Ciotti, L., Morganti, L., & de Zeeuw, P.T. 2009, MNRAS 393, 491 Ciotti, L., Pellegrini, S. 2017, ApJ 848, 29 (CP17) Ciotti, L., & Ziaee Lorzad, A. 2018, MNRAS, 473, 5476 (CZ18) Dehnen, W. 1993, MNRAS 265, 250 Di Matteo, T., Colberg, J., Springel, V., Hernquist, L., Sijacki, D. 2008, ApJ 676, 33 Dutton, A.A., Macciò, A.V., 2014, MNRAS 441, 3359 Frank, J., King, A. and Raine, D., 1992, [ *Accretion power in astrophysics.*]{}, Camb. Astrophys. Ser., Vol. 21, Cambridge University Press Gallo, E., et al. 2010, ApJ 714, 25 Hernquist, L., 1990, ApJ 356, 359 Jaffe, W., 1983, MNRAS 202, 995 Kormendy, J., Ho, L.C. 2013, ARA&A 51, 511 Korol, V., Ciotti, L., Pellegrini, S. 2016, MNRAS 460, 1188 (KCP16) Lusso, E., and Ciotti, L., 2011, A&A 525, 115 Merritt D., 1985, AJ, 90, 1027 Mościbrodzka, M., & Proga, D. 2013, ApJ 767, 156. Navarro, J. F., Frenk, C. S., White, S.D.M. 1997, ApJ 490, 493 Park, K.-H., Wise, J.H., Bogdanović, T. 2017, ApJ 847, 70 Pellegrini, S. 2010, ApJ 717, 640 Pellegrini, S. 2011, ApJ 738, 57 Rafferty, D. A., McNamara, B. R., Nulsen, P. E. J., & Wise, M. W. 2006, ApJ 652, 216 Ramírez-Velasquez, J.M., Sigalotti, L. Di G., Gabbasov, R., Cruz, F., Klapp, J. 2018, MNRAS 477, 4308 Sijacki, D., Springel, V., Di Matteo, T., Hernquist, L. 2007, MNRAS 380, 877 Tremaine, S., Richstone, D. O., Byun, Y.-I., et al. 1994, AJ 107, 634 Volonteri, M., Capelo, P.R., Netzer, N., et al. 2015, MNRAS 449, 1470 [^1]: Due to a typo, before their Sect. 4.1, KCP16 wrote that this case corresponds to $\chi=1$. 
[^2]: The extension of the analysis to the cases $0\leq{\xi_{\rm g}}<1$ would be immediate. [^3]: From eq. (46) $2\xi={{\cal R}}\beta/{\beta_{\rm c}}$, and from eq. (15) it follows immediately that the limit for ${{\cal R}}\to\infty$ is not uniform in $\beta$. [^4]: In the monoatomic adiabatic case $\gamma=5/3$, one has that ${\dot M_{\rm e}}(r)={\lambda_{\rm cr}}{\dot M_{\rm t}}/{\lambda_{\rm t}}$ independently of the distance from the center, while ${r_{\rm e}}(r)$ departs from ${r_{\rm B}}$ (KCP16, eq. 42).
--- abstract: 'As the size of quantum devices continues to grow, the development of scalable methods to characterise and diagnose noise is becoming an increasingly important problem. Recent methods have shown how to efficiently estimate Hamiltonians in principle, but they are poorly conditioned and can only characterize the system up to a scalar factor, making them difficult to use in practice. In this work we present a Bayesian methodology, called [Bayesian Hamiltonian Learning]{} ([BHL]{}), that addresses both of these issues by making use of any or all of the following: well-characterised experimental control of Hamiltonian couplings, the preparation of multiple states, and the availability of any prior information for the Hamiltonian. Importantly, [BHL]{} can be used online as an *adaptive* measurement protocol, updating estimates and their corresponding uncertainties as experimental data become available. In addition, we show that multiple input states and control fields enable [BHL]{} to reconstruct Hamiltonians that are neither generic nor spatially local. We demonstrate the scalability and accuracy of our method with numerical simulations on up to 100 qubits. These practical results are complemented by several theoretical contributions. We prove that a $k$-body Hamiltonian $H$ whose correlation matrix has a spectral gap $\Delta$ can be estimated to precision $\varepsilon$ with only $\tilde{O}\bigl(n^{3k}/(\varepsilon \Delta)^{3/2}\bigr)$ measurements. We use two subroutines that may be of independent interest: First, an algorithm to approximate a steady state of $H$ starting from an arbitrary input that converges factorially in the number of samples; and second, an algorithm to estimate the expectation values of $m$ Pauli operators with weight $\le k$ to precision $\epsilon$ using only $O(\epsilon^{-2} 3^k \log m)$ measurements, which quadratically improves a recent result by Cotler and Wilczek.' author: - 'Tim J. Evans' - 'Robin Harper' - 'Steven T.
Flammia'
bibliography:
- 'HamLearn.bib'
title: Scalable Bayesian Hamiltonian Learning
---

Introduction {#section:introduction}
============

Extracting diagnostic information about noise processes is central to the development and improvement of quantum devices. We are already witnessing the realisation of quantum devices whose size is out of reach for standard experimental noise characterisation tools [@Preskill2018quantumcomputingin]. Randomized benchmarking and its variants [@emerson2005scalable; @Emerson2007; @knill2008randomized; @dankert2009exact; @magesan2012characterizing] offer efficient characterisation of quantum devices through averaging the noise and consequently reducing the number of parameters to be learned. However, for the most part they offer performance metrics and do not pinpoint the physical origin of noise sources, giving limited diagnostic insight. New approaches have been proposed and experimentally demonstrated that yield more detailed error models than standard randomized benchmarking, for instance allowing full reconstruction of Pauli error channels and consequently all correlated errors [@erhard2019characterizing; @harper2019efficient; @flammia2019efficient]. Such methods, however, remove coherent noise terms in order to maintain scalability. Other scalable methods have been proposed to determine all $k$-qubit reconstructed density matrices of multi-qubit systems [@cotler2019; @garciapereze2019], although such methods are aimed at reconstructing specific states and do not give insight into the dynamics of the systems in question. Hamiltonian learning is a well-studied problem [@granade2012robustonline; @granade2014quantumhamiltonian; @wiebe2014hamiltonianlearning; @wiebe2015quantumbootstrapping; @krastanov2019stochastic] that addresses the need to characterize coherent noise sources.
In general, it requires the estimation of a number of parameters that scales exponentially in the system size, but most physically relevant Hamiltonians have only few-body interactions and are described by polynomially many parameters. Reconstructing the Hamiltonian of a quantum system can provide rich diagnostic information for an experimentalist seeking to reduce noise-induced errors in a device. The benefits from learning the Hamiltonian extend beyond diagnosis, opening up a range of engineering tools that can be used to counteract unwanted noise couplings. For instance, pulse shaping techniques such as GRAPE [@khaneja2005] can be used to design specific pulse shapes, leading to vastly improved fidelities [@Yang2019]. The GRAPE optimization procedure, however, requires a good characterisation of the Hamiltonian affecting the system, making it prohibitive for larger devices. Recent works [@qi2019determininglocal; @bairey2019learning; @garrison2018does; @chertkov2018computational; @hou2019determining] have shown how one can reconstruct a generic spatially local Hamiltonian given a single state that commutes with the Hamiltonian. These results can also be generalised to local Lindbladians [@bairey2019lindblad]. Although these are remarkable technical results, several fundamental barriers remain to their practical implementation. For example, the unknown Hamiltonian is only recovered up to a scalar factor, $ H\sim\alpha H $, and the inverse problem is generally ill-conditioned, making it highly sensitive to noise. Also, there are many physically interesting Hamiltonians that are neither generic nor local. In the next section, we will review the method of Hamiltonian estimation developed in [@qi2019determininglocal; @bairey2019learning] and describe in more detail the barriers to making this method practical.
We present our main results for addressing these difficulties in \[section:results\] and provide a discussion of future directions in \[section:discussion\]. In the appendices, we provide a derivation of our Bayesian model, provide proofs for our claims about the subroutines together with pseudocode for them, and prove our upper bound on the sample complexity for accurate reconstruction.

Problem statement {#section:problem-setting}
-----------------

Consider a $d$-dimensional quantum system consisting of $n$ finite-dimensional spins, and let $H$ be the system Hamiltonian. We will focus this discussion on qubit Hamiltonians, so $d=2^n$. Let us choose as an operator basis the $n$-qubit Pauli matrices $\{P_j\}$, so that $P_j^{\vphantom{\dagger}}=P_j^\dagger$ and $\operatorname{Tr}(P_j P_k)=d\delta_{jk}$. We can then expand $$\begin{aligned} H = \sum_{i=1}^{m} c_i P_i\end{aligned}$$ with a vector of couplings $c \in \mathbb{R}^m$ and $c_i = \frac{1}{d}\operatorname{Tr}(P_i H)$. Any Hamiltonian can be written in such a way when $m=d^2$, but in practice most Hamiltonians have only few-body couplings and hence are well described by an expansion in a local basis having $m=O\bigl(\mathrm{poly}(n)\bigr)$. The most physically relevant example is the set of $n$-qubit Pauli operators that act non-trivially on only $k$ or fewer sites (a *$k$-body* operator) or, more restrictively, on $k$ spatially contiguous sites (a *$k$-local* operator). When $k=n$ we recover a general operator, but for $k=O(1)$ we can still accurately and efficiently describe generic few-body couplings, including nearly all cases of experimental relevance. We now define a *steady state* of $H$ to be any state $\rho$ such that $[H,\rho]=0$. In general, a steady state will depend implicitly on the coupling constants $c$ that define $H$, since $[H(c_1),\rho]=0$ says nothing about the value of $[H(c_2),\rho]$.
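To make the expansion concrete, here is a minimal numpy sketch that recovers each coupling as $c_i = \operatorname{Tr}(P_i H)/d$; the two-qubit Hamiltonian and its coupling values are purely illustrative, not taken from the paper.

```python
import numpy as np
from itertools import product

# Single-qubit Pauli matrices.
paulis = {"I": np.eye(2, dtype=complex),
          "X": np.array([[0, 1], [1, 0]], dtype=complex),
          "Y": np.array([[0, -1j], [1j, 0]], dtype=complex),
          "Z": np.array([[1, 0], [0, -1]], dtype=complex)}

def pauli_op(label):
    """Tensor product of single-qubit Paulis, e.g. 'XZ' -> X (x) Z."""
    op = np.array([[1.0 + 0j]])
    for ch in label:
        op = np.kron(op, paulis[ch])
    return op

n = 2
d = 2 ** n
labels = ["".join(t) for t in product("IXYZ", repeat=n)]

# A toy 2-qubit Hamiltonian with known couplings (illustrative values).
true_c = {"ZI": 0.7, "IZ": -0.3, "XX": 0.5}
H = sum(c * pauli_op(lbl) for lbl, c in true_c.items())

# Recover each coupling as c_i = Tr(P_i H) / d, using Tr(P_j P_k) = d delta_{jk}.
recovered = {lbl: np.trace(pauli_op(lbl) @ H).real / d for lbl in labels}
```

The orthogonality relation $\operatorname{Tr}(P_j P_k)=d\delta_{jk}$ guarantees that only the three couplings placed in `true_c` come back nonzero.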
When $\rho = \rho(c)$ is a steady state of a $k$-body Hamiltonian $H=H(c)$, it will satisfy a set of $(2k-1)$-body constraints given as follows [@bairey2019learning]. Define the $m\times m$ matrix $K = K(\rho)$ given by $$\begin{aligned} \label{eqn:sqrt-correlation-matrix} K_{jk} : = \operatorname{Tr}\bigl(i[P_j,P_k]\rho\bigr)\,.\end{aligned}$$ Then [@qi2019determininglocal; @bairey2019learning] the vector $c$ of couplings in $H$ is in the kernel of $K$, $$\begin{aligned} \label{eqn:K-kernel} Kc = 0\,.\end{aligned}$$ Since the matrix elements of $K$ are all observable (they are expectations of hermitian operators), they can be estimated through a series of experiments. Finding an approximate null vector of a noisy $\tilde{K} \approx K$ then yields an estimator $\hat{c}$ of something proportional to the unknown vector of couplings $c$. There are several challenges in implementing this idea in practice. First, preparing an appropriate steady state is difficult, since there seems to be a tradeoff in the complexity of preparing steady states and their utility in this estimation scheme: States that contain lots of information about $H$ (e.g. a ground state of $H$) seem to be difficult to prepare in general, and states that are easy to prepare in general (e.g. the maximally mixed state) are not useful because they are compatible with more than one $H$. Even assuming that a suitable initial state can be prepared *efficiently*, it must still be prepared *accurately*. More generally, one must worry about state preparation and measurement errors (SPAM) and how they will affect the method. Even in the absence of statistical noise from measurements, preparing a state that is a steady state of the wrong Hamiltonian will bias the estimate of $c$. A reliable reconstruction method should be robust to small SPAM errors and should accurately quantify the error uncertainties due to the SPAM.
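The constraint \[eqn:K-kernel\] is easy to verify numerically on a toy model. The sketch below (couplings chosen only for illustration) builds $K_{jk}=\operatorname{Tr}(i[P_j,P_k]\rho)$ from a ground-state projector of a two-qubit Hamiltonian and checks that the true coupling vector lies in its kernel.

```python
import numpy as np
from itertools import product

# Single-qubit Paulis and two-qubit tensor products.
P1 = {"I": np.eye(2, dtype=complex),
      "X": np.array([[0, 1], [1, 0]], dtype=complex),
      "Y": np.array([[0, -1j], [1j, 0]], dtype=complex),
      "Z": np.array([[1, 0], [0, -1]], dtype=complex)}

def op(label):
    m = np.array([[1.0 + 0j]])
    for ch in label:
        m = np.kron(m, P1[ch])
    return m

labels = ["".join(t) for t in product("IXYZ", repeat=2)][1:]  # drop the identity
basis = [op(l) for l in labels]

# Couplings c for an illustrative H = 0.7 ZI - 0.3 IZ + 0.5 XX.
c = np.zeros(len(labels))
for lbl, val in [("ZI", 0.7), ("IZ", -0.3), ("XX", 0.5)]:
    c[labels.index(lbl)] = val
H = sum(ci * Pi for ci, Pi in zip(c, basis))

# Any eigenprojector of H is a steady state: [H, rho] = 0.
w, v = np.linalg.eigh(H)
rho = np.outer(v[:, 0], v[:, 0].conj())

# K_{jk} = Tr(i [P_j, P_k] rho); then K c = Tr(i [P_j, H] rho) = 0.
m = len(basis)
K = np.zeros((m, m))
for j in range(m):
    for k in range(m):
        comm = basis[j] @ basis[k] - basis[k] @ basis[j]
        K[j, k] = np.real(np.trace(1j * comm @ rho))

residual = np.linalg.norm(K @ c)
```

The residual vanishes because $\sum_k K_{jk}c_k = \operatorname{Tr}(i[P_j,H]\rho)$, which is zero by cyclicity of the trace whenever $[H,\rho]=0$.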
Next, given an estimate $\hat{K}$ of the matrix $K$, even in the absence of state preparation errors the noise will in general depend on the signal $c$. This means that accurately quantifying the uncertainty of an estimate becomes impossible with naive estimators. Furthermore, the inverse problem of recovering $\hat{c}$ from $\hat{K}$ is generally ill-posed. A simple estimate of how ill-posed it is comes from the spectral gap of $K$, defined as follows. Introduce a matrix $M = K{^\text{T}}K$, which is equal to the following $m\times m$ matrix that is called the *correlation matrix* [@qi2019determininglocal; @bairey2019learning] $$\begin{aligned} \label{eqn:correlation-matrix} M_{j,k} := \frac{1}{2} \operatorname{Tr}\bigl( \{P_j, P_k\} \rho \bigr) - \operatorname{Tr}\bigl(P_j\rho \bigr) \operatorname{Tr}\bigl(P_k\rho \bigr)\,.\end{aligned}$$ If the eigenvalues of $M$ are $\lambda_m \ge \ldots \ge \lambda_2 \ge \lambda_1$, then we define the *spectral gap* of $K$ to be the quantity $\Delta := \Delta(K) = \lambda_2-\lambda_1$. This is of course also the actual spectral gap of the correlation matrix $M$, but it will be more convenient to work with $K$ throughout, so we will abuse language and call this the spectral gap of $K$. Perhaps surprisingly, in the case of a generic $k$-body Hamiltonian the spectral gap is nonzero [@qi2019determininglocal], so the kernel of $M$ (and hence $K$) is unique and exact reconstruction of $c$ up to a scale factor is possible in the noiseless case. However, the gap is in many cases so small as to preclude any realistic implementation once noise is introduced. For example, in Ref. [@bairey2019learning] unique reconstruction was achieved when the noise standard deviation was $10^{-12}$, which corresponds to roughly $10^{24}$ or more measurements. While the small gap “only” introduces a constant overhead factor, it is unfortunately too large to allow practical reconstruction.
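As a concrete illustration of the definition (on a generic random matrix, not tied to any particular Hamiltonian), the gap can equivalently be read off the squared singular values of $K$, since the eigenvalues of $M = K^{\text{T}}K$ are exactly those:

```python
import numpy as np

rng = np.random.default_rng(0)
m = 6
K = rng.normal(size=(m, m))   # stand-in for a measured constraint matrix

# The correlation matrix M = K^T K; its eigenvalues are the
# squared singular values of K.
M = K.T @ K
lam = np.sort(np.linalg.eigvalsh(M))      # ascending: lam[0] <= lam[1] <= ...
gap = lam[1] - lam[0]                     # spectral gap Delta(K)

# Equivalent computation directly from the singular values of K.
s = np.sort(np.linalg.svd(K, compute_uv=False))
gap_from_svd = s[1] ** 2 - s[0] ** 2
```

A tiny `gap` relative to `lam[1]` is precisely the ill-conditioning discussed above: the approximate kernel direction is then easily confused with the next singular direction under noise.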
Moreover, there is no compelling theoretical reason known why the overhead introduced by the gap should not also grow with the size $n$ of the Hamiltonian, possibly even exponentially. Finally, even after finding a good enough estimate $\hat{K}$ such that there is a unique null space spanned by $\hat{c}$, any real number $\alpha$ gives an equally valid estimate $\alpha \hat{c}$ if \[eqn:K-kernel\] is the only reconstruction requirement. To eliminate this additional ambiguity, either further constraints beyond \[eqn:K-kernel\] would be required, or a further phase estimation step would have to be performed.

Results {#section:results}
=======

Summary of main results {#section:summaryofresults}
-----------------------

In this paper, we address many of the above difficulties to arrive at a protocol for learning Hamiltonians that is practical for present day experiments in quantum computing and quantum many-body physics. Our most important contribution is to make the inverse problem well-posed. We achieve this by introducing two new degrees of freedom to the experimental design: the ability to choose multiple state preparations and/or multiple control fields. At first glance this might seem to increase, not decrease, the complexity of the inverse problem. In fact, the addition of even one additional state preparation or control field is enough to improve the spectral gap by many orders of magnitude in relevant situations. This makes the total number of measurements required to obtain a useful estimate well within the realm of feasibility for many current experiments. Moreover, when adding control fields, the controls themselves are also estimated by the algorithm. This addresses one of the central difficulties in using pulse shaping and dynamical decoupling by giving a method to efficiently obtain a complete description of all dynamical variables for a universal set of controls.
We next show how the estimation can be cast in a Bayesian formulation to yield an algorithm we call [BHL]{}. This confers a long list of advantages. First, by utilizing prior information the Bayesian estimation achieves a much greater speed of convergence to an improved estimate, and second, it intrinsically comes with rigorously justifiable error bars directly from the data without resorting to numerically expensive and heuristic post-processing. Remarkably, the Bayesian framework also lets us return a *point estimate* of the true couplings, removing the overall scalar factor ambiguity that has plagued previous methods [@qi2019determininglocal; @bairey2019learning]. It also allows us to correctly deal with the fact that the noise on the estimate depends on the unknown Hamiltonian itself. This in turn helps to avoid over-fitting and problematic bias in estimates. We prove that the [BHL]{} estimator is robust to a large class of measurement errors, and we address the issue of state preparation errors in two ways. First, we incorporate them into the Bayesian model to allow for accurate uncertainty quantification, and second, we prove that approximate state preparations can be used in conjunction with efficient time averaging [@bairey2019learning] to yield improved accuracy in the estimates. We can perform the relevant data collection in a highly parallel fashion from single-qubit measurements only, and our algorithm for low-weight Pauli expectation value estimation improves quadratically over the recent work [@cotler2019]. Finally, we discuss how to implement an online version of [BHL]{}. This enables the estimation of Hamiltonians in real-time whenever the preparation of the input steady state can be done quickly.

Using multiple input states {#section:multiple-preparations}
---------------------------

Let us first consider what happens when we prepare multiple input states, $\rho_1,\ldots,\rho_N$, and construct their corresponding matrices $K_i := K(\rho_i)$.
Clearly \[eqn:K-kernel\] must hold for each $K_i$, and we can stack each of these constraints into a single matrix constraint, $$\begin{aligned} \label{eqn:A-multiple-states} A := \begin{bmatrix} K_1\\ \vdots\\ K_N\\ \end{bmatrix} \text{\ and\ \ }x := c\,.\end{aligned}$$ We adopt the notation $A$ for a composite object that incorporates multiple constraints, and we label our unknown as $x$ in this more general setting. The analog of \[eqn:K-kernel\] now becomes $$\begin{aligned} \label{eqn:A-kernel} Ax=0\,.\end{aligned}$$ The spectral gap of $A$ obeys the inequality $$\begin{aligned} \label{eqn:gap-ineq} \Delta(A) \ge \max_j \Delta(K_j)\,,\end{aligned}$$ which follows from Weyl’s inequality [@Bhatia1997]. Thus the gap of $A$ is at least as good as the best constituent constraint matrix $K_j$ that comprises a block of $A$.

![This figure shows how the use of multiple eigenstates can yield unique reconstruction, even of non-generic and non-local Hamiltonians. In (a) we illustrate the effect of using multiple eigenstates on a randomly generated disordered and non-disordered Hamiltonian in the form of \[eqn:lr-ising-hamiltonian\]. Plotted are the singular value spectra for the operator $A$ as more eigenstates are added to the estimation process. We see that for even one additional preparation we remove the degeneracy in $A$. In (b) we plot the spectral gap $\Delta(K)$ for randomly generated 10-qubit local Hamiltonians using an increasing number of $N$ eigenstates, showing the improvement in spectral gap over a wide variety of Hamiltonians.[]{data-label="fig:multiple_states_svs"}](multiple_states_svs){width="\columnwidth"}

In practice, the inequality (\[eqn:gap-ineq\]) greatly understates the improvement to the spectral gap in practically relevant cases.
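A toy numpy check of the stacking construction and the bound \[eqn:gap-ineq\], using random matrices that share a common kernel vector (purely illustrative, not a physical $K$):

```python
import numpy as np

def spectral_gap(K):
    """Delta(K) = lambda_2 - lambda_1 of M = K^T K, i.e. the gap
    between the two smallest squared singular values of K."""
    s = np.sort(np.linalg.svd(K, compute_uv=False))
    return s[1] ** 2 - s[0] ** 2

rng = np.random.default_rng(1)
m = 8
# Two constraint matrices sharing the same kernel vector c.
c = rng.normal(size=m)
c /= np.linalg.norm(c)
proj = np.eye(m) - np.outer(c, c)        # projector onto the complement of c
K1 = rng.normal(size=(m, m)) @ proj      # K1 c = 0
K2 = rng.normal(size=(m, m)) @ proj      # K2 c = 0

A = np.vstack([K1, K2])                  # stacked constraints, A c = 0
gapA = spectral_gap(A)
```

Since $M_A = A^{\text{T}}A = K_1^{\text{T}}K_1 + K_2^{\text{T}}K_2$ is a sum of positive semidefinite matrices with $c$ in their common kernel, Weyl's inequality gives $\Delta(A) \ge \max_j \Delta(K_j)$, which the assertion below checks numerically.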
We illustrate this in \[fig:multiple\_states\_svs\] for two separate cases: a disordered 2-local spin chain, as was studied in ref. [@qi2019determininglocal], and a nonlocal, 2-body Ising-type model. The Hamiltonian for the nonlocal model is given by $$\begin{aligned} \label{eqn:lr-ising-hamiltonian} H = \sum_{i\neq j} X_i X_j + \sum_{i\neq j} J_{i,j} P_i P_j\,,\end{aligned}$$ where the couplings $J_{i,j}$ control the strength of arbitrary two-qubit Pauli interactions. These couplings can be chosen to be either uniformly 0 (in the case of no disorder) or sampled as independent Gaussian random variables in the disordered case, $J_{i,j}{\sim \mathcal{N}}(0,\sigma)$; we choose $\sigma = 10^{-1}$ for the simulations in \[fig:multiple\_states\_svs\]. The disorder is intended to avoid any special structure that might inadvertently close the spectral gap. The Hamiltonian (\[eqn:lr-ising-hamiltonian\]) is physically relevant because it is the coupling that drives the global entangling Mølmer-Sørensen gate used in ion-traps [@sorenson2000entanglement], and understanding deviations from the uniform case will help calibrate these gates. For our simulations of \[eqn:lr-ising-hamiltonian\], we prepare multiple eigenstates ${\left|E_i\right>}$ for $i=1,...,N$, and measure each corresponding $K_i$. Fig. \[fig:multiple\_states\_svs\] shows the singular spectrum of $A$ for $N=1,2$, or $4$ eigenstate preparations. Firstly, we can see that for a single eigenstate there is a highly degenerate ground space, meaning there are many Hamiltonians that share this preparation as an eigenstate. Therefore, the $N=1$ estimator will fail in this case of a nonlocal Hamiltonian, even in the presence of disorder. However, in \[fig:multiple\_states\_svs\](a) we see that with the addition of a single eigenstate this degeneracy is lifted and, up to numerical precision, $\dim \ker A = 1$. 
This means that the reconstruction is now unique: there exists only one $2$-body Hamiltonian that has both of those preparations as eigenstates. Panel (b) of \[fig:multiple\_states\_svs\] shows how this holds, even for random Hamiltonians; the spectral gap improves even further with additional eigenstates.

Using multiple control fields {#section:multiple_controls}
-----------------------------

Now suppose that we also wish to utilize and characterize several additional control fields. We will expand these control fields in the same basis $\{P_j\}$ as before, so that for the control field $V_i$ we have $$\begin{aligned} V_i = \sum_{j=1}^m v_{i,j} P_j\,,\end{aligned}$$ where $v_i$ is the vector of coupling constants. In the presence of the control field the total Hamiltonian is given by $$\begin{aligned} H_i := H_0 + V_i = \sum_{j=1}^m \bigl(c_j + v_{i,j}\bigr) P_j\,,\end{aligned}$$ where $H_0$ is the bare Hamiltonian in the absence of any controls. Preparing a steady state of $H_i$ means that the matrix $K$ has $c+v_i$ in its kernel. Since we seek to estimate the $v_i$ as well, we can incorporate these additional variables and constraints again into a larger matrix (again called $A$) and a longer list of variables (again called $x$), given by $$\begin{aligned} \label{eqn:A-mult-controls} A := \begin{bmatrix} K_0&0&0 &\ldots&0\\ K_1&K_1&0 &\ &\ \\ K_2&0&K_2 &\ &\ \\ \vdots&\ &\ &\ddots &\vdots \\ K_N&0&\ &\ldots&K_N \\ \end{bmatrix}, \ \ x:=\begin{bmatrix} c\\ v_1\\ \vdots\\ v_N\\ \end{bmatrix}.\end{aligned}$$ Furthermore, it is clear that using multiple control settings and multiple input states are compatible with one another, as any extra input states for each control field can again be stacked vertically onto the matrix $A$. Unfortunately, even if the matrices $K_j$ have a unique kernel, it is not true that the kernel of $A$ in \[eqn:A-mult-controls\] is unique.
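The block structure of \[eqn:A-mult-controls\] can be assembled mechanically. The sketch below (with small random stand-ins for the $K_i$) checks that the stacked operator acting on $x=[c;v_1;\dots;v_N]$ reproduces the constraints $K_0 c$ and $K_i(c+v_i)$:

```python
import numpy as np

def control_forward_operator(K_list):
    """Assemble the block forward operator of eqn (A-mult-controls):
    K_list = [K_0, K_1, ..., K_N]; block row i >= 1 places K_i in the
    first block column (acting on c) and in block column i (acting on v_i)."""
    N = len(K_list) - 1
    r, m = K_list[0].shape
    A = np.zeros(((N + 1) * r, (N + 1) * m))
    for i, K in enumerate(K_list):
        rows = slice(i * r, (i + 1) * r)
        A[rows, 0:m] = K                      # constraint on the bare couplings c
        if i > 0:
            A[rows, i * m:(i + 1) * m] = K    # constraint on the control v_i
    return A

rng = np.random.default_rng(2)
m = 4
Ks = [rng.normal(size=(m, m)) for _ in range(3)]   # stand-ins for K_0, K_1, K_2
A = control_forward_operator(Ks)

# A acting on x = [c; v_1; v_2] yields [K_0 c; K_1 (c+v_1); K_2 (c+v_2)].
c, v1, v2 = (rng.normal(size=m) for _ in range(3))
out = A @ np.concatenate([c, v1, v2])
```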
If we have $N \ge 0$ separate control fields, then by a rank-counting argument there will be at least $N+1$ independent consistent solutions. It seems we have made an already ill-posed problem *worse* by introducing the control fields! One possible solution is to add additional state preparations. In the generic case, adding $\ell$ extra state preparations will suffice to break the degeneracy, where $\ell = \lceil N/(m-1)\rceil$, which again follows from a rank-counting argument. However, even without these extra state preparations there is still utility in this strategy. To avoid the inconvenience of multiple state preparations, we can instead assume that one has access to control fields that are already well characterized. This is often the case in practice, such as when a well-characterized single-qubit gate is already known, but one wishes to characterize a two-qubit gate. Then the prior information about the well-characterized control fields serves to pin down a preferred solution within the $(N+1)$-fold degenerate space. To see how this works in more detail, we must first introduce our Bayesian model to properly account for this prior information. Bayesian model {#section:bayesian-model} -------------- The models presented in the previous sections all require the inference of a vector in the kernel of an operator $ A $. In this section we will construct a Bayesian method for this task, [BHL]{}, giving us the ability to leverage prior information for greater robustness to noise as well as providing us with accurate quantification of the uncertainty in our estimates. One of the most important features of the Bayesian method is that it provides a point estimate for system couplings. This is in contrast to prior works [@bairey2019learning; @bairey2019lindblad; @qi2019determininglocal; @hou2019determining; @chertkov2018computational] that only resolve the system parameters up to a linear subspace by finding an approximate null vector. 
As shown in \[fig:multiple\_states\_svs\], for physically relevant instances this subspace can even be more than one-dimensional, but even in the case of a unique kernel the prior methods failed to estimate the overall scale factor, and this had to be added by hand. [BHL]{} eliminates this ambiguity. As with all prior related works [@bairey2019learning; @bairey2019lindblad; @qi2019determininglocal; @hou2019determining; @chertkov2018computational], our model will be $ Ax = 0 $ where our unknown $ x $ will be in the kernel of an operator $ A $. We will call $A$ the *forward operator*.

![image](posterior_figure_2){width="\textwidth"}

The entries of $A$ are inherently uncertain because they can only be estimated by repeated experiments. To model the noise in the entries of $A$, we assume that each entry of the noisy forward operator $\tilde{A}$ is a random variable $$\begin{aligned} \tilde{A}_{ij} = A_{ij} + E_{ij}\end{aligned}$$ where we choose the distribution to be normal with zero mean, $E_{ij}{\sim \mathcal{N}}(0,\sigma^2_{E_{ij}})$, though we note that a nonzero mean could be accommodated as well. A more realistic noise model would be for the $E_{ij}$ to be binomial random variables, as they are likely to be obtained by averaging experimental two-outcome measurements. However, the Gaussian approximation will be useful theoretically for updating our prior information and will be a good approximation to the binomial case in the regime of interest. With this noise model in place for $\tilde{A}$, the resulting model becomes $$\begin{aligned} \label{eqn:approximate_model} \tilde{A}x + \epsilon = 0\,,\end{aligned}$$ where we call the additive noise process $ \epsilon:=-Ex $ the *approximation error* [@kaipio2007statistical; @kaipio2004computational; @kaipio2013approximate]. We call this an approximation error as it corresponds to an uncertainty in the operator $ A $. Importantly, the noise on $\epsilon$ depends on the unknown $x$.
Bayesian methods provide a natural framework for handling such errors, and correctly dealing with this state-dependent noise is a key contribution of our work. We will assume that we have a Gaussian prior distribution for our coefficients $ x{\sim \mathcal{N}}(\bar{x},\Gamma_x) $ where $ \bar{x} $ is our best guess for the unknown coefficients and $ \Gamma_x $ is the covariance reflecting the prior uncertainty. In this case, as we show in Appendix \[appendix:bayesian\], we can model the conditional distribution $ \epsilon|x{\sim \mathcal{N}}(0,\Gamma_{\epsilon|x}) $, where $$\begin{aligned} \label{eqn:approximation-error-covariance} \bigl(\Gamma_{\epsilon|x}\bigr)_{k,l}(x) &=\begin{cases} \sigma_E^2 \left(\operatorname{Tr}\left[\Gamma_x\right] + \|x\|_2^2\right)\,, &k = l \\ \sigma_E^2 \left(\bigl(\Gamma_{x}\bigr)_{k,l} + x_k x_l\right)\,,&k\neq l \end{cases}\,.\end{aligned}$$ This expression for the covariance of the approximation error forms the basis of our Gaussian likelihood. It is easy to specialize this result to the cases of multiple input states or multiple control fields, and we do so in Appendix \[appendix:bayesian\]. There are two important features of the noise in \[eqn:approximation-error-covariance\]: it is *not* independent of our unknown $x$, as noted above, and it is also correlated. Because we have chosen a conjugate prior, we also have a Gaussian posterior $ x|A{\sim \mathcal{N}}(\mu_p,\Gamma_p) $.
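Implemented literally (with a single scalar noise level $\sigma_E$, as in the displayed formula), the covariance \[eqn:approximation-error-covariance\] is a rank-one-plus-prior matrix with a constant diagonal; the values below are illustrative:

```python
import numpy as np

def approx_error_cov(x, Gamma_x, sigma_E):
    """Covariance Gamma_{eps|x} per eqn (approximation-error-covariance):
    diagonal entries sigma_E^2 (Tr[Gamma_x] + ||x||_2^2); off-diagonal
    entries sigma_E^2 (Gamma_x[k,l] + x_k x_l)."""
    G = sigma_E ** 2 * (Gamma_x + np.outer(x, x))      # off-diagonal rule
    np.fill_diagonal(G, sigma_E ** 2 * (np.trace(Gamma_x) + x @ x))
    return G

# Illustrative prior and point x.
x = np.array([1.0, 2.0, 3.0])
Gamma_x = 0.1 * np.eye(3)
Gamma_eps = approx_error_cov(x, Gamma_x, sigma_E=0.05)
```

Note both features flagged in the text are visible here: the entries depend on $x$, and the off-diagonal terms $x_k x_l$ make the noise correlated.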
As shown in Appendix \[appendix:bayesian\], the mean and covariance of the posterior are given by $$\begin{gathered} \mu_p = {\underset{x}{\mathrm{argmin}}} \|L_{\epsilon|x}(x) Ax\|_2^2 + \|L_x (x-\bar{x})\|_2^2,\label{eqn:posterior-MAP} \\ \Gamma_p = \left(\Gamma_{x}^{-1} + A^\text{T} \Gamma_{\epsilon|x}^{-1} A \right)^{-1} \label{eqn:posterior-covariance}\end{gathered}$$ where $ L_{\epsilon|x},L_x $ are the Cholesky factors of $ \Gamma_{\epsilon|x}^{-1} $ and $ \Gamma_x^{-1} $ respectively, that is, $L{^\text{T}}L = \Gamma^{-1}$ for each respective pair $L$ and $\Gamma$. The estimate (\[eqn:posterior-MAP\]) is the maximum *a posteriori* (MAP) estimate and can be recognized as a generalized Tikhonov regularization. The posterior covariance matrix \[eqn:posterior-covariance\] gives a direct quantification of the uncertainty of the point estimate. We can compare our estimate to the estimate obtained by taking the null space of the forward operator as in [@qi2019determininglocal; @bairey2019learning]. This estimate can be written explicitly as $$\begin{aligned} \label{eqn:bairey-mle} \hat{c} = \pm\,{\underset{x}{\mathrm{argmin}}} \|Ax\|_2\quad\mathrm{s.t.}\quad\|x\|=\|c\|.\end{aligned}$$ As noted above, the normalisation constraint $\|x\|=\|c\|$ and the sign ambiguity $\pm$ both depend on the true state and will not be known in practice, so this estimate is unrealistically optimistic. In the absence of the unrealistic norm constraint in \[eqn:bairey-mle\], this estimator coincides with \[eqn:posterior-MAP\] only when we have no prior information and the approximation error is i.i.d. zero-mean Gaussian noise, which is never the case in practice. This highlights the advantage of using the [BHL]{} estimator \[eqn:posterior-MAP\]: by more careful computation of the statistics of the approximation error $ \epsilon $ and using prior information, the estimator \[eqn:posterior-MAP\] avoids overfitting to the measurements.
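If $\Gamma_{\epsilon|x}$ is frozen at a fixed value (a linearization; the estimator in \[eqn:posterior-MAP\] re-weights with $x$ itself), the MAP problem reduces to ordinary generalized Tikhonov regularization with a closed-form solution. The sketch below, on a toy forward operator with known kernel, is an illustration of this simplified case rather than the full [BHL]{} solver:

```python
import numpy as np

def tikhonov_map(A, xbar, Gamma_x, Gamma_eps):
    """MAP estimate and posterior covariance for A x + eps = 0 with
    eps ~ N(0, Gamma_eps) and prior x ~ N(xbar, Gamma_x), treating
    Gamma_eps as fixed (not x-dependent) so the minimizer is closed-form:
    (Gamma_x^{-1} + A^T Gamma_eps^{-1} A) mu_p = Gamma_x^{-1} xbar."""
    Gx_inv = np.linalg.inv(Gamma_x)
    Ge_inv = np.linalg.inv(Gamma_eps)
    Gamma_p = np.linalg.inv(Gx_inv + A.T @ Ge_inv @ A)
    mu_p = Gamma_p @ (Gx_inv @ xbar)   # the data term pulls mu_p toward ker(A)
    return mu_p, Gamma_p

# Toy check: A is a projector annihilating c, so ker(A) = span{c}.
rng = np.random.default_rng(3)
m = 5
c = rng.normal(size=m)
c /= np.linalg.norm(c)
A = np.eye(m) - np.outer(c, c)
xbar = c + 0.2 * rng.normal(size=m)          # prior guess near the truth
mu_p, Gamma_p = tikhonov_map(A, xbar, 0.25 * np.eye(m), 1e-4 * np.eye(m))
```

Unlike the null-space estimator \[eqn:bairey-mle\], the prior mean fixes both the overall scale and the sign of the estimate, so no external normalisation is needed.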
In addition, the error quantification for our estimator is given to us directly from our posterior distribution in the form of the covariance in \[eqn:posterior-covariance\], and this depends less heavily on the spectral gap than \[eqn:bairey-mle\]. One of the nice aspects of the Bayesian formalism is the ability to sequentially update the posterior. This is a concept that has already been explored in the context of Hamiltonian learning [@granade2012robustonline] and is likewise a good fit for [BHL]{}. The posterior distribution is updated using only partial data, and it then becomes the prior for the next posterior update when further data are received. The following pseudocode details the online procedure for [BHL]{}.

**Input:** $\bar{x}, \Gamma_x, A$
\# $ x{\sim \mathcal{N}}\left(\bar{x},\Gamma_x\right) $ is the initial prior distribution
\# $A=\{A_1,\dots,A_N\}$ is a list of $N$ measured submodels
**for** $k = 1,\dots,N$ **do**
  $\bar{x}\gets {\underset{x}{\mathrm{argmin}}} \|L_{\epsilon|x}(x) A_k x\|_2^2 + \|L_x (x-\bar{x})\|_2^2$
  $\Gamma_x\gets \left(\Gamma_{x}^{-1} + A_k^\text{T} \Gamma_{\epsilon|x}^{-1} A_k \right)^{-1}$
**return** $(\bar{x},\Gamma_x)$

At each iteration the models $ A_k $ could be the relevant $K_i$ for different input states or an additional control setting. Moreover, as shown in [@bairey2019learning; @bairey2019lindblad], the system couplings defined on a subregion of the full system depend only on the measurements of that subregion alone. This suggests an adaptive measurement scheme where the subsequent measurements of $ A_k $ are measurements of subregions of the full system, updating the posterior for those couplings only.

![Reconstruction error for different schemes as a function of measurement noise for a simulated ensemble of random 2-local Hamiltonians of 10-qubit spin chains. The red line corresponds to the estimator $ \hat{c} $ given by \[eqn:bairey-mle\]. With the [BHL]{} plots, the Bayesian reconstruction uses a conservative prior standard deviation of $ \sigma_{c} = 0.5 $ and well calibrated control fields with $\sigma_{v_{i}} = 10^{-3}$. The different error curves for additional control settings ($ N=1,2,4,8 $) all have constant experimental cost; the experimental settings are sampled $s/N$ times. The Bayesian reconstruction error is always bounded above by the prior uncertainty. The decay in performance of [BHL]{} for very small noise is due to an inaccuracy in the solver; theoretically the reconstruction error should saturate a $\sqrt{s/N}$ scaling according to central limit theorem rates.[]{data-label="fig:error-ensemble-controls"}](logerror_controls){width="\columnwidth"}

We also show the reconstruction from the MAP estimate for a 100-qubit random spin chain Hamiltonian. The full posterior is marginalised down to two parameters of the $1191$-dimensional full parameter space. The couplings are known up to a prior accuracy of $\sigma_c = 10^{-1}$ and $N=4$ extra random control fields $v_i$ are sequentially added. These control fields are well characterised, with prior $\sigma_{v_i}=10^{-3}$. The Hamiltonians are represented using Matrix Product Operators (MPOs) and the corresponding reconstructions are computed using an implementation of the density matrix renormalization group (DMRG). (For a review of MPOs and DMRG, see Ref. [@bridgeman2017handwaving].) We can see that as the control fields are added, the posterior sequentially contracts around the true parameters. Moreover, the $2$ standard deviation posterior ellipse shown consistently contains the true parameter at each iteration, which is one of the most appealing attributes of the Bayesian formalism. In \[fig:error-ensemble-controls\] we show the reconstruction error for the case where we have multiple control fields. Shown are the expected reconstruction errors for an ensemble of random 10-qubit spin chain Hamiltonians.
The expectation value for the reconstruction error ${\mathbb{E}\left[\|\hat{c} - c\|_2\right]}$ for the posterior is given by $\sqrt{ \operatorname{Tr}\left[\Gamma_p\right] }$. This reconstruction error is shown for an increasing number of $N$ control fields, with constant experimental complexity; the settings are sampled $s/N$ times so that statistical power is held fixed. If our control fields $V_i$ are better characterised than the system, we see that we get better reconstruction when our experimental resources are spread over a larger set of configurations (i.e. larger $N$). However, there are diminishing returns for this procedure for large $N$, and the $N=8$ case performs only slightly better than $N=4$. Also shown is the expected error of the prior distribution, which always gives an upper bound on the posterior accuracy. For comparison, the estimator given by \[eqn:bairey-mle\] is shown to illustrate how [BHL]{} adds robustness to noise. Furthermore, unlike [BHL]{}, any reconstruction using \[eqn:bairey-mle\] needs to be normalised to match the exact 2-norm of the unknown, which is not possible in practice; hence the reconstruction error for \[eqn:bairey-mle\] should be read as best-case. These results are for an additive error corresponding only to sampling statistics; however, for [BHL]{} to be of practical use we must consider additional relevant noise such as state preparation and measurement (SPAM) errors.

Dealing with errors {#section:SPAM-errors}
-------------------

In this section we will consider the three types of errors that plague any estimation scheme, describe our strategies for overcoming them, and quantify an upper bound on the experimental effort required to suppress them. The errors we consider are state-preparation errors, measurement errors, and statistical errors from finite sampling. The strategy for dealing with measurement and statistical errors is simple: take more samples.
However, our method will measure multiple weight-$k$ Pauli expectation values in parallel and takes time $O_k(\log m)$ to measure $m$ Paulis to fixed precision, beating the naive scaling of $O(m)$ and improving on recent work achieving $O_k\bigl[(\log m)^2\bigr]$ [@cotler2019], where $O_k$ means the constant implied by the big-$O$ notation depends on $k$. We deal with state preparation errors in two steps. Following Ref. [@bairey2019learning], we first use a time-averaged state to get our initial state closer to a steady state, and then we use quadrature rules to approximate this time average.

### Measurement errors {#section:measurement-errors}

Let us first consider measurement errors since these are the easiest to address. For a given measurement setting $ C_{jk} := i\left[P_j,P_k\right] $ we take $ s $ repeated measurements to estimate $ K_{jk} = \operatorname{Tr}\left(C_{jk} \rho \right) $. However, these estimates will contain errors beyond those we obtain from finite averaging of measurement outcomes. Let us consider the case that the $C_{jk}$ are qubit Pauli operators, and we assume that these are measured via two-outcome measurements. Suppose that with some probability $e_{jk}$ we have a measurement error, meaning that we should have observed outcome $+1$ but instead we observe outcome $-1$ (or vice versa, symmetrically). Subjected to this binary symmetric noise channel, the expectation value of repeated measurements becomes $$\begin{aligned} K_{jk} = \left(1-2e_{jk}\right) \operatorname{Tr}\left(C_{jk} \rho\right).\end{aligned}$$ In the case where we have a uniform error rate $ e_{jk} = e $ for all $ j,k $ we obtain measurements $ K_\text{meas}=(1-2e)K $. Therefore, since the kernel is invariant under scalar multiplication, our model is in fact inherently robust to such measurement errors, and only fluctuations between the $e_{jk}$ will contribute bias to the estimate. That is not to say, however, that the estimator is invariant under measurement errors.
Measurement errors will cause the MAP estimate (\[eqn:posterior-MAP\]) and posterior covariance (\[eqn:posterior-covariance\]) to shift toward the prior as expected. Although there is no bias, more samples will be needed to achieve the same statistical resolution. The Bayesian methodology also allows us to incorporate a model of these measurement errors and account for them. Suppose that we have a statistical description of our measurement error rates $ e_{jk} $. Then these measurement errors can also be corrected for by using the methods of Ref. [@ferrie2012estimating]. The corresponding uncertainty in this correction step can then replace the additive noise process $\epsilon$ (via \[eqn:approximate\_model\]), supplying the relevant error $ E $ for the approximation error.

![image](ising_example_figure){width="\textwidth"}

### State preparation errors {#section:preparation-errors}

Preparation errors usually pertain to our inability to exactly prepare some target state. In our case we won’t know *a priori* the state we want to prepare (that would require knowing the Hamiltonian), so preparation errors will actually correspond to the use of an approximate input state. This means that it is imperative to model and include preparation errors into the noise model as these are likely to be a dominant noise source in practice. It also means that we should appeal to the system for an appropriate preparation, rather than predefine it based on prior information. An example of a system-defined preparation is the time-averaged state $ {\bar{\rho}}(t) $, introduced in [@bairey2019learning], which will approximately commute with the Hamiltonian. It is defined by $$\begin{aligned} \label{eqn:time-averaged-state} {\bar{\rho}}(t) :=\frac{1}{t} \int_{0}^{t} \mathrm{d}u\, \rho(u).\end{aligned}$$ The appeal of this state is that the dynamics of the system provides us with the state.
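A numerical sketch of \[eqn:time-averaged-state\] on an arbitrary 4-dimensional example (the Hamiltonian and initial state are random, purely for illustration): since $\frac{1}{t}\int_0^t [H,\rho(u)]\,\mathrm{d}u = \frac{i}{t}\bigl(\rho(t)-\rho(0)\bigr)$, the commutator norm of the time average decays like $O(1/t)$, so the averaged state approaches a steady state.

```python
import numpy as np

rng = np.random.default_rng(5)
dim = 4

# A random Hermitian "Hamiltonian" and a pure (non-steady) initial state.
B = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
H = (B + B.conj().T) / 2
psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
rho0 = np.outer(psi, psi.conj()) / np.vdot(psi, psi).real

w, V = np.linalg.eigh(H)

def rho_at(u):
    """Schroedinger evolution rho(u) = e^{-iHu} rho0 e^{+iHu}."""
    U = V @ np.diag(np.exp(-1j * w * u)) @ V.conj().T
    return U @ rho0 @ U.conj().T

def time_averaged(t, steps=2000):
    """Midpoint-rule approximation of (1/t) * int_0^t rho(u) du."""
    us = (np.arange(steps) + 0.5) * t / steps
    return sum(rho_at(u) for u in us) / steps

def comm_norm(t):
    """Frobenius norm of [H, rho_bar(t)]."""
    rb = time_averaged(t)
    return np.linalg.norm(H @ rb - rb @ H)
```

The exact identity above means `comm_norm(t) * t` equals `np.linalg.norm(rho_at(t) - rho0)` up to quadrature error, and the commutator norm at large `t` is bounded by `2/t` for pure states.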
Following [@bairey2019learning], we show in \[appendix:approximate-state-preps\] that a “warm start” initial state that is $\epsilon$-close in 1-norm to a steady state converges like $O(\epsilon/t)$ to a true steady state. Therefore we can use approximate steady state preparations and time averaging to systematically reduce our state preparation errors. A further difficulty is that \[eqn:time-averaged-state\] cannot be physically realized exactly. We must sample from times in the interval $[0,t]$ and average an ensemble of measurements over those times to approximate the statistics from ${\bar{\rho}}(t)$. To deal with this additional source of error, which we call quadrature error, we sample from specific times $t_i \in (0,t)$ and produce a weighted average ${\bar{\rho}}_s(t) = \sum_{i=1}^s w_i \rho(t_i)$. In \[appendix:approximate-state-preps\] we show that our state ${\bar{\rho}}_s(t)$ rapidly converges to the desired state ${\bar{\rho}}(t)$ after $s$ samples, and satisfies the bound $$\begin{aligned} \bigl\|{\bar{\rho}}(t) - {\bar{\rho}}_s(t)\bigr\|_1 \le \frac{\sqrt{\pi}}{4\sqrt{s}} \biggl(\frac{\mathrm{e}\|H\|t}{4s}\biggr)^{2s}\,.\end{aligned}$$

### Sample complexity upper bound {#sec:samplecomplexitymaintext}

We can estimate the experimental cost of Hamiltonian learning using our scheme for state preparation via time averaging and quadrature together with our method for efficient estimation of Pauli expectation values, which are both detailed in pseudocode in \[tab:pseudocode\] in \[appendix:algorithms\]. We have seen numerically that the Bayesian approach outperforms the naive approximate kernel estimate \[eqn:bairey-mle\]. We would like to be able to properly bound the sample complexity of this estimator. However, the Bayesian model is difficult to analyze because of the role played by the prior information.
We therefore give an analysis in \[appendix:sample-complexity\] of the sample complexity for the naive kernel estimator (\[eqn:bairey-mle\]) to obtain an infidelity of at most $\epsilon$. Here the infidelity (one minus the fidelity, $F$) between two vectors $a$ and $b$ is defined as $$\begin{aligned} 1-F(a,b) = 1- \frac{ |a{^\text{T}}b|^2}{\|a\|_2^2 \|b\|_2^2}\,,\end{aligned}$$ and is a measure that is insensitive to an overall rescaling of $a$ or $b$ by nonzero real numbers. In \[appendix:sample-complexity\], we show that starting with any *a priori* bound $\max_i |c_i| \le O(1)$, we can (with high probability) learn an estimate $\hat{c}$ such that $1-F(c,\hat{c}) \le \epsilon$ using at most $$\begin{aligned} N = \tilde{O}\biggl(\frac{m^3}{\epsilon^{3/2} \Delta^{3/2}} 3^{2k}\biggr)\end{aligned}$$ measurements, where $\Delta$ is the spectral gap of the (noisy) forward operator, $m$ is the length of $c(H)$, and $\tilde{O}$ means we are ignoring logarithmic terms. A more precise version of this claim is given in \[thm:bound\] and \[cor:KE\]. The main takeaway from this bound is that estimation of $k$-body Hamiltonians can be done in time polynomial in the number of qubits, so long as the gap $\Delta$ is not too small. We also stress that these are recovery guarantees and provide only sufficient conditions for convergence. Using the Bayesian methodology of [BHL]{} allows us to leverage the use of prior information and multiple state estimates to improve the estimation dramatically beyond what can be rigorously proven.

Example: Long-range Ising Hamiltonian {#section:molmer-sorenson-example}
-------------------------------------

Consider again the long-range Ising model in \[eqn:lr-ising-hamiltonian\] and suppose we are trying to reconstruct this from multiple time-averaged states. We fix the spin-chain length to be $n=4$ and the basis $ \{P_i\} $ to be all $2$-body Pauli operators.
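As a quick check of the basis size for this example ($n = 4$, all Pauli terms of weight at most two), the count works out as follows:

```python
from itertools import combinations

n = 4
# Weight-1 terms: 3 Pauli choices per qubit; weight-2 terms: 9 choices
# (3 x 3 Paulis) per unordered pair of qubits.
one_body = 3 * n
two_body = 9 * len(list(combinations(range(n), 2)))
assert one_body + two_body == 66   # the coupling count quoted for this example
```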
(The small value of $n$ is chosen for visual clarity in \[fig:ising-example\].) We set our prior mean to be centred at the Ising model with $$\begin{aligned} \bar{c}_i = \begin{cases} 1,&P_i = X_jX_k\\ 0,&\text{otherwise} \end{cases}\end{aligned}$$ and we choose an i.i.d. prior with $ \sigma_{c} = 10^{-1} $. The true Hamiltonian $H$ is then taken to be sampled from the prior distribution. Panel (a) shows the distribution of the spectra for Hamiltonians drawn from the prior, centred around the degenerate mean. Eigenstates of our prior are natural choices for initial states as they are potentially close to steady states, especially in the presence of more prior information. Panel (b) shows the decay of $\bigl\|[H,{\bar{\rho}}(t)]\bigr\|_1$ over time for different initial states, including two eigenstates of the prior $\rho_\mathrm{gs}$ and $\rho_\mathrm{es}$ defined by $$\begin{aligned} \rho_\mathrm{gs} =& {\left|0001\right>} + {\left|0100\right>} + {\left|1011\right>} + {\left|1110\right>}\,,\\ \rho_\mathrm{es} =& {\left|0000\right>} + {\left|0011\right>} + {\left|0101\right>} + {\left|0110\right>} + \\ &\ {\left|1001\right>} + {\left|1010\right>} + {\left|1100\right>} + {\left|1111\right>} \,,\end{aligned}$$ which are shown in yellow and green, respectively. The time-average of both these states approaches a commuting state according to a $1/t$ scaling; however, for the time-averaged state corresponding to the initial state $ \rho_\mathrm{es}$, which we will denote ${\bar{\rho}}_\mathrm{es}(t) $, this commutator norm vanishes periodically. If we look at the spectrum in \[fig:ising-example\] (a), we can see that $\rho_\mathrm{es}$ has large overlap with the two most excited states. The frequency components $f_{ij}$ of the time evolution correspond to the energy spacings of the Hamiltonian, $ \vert E_i - E_j \vert = f_{ij} \hbar $, and high frequency components decay under averaging fastest, $\sim 1/(f_{ij} t)$.
Hence, the major frequency component that persists in $ {\bar{\rho}}_\mathrm{es}(t)$ is due to the difference between the two largest eigenvalues $\vert E_i - E_j \vert\approx 0.5$, which gives a period of $T/\pi\approx 2$. Therefore, certain initial states will yield time-averaged states that approximately commute at specific times, according to $\rho(0) \approx \rho(t)$. In this example we prepare the $16$ eigenstates of the prior distribution, and time-average them for $t=3\pi$. In between each state, we update the posterior online according to \[alg:online-learning\]. Most importantly, the approximation error for the use of time-averaged states is accounted for as detailed further in \[appendix:approximate-state-preps\]. Panel (c) shows the true, prior and posterior couplings for the Hamiltonian. We see that correctly handling the approximation error allows for the use of non-commuting states and [BHL]{} will still yield robust error quantification. The shaded areas show the marginal variances which correspond to the diagonal of the covariance $\Gamma_p$. However, it should be noted that [BHL]{} provides access to the full covariance matrix, including off-diagonal correlations in the unknowns. There are 3 couplings, the $X_0$, $X_3$, and $X_0X_2$ terms, out of the full 66 that lie outside the posterior $2\sigma$ error bars. These errors, however, are within what we expect from a 95% credible region.

Discussion {#section:discussion}
==========

We have introduced a new framework, called [BHL]{}, which allows for the efficient reconstruction of $k$-body Hamiltonians. Our method uses prior information of the type that is likely to be available to most experimentalists. This prior information allows us to lift the estimator from a subspace containing the system coefficients to a bare point estimate, and includes a natural quantification of the uncertainty in the form of a posterior covariance matrix.
We propose two new extended models that can be used singly or together to improve the stability and accuracy of the model, one using multiple input states and the other making use of well calibrated control fields. We also present an algorithm (\[alg:online-learning\]) for performing [BHL]{} online and suggest one way in which measurements could be performed adaptively to maximise estimation accuracy. The major contribution that [BHL]{} brings to Hamiltonian learning is a rigorous Bayesian framework for handling the numerous approximation errors inherent in these correlation matrix models. We show how these errors can be systematically added to the likelihood of our model in order to prevent overfitting and give concrete examples. Most importantly, [BHL]{} provides robust uncertainty quantification in the inference of the system parameters. We have furthermore introduced an efficient method for approximate preparation of time-averaged steady states and an efficient method for estimating the expectation values of $m$ low-weight Pauli operators in $O(\log m)$ time. These two subroutines have enabled us to give a rigorous upper bound on the required sample complexity for Hamiltonian estimation, and this complexity is polynomial in $n$ for $k$-body Hamiltonians whenever the spectral gap of $K$ is at least $1/\textrm{poly}(n)$. There are many avenues for future work. One obvious step is to generalize [BHL]{} to handle Lindbladian estimation [@bairey2019lindblad]. Another is to consider Hamiltonians for systems other than qubits. In particular, it would be interesting to generalize these methods to Hamiltonians and Lindbladians that are unbounded operators, such as systems comprised of coupled oscillators. It is unclear how tight the sample complexity bounds derived here are, or if they can be significantly improved by either a better analysis of the existing algorithms or by better estimation schemes. In particular, it would be interesting to directly analyze [BHL]{}.
It would also be interesting to find lower bounds on the sample complexity of Hamiltonian estimation. Finally, perhaps the most interesting direction for future work is to demonstrate the usefulness of [BHL]{} in a real experiment. The authors would like to thank Arne Grimsmo, Kamil Korzekwa and Phillipp Schindler for insightful discussions. This work was supported by the Australian Research Council via EQuS project number CE170100009 and by the US Army Research Office grant numbers W911NF-14-1-0098 and W911NF-14-1-0103.

Bayesian Preliminaries {#appendix:bayesian}
======================

Construction of the likelihood {#appendix:construction-likelihood}
------------------------------

We will begin by constructing the likelihood for our problem. Recall that in the ideal noiseless case we have a true operator $ A $ such that $ Ax=0 $. However, we only have access to the measured approximation $ \tilde{A} $, where they differ by some additive noise matrix $ E = A - \tilde{A} $. This means that the error we incur in our true model is given by $$\begin{aligned} \tilde{A}x &= Ax - Ex\end{aligned}$$ leaving us with an additive noise process $ \epsilon:=Ex $ in our model $$\begin{aligned} \tilde{A}x + \epsilon &= 0\end{aligned}$$ which we call the *approximation error* [@kaipio2007statistical]. We begin with the assumption that the distribution $ \pi\left(A,x,\epsilon\right) $ is jointly normal.
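This bookkeeping can be sanity-checked numerically. The sketch below (a toy construction, with dimensions and noise scale chosen arbitrarily) builds an operator $A$ with a known null vector and verifies that, with the convention $E = A - \tilde{A}$ and $Ax = 0$, the induced model error $\epsilon = Ex$ satisfies $\tilde{A}x + \epsilon = 0$:

```python
import numpy as np

rng = np.random.default_rng(2)
m = 6

# A true operator A with a known null vector x (Ax = 0).
x = rng.normal(size=m)
A = rng.normal(size=(m, m))
A -= np.outer(A @ x, x) / (x @ x)      # project out x, so that Ax = 0

# Measured operator A_tilde, differing by additive noise E = A - A_tilde.
E = 0.01 * rng.normal(size=(m, m))
A_tilde = A - E

# The induced model error is eps = Ex, so that A_tilde x + eps = 0.
eps = E @ x
assert np.allclose(A @ x, 0)
assert np.allclose(A_tilde @ x + eps, 0)
```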
Then through repeated use of Bayes’ theorem, $$\begin{aligned} \begin{split} \label{eqn:repeated-bayes} \pi\left(A,x,\epsilon\right)&= \pi\left(A|x,\epsilon\right)\pi\left(\epsilon|x\right)\pi\left(x\right)\\ &= \pi\left(A,\epsilon|x\right)\pi\left(x\right) \end{split}\end{aligned}$$ and also using the fact that $$\begin{aligned} \label{eqn:dirac-distribution} \pi\left(A|x,\epsilon\right) &= \delta\left(-Ax-\epsilon\right) \end{aligned}$$ we can marginalise over our approximation error $ \epsilon $ to yield our likelihood $$\begin{aligned} \label{eqn:likelihood} \pi\left(A|x\right) &= \int \pi\left(A,\epsilon|x\right)d\epsilon\nonumber\\ &= \int \pi\left(A|x,\epsilon\right) \pi\left(\epsilon|x\right) d\epsilon, &\bigl[\text{by~(\ref{eqn:repeated-bayes})}\bigr]\nonumber\\ &= \int \delta\left(-Ax-\epsilon\right) \pi\left(\epsilon|x\right) d\epsilon, &\bigl[\text{by~(\ref{eqn:dirac-distribution})}\bigr]\nonumber\\ &= \pi_{\epsilon|x}\left(-Ax | x\right)\,.\end{aligned}$$ Here $ \pi_{\epsilon|x}\left(-Ax | x\right) $ is the distribution of $ \epsilon|x $ *evaluated* at the point $ \epsilon = -Ax $. In the usual construction of the likelihood for such a linear model, we assume that our additive noise process and the unknown are mutually independent, in which case conditioning on $ x $ makes no difference and we are simply left with the likelihood depending on the distribution of $ \epsilon $. However, our approximation error is $ \epsilon=Ex $; therefore we cannot make this assumption and have to carry the extra baggage of the conditional distribution in our computations.

Gaussian Posterior
------------------

In this section we will derive our MAP estimator from a conjugate Gaussian prior. In general, the posterior cannot be computed in closed form, and even for small parameter spaces one must often resort to sampling methods such as Markov chain Monte Carlo (MCMC) [@kaipio2004computational].
However, for certain combinations of likelihoods and prior distributions the posterior will be exactly computable, a situation known as a *conjugate prior*. The conjugate prior for a Gaussian likelihood is a Gaussian, and we have shown in the preceding section that our likelihood is Gaussian. Now we will compute the posterior distribution under a Gaussian prior, $x{\sim \mathcal{N}}(\bar{x},\Gamma_x)$, where $$\begin{aligned} \pi(x|A)\propto&\pi(A|x)\pi(x)\nonumber\\ \propto&\exp\biggl(-\frac{1}{2}(-Ax)^\mathrm{T} \Gamma_{\epsilon|x}^{-1} (-Ax) \biggr)\nonumber\\ & \ \times \exp\biggl(-\frac{1}{2}(x-\bar{x})^\mathrm{T}\Gamma_{x}^{-1}(x-\bar{x}) \biggr)\nonumber\\ =&\exp\biggl(-\frac{1}{2}\Bigl( (-Ax)^\mathrm{T} \Gamma_{\epsilon|x}^{-1} (-Ax) + (x-\bar{x})^\mathrm{T}\Gamma_{x}^{-1}(x-\bar{x}) \Bigr) \biggr)\,.\end{aligned}$$ Now if we take a Cholesky decomposition of the positive definite covariance matrices $L_{\epsilon|x}^\mathrm{T}L_{\epsilon|x} = \Gamma_{\epsilon|x}^{-1}$ and $L_{x}^\mathrm{T}L_{x} = \Gamma_{x}^{-1}$ we find $$\begin{aligned} \label{eqn:posterior-potential} \pi(x|A)\propto&\exp\left(-\frac{1}{2}\|L_{\epsilon|x}(x) Ax\|_2^2 -\frac{1}{2} \|L_x (x-\bar{x})\|_2^2 \right)\,.\end{aligned}$$ The posterior mean is then given by $$\begin{aligned} \mu_p = {\underset{x}{\mathrm{argmin}}} \|L_{\epsilon|x}(x) Ax\|_2^2 + \|L_x (x-\bar{x})\|_2^2.\end{aligned}$$ Because the posterior is Gaussian we can compare \[eqn:posterior-potential\] with a standard multivariate distribution to solve for the covariance, $$\begin{aligned} \Gamma_p = \left(\Gamma_{x}^{-1} + A^\text{T} \Gamma_{\epsilon|x}^{-1} A \right)^{-1}.\end{aligned}$$

Distribution of approximation error
-----------------------------------

We need to determine the statistics of the conditional distribution $\epsilon|x $ in order to be able to evaluate our likelihood in (\[eqn:likelihood\]). First we will need to compute the statistics of $ \epsilon $.
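Before computing the statistics of $\epsilon$, the posterior formulas above can be checked with a small numerical sketch. For the sketch we treat $\Gamma_{\epsilon|x}$ as a fixed matrix (an assumption; the $x$-dependence is the subject of the next subsection) and verify that the closed-form mean agrees with a direct least-squares minimisation of the potential:

```python
import numpy as np

rng = np.random.default_rng(3)
m = 5
A = rng.normal(size=(m, m))

# Prior x ~ N(xbar, G_x) and (fixed) noise covariance G_e.
xbar = rng.normal(size=m)
G_x = 0.5 * np.eye(m)
G_e = 0.1 * np.eye(m)
G_x_inv, G_e_inv = np.linalg.inv(G_x), np.linalg.inv(G_e)

# Posterior covariance and mean of the Gaussian posterior.
G_p = np.linalg.inv(G_x_inv + A.T @ G_e_inv @ A)
mu_p = G_p @ (G_x_inv @ xbar)

# Cross-check: minimise ||L_e A x||^2 + ||L_x (x - xbar)||^2 directly,
# with L^T L equal to the inverse covariances, via stacked least squares.
L_e = np.linalg.cholesky(G_e_inv).T
L_x = np.linalg.cholesky(G_x_inv).T
M = np.vstack([L_e @ A, L_x])
rhs = np.concatenate([np.zeros(m), L_x @ xbar])
x_ls, *_ = np.linalg.lstsq(M, rhs, rcond=None)
assert np.allclose(mu_p, x_ls)
```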
Given a simple noise model corresponding to the averaging of measurement outcomes from $ s $ samples, we can assume we have zero-mean Gaussian noise in the entries of $ A $ given by $ E_{i,j} = E_{j,i} {\sim \mathcal{N}}\left(0,\sigma_E^2\right) $ where $ \sigma_E^2\approx s^{-1}$ due to the central limit theorem. The mean of the approximation error, using the independence of $ x $ and $ E $, is $$\begin{aligned} \bar{\epsilon} = \mathbb{E}\left[Ex\right] = \mathbb{E}\left[E\right]\mathbb{E}\left[x\right] = 0.\end{aligned}$$ Now consider $$\begin{aligned} \Gamma_\epsilon &= \mathbb{E}\left[(\epsilon-\bar{\epsilon})(\epsilon-\bar{\epsilon}){^\text{T}}\right]\\ &= \mathbb{E}\left[\epsilon\epsilon{^\text{T}}\right]\\ &= \mathbb{E}\left[Exx{^\text{T}}E{^\text{T}}\right]\,.\end{aligned}$$ Let $ e_k,e_l $ be rows of $ E $. Then the entries of $ \Gamma_\epsilon $ are $$\begin{aligned} \Gamma_{\epsilon_{k,l}} &= \mathbb{E}\left[e_k x x{^\text{T}}e_l{^\text{T}}\right]\\ &=\sum_{i,j} {\mathbb{E}\left[e_{k,i}e_{l,j}x_i x_j \right]}\end{aligned}$$ and given $$\begin{aligned} {\mathbb{E}\left[e_{k,i}e_{l,j} \right]}&=\begin{cases} \sigma_E^2&k=l,i=j\\ \sigma_E^2&k\neq l,i=l,j=k\\ 0&\text{otherwise}\,, \end{cases}\end{aligned}$$ we have $$\begin{aligned} \Gamma_{\epsilon_{k,l}}(x) &= \begin{cases} \sigma_E^2 {\mathbb{E}\left[x{^\text{T}}x\right]} &k=l\\ \sigma_E^2 {\mathbb{E}\left[x_k x_l\right]} & k\neq l\\ \end{cases}{\addtocounter{equation}{1}\tag{\theequation}}\label{eqn:useful-gammae}\\ &= \begin{cases} \sigma_E^2 \left(\operatorname{Tr}\left[\Gamma_x\right] + \|\bar{x}\|_2^2\right) &k=l\\ \sigma_E^2 \left(\Gamma_{x_{k,l}} + \bar{x}_k\bar{x}_l\right) & k\neq l\,. \end{cases}{\addtocounter{equation}{1}\tag{\theequation}}\label{eqn:prior-gammae}\end{aligned}$$ Now we can use the standard Schur complement computation of the conditional Gaussian distribution [@kaipio2004computational] to determine our likelihood.
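A Monte Carlo sanity check of (\[eqn:prior-gammae\]) (toy dimensions and noise scale are arbitrary choices for the sketch):

```python
import numpy as np

rng = np.random.default_rng(4)
m, sigma_E = 4, 0.05
xbar = rng.normal(size=m)
G_x = 0.2 * np.eye(m)

def approx_error_cov(xbar, G_x, sigma_E):
    """Entries of Cov(Ex) for symmetric E with N(0, sigma_E^2) entries,
    following the closed form (eqn:prior-gammae)."""
    G = sigma_E**2 * (G_x + np.outer(xbar, xbar))
    np.fill_diagonal(G, sigma_E**2 * (np.trace(G_x) + xbar @ xbar))
    return G

G = approx_error_cov(xbar, G_x, sigma_E)

# Empirical covariance of eps = E x over many draws of (E, x).
L = np.linalg.cholesky(G_x)
samples = []
for _ in range(20000):
    E = rng.normal(scale=sigma_E, size=(m, m))
    E = np.triu(E) + np.triu(E, 1).T          # impose the symmetry E_ij = E_ji
    x = xbar + L @ rng.normal(size=m)
    samples.append(E @ x)
emp = np.cov(np.array(samples).T)
assert abs(emp[0, 0] - G[0, 0]) < 0.2 * G[0, 0]
assert abs(emp[0, 1] - G[0, 1]) < 0.005
```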
Let $ \epsilon|x{\sim \mathcal{N}}(\mu_{\epsilon|x},\Gamma_{\epsilon|x}) $ then $$\begin{aligned} \mu_{\epsilon|x} &= \bar{\epsilon} + \Gamma_{\epsilon x} \Gamma_x^{-1} (x-\bar{x})\\ \Gamma_{\epsilon|x} &= \Gamma_\epsilon - \Gamma_{\epsilon x} \Gamma_{x}^{-1} \Gamma_{x \epsilon}\,.\end{aligned}$$ However, we have $$\begin{aligned} \begin{split} \Gamma_{\epsilon x} &= {\mathbb{E}\left[\left(\epsilon-\bar{\epsilon}\right)\left(x-\bar{x}\right){^\text{T}}\right]}\\ &= {\mathbb{E}\left[Ex\left(x-\bar{x}\right){^\text{T}}\right]}\\ &= {\mathbb{E}\left[E\right]}\,{\mathbb{E}\left[x\left(x-\bar{x}\right){^\text{T}}\right]}\\ &= 0. \end{split}\end{aligned}$$ This means that we obtain the conditional distribution $ \epsilon|x{\sim \mathcal{N}}(0,\Gamma_{\epsilon|x}) $ where $$\begin{aligned} \label{eqn:conditional-covariance} \Gamma_{\epsilon|x} &= \Gamma_{\epsilon}\end{aligned}$$ as defined in \[eqn:prior-gammae\]. We can now provide specific Bayesian formulations for the two models eqs. (\[eqn:A-multiple-states\]) and (\[eqn:A-mult-controls\]). First let us consider the model (\[eqn:A-multiple-states\]) for the case of multiple input states. Our prior statistics are simply $\bar{x} = \bar{c},\ \Gamma_x = \Gamma_c$ and our noise covariance is given by $$\begin{aligned} \Gamma_{\epsilon|x} := \begin{bmatrix} \Gamma_{\epsilon_1|c}& &\ldots&0\\ \ &\Gamma_{\epsilon_2|c}&\ &\ \\ \vdots&\ &\ddots &\vdots \\ 0&\ &\ldots &\Gamma_{\epsilon_N|c}\\ \end{bmatrix}.
\end{aligned}$$ Next, the model for multiple control fields will have a joint prior distribution given by $$\begin{aligned} \bar{x}:=\begin{bmatrix} \bar{c}\\ \bar{v}_1\\ \vdots\\ \bar{v}_N\\ \end{bmatrix}\text{ and } \Gamma_x := \begin{bmatrix} \Gamma_c& &\ldots&0\\ \ &\Gamma_{v_1}&\ &\ \\ \vdots&\ &\ddots &\vdots \\ 0&\ &\ldots &\Gamma_{v_{N}} \\ \end{bmatrix}.{\addtocounter{equation}{1}\tag{\theequation}}\label{eqn:multiplecontrolmodel}\end{aligned}$$ The noise $\epsilon$ will be distributed as $ \epsilon{\sim \mathcal{N}}(0,\Gamma_\epsilon) $ where, similar to above, the covariance $ \Gamma_\epsilon $ is the block diagonal matrix $$\begin{aligned} \Gamma_{\epsilon|x} := \begin{bmatrix} \Gamma_{\epsilon_0|c}& &\ldots&0\\ \ &\Gamma_{\epsilon_1|c,v_1}&\ &\ \\ \vdots&\ &\ddots &\vdots \\ 0&\ &\ldots &\Gamma_{\epsilon_N|c,v_{N}} \\ \end{bmatrix}.\end{aligned}$$ The conditional covariance is given by $$\begin{aligned} \Gamma_{{\epsilon_i|c,v_i}_{k,l}} &=\begin{cases} \sigma_E^2 \bigl(\operatorname{Tr}\left[\Gamma_c \right] + \|c\|_2^2 + \operatorname{Tr}\left[\Gamma_{v_i} \right] + \|v_i\|_2^2\bigr)\,, & k = l \\ \sigma_E^2 \bigl(\Gamma_{c_{k,l}} + c_k c_l + \Gamma_{v_{i_{k,l}}} + v_{i_{k}} v_{i_{l}}\bigr)\,, & k\neq l. \end{cases}\end{aligned}$$

Approximate state preparations {#appendix:approximate-state-preps}
==============================

Here we show that the vector $c(H)$ of the couplings of the unknown Hamiltonian $H$ is still approximately in the kernel of a matrix $K$ when $K = K(\rho)$ is assumed only to come from a *$\delta$-approximate steady state*. By this we mean a state having a small commutator with $H$, $\|[\rho,H]\|_1\le \delta$. Such states can be prepared using the time averaging argument first described in Ref. [@bairey2019learning]. Our first result is a slight refinement of a similar result in [@bairey2019learning], and is stated precisely in \[lemma:preparation-error\].
In \[thm:bound\] below, we will derive a bound on the error for the kernel estimator due to these approximate state preparations. \[lemma:preparation-error\] Let $\rho$ be a steady state of $H$ and suppose $\|\rho(0)-\rho\|_1 \le \epsilon$ for some $\rho(0)$. Then the time-averaged state ${\bar{\rho}}(t)$ satisfies the inequality $$\begin{aligned} \bigl\|[H,{\bar{\rho}}(t)] \bigr\|_1 \le \frac{2\epsilon}{t}\,.\end{aligned}$$ We define the time-averaged state under the evolution of $H$, $$\begin{aligned} \label{eq:timeavgrho} {\bar{\rho}}(t) := \frac{1}{t} \int_0^t \mathrm{d}u \, \rho(u) = \frac{1}{t} \int_0^t \mathrm{d}u \, \mathrm{e}^{-iHu} \rho(0) \mathrm{e}^{iHu}\,.\end{aligned}$$ This time-averaged state is also always close to a steady state, as follows. Using the triangle inequality and the unitary invariance of the norm, we have $$\begin{aligned} \begin{split} \|{\bar{\rho}}(t)-\rho\|_1 & = \left\|\frac{1}{t} \int_0^t \mathrm{d}u \, \mathrm{e}^{-iHu} \bigl(\rho(0) - \rho\bigr) \mathrm{e}^{iHu}\right\|_1\\ &\le \frac{1}{t} \int_0^t \mathrm{d}u \left\| \mathrm{e}^{-iHu} \bigl(\rho(0) - \rho\bigr) \mathrm{e}^{iHu}\right\|_1\\ &= \frac{1}{t} \int_0^t \mathrm{d}u \|\rho(0) - \rho\|_1\\ & \le \epsilon\,. \end{split}\end{aligned}$$ Next we show that the commutator of the time-averaged state with the Hamiltonian is decreasing with time. Using the equations of motion and the fundamental theorem of calculus, we have $$\begin{aligned} \begin{split} \|[H,{\bar{\rho}}(t)] \|_1 &= \frac{1}{t}\|\rho(0) - \rho(t)\|_1\\ &=\frac{1}{t}\|\left(\rho(0) - \rho\right) + \left(\rho - \rho(t)\right)\|_1\,. \end{split}\end{aligned}$$ Then again using the triangle inequality and the unitary invariance of the norm we have $$\begin{aligned} \begin{split} \|[H,{\bar{\rho}}(t)] \|_1&\leq\frac{\epsilon}{t} + \frac{1}{t}\|\rho-\rho(t)\|_1\\ &\leq\frac{\epsilon}{t} + \frac{1}{t}\|\rho-\rho(0)\|_1\\ &\le \frac{2\epsilon}{t} \end{split}\end{aligned}$$ and the result is immediate.
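The $2\epsilon/t$ bound can be verified numerically. The sketch below (a random toy Hamiltonian, with the Gibbs state as the reference steady state) computes the time average exactly in the eigenbasis of $H$, where the $(a,b)$ matrix element of ${\bar{\rho}}(t)$ picks up the factor $(\mathrm{e}^{-i\Delta_{ab}t}-1)/(-i\Delta_{ab}t)$ for gap $\Delta_{ab}$:

```python
import numpy as np

rng = np.random.default_rng(8)
d = 5
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
H = (A + A.conj().T) / 2
evals, V = np.linalg.eigh(H)

# A steady state (Gibbs state of H) plus a small traceless Hermitian perturbation.
rho_ss = V @ np.diag(np.exp(-evals) / np.exp(-evals).sum()) @ V.conj().T
P = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
P = (P + P.conj().T)
P -= np.trace(P) * np.eye(d) / d
rho0 = rho_ss + 0.01 * P
eps = np.linalg.norm(rho0 - rho_ss, ord='nuc')      # ||rho(0) - rho||_1

def rho_bar(t):
    """Exact time average in the eigenbasis of H."""
    R = V.conj().T @ rho0 @ V
    D = evals[:, None] - evals[None, :]
    phase = np.where(D == 0, 1.0,
                     (np.exp(-1j * D * t) - 1) / (-1j * D * t + (D == 0)))
    return V @ (R * phase) @ V.conj().T

for t in [1.0, 10.0, 100.0]:
    rb = rho_bar(t)
    comm = np.linalg.norm(H @ rb - rb @ H, ord='nuc')
    assert comm <= 2 * eps / t + 1e-9                # the lemma's bound
```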
Although the time-averaged state becomes a better and better approximate state preparation, there is no physical way to prepare the exact state ${\bar{\rho}}(t)$ for $t>0$. An obvious approach to deal with this is to sample from a discrete set of intermediate times and average the values of the experiments at these sampled times. Let us denote by ${\bar{\rho}}_s(t)$ a discretely averaged density operator over a set of $s$ points $\{t_1,\ldots,t_s\}$ in the interval $[0,t]$. We will allow positive weights $w_i$ so that our approximate time-averaged state is given by $$\begin{aligned} \label{eq:rhoav-s} {\bar{\rho}}(t) \approx {\bar{\rho}}_s(t) := \sum_{i=1}^s w_i \rho(t_i)\,,\end{aligned}$$ where $t_i = u_i t$ for some $w_i>0$ and $u_i \in (0,1)$ to be chosen later by \[eq:u,eq:weights\]. The matrix elements of $K\bigl({\bar{\rho}}(t)\bigr)$ are given by $$\begin{aligned} \label{eq:K-rhoav} K\bigl({\bar{\rho}}(t)\bigr)_{jk} = \operatorname{Tr}\bigl(i[P_j,P_k]{\bar{\rho}}(t)\bigr)\,,\end{aligned}$$ which is linear in ${\bar{\rho}}(t)$. This linearity is important because it means that a weighted average of the results of experiments with the states $\rho(t_i)$ will have the same expected value as an experiment with the unphysical state ${\bar{\rho}}(t)$. Our next result bounds the error in the 1-norm associated to approximating ${\bar{\rho}}(t)$ from \[eq:timeavgrho\] by ${\bar{\rho}}_s(t)$ from \[eq:rhoav-s\].
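The specific times and weights used in the next result come from Gauss–Legendre quadrature on $[0,1]$. A short sketch using NumPy's built-in nodes and weights (mapped from $[-1,1]$), cross-checked against the closed-form weight formula:

```python
import numpy as np

def quad_nodes_weights(s):
    """Gauss-Legendre nodes and weights mapped from [-1, 1] to [0, 1]:
    the nodes u_i are the roots of P_s(2u - 1) and the weights halve."""
    x, w = np.polynomial.legendre.leggauss(s)
    return (x + 1) / 2, w / 2

s = 4
u, w = quad_nodes_weights(s)

# The mapped weights agree with the closed form
# w_i = 4 u_i (1 - u_i) / ((s+1)^2 P_{s+1}(2 u_i - 1)^2).
P_next = np.polynomial.legendre.Legendre.basis(s + 1)
assert np.allclose(w, 4 * u * (1 - u) / ((s + 1)**2 * P_next(2 * u - 1)**2))

# Order-s Gauss-Legendre integrates polynomials of degree <= 2s-1 exactly.
f = lambda t: t**7 - 3 * t**4 + 2
assert np.isclose(np.sum(w * f(u)), 1/8 - 3/5 + 2)
```

In practice a nested rule would be preferred for online convergence checks, as remarked after the theorem below; Gauss–Legendre is what the stated error bound applies to.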
\[thm:quadrature\] There exist sets of weights $w_i>0$ and times $t_i\in [0,t]$, $\bigl\{(w_i,t_i)\bigr\}_{i=1}^s$, such that $$\begin{aligned} \label{eq:quaderrorbound1norm} \bigl\|{\bar{\rho}}(t) - {\bar{\rho}}_s(t)\bigr\|_1 \le \frac{\sqrt{\pi}}{4\sqrt{s}} \biggl(\frac{\mathrm{e}\|H\|t}{4s}\biggr)^{2s}\,.\end{aligned}$$ We first change variables so that $$\begin{aligned} {\bar{\rho}}(t) = \frac{1}{t}\int_0^t \rho(u) \mathrm{d}u = \int_0^1 \rho\bigl(ut\bigr)\mathrm{d}u\,.\end{aligned}$$ Now trace both sides against any operator $\Pi$ satisfying $\|\Pi\|\le 1$, and we can introduce a function $$\begin{aligned} \label{eq:f(u)} f(u) = \operatorname{Tr}\bigl(\Pi\, \rho(ut)\bigr)\,.\end{aligned}$$ The function $f$ is very well behaved, and is an entire function in the complex plane since it is obtained from compositions, sums and products of entire functions. In particular, it has well-defined derivatives at all orders. We will use Gauss-Legendre quadrature to construct positive weights $w_i$ and points $u_i$ so that $$\begin{aligned} I := \int_0^1 f(u) \mathrm{d}u\, \approx\, I_s:=\sum_{i=1}^s w_i f(u_i)\,.\end{aligned}$$ We will choose the evaluation points $u_i$ from the distinct roots of the $s$th order Legendre polynomial $\mathcal{P}_s(x)$, so that for $i=1,\ldots,s$ we have $$\begin{aligned} \label{eq:u} \mathcal{P}_s(2u_i-1) = 0\,.\end{aligned}$$ The choice of weight for point $i$ is given by $$\begin{aligned} \label{eq:weights} w_i = \frac{4 u_i(1-u_i)}{(s+1)^2 \mathcal{P}_{s+1}(2u_i-1)^2}\,.\end{aligned}$$ With these choices, an upper bound for the error of the quadrature rule $I_s$ is given by [@Kahaner1989 §5.2] $$\begin{aligned} \bigl|I-I_s\bigr| \le \frac{(s!)^4}{(2s+1)((2s)!)^3} |f^{(2s)}(\xi)|\,,\end{aligned}$$ for some $\xi \in [0,1]$.
By \[lemma:M-bound\] below and $\|\Pi\|\le 1$, the term $|f^{(2s)}(\xi)|$ is bounded by $$\begin{aligned} |f^{(2s)}(\xi)| \le \bigl(2\|H\|t\bigr)^{2s}\,\end{aligned}$$ and we find the following inequality, $$\begin{aligned} |I-I_s| \le \frac{\bigl(2\|H\|t\bigr)^{2s}(s!)^4}{(2s+1)((2s)!)^3}\,.\end{aligned}$$ The right hand side of \[eq:quaderrorbound1norm\] is an upper bound on this after an elementary application of Stirling’s formula. This bound holds for any choice of $\Pi$, so in particular $$\begin{aligned} |I-I_s| \le \max_{\|\Pi\| \le 1} |I-I_s| = \bigl\|{\bar{\rho}}(t) - {\bar{\rho}}_s(t)\bigr\|_1\end{aligned}$$ and the result follows. We remark that, while we have used Gauss-Legendre quadrature to get a provable guarantee on the approximation error for the time-averaged matrix elements, it would be more suitable in practice to use a Gauss-Kronrod quadrature formula or other nested quadrature rule so that convergence can be checked online while reusing preexisting data points. We now prove the lemma used in the proof of \[thm:quadrature\]. \[lemma:M-bound\] For any operator $\Pi$, we have the bound $$\begin{aligned} \sup_{u \in [0,1]}\, \biggl|\frac{\partial^k}{\partial u^k} \operatorname{Tr}\bigl(\Pi\,\rho\bigl(ut\bigr)\bigr)\biggr|\le \|\Pi\| \bigl(2\|H\|t\bigr)^k\,.\end{aligned}$$ From the equation of motion and the chain rule, we have $$\begin{aligned} \partial_u \rho\bigl(ut\bigr) = -i\bigl[H,\rho\bigl(ut\bigr)\bigr] t\,.\end{aligned}$$ Therefore the $k$th derivative involves a $k$-nested commutator which will have $2^k$ terms, and there will be an overall factor of $t^k$. Expanding the nested commutators and using the triangle inequality, each term has $k$ factors of $H$ in it, and is of the form $|\operatorname{Tr}(\Pi\, H^{k-\ell} \rho H^{\ell})|$. By using the matrix Hölder inequality and the submultiplicativity of the norm, each term is less than $\|\Pi\| \|H\|^k \|\rho\|_1 \le \|\Pi\|\|H\|^k$.
Summing all of these terms and accounting for the overall factor from the chain rule gives the result. This additive error on the estimates to the matrix elements of $K$ can be incorporated into the bounds derived in \[appendix:sample-complexity\] for finite sampling. To see how approximate state preparations will affect our estimate, we define a measure of overlap between two subspaces spanned by vectors $a$ and $b$ given by the *fidelity*, $$\begin{aligned} F(a,b) = \frac{ |a{^\text{T}}b|^2}{\|a\|_2^2 \|b\|_2^2}\,.\end{aligned}$$ Note that this measure is canonical in the sense that it only depends on the subspace projectors and not on the specific choice of spanning vector within each subspace. That is, it is invariant under the rescalings $a\to \alpha a$ and $b \to \beta b$ for any nonzero $\alpha, \beta$. The fidelity is always bounded by $F\in[0,1]$, with $F=1$ if and only if $a=b$. Therefore $1-F$, called the *infidelity*, is a sensible measure of error. \[thm:bound\] Let $\rho$ be a $\delta$-approximate steady state for $H$, satisfying $\bigl\|[H,\rho]\bigr\|_1\le\delta$. Then the estimate $\hat{c}$ obtained from the least right singular vector of $K(\rho)$ obeys an error bound with $c=c(H)$ given by $$\begin{aligned} 1-F(c,\hat{c}) \le \frac{m\delta^2}{\Delta\|c\|_2^2}\,,\end{aligned}$$ where $m = \dim(c)$ and $\Delta > 0$ is the spectral gap of $K(\rho)$. Since $\rho$ is a $\delta$-approximate steady state with $H$, we have $$\begin{aligned} \bigl\|[H,\rho]\bigr\|_1\le\delta\,.\end{aligned}$$ The matrix elements of $K = K(\rho)$ are given by $$\begin{aligned} K_{jk} = i \operatorname{Tr}\bigl([P_j,P_k]\rho\bigr)\,,\end{aligned}$$ where the $P_j$ are the Pauli matrices and the indices run over the supported elements in the span that we are considering. 
The unknown $H$ is described by $c(H)$, the vector of couplings of $H$ with elements $$\begin{aligned} c(H)_j = \frac{1}{d}\operatorname{Tr}(P_j H)\,.\end{aligned}$$ so that $$\begin{aligned} H = \sum_j c(H)_j P_j\,.\end{aligned}$$ We first show that $c = c(H)$ is an approximate null vector of $K$. Looking at the $j$th element of $Kc$, we find from the cyclic property of the trace that $$\begin{aligned} \begin{split} \bigl[Kc\bigr]_j & = \sum_{k} i\operatorname{Tr}\bigl([P_j,P_k]\rho\bigr) c_k\\ & = i\operatorname{Tr}\bigl([P_j,H]\rho\bigr)\\ & = i\operatorname{Tr}\bigl(P_j H \rho - H P_j \rho\bigr)\\ & = i\operatorname{Tr}\bigl(P_j [H,\rho]\bigr)\,. \end{split}\end{aligned}$$ Now using the matrix Hölder inequality, we have $$\begin{aligned} \label{eq:inftynormbound} \begin{split} \bigl|\bigl[Kc\bigr]_j\bigr| & \le \|P_j\|_\infty\, \bigl\|[H,\rho]\bigr\|_1\\ & \le \delta\,. \end{split}\end{aligned}$$ Now consider the estimate $\hat{c}$ that would be returned by finding the least right singular vector of $K$. We let $M=K{^\text{T}}K$ and then $\hat{c}$ is equivalently the eigenvector of $M$ with the least eigenvalue. We have assumed that the gap of $K$ is $\Delta > 0$, so the “ground state” of $M$ is unique and spanned by $\hat{c}$. Now from \[eq:inftynormbound\] we have that the correct unknown vector $c$ has small overlap with $M$, $$\begin{aligned} \label{eq:cMc-ineq} c{^\text{T}}M c \le m\delta^2\,.\end{aligned}$$ Next, we note that as $M$ is a positive semidefinite matrix, we have the matrix inequality $$\begin{aligned} \label{eq:M-ineq} M \succeq \Delta \left(\mathbbm{1}-\frac{\hat{c}\,\hat{c}{^\text{T}}}{\|\hat{c}\|_2^2}\right)\,.\end{aligned}$$ Now multiplying \[eq:M-ineq\] by $c\,c{^\text{T}}$ and taking the trace and then using \[eq:cMc-ineq\], we find a bound on the infidelity of $$\begin{aligned} 1-F(c,\hat{c}) \le \frac{m\delta^2}{\Delta\|c\|_2^2}\,,\end{aligned}$$ as claimed. This error bound agrees with the scaling estimate derived in Ref. 
[@qi2019determininglocal] using a matrix perturbation theory argument. Our derivation has the advantage that it makes clear the scaling with respect to the length of the true Hamiltonian vector $c(H)$, and our bound is an effective bound that contains no uncontrolled sub-leading error terms. As a corollary of this result, suppose we have estimates of $K(\rho)$ that additionally have some additive error, $K \to K+\mathcal{E}$, where $|\mathcal{E}_{jk}| \le \epsilon$. Then the same argument as before shows that the true Hamiltonian $c(H)$ is close to the approximate kernel estimate $\hat{c}$. \[cor:KE\] Under the same conditions as \[thm:bound\], if $K\to K+\mathcal{E}$ and $|\mathcal{E}_{jk}| \le \epsilon$ then $$\begin{aligned} 1-F(c,\hat{c}) \le \frac{m(\delta+\epsilon\|c\|_1)^2}{\Delta\|c\|_2^2}\,,\end{aligned}$$ where $\Delta$ is the gap of $K+\mathcal{E}$. The idea is the same as the proof of \[thm:bound\], except we transform $M\to M+\mathcal{E}{^\text{T}}K+K{^\text{T}}\mathcal{E}+\mathcal{E}{^\text{T}}\mathcal{E}$ on the left hand side of \[eq:cMc-ineq\]. Then using the triangle inequality and the elementary estimates $$\begin{aligned} |c{^\text{T}}\mathcal{E}{^\text{T}}Kc| \le m\delta\epsilon\|c\|_1\ \, \text{and}\ \, |c{^\text{T}}\mathcal{E}{^\text{T}}\mathcal{E}c|\le m\epsilon^2\|c\|_1^2\,,\end{aligned}$$ the result follows.

Pseudocode for the main algorithms {#appendix:algorithms}
==================================

The first algorithm (\[alg:efficient-pauli-expectation\]) presented in this appendix allows for the fast estimation of $k$-body Pauli expectation values. Given a list of required expectation values $C_{1,\dots,m}$ and an input state $\rho$, it will return an estimate of $\operatorname{Tr}\bigl(C_i \rho\bigr)$ for each of these values. The second algorithm (\[alg:tim-averaged-states\]) leverages the first to show how to measure the relevant expectation values for a time-averaged state.
Both of these algorithms deliver the performance guarantees set out in \[appendix:approximate-state-preps\] and \[appendix:sample-complexity\]. The pseudocode is shown in \[tab:pseudocode\].

\[alg:efficient-pauli-expectation\] <span style="font-variant:small-caps;">Pauli</span>: estimate $k$-body Pauli expectation values.
Input: $\rho$ (an $n$-qubit quantum state), $C$ (a list of $m$ Pauli operators $\{C_1,\dots,C_{m}\}$), $L$ (number of shots).
1. Initialise vectors $M$ and $P$, each with $L$ entries.
2. For $l = 1,\dots,L$: set $P_l\gets$ a random full-weight Pauli string, and $M_l\gets$ a length-$n$ array of $\pm 1$ measurement results for $\rho$ measured in $P_l$.
3. Initialise $E$ as a vector with $m$ entries, each set to 0.
4. For $k = 1,\dots,m$: for every $l$ such that $P_l$ supports $C_k$, update $E_k\gets E_k + \prod_{j \in \text{supp}(C_k)} M_{l,j}$; then set $j\gets$ the number of entries in $P$ that support $C_k$ and $E_k\gets E_k/j$.
Output: $E$, which approximates $\operatorname{Tr}\bigl(C_i \rho\bigr)$ for all $i$.

\[alg:tim-averaged-states\]: estimate expectation values on a time-averaged state.
Input: $\rho(0)$, $C$ (a list of $m$ Pauli operators $\{C_1,\dots,C_{m}\}$), $s$, $t$, $L$.
1. Compute the quadrature nodes $u \gets \bigl\{u_k: \mathcal{P}_s(2u_k-1)=0\,|\,k=1:s\bigr\}$ \[eq:u\], the weights $w \gets \bigl\{w_k = \tfrac{4u_k(1-u_k)}{(s+1)^2 \mathcal{P}_{s+1}(2u_k-1)^2}\,|\,k=1:s\bigr\}$ \[eq:weights\], and the times $\tau \gets ut$.
2. Initialise a vector $M$ with $s$ entries and set $\rho \gets \rho(0)$.
3. For $k = 1,\dots,s$: time evolve $\rho \gets \rho(\tau_k) =\mathrm{e}^{-iH\tau_k}\rho\,\mathrm{e}^{iH\tau_k}$ and set $M_k\gets$ <span style="font-variant:small-caps;">Pauli</span>$\left(\rho,C,L\right)$, so that $M_k$ contains estimates of $\operatorname{Tr}\bigl(C_i\rho(\tau_k)\bigr)$ for all $i$.
4. Initialise $E$ as a vector with $m$ entries and set $E_i \gets \sum_{k=1}^{s} w_k M_{k,i}$ \[eq:rhoav-s\].
Output: $E$, which approximates $\operatorname{Tr}\bigl(C_i {\bar{\rho}}(t)\bigr)$ for all $i$.
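The bookkeeping in the first algorithm can be simulated classically on a product state, where every Pauli expectation value is known in closed form; the state, observables, and shot count below are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, L = 4, 20000

# Product state: qubit j has Bloch vector bloch[j] = (<X_j>, <Y_j>, <Z_j>).
bloch = rng.uniform(-0.5, 0.5, size=(n, 3))

# Observables: Pauli strings encoded as {qubit: axis}, axis 0=X, 1=Y, 2=Z.
C = [{0: 2}, {1: 0, 2: 0}, {0: 1, 3: 2}]

def true_value(obs):
    # For a product state, <P> factorises over the support of P.
    return float(np.prod([bloch[q, a] for q, a in obs.items()]))

# Each shot measures a random full-weight Pauli string; for a product state
# the per-qubit outcomes are independent with P(+1) = (1 + <W_j>)/2.
P = rng.integers(0, 3, size=(L, n))
p_up = (1 + bloch[np.arange(n)[None, :], P]) / 2
M = np.where(rng.random((L, n)) < p_up, 1, -1)

# Average products of outcomes over the shots that measured each marginal.
E = []
for obs in C:
    qs = np.array(list(obs.keys()))
    axes = np.array(list(obs.values()))
    hit = np.all(P[:, qs] == axes, axis=1)      # shots supporting this C_k
    E.append(M[np.ix_(hit, qs)].prod(axis=1).mean())
```

Each weight-$w$ observable is supported by roughly a $3^{-w}$ fraction of the $L$ shots, matching the $p = 1/3^k$ marginal rate used in the sample-complexity analysis below.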
Sample complexity {#appendix:sample-complexity} ================= If the Hamiltonian $H$ is supported on a basis of Pauli operators $\{P_j\}_{j=1}^m$, where each of the $m$ operators has weight at most $k$, then with probability at least $1-\delta$ we can measure all of the relevant elements of the matrix $K$ to precision $\epsilon$ using a small number of state preparations. Using \[thm:random\] below, we require at most $\tfrac{2}{\epsilon^2(1-\epsilon)} 3^{2k-1} \log\bigl(\tfrac{3m'}{\delta}\bigr)$ state preparations. Here $m' \le m(m-1)/2$ is the number of nonvanishing commutators in the set $\bigl\{[P_j,P_k]\bigr\}_{j,k=1}^m$, and the weight satisfies $\mathrm{wt}\bigl([P_j,P_k]\bigr)\le 2k-1$ for $k$-body Hamiltonians. A slightly weaker result using $\epsilon^{-2}\mathrm{e}^{O(k)} \log^2 m$ total measurements can be obtained by using the recent work of Ref. [@cotler2019]. Here we give a simple randomized algorithm that avoids using perfect hash families and has an improved scaling with $m$. \[thm:random\] Consider a set of $m$ Pauli operators on $n$ qubits $\{P_i\}_{i=1}^m$, each with weight $\le k$, and let $\epsilon \in (0,1)$. Then with probability at least $1-\delta$, using $$\begin{aligned} N = \frac{2}{\epsilon^2(1-\epsilon)}3^{k}\log(3m/\delta)\end{aligned}$$ copies of $\rho$ suffices to estimate $\operatorname{Tr}(P_i \rho)$ to within $\pm\epsilon$ for all $i$. For each of $N$ state preparations we measure an independent, random, full-weight string of Pauli operators. For example, for three qubits, we might measure $XYZ$, then $YZY$, etc., with an independent choice of Pauli on each qubit for each copy. A measurement of any given weight-$k$ Pauli operator will occur as a marginal measurement in a $p=\frac{1}{3^k}$ fraction of strings, at least in expectation. Let $S$ be the number of times that a given correlator appears in the list of $N$ strings.
Then by the Chernoff bound, the probability of $S < T = (1-\epsilon)Np$ for some $\epsilon > 0$ is bounded from above by $$\begin{aligned} \Pr\bigl(S < T\bigr) \le \exp\bigl[-\epsilon^2 N p/2\bigr] < \exp\bigl[-\epsilon^2 T/2\bigr]\,.\end{aligned}$$ Assuming that each of the $m$ correlators of interest appears in at least $T$ strings, we can average the results of those $T$ (or more) measurements to get estimates of the expectation values. Again by the Chernoff bound, the probability that the sample mean $\hat{P}$ averaged over $T$ independent trials deviates from the true expected value $\langle P\rangle$ by at least $\epsilon$ is bounded by $$\begin{aligned} \Pr\bigl(|\hat{P}-\langle P\rangle| \ge \epsilon\bigr) \le 2 \exp\bigl[-\epsilon^2 T/2\bigr]\,.\end{aligned}$$ Then by the union bound, the probability that any of the estimates $\hat{P}_i$ is further than $\epsilon$ from its mean is bounded by $$\begin{aligned} \Pr\Bigl(\max_i \bigl|\hat{P}_i-\langle P_i\rangle\bigr| \ge \epsilon\Bigr) < 3 m \exp\bigl[-\epsilon^2 T/2\bigr]\,.\end{aligned}$$ Therefore, to ensure that the total probability of failure is less than $\delta$, it suffices to choose $T = 2\epsilon^{-2} \log(3m/\delta)$. We can now bound the entire sample complexity of the protocol to get a high-fidelity kernel estimate. It is an open question how to extend this to a bound on the Bayesian MAP estimator that we develop here, but as our numerics suggest, the sample complexity of the MAP estimator should generally be better. Our main result is the following.
\[thm:main\] Using \[alg:tim-averaged-states\] to construct the approximate kernel estimate $\hat{c}$, and given an *a priori* upper bound $|c_i|=O(1)$, then with probability at least $1-\delta$ using $$\begin{aligned} N = O\left(\frac{m^3 3^{2k}}{\varepsilon^{3/2} \Delta^{3/2}} \sqrt{\log\Bigl(\tfrac{m^3 }{\delta\sqrt{\varepsilon\Delta} }\Bigr)}\right)\end{aligned}$$ samples suffices to achieve infidelity $1-F(c,\hat{c}) \le \varepsilon$ for a $k$-body Hamiltonian $H$ with support on a set of $m$ given Pauli operators, where $m = \dim(c)$ and $\Delta$ is the spectral gap of the noisy $K$ matrix. We assume an initial state preparation that is close to a steady state as follows: $$\begin{aligned} \|\rho(0)-\rho\|_1 \le \eta\,.\end{aligned}$$ By \[lemma:preparation-error\], the time-averaged state ${\bar{\rho}}(t)$ is then a $(2\eta/t)$-approximate steady state. We can approximate the matrix elements of $K$ on this state by repeated sampling via \[thm:random\] and by using the quadrature formulas from \[thm:quadrature\]. For each of the sample points $t_i$, $i=1,\ldots,s$, we can estimate each of the $m'\le m(m-1)/2$ nontrivial matrix elements of $K$ using $$\begin{aligned} \label{eq:Ntotal} N = \frac{2 s}{\epsilon^2(1-\epsilon)}3^{2k-1} \log\biggl(\frac{3m's}{\delta}\biggr) \end{aligned}$$ total copies, and the results are guaranteed with probability $1-\delta$ to be within $\pm\epsilon$. Now let $$\begin{aligned} \mu := \frac{\sqrt{\pi}}{4\sqrt{s}} \biggl(\frac{\mathrm{e}ht}{4s}\biggr)^{2s}\,,\end{aligned}$$ where $h\ge \|c\|_1 \ge \|H\|$ is any *a priori* upper bound on the 1-norm of the unknown vector $c$ and, by the triangle inequality, the norm of $H$.
Since the weights $w_i$ in \[eq:weights\] satisfy $\sum_i w_i = 1$, the error $\mathcal{E}_{jk}$ of each matrix element of $K$ is bounded by $$\begin{aligned} |\mathcal{E}_{jk}|\le \mu + \epsilon\,,\end{aligned}$$ where the $\mu$ contribution comes from quadrature error and the $\epsilon$ comes from sampling. Under these conditions, by \[cor:KE\] the approximate kernel estimator returns an estimate $\hat{c}$ whose infidelity is bounded by $$\begin{aligned} 1-F(c,\hat{c}) \le \frac{m h^2}{\Delta\|c\|_2^2}\biggl(\frac{2\eta}{h t}+\mu+\epsilon\biggr)^2\,,\end{aligned}$$ where $\Delta$ is the gap of $K+\mathcal{E}$. This expression hides some of the dependence on $t$ and $s$, but we can optimize the choice of $t$ to minimize the error for fixed values of $h, \eta, s$, and $\epsilon$. The optimal time $t^\star$ while holding the other parameters constant can be calculated as $$\begin{aligned} h t^\star = 4s \biggl(\frac{s^{-\frac{3}{2}}\eta}{\mathrm{e}^{2s}\sqrt{\pi}}\biggr)^{\frac{1}{1+2s}} = \Theta(s)\,.\label{eq:optimalT}\end{aligned}$$ Plugging in this value, we find $$\begin{aligned} \frac{2 \eta }{h t}+\mu\bigg\rvert_{t=t^\star} &= (\sqrt{\pi } e^{2 s} s^{3/2} \eta ^{2 s})^{\frac{1}{1+2s}} \frac{2 s+1}{4 s^2}\nonumber\\ & < \frac{7}{2s}\,,\end{aligned}$$ where the latter bound uses $\eta \le 2$. Now using $\|c\|_1 \le \sqrt{m}\|c\|_2$, we can substitute this upper bound for $h$ in the scaling to cancel the dependence on $\|c\|_2$, and we have $$\begin{aligned} \label{eq:F-scaling-second} 1-F(c,\hat{c}) &\le \frac{m^2}{\Delta}\biggl(\frac{7}{2s}+\epsilon\biggr)^2\,.\end{aligned}$$ This uses at most $$\begin{aligned} L = \frac{2}{\epsilon^2 (1-\epsilon)} 3^{2k-1} \log\left(\frac{3m's}{\delta}\right)\end{aligned}$$ samples for each of the $s$ quadrature points. A slightly weaker bound than claimed comes from choosing $\epsilon = 7/2s$ and $s = \left\lceil\frac{7 m}{\sqrt{\Delta}\sqrt{\varepsilon}}\right\rceil$.
Then we find that $1-F(c,\hat{c}) \le \varepsilon$ using at most $$\begin{aligned} L = O\left(\frac{m^2 3^{2k}}{\varepsilon \Delta} \log\left(\frac{m^3}{\delta\sqrt{\varepsilon\Delta}}\right)\right)\end{aligned}$$ measurements per quadrature point, where we use $m' < m^2/2$. The total number of measurements is then $$\begin{aligned} N = O\left(\frac{m^3 3^{2k}}{\varepsilon^{3/2} \Delta^{3/2}} \log\left(\frac{m^3}{\delta\sqrt{\varepsilon\Delta}}\right)\right)\,.\end{aligned}$$ The square-root improvement in the logarithmic term is achieved by additionally optimizing the tradeoff between $\epsilon$ and $s$. Let $\alpha = O(m^2\delta^{-1})$ and $L' = 3^{-k}\sqrt{L}$; then we can choose $s =O\bigl(L'\sqrt{\log(L'\alpha)}\bigr)$ and $\epsilon = \sqrt{\log(\alpha s)}/L'$. Then choosing a number of samples per quadrature point of $L=3^{2k}m^2 \varepsilon^{-1}\Delta^{-1}\log(\alpha^2\varepsilon^{-1}\Delta^{-1})$ gives the stated result. Finally, we state a corollary of \[thm:main\] for the case of $k$-body or $k$-local Hamiltonians. Under the same conditions as \[thm:main\], the sample complexity for \[alg:tim-averaged-states\] to obtain $1-F(c,\hat{c}) \le \varepsilon$ is $$\begin{aligned} N_{\text{$k$-body}} = O\biggl(\frac{n^{3k}}{\varepsilon^{3/2} \Delta^{3/2}} \sqrt{\log\Bigl(\tfrac{n^{3k}}{\delta\sqrt{\varepsilon\Delta} }\Bigr)}\biggr)\end{aligned}$$ for general $k$-body Hamiltonians and $$\begin{aligned} N_{\text{$k$-local},D} = O\biggl(\frac{n^3 k^{3D}3^{2k}}{\varepsilon^{3/2} \Delta^{3/2}} \sqrt{\log\Bigl(\tfrac{n^3 k^{3D}}{\delta\sqrt{\varepsilon\Delta} }\Bigr)}\biggr)\end{aligned}$$ for $k$-local Hamiltonians in $D$ spatial dimensions. We simply observe that a general $k$-body Hamiltonian is supported on at most $m = O(n^k)$ terms, and a $k$-local Hamiltonian in $D$ spatial dimensions has $m = O(n k^D)$ terms.
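To close the loop on these bounds, the kernel-based recovery that they support can be checked end to end on a toy example. The sketch below uses a single qubit with made-up couplings: for a state $\rho$ that exactly commutes with $H$, the vector $c(H)$ is an exact null vector of $K_{jk} = i\operatorname{Tr}([P_j,P_k]\rho)$, so the least right singular vector recovers it up to sign and normalization.

```python
import numpy as np

# Single-qubit Pauli basis {X, Y, Z}.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0 + 0j, -1.0])
paulis = [X, Y, Z]

c_true = np.array([0.3, -0.7, 0.5])          # hidden couplings (illustrative)
H = sum(c * P for c, P in zip(c_true, paulis))

# A thermal state of H is an exact steady state: [H, rho] = 0.
evals, evecs = np.linalg.eigh(H)
rho = evecs @ np.diag(np.exp(-evals)) @ evecs.conj().T
rho /= np.trace(rho)

# K_{jk} = i Tr([P_j, P_k] rho); real for Hermitian P_j, P_k, rho.
K = np.array([[(1j * np.trace((Pj @ Pk - Pk @ Pj) @ rho)).real
               for Pk in paulis] for Pj in paulis])

c_hat = np.linalg.svd(K)[2][-1]              # least right singular vector
overlap = abs(c_hat @ c_true) / np.linalg.norm(c_true)   # ~1 on success
```

For a noisy approximate steady state, $Kc$ is only approximately zero, and the overlap degrades exactly as the infidelity bounds above predict.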
--- abstract: 'Legacy Electronic Health Records (EHRs) systems were not developed with the level of connectivity expected from them nowadays. Therefore, interoperability weaknesses inherent in the legacy systems can result in poor patient care and waste of financial resources. Large hospitals are less likely to share their data with external hospitals for economic and political reasons. Motivated by these facts, we aim to provide a set of software implementation guidelines, i.e., *MedShare*, to deal with interoperability issues among disconnected healthcare systems. The proposed integrated architecture includes: 1) a data extractor to fetch legacy medical data from a hemodialysis center, 2) a converter that maps the data to a common data model, 3) patient-information indexing using the HashMap technique, and 4) a set of services and tools that can be installed as a coherent environment on top of stand-alone EHRs systems. Our work enabled three cooperating but autonomous hospitals to mutually exchange medical data and helped them develop a common reference architecture. It lets stakeholders retain control over their patient data, winning the trust and confidence much needed for a successful deployment of *MedShare*. Security concerns were effectively addressed, including patient consent in the data exchange process. Thereby, the implemented toolset offered a collaborative environment for healthcare providers to share EHRs.'
author: - 'Yilong Yang,   Xiaoshan Li, Nafees Qamar, Wei Ke, and Zhiming Liu[^1][^2][^3][^4]' bibliography: - 'IEEEabrv.bib' title: | *MedShare:* Medical Resource Sharing\ among Autonomous Healthcare Providers --- Electronic Health Record, EHR, Privacy Preserving, EHR Sharing, Medical Resource Introduction ============ EHRs systems have been mostly designed and implemented to meet the internal clinical needs of healthcare providers; they have become obsolete and no longer meet the external needs of patients and local governments. Consequently, this impedes the way to improved patient care in a networked healthcare setting, also resulting in increased costs and clinical negligence. Future health information systems aim at integration, interoperability, innovation, and intelligence [@Greenes2016][@Murua2018] for resource sharing. The exchange of medical information has paved the way for medical standards [@Garde2007][@Bakken2000][@adams2017analysis] providing a unified approach to medical vocabulary and information exchange, but none of them has come of age to be used smoothly. For example, a study [@doi:10.1093/jamia/ocv103] finds weak evidence of the effect of the ‘meaningful use program (MU)’, initiated by the 2009 Health Information Technology for Economic and Clinical Health (HITECH) Act, on EHRs uptake due to data interoperability challenges. The study [@TURAN201457] presents the top ten technical issues in healthcare, which include privacy, quantity, security, and the implementation of electronic medical records. Moreover, political and economic issues, as well as contingent factors of healthcare providers [@NGUYEN2014779], should be taken into account in the development of medical information sharing. Large medical care providers seem reluctant to share their patient-cum-customer data with other healthcare providers [@miller2014health].
They exchange patient information internally and are less likely to cooperate outside their network [@RePEc:eee:jhecon:v:33:y:2014:i:c:p:28-42]. In such a scenario, the design and development of an interoperable health information exchange system is a non-trivial task. This is not only because of complex workflows involving data acquisition, storage, communication, and manipulation, but also because of the lack of a coordinated effort to connect autonomous healthcare providers. In an ideal scenario, healthcare networks are expected to support: (a) direct data exchange, (b) query-based exchange of patient-related information in emergency situations, including medication history, radiology reports, and records of a diseased person hospitalized for emergency care, and (c) personalized patient data management by patients themselves, like online banking. Architecting and implementing such an interoperable system, meeting the aforementioned requirements, needs a comprehensive and multifaceted approach catering to both technical and non-technical issues. Motivated by this, the current research focuses on connecting disintegrated healthcare providers in Macau SAR, namely three major hospitals: Hospital Conde S. Januário (HC), Kiang Wu Hospital (KW), and Macau University of Science and Technology Hospital (UH). However, the theme of our work has wider implications and scope for building health information exchange systems that confront the same challenges. The autonomous EHRs systems under consideration were neither developed using special instructions or standards at the time of their birth, nor were the concerned authorities ready to update their legacy systems, because the three hemodialysis centers had fully functional and independent electronic health records in place. Among the three collaborating hospitals in this research, two are private healthcare providers while the third one is public.
![image](completearch.pdf){width="80.00000%"}\ In the given healthcare setting, distributed information sharing is mandatory for effective patient care and monitoring, where patients often switch healthcare providers for numerous reasons. *MedShare* is a simple yet robust EHRs system that allows the isolated hemodialysis centers in this scenario to exchange medical resources for improved patient care. The types of data shared in *MedShare* include lab reports, radiology images, transcription reports, and medication histories. *MedShare* works in three steps: 1) it uses a data extractor to extract the legacy data of a patient located at a hemodialysis center, 2) it converts the data to a unified data format agreed upon by all the stakeholders and medical providers belonging to the three hospitals, and 3) the platform indexes the patient information using the HashMap technique. Our approach integrates a set of services and tools that can be installed as a coherent environment on top of standalone EHRs. As discussed in [@HYPPONEN20141], the Operational Data Model (ODM) lacks explicit support for modern exchange mechanisms; our authentication mechanism is based on RESTful web services, and our previous work employs the same techniques to exchange medical information [@qamar2016querying]. The *MedShare* EHRs sharing system, as depicted in Fig. \[ucdarch\], handles situations such as the following: **Example**: A doctor can request all the hemodialysis records of a patient. The EHR sharing system returns a date-wise list of all the hemodialysis records of the queried patient. Furthermore, the doctor can access the details of the EHRs on a specific date, e.g., Sep 30, 2015. *MedShare* allows an administrator to track potential leaks in the system. For example, when the tracker needs to know the access information for the EHR with ID 0221, the tracking system shows all the relevant results.
Hence, *MedShare* not only facilitates distributed patient care but also allows the Macau hospitals to share tasks of the hemodialysis EHRs, if required, without compromising patient privacy. We identify the data exchange scenarios, capture the intent behind them, and identify the collaborating entities. These and other system goals are achieved by system components such as authentication, EHR query, synchronization, and audit. **Contributions:** This paper offers three substantive contributions. 1) We set up a reference architecture for a diverse set of healthcare providers to connect and exchange medical information of their patients. 2) The implemented approach is reusable; the source code of the system is uploaded on GitHub[^5] and is freely available to download. 3) The implementation sets forth technological guidelines for designing and implementing health information exchange systems. For brevity, the remainder of the paper is organized as follows: Section \[sec:Relatedwork\] reviews the related literature. Section \[sec:SharingPattern\] presents data exchange scenarios from the hemodialysis centers in Macau. Section \[sec:architecure\] proposes *MedShare*, a medical data resource sharing architecture. Section \[sec:Prototype\] shows system prototyping and demonstration. Section \[sec:Conclusion\] concludes this paper and outlines our future work. A Literature Review {#sec:Relatedwork} =================== Legacy EHR systems were not developed with a certain level of interoperability in mind. Therefore, the disconnectedness inherent in these systems can result in their inability to exchange medical resources. Conversely, numerous benefits can be achieved by connecting legacy EHRs systems. The authors of [@FEATHERSTONE201245] demonstrate suggestive evidence that a shared electronic health record can support more integrated care.
More evidence comes through quantitative analyses of the actual contribution of shared EHR systems and is discussed in a large case study conducted in Austria [@RINNER201644]. But large-scale adoption of such systems is impractical without addressing privacy and security concerns [@REZAEIBAGHA201625]. On the other hand, larger hospital systems generally exchange electronic patient information internally, not with other external hospitals [@miller2014health]. That work also reveals that larger hospital systems tend to create ‘information silos’, i.e., data systems incapable of reciprocal operations with other hospital systems. The reason is that if larger hospitals allow an outflow of data, they are more likely to lose patients. In such a situation, the adoption of open standards for interoperable hospital systems is still far from practice. This necessitates engaging health informatics researchers and users for a better interconnection among different hospitals. Another study shows that inter-organizational data exchange is one of the most important information system challenges [@HYPPONEN20141]; it reports on user experiences with different regional health information exchange systems in Finland. A recent work [@SINACI2013784] combines metadata registries and semantic web technologies to uniquely reference, query, and process a Common Data Element (CDE) to enable syntactic and semantic interoperability. However, this research is limited to the interoperability of medical vocabulary. The survey covers 43 techniques addressing interoperability issues. Another study [@doi:10.1136/amiajnl-2012-000855] provides the interesting findings that 40.7% (n=1465) of the predefined headings applied in the multi-professional EHR system were shared by two or more professional groups, and only 1.7% (n=62) of the predefined headings were shared by all eight groups.
The study [@SINACI2013784] creates the Portal of Medical Data Models[^6] to foster the sharing of medical data models. This is achieved by a web front-end that enables users to search, view, download, and discuss data models. Some other related work can be found in  [@huang2012hierarchical][@sun2011][@Garde2007][@Sheth2014][@Kotz2015][@Bakken2000]. Numerous factors, e.g., scalability, heterogeneity, resource management, transparency, openness, performance analysis, and synchronization, contribute to the development of a dependable EHR system; nonetheless, security may be considered the core system property. For medical resource sharing across organizations or domains, one study considers cross-domain authentication and fine-grained access control [@5196665]. This study discusses an on-demand revocation if either of the two cooperating organizations is unwilling to share data anymore. This may be seen as adding flexibility to the notion of security and privacy in a networked healthcare setting. Another approach [@Reicher2016] uses direct messaging, a secure e-mail-like protocol employed to allow the exchange of encrypted health information online. The paper [@doi:10.1093/jamia/ocv038] provides a tool that supports privacy-preserving linkage of electronic health records (EHRs) data across multiple sites in a large metropolitan area in the United States (Chicago, IL). Another research work [@Kwon2015] discusses the possibility of attacks on healthcare systems. There are two closely related works. The first is the eMOLST project [@laszewski2011emolst], officially supported by the New York State Department of Health (NYS DOH) providers, which handles data interoperability through: a) authenticating access to a shared medical resource by applying the Single Sign-On (SSO) technique, and b) employing a patient identity source system to assign a unique identifier to a patient, which requires extra work to maintain a set of attributes associated with the patient.
In contrast, our system computes the hash code of the patient identity card number, uniquely representing each patient in the EHR sharing system. eMOLST requires a new system portal to be deployed to access the EHRs, while our system is designed to work with EHR legacy systems. Our patient indexing component lets hospitals keep the data themselves, whereas eMOLST needs to push the data to a centralized repository. The second work [@fragidis2016integrated] proposed NEHR, a semi-distributed architecture offering EHR sharing in Greece. Every sharing request requires the authentication of the patient in NEHR. In contrast, our MedShare architecture provides two-way authentication, requiring authentication not only of the patient but also by the data providers. Moreover, our locator service uses a de-identified HashMap to locate the resource, which reduces the risk of a privacy breach. Regarding privacy requirements, a survey [@Sheth2014] across North America, Asia, and Europe shows that data sharing and data breaches are the biggest concerns for users. Moreover, there is a comparison [@Liu2013] of the effectiveness of the methods used for anonymizing quasi-identifiers to protect sensitive information, and this quantitative analysis of de-identification shows that de-identifying data provides no guarantee of anonymity. The study [@Angiuli2015] elaborates on de-identified patient data, and  [@Qamar2013] argues for explicit validation of healthcare security policies. [@Hydari2015] relates electronic health records to patient safety. Some other works [@DBLP:journals/titb/KilicDE10][@Lee2014] discuss Peer-to-Peer (P2P) networks to support medical data sharing. A research work [@7457335] studies the data exchange between patients and healthcare facilities. It further investigates real-time data synchronization issues. Our work also addresses synchronization challenges, but no real-time data synchronization is needed in our case.
An Overview of the Hemodialysis Centers in Macau {#sec:SharingPattern} ================================================ The hemodialysis centers in Macau serve a large population, but they are disconnected and cannot share the medical records of their patients. A patient generally sees a doctor in a hospital of her choice and is prescribed a hemodialysis treatment plan at specified dates. If a patient suddenly decides to change her hemodialysis center, the exchange of patient information between hemodialysis centers becomes a bottleneck for the smooth delivery of medical services. The Macau citizens have confidence in public health systems and prefer to see a doctor in HC. Consequently, the initial diagnosis records and treatment plans are produced and stored in HC. Nonetheless, a patient may opt to go to another hospital, say UH, for treatment due to the unavailability of resources or the place where they live. The hemodialysis centers have no sharing platform in place. Therefore, patients must carry paper-based medical data, along with any other electronic data copies on CDs. It is noteworthy that patient privacy is well preserved with respect to the security of the EHR system in HC, as a non-disclosure data agreement exists between KW and UH. ![Use Case Diagram of the networked EHR System[]{data-label="usecasedia"}](usecase.png "fig:"){width="1\columnwidth"}\ The data-sharing problem leads to developing a hemodialysis network that should address the following functional and non-functional requirements. We use the Unified Modeling Language[^7] (UML) to give a flavor of the EHR system requirements in Fig. \[usecasedia\]. Some of the main functional requirements are listed below: - The use case of seeing a doctor describes the procedure in which a patient visits the doctor in a hospital, and the doctor requests the related shared EHRs of the patient from other hospitals, if any.
- A doctor is authenticated and authorized to access a local medical record. - A doctor may access medical records held at another hospital through the same authentication service in her working hospital. - The patient provides her consent and authorizes the doctor to access her medical records. This guarantees that in an EHR sharing session, the patient's authorization is recorded. - The scheduler updates the local patient data in a unified format and updates it at the indexing server. These shared records should be regularly synchronized but are not required to be updated in real time. Note that a hemodialysis patient usually takes her next treatment after a specified time. The Workflow for Resource Sharing --------------------------------- The high-level EHR sharing workflow is as follows: a patient sees a doctor in an arbitrary hospital $H_1$ among HC, KW, and UH. The EHRs are generated and stored in the respective legacy EHR systems of $H_1$. A scheduler regularly triggers the synchronization of the extracted shared EHRs from the legacy EHR system and updates the corresponding indexes in the patient indexing server. Only then can the patient see a doctor in another hospital $H_2$ with access to the shared EHRs. At the time of requesting old EHRs of the patient, the doctor must be authorized by both the current hospital $H_2$ and the patient. ![Part (a) of Figure \[umlnotation\] shows the self-explanatory UML component diagram and names its elements. Part (b) shows the elements of the sequence diagram presenting the notion of actors and calling functions.[]{data-label="umlnotation"}](umlnotation){width="0.8\columnwidth"} ![image](SequenceDiagram.png){width="95.00000%"}\ To understand the graphical notation used in the paper, unfamiliar readers are referred to the UML specification. However, for brevity, we provide the names and functions of the used notation in Fig. \[umlnotation\]. Fig.
\[seqdia\] presents the detailed system usage scenario of the communication taking place between the actors and the EHR sharing system. We use the standard UML sequence diagram, which graphically depicts how the system can be interacted with. Considering the proposed architecture, Fig. \[seqdia\] describes the detailed activities behind the system architecture. A doctor of HC is authorized through the service in her working hospital (please refer to *Steps 1, 2, 3, 4* in the diagram), and then the doctor runs the EHR queries on a patient's data (*Step 5*). The patient then authorizes this request by scanning her ID card (*Steps 6, 7, 8, 9, 10*). This two-way authentication by the patient and the hospital satisfies the required privacy requirements. The request can then be sent to the data center (*Step 11*). If the patient data is distributed over multiple locations (e.g., KW and UH), the relevant indexes are retrieved by the query (*Step 13*). Afterwards, the transmission requests are sent out to those hospitals (*Step 14*). Once the data transfer (*Step 16*) is completed in the EHR sharing client of HC, the requested EHRs are displayed to the doctor (*Step 17*). Moreover, the transactions are recorded in the log database for post-event analysis (*Steps 12, 15*). This is important in case of a privacy and security breach. If the patient has EHRs in more than one hospital, the operations (*Steps 14, 15, 16*) are run in parallel for each of the remote hospitals. Data Format Inconsistencies --------------------------- Since the studied legacy EHRs systems were autonomously designed and implemented, a number of database inconsistencies surfaced at the time of implementation. The terminologies used to represent the EHR data were not based on any standard or common data format, which needed to be resolved first.
TABLE \[table1\] provides an example of database entries from the three hospitals that represent the same meanings but carry different names. The rightmost column presents the unified format agreed upon by the concerned authorities.

  Attribute Name       HC Format     KW Format        UH Format   Unified Format
  -------------------- ------------- ---------------- ----------- ----------------
  Patient identity     `card_id`     `identitiy_id`   `id`        `patient_id`
  EHR identity         `record_id`   `id_ehr`         `eid`       `ehr_id`
  Patient name         `p_name`      `patient_n`      `pname`     `patient_name`
  Name of the doctor   `d_name`      `dotctor_n`      `dname`     `doctor_name`

  : An example of unified data format \[table1\]

The unified EHR data format can significantly reduce the number of data inconsistencies between different EHR formats. Otherwise, each hospital requires a targeted data conversion for each corresponding hospital: EHRs sharing without a unified format requires bidirectional data conversion between every pair of autonomous health care providers, and the number of conversions is given by the formula $n(n-1)$ if there are $n$ hospitals. In our unified EHR sharing scenario, each hospital only needs to conform to a single negotiated data format, so only $n$ conversions are needed. Although we currently have only three hospitals in the Macau EHRs sharing case study, the network may well grow in the near future, and other health providers and research institutes may take part in the data sharing process. In the future, the unified data format will ease the merger of a new healthcare provider into *MedShare*. Note that the unified data sharing format and the negotiation process were directly handled by the administration of the hospitals. The HL7 format [@Bakken2000], OpenEHR [@Garde2007] standards, and other semantic models of EHRs [@doi:10.1093/jamia/ocv008][@LEGAZGARCIA2016175] were not under consideration during the negotiation process.
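The per-hospital conversion into the unified format amounts to a simple field renaming, which can be sketched as follows; the mapping dictionaries mirror TABLE \[table1\], while the function name and sample record values are illustrative, not taken from the *MedShare* source code.

```python
# One mapping per hospital into the negotiated unified format:
# n hospitals need only n mappings, instead of n(n-1) pairwise converters.
FIELD_MAP = {
    "HC": {"card_id": "patient_id", "record_id": "ehr_id",
           "p_name": "patient_name", "d_name": "doctor_name"},
    "KW": {"identitiy_id": "patient_id", "id_ehr": "ehr_id",
           "patient_n": "patient_name", "dotctor_n": "doctor_name"},
    "UH": {"id": "patient_id", "eid": "ehr_id",
           "pname": "patient_name", "dname": "doctor_name"},
}

def to_unified(hospital: str, record: dict) -> dict:
    """Rename a legacy record's fields into the unified format."""
    mapping = FIELD_MAP[hospital]
    return {mapping[k]: v for k, v in record.items() if k in mapping}

# Hypothetical legacy record from HC's schema:
legacy = {"card_id": "M123456", "record_id": "0221",
          "p_name": "Chan Tai Man", "d_name": "Dr. Ho"}
unified = to_unified("HC", legacy)
```

Adding a new healthcare provider then only requires registering one more entry in the mapping table.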
Our work in this research project was confined to filling the technological gaps.

Interoperable Architecture for Sharing Medical Resources {#sec:architecure}
========================================================

This section introduces the architectural aspects of the health information exchange system and elaborates on the technical details encountered in the development of the system. Our experience with developing a large system reveals that interoperability is not only a matter of enabling two autonomous systems to exchange data; other non-technical factors also play a vital role. In this regard, one of the challenges lies in mediating the situation when autonomous health care providers are not interested in sharing the data of their patients and show a complete lack of interest in transferring the data to their competitors. After presenting the architectural details, we will present a simple yet robust solution to this problem.

MedShare Architecture
---------------------

We employ the component diagram based on the standardized UML notation of Fig. \[umlnotation\] (a) to present the MedShare architecture. The architecture has two views: 1) External view: this represents the foundational block of the resource sharing approach that allows for linking legacy EHR systems into a collaborative sharing of their data. 2) Internal view: this describes the design of the core components of MedShare. **External View**: Figure \[external\] illustrates the external view of our system. Legacy EHR systems provide a *data conversion* service to convert shared EHRs from a legacy system to the distributed EHR system. In addition, the *authentication* service authorizes the doctor and the patient. By using both services from the local legacy systems, the unified EHR sharing system provides two services: 1) It allows running queries on *MedShare*.
2) The audit service handles the privacy requirements of the system and post-breach data analysis, which is not detailed in this paper. ![The External View of Distributed EHR Sharing Architecture[]{data-label="external"}](ExternalView "fig:"){width="\columnwidth"}\ **Internal View**: The internal view of the unified EHR sharing system in Fig. \[internal\] shows how the sub-systems collaborate to provide the required medical data querying mechanism across the different hospitals. The subsystems use the services provided by the index system in the data center to locate EHRs, and then use the *transfer EHRs* service in each subsystem to transfer all requested EHRs. For an explanation of what constitutes a service in the system, readers are directed to the implementation section of the paper. ![The Internal View of Distributed EHR Sharing Architecture[]{data-label="internal"}](internalView "fig:"){width="\columnwidth"}\ **Patient Indexing**: The patient indexing component stores all the references of the shared EHRs to facilitate data queries from the participating hospitals, i.e. requesters and providers. A requester poses a data location query to the patient index component without a direct connection to a peer hospital. This is represented as $\dashrightarrow$ with a *locate EHRs* label and shows the dependency between components in Fig. \[internal\]. The label *transfer EHRs* provides access to the real data. The indexing component stores only a unique reference for each shared EHR, without any physical data relocation taking place from the original source. This approach offers two main advantages: 1) the huge data synchronization burden is alleviated, and 2) cyber-security attacks and other threats from internal users are minimized. The *HashMap* technique is employed for patient indexing; it maps a patient to her EHRs together with their locations. However, we leave it to the healthcare providers to decide which segments of data are to be indexed.
Obviously, the references alone are not enough. We need to store in the indexing server some attributes of a shared EHR that are not privacy-sensitive, as tags, along with the reference to the EHRs. The indexing server is then able to respond to queries based on these tags. Typically, the tags should include the source location, the encoded patient number, the date and time, and the type of the EHRs. On the principle of facilitating queries while complying with privacy policies, the set of tags to be exposed to the indexing server may be pre-negotiated between the stakeholders. There are two main reasons not to use central storage for patient data: 1) if the data center stored all data, a hospital would have to push all the shared data into the data center before EHR sharing, and 2) the local data would have to be frequently synchronized with the indexing server, which would place a huge synchronization burden on the data center because of the enormous size of the data; for example, a single CT scan examination report may contain more than 1GB of data.

Data Query and Output Structure
-------------------------------

As mentioned above, a data query includes two steps: 1) locating an EHR, and 2) the data transfer procedure. An EHR is located by a query, followed by the output. We illustrate this with an example, which will be further detailed in the implementation section. Below, we provide the query attributes, which include the patient identity, a date range, the EHR type, and the hospitals to query.

**Input Parameters for Locating:**

  Hashed Patient ID   Date Range   EHR Type   Hospitals
  ------------------- ------------ ---------- -----------

  \[LocatingInput\]

**Output:**

  EHR ID   EHR Type   Date   Location
  -------- ---------- ------ ----------

  \[LocatingOutput\]

Note that the retrieved *ID* above is used to access a particular EHR resource through the *Transfer EHR service* as shown in Fig. \[internal\].
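To make the indexing scheme concrete, the following minimal Python sketch stores hashed patient identities together with non-sensitive tag records and answers a locating query shaped like the input parameters above. All names, fields, and values are hypothetical illustrations, not taken from the MedShare implementation:

```python
import hashlib

# Minimal sketch of the patient index: hashed patient IDs map to lists
# of non-sensitive tag records (no medical payload is stored here).
patient_index = {}

def hash_patient_id(patient_id):
    # Raw ID-card numbers never reach the index server.
    return hashlib.sha256(patient_id.encode("utf-8")).hexdigest()

def register_ehr(patient_id, ehr_id, ehr_type, date, location):
    entry = {"ehr_id": ehr_id, "ehr_type": ehr_type,
             "date": date, "location": location}
    patient_index.setdefault(hash_patient_id(patient_id), []).append(entry)

def locate_ehrs(hashed_id, date_from, date_to, ehr_type=None, hospitals=None):
    # Mirrors the locating input: hashed patient ID, date range,
    # EHR type, and the hospitals to search.
    hits = []
    for e in patient_index.get(hashed_id, []):
        if not date_from <= e["date"] <= date_to:
            continue
        if ehr_type is not None and e["ehr_type"] != ehr_type:
            continue
        if hospitals is not None and e["location"] not in hospitals:
            continue
        hits.append(e)
    return hits
```

A hit carries only the EHR ID, type, date, and owning location; the record itself is fetched afterwards from that hospital through the transfer service.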
**Input Parameters for Transferring:**

  EHR ID   EHR Type
  -------- ----------

  \[TransferringInput\]

The desired output is shown in a simplified way as follows. The output shown below is integrated into the graphical user interface of our toolset.

**Output:**

  EHR Data
  ----------

  \[TransferringOutput\]

Ensuring Patient Privacy against Cyber Attacks
----------------------------------------------

Our proposed technique introduces a two-way authentication process protecting the patient data from cyber-security attacks. A doctor logs into the system, upon which the patient is requested to scan her identity card. This two-way authentication is enforced to collect patient consent and protect critical medical resources from outside attacks. In the worst-case scenario where the patient indexing server is compromised, the hashed patient identities are highly likely to remain protected. The authentication process for doctors is implemented using role-based access control. Note that access to medical data records by a doctor requires laws and regulations to restrain her from any type of secondary usage of the data. Above all, all the operations in the resource sharing system are logged so that data breaches can be investigated and auditing services performed.

System Prototyping and Demonstration {#sec:Prototype}
====================================

We have put forth a technique that allows for sharing medical resources. In order to realize *MedShare*, we implement our approach in four layers.

MedShare Implementation Stack
-----------------------------

![Implementation Stack for the Medical Resource Sharing[]{data-label="techdetail"}](Techdetail "fig:"){width="1\columnwidth"}\

**Data Infrastructure Layer:** The data infrastructure, as shown in Fig. \[techdetail\], shows the data storage based on MongoDB [@Abramova2013], which is a NoSQL, i.e. non-relational, database.
To deal with the complexity of medical data, an adaptable format is required that easily facilitates data transformations across multiple sources. This approach overcomes the bottlenecks of traditional databases. Using MongoDB also supports the mutability and scalability of EHRs. **Technical Framework Layer:** All the components described in our presented architectural models are implemented with the lightweight Java EE framework Spring [@Gupta2010]. The required two-way authentication service in the legacy EHR system is implemented as a RESTful web service with NodeJS [@Tilkov2010] and JSON Web Tokens (JWT) [@Jones2014]. A RESTful service exposes resources through stateless HTTP requests that carry the query parameters. Contrary to JavaEE, NodeJS has the advantage of utilizing low resources to support high concurrency, which makes it suitable for scaling to industrial problems. JWT is a compact, URL-safe approach for representing claims between two communicating parties, and it provides the foundational authentication service to the RESTful web services. These two techniques guarantee the reliability and safety of the authentication process. **Discovery and Information Exchange Services Layer:** This layer has three Spring MVC services and two web services for authentication and synchronization. The *LocateService*, which is implemented using the Spring MVC framework, identifies the required EHR location from the patients indexed in the MongoDB data infrastructure. The *LocateService* locates the EHRs based on the search conditions and transmits them to the doctor. The *DataAccess* is technically similar to the *LocateService* but functions differently: it retrieves the relevant patient data from the identified source. The *AuthenticationService* provides the authorization service to the patient when the doctor requests a specific EHR. The authentication also requires a service that integrates the legacy EHR systems into the authentication process.
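The token-based part of this authentication can be sketched in miniature. The snippet below is a simplified, stdlib-only Python illustration of the JWT idea (an HMAC-SHA256-signed claims payload); the actual services use the NodeJS JWT implementation, and the secret and claim names here are hypothetical:

```python
import base64
import hashlib
import hmac
import json

SECRET = b"demo-secret-key"  # illustrative shared secret, not a real key

def issue_token(claims):
    # JWT-style structure in miniature:
    # base64url(payload) "." base64url(HMAC-SHA256 signature)
    payload = base64.urlsafe_b64encode(
        json.dumps(claims, sort_keys=True).encode("utf-8"))
    sig = base64.urlsafe_b64encode(
        hmac.new(SECRET, payload, hashlib.sha256).digest())
    return payload.decode() + "." + sig.decode()

def verify_token(token):
    payload, sig = token.split(".")
    expected = base64.urlsafe_b64encode(
        hmac.new(SECRET, payload.encode(), hashlib.sha256).digest()).decode()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("invalid signature")
    return json.loads(base64.urlsafe_b64decode(payload))
```

Any tampering with the claims or the signature makes verification fail, which is what lets the stateless services trust the doctor's role and hospital without a session store.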
The *SynchronizationService* periodically triggers the replication of the shared EHRs and updates the indexes in the patient indexing server. The *LogService* provides the log and tracking services to detect data breaches and trace irregularities. The authentication component is deployed in all the hospitals. The EHR query component is deployed in all the hospitals to provide the data transmission service, and in the patient indexing server to provide the locating service. The *SynchronizationService* is deployed in all the hospitals and the data center to replicate shared EHRs and update indexes. The *LogService* is deployed on all servers because logs are generated and stored in the patient indexing server and all the other hospitals. **Front-end Medical Resource Sharing Layer:** This layer combines all the described layers and directly utilizes the services available in the discovery and information exchange services layer. Through this front-end, the end user poses a query to the shared EHR resources and retrieves a list of resources for the targeted EHR. The *Audit* service holds the system users accountable for their actions in the system. More precisely, the doctor can distributively retrieve all the relevant records of the patient among all the hospitals while preserving patient privacy.

Evaluation
----------

We deploy our prototype system on four different servers named HC, KW, UH, and PI (Patient Indexing). The EHR data were retrieved from the hemodialysis center of Kiang Wu hospital, and extensive testing data were then generated for the other two hospitals. In our testing scenario, we have 10,000 hemodialysis records of 100 patients. After accessing the shared data of these 100 patients with different date ranges, *MedShare* worked as desired. Furthermore, all shared data accesses were successfully recorded by logging the data access requests and their providers. Let us assume that a doctor in HC hospital requests all the hemodialysis records of a patient named Yang Yingying.
This scenario is demonstrated in Fig. \[LocatingService\], which shows a list of all the hemodialysis records of Yang Yingying that may further be individually viewed by the doctor by clicking the *Details* link. ![A screenshot of the Query Execution Environment to Locate a Resource[]{data-label="LocatingService"}](LocatingService "fig:"){width="0.9\columnwidth"}\ From the retrieved list of medical records, as given in Fig. \[LocatingService\], a doctor can fetch the detailed record behind any displayed link; for example, the EHR corresponding to Sep 30, 2015, as shown in Fig. \[EHRFormat\]. The detailed output includes two types of information: 1) the patient information, and 2) her hemodialysis record. ![A detailed Hemodialysis Report[]{data-label="EHRFormat"}](EHRFormat "fig:"){width="0.9\columnwidth"}\ In order to be able to track the accessed data, *MedShare* allows the administrator to inspect the logs and investigate the specific operations performed by the users on a patient record. For example, when the tracker needs to know the access information for the EHR with ID 0221, the tracking system is able to show all the relevant results between two dates, as demonstrated in Figure \[audit\]. The prototype shows that the proposed architecture can deal with the sharing tasks of the hemodialysis EHRs among the Macau hospitals without compromising the privacy requirements. ![Auditing access to a medical resource[]{data-label="audit"}](audit "fig:"){width="0.9\columnwidth"}\ The distributed resource-sharing environment can be scaled up to include a significantly larger number of medical health providers; in that case, however, robust testing is recommended. Our resource sharing toolset lets its stakeholders retain their data, which is otherwise a primary concern for a participating stakeholder. *MedShare* provides a transparent platform by integrating legacy EHR systems that were developed using different implementation techniques.
To maintain the openness of the system, the participants chose their own interoperable data format through negotiations, which may however be replaced with open standards such as HL7 and openEHR. From another technical perspective, we also need an in-depth analysis of data storage strategies.

Conclusion and Future Perspectives {#sec:Conclusion}
==================================

We presented a set of implementation guidelines for exchanging medical resources among autonomous healthcare providers. We negotiated a common data structure that sets forth the first step towards interoperability among the three disconnected hospitals in Macau. Applying standardized data formats, such as HL7, was a daunting task because the patient data are stored bilingually, in both English and Chinese. *MedShare* ensured that participating healthcare providers have confidence in the system through their primary control over patient data. Our work endorses the fact that the exchange of medical information between independent hospitals is not only limited to technical issues; economic and political issues are equally important. Our experience with developing interoperable systems advocates a gradual replacement of legacy EHR systems. *MedShare* preserves patient privacy through a two-way authentication process that collects patient consent before any data authorization is made. To integrate patient consent into a data-sharing scenario, our system takes advantage of national identification cards that are swiped by the patients during their medical visits. All patients in a hospital are uniquely identified by their identity numbers, which are hashed in the data indexing process. The patient indexing technique enables a more secure data exchange environment and develops a sense of safe cooperation between hospitals. Our future work includes developing an intensive auditing process over shared medical data. To this end, we also aim to study potential attacks on the deployed system.
In data sharing scenarios where multiple languages are used to store, process and communicate data, language-dependent and unified data formats are not directly applicable. This necessitates additional work to tackle interoperable systems using two or more languages, in which both syntax and semantics play an important role. We aim to increase the number of hospitals in our interoperable resource sharing network. We also plan to report our findings on the scalability and openness of the system. Robust evaluation studies are needed to evaluate the non-functional aspects of such systems, including scalability, heterogeneity, resource management, transparency, openness and performance analysis.

Acknowledgment {#acknowledgment .unnumbered}
==============

This work was supported by the Macau Science and Technology Development Fund (FDCT) (No. 103/2015/A3 and 018/2011/A1) and the National Natural Science Foundation of China (NSFC) (No. 61562011 and 61672435).

[^1]: Yilong Yang and Xiaoshan Li are with the Faculty of Science and Technology, University of Macau, Macau. E-mail: yylonly@gmail.com, xsl@umac.mo.

[^2]: Nafees Qamar is with the Department of Computer Science, Biomedical and Health Informatics program, State University of New York, USA. Email: nafees.qamar@oswego.edu.

[^3]: Wei Ke is with Macau Polytechnic Institute, Macau.

[^4]: Zhiming Liu is with the School of Computer and Information Science, Southwest University, Chongqing, China. Email: zhimingliu88@swu.edu.cn.

[^5]: <https://github.com/yylonly/medshare>

[^6]: <https://medical-data-models.org>

[^7]: <http://www.omg.org/spec/UML/>
--- abstract: 'We address the following question: To what extent can a quantum field tell if it has been placed in de Sitter space? Our approach is to use the techniques of non-equilibrium quantum field theory to compute the time evolution of a state which starts off in flat space for (conformal) times $\eta<\eta_0$, and then evolves in a de Sitter background turned on instantaneously at $\eta=\eta_0$. We find that the answer depends on what quantities one examines. We study a range of them, all based on two-point correlation functions, and analyze which ones approach the standard Bunch-Davies values over time. The outcome of this analysis suggests that the nature of the equilibration process in this system is similar to that in more familiar systems.' author: - Andreas Albrecht - 'R. Holman' - 'Benoit J. Richard' bibliography: - 'FeelingDeSitterV009PRD.bib' title: 'Equilibration of a quantum field in de Sitter space-time' --- Introduction\[sec:intro\] ========================= De Sitter space is widely accepted as a probable early-universe cosmological solution, as it describes the state of the universe during inflation. Provided our universe possesses a completely stable positive cosmological constant, it should also asymptotically approach de Sitter space at late times. The standard lore of de Sitter space is that it acts as a heat bath, in such a way that an Unruh-deWitt detector for a quantum field in a de Sitter background will register a thermal spectrum for the number of particles in a given momentum mode ([@BirrellDavies1982] and references therein). But how does this happen? If de Sitter space is past and future eternal and the state is de Sitter invariant, then it should not come as a surprise that the Green’s functions of the quantum field embedded in this background should partake of its thermal behavior as evidenced by the periodicity in imaginary time inherent in the metric [@GibbonsHawking1977].
However, suppose we start the de Sitter evolution at an initial time, as might happen in inflation, say, and further assume that we start the field in a state that is not de Sitter invariant. What happens next? We address this question here in a particular scheme which is chosen to be relevant to the question and also technically tractable. Consider the situation where, for conformal times $\eta<\eta_0$, the background geometry is that of Minkowski flat space-time, and a minimally coupled free field is taken to be in the free field vacuum state of the flat space Hamiltonian. Then at $\eta =\eta_0$, the background is changed to become de Sitter space with an expansion rate $H$ so that the Gibbons-Hawking temperature is $T_{\rm de\ S} = \frac{H}{2\pi}$. Since we are only considering free field theory in a time-dependent background, we can solve the functional Schrödinger equation for the wave functional describing the state of the field explicitly, and use this wave functional to understand to what extent this state approximates the “thermal” Bunch-Davies (BD) state [@BunchDavies1978] (extending the work on the corresponding modes by Schomblond and Spindel [@SchomblondSpindel1976]), by analyzing ratios of various correlators and stress-energy tensors, evaluated in our vacuum state, to the corresponding quantities in the BD vacuum state. This issue is not only of conceptual relevance, but could have observational consequences as well. The state of the field prior to inflation need not be one that matches on smoothly to the BD state at the onset of inflation, and if the number of e-folds is close to the minimum it is not an outlandish thought that some remnants of this pre-inflationary state might have survived to imprint themselves on the CMB and/or large scale structure.
Conversely, given how well the power spectrum of CMB fluctuations has been measured [@WMAP2012; @Planck2013], and how closely this spectrum follows what would have been expected from the assumptions of an initial BD state, we can use this data and our calculation to constrain the space of allowed initial states for inflationary fluctuations. It is worth noting that the question we are asking can be recast as: to what extent are there no-hair theorems for the quantum state of a test field in de Sitter space? There has been some prior work in this direction, starting from the seminal work of Ford and Vilenkin [@VilenkinFord1982] as well as the more recent one of Anderson, Eaker, Habib, and Molina-París [@AndersonHabibMottolaParis2000]. In both cases, an attractor behavior was found for sufficiently well-behaved states. Related issues were also addressed in [@FischlerKunduPedraza2013], [@FischlerNguyenPedrazaTangarife2014], [@SinghGangulyPadmanabhan2013], and [@SinghModakPadmanabhan2013]. Our viewpoint is somewhat different here; we don’t know what the state of the field is prior to inflation but, regardless, it should be reasonable to ask what the evolution of that state is after inflation begins. Then we can ask to what extent the BD behavior is generic at late times during inflation. In the next section we set up the initial value problem for the Schrödinger wave functional with the flat space initial conditions described above. We then use that wave functional to compute two-point functions in our state. Since we have a free field theory, that state will be a gaussian, and thus fully described by the three correlation functions: $\langle \Phi_{\vec{k}}\Phi_{-\vec{k}}\rangle$, $\langle \Pi_{\vec{k}}\Pi_{-\vec{k}}\rangle$, and $\langle \Pi_{\vec{k}} \Phi_{-\vec{k}}+ \Phi_{\vec{k}}\Pi_{-\vec{k}}\rangle$. Additionally, we study observables such as the expectation value of the stress-energy tensor of this state. 
Section \[sec:numerics\] is devoted to numerical results and the analysis of ratios of two-point functions, and stress-energy tensors, evaluated in both our state and the BD states. We end with a discussion of our results as well as some further directions to take in addressing the issues dealt with here. Overall our vacuum state approaches the BD state, when considering coarse-grained collections of modes clearly within the horizon. \[sec:schrodingerFT\] Wave Functional, Mode Equation, and Correlation Functions =============================================================================== Finding the Schrödinger wave functional --------------------------------------- As discussed in the introduction, we consider a test scalar field embedded in an FRW space-time that transitions between a constant scale factor for conformal times $\eta\leq \eta_0$ to a de Sitter scale factor for $\eta>\eta_0$. We assume that such a space-time is generated by an appropriate stress-energy tensor via the Einstein equations, but do not concern ourselves further with how this background geometry is obtained. If $\Phi(\vec{x}, \eta)$ denotes the scalar field in question, the action we use is $$\label{eq:action} S = \frac{1}{2}\int d^4 x a^4(\eta)\left[\frac{1}{a^2(\eta)} \left(\Phi^{\prime 2}-(\nabla \Phi)^2\right) -m^2 \Phi^2\right],$$ \ where a prime denotes a derivative with respect to $\eta$, and $m^2 = m^2_{\Phi} + \xi_BR$, $m_\Phi$ referring to the mass of our field. We will take the scale factor as $$a(\eta) = \left\{\begin{array}{cc}-\frac{1}{\eta_0 H} & \eta\leq \eta_0 \\-\frac{1}{\eta H} & \eta>\eta_0\end{array}\right.$$ \ The scale factor is continuous though not differentiable at $\eta=\eta_0$. A more reasonable assumption would be that the transition is smoother than this (for an example see [@VilenkinFord1982]), but this will suffice for our purposes. 
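For later reference, the logarithmic derivatives of this scale factor, which enter the mode equation below through the $a''/a$ term, are easily computed on the de Sitter branch:

```latex
a(\eta)=-\frac{1}{\eta H}\quad (\eta>\eta_0)
\qquad\Longrightarrow\qquad
\frac{a'(\eta)}{a(\eta)}=-\frac{1}{\eta}\,,
\qquad
\frac{a''(\eta)}{a(\eta)}=\frac{2}{\eta^{2}}\,,
```

while both ratios vanish on the flat branch $\eta\leq\eta_0$, where $a$ is constant; the kink at $\eta_0$ is precisely the non-differentiability noted above.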
Instead of quantizing this theory in the usual way (i.e., by defining creation and annihilation operators acting on the Fock space of states) we will use a Schrödinger picture quantization [@BoyanovskyVegaHolman1994]. Here we use eigenstates of the Schrödinger picture field operator $\hat{\Phi}(\vec{x})$, $|\Phi(\cdot)\rangle$ such that $$\label{eq:spicquant} \hat{\Phi}(\vec{x}) |\Phi(\cdot)\rangle=\Phi(\vec{x}) |\Phi(\cdot)\rangle.$$ \ The state of the field is then represented by a wave functional $\Psi\left[\Phi(\cdot); \eta\right]$ (or more generally by a density matrix element $\rho[\Phi(\cdot),\tilde{\Phi}(\cdot);\eta]$) satisfying the Schrödinger (Liouville) equation $$\label{eq:schreqn} i\frac{\partial \Psi[\Phi(\cdot); \eta]}{\partial \eta} = \hat{H}\left[-i\frac{\delta}{\delta \Phi(\cdot)}, \Phi(\cdot)\right] \Psi[\Phi(\cdot); \eta]\quad \left(\text{or }i\frac{\partial\rho}{\partial \eta}=\left[ \hat{H}, \rho\right]\right),$$ \ where $\hat{H}$ is the Hamiltonian operator (again in the Schrödinger picture) obtained from the action in Eq. (\[eq:action\]). For our case this reads $$\label{eq:hamiltonian} \hat{H} = \int d^3 x\left\{ \frac{\Pi^2}{2 a^2 (\eta)} +\frac{1}{2} a^2 (\eta) \left(\nabla \Phi(\vec{x})\right)^2 + \frac{1}{2} m^2 a^4(\eta)\Phi(\vec{x})^2 \right\},$$ \ with $\Pi = a^2 (\eta) \Phi^{\prime}$ being the canonically conjugate momentum to $\Phi$, represented in the usual way as $\Pi\rightarrow -i\delta\slash \delta \Phi(\cdot)$ in the Schrödinger picture. We note that the Schrödinger equation in Eq. (\[eq:schreqn\]) should be written using the proper time of the observer measuring the wave function. For an FRW space-time this would be the cosmic time $t$. However, the use of conformal time corresponds to a canonical transformation and thus gives rise to the same physics [@BoyanovskyVegaHolman1994], as would be expected of a coordinate transformation. 
It will be important in our later analysis to keep in mind that $t\rightarrow \infty$ corresponds to $\eta \rightarrow 0^-$. We will take our spatial geometry to be flat so we can expand the field in terms of Fourier components. Furthermore, we will quantize our field in a box of comoving spatial volume $V$ so that the Schrödinger picture field and conjugate momenta can be written as $$\begin{aligned} \label{eq:expansion} & & \Phi(\vec{x}) = \frac{1}{\sqrt{V}} \sum_{\vec{k}} \Phi_{\vec{k}} e^{-i \vec{k}\cdot \vec{x}}\nonumber\\ & & \Pi(\vec{x}) = \frac{1}{\sqrt{V}} \sum_{\vec{k}} \Pi_{\vec{k}} e^{-i \vec{k}\cdot \vec{x}},\end{aligned}$$ \ where the equal time commutation relations $$\label{eq:commrel} \left[ \Phi_S(\vec{x}),\Pi_S(\vec{y})\right]= i\delta^3(\vec{x}-\vec{y})$$ \ imply $\left[\Phi_{\vec{k}}, \Pi_{\vec{q}}\right] = i \delta_{\vec{k}, -\vec{q}}$ and thus, in the Schrödinger picture, $\Pi_{\vec{q}}$ can be represented as $-i\frac{\delta}{\delta \Phi_{\vec{-q}}}$. Hence, the Hamiltonian breaks up into the sum of Hamiltonians for each mode, and we can also write the wave function as the product of wave functions for each mode: $$\begin{aligned} \label{eq:momentumspace} && H = \sum_{\vec{k}} H_{\vec{k}}\quad \text{with } H_{\vec{k}}= \frac{\Pi_{\vec{k}} \Pi_{-\vec{k}}}{2 a^2(\eta)} +\frac{1}{2} a^2(\eta) \Omega_{\vec{k}}^2(\eta)\ \Phi_{\vec{k}} \Phi_{-\vec{k}},\nonumber\\ && \Psi[\{\Phi_{\vec{k}}\}, \eta] = \prod_{\vec{k}} \psi_{\vec{k}} (\Phi_{\vec{k}}, \eta),\nonumber\\ && \Omega_k^2(\eta) \equiv k^2 + m^2 a^2(\eta).\end{aligned}$$ \ Since we have a free field theory, our ansatz for the ground-state wave functional for each mode should be Gaussian as in $$\label{eq:gaussianansatz} \psi_{\vec{k}} (\Phi_{\vec{k}}, \eta)=N_{\vec{k}} (\eta) \exp\left(-\frac{1}{2} A_k (\eta) \Phi_{\vec{k}} \Phi_{-\vec{k}}\right),$$ \ where we have made use of rotational invariance to write the kernel $A_k(\eta)$ as a function of the magnitude $k$ of $\vec{k}$. 
By matching powers of $\Phi_{\vec{k}}$ on either side of the Schrödinger equation for each mode we find: $$\begin{aligned} \label{eq:seqn} && i\frac{N_{\vec{k}}^{\prime} (\eta)}{N_{\vec{k}} (\eta)} = \frac{A_k(\eta)}{2 a^2(\eta)} \nonumber\\ && i A_k^{\prime}(\eta) = \frac{A_k^2(\eta)}{a^2(\eta)}-a^2(\eta)\Omega_{k}^2(\eta)\quad A_k(\eta_0) = \Omega_k(\eta_0) a^2(\eta_0)\end{aligned}$$ \ where the primes represent conformal time derivatives, and the initial condition is found by considering the ground state wave function of a quantum mechanical harmonic oscillator with mass $a^2(\eta_0)$ and frequency $\Omega_k (\eta_0)$.

Solving the mode equations
--------------------------

Eq. (\[eq:seqn\]) is of the Riccati form and can be converted into a second order equation of Schrödinger type via the substitution $$A_k(\eta) = -i a^2(\eta)\left(\frac{\phi_k^{\prime}(\eta)}{\phi_k(\eta)}-\frac{a^{\prime}(\eta)}{a(\eta)}\right).$$ \ Doing this we find $$\label{eq:mode} \phi_k^{\prime \prime}(\eta) + \left(\Omega_k^2(\eta)-\frac{a^{\prime \prime}(\eta)}{a(\eta)}\right) \phi_k(\eta)=0,\quad \phi_k^{\prime}(\eta_0) =\left( i \Omega_k(\eta_0)+\frac{a^{\prime}(\eta_0)}{a(\eta_0)}\right)\phi_k(\eta_0).$$ \ The equation we start with for $A_k(\eta)$ is a first order equation and we have one initial condition for it so that there is a unique solution for $A_k(\eta)$. On the other hand, the equation for $\phi_k(\eta)$ is a second order one, requiring two initial conditions for a unique solution. The resolution of this dilemma can be found by noting that $A_k$ is related to the ratio of $\phi_k^{\prime}$ and $\phi_k$. This means that in any linear combination of the two independent solutions to Eq. (\[eq:mode\]), we can factor out an overall constant leaving only one constant to be determined. We can use this freedom to fix the (constant) Wronskian of $\phi_k(\eta)$ and $\phi_k^*(\eta)$ to equal $-i$. Imposing this condition then implies that $\phi_k(\eta_0) = \frac{1}{\sqrt{2 \Omega_k(\eta_0)}}$.
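As a sanity check on this setup, the following self-contained Python sketch integrates the mode equation numerically for the massless case, where $\Omega_k^2 - a''/a = k^2 - 2/\eta^2$ on the de Sitter branch, using the stated initial data, and verifies that the Wronskian $\phi_k\phi_k^{*\prime}-\phi_k^{\prime}\phi_k^{*}$ stays fixed at $-i$. The parameter values ($k=1$, $\eta_0=-5$, in units $H=1$) are arbitrary illustrative choices:

```python
# RK4 integration of phi'' + (k^2 - 2/eta^2) phi = 0 (massless mode in
# de Sitter), checking conservation of the Wronskian
# W = phi phi*' - phi' phi* = -i fixed by the initial conditions.
k, eta0, eta_end, h = 1.0, -5.0, -0.5, 1e-3

phi0 = 1 / (2 * k) ** 0.5            # phi(eta0) = 1/sqrt(2 Omega_k(eta0))
dphi0 = (1j * k - 1 / eta0) * phi0   # phi'(eta0), using a'/a = -1/eta

def rhs(eta, y):
    p, dp = y
    return (dp, -(k * k - 2 / eta ** 2) * p)

def rk4_step(eta, y, h):
    k1 = rhs(eta, y)
    k2 = rhs(eta + h / 2, tuple(v + h / 2 * d for v, d in zip(y, k1)))
    k3 = rhs(eta + h / 2, tuple(v + h / 2 * d for v, d in zip(y, k2)))
    k4 = rhs(eta + h, tuple(v + h * d for v, d in zip(y, k3)))
    return tuple(v + h / 6 * (a + 2 * b + 2 * c + d)
                 for v, a, b, c, d in zip(y, k1, k2, k3, k4))

def wronskian(y):
    p, dp = y
    return p * dp.conjugate() - dp * p.conjugate()

y, eta = (phi0, dphi0), eta0
for _ in range(int(round((eta_end - eta0) / h))):
    y = rk4_step(eta, y, h)
    eta += h
```

Evaluating `wronskian(y)` at the end of the run (or at any intermediate time) returns $-i$ to numerical accuracy, confirming that the normalization imposed at $\eta_0$ is preserved by the evolution, as it must be for an equation with no first-derivative term.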
Eq. (\[eq:mode\]) is nothing but the mode equation for a massive, minimally coupled scalar field in de Sitter space. The solutions are well known [@SchomblondSpindel1976] and we can write $$\begin{aligned} \label{eq:modesoln} && \phi_k(\eta) = \alpha_k {\cal U}_k(\eta)+\beta_k {\cal U}_k^*(\eta),\quad {\cal U}_k(\eta) = \frac{\sqrt{-\pi \eta}}{2}H_{\nu}^{(2)}(-k\eta),\nonumber\\ && \alpha_k = \frac{i}{\sqrt{2 \Omega_k(\eta_0)}}\left[{\cal U}_k^{* \prime}(\eta_0)+ \left(-i \Omega_k(\eta_0)+\frac{1}{\eta_0}\right){\cal U}_k^{*}(\eta_0)\right], \\ &&\beta_k = -\frac{i}{\sqrt{2 \Omega_k(\eta_0)}}\left[{\cal U}_k^{ \prime}(\eta_0)+\left(-i \Omega_k(\eta_0)+\frac{1}{\eta_0}\right){\cal U}_k(\eta_0)\right],\nonumber\end{aligned}$$ \ where $\nu = \sqrt{\frac{9}{4}-\frac{m^2}{H^2}}$ and ${\cal U}_k(\eta)$ is commonly referred to as the $k^{th}$ Bunch-Davies mode. It is easy to check that the Wronskian condition implies that $|\alpha_k|^2-|\beta_k|^2=1$; had we been doing Heisenberg field theory, we would infer that the modes $\phi_k(\eta)$ are just the Bogoliubov transforms of the BD modes. Moreover, as $\eta_0 \rightarrow -\infty$, the form of ${\cal U}_k(\eta)$ allows us to conclude: $$\begin{aligned} \label{eq:limeta0} && \Omega_k(\eta_0) \rightarrow k \nonumber \\ &&{\cal U}_k(\eta_0) \rightarrow \frac{1}{\sqrt{2k}}, \\ &&{\cal U'}_k(\eta_0) \rightarrow i \sqrt{\frac{k}{2}} \nonumber,\end{aligned}$$ \ from which we can infer $\alpha_k \rightarrow 1$ and $\beta_k \rightarrow 0$, i.e. in this limit, we go back to an eternal inflationary patch of de Sitter space with the field state being the BD state. 
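The Bogoliubov relation $|\alpha_k|^2-|\beta_k|^2=1$ can be verified directly. In the massless case $\nu=3/2$ the Hankel function reduces to elementary functions, and, up to an irrelevant constant phase consistent with the limits in Eq. (\[eq:limeta0\]), ${\cal U}_k(\eta)= e^{ik\eta}\left(1+i/(k\eta)\right)/\sqrt{2k}$. The short Python sketch below evaluates $\alpha_k$ and $\beta_k$ from Eq. (\[eq:modesoln\]) at an arbitrarily chosen $\eta_0$ (the values $k=1$, $\eta_0=-5$ are for illustration only):

```python
# Check |alpha_k|^2 - |beta_k|^2 = 1 for the massless (nu = 3/2) case,
# where Omega_k(eta0) = k exactly.
import cmath

k, eta0 = 1.0, -5.0
omega0 = k

def U(eta):
    # nu = 3/2 Bunch-Davies mode in closed form (constant phase dropped)
    return cmath.exp(1j * k * eta) * (1 + 1j / (k * eta)) / (2 * k) ** 0.5

def dU(eta):
    # d/d(eta) of U
    return cmath.exp(1j * k * eta) * (1j * k - 1 / eta - 1j / (k * eta ** 2)) \
           / (2 * k) ** 0.5

c = -1j * omega0 + 1 / eta0
alpha = 1j / (2 * omega0) ** 0.5 * (dU(eta0).conjugate()
                                    + c * U(eta0).conjugate())
beta = -1j / (2 * omega0) ** 0.5 * (dU(eta0) + c * U(eta0))
```

The identity holds to machine precision for any $\eta_0$, and $|\beta_k|$ shrinks as $\eta_0\rightarrow-\infty$, in line with the recovery of the pure BD state in that limit.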
The full wave function for the mode $\Phi_{\vec{k}}$ is thus given by $$\label{eq:kwavefcn} \psi_{\vec{k}}(\eta) = \left(\frac{a^2(\eta)}{\pi \left|\phi_k(\eta)\right|^2}\right)^{\frac{1}{4}} \exp\left[\frac{i}{2} a^2(\eta) \left(\frac{\phi_k^{\prime}(\eta)}{\phi_k(\eta)}-\frac{a^{\prime}(\eta)}{a(\eta)}\right) \Phi_{\vec{k}} \Phi_{-\vec{k}}\right],$$ \ where we should note that when computing any expectation values for quantities involving the mode $\Phi_{\vec{k}}$, we also need to include the contribution of the wave function $\psi_{-\vec{k}}(\eta)$, since $\Phi_{-\vec{k}} = \Phi_{\vec{k}}^*$, and $\Phi$ is a real field. This is equivalent to using the square of $\psi_{\vec{k}}(\eta)$ in any such calculation. Eq. (\[eq:kwavefcn\]) coupled with the mode equations (\[eq:mode\]) gives the full specification of the quantum state with the given initial conditions. We can now use this wave function to compute observables that might help us answer the question asked in the introduction: to what extent does this state “feel” de Sitter space? Calculating relevant correlation functions ------------------------------------------ What are the useful diagnostic tools to evaluate the behavior of this state? Since the state is Gaussian, it can be fully specified by the following correlators: $\langle \Phi_{\vec{k}}\Phi_{-\vec{k}}\rangle,\ \langle \Pi_{\vec{k}}\Pi_{-\vec{k}}\rangle,\ \langle \Pi_{\vec{k}} \Phi_{-\vec{k}}+ \Phi_{\vec{k}}\Pi_{-\vec{k}}\rangle$, computed below. From (\[eq:kwavefcn\]) computing $\langle \Phi_{\vec{k}} \Phi_{-\vec{k}}\rangle(\eta)$ results in $$\begin{aligned} \label{eq:2ptphi} \langle \Phi_{\vec{k}} \Phi_{-\vec{k}}\rangle(\eta) &&= \int {\cal D}\Phi_{\vec{k}} \left|\psi_{\vec{k}}(\eta)\right|^2 \left|\psi_{-\vec{k}}(\eta)\right|^2 \Phi_{\vec{k}} \Phi_{-\vec{k}} \nonumber \\ && = \frac{1}{2A_{kR}} \\ && = \frac{|\phi_k(\eta)|^2}{a^2(\eta)}. \nonumber\end{aligned}$$ \ where $A_{kR}$ denotes the real part of the kernel $A_k(\eta)$.
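As a sanity check (our own, not from the original), the chain of equalities above — and the analogous identities for the momentum and mixed correlators computed next — can be confirmed numerically for any $\phi_k$ obeying the Wronskian condition. We build such a $\phi_k$ from the massless closed-form modes with an arbitrary Bogoliubov pair; $H$, $k$, $\eta$ and $\beta$ are arbitrary choices.

```python
import numpy as np

# For any phi_k with Wronskian -i and A_k = -i a^2 (phi_k'/phi_k - a'/a):
#   1/(2 A_kR)       = |phi_k|^2 / a^2
#   |A_k|^2/(2 A_kR) = a^4 |d/deta(phi_k/a)|^2
#   -A_kI/A_kR       = a^2 d/deta(|phi_k|^2/a^2)

H, k, eta = 1.0, 3.0, -2.0
beta = 0.4 + 0.2j
alpha = np.sqrt(1.0 + abs(beta) ** 2)  # keeps the Wronskian equal to -i

def U(e):  # nu = 3/2 closed-form BD mode
    return -np.exp(1j * k * e) / np.sqrt(2 * k) * (1 + 1j / (k * e))

def dU(e):
    return -np.exp(1j * k * e) / np.sqrt(2 * k) * (1j * k - 1.0 / e - 1j / (k * e ** 2))

phi = alpha * U(eta) + beta * np.conj(U(eta))
dphi = alpha * dU(eta) + beta * np.conj(dU(eta))
a, da = -1.0 / (H * eta), 1.0 / (H * eta ** 2)  # de Sitter: a = -1/(H eta)

A = -1j * a ** 2 * (dphi / phi - da / a)
AR, AI = A.real, A.imag

phiphi_kernel, phiphi_modes = 1.0 / (2 * AR), abs(phi) ** 2 / a ** 2
pipi_kernel = abs(A) ** 2 / (2 * AR)
pipi_modes = a ** 4 * abs(dphi / a - phi * da / a ** 2) ** 2
mixed_kernel = -AI / AR
mixed_modes = 2 * (dphi * np.conj(phi)).real - 2 * abs(phi) ** 2 * da / a
```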
The other correlators are also easy enough to compute. For $\langle \Pi_{\vec{k}}\Pi_{-\vec{k}}\rangle$ we have $$\begin{aligned} \label{eq:2ptpi} \langle \Pi_{\vec{k}} \Pi_{-\vec{k}}\rangle(\eta) &&= \int {\cal D}\Phi_{\vec{k}}\ \psi_{\vec{k}}(\eta)^{* 2} \left(-\frac{\delta^2}{\delta \Phi_{\vec{k}} \delta \Phi_{-\vec{k}}}\right) \psi_{-\vec{k}}(\eta)^2 \nonumber \\ && =\frac{\left|A_k\right|^2}{2 A_{k R}} \\ && = a^4(\eta) \left| \frac{d}{d\eta}\left(\frac{\phi_k(\eta)}{a}\right) \right|^2. \nonumber\end{aligned}$$ \ Finally, we find $\langle \Pi_{\vec{k}} \Phi_{-\vec{k}}+ \Phi_{\vec{k}}\Pi_{-\vec{k}}\rangle$ to be given by $$\begin{aligned} \label{eq:2ptmixed} \langle \Pi_{\vec{k}} \Phi_{-\vec{k}}+ \Phi_{\vec{k}}\Pi_{-\vec{k}}\rangle&& =\int {\cal D}\Phi_{\vec{k}}\ \psi_{\vec{k}}(\eta)^{* 2} \left(-i\frac{\delta}{\delta \Phi_{-\vec{k}}}\Phi_{\vec{k}} -\Phi_{-\vec{k}}i\frac{\delta}{\delta \Phi_{\vec{k}}} \right) \psi_{-\vec{k}}(\eta)^2 \nonumber \\ && = -\frac{A_{k I}}{A_{k R}}\\ &&=a^2(\eta) \frac{d}{d\eta}\left(\frac{\left|\phi_k(\eta)\right|^2}{a^2(\eta)}\right) \nonumber.\end{aligned}$$ \ We can check to see what happens to our two-point functions as a function of time. In particular, we might expect that, if de Sitter space really did act as a heat bath and an “equilibration” process truly was in effect over time, then we should see these correlators approach the standard de Sitter two-point functions. We can check this by noticing that at late times, it is only the imaginary part of the Hankel function that becomes relevant (since it is singular as $\eta\rightarrow 0^-$). 
Hence, the late-time expression of the Hankel function is $$\lim_{\eta \to 0^-} H_{\nu}^{(2)}(-k\eta) = i\frac{\Gamma(\nu)}{\pi}\left(\frac{2}{-k\eta}\right)^{\nu},$$ \ so that, as $-k\eta$ approaches $0^+$ for finite $k$, we can use this form in our two-point functions to find: $$\begin{aligned} \label{eq:2ptfunctionsAsymptotic} &\left\langle \Phi_{\vec{k}}\Phi_{-\vec{k}} \right\rangle \rightarrow 4^{\nu -1}\frac{H^2 \Gamma^2(\nu)}{\pi}\left|\alpha_k - \beta_k \right|^2(-\eta)^{3} (-k\eta)^{-2\nu} , \nonumber \\ & \left\langle\Pi_{\vec{k}}\Pi_{-\vec{k}} \right\rangle \rightarrow 4^{\nu - 3}\frac{\left|\alpha_k - \beta_k \right|^2}{\pi H^2} (-\eta)^{-3}(-k\eta)^{-2\nu}\left[(k\eta)^2\Gamma(\nu-1)+(6-4 \nu)\Gamma(\nu)\right]^2, \\ & \left\langle \Phi_{\vec{k}}\Pi_{-\vec{k}} +\Pi_{-\vec{k}}\Phi_{\vec{k}} \right\rangle \rightarrow -4^{\nu-\frac{3}{2}} \frac{\left|\alpha_k - \beta_k \right|^2}{\pi} (-k\eta)^{-2\nu}\Gamma(\nu)\left[(k\eta)^2\Gamma(\nu-1)+(6-4\nu)\Gamma(\nu)\right]. \nonumber\end{aligned}$$ \ From (\[eq:modesoln\]), we can compute $ \left|\alpha_k-\beta_k\right|^2$ as $$\left|\alpha_k-\beta_k\right|^2=\frac{2}{\Omega_k(\eta_0)}\left[\left(\Re\left({\cal U}_k^{\prime}(\eta_0)+\frac{1}{\eta_0}{\cal U}_k(\eta_0)\right)\right)^2+\Omega_k^2(\eta_0) \left(\Re\left({\cal U}_k(\eta_0)\right)\right)^2\right]. \label{EqnCoeff}$$ \ At first glance, focusing our attention on $\left\langle \Phi_{\vec{k}}\Phi_{-\vec{k}} \right\rangle$, Eq. (\[eq:2ptfunctionsAsymptotic\]) tells us that, even at late times, information about the initial state as encoded in the coefficients $\alpha_k$ and $ \beta_k$ is not lost, at least not in the two-point function. This should not be surprising since unitary evolution always preserves information about the initial state as long as the state is viewed in a sufficiently fine-grained manner. It is only through coarse-graining that a process of equilibration (should it occur) will be revealed.
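The small-argument Hankel limit quoted above can be spot-checked (our own check) against the elementary closed form valid for $\nu = 3/2$, $H^{(2)}_{3/2}(x) = -\sqrt{2/(\pi x)}\, e^{-ix}(1-i/x)$; the sample point $x$ is an arbitrary small value.

```python
import cmath, math

# Compare H^(2)_{3/2}(x) = -sqrt(2/(pi x)) e^{-ix} (1 - i/x)
# against the limit  i Gamma(nu)/pi * (2/x)^nu  as x -> 0+.

nu = 1.5
x = 1e-4

exact = -math.sqrt(2.0 / (math.pi * x)) * cmath.exp(-1j * x) * (1 - 1j / x)
limit = 1j * math.gamma(nu) / math.pi * (2.0 / x) ** nu

rel_err = abs(exact - limit) / abs(limit)
```

For $\nu = 3/2$ the leading correction to the limit is $O(x^2)$, so at $x = 10^{-4}$ the two expressions agree to better than one part in $10^6$.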
In this paper we will consider coarse-graining that is expressed by looking at quantities averaged over a range of $k$ modes. We can be more explicit about Eq. (\[EqnCoeff\]) in the massless, minimally coupled case where $\nu= \frac{3}{2}$ (this was also treated in [@AndersonHabibMottolaParis2000]). In this case, $${\cal U}_k(\eta) = -\frac{e^{ik\eta}}{\sqrt{2 k}}\left(1+\frac{i}{k \eta}\right),\quad \Omega_k(\eta_0) = k,$$ \ and $$\label{eq:bogomass} \left|\alpha_k-\beta_k\right|^2 = 1-\frac{\sin 2 k \eta_0}{k \eta_0} + \frac{\sin^2 k \eta_0}{\left(k \eta_0\right)^2}.$$ \ For $-k\eta_0\gg 1$, this modulating factor tends towards $1$. The two-point functions are studied in further detail in section \[sec:numerics\], after substituting $q = -k\eta_0$ in the mode equation, and performing proper rescalings of our quantities by appropriate powers of $-\eta_0$. The stress-energy tensor ------------------------ With the previous two-point functions in hand, we can study the equilibration of our mode with respect to the BD mode. Moreover, we can calculate relevant quantities such as the stress-energy tensor and in particular the energy density $\left\langle T^0_{\phantom{b}0} \right\rangle$. We need to compute the expectation value of the stress-energy tensor in a particular state corresponding to the density matrix $\rho(\eta)$.
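The closed form in Eq. (\[eq:bogomass\]) can be verified directly against the definitions of $\alpha_k$ and $\beta_k$ (our own check; $k$ and $\eta_0$ are arbitrary values):

```python
import numpy as np

# |alpha_k - beta_k|^2 from the definitions, with the nu = 3/2 mode above,
# versus the closed form 1 - sin(2 k eta0)/(k eta0) + sin^2(k eta0)/(k eta0)^2.

k, eta0 = 5.0, -7.0
Omega0 = k  # massless case

def U(e):
    return -np.exp(1j * k * e) / np.sqrt(2 * k) * (1 + 1j / (k * e))

def dU(e):
    return -np.exp(1j * k * e) / np.sqrt(2 * k) * (1j * k - 1.0 / e - 1j / (k * e ** 2))

pref = 1j / np.sqrt(2 * Omega0)
alpha = pref * (np.conj(dU(eta0)) + (-1j * Omega0 + 1.0 / eta0) * np.conj(U(eta0)))
beta = -pref * (dU(eta0) + (-1j * Omega0 + 1.0 / eta0) * U(eta0))

lhs = abs(alpha - beta) ** 2
rhs = 1 - np.sin(2 * k * eta0) / (k * eta0) + np.sin(k * eta0) ** 2 / (k * eta0) ** 2
norm = abs(alpha) ** 2 - abs(beta) ** 2
```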
The stress-energy tensor in operator form is $$\begin{aligned} T_{\mu\nu} = &\left(1-2\xi_B\right)\nabla_\mu\Phi\nabla_\nu\Phi + \left(2\xi_B-\frac{1}{2}\right)g_{\mu\nu}g^{\alpha\beta}\nabla_\alpha\Phi\nabla_\beta\Phi + g_{\mu\nu} V(\Phi) \\ &-\xi_B\Phi^2\left(R_{\mu\nu} - \frac{1}{2}g_{\mu\nu}R\right) + 2\xi_B\Phi(g_{\mu\nu}\Box - \nabla_\mu\nabla_\nu)\Phi.\end{aligned}$$ \ Thus, in a de Sitter background, $$\begin{aligned} \label{eq:T00} %\begin{split} \left\langle T^0_{\phantom{b}0} \right\rangle = & \left\langle \frac{\Phi'^2}{2a^2} + \frac{1}{2a^2}(1-4\xi_B)(\nabla \Phi)^2 + V(\Phi) - \xi_B G^0_{\phantom{b}0}\Phi^2 + 2 \xi_B \Phi \left[3\frac{a'}{a^3}\Phi' - \frac{1}{a^2} \nabla^2\Phi \right] \right\rangle, %\end{split}\end{aligned}$$ \ where $G^{\mu}_{\phantom{b}\nu} = R^{\mu}_{\phantom{b}\nu} - \frac{1}{2} \delta^{\mu}_{\phantom{b}\nu} R$ is the Einstein tensor. $\rho(\eta)$ is written in terms of the Fourier components of the fluctuation fields. Hence, $T^0_{\phantom{b}0}$ also needs to be expanded in terms of such fluctuations. Additionally, we have $\Phi' = \frac{\tilde{\pi}}{a^2}$ and need to hermitianize $\Phi\Phi'$ so that $\Phi\Phi'$ goes to $\frac{1}{2a^2}(\Phi\tilde{\pi} + \tilde{\pi}\Phi)$. Therefore, combining the previous results with Eq. (\[eq:T00\]), one obtains $$\begin{aligned} \label{eq:T00Fourier} \left\langle T^0_{\phantom{b}0} \right\rangle = \int \frac{d^3k}{(2\pi)^3} & \Bigg[ \frac{1}{2a^6} \left\langle\Pi_{\vec{k}}\Pi_{-\vec{k}}\right\rangle + \left(\frac{1}{2a^2}(k^2 + a^2 V''(\Phi)) - \xi_BG^0_{\phantom{b}0} \right) \left\langle \Phi_{\vec{k}}\Phi_{-\vec{k}}\right\rangle \nonumber \\ &+3\xi_B\frac{a'}{a^5}\left\langle \Phi_{\vec{k}}\Pi_{-\vec{k}} + \Pi_{\vec{k}}\Phi_{-\vec{k}}\right\rangle \Bigg],\end{aligned}$$ \ where in our case, $a = -\frac{1}{\eta H}$, $V''(\Phi)= m^2$, $\frac{m^2}{H^2} = \frac{9}{4} - \nu^2$, and $G^0_{\phantom{b}0} = -3H^2$.
Because of the divergences notably appearing in $\left\langle\Pi_{\vec{k}}\Pi_{-\vec{k}}\right\rangle$ and $\left\langle \Phi_{\vec{k}}\Pi_{-\vec{k}} + \Pi_{\vec{k}}\Phi_{-\vec{k}}\right\rangle$, the previous integration is not straightforward, even in the massless minimally coupled case. A more in-depth analysis of Eq. (\[eq:T00Fourier\]) will be presented in the next section. Numerical Work {#sec:numerics} ============== Numerical approach ------------------ Now that we have calculated all the relevant correlation functions involving our state and used those to compute the pertinent observables, we turn to a numerical analysis of these quantities. Before doing this, however, a rescaling of our mode equations, fields $\phi_k$ and corresponding momenta $\Pi_k$ should be performed. Time will be measured in units of $\eta_0$, $\langle \Phi_{\vec{k}}\Phi_{-\vec{k}}\rangle$ in units of $(\eta_0H)^2$ and $\langle \Pi_{\vec{k}}\Pi_{-\vec{k}}\rangle$ in units of $(\eta_0H)^{-2}$. Rescale the momenta by $\eta_0$ and the modes by $\sqrt{-\eta_0}$, and let $q = -k\eta_0$ and $x = 1 - \frac{\eta}{\eta_0}$ represent our new “time” variable. Since we are only interested in conformal times $\eta \in [\eta_0,0)$, we have $x \in [0,1)$. A given mode labeled by the comoving wavenumber $k$ crosses the de Sitter horizon when $k\eta= -1$, which corresponds to $x = 1 - \frac{1}{q}$. Then the BD mode and mode equations respectively become $$\label{eq:BDresc} {\cal U}_q(x) = \frac{\sqrt{\pi(1-x)}}{2}H^{(2)}_\nu\left(q(1-x)\right),$$ \ and $$\label{eq:moderesc} \phi_q^{\prime \prime}(x) + \left(q^2+\frac{\frac{1}{4}-\nu^2}{(1-x)^2}\right) \phi_q(x)=0,$$ \ where a prime now denotes a derivative with respect to $x$. Notice that, going from Eq. (\[eq:mode\]) to Eq. (\[eq:moderesc\]), we made the substitution $\frac{m^2}{H^2} = \frac{9}{4} - \nu^2$.
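In the massless case ($\nu = 3/2$, so the bracket in Eq. (\[eq:moderesc\]) becomes $q^2 - 2/(1-x)^2$), the rescaled equation can be integrated directly and compared against the closed-form solution built from the rescaled BD mode. The sketch below is our own cross-check, not the paper's code; $q$, $x_{\rm end}$ and the step count are arbitrary choices, and the flat-space initial data quoted below are used at $x = 0$.

```python
import numpy as np

# RK4-integrate phi'' + (q^2 - 2/(1-x)^2) phi = 0 from x = 0, then compare
# against alpha U_q + beta U_q*, with the rescaled nu = 3/2 BD mode
#   U_q(x) = -exp(-i q(1-x))/sqrt(2q) (1 - i/(q(1-x))).

q = 10.0
x_end = 0.9
n = 20000

def rhs(x, y):
    return np.array([y[1], -(q ** 2 - 2.0 / (1 - x) ** 2) * y[0]])

phi0 = 1.0 / np.sqrt(2 * q)  # massless: (q^2 + 9/4 - nu^2)^(1/4) = sqrt(q)
y = np.array([phi0, (1j * q + 1.0) * phi0], dtype=complex)

h = x_end / n
x = 0.0
for _ in range(n):
    k1 = rhs(x, y)
    k2 = rhs(x + h / 2, y + h / 2 * k1)
    k3 = rhs(x + h / 2, y + h / 2 * k2)
    k4 = rhs(x + h, y + h * k3)
    y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    x += h

def U(x):
    z = q * (1 - x)
    return -np.exp(-1j * z) / np.sqrt(2 * q) * (1 - 1j / z)

def dU(x):
    z = q * (1 - x)
    return q * np.exp(-1j * z) / np.sqrt(2 * q) * (-1j - 1.0 / z + 1j / z ** 2)

# Bogoliubov coefficients from matching the initial data at x = 0
M = np.array([[U(0), np.conj(U(0))], [dU(0), np.conj(dU(0))]])
alpha, beta = np.linalg.solve(M, np.array([phi0, (1j * q + 1.0) * phi0]))
err = abs(y[0] - (alpha * U(x_end) + beta * np.conj(U(x_end))))
```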
The initial conditions previously defined now give $$\label{eq:ICresc} \phi_q(0) = \frac{1}{\sqrt{2}\left(q^2+\frac{9}{4}-\nu^2\right)^{\frac{1}{4}}} \text{\hspace{10 pt} and \hspace{10 pt}} \phi_q^{\prime}(0) = \left[i\left(q^2 + \frac{9}{4} - \nu^2\right)^{\frac{1}{2}}+1\right]\phi_q(0).$$ \ The measure of the equilibration of our state to the BD state will be quantified by the approach of the corresponding correlators to the standard BD ones. We will examine this approach both mode by mode as well as in terms of momentum integrated quantities. For simplicity, we focus on the massless, minimally coupled case below. Correlators {#subsec:correlators} ----------- ### Single mode case We consider ratios of the form $\frac{\left\langle\Phi_{\vec{q}}\Phi_{-\vec{q}} \right\rangle^{(Mode)}}{\left\langle\Phi_{\vec{q}}\Phi_{-\vec{q}} \right\rangle^{(BD)}}$, $\frac{\left\langle\Pi_{\vec{q}}\Pi_{-\vec{q}} \right\rangle^{(Mode)}}{\left\langle\Pi_{\vec{q}}\Pi_{-\vec{q}} \right\rangle^{(BD)}}$, and $\frac{\left\langle \Phi_{\vec{q}}\Pi_{-\vec{q}} +\Pi_{\vec{q}}\Phi_{-\vec{q}} \right\rangle^{(Mode)}}{ \left\langle \Phi_{\vec{q}}\Pi_{-\vec{q}} +\Pi_{\vec{q}}\Phi_{-\vec{q}} \right\rangle^{(BD)}}$, where $(Mode)$ stands for a correlation function evaluated in our ansatz and $(BD)$ for the same quantity examined in the Bunch-Davies state. Below are plots of all such ratios. Fig. \[phiRatio3q\] shows $\frac{\left\langle\Phi_{\vec{q}}\Phi_{-\vec{q}} \right\rangle^{(Mode)}}{\left\langle\Phi_{\vec{q}}\Phi_{-\vec{q}} \right\rangle^{(BD)}}$ for $q = 1, 10$, and 100. For $q = 1$, corresponding to a mode that is crossing the horizon at $\eta = \eta_0$, the ratio seems to settle well below unity, increasing monotonically until it plateaus for larger $x$ values, meaning that no equilibrium between $\left\langle\Phi_{\vec{q}}\Phi_{-\vec{q}} \right\rangle^{(Mode)}$ and $\left\langle\Phi_{\vec{q}}\Phi_{-\vec{q}} \right\rangle^{(BD)}$ is reached. This is not surprising.
Indeed, since all modes for which $q \in [0,1]$ are essentially frozen we should not expect anything dynamical to happen to their matching correlation functions. Thus it seems clear that for modes crossing the horizon or outside of it, information about the initial state is never lost. As $q$ increases to 10 or even 100, $\frac{\left\langle\Phi_{\vec{q}}\Phi_{-\vec{q}} \right\rangle^{(Mode)}}{\left\langle\Phi_{\vec{q}}\Phi_{-\vec{q}} \right\rangle^{(BD)}}$ is characterized by an undamped oscillatory behavior about $1$ with higher $q$’s having smaller amplitudes. The absence of damping is due to the fact that taking a ratio of $\left\langle\Phi_{\vec{q}}\Phi_{-\vec{q}} \right\rangle$ in different states erases the contributions of the scale factors, as can be seen from Eq. (\[eq:2ptphi\]), hence the red-shifting of the modes due to the expansion of the universe is removed. Additionally, since our state can be viewed as a Bogoliubov transform of the BD state, $|\phi_q(x)|^2$ just oscillates about $|{\cal U}_q(x)|^2$ with constant amplitude. Given the form of $\left\langle\Phi_{\vec{q}}\Phi_{-\vec{q}} \right\rangle$ the same should occur between $\left\langle\Phi_{\vec{q}}\Phi_{-\vec{q}} \right\rangle^{(Mode)}$ and $\left\langle\Phi_{\vec{q}}\Phi_{-\vec{q}} \right\rangle^{(BD)}$. ![The ratio $\frac{\left\langle\Phi_{\vec{q}}\Phi_{-\vec{q}} \right\rangle^{(Mode)}}{\left\langle\Phi_{\vec{q}}\Phi_{-\vec{q}} \right\rangle^{(BD)}}$ for $q = 1, 10,$ and $100$. For $q = 1$ (dotted line) the ratio clearly does not asymptote to $1$, while for $q = 10$ and $q = 100$ (dashed and solid lines, respectively) it oscillates about $1$ without any damping, but with amplitude decreasing with increasing $q$ values. Such fine-grained curves show no equilibration.[]{data-label="phiRatio3q"}](phiRatio3q.pdf){width="5"} The ratio $\frac{\left\langle\Pi_{\vec{q}}\Pi_{-\vec{q}} \right\rangle^{(Mode)}}{\left\langle\Pi_{\vec{q}}\Pi_{-\vec{q}} \right\rangle^{(BD)}}$ in Fig.
\[piRatio3q\] presents similarities with Fig. \[phiRatio3q\] for $q = 10$ and $q = 100$, namely the ratio corresponding to such modes is oscillatory about an equilibrium value of $1$ and undamped. Moreover, for $q = 1$, no approach to unity is observed. ![The ratio $\frac{\left\langle\Pi_{\vec{q}}\Pi_{-\vec{q}} \right\rangle^{(Mode)}}{\left\langle\Pi_{\vec{q}}\Pi_{-\vec{q}} \right\rangle^{(BD)}}$ for $q = 1, 10$, and 100. Similarly to what was observed in Fig. \[phiRatio3q\], it appears the ratio evaluated at $q = 1$ does not approach $1$, while $\frac{\left\langle\Pi_{\vec{q}}\Pi_{-\vec{q}} \right\rangle^{(Mode)}}{\left\langle\Pi_{\vec{q}}\Pi_{-\vec{q}} \right\rangle^{(BD)}}$ taken for $q = 10$ or $q = 100$ oscillates without damping about $1$, with an amplitude that decreases as $q$ becomes higher. As in Fig. \[phiRatio3q\] no signs of equilibration are found.[]{data-label="piRatio3q"}](piRatio3q.pdf){width="5"} The last ratio of correlators, $\frac{\left\langle \Phi_{\vec{q}}\Pi_{-\vec{q}} +\Pi_{\vec{q}}\Phi_{-\vec{q}} \right\rangle^{(Mode)}}{ \left\langle \Phi_{\vec{q}}\Pi_{-\vec{q}} +\Pi_{\vec{q}}\Phi_{-\vec{q}} \right\rangle^{(BD)}}$, is shown in Fig. \[mixRatio3q\]. When $q = 1$, this quantity appears to grow monotonically for all $x$, corroborating the absence of equilibrium for such corresponding modes. The striking feature of Fig. \[mixRatio3q\], manifesting itself when compared to Fig. \[phiRatio3q\] and \[piRatio3q\], is that the ratios tend to oscillate about $1$ for $q > 1$, but now with an amplitude that diminishes as a function of time. The size of the oscillations is now damped linearly, while being constant in the first two plots. Such a disparity is due to the presence of scale factors in (\[eq:2ptmixed\]) that do not cancel upon taking a quotient of two-point functions evaluated in different states.
![The ratio $\frac{\left\langle \Pi_{\vec{q}}\Phi_{-\vec{q}} + \Phi_{\vec{q}}\Pi_{-\vec{q}} \right\rangle^{(Mode)}}{ \left\langle \Pi_{\vec{q}}\Phi_{-\vec{q}} + \Phi_{\vec{q}}\Pi_{-\vec{q}} \right\rangle^{(BD)}}$ for $q = 1, 10$, and $100$. Similar conclusions to the ones in Fig. \[phiRatio3q\] and \[piRatio3q\] can be made, namely, the higher the $q$-mode the smaller the amplitude of oscillations. However, contrary to the previous two figures, the ratios corresponding to $q = 10$ and $100$ oscillate about $1$ with amplitude decreasing linearly.[]{data-label="mixRatio3q"}](mixRatio3q.pdf){width="5"} Despite the described differences found when comparing Fig. \[phiRatio3q\], \[piRatio3q\], and \[mixRatio3q\], we argue that one can draw similar conclusions regarding the lack of equilibration. It is not surprising that correlation functions for modes crossing, near crossing, or outside the horizon do not exhibit equilibration, as such modes freeze out. Moreover the oscillatory behavior shown for higher $q$ modes in Fig. \[phiRatio3q\] and \[piRatio3q\] also does not reflect equilibration. At first glance, the curves in Fig. \[mixRatio3q\] seem to indicate an approach to the BD mode, since all curves approach unity over time. However, $x \rightarrow 1$ corresponds to $t \rightarrow +\infty$ for cosmic time $t$. We feel the slowness of the approach to unity of the curves in Fig. \[mixRatio3q\] leaves us unconvinced that this quantity should be regarded as equilibrating. The $q=1$ and $q=10$ curves clearly do not actually reach unity as $x \rightarrow 1$, and the same is true of the $q=100$ curve, although this is harder to see from the plot. The lack of equilibration of the two-point functions for single modes is hardly the final word. After all one cannot learn about the equilibration of a box of gas by following a single energy eigenstate of the microscopic system, no matter how strongly the equilibration is realized overall. 
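The contrast between a fine-grained, single-mode ratio and a coarse-grained, mode-averaged one can be previewed in closed form for the massless case (our own sketch): the ratio $\left\langle\Phi_{\vec{q}}\Phi_{-\vec{q}}\right\rangle^{(Mode)}/\left\langle\Phi_{\vec{q}}\Phi_{-\vec{q}}\right\rangle^{(BD)}$ reduces to $|\phi_q|^2/|{\cal U}_q|^2$ since the scale factors cancel. The $q$ grid, the window $[10, 30]$ and the evaluation time $x = 0.99$ are arbitrary choices.

```python
import numpy as np

# Single-mode vs q-averaged deviation of |phi_q|^2/|U_q|^2 from unity,
# using the rescaled nu = 3/2 closed-form modes.

def U(q, x):
    z = q * (1 - x)
    return -np.exp(-1j * z) / np.sqrt(2 * q) * (1 - 1j / z)

def dU(q, x):
    z = q * (1 - x)
    return q * np.exp(-1j * z) / np.sqrt(2 * q) * (-1j - 1.0 / z + 1j / z ** 2)

def ratio(q, x):
    # Bogoliubov coefficients fixed by the flat-space initial data at x = 0
    phi0 = 1.0 / np.sqrt(2 * q)
    M = np.array([[U(q, 0), np.conj(U(q, 0))],
                  [dU(q, 0), np.conj(dU(q, 0))]])
    alpha, beta = np.linalg.solve(M, np.array([phi0, (1j * q + 1.0) * phi0]))
    phi = alpha * U(q, x) + beta * np.conj(U(q, x))
    return abs(phi) ** 2 / abs(U(q, x)) ** 2

x_late = 0.99
qs = np.linspace(10.0, 30.0, 201)
ratios = np.array([ratio(q, x_late) for q in qs])

max_single_dev = np.max(np.abs(ratios - 1.0))  # fine-grained: O(1/q)
avg_dev = abs(np.mean(ratios) - 1.0)           # coarse-grained: much smaller
```

The averaged deviation is expected to be far smaller than the worst single-mode one, because the single-mode oscillations enter with rapidly varying phases across the $q$ window and largely cancel.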
We next consider correlation functions averaged over a range of $q$’s, as a way to represent coarse-graining. Although our setup is rather formal, we believe these averaged quantities bring us closer to representing realistic observables. ### Quantities averaged over modes We integrated all our two-point functions over finite ranges of $q$: $[1,3]$, $[3,9]$, and $[10, 30]$. Such domains in $q$ have been chosen to demonstrate the difference between modes that sit inside the horizon (with large wavelengths for $q \in [3, 9]$ or $q$ an order of magnitude away from horizon-crossing for $q \in [10, 30]$) and those that are traversing or near the horizon. We have found that the general behaviors can be identified without including even higher values of $q$. Our ratios then become $\frac{\left\langle\Phi_{\vec{q}}\Phi_{-\vec{q}} \right\rangle^{(Mode)}_{[q_{min}, q_{max}]}}{\left\langle\Phi_{\vec{q}}\Phi_{-\vec{q}} \right\rangle^{(BD)}_{[q_{min}, q_{max}]}}$, $\frac{\left\langle\Pi_{\vec{q}}\Pi_{-\vec{q}} \right\rangle^{(Mode)}_{[q_{min}, q_{max}]}}{\left\langle\Pi_{\vec{q}}\Pi_{-\vec{q}} \right\rangle^{(BD)}_{[q_{min}, q_{max}]}}$, and $\frac{\left\langle \Phi_{\vec{q}}\Pi_{-\vec{q}} +\Pi_{\vec{q}}\Phi_{-\vec{q}} \right\rangle^{(Mode)}_{[q_{min}, q_{max}]}}{ \left\langle \Phi_{\vec{q}}\Pi_{-\vec{q}} +\Pi_{\vec{q}}\Phi_{-\vec{q}} \right\rangle^{(BD)}_{[q_{min}, q_{max}]}},$ where $q_{min}$ is our lower limit of integration and $q_{max}$ our upper limit. As shown in Fig. \[phiRatioInt\], integrating $\left\langle\Phi_{\vec{q}}\Phi_{-\vec{q}} \right\rangle^{(Mode)}_{[q_{min}, q_{max}]}$ and $\left\langle\Phi_{\vec{q}}\Phi_{-\vec{q}} \right\rangle^{(BD)}_{[q_{min}, q_{max}]}$ over $q$ introduces damping in the ratio of the two for mode ranges well within the horizon, while averaging over the near-horizon-crossing range ($q \in [ 1, 3]$) does not.
For the former modes, the ratios oscillate about $1$ but a damping occurs over time such that the two-point functions eventually asymptote to $1$. For $q \in [10, 30]$, the ratio clearly becomes $1$, i.e., equilibration of $\left\langle\Phi_{\vec{q}}\Phi_{-\vec{q}} \right\rangle^{(Mode)}_{[q_{min}, q_{max}]}$ with $\left\langle\Phi_{\vec{q}}\Phi_{-\vec{q}} \right\rangle^{(BD)}_{[q_{min}, q_{max}]}$ is reached. Looking at higher $q$-modes, we observed that the higher the $q$-domain the earlier the equilibration, since such modes have smaller amplitudes. Comparing with Fig. \[phiRatio3q\], we can infer that the damping is due to the integration over $q$-modes. Hence we may conclude that, from the perspective of the field correlation functions, equilibrium is attained for sets of modes that start well inside the horizon, with $q$ of order $10$ and beyond. ![The ratio $\frac{\left\langle\Phi_{\vec{q}}\Phi_{-\vec{q}} \right\rangle^{(Mode)}_{[q_{min}, q_{max}]}}{\left\langle\Phi_{\vec{q}}\Phi_{-\vec{q}} \right\rangle^{(BD)}_{[q_{min}, q_{max}]}}$ integrated for $q \in [1, 3]$, $q \in [3, 9]$, and $q \in [10, 30]$. For $q_{min} = 1$ and $q_{max} = 3$ (dotted line) the ratio clearly deviates from $1$, corroborating the fact that modes near horizon-exit and beyond do not equilibrate. For other domains $[q_{min}, q_{max}]$ (dashed and solid lines) $\frac{\left\langle\Phi_{\vec{q}}\Phi_{-\vec{q}} \right\rangle^{(Mode)}_{[q_{min}, q_{max}]}}{\left\langle\Phi_{\vec{q}}\Phi_{-\vec{q}} \right\rangle^{(BD)}_{[q_{min}, q_{max}]}}$ oscillates about $1$ with a clear damping over time.[]{data-label="phiRatioInt"}](phiRatioInt.pdf){width="5"} We may draw similar conclusions from Fig. \[piRatioInt\] as in Fig. \[phiRatioInt\].
For $q \in [3, 9]$ and $q \in [10,30]$, $\frac{\left\langle\Pi_{\vec{q}}\Pi_{-\vec{q}} \right\rangle^{(Mode)}_{[q_{min}, q_{max}]}}{\left\langle\Pi_{\vec{q}}\Pi_{-\vec{q}} \right\rangle^{(BD)}_{[q_{min}, q_{max}]}}$ appears oscillatory about the equilibrium position and damped. For the former domain, the ratio does not exactly achieve equilibrium but approaches it. It clearly hits $1$ for $q \in [10, 30]$. For $q_{min} = 1$, no equilibration occurs. Therefore, we may conclude that modes such that $q$ is of order $10$ and beyond equilibrate, from the point of view of the momentum correlator. ![The ratio $\frac{\left\langle\Pi_{\vec{q}}\Pi_{-\vec{q}} \right\rangle^{(Mode)}_{[q_{min}, q_{max}]}}{\left\langle\Pi_{\vec{q}}\Pi_{-\vec{q}} \right\rangle^{(BD)}_{[q_{min}, q_{max}]}}$ integrated for $q \in [1, 3]$, $q \in [3, 9]$, and $q \in [10, 30]$. For $q_{min} = 1$ and $q_{max} = 3$ (dotted line) the ratio oscillates about $1$ without damping, while for other ranges $[q_{min}, q_{max}]$, the ratio damps out close to $1$. For $q \in [10, 30]$ (solid line), it clearly achieves $1$ for higher $x$-values. In other words coarse-graining $\frac{\left\langle\Pi_{\vec{q}}\Pi_{-\vec{q}} \right\rangle^{(Mode)}_{[q_{min}, q_{max}]}}{\left\langle\Pi_{\vec{q}}\Pi_{-\vec{q}} \right\rangle^{(BD)}_{[q_{min}, q_{max}]}}$ over such modes results in equilibration of the numerator and denominator.[]{data-label="piRatioInt"}](piRatioInt.pdf){width="5"} In Fig. \[mixRatioInt\], we observe a slight difference in the behavior of the lowest $q$-range modes. For $q \in [1, 3]$, $\frac{\left\langle \Phi_{\vec{q}}\Pi_{-\vec{q}} +\Pi_{\vec{q}}\Phi_{-\vec{q}} \right\rangle^{(Mode)}_{[q_{min}, q_{max}]}}{ \left\langle \Phi_{\vec{q}}\Pi_{-\vec{q}} +\Pi_{\vec{q}}\Phi_{-\vec{q}} \right\rangle^{(BD)}_{[q_{min}, q_{max}]}}$ appears to plateau for $x \geq 0.7$. Nevertheless no approach to unity can be found. This again proves that modes which are crossing or near-crossing the horizon do not equilibrate. 
Plots generated after integrating for $q \in [3, 9]$, and $q \in [10, 30]$, show the same trends as in Fig. \[phiRatioInt\] and \[piRatioInt\]. Such modes approach (for $q_{min} = 3$) equilibrium or equilibrate ($q_{min} = 10$ and higher). ![The ratio $\frac{\left\langle \Phi_{\vec{q}}\Pi_{-\vec{q}} +\Pi_{\vec{q}}\Phi_{-\vec{q}} \right\rangle^{(Mode)}_{[q_{min}, q_{max}]}}{ \left\langle \Phi_{\vec{q}}\Pi_{-\vec{q}} +\Pi_{\vec{q}}\Phi_{-\vec{q}} \right\rangle^{(BD)}_{[q_{min}, q_{max}]}}$ integrated for $q \in [1, 3]$, $q \in [3, 9]$, and $q \in [10, 30]$. For the first domain of $q$ (dotted line), $\left\langle \Phi_{\vec{q}}\Pi_{-\vec{q}} +\Pi_{\vec{q}}\Phi_{-\vec{q}} \right\rangle^{(Mode)}_{[q_{min}, q_{max}]}$ never equilibrates to $\left\langle \Phi_{\vec{q}}\Pi_{-\vec{q}} +\Pi_{\vec{q}}\Phi_{-\vec{q}} \right\rangle^{(BD)}_{[q_{min}, q_{max}]}$. For $q_{min} = 3$ or $10$, the ratio is damped over time, and equilibrium is reached for $q$-modes of order and greater than $10$.[]{data-label="mixRatioInt"}](mixRatioInt.pdf){width="5"} In summary, when we ask whether the correlation functions of our state and the BD state approach one another, the answer seems to be that it depends on which modes are being considered. For those that remain well inside the horizon, we see the tendency of our state to approach the BD one, while for low $q$-modes, this does not occur. Stress-energy tensor -------------------- In terms of our variables $x$ and $q$, Eq. (\[eq:T00Fourier\]) in the massless, minimally coupled case becomes $$\label{eq:T00q} \left\langle T^0_{\phantom{b}0} \right\rangle = \int \frac{d^3q}{(2\pi)^3} \left[ \frac{(1-x)^6}{2} \left\langle\Pi_{\vec{q}}\Pi_{-\vec{q}}\right\rangle + \frac{(1-x)^2}{2}q^2 \left\langle \Phi_{\vec{q}}\Phi_{-\vec{q}}\right\rangle\right].$$ \ Let $\left\langle T^0_{\phantom{b}0} \right\rangle_q$ be the integrand of Eq. (\[eq:T00q\]). Fig.
\[t00Ratio3q\] represents $\frac{\left\langle T^0_{\phantom{b}0} \right\rangle_q^{(Mode)}}{\left\langle T^0_{\phantom{b}0} \right\rangle_q^{(BD)}}$ for $q = 1, 10$ and 100. As seen when analyzing two-point functions, the ratio settles away from $1$ when $q = 1$ and exhibits an oscillatory behavior about 1 for the other $q$-modes. However, contrary to our previous observations, the oscillations are characterized by an amplitude that increases as a function of $x$. This is rather puzzling. Indeed, as discussed in section \[subsec:correlators\], our state should be fully described by the two-point functions. $\left\langle T^0_{\phantom{b}0} \right\rangle_q$ itself is a function of two of them, in the massless and minimally coupled case. Thus we should expect to draw the same conclusions as in section \[subsec:correlators\]. Note, however, that our conclusions about equilibration as perceived from the correlators originated after integrating them over $q$. This suggests that we should adopt the same approach here. ![The ratio $\frac{\left\langle T^0_{\hspace{5 pt} 0} \right\rangle_q^{(Mode)}}{\left\langle T^0_{\hspace{5 pt} 0} \right\rangle_q^{(BD)}}$ for $q = 1$, $q = 10$ and $q = 100$. For $q = 1$, the ratio settles down well below the equilibrium position, while for $q = 10$ and $100$, it oscillates with increasing amplitude about $1$. None of the curves present equilibration, similar to what was observed for other fine-grained quantities. []{data-label="t00Ratio3q"}](t00Ratio3q.pdf){width="5"} ![The stress-energy tensor $\left\langle T^0_{\hspace{5 pt} 0} \right\rangle^{(Mode)}_{[q_{min}, q_{max}]}$ corresponding to $\left\langle T^0_{\hspace{5 pt} 0} \right\rangle_q^{(Mode)}$ integrated between $q = 1$ and $q = 50$. The expectation value monotonically approaches $0$ as $x$ increases.[]{data-label="t00Int"}](t00Int.pdf){width="4"} In Fig. \[t00Int\] one can observe $\left\langle T^0_{\phantom{b}0} \right\rangle_q^{(Mode)}$ integrated between $q = 1$ and $q = 50$.
Let us label it $\left\langle T^0_{\phantom{b}0} \right\rangle^{(Mode)}_{[q_{min}, q_{max}]}$. The distinctive feature of the plot is the fact that the integrated expectation value of the stress-energy tensor decreases monotonically as a function of $x$, and eventually reaches $0$. Given that $\left\langle\Phi_{\vec{q}}\Phi_{-\vec{q}}\right\rangle$ falls off as a function of $x$, and so does $1-x$ (representing the scale factor in our rescaled equations) for $x \in [0,1)$, such a behavior makes sense from the point of view of those quantities. The correlator $\left\langle\Pi_{\vec{q}}\Pi_{-\vec{q}}\right\rangle$, however, rises as a function of $x$. Since it does so as $(1-x)^{-2}$, the presence of $(1-x)^{6}$ in front of $\left\langle\Pi_{\vec{q}}\Pi_{-\vec{q}}\right\rangle$ in Eq. (\[eq:T00q\]) is responsible for an overall decrease. In other words, the expansion of the universe takes care of any diverging behavior in $\left\langle\Pi_{\vec{q}}\Pi_{-\vec{q}}\right\rangle$. Looking at $\left\langle T^0_{\phantom{b}0} \right\rangle_q$ mode by mode, the same declining trend was found, regardless of the chosen vacuum state. Equilibrium, however, was so far considered from the point of view of ratios of functions evaluated in our state to those evaluated in the Bunch-Davies state. ![The ratio $\frac{\left\langle T^0_{\hspace{5 pt} 0} \right\rangle^{(Mode)}_{[q_{min}, q_{max}]}}{\left\langle T^0_{\hspace{5 pt} 0} \right\rangle^{(BD)}_{[q_{min}, q_{max}]}}$ obtained using $q_{min} = 1$ and $q_{max} = 50$ as our limits of integration. The ratio seems to oscillate about $1.0004$ with an amplitude that decreases up to $x = 0.5$ but keeps increasing afterward.[]{data-label="t00RatioInt"}](t00RatioInt.pdf){width="4"} The quantity $\frac{\left\langle T^0_{\phantom{b}0} \right\rangle^{(Mode)}_{[q_{min}, q_{max}]}}{\left\langle T^0_{\phantom{b}0} \right\rangle^{(BD)}_{[q_{min}, q_{max}]}}$ is plotted in Fig. \[t00RatioInt\].
Again, the limits of integration were chosen to be $q_{min} = 1$ and $q_{max} = 50$. The ratio appears to oscillate with damping until about $x = 0.5$, but the amplitude of oscillations keeps increasing afterwards until very late times. This is quite unexpected as, from the point of view of the two-point functions, the ratios appeared damped monotonically with increasing $x$ values, after coarse-graining. One could now ask why we used such low limits of integration. Using higher limits resulted in jaggedness in the plots coming from the higher frequency modes in the integral, which are difficult to integrate numerically. Since we are dealing with Hankel functions, themselves highly oscillatory, it is not astonishing that numerical integrators will have difficulty handling them. The fact that a lower step size in $x$ modified the observed jaggedness backs this hypothesis up. Smaller steps in $x$, however, mean greater computing time. Analyzing plots with $q_{max} > 50$ revealed that the equilibrium position of oscillations in $\frac{\left\langle T^0_{\phantom{b}0} \right\rangle^{(Mode)}_{[q_{min}, q_{max}]}}{\left\langle T^0_{\phantom{b}0} \right\rangle^{(BD)}_{[q_{min}, q_{max}]}}$ would decrease to become closer to $1$. Given the problems encountered after integrating numerically, we chose to use Riemann sums of $\left\langle T^0_{\phantom{b}0} \right\rangle_q$ instead of integrals. A step size in $q$ of order unity seemed appropriate and sufficient to draw our conclusions. Fig. \[t00SumMaxVar\] shows $\frac{\sum\limits_{q=1}^{q_{max}}\left\langle T^0_{\phantom{b}0} \right\rangle_q^{(Mode)}}{\sum\limits_{q=1}^{q_{max}}\left\langle T^0_{\phantom{b}0} \right\rangle_q^{(BD)}}$ for $q_{max} = 50$, 250, and 500, focusing on late times ($x \in [0.900, 0.999]$). From the plots one can infer that the higher the $q_{max}$ the lower the amplitude of oscillations of our ratio.
Additionally, the equilibrium position of the latter gets arbitrarily near 1 as $q_{max}$ increases. For a value $q_{max} =500$, the ratio appears to remain at 1, up to 5 digits. This corroborates the fact that some equilibration process occurs as long as $q$-modes of order 100 and more are included. One last important characteristic that can be found in the figure is that the quotient does not seem to diverge at very late times but flattens out. ![The quantity $\frac{\sum\limits_{q=1}^{q_{max}}\left\langle T^0_{\hspace{5 pt} 0} \right\rangle_q^{(Mode)}}{\sum\limits_{q=1}^{q_{max}}\left\langle T^0_{\hspace{5 pt} 0} \right\rangle_q^{(BD)}}$ for $q_{max} = 50$, 250, and 500, for $x \in [0.900, 0.999]$. Higher $q_{max}$ values correspond to lower amplitude of oscillations of the quotient, and an equilibrium position closer to 1. For $q_{max} = 500$, the ratio is indistinguishable from 1 up to four decimal places.[]{data-label="t00SumMaxVar"}](t00RiemSumRatioMin1MaxVarVLate.pdf){width="5"} Another path to consider is changing our lower limit $q_{min}$ in the integration, $q_{min} = 1$ corresponding to modes exiting the horizon. As seen in section \[subsec:correlators\], correlators characterizing such modes behave much differently than those for higher $q$ values. Fig. \[t00SumMinVar\] shows $\frac{\sum\limits_{q=q_{min}}^{500}\left\langle T^0_{\phantom{b}0} \right\rangle_q^{(Mode)}}{\sum\limits_{q=q_{min}}^{500}\left\langle T^0_{\phantom{b}0} \right\rangle_q^{(BD)}}$ for $q_{min} = 1$, 50, and 250. Similar conclusions as the ones obtained in Fig. \[t00SumMaxVar\] can be drawn (except from the perspective of $q_{min}$), namely the higher the value of $q_{min}$, the closer the central value about which oscillations occur is to unity. Also, the amplitude decreases with higher $q_{min}$’s.
![The ratio $\frac{\sum\limits_{q=q_{min}}^{500}\left\langle T^0_{\hspace{5 pt} 0} \right\rangle_q^{(Mode)}}{\sum\limits_{q=q_{min}}^{500}\left\langle T^0_{\hspace{5 pt} 0} \right\rangle_q^{(BD)}}$ for $q_{min} = 1$, 50, and 250, for $x \in [0.900, 0.999]$. The equilibrium position of the quotients and their amplitude of oscillations go down as $q_{min}$ rises.[]{data-label="t00SumMinVar"}](t00RiemSumRatioMinVarMax500VLate.pdf){width="5"} The stress-energy tensor seemed at first to be telling a slightly different story about the equilibration of our state. However, focusing on the late-time behavior of $\frac{\left\langle T^0_{\phantom{b}0} \right\rangle^{(Mode)}_{[q_{min}, q_{max}]}}{\left\langle T^0_{\phantom{b}0} \right\rangle^{(BD)}_{[q_{min}, q_{max}]}}$, and/or changing the domain of $q$-values summed over, reconciles the conclusions drawn from our different quantities. Indeed, lower $q$-modes (of order unity) cross the horizon at earlier values of $x$ and have a much larger amplitude of oscillations compared to modes with $q$-values one or more orders of magnitude higher. Given our relatively low values of $q_{max}$, any departure from equilibrium would be mostly washed out by increasing $q_{max}$ by a factor of 10 and using proper integration techniques. Nevertheless, one should still expect the late-time increase to be observed with increasing precision, even though it would decrease in amplitude. A reasonable explanation lies in the horizon exit of the modes at different times. Such modes are still accounted for in our sums at late times and are responsible for the seemingly anomalous rise of $\frac{\left\langle T^0_{\phantom{b}0} \right\rangle^{(Mode)}_{[q_{min}, q_{max}]}}{\left\langle T^0_{\phantom{b}0} \right\rangle^{(BD)}_{[q_{min}, q_{max}]}}$. Hence, the effects observed in Fig. \[t00RatioInt\] should be attributed to the limited precision, the bounds in $q$ of our numerical calculation, and horizon-crossing effects.
Conclusions =========== To check the approach to equilibrium of any given system, the system has to be disturbed from the putative equilibrium state. Then its relaxation, or lack thereof, back to the original state can be studied. This is what we did here: we disturbed de Sitter space by attaching to it a flat space segment for conformal times $\eta \leq \eta_0$, and considered what happened to the quantum state of a test scalar field in this geometry. The claim being tested is that this state should relax to the “thermal” Bunch-Davies state. The simplicity of the system allowed us to fully characterize the state by its various two-point functions, and we used the ratios of these two-point functions in the disturbed state to their values in the BD state as our diagnostics of equilibration. We also used the stress-energy tensor as a check on whether our state evolved to the BD one. What we found was that if we considered these quantities mode by mode, there was no evidence of equilibration. Coarse-grained quantities (integrated over a range of $q$-modes) did show evidence of equilibration in cases where the modes were well within the horizon. The importance of coarse-graining in our analysis is thus no different than it is in more familiar equilibrating systems. Integrating our quantities over momentum modes that remained inside a horizon defined by $\eta_0$, we saw that they did equilibrate, and the smaller the wavelengths of our modes, the earlier the equilibration. Particular attention was given to the analysis of the stress-energy tensor which, initially, presented disparities when compared to the other observables. The range of horizon-exit times of our modes, the necessity to restrict our domain in $q$-space, and the finite precision of our numerical integrations account for these differences. While our results corroborate the attractor behavior of the Bunch-Davies state, no notion of thermality was discussed.
The Bunch-Davies state, however, is considered thermal [@BirrellDavies1982], based on the calculation of how an Unruh-DeWitt detector responds when the field is placed in such a state. This ambiguity will be investigated, and hopefully resolved, in future work [@AlbrechtRichardHolman2014], in which we address the thermality of our state after developing a different way to calculate the response rate of an Unruh-DeWitt detector. We thank McCullen Sandora for useful discussions. R. H. was supported in part by the Department of Energy under grant DE-FG03-91-ER40682 as well as the John Templeton Foundation. He also thanks the Department of Physics at UC Davis for hospitality while this work was in progress. A. A. and B. R. were supported in part by DOE Grants DE-FG02-91ER40674 and DE-FG03-91ER40674 and the National Science Foundation under Grant No. PHY11-25915.
--- abstract: 'We predict the possibility to generate a finite stationary spin current by applying an unbiased ac driving to a quasi-one-dimensional asymmetric periodic structure with Rashba spin-orbit interaction and strong dissipation. We show that, for a finite coupling strength between the orbital degrees of freedom, the electron dynamics at low temperatures exhibits a pure spin ratchet behavior, [*i.e.*]{} a finite spin current and the absence of charge transport in spatially asymmetric structures. It is also found that the equilibrium spin currents are not destroyed by the presence of strong dissipation.' author: - 'Sergey Smirnov,$^1$ Dario Bercioux,$^{1,2}$ Milena Grifoni,$^1$ and Klaus Richter$^1$' title: Quantum dissipative Rashba spin ratchets --- An opportunity to induce a net stationary particle current by unbiased external forces applied to a quantum dissipative one-dimensional (1D) periodic structure arises when the system does not possess a center of inversion in real space [@Reimann]. Then the particle transport occurs due to the ratchet effect and the device works as a Brownian motor [@Astumian]. In the deep quantum regime the charge ratchet effect can only be achieved when at least the two lowest Bloch bands contribute to transport [@Grifoni]. Recently a new research field of condensed matter physics, spintronics, has emerged. One of its central issues is how to generate pure spin currents (SC) in paramagnetic systems due to only spin-orbit interactions and without applied magnetic fields. Rashba spin-orbit interaction (RSOI) [@Rashba] represents one of the possible tools to reach this goal, since the spin-orbit coupling strength can be externally controlled by a gate voltage. One way to get pure SC is via the intrinsic spin Hall effect [@Murakami; @Sinova] expected in high-mobility two-dimensional semiconductor systems with RSOI [@Wunderlich]. Such pure SC were experimentally detected through the reciprocal spin-Hall effect in Ref.
[@Valenzuela]. An alternative is to induce pure SC through absorption of polarized light [@Zhou]. The generation of pure SC by coherent spin rectifiers [@Scheid] has been discussed only recently for a finite-size setup with RSOI. However, the presence of dissipation has not been considered up to now. ![\[figure\_1\] (Color online) A schematic picture of the isolated asymmetric periodic quasi-1D structure described by the Hamiltonian (\[isolated\_hamiltonian\]). In the center of the quasi-1D wire the periodic potential is weaker and gets stronger closer to the edges. Thus the electron group velocity is higher in the central region and tails off away from the center.](figure_1){width="5.8"} In this letter we address the challenging task of how to implement devices which can work both as Brownian charge and spin motors. Here a natural and also principal question for spintronics arises: Is it possible to switch a device working as a charge ratchet to a pure spin ratchet mode where the charge current (CC) is completely blocked? As mentioned above, when in a dissipative system without RSOI transport is restricted to only one Bloch band, the charge ratchet mechanism does not exist [@Grifoni]. Whether the same effect takes place in a dissipative system with RSOI is an open and non-trivial question. In fact, the Rashba Hamiltonian is not invariant under reflection of the transport direction. Thus the Rashba Hamiltonian itself already has a built-in spatial asymmetry which, due to the spin-orbit coupling, can be further mixed with the symmetry or asymmetry of the periodic potential. The presence of dissipation additionally increases the complexity of the problem, because the influence of a dissipative environment on the orbital motion alters, through RSOI, the spin dynamics.
In this work we focus on the moderate-to-strong dissipation case and address how to implement a device which, under the influence of unbiased external ac driving, yields a finite stationary spin current and at the same time blocks the directed stationary charge transport. To concretize our idea of a Brownian spin motor we consider a dissipative periodic system with RSOI and show that the spin-orbit interaction alone is not enough to produce SC: the system must additionally lack spatial symmetry and its orbital degrees of freedom must be coupled. The full Hamiltonian of our problem is $\hat{H}_{\text{full}}(t)=\hat{H}+\hat{H}_{\text{ext}}(t)+\hat{H}_\text{bath}$, where $\hat{H}$ is the Hamiltonian of the isolated periodic system, $\hat{H}_{\text{ext}}(t)$ describes an external driving, and $\hat{H}_\text{bath}$ is responsible for dissipative processes. The isolated quasi-1D periodic system is formed in a two-dimensional electron gas (2DEG) with RSOI using a periodic potential along the $x$-axis and a harmonic confinement along the $z$-axis: $$\begin{split} \hat{H}\!\!=\!\!\frac{\hbar^2\hat{{{\bfk}}}^2}{2m}\!+\!\frac{m\omega_0^2\hat{z}^2}{2}\!-\! \frac{\hbar^2k_{\text{so}}}{m}\bigl(\hat{\sigma}_{x}\hat{k}_z-\hat{\sigma}_z\hat{k}_x\bigr)+U_\gamma(\hat{x},\hat{z}), \end{split} \label{isolated_hamiltonian}$$ where $U_\gamma(\hat{x},\hat{z})=U(\hat{x})(1+\gamma\hat{z}^2/L^2)$, $\hat{{{\bfk}}}$ is related to the momentum operator as $\hat{{{\bfp}}}=\hbar\hat{{{\bfk}}}$, $\omega_0$ is the harmonic confinement strength, $k_\text{so}$ the spin-orbit coupling strength, $U(\hat{x})$ the periodic potential with period $L$, and $\gamma\geqslant 0$ the orbit-orbit coupling strength. This isolated structure is sketched in Fig. \[figure\_1\] as it could be realized by appropriate gate evaporation techniques applied to 2DEGs formed in III-V compounds. The periodic structure is subject to an external homogeneous time-dependent electric field, ${{\bfE}}(t)\equiv E(t)\hat{e}_x$.
It can be experimentally implemented using, for example, linearly polarized light. This yields $\hat{H}_{\text{ext}}=eE(t)\hat{x}$, where $e$ is the elementary charge. We use the time dependence $eE(t)\equiv F\cos(\Omega(t-t_0))$, which is unbiased. The system is also coupled to a thermal bath. We assume the transverse confinement to be strong enough that the probabilities of direct bath-excited transitions between the transverse modes are negligibly small. Thus the environment couples to the electronic degrees of freedom only through $\hat{x}$. Furthermore, in the spirit of the Caldeira-Leggett model [@Caldeira], we consider a harmonic bath with bilinear system-bath coupling. The dynamical quantities of interest are the ratchet charge and spin currents $J_\text{C,S}(t)$ given as the statistical average of the longitudinal charge and spin current operators, $J_\text{C,S}(t)\equiv\text{Tr}[\hat{J}_\text{C,S}\hat{\rho}(t)]$, where $\hat{\rho}(t)$ is the reduced statistical operator of the system, that is, the full one with the bath degrees of freedom traced out. The CC operator is $\hat{J}_\text{C}(t)=-ed\hat{x}/dt$ and for the SC operator we use the definition suggested in Ref. [@Shi], $\hat{J}_\text{S}(t)=d\bigl(\hat{\sigma}_z\hat{x}\bigr)/dt$. It is convenient to calculate the traces using the basis which diagonalizes both $\hat{x}$ and $\hat{\sigma}_z$, because this requires determining only the diagonal elements of the reduced density matrix. As shown in Ref. [@Smirnov], for a periodic system with RSOI the energy spectrum can be derived from the corresponding truly 1D problem without RSOI. This leads to so-called Bloch sub-bands. The 2DEG is assumed to be sufficiently dilute to neglect the Pauli exclusion principle in the temperature range of our problem. The upper limit of this temperature range is considered to be low enough that only the lowest Bloch sub-bands are populated.
The basis which diagonalizes $\hat{x}$ and $\hat{\sigma}_z$ becomes in this case discrete. The total number of the Bloch sub-bands is equal to the product of the number, $N_\text{B}$, of the lowest Bloch bands from the corresponding truly 1D problem without RSOI, the number, $N_\text{t}$, of the lowest transverse modes and the number of spin states. In this work we shall use the model with $N_\text{B}=1$, $N_\text{t}=2$. The total number of the Bloch sub-bands in our problem is thus equal to four. Using $N_\text{B}=1$ we also assume that the external field is weak enough and does not excite electrons to higher Bloch bands. The representation in terms of the eigen-states of $\hat{x}$ for a model with discrete $x$-values is called discrete variable representation (DVR) [@Grifoni; @Harris]. Let us call $\sigma$-DVR the representation in which both the coordinate and spin operators are diagonal. Denoting the $\sigma$-DVR basis states as $\{|\alpha\rangle\}$ and eigen-values of $\hat{x}$ and $\hat{\sigma}_z$ in a state $|\alpha\rangle$ by $x_\alpha$ and $\sigma_\alpha$, respectively, the CC and SC are rewritten as $J_\text{C}(t)=-e\sum_\alpha x_\alpha\dot{P}_\alpha(t)$ and $J_\text{S}(t)=\sum_\alpha\sigma_\alpha x_\alpha\dot{P}_\alpha(t)$, where $P_\alpha(t)\equiv\langle\alpha|\hat{\rho}(t)|\alpha\rangle$ is the population of the $\sigma$-DVR state $|\alpha\rangle$ at time $t$. We are interested in the long time limit $\bar{J}^\infty_\text{C,S}$ of the currents $\bar{J}_\text{C,S}(t)$, averaged over the driving period $2\pi/\Omega$. The advantage of working in the $\sigma$-DVR basis is that real-time path integral techniques can be used to trace out exactly the bath degrees of freedom [@Grifoni_1; @Weiss]. 
Moreover, at driving frequencies larger than the ones characterizing the internal dynamics of the quasi-1D system coupled to the bath, the averaged populations $\bar{P}_\alpha(t)$ can be found from the master equation, $$\dot{\bar{P}}_\alpha(t)=\sum_{\beta,(\beta\neq\alpha)}\bar{\Gamma}_{\alpha\beta}\bar{P}_\beta(t)- \sum_{\beta,(\beta\neq\alpha)}\bar{\Gamma}_{\beta\alpha}\bar{P}_\alpha(t), \label{averaged_master_equation}$$ valid at long times. In Eq. (\[averaged\_master\_equation\]) $\bar{\Gamma}_{\alpha\beta}$ is an averaged transition rate from the state $|\beta\rangle$ to the state $|\alpha\rangle$. The first task is thus to identify the $\sigma$-DVR basis. The eigen-states $|l,k_\text{B},j,\sigma\rangle$ of $\hat{\sigma}_z$ were found in [@Smirnov] for the case $\gamma=0$. The results obtained in [@Smirnov] are straightforwardly generalized to our model since for $N_\text{t}=2$ the operator $\hat{z}^2$ (and any even power of $\hat{z}$) is effectively diagonal. The quantum numbers $l$, $k_\text{B}$, $j$, $\sigma$ stand for the Bloch band index, quasi-momentum, transverse mode index and $z$-projection of the spin, respectively. As mentioned above $l=1$, $j=0,1$. One further finds $$\begin{split} &\langle l',k_\text{B}',j',\sigma'|\hat{x}|l,k_\text{B},j,\sigma\rangle=\\ &=\delta_{j',j}\delta_{\sigma',\sigma}\;\; {_j}\langle l',k_\text{B}'+\sigma k_\text{so}|\hat{x}|l,k_\text{B}+\sigma k_\text{so}\rangle_j, \end{split} \label{x_l_kb_j_sigma}$$ where the index $j$ under the bra- and ket-symbols indicates that the corresponding electronic states are obtained using the periodic potential $U_{\gamma,j}(x)\equiv U(x)[1+\gamma\hbar(j+1/2)/m\omega_0 L^2]$. For a fixed value of $j$ the diagonal blocks in Eq. (\[x\_l\_kb\_j\_sigma\]) are unitary equivalent and thus the eigen-values of $\hat{x}$ do not depend on $\sigma$. 
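As an illustration of how Eq. (\[averaged\_master\_equation\]) fixes the long-time populations, the stationary $\bar{P}_\alpha$ can be obtained as the normalized null vector of the generator built from the transition rates. The rates below are arbitrary placeholders for a four-state example, not the physical $\bar{\Gamma}_{\alpha\beta}$ of Eq. (\[transition\_rate\]):

```python
import numpy as np

# placeholder averaged rates: G[a, b] is the transition rate from state b to state a
G = np.array([[0.00, 0.20, 0.10, 0.05],
              [0.30, 0.00, 0.15, 0.10],
              [0.05, 0.25, 0.00, 0.20],
              [0.10, 0.05, 0.30, 0.00]])

# generator of the averaged master equation: dP/dt = L @ P
# (gain term G @ P minus loss term, so each column of L sums to zero)
L = G - np.diag(G.sum(axis=0))

# stationary populations: eigenvector of L with eigenvalue 0, normalized to sum 1
w, v = np.linalg.eig(L)
P_inf = np.real(v[:, np.argmin(np.abs(w))])
P_inf /= P_inf.sum()
```

In the stationary state $\dot{\bar{P}}_\alpha = 0$, so gain and loss balance for every state; the currents then follow from the net hopping between neighboring sites rather than from $\dot{\bar{P}}_\alpha$ itself.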
The eigen-values of the matrix ${_j}\langle l',k_\text{B}'|\hat{x}|l,k_\text{B}\rangle_j$ are analytically found and have the form $x_{\zeta,m,j}=mL+d_{\zeta,j}$, where $m=0,\pm1,\pm2\ldots$, $\zeta=1,2,\ldots,N_\text{B}$ and the eigen-values $d_{\zeta,j}$ are distributed within one elementary cell. Thus one can label the eigen-states of $\hat{x}$ as $|\zeta,m,j,\sigma\rangle$. The corresponding eigen-values are $x_{\zeta,m,j,\sigma}=x_{\zeta,m,j}$. We see that the $\sigma$-DVR basis states $|\alpha\rangle$ introduced above are just the $|\zeta,m,j,\sigma\rangle$ states, that is $\{|\alpha\rangle\}\equiv\{|\zeta,m,j,\sigma\rangle\}$. To calculate CC and SC we use the tight-binding approximation assuming that the matrix elements $\langle\zeta',m',j',\sigma'|\hat{H}|\zeta,m,j,\sigma\rangle$ with $|m'-m|>1$ are negligibly small. Let us introduce the definitions for the states $|m,\xi\rangle\equiv|\zeta=1,m,\xi\rangle$ where $\{\xi\}=\{(j,\sigma)\}$ and $\xi=1\Leftrightarrow(0,1)$, $\xi=2\Leftrightarrow(0,-1)$, $\xi=3\Leftrightarrow(1,1)$, $\xi=4\Leftrightarrow(1,-1)$. Correspondingly, we introduce hopping matrix elements $\Delta_{\xi',\xi}^{m',m}\equiv\langle m',\xi'|\hat{H}|m,\xi\rangle$ ($m'\neq m$ and/or $\xi'\neq\xi$) and on-site energies $\varepsilon_\xi\equiv\langle m,\xi|\hat{H}|m,\xi\rangle$. Due to the harmonic confinement and RSOI the system is split into two channels: one with $\xi=1,4$ and another with $\xi=2,3$. The two channels are independent of each other, that is, transitions between them are forbidden. This picture is general and valid for an arbitrary number of the transverse modes. For clarity we below only consider the channel with $\xi=1,4$. Two independent channels were also found for a different type of confinement in Ref. [@Perroni]. Assuming that the hopping matrix elements are small enough we can use the second-order approximation [@Grifoni] for the averaged transition rates in Eq. (\[averaged\_master\_equation\]). 
We have $$\begin{split} &\bar{\Gamma}_{\xi'\!,\xi}^{m'\!,m}\!\!\!\!=\!\! \frac{|\Delta_{\xi'\!,\xi}^{m'\!,m}|^2}{\hbar^2} \!\!\!\!\int_{-\infty}^{\infty}\!\!\!\!\!\!\!\!\!d\tau\!\exp\!\biggl[\!-\frac{(x_{m,\xi}\!-\!x_{m',\xi'})^2}{\hbar} Q[\tau,\!J(\omega)]+\\ &+\text{i}\frac{\varepsilon_\xi-\varepsilon_{\xi'}}{\hbar}\tau\biggl] J_0\biggl[\frac{2F(x_{m,\xi}-x_{m',\xi'})}{\hbar\Omega}\sin\biggl(\frac{\Omega\tau}{2}\biggl)\biggl], \end{split} \label{transition_rate}$$ where $x_{m,\xi}\equiv x_{\zeta=1,m,\xi}=mL+d_\xi$ with $d_\xi\equiv d_{1,j}$. In Eq. (\[transition\_rate\]) $J_0(x)$ denotes the zero-order Bessel function and $Q[\tau,J(\omega)]$ is the twice integrated bath correlation function being a function of time $\tau$ and a functional of the bath spectral density $J(\omega)$ [@Grifoni; @Weiss]. The dependence of the transition rates on the orbit-orbit coupling $\gamma$ comes from two sources. The first one is the Bloch amplitudes and the second is the difference $\Delta d\equiv d_{1,0}-d_{1,1}$. In a tight-binding model the periodic potential is strong and thus $\Delta d$ can be made less than all the relevant length scales, $\Delta d/l_\text{r}\ll 1$, where $l_\text{r}=\text{min}[L,\:\sqrt{\hbar/m\omega_0},\:\hbar\Omega/F]$. Hence the main effect of the orbit-orbit coupling on $\bar{\Gamma}_{\xi',\xi}^{m',m}$ comes only through the Bloch amplitudes, and we neglect terms of order $\mathcal{O}(\Delta d/l_\text{r})$. 
We then arrive at the main results of our work, the absence of the charge transport, $\bar{J}_\text{C}^\infty=0$, and the expression for the non-equilibrium spin current (NESC), $\bar{J}^\infty_\text{n-e,S}\equiv \bar{J}_\text{S}^\infty-\bar{J}_\text{e,S}^\infty$: $$\begin{split} \bar{J}^\infty_\text{n-e,S}=&-2L\biggl(\frac{I_{14}I_{41}}{I_{14}+I_{41}}- \frac{I^{(0)}_{14}I^{(0)}_{41}}{I^{(0)}_{14}+I^{(0)}_{41}}\biggl)\frac{k_\text{so}^2\hbar^3\omega_0}{m}\times\\ &\times\sum_{k_\text{B},k_\text{B}'}\sin[(k_\text{B}-k_\text{B}')L]\text{Im}[\mathcal{F}_{k_\text{B},k_\text{B}'}], \end{split} \label{stationary_averaged_spin_current_b}$$ where $I_{\xi',\xi}$, $I^{(0)}_{\xi',\xi}$ are the integrals from (\[transition\_rate\]) with and without driving, $F\neq 0$ and $F=0$, respectively, and $$\begin{split} \mathcal{F}_{k_\text{B},k_\text{B}'}&\equiv u_{\gamma,0;1,k_\text{B}+k_\text{so}}^\text{DVR}(d_{1,0}) u_{\gamma,1;1,k_\text{B}'-k_\text{so}}^\text{DVR}(d_{1,1})\times\\ &\times[u_{\gamma,1;1,k_\text{B}-k_\text{so}}^\text{DVR}(d_{1,1})u_{\gamma,0;1,k_\text{B}'+k_\text{so}}^\text{DVR}(d_{1,0})]^*, \end{split} \label{F_function}$$ where $u_{\gamma,j;1,k_\text{B}}^\text{DVR}(d_{1,j})$ is the DVR Bloch amplitude of the first band for electrons in the periodic potential $U_{\gamma,j}(x)$. In Eq. (\[stationary\_averaged\_spin\_current\_b\]) we have eliminated from $\bar{J}_\text{S}^\infty$ the equilibrium spin current (ESC), $\bar{J}_\text{e,S}^\infty$, following Ref. [@Rashba_1]. The fact that the ESC turns out to be finite shows that the definition of SC suggested in Ref. [@Shi] does not automatically eliminate the presence of ESC. However, as pointed out in Ref. [@Shi], this current really vanishes in insulators. This can be seen from Eq. (\[stationary\_averaged\_spin\_current\_b\]). 
When the potential is strong, electrons are localized, the dependence of the function $\mathcal{F}_{k_\text{B},k_\text{B}'}$ on the quasi-momentum disappears, and as a result both the ESC and the NESC vanish. This reasonable result is ensured by the spin current definition taking proper care of the spin torque. It is interesting to note that ESCs are present even in a system with strong dissipation. As recently proposed in Ref. [@Sonin], ESCs can effectively be measured using a Rashba medium deposited on a flexible substrate playing the role of a mechanical cantilever. ![\[figure\_2\] (Color online) Non-equilibrium spin current, $\bar{J}^\infty_\text{n-e,S}$, as a function of the amplitude, $F$, of the driving force for different values of the viscosity coefficient $\eta$. Temperature $k_\text{Boltz.}T=0.5$, spin-orbit coupling strength $k_\text{so}L=\pi/2$, orbit-orbit coupling strength $\gamma=0.1$, driving frequency $\Omega=1$. The inset displays the shape of the periodic potential.](figure_2){width="6.4"} We can determine the conditions under which the SC is finite. First of all, from Eq. (\[stationary\_averaged\_spin\_current\_b\]) it follows that the spin-orbit coupling must be finite, [*i.e.*]{} $k_\text{so}\neq 0$. Further, from Eq. (\[F\_function\]) one observes that when $\gamma=0$ the Bloch amplitudes do not depend on $j$, $u_{\gamma=0,j;1,k_\text{B}}^\text{DVR}(d_{1,j})\equiv u_{1,k_\text{B}}^\text{DVR}(d_{1})$, and since $[u_{1,k_\text{B}}^\text{DVR}(d_{1})]^*=u_{1,-k_\text{B}}^\text{DVR}(d_{1})$ (time-reversal symmetry), the function $\mathcal{F}_{k_\text{B},k_\text{B}'}$ becomes even with respect to its arguments. Then from Eq. (\[stationary\_averaged\_spin\_current\_b\]) one gets zero SC. Thus the second condition is the presence of the orbit-orbit coupling.
Finally, since for a symmetric periodic potential the Bloch amplitudes are real functions, we conclude that the function $\mathcal{F}_{k_\text{B},k_\text{B}'}$ is also real in this case, that is $\text{Im}[\mathcal{F}_{k_\text{B},k_\text{B}'}]=0$. As a result, the third condition is the presence of spatial asymmetry. Below we present corresponding numerical results. All energies and frequencies are given in units of $\hbar\omega_0$ and $\omega_0$, respectively. The parameters are taken for an InGaAs/InP quantum wire: $\hbar\omega_0=0.9$ meV; $\alpha\equiv\hbar^2k_\text{so}/m=9.94\cdot10^{-12}$ eV$\cdot$m; $m=0.037m_0$. For $k_\text{so}L=\pi/2$ one gets $L=0.32$ $\mu\text{m}$. The dependence of the NESC on the amplitude of the external driving is shown in Fig. \[figure\_2\] for the asymmetric periodic potential (see inset) $U(x)=\sum_{n=0}^2 V_n\cos(2\pi nx/L-\phi_n)$ with $V_0=4$, $V_1=-V_0$, $V_2=3.89$, $\phi_0=\phi_2=0.0$, $\phi_1=1.9$. The gap between the Bloch bands with $l=1$ and $l=2$ is $\Delta E_{12}\thickapprox 10.5$. In Fig. \[figure\_2\], $FL,\,\hbar\Omega<\Delta E_{12}$, so that the numerical results are consistent with the assumptions of the theoretical model. As an example we have used an Ohmic bath with the spectral density $J(\omega)=\eta\omega\exp(-\omega/\omega_c)$, where the viscosity coefficient (in units of $m\omega_0$) is $\eta=0.25,\;0.5,\;0.75$, and the cutoff frequency is $\omega_c=10$. As can be seen, the NESC oscillates; however, the oscillation amplitude goes down as the driving increases. Physically, such behavior can be attributed to an effective renormalization of the band structure in a high-frequency electric field [@Grifoni_1]. The group velocity decreases in a non-monotonic way which, due to RSOI, slows down the spin kinetics. For increasing values of $\eta$ the dissipation-induced decoherence in the system gets more pronounced.
The system becomes more classical and the tunneling processes are thus suppressed. This leads to the reduction of the spin current observed in Fig. \[figure\_2\]. ![\[figure\_3\] (Color online) Non-equilibrium spin current, $\bar{J}^\infty_\text{n-e,S}$, as a function of the spin-orbit coupling strength, $k_\text{so}$, for different values of the orbit-orbit coupling strength, $\gamma$. The driving amplitude and viscosity coefficient are $F=2\hbar\omega_0/L$, $\eta=0.5$. The other parameters are as in Fig. \[figure\_2\].](figure_3){width="6.4"} In Fig. \[figure\_3\] the NESC is plotted versus $k_\text{so}L$, with $\gamma$ playing the role of a parameter. The oscillations of the NESC have minima located at $nG/2$, where $n=0,1,2,\ldots$, and $G$ is the reciprocal lattice vector. Physically this reflects the fact that for those values of $k_\text{so}$ the Rashba splitting becomes minimal due to the periodicity of the energy spectrum in ${{\bfk}}$-space. The magnitude of these oscillations decreases with decreasing orbit-orbit coupling, and the current vanishes for $\gamma=0$. In summary, we have studied stationary quantum transport in a driven dissipative periodic quasi-one-dimensional system with Rashba spin-orbit interaction and orbit-orbit coupling. The spin ratchet effect has been investigated and an analytical expression for the spin current has been derived and analyzed. This analysis has revealed that for the case of moderate-to-strong dissipation the necessary conditions for non-vanishing spin currents are the spatial asymmetry of the periodic potential as well as a finite strength of the spin-orbit interaction and orbit-orbit coupling. It has been demonstrated that equilibrium spin currents can exist in a dissipative system. Our numerical calculations have shown characteristic oscillations of the spin current as a function of the amplitude of the driving force and of the spin-orbit coupling strength.
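The spatial-asymmetry condition can be checked explicitly for the potential used in Fig. \[figure\_2\]: $U(x)=\sum_{n=0}^2 V_n\cos(2\pi nx/L-\phi_n)$ has a mirror center $x_0$ only if the odd parts of both harmonics vanish there simultaneously, which the quoted phases $\phi_1=1.9$, $\phi_2=0$ rule out. A small numerical check (with $L=1$; the grid sizes are our choice):

```python
import math

V = [4.0, -4.0, 3.89]   # V_0, V_1 = -V_0, V_2 in units of hbar*omega_0
phi = [0.0, 1.9, 0.0]   # phi_0, phi_1, phi_2
Lp = 1.0                # period, set to 1 for this check

def U(x):
    """Asymmetric periodic potential of the numerical example."""
    return sum(V[n] * math.cos(2 * math.pi * n * x / Lp - phi[n]) for n in range(3))

def asymmetry(x0, n=400):
    """Max of |U(x0 + x) - U(x0 - x)| over one period: zero iff x0 is a mirror center."""
    return max(abs(U(x0 + i * Lp / n) - U(x0 - i * Lp / n)) for i in range(n))

# scan candidate mirror centers across one period; a symmetric potential would give 0
min_asym = min(asymmetry(i * Lp / 400) for i in range(400))
```

Since `min_asym` stays well above zero, no choice of origin makes the potential reflection-symmetric, which is exactly the third condition for a finite NESC.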
Finally, we note that, since the spin current has an in-plane polarization, it can be efficiently measured by a magneto-optic Kerr microscope using the cleaved-edge technology suggested recently in Ref. [@Kotissek]. We thank J. Peguiron for useful discussions. Support from the DFG under the program SFB 689 is acknowledged.
--- author: - | M. Unverzagt[^1], P. Aguar-Bartolomé, J. Ahrens, J.R.M. Annand, H.J. Arends, R. Beck, V. Bekrenev, B. Boillat, A. Braghieri, D. Branford, W.J. Briscoe, J.W. Brudvik, S. Cherepnya, R. Codling, E.J. Downie, L.V. Fil’kov, D.I. Glazier, R. Gregor, E. Heid, D. Hornidge, O. Jahn, V.L. Kashevarov, R. Kondratiev, M. Korolija, M. Kotulla, D. Krambrich, B. Krusche, M. Lang, V. Lisin, K. Livingston, S. Lugert, I.J.D. MacGregor, D.M. Manley, M. Martinez-Fabregate, J.C. McGeorge, D. Mekterovic, V. Metag, B.M.K. Nefkens, A. Nikolaev, R. Novotny, R.O. Owens, P. Pedroni, A. Polonski, S.N. Prakhov, J.W. Price, G. Rosner, M. Rost, T. Rostomyan, S. Schumann, D. Sober, A. Starostin, I. Supek, C.M. Tarbert, A. Thomas, Th. Walcher, D.P. Watts, F. Zehr\ \ (Crystal Ball at MAMI, TAPS and A2 Collaborations) date: 'Received: date / Revised version: date' title: 'Determination of the Dalitz plot parameter $\alpha$ for the decay $\eta \rightarrow 3\pi^0$ with the Crystal Ball at MAMI-B' --- Introduction {#hdintro} ============ The $\eta \rightarrow 3\pi^0$ decay violates isospin symmetry. It therefore offers a unique possibility to study symmetries and symmetry-breaking characteristics of the strong interaction. Because electromagnetic contributions to the amplitude can be neglected [@Sut66; @Bau96; @Dit08], this decay occurs due to the isospin-breaking part of the QCD Hamiltonian: $$\mathcal{H}_{\not{\:\mathrm{I}}} = \frac{1}{2}(m_{\mathrm{u}}-m_{\mathrm{d}})(\bar{\mathrm{u}}\mathrm{u} - \bar{\mathrm{d}}\mathrm{d}). \label{eqIsoBreak}$$ The amplitude is therefore proportional to the mass difference $m_\mathrm{u} - m_\mathrm{d}$ of the two lightest quarks u and d. Calculations of the decay amplitude are usually based on the framework of Chiral Perturbation Theory ($\chi$PT).
The leading-order term $\mathcal{O}(p^2)$ of the momentum expansion yields the constant amplitude [@Osb70]: $$A(\eta \rightarrow 3\pi^0) = \frac{B_0 (m_\mathrm{u}-m_\mathrm{d})}{\sqrt{3}F_{\pi}^2} \sim (m_\mathrm{u}-m_\mathrm{d}), \label{eqAmpp2}$$ where $B_0$ and the pion decay constant $F_\pi$ are low-energy constants of $\chi$PT. There is further theoretical work determining the amplitude in the second [@Gas85] and the third [@Bij07] chiral order, both of which include loop diagrams describing final-state rescatterings of the pions. Higher-order rescattering effects are examined by using dispersion methods [@Kam96; @Ani96] or the unitarised $\chi$PT approach (UCHPT) [@Bor05] based on the Bethe-Salpeter equation. The squared absolute value of the decay amplitude may be expanded around the centre of the Dalitz plot [@PDG08], and Bose symmetry dictates the form: $$|A(\eta \rightarrow 3\pi^0)|^2 = |N|^2 (1 + 2 \alpha z + \ldots). \label{eqAmpExpand}$$ $N$ is a normalisation constant, equal to the amplitude that would apply if the decay proceeded purely according to the available phase space. The Dalitz plot parameter, $\alpha$, describes the pion-energy dependence of the squared absolute value of the decay amplitude up to first order of the expansion. The parameter $z$ is given by [@PDG08] $$z = 6 \sum_{i=1}^3 \left( \frac{E_i-m_{\eta}/3}{m_{\eta}-3m_{\pi^0}}\right)^2 = \frac{\rho^2}{\rho_{max}^2}. \label{eqz}$$ Here $E_i$ denotes the pion energies in the $\eta$ rest frame and $\rho$ is the distance from the centre to a point in the Dalitz plot. $\rho_{max}$ is the maximum value of $\rho$. $z$ varies from $z=0$, where all three pions have the same energy $E_i = m_{\eta}/3$, to $z=1$, where one of the pions is at rest. Determining the Dalitz plot parameter thus offers a direct test of the different theoretical calculations based on $\chi$PT (see table \[tbtheo\]). [lcc]{} Calculation & Refs.
& $\alpha$\
$\chi$PT $\mathcal{O}(p^2)$ & [@Bij02] & 0\
$\chi$PT $\mathcal{O}(p^4)$ & [@Bij02] & $0.015$\
$\chi$PT $\mathcal{O}(p^6)$ & [@Bij07] & $0.013 \pm 0.032$\
Dispersion & [@Kam96] & $-0.007 \ldots -0.014$\
UCHPT & [@Bor05] & $-0.031 \pm 0.003$\
In lowest order of $\chi$PT there is no final-state interaction between the three pions. Thus the amplitude is constant [@Osb70] and $\alpha = 0$ [@Bij02]. With the amplitude including one-loop diagrams [@Gas85], Bijnens [@Bij02] calculated $\alpha = 0.015$. Including two-loop contributions, Bijnens and Ghorbani [@Bij07] obtain $\alpha = 0.013 \pm 0.032$. Contrary to the experimental results summarised in table \[tbexp\], all calculations based on $\chi$PT alone predict a positive value. Using instead dispersion relations [@Kam96] in combination with extended Khuri-Treiman equations [@Khu60] gives the result $-0.014 \leq \alpha \leq -0.007$, depending on the input parameters of the calculation. The unitarised $\chi$PT approach [@Bor05] based on the Bethe-Salpeter equation constrains its free parameters with known branching ratios and $\pi \pi$ scattering phases from [@Eid04]. The fit yields $\alpha = -0.031 \pm 0.003$, in agreement within one standard deviation with the high-statistics experimental results of the Crystal Ball at BNL [@Tip01] and KLOE [@Amb07] collaborations. Table \[tbexp\] summarises the experimental world data set for the Dalitz plot parameter, $\alpha$. The Crystal Ball collaboration [@Tip01] measured $\alpha = -0.031 \pm 0.004$ from a sample of $9.5 \cdot 10^5$ events obtained at BNL, while KLOE [@Amb07] found $\alpha = -0.027 \pm 0.004^{+0.004}_{-0.006}$ with $6.5 \cdot 10^5$ events. All other experiments listed in table \[tbexp\] collected no more than $1.2 \cdot 10^5$ $\eta \rightarrow 3\pi^0$ events.
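For concreteness, the Dalitz variable $z$ of Eq. (\[eqz\]) and the leading-order density $1 + 2\alpha z$ of Eq. (\[eqAmpExpand\]) can be evaluated directly. The masses are the quoted PDG values in MeV; the function names are ours:

```python
M_ETA, M_PI0 = 547.862, 134.977  # PDG masses in MeV

def dalitz_z(energies):
    """z of Eq. (eqz) from the three pi0 energies in the eta rest frame (MeV)."""
    assert abs(sum(energies) - M_ETA) < 1e-9  # energy conservation
    return 6.0 * sum(((E - M_ETA / 3.0) / (M_ETA - 3.0 * M_PI0)) ** 2
                     for E in energies)

def dalitz_density(z, alpha=-0.031):
    """Leading-order Dalitz plot density |A|^2 / |N|^2 = 1 + 2*alpha*z."""
    return 1.0 + 2.0 * alpha * z

# centre of the Dalitz plot: all pions share the energy m_eta/3, so z = 0
z_center = dalitz_z([M_ETA / 3.0] * 3)

# edge: one pion at rest; z = 1 follows algebraically, independent of the mass values
z_edge = dalitz_z([M_PI0, (M_ETA - M_PI0) / 2.0, (M_ETA - M_PI0) / 2.0])
```

With a negative $\alpha$, the density is depleted toward the edge of the Dalitz plot ($z \to 1$) relative to pure phase space, which is the effect the listed experiments measure.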
The PDG [@PDG08] currently quotes $\alpha = -0.031 \pm 0.004$, but this value is dominated by the result of the Crystal Ball collaboration, since the latest publication of the KLOE collaboration has not been included.

  Experiment & Refs. & $\alpha$\
  Crystal Ball at BNL & [@Tip01] & $-0.031 \pm 0.004$\
  KLOE & [@Amb07] & $-0.027 \pm 0.004^{+0.004}_{-0.006}$\
  GAMS-2000 & [@Ald84] & $-0.022 \pm 0.023$\
  Crystal Barrel & [@Abe98] & $-0.052 \pm 0.017 \pm 0.010$\
  SND & [@Ach01] & $-0.010 \pm 0.021 \pm 0.010$\
  CELSIUS/WASA & [@Bas07] & $-0.026 \pm 0.010 \pm 0.010$\
  WASA at COSY & [@Ado08] & $-0.027 \pm 0.008 \pm 0.005$\

When proposing the measurement of the $\eta \rightarrow 3\pi^0$ decay at MAMI, only the result of one high-statistics experiment, the Crystal Ball at BNL, had been published. This lack of precise data led to the conclusion that a new high-statistics measurement of the Dalitz plot parameter was needed, again using the Crystal Ball, but with the $\eta$-photoproduction mechanism instead of the hadronic pion beam used at BNL. The need for another precise measurement of the Dalitz plot parameter was emphasised even more when the KLOE collaboration announced their preliminary result $\alpha = -0.014 \pm 0.004_{\mathrm{stat}} \pm 0.005_{\mathrm{syst}}$ [@Cap05], which showed a large discrepancy with the BNL value. The KLOE collaboration at DA$\Phi$NE used $\mathrm{e}^+ \mathrm{e}^- \rightarrow \phi \rightarrow \eta \gamma$ as the production reaction. Although the revised KLOE result [@Amb07] agrees with the Crystal Ball at BNL value within the given errors, the fundamental importance of the Dalitz plot parameter still requires further experimental input. This paper describes an analysis of tagged photon experiments carried out at the electron accelerator MAMI-B (Mainz Microtron B) [@Her83; @Wal90] in the years 2004 and 2005 within the Crystal Ball/TAPS collaboration [@Unv08].
Data from two different experiments with the CB/TAPS setup, which differed in the trigger conditions and the tagged photon energy range, were analysed. The first was dedicated to the neutral decays of the $\eta$ meson and especially to the rare $\eta$ decays. The second investigated radiative $\pi^0$-photoproduction. Experimental setup {#hdsetup} ================== The Dalitz plot parameter, $\alpha$, was determined from $3\pi^0$ decays of $\eta$ mesons produced in the $\gamma \mathrm{p} \rightarrow \eta \mathrm{p}$ reaction. The photons were emitted as bremsstrahlung by 883MeV electrons from the MAMI-B [@Her83; @Wal90] accelerator. The electrons were separated from the photons and momentum analysed by the Glasgow tagged photon spectrometer [@Ant91; @Hal96] at Mainz, with an energy resolution of approximately 2MeV. The photon energies were determined by energy conservation; the maximum photon flux above the $\eta$ threshold was roughly $1 \cdot 10^5 \gamma/(\mathrm{s\;MeV})$. ![The Crystal Ball/TAPS setup.[]{data-label="psSim"}](cball.eps) The 4.8cm long liquid hydrogen target was located at the centre of the spherical Crystal Ball (CB) [@Ore82; @Sta01] photon spectrometer. The CB consisted of 672 optically insulated NaI(Tl) crystals, each read out by an individual photomultiplier. It covered the full azimuthal angle range for polar angles between 20$^\circ$ and 160$^\circ$. Each crystal had the shape of a 41cm (15.7 radiation lengths) long truncated pyramid pointing towards the centre of the CB. Electromagnetic showers were measured with an energy resolution of $\sigma/E_\gamma = 0.02/(E_\gamma /\mathrm{GeV})^{1/4}$. The resolution in the polar angle $\sigma_\theta$ was $2^\circ$ to $3^\circ$, while for the azimuthal angle it was $\sigma_\phi = \sigma_\theta / \sin \theta$. The target at the centre of the CB was surrounded by a thin particle identification detector (PID) [@Wat04] to register charged particles hitting the CB.
It consisted of 24 optically isolated plastic scintillator bars, each 30cm long and 2mm thick, arranged parallel to the beam axis, so that each covered $15^\circ$ of the azimuthal angle range. The forward angles between $\theta = 4^\circ$ and $\theta = 20^\circ$ were covered by TAPS [@Nov91; @Gab94], consisting of 510 BaF$_2$ crystals arranged as a wall, preceded by a single layer of 510 veto plastic scintillators, each 5mm thick. The CB and TAPS together covered roughly 97% of the total solid angle (fig. \[psSim\]). TAPS was positioned 173cm downstream of the centre of the CB, giving the opportunity for an efficient time-of-flight analysis for particle identification. Each of the hexagonally shaped BaF$_2$ crystals had an inner diameter of 5.9cm and a length of 25cm, which corresponds to approximately 12 radiation lengths. The shower energy resolution was $\sigma /E_\gamma = 0.0079/(E_\gamma /\mathrm{GeV})^{0.5}+0.018$. The angular resolution for 300MeV photons was $0.7^\circ$ full width at half maximum (FWHM). The experiment trigger comprised two conditions. First, the total sum of the CB photomultiplier analogue signals had to exceed a threshold that corresponded to approximately 390MeV for one experiment and 60MeV for the other. Secondly, the sector multiplicity in the CB and TAPS had to be greater than 2. Here up to 16 adjacent crystals made up a sector in the CB, giving 45 hardwired sectors. In TAPS each sector was one quarter of the wall. If at least one of the signals in a sector exceeded a threshold of 20 to 40MeV, depending on the relative calibration of the photomultiplier signals, it contributed to the multiplicity. The determination of the Dalitz plot parameter requires the elimination of the phase space contribution to the amplitude. Therefore, a Monte Carlo simulation of the experiment was produced with $\alpha = 0$.
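The quoted geometric coverage can be checked with elementary solid-angle arithmetic: for full azimuthal coverage between polar angles $\theta_1$ and $\theta_2$, the covered fraction of $4\pi$ is $(\cos\theta_1-\cos\theta_2)/2$. A minimal sketch, using the angular ranges given in the text and ignoring gaps and dead material between the detectors:

```python
import math

def solid_angle_fraction(theta1_deg, theta2_deg):
    """Fraction of 4*pi covered by full azimuth between two polar angles."""
    t1, t2 = math.radians(theta1_deg), math.radians(theta2_deg)
    return 0.5 * (math.cos(t1) - math.cos(t2))

cb = solid_angle_fraction(20.0, 160.0)    # Crystal Ball alone
taps = solid_angle_fraction(4.0, 20.0)    # TAPS forward wall
print(round(cb, 3), round(cb + taps, 3))  # -> 0.94 0.969
```

The combined value of about 0.97 matches the "roughly 97%" of the total solid angle quoted above.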
For the analysis described in this paper $100 \cdot 10^6$ $\eta \rightarrow 3\pi^0$ events were generated and the particle propagation simulated with the GEANT v3.21 software. These events were analysed in the same way as the experimental data, resulting in an average reconstruction efficiency of $\varepsilon_{\eta \rightarrow 3\pi^0} \approx 23\,\%$. Data analysis {#hdanalysis} ============= The Dalitz plot parameter, $\alpha$, for the decay $\eta \rightarrow 3\pi^0$ was calculated from the analysis of the reaction $$\gamma \mathrm{p} \rightarrow \eta \mathrm{p} \rightarrow 3\pi^0 \mathrm{p} \rightarrow 6\gamma \mathrm{p}.$$ The first step of the event selection demanded six clusters in the CB and TAPS identified as photons, ignoring the other particle types like protons, charged pions and neutrons. With the CB and the PID, clusters were identified as charged particles by checking the agreement of the azimuthal angles of CB clusters with the $\phi$ angle of the hit PID elements. Photons had no corresponding PID hit. The charged particles were then divided into protons and charged pions by comparing the energy of the clusters with the energy deposit in the PID scintillator. Neutrons could not be separated from photons in the CB. In TAPS photons, protons, charged pions and neutrons could be identified using information from the veto detectors, comparing the time of flight with the energy deposition in the BaF$_2$ crystals and analysing the pulse-shape of the BaF$_2$ signals. A cluster was formed by a group of adjacent crystals, which had registered parts of the electromagnetic shower initiated by an incoming particle. But a crystal could only contribute to a cluster, if its energy deposit exceeded 2MeV in the CB and 4MeV in TAPS. The energy of the cluster was calculated by the sum of the energies of all contributing crystals. 
The cluster direction was determined by the weighted sum of the directions of the contributing crystals, using the square root of the energies as weights. Only clusters with a total energy of 20MeV or higher were used in the analysis. ![image](ESum.eps)![image](NMult.eps) ![image](TotalConf.eps)![image](MPhot.eps) ![image](MissMass.eps)![image](MPi0.eps) The next step in the analysis of the simulated data was to implement a software trigger system. This was crucial, since the electronics and photomultipliers of the PID in the exit region of the CB screened TAPS from particles emerging from the target. Some photons were converted into electron-positron pairs in this material and subsequently identified as charged particles in TAPS. Protons lost part of their energy in this inactive material. But these effects could be simulated precisely, as discussed in [@Unv08]. Figure \[psTrig\] shows the agreement between the experimental and simulated trigger conditions for selected $\eta \rightarrow 3\pi^0$ events. Note that a sector in the trigger did not directly correspond to a cluster in the calorimeters. Trigger sectors had a much tighter restriction due to the high thresholds on single crystals compared to the total energy threshold on the clusters. Therefore, more than half of the selected events had a multiplicity lower than six. In the further analysis of the experimental and the simulated data, the reaction hypothesis $\eta \rightarrow 3\pi^0 \rightarrow 6\gamma$ was tested with a kinematic fitting technique [@Ave91]. The measured energies $E$ and the $\theta$ and $\phi$ angles were used as input parameters to a fit with five constraints: the invariant and the missing masses of the six identified photons had to be equal to the masses of the $\eta$ meson and the proton, respectively, and the invariant masses of each of the three photon pairs had to give the $\pi^0$ mass. All 15 possible combinations to form three pairs from six photons were tested in separate fits.
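The combinatorics quoted here is easy to verify: the number of ways to partition six photons into three unordered pairs is $6!/(2^3\,3!) = 15$. A minimal enumeration (illustrative only, not the actual fitting code) is:

```python
def pairings(items):
    """Yield all partitions of an even-sized list into unordered pairs."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for i, partner in enumerate(rest):
        remaining = rest[:i] + rest[i + 1:]
        for sub in pairings(remaining):
            yield [(first, partner)] + sub

combos = list(pairings(list(range(6))))
print(len(combos))  # -> 15
```

Fixing the first photon and choosing its partner (5 ways), then pairing the remaining four (3 ways), gives the same count, $5 \cdot 3 = 15$.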
Events in which at least one fit had a confidence level (CL) higher than 2% were considered as $\eta \rightarrow 3\pi^0$ events. This cut was chosen to reject most of the background, but lose only a small fraction of events of interest. The adjusted photon energies and angles from the fit with the highest CL were used to calculate the Dalitz plot parameter, $\alpha$. The resolutions of the three variables $E$, $\theta$ and $\phi$ used in the fits were determined by a Monte Carlo simulation of the CB/TAPS-setup. Photons of different energies were generated emerging isotropically from the centre of the CB and the detector response was simulated in GEANT. The resolutions $\sigma_E$, $\sigma_\theta$ and $\sigma_\phi$ were then obtained by comparing the reconstructed variables with the initially generated values. Since the events were selected by cutting on the CL of the fits, it was important that the simulated CL distribution agreed with the measured distribution. Then cuts made at different confidence levels removed the same fraction of events from the measured and the simulated samples. Figure \[psFit\] shows good agreement between the two distributions (top left). The rise at low confidence levels was produced by events where parts of the electromagnetic shower leaked out of the detector, such as when clusters were located at the edge of the CB. ![image](WQeta.eps)![image](WQpi0.eps) In addition, good agreement between the experiment and the simulation had to be achieved in the distributions of the variables used as constraints in the kinematic fit. This is also illustrated in fig \[psFit\]. Here the invariant mass of the six photons, calculated using the directly measured energies and momenta, is shown for the events selected by the kinematic fit. There is very good agreement between the two distributions, which have maxima at 547.7MeV and widths of $\sigma \approx 16$MeV. The missing mass of the measured photon 4-momenta is shown in the lower left plot. 
Both curves have a maximum at 936.7MeV and a width of $\sigma = 18.2$MeV, indicating that the chosen events satisfied this constraint. The $\pi^0$ mass spectrum is shown in the lower right plot, where the invariant masses of the three photon pairs from the fit with the highest CL are shown. The two distributions with peaks at 134.4MeV and widths of $\sigma \approx 7$MeV agree within 4%. As a further check of the analysis and the simulation, total cross sections for $\eta$- and $\pi^0$-photoproduction were determined and compared to previous experiments and model fits to experimental data. Figure \[psWQ\] shows these cross sections for the energy range of 700 to 820MeV. The total $\eta$ cross section is compared to the measurement carried out with TAPS at MAMI [@Kru95] and the Eta-MAID model fit [@Chi02], which is dominated by this TAPS experiment in the threshold region. It is clearly seen that the results obtained agree with both, the TAPS data and the Eta-MAID calculation. Figure \[psWQ\] (right) shows the total $\pi^0$-photoproduction cross section as determined using an analysis similar to that described above. The only difference was that just two constraints, namely the invariant and the missing mass of two identified photons, were used. The $\pi^0$ cross section shows good agreement with the MAID2007 [@Dre99; @Dre07] and SAID [@Arn96] models. The agreement is also very good in the region of the $\mathrm{\Delta}$ resonance, which is not shown here. Although these comparisons indicate that the selected event sample was almost free of background, possible contaminating background reactions were considered. In the examined energy region the main background contribution comes from the direct $3\pi^0$-production through the reaction $\gamma \mathrm{p} \rightarrow 3\pi^0 \mathrm{p}$. 
To study the fraction of such background events in the selected $\eta$ sample $10^7$ $3\pi^0$ events were simulated and analysed, giving a total reconstruction efficiency $\varepsilon_{3\pi^0} \approx 5\,\%$. The background contribution was calculated using the estimate of the total cross section made in [@Jun05], which resulted in $\sigma (\gamma \mathrm{p} \rightarrow 3\pi^0 \mathrm{p}) \approx 0.4\,\mu\mathrm{b}$ for photon beam energies $E_\gamma < 1100$MeV. It was assumed to be constant over the examined energy region. The contamination was then estimated to be $$\frac{N_{3\pi^0}}{N_{\eta \rightarrow 3\pi^0}} = \frac{\sigma (\gamma \mathrm{p} \rightarrow 3\pi^0 \mathrm{p})}{\bar{\sigma}_{\eta} \cdot \mathrm{BR}(\eta \rightarrow 3\pi^0)} \cdot \frac{\varepsilon_{3\pi^0}}{\varepsilon_{\eta \rightarrow 3\pi^0}} \approx 2\,\%,$$ where $\varepsilon_{3\pi^0}$ and $\varepsilon_{\eta \rightarrow 3\pi^0}$ are the reconstruction efficiencies for the direct $3\pi^0$-production and the $\eta \rightarrow 3\pi^0$ decay, respectively, and BR($\eta \rightarrow 3\pi^0$) is the branching ratio for the given decay. The total $\eta$-photoproduction cross section was averaged over the observed photon beam energy range of 700MeV to 820MeV, resulting in $\bar{\sigma}_\eta \approx 14\,\mu\mathrm{b}$. The 2% contribution is much smaller than the estimated statistical and systematic uncertainties (see section \[hdresults\]), which were determined to be of the order of 5 to 10% each. Therefore, it was neglected. Another background process, namely double $\pi^0$-production with two cluster split-offs, was also found to be negligible. Other background processes were kinematically not possible due to the restricted tagged photon energy range of $E_\gamma \leq 820$MeV. 
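The contamination estimate can be reproduced from the numbers given in the text. In the sketch below, cross sections are in μb; the $\eta \rightarrow 3\pi^0$ branching ratio of about 32.6% is an assumed PDG input, not a number quoted in this paper.

```python
sigma_3pi0 = 0.4     # mu b, direct gamma p -> 3 pi0 p [@Jun05]
sigma_eta = 14.0     # mu b, averaged eta photoproduction cross section
br_eta_3pi0 = 0.326  # assumed branching ratio for eta -> 3 pi0 (PDG)
eff_3pi0 = 0.05      # reconstruction efficiency, direct 3 pi0 production
eff_eta = 0.23       # reconstruction efficiency, eta -> 3 pi0

contamination = (sigma_3pi0 / (sigma_eta * br_eta_3pi0)) * (eff_3pi0 / eff_eta)
print(round(100 * contamination, 1))  # -> 1.9, i.e. roughly 2%
```

The result of about 2% confirms that this background is well below the 5 to 10% statistical and systematic uncertainties and can safely be neglected.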
Results {#hdresults} ======= Dalitz plot parameter {#hddalitz} --------------------- The Dalitz plot parameter, $\alpha$, for the $\eta \rightarrow 3\pi^0$ decay was determined by comparing the simulated with the measured $z$ distribution. The events were selected by kinematic fits testing the hypothesis $\gamma \mathrm{p} \rightarrow \eta \mathrm{p} \rightarrow 3\pi^0 \mathrm{p}$ at the 2% CL. The simulation was based on pure phase-space distributions with $\alpha = 0$. The dashed line in fig. \[psZ\] (top) shows the generated $z$ distribution for full detector acceptance and 100% detection efficiency. The solid line illustrates that realistic acceptance and efficiency already introduce a slope to the simulated distribution, thereby affecting the result for the Dalitz plot parameter. From the distribution of the difference between reconstructed and initially generated $z$ values, the resolution of the variable $z$ was found to be $\sigma_z \approx 4.5\,\%$; thus, a bin width of 0.05 was chosen for the $z$ distribution. ![The $z$ distribution from the $\eta \rightarrow 3\pi^0$ decay events, selected by the kinematic fit. *Top*: Comparison of the simulated distributions for complete detector acceptance and 100% detection efficiency (dashed line) and for a realistic simulation (solid line). *Bottom*: Comparison of the distributions reconstructed from the realistic simulation (same solid line as in the upper picture) and the measured data (grey shaded).[]{data-label="psZ"}](ZVergleich.eps "fig:") ![The $z$ distribution from the $\eta \rightarrow 3\pi^0$ decay events, selected by the kinematic fit. *Top*: Comparison of the simulated distributions for complete detector acceptance and 100% detection efficiency (dashed line) and for a realistic simulation (solid line).
*Bottom*: Comparison of the distributions reconstructed from the realistic simulation (same solid line as in the upper picture) and the measured data (grey shaded).[]{data-label="psZ"}](ZEta6Eta.eps "fig:") In order to determine the Dalitz plot parameter, the experimental $z$ distribution was divided by the simulated $z$ distribution. This ratio $R(z)$ was then fitted with the function $c(1+2 \alpha z)$, according to eq. \[eqAmpExpand\], using $c$ and $\alpha$ as free parameters in the fit. The deviation of $R(z)$ from pure phase space is then reflected in the value of $\alpha$. This deviation is illustrated in the lower picture of fig. \[psZ\]. Here the simulated $z$ distribution was normalised with $c$. The ratio $R(z)$, also scaled with $c$, and the line fits for the two experiments are presented in fig. \[psSlope\]. The upper limit of the fit region was fixed at $z=0.9$, because the last two bins had much poorer statistics and showed systematic deviations from a straight line. This effect occurred due to slightly different resolutions used in the kinematic fit for the experimental and the simulated data. These had to be applied to match the experimental and the simulated CL distributions. As the cut on the CL was the major restriction in the presented analysis, agreement in the CL distributions was desired. So, the last two bins were excluded by limiting the fit region to $z < 0.9$. The systematic effect of these two bins on the Dalitz plot parameter was examined in a test, fitting a line to the region $0 < z <1$ (see section \[hdsyst\]). The two experiments differed in the CB energy sum trigger threshold of $E_{\mathrm{thr}}^1 \approx 390$MeV and $E_{\mathrm{thr}}^2 \approx 60$MeV, and the tagged photon energies, 680MeV$\leq E_{\gamma}^1 \leq$820MeV compared with 200MeV$\leq~E_{\gamma}^2~\leq$820MeV. The results of the standard analysis described above for these two experiments, $\alpha_1$ and $\alpha_2$, are listed in the first line of table \[tbalpha\]. 
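The extraction step itself is a plain slope fit: the measured $z$ histogram is divided bin by bin by the phase-space Monte Carlo histogram and the ratio is fitted with $c(1+2\alpha z)$. A schematic version with `numpy` is shown below; the histograms are synthetic, generated with an assumed true $\alpha$, and stand in for the experimental and simulated distributions.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha_true = -0.032
z = np.arange(0.025, 0.9, 0.05)           # bin centres, fit region z < 0.9
mc = np.full_like(z, 1.0e5)               # phase-space MC histogram (alpha = 0)
data = mc * (1.0 + 2.0 * alpha_true * z)  # "measured" histogram
data = rng.poisson(data).astype(float)    # Poisson fluctuations

ratio = data / mc
# Least-squares fit of ratio = c + (2*c*alpha)*z, so alpha = slope / (2*c).
slope, c = np.polyfit(z, ratio, 1)
alpha_fit = slope / (2.0 * c)
print(round(alpha_fit, 3))                # close to -0.032
```

Because the simulated histogram already contains the acceptance-induced slope mentioned above, dividing by it isolates the physical deviation from phase space in $\alpha$.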
The experimental statistics and the reduced $\chi^2$-values ($\chi^2$/ndf) derived from the fits to the $z$ distribution are also given. The listed statistical uncertainties were taken from the errors of the fits. The results from the two experiments agree very well. Systematic uncertainty {#hdsyst} ---------------------- The systematic uncertainties were estimated in a series of tests, varying only one analysis parameter at a time. The results from all tests for both experiments are listed in table \[tbalpha\]. For each test the difference compared to the standard analysis is indicated. The first test demanded detection of exactly one proton in addition to the six photons as an event selection criterion ($6\gamma$+1p), requiring accepted events to have a detector cluster structure consistent with the $\gamma \mathrm{p} \rightarrow \eta \mathrm{p} \rightarrow 3\pi^0 \mathrm{p}$ reaction. The results of this test show, within one standard deviation, good agreement with the standard values for both experiments. ![Ratio $R(z)$ of the simulated and the measured $z$ distributions for the two experiments. The Dalitz plot parameter, $\alpha$, was obtained directly from the fit of the function $c(1+2 \alpha z)$. In the figure, $R(z)$ has been scaled with $c$ so that the fitted lines have intercepts equal to 1 on the $R(z)$-axis. *Top*: For the $\eta$ decays experiment. *Bottom*: For the radiative $\pi^0$-production experiment.[]{data-label="psSlope"}](SlopeEta.eps "fig:") ![Ratio $R(z)$ of the simulated and the measured $z$ distributions for the two experiments. The Dalitz plot parameter, $\alpha$, was obtained directly from the fit of the function $c(1+2 \alpha z)$. In the figure, $R(z)$ has been scaled with $c$ so that the fitted lines have intercepts equal to 1 on the $R(z)$-axis. *Top*: For the $\eta$ decays experiment. 
*Bottom*: For the radiative $\pi^0$-production experiment.[]{data-label="psSlope"}](SlopeMDM.eps "fig:") Then there was a series of tests in which only the cut on the CL was varied between 1% and 50%. For the standard analysis it was set to 2%. With the cut at the lower value of 1%, the effect of relaxing the acceptance criteria, and possibly including some events from background processes, was investigated. It showed small deviations compared to the standard analysis. Raising the cut to 5%, 10%, 20% and 50% led to a higher purity of the chosen event sample. Most results show slight differences compared to the standard value. Cuts on high confidence levels led to larger deviations. But, since at the same time the statistical significance decreased, the results are still within one standard deviation of the standard analysis.

   & Test & $\alpha_1$ & $N_1$ / $10^3$ & $\chi^2$/ndf & $\alpha_2$ & $N_2$ / $10^3$ & $\chi^2$/ndf\
  1 & Standard & $-0.0315 \pm 0.0020$ & 1120 & 20/16 & $-0.0324 \pm 0.0024$ & 724 & 10/16\
  2 & 6$\gamma$+1p & $-0.0323 \pm 0.0033$ & 430 & 15/16 & $-0.0357 \pm 0.0035$ & 322 & 18/16\
  3 & CL=1 & $-0.0308 \pm 0.0020$ & 1177 & 20/16 & $-0.0331 \pm 0.0023$ & 759 & 9/16\
  4 & CL=5 & $-0.0327 \pm 0.0021$ & 1022 & 22/16 & $-0.0336 \pm 0.0025$ & 663 & 12/16\
  5 & CL=10 & $-0.0328 \pm 0.0022$ & 919 & 21/16 & $-0.0322 \pm 0.0026$ & 600 & 11/16\
  6 & CL=20 & $-0.0332 \pm 0.0024$ & 774 & 17/16 & $-0.0296 \pm 0.0028$ & 510 & 16/16\
  7 & CL=50 & $-0.0295 \pm 0.0031$ & 453 & 20/16 & $-0.0297 \pm 0.0036$ & 303 & 11/16\
  8 & $E_{\mathrm{thr}}=380$MeV & $-0.0323 \pm 0.0020$ & 1121 & 20/16 & $---$ & $---$ & $---$\
  9 & $E_{\mathrm{thr}}=388$MeV & $-0.0316 \pm 0.0020$ & 1120 & 20/16 & $---$ & $---$ & $---$\
  10 & $E_{\mathrm{thr}}=392$MeV & $-0.0313 \pm 0.0020$ & 1119 & 20/16 & $---$ & $---$ & $---$\
  11 & $E_{\mathrm{thr}}=400$MeV & $-0.0307 \pm 0.0020$ & 1117 & 21/16 & $---$ & $---$ & $---$\
  12 & no TAPS & $-0.0300 \pm 0.0024$ & 797 & 13/16 & $-0.0316 \pm 0.0028$ & 516 & 11/16\
  13 & no ID & $-0.0311 \pm 0.0023$ & 838 & 12/16 & $-0.0306 \pm 0.0027$ & 556 & 12/16\
  14 & $z\leq1$ & $-0.0295 \pm 0.0019$ & 1149 & 31/18 & $-0.0322 \pm 0.0022$ & 742 & 13/18\
  15 & $z<0.75$ & $-0.0319 \pm 0.0024$ & 1005 & 19/13 & $-0.0312 \pm 0.0029$ & 650 & 8/13\
  16 & $z<0.6$ & $-0.0284 \pm 0.0034$ & 812 & 13/10 & $-0.0277 \pm 0.0041$ & 524 & 7/10\

The trigger threshold for the CB energy sum was determined for one experiment to be $E_{\mathrm{thr}}^1 \approx 390$MeV. Since this value could not be fixed exactly, it was varied around the standard value in a series of tests. These tests were only performed for the experiment with the higher threshold, because the threshold of the second experiment, roughly 60MeV, was so low that almost no $\eta \rightarrow 3\pi^0$ events were rejected (compare fig. \[psTrig\]). The results of these tests showed good agreement with the standard value. In another test TAPS clusters were ignored in the analysis (no TAPS), so that all six required photons had to be detected by the CB. This was a check of whether the acceptance in forward directions was simulated precisely enough. Such a test was necessary, because a lot of inactive material in the exit region of the CB screened TAPS from particles emerging from the target. Both results are slightly lower than the standard values, but within one $\sigma$ of them. Possible systematic effects from misidentification of photons as charged particles were studied in an analysis (no ID) omitting the particle identification methods described above. All clusters registered in the CB and TAPS were accepted as photon candidates. Practically no deviations from the standard values were found. As mentioned above, in one test the fit region was extended to $z=1.0$. With the fit over the full $z$ range the influence of the last two bins at high $z$ values was investigated.
The results in table \[tbalpha\] show no effect for the second experiment, but a drop in $\alpha$ for the first. Also the reduced $\chi^2$ indicates that the description by a straight line does not hold here for the full $z$ range. But within one standard deviation the test results are compatible with the standard values. The upper limit of $z=0.75$ was chosen for a test to investigate possible influences of the region in the Dalitz plot with $z > 0.756$, where statistics significantly decrease (see fig. \[psZ\]). In eq. \[eqz\] $z$ is given in such a way that all points with $z \leq 0.756$ have the same probability for the pure phase space process. In [@Nis07] it is shown that at $z=0.756$ a cusp in $R(z)$ arises. The result of this test indicates just minor changes compared to the standard values. As discussed in [@Nis07], the ratio $R(z)$ exhibits two more cusps, one at $z=0.597$, the minimum value to reach the $\pi^+ \pi^-$ threshold in the Dalitz plot, and another at $z=0.882$, corresponding to the maximum value to touch this line. These cusps arise due to the $\pi^+ \pi^- \rightarrow \pi^0 \pi^0$ charge-exchange reaction, which also produces a cusp in the $\pi^0 \pi^0$ mass distribution of the $\eta \rightarrow 3\pi^0$ decay [@Dit08; @Mei97; @Mei98; @Cab04; @Cab05; @Bel06; @Bis07]. The cusp at higher $z$ in the ratio $R(z)$ was already excluded by the standard analysis. Reducing the fit region to $z<0.6$ produces a substantial decrease in the result for $\alpha$, but the statistical error is also larger. In [@Dit08] Ditsche, Kubis and Meißner calculated that, if the Dalitz plot parameter is obtained by a fit over the range $0 \leq z \leq 0.597$, a drop of 5% in the absolute value of $\alpha$ should be visible, compared to a fit over the full $z$ range. The results from test 16 show a decrease of roughly 10% and 15% for the two experiments. But within the increased errors, the test results still agree with the standard values.
So, no reliable statement on the influence of the cusps in $R(z)$ can be made, and the differences in $\alpha$ were simply taken into account in the determination of the systematic uncertainties. The systematic uncertainties were then calculated for both experiments separately according to $$\Delta_{\mathrm{syst}}(\alpha_i) = \frac{\sum_k \left( \alpha^{\prime}_{ik} - \alpha_i \right) \cdot N_{ik}}{\sum_k N_{ik}}, \label{eqsyserror}$$ where $\alpha_i$ stands for the standard values of the two experiments. The $\alpha^{\prime}_{ik}$ are the results of the different tests listed in table \[tbalpha\] with their statistics $N_{ik}$. Thus, the systematic uncertainties were calculated as the sum of the differences between the test results and the standard values, weighted with the statistics of the tests. Positive and negative deviations were handled separately in this process. Final results {#hdfinal} ------------- As final results $$\alpha_1 = -0.0315 \pm 0.0020 ^{+0.0012}_{-0.0009}$$ $$\alpha_2 = -0.0324 \pm 0.0024 ^{+0.0016}_{-0.0014}$$ were obtained. The result of the analysis presented in this paper was then calculated as the weighted mean of the two values, with the inverse variances taken as weights. The statistical uncertainty was calculated from $$\sigma_{\mathrm{stat}}^2 = \frac{1}{\sum_i (1/\sigma_i^2)}.$$ The larger of the two systematic uncertainties given above was adopted for the combined result. As final result we quote $$\alpha = -0.032 \pm 0.002_{\mathrm{stat}} \pm 0.002_{\mathrm{syst}}.$$ This value is consistent with the current PDG average [@PDG08], which is dominated by the Crystal Ball result measured at BNL, and agrees reasonably well with the results published by the KLOE [@Amb07], CELSIUS/WASA [@Bas07] and WASA-at-COSY [@Ado08] collaborations. After the upgrade of MAMI to 1.5GeV maximum electron energy [@Kai08], a new experiment on the neutral decays of the $\eta$ meson was performed in the tagged photon energy range of 700MeV to 1400MeV.
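As a numerical cross-check, the combination quoted above is a standard inverse-variance weighted mean; the short sketch below reproduces the final value and its statistical uncertainty from the two individual results.

```python
results = [(-0.0315, 0.0020), (-0.0324, 0.0024)]  # (alpha_i, sigma_i)

weights = [1.0 / s**2 for _, s in results]
alpha = sum(w * a for (a, _), w in zip(results, weights)) / sum(weights)
sigma_stat = (1.0 / sum(weights)) ** 0.5

print(round(alpha, 3), round(sigma_stat, 3))  # -> -0.032 0.002
```

The unrounded values are $\alpha \approx -0.0319$ and $\sigma_{\mathrm{stat}} \approx 0.0015$, consistent with the quoted $-0.032 \pm 0.002_{\mathrm{stat}}$.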
An independent analysis of this new experiment [@Pra08] obtained approximately $3 \cdot 10^6$ $\eta \rightarrow 3\pi^0$ events, giving a value for $\alpha$, which is in very good agreement with the result presented in this paper. Summary {#hdsummary} ======= The Dalitz plot parameter, $\alpha$, for the $\eta \rightarrow 3\pi^0$ decay has been measured with the Crystal Ball detector and TAPS at the electron accelerator facility MAMI-B in Mainz. The $\eta$ mesons were produced by bremsstrahlung photons emitted by the 883MeV electrons and detected by the Glasgow tagged photon spectrometer. The kinematic fitting analysis selected $1.8 \cdot 10^6$ $\gamma \mathrm{p} \rightarrow \eta \mathrm{p} \rightarrow 3\pi^0 \mathrm{p}$ events and the result $\alpha = -0.032 \pm 0.002_{\mathrm{stat}} \pm 0.002_{\mathrm{syst}}$ is in good agreement with other high-precision experiments. A possible influence on the Dalitz plot parameter by the $\pi^+ \pi^- \rightarrow \pi^0 \pi^0$ contribution to the amplitude was found to be small, but is included in the systematic uncertainty. Acknowledgements {#acknowledgements .unnumbered} ---------------- The authors wish to thank the accelerator group of MAMI for the precise and very stable beam conditions. This work was supported by the Deutsche Forschungsgemeinschaft (SFB 443, SFB/TR 16), the DFG-RFBR (Grant No. 05-02-04014), European Community-Research Infrastructure Activity under the FP6 “Structuring the European Research Area programme" (HadronPhysics, Contract No. RII3-CT-2004-506078), the NSERC (Canada), Schweizerischer Nationalfonds, U.K. EPSRC, the U.S. DOE and U.S. NSF. We thank the undergraduate students of Mount Allison and George Washington Universities for their assistance. D.G. Sutherland, Phys. Lett. **23**, (1966) 384. R. Baur, J. Kambor and D. Wyler, Nucl. Phys. **B460**, (1996) 127. C. Ditsche, B. Kubis and U.-G. Mei[ß]{}ner, to be published in Eur. Phys. J. **C**, (2008) \[arXiv:0812.0344 \[hep-ph\]\]. H. Osborn and D.J. 
Wallace, Nucl. Phys. **B20**, (1970) 23. J. Gasser and H. Leutwyler, Nucl. Phys. **B250**, (1985) 539. J. Bijnens and K. Ghorbani, JHEP **0711**, (2007) 030. J. Kambor, C. Wiesendanger, D. Wyler, Nucl. Phys. **B465**, (1996) 215. A.V. Anisovich and H. Leutwyler, Phys. Lett. **B375**, (1996) 335. B. Borasoy and R. Ni[ß]{}ler, Eur. Phys. J. **A26**, (2005) 383. C. Amsler *et al.* (Particle Data Group), Phys. Lett. **B667**, (2008) 1. J. Bijnens and J. Gasser, Phys. Scripta **T99**, (2002) 34. N.N. Khuri and S.B. Treiman, Phys. Rev. **119**, (1960) 1115. S. Eidelman *et al.* \[Particle Data Group\], Phys. Lett. **B592**, (2004) 1. W.B. Tippens *et al.*, Phys. Rev. Lett. **87**, (2001) 192001. F. Ambrosino *et al.*, arXiv:0707.4137 \[hep-ex\]. C. Baglin *et al.*, Nucl. Phys. **B22**, (1970) 66. D. Alde *et al.*, Z. Phys. **C25**, (1984) 225. A. Abele *et al.*, Phys. Lett. **B417**, (1998) 193. M.N. Achasov *et al.*, JETP Lett. **73**, (2001) 451. M. Bashkanov *et al.*, Phys. Rev. **C76**, (2007) 048201. C. Adolph *et al.*, arXiv:0811:2763 \[nucl-ex\]. T. Capussela *et al.*, Acta Phys. Slov. **56**, (2005) 341. H. Herminghaus *et al.*, IEEE Trans. Nucl. Sci. **30**, (1983) 3274. Th. Walcher, Prog. Part. Nucl. Phys. **24**, (1990) 189. M. Unverzagt, PhD thesis, Rheinische Friedrich-Wilhelms-Universität Bonn, Germany, 2007. I. Anthony, J.D. Kellie, S.J. Hall, G.J. Miller and J. Ahrens, Nucl. Instrum. Meth. **A301**, (1991) 230. S.J. Hall, G.J. Miller, R. Beck and P. Jennewein, Nucl. Instrum. Meth. **A368**, (1996) 698. M. Oreglia *et al.*, Phys. Rev. **D25**, (1982) 2259. A. Starostin *et al.*, Phys. Rev. **C64**, (2001) 055205. D. Watts, in *Calorimetry in Particle Physics: Proceedings of the 11th International Conference, Perugia, Italy, 2004*, edited by C. Cecchi, P. Lubrano and M. Pepe (World Scientific, Singapore, 2005), p. 560. R. Novotny, IEEE Trans. Nucl. Sci. **38**, (1991) 379. A.R. Gabler *et al.*, Nucl. Instrum. Meth. **A346**, (1994) 168. P. Avery, . B. 
Krusche *et al.*, Phys. Rev. Lett. **75**, (1995) 3023. W.-T. Chiang, S.N. Yang, L. Tiator, M. Vanderhaeghen, D. Drechsel, Phys. Rev. **C68**, (2003) 045202. D. Drechsel, O. Hanstein, S.S. Kamalov, L. Tiator, Nucl. Phys. **A645**, (1999) 145. D. Drechsel, S.S. Kamalov, L. Tiator, Eur. Phys. J. **A34**, (2007) 69. R.A. Arndt *et al.*, Phys. Rev. **C53**, (1996) 430. J. Junkersfeld, PhD thesis, Rheinische Friedrich-Wilhelms-Universität Bonn, Germany, 2005. R. Nißler, PhD thesis, Rheinische Friedrich-Wilhelms-Universität Bonn, Germany, 2007. U.-G. Meißner, G. Müller and S. Steininger, Phys. Lett. **B406**, (1997) 154 \[Erratum-ibid. **B407**, (1997) 454\]. U.-G. Meißner, Nucl. Phys. **A629**, (1998) 72C. N. Cabibbo, Phys. Rev. Lett. **93**, (2004) 121801. N. Cabibbo, G. Isidori, JHEP **0503**, (2005) 021. J. Belina, Diploma thesis, Universität Bern, Switzerland, 2006. M. Bissegger, A. Fuhrer, J. Gasser, B. Kubis and A. Rusetsky, Phys. Lett. **B659**, (2008) 576. K.-H. Kaiser *et al.*, Nucl. Instrum. Meth. **A593**, (2008) 159. S. Prakhov *et al.*, submitted to Phys. Rev. **C**, (2008) \[arXiv:0812.1999v2 \[hep-ex\]\]. [^1]: *e-mail*: unvemarc@kph.uni-mainz.de
--- address: - 'Instituto de Matemática e Estatística, Universidade Federal da Bahia, Ondina, Salvador, BA, 40.170-115, Brasil' - 'Departamento de Matemática, Universidade Federal de Santa Catarina, Trindade, Florianópolis, SC, 88.040-900, Brasil' - 'Department of Mathematics and Statistics, University of Ottawa, Ottawa, Ontario, K1N 6N5, Canada' author: - Vladimir Pestov bibliography: - 'etaas\_biblio.bib' title: Elementos da teoria de aprendizagem de máquina supervisionada --- [^1] [^1]: IMPA, 2019
--- abstract: | The famous results of M.G. Kreĭn concerning the description of selfadjoint contractive extensions of a Hermitian contraction $T_1$ and the characterization of all nonnegative selfadjoint extensions $\wt A$ of a nonnegative operator $A$ via the inequalities $A_K\le \wt A \le A_F$, where $A_K$ and $A_F$ are the Kreĭn-von Neumann extension and the Friedrichs extension of $A$, are generalized to the situation where $\wt A$ is allowed to have a fixed number of negative eigenvalues. These generalizations are shown to be possible under a certain minimality condition on the negative index of the operators $I-T_1^*T_1$ and $A$, respectively; these conditions are automatically satisfied if $T_1$ is contractive or $A$ is nonnegative, respectively. The approach developed in this paper starts by establishing a generalization of an old result due to Yu.L. Shmul’yan on completions of $2\times 2$ nonnegative block operators. The extension of this fundamental result allows us to prove analogs of the above mentioned results of M.G. Kreĭn and, in addition, to solve some related lifting problems for $J$-contractive operators in Hilbert, Pontryagin and Kreĭn spaces in a simple manner. Also some new factorization results are derived, for instance, a generalization of the well-known Douglas factorization of Hilbert space operators. In the final steps of the treatment some very recent results concerning inequalities between semibounded selfadjoint relations and their inverses turn out to be central in order to treat the ordering of non-contractive selfadjoint operators under Cayley transforms properly. address: - | Department of Mathematics and Statistics\ University of Vaasa\ P.O. Box 700, 65101 Vaasa\ Finland - | Department of Mathematics and Statistics\ University of Vaasa\ P.O. Box 700, 65101 Vaasa\ Finland author: - 'D. Baidiuk' - 'S.
Hassi' title: 'Completion, extension, factorization, and lifting of operators with a negative index' --- Introduction ============ Almost 70 years ago in his famous paper [@Kr47] M.G. Kreĭn proved that for a densely defined nonnegative operator $A$ in a Hilbert space there are two extremal extensions of $A$, the Friedrichs (hard) extension $A_F$ and the Kreĭn-von Neumann (soft) extension $A_K$, such that every nonnegative selfadjoint extension $\wt A$ of $A$ can be characterized by the following two inequalities: $$(A_F+a)^{-1}\le ({\widetilde{A}}+a)^{-1}\le (A_K+a)^{-1}, \quad a>0.$$ To obtain such a description he used Cayley transforms of the form $$T_1=(I-A)(I+A)^{-1}, \quad T=(I-\wt A)(I+\wt A)^{-1},$$ to reduce the study of unbounded operators to the study of contractive selfadjoint extensions $T$ of a Hermitian nondensely defined contraction $T_1$. In the study of contractive selfadjoint extensions of $T_1$ he introduced a notion which is nowadays called “the shortening of a bounded nonnegative operator $H$ to a closed subspace $\sN$ of $\sH$”, namely the (unique) maximal element in the set $$\{\,D \in [\sH] :\, 0 \leq D \leq H, \, \ran D \subset \sN\,\},$$ which is denoted by $H_\sN$; cf. [@AD; @AT; @Pek]. Using this notion he proved the existence of a minimal and a maximal selfadjoint contractive extension $T_m$ and $T_M$ of $T_1$ and that $T$ is a selfadjoint contractive extension of $T_1$ if and only if $T_m\le T\le T_M$, more explicitly that $T=T_m+(I+T)_{\sN}$ and $T=T_M-(I-T)_{\sN}$ when $\sN=\sH\ominus\dom T_1$. Later the study of nonnegative selfadjoint extensions of $A\ge 0$ was generalized to the case of nondensely defined operators $A\ge 0$ by T. Ando and K. Nishio [@AN], as well as to the case of linear relations (multivalued linear operators) $A\ge 0$ by E.A. Coddington and H.S.V. de Snoo [@CS]. Further studies followed this work of M.G. Kreĭn; the approach in terms of “boundary conditions” to the extensions of a positive operator $A$ was proposed by M.I.
Vishik [@V] and M.S. Birman [@B]; an exposition of this theory based on the investigation of quadratic forms can be found in [@AS]. An approach to the extension theory of symmetric operators based on abstract boundary conditions was initiated even earlier by J.W. Calkin [@Cal39] under the name of reduction operators, and later, independently, the technique of boundary triplets was introduced to formalize the study of boundary value problems in the framework of general operator theory; see [@Koch; @Bruk; @GG; @DM1; @MMM2; @DM2]. Later the extension theory of unbounded symmetric Hilbert space operators and related resolvent formulas originating also from the work of M.G. Kreĭn [@Kr44; @Kr46], see also e.g. [@LT], was generalized to spaces with indefinite inner products in the well-known series of papers by H. Langer and M.G. Kreĭn, see e.g. [@KL1; @KL2], and all of this has been further investigated, developed, and extensively applied in various other areas of mathematics and physics by numerous other researchers. In spite of the long time span, natural extensions of the original result of M.G. Kreĭn in [@Kr47] have not appeared in the literature. Obviously the most closely related result appears in [@CG92], where for a given pair of a row operator $T_r=(T_{11},T_{12})\in[\sH_1\oplus\sH_1',\sH_2]$ and a column operator $T_c=\col(T_{11},T_{21})\in[\sH_1,\sH_2\oplus\sH_2']$ the problem of determining all possible operators $\wt T\in[\sH_1\oplus\sH_1',\sH_2\oplus\sH_2']$ acting from the Hilbert space $\sH_1\oplus\sH_1'$ to the Hilbert space $\sH_2\oplus\sH_2'$ such that $$P_{\sH_2}\wt T=T_r, \quad \wt T\uphar \sH_1=T_c,$$ and such that the following negative index (number of negative eigenvalues) conditions are satisfied $$\kappa_1:=\nu_-(I-\wt T^*\wt T)=\nu_-(I-T_c^*T_c),\quad \kappa_2:=\nu_-(I-\wt T\wt T^*)=\nu_-(I-T_rT_r^*),$$ is considered. The problem was solved in [@CG92 Theorem 5.1] under the condition $\kappa_1,\kappa_2<\infty$.
In the literature cited therein appears also a reference to an unpublished manuscript by H. Langer and B. Textorius with the title “Extensions of a bounded Hermitian operator $T$ preserving the number of negative squares of $I-T^*T$”, where obviously a similar problem for a given bounded Hermitian (column operator) $T$ has been investigated; see [@CG92 Section 6]. However, in [@CG92] the existence of possible extremal extensions in the solution set in the spirit of [@Kr47], when it is nonempty, has not been investigated. Analogous results for unbounded symmetric operators with a fixed negative index also seem to be unavailable in the literature. In this paper we study specific classes of such “quasi-contractive” bounded symmetric operators $T_1$ with $\nu_-(I-T_1^*T_1)<\infty$ as well as “quasi-nonnegative” operators $A$ with $\nu_-(A)<\infty$, and the existence and description of all possible selfadjoint extensions $T$ and $\wt A$ of them which preserve the given negative indices $\nu_-(I-T^2)=\nu_-(I-T_1^*T_1)$ and $\nu_-(\wt A)=\nu_-(A)$, respectively, under a further minimality condition on the negative index $\nu_-(I-T_1^*T_1)$ and $\nu_-(A)$. Under such conditions it is shown that if there is a solution then there are again two extremal extensions which describe the whole solution set via two operator inequalities, just as in the original paper of M.G. Kreĭn. The approach developed in this paper differs from the approach in [@Kr47]. In fact, the approach used in a recent paper of Hassi, Malamud and de Snoo [@HMS04], a technique appearing also in an earlier paper of Kolmanovich and Malamud [@KM1], will be successfully generalized. In [@HMS04] the original results of M.G. Kreĭn have been proved in the general setting of a not necessarily densely defined nonnegative operator and, more generally, for a nonnegative linear relation $A$. The starting point in our approach is to establish a generalization of an old result due to Yu.L.
Shmul’yan [@S59] on completions of $2\times 2$ nonnegative block operators, where the result was applied for introducing so-called Hellinger operator integrals. Our extension of this fundamental result is given in Section \[sec1\]; see Theorem \[T:1\] (for the case $\kappa<\infty$) and Theorem \[T:1ext\] (for the case $\kappa=\infty$). These results can be considered the most important contributions of the present paper, and it is possible that several further applications of them will appear in forthcoming literature. In this paper we will extensively apply Theorem \[T:1\]. In Section \[sec2\] this result is specialized to a class of $2\times 2$ block operators to characterize the occurrence of a minimal negative index for the so-called Schur complement of the block operator, see Theorem \[thmB\]. This result can be viewed also as a factorization result and, in fact, it yields a generalization of the well-known Douglas factorization of Hilbert space operators in [@Douglas], see Proposition \[BKcor1\], which is completed by a generalization of Sylvester’s criterion on additivity of inertia on Schur complements in Proposition \[sylvester\]. In Section \[sec3\] Theorem \[T:1\], or its special case Theorem \[thmB\], is applied to solve lifting problems for $J$-contractive operators in Hilbert, Pontryagin and Kreĭn spaces in a new simple way, the most general version of which is formulated in Theorem \[Lifthm\]: this result was originally proved in [@CG89 Theorem 2.3] with the aid of [@ACG87 Theorem 5.3]; for a special case, see also [@Drit90; @DritRov90]. In the Hilbert space case this problem has been solved in [@AG82; @DaKaWe; @ShYa]; further proofs and facts can be found e.g. in [@Ar2006; @AHS2007; @BN2; @KM1; @MMM3]. Section \[sec4\] contains the extension of the fundamental result of M.G.
Kreĭn in [@Kr47], see Theorem \[T:contr\], which characterizes the existence and gives a description of all selfadjoint extensions $T$ of a bounded symmetric operator $T_1$ satisfying the following minimal index condition $\nu_-(I-T^2)=\nu_-(I-T_{11}^2)$ by means of two extreme extensions via $T_m\le T\le T_M$. In Section \[sec5\] selfadjoint extensions of unbounded symmetric operators, and symmetric relations, are studied under a similar minimality condition on the negative index $\nu_-(A)$; the main result there is Theorem \[KreinThm\]. It is a natural extension of the corresponding result of M.G. Kreĭn in [@Kr47]. The treatment here uses Cayley transforms and hence is analogous to that in [@Kr47]. However, the existence of two extremal extensions in this setting and the validity of all the operator inequalities appearing therein depend essentially on very recent “antitonicity results” proved for semibounded selfadjoint relations in [@BHSW2014] concerning correctness of the implication $H_1\le H_2$ $\Rightarrow$ $H_2^{-1}\le H_1^{-1}$ in the case that $H_1$ and $H_2$ have some finite negative spectra. In this section also an analog of the so-called Kreĭn uniqueness criterion for the equality $T_{m}=T_{M}$ is established. A completion problem for block operators {#sec1} ======================================== By definition the modulus $|C|$ of a closed operator $C$ is the nonnegative selfadjoint operator $|C|=(C^*C)^{1/2}$. Every closed operator admits a polar decomposition $C=U|C|$, where $U$ is a (unique) partial isometry with the initial space $\cran |C|$ and the final space $\cran C$, cf. [@Kato]. For a selfadjoint operator $H=\int_{\dR} t\, dE_t$ in a Hilbert space $\sH$ the partial isometry $U$ can be identified with the signature operator, which can be taken to be unitary: $J=\sign(H)=\int_{\dR}\,\sign(t)\,dE_t$, in which case one should define $\sign(t)=1$ if $t\ge 0$ and otherwise $\sign(t)=-1$. Completion to operator blocks with finite negative index.
--------------------------------------------------------- The following theorem solves a completion problem for a bounded incomplete block operator $A^0$ of the form $$\label{A0} A^0= \begin{pmatrix} A_{11}&A_{12}\\ A_{21}&\ast \end{pmatrix}: \begin{pmatrix} \sH_1\\ \sH_2 \end{pmatrix} \to \begin{pmatrix} \sH_1\\ \sH_2 \end{pmatrix}$$ in the Hilbert space $\sH=\sH_1\oplus\sH_2$. \[T:1\] Let $\sH=\sH_1\oplus\sH_2$ be an orthogonal decomposition of the Hilbert space $\sH$ and let $A^0$ be an incomplete block operator of the form \eqref{A0}. Assume that $A_{11}=A_{11}^*$ and $A_{21}=A_{12}^*$ are bounded, $\nu_-(A_{11})={\kappa}<\infty$, where ${\kappa}\in\mathbb{Z}_+$, and let $J=\sign(A_{11})$ be the (unitary) signature operator of $A_{11}$. Then: 1. There exists a completion $A\in[\sH]$ of $A^0$ with some operator $A_{22}=A_{22}^*\in[\sH_2]$ such that $\nu_-(A)=\nu_-(A_{11})={\kappa}$ if and only if $$\label{E:1} \ran A_{12}\subset\ran |A_{11}|^{1/2}.$$ 2. In this case the operator $S=|A_{11}|^{[-1/2]}A_{12}$, where $|A_{11}|^{[-1/2]}$ denotes the (generalized) Moore-Penrose inverse of $|A_{11}|^{1/2}$, is well defined and $S\in[\sH_2,\sH_1]$. Moreover, $S^*JS$ is the smallest operator in the solution set $$\label{E:sol} \mathcal{A}:={\left\{A_{22}=A_{22}^*\in[\sH_2]:\, A=(A_{ij})_{i,j=1}^{2},\ \nu_-(A)={\kappa}\right\}}$$ and this solution set admits a description as the (semibounded) operator interval given by $$ \mathcal{A}={\left\{A_{22}\in[\sH_2]:\, A_{22}=S^*JS+Y,\, Y=Y^*\ge 0\right\}}.$$ \(i) Assume that there exists a completion $A_{22}\in\mathcal{A}$. Let ${\lambda}_{\kappa}\leq{\lambda}_{{\kappa}-1}\leq...\leq{\lambda}_1<0$ be all the negative eigenvalues of $A_{11}$ and let ${\varepsilon}$ be such that $|{\lambda}_1|>{\varepsilon}>0$.
Then $0\in{\rho}(A_{11}+{\varepsilon})$ and hence one can write $$\label{E:Silv} \begin{split} &\begin{pmatrix} I&0\\ -A_{21}(A_{11}+{\varepsilon})^{-1}&I \end{pmatrix} \begin{pmatrix} A_{11}+{\varepsilon}&A_{12}\\ A_{21}&A_{22}+{\varepsilon}\end{pmatrix} \begin{pmatrix} I&-(A_{11}+{\varepsilon})^{-1}A_{12}\\ 0&I \end{pmatrix}\\= &\begin{pmatrix} A_{11}+{\varepsilon}&0\\ 0&A_{22}+{\varepsilon}-A_{21}(A_{11}+{\varepsilon})^{-1}A_{12} \end{pmatrix} \end{split}$$ The operator on the right-hand side of \eqref{E:Silv} has ${\kappa}$ negative eigenvalues if and only if $$\label{E:2} A_{21}(A_{11}+{\varepsilon})^{-1}A_{12}\leq A_{22}+{\varepsilon}$$ or equivalently $$\label{E:3} \int\limits_{-\|A_{11}\|}^{\|A_{11}\|}(t+{\varepsilon})^{-1}d\|E_tA_{12}f\|^2\leq{\varepsilon}\|f\|^2+(A_{22}f,f),$$ where $E_t$ is the spectral family of $A_{11}$. We rewrite \eqref{E:3} in the form $$ \int_{[-\|A_{11}\|,0)} (t+{\varepsilon})^{-1}d\|E_tA_{12}f\|^2+ \int_{[0,\|A_{11}\|]}(t+{\varepsilon})^{-1}d\|E_tA_{12}f\|^2\leq{\varepsilon}\|f\|^2+(A_{22}f,f).$$ This yields the estimate $$\label{E:4} \int_{[0,\|A_{11}\|]} (t+{\varepsilon})^{-1}d\|E_tA_{12}f\|^2\leq{\varepsilon}\|f\|^2+(A_{22}f,f)-\frac{1}{{\lambda}_1+{\varepsilon}}\|A_{12}f\|^2.$$ By letting ${\varepsilon}\searrow 0$ in \eqref{E:4}, the monotone convergence theorem implies that $$P_+A_{12}f\in\ran A_{11+}^{1/2}\subset \ran |A_{11}|^{1/2}$$ for all $f\in\sH_2$; here $A_{11+}=\int_{[0,\|A_{11}\|]} t\, dE_t$ stands for the nonnegative part of $A_{11}$ and $P_+=\int_{[0,\|A_{11}\|]} \, dE_t$ is the orthogonal projection onto the corresponding closed subspace $\cran A_{11+}$. This implies that $\ran A_{12}\subset\ran |A_{11}|^{1/2}$. Conversely, if $\ran A_{12}\subset\ran |A_{11}|^{1/2}$, then the operator $S:=|A_{11}|^{[-1/2]}A_{12}$ is well defined, closed and bounded, i.e., $S\in [\sH_2,\sH_1]$.
Since $A_{12}=|A_{11}|^{1/2}S$, it follows from $A_{21}=S^*|A_{11}|^{1/2}$ and $$\label{Amin} A= \begin{pmatrix} |A_{11}|^{1/2}\\ S^*J \end{pmatrix}J \begin{pmatrix} |A_{11}|^{1/2}&JS\\ \end{pmatrix}:\ \nu_-(A)={\kappa},$$ that the operator $A_{22}=S^*JS$ gives a completion for $A^0$. \(ii) According to (i) $A_{21}=S^*|A_{11}|^{1/2}$, and $S^*JS\in[\sH_2]$ gives a solution to the completion problem \eqref{A0}. Now $$s-\lim_{{\varepsilon}\searrow 0}A_{21}(A_{11}+{\varepsilon})^{-1}A_{12}=s-\lim_{{\varepsilon}\searrow 0}S^*|A_{11}|^{1/2}(A_{11}+{\varepsilon})^{-1}|A_{11}|^{1/2}S=S^*JS$$ and if $A_{22}$ is an arbitrary operator in the set \eqref{E:sol}, then by letting ${\varepsilon}\searrow 0$ in \eqref{E:2} one concludes that $S^*JS\leq A_{22}$. Therefore, $S^*JS$ satisfies the desired minimality property. To prove the last statement assume that $Y\in[\sH_2]$ and that $Y\ge 0$. Then $A_{22}=S^*JS+Y$ inserted in $A^0$ defines a block operator $A_Y\ge A_{\rm{min}}$. In particular, $\nu_-(A_Y)\le \nu_-(A_{\rm{min}})=\kappa<\infty$. On the other hand, it is clear from the formula $$\label{AY} A_Y= \begin{pmatrix} |A_{11}|^{1/2}\\ S^*J \end{pmatrix}J \begin{pmatrix} |A_{11}|^{1/2}&JS\\ \end{pmatrix} + \begin{pmatrix} 0 & 0 \\ 0 & Y \\ \end{pmatrix}$$ that the $\kappa$-dimensional eigenspace corresponding to the negative eigenvalues of $A_{11}$ is $A_Y$-negative and, hence, $\nu_-(A_Y)\ge \kappa$. Therefore, $\nu_-(A_Y)=\kappa$ and $A_{22}=S^*JS+Y\in\mathcal{A}$. Notice that in the factorization $A_{12}=|A_{11}|^{1/2}S$, $S$ is uniquely determined under the condition $\ran S\subset \cran A_{11}$ (which implies that $\ker A_{12}=\ker S$); cf. [@Douglas]. In the case that $\kappa=0$, the result in Theorem \[T:1\] reduces to the well-known criterion concerning completion of an incomplete block operator to a nonnegative operator; cf. [@S59].
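In finite dimensions the construction behind Theorem \[T:1\] is easy to test numerically: when $A_{11}$ is invertible the range condition \eqref{E:1} holds automatically, $S=|A_{11}|^{-1/2}A_{12}$, and the minimal completion is $S^*JS$. The following sketch (an illustrative addition, not part of the original text; the matrices are ad hoc choices) checks that $\nu_-(A)=\nu_-(A_{11})$ both for the minimal completion and for $S^*JS+Y$ with $Y\ge 0$.

```python
import numpy as np

def neg_index(M, tol=1e-10):
    """nu_-(M): number of negative eigenvalues of a symmetric matrix."""
    return int(np.sum(np.linalg.eigvalsh(M) < -tol))

# A_11 symmetric and invertible with kappa = 1 negative eigenvalue,
# so ran A_12 is contained in ran |A_11|^{1/2} trivially.
A11 = np.diag([-2.0, 1.0, 3.0])
A12 = np.array([[1.0, 0.5], [0.2, -1.0], [0.3, 0.7]])

w, V = np.linalg.eigh(A11)
J = V @ np.diag(np.sign(w)) @ V.T                  # signature J = sign(A_11)
sqrt_abs = V @ np.diag(np.sqrt(np.abs(w))) @ V.T   # |A_11|^{1/2}

S = np.linalg.solve(sqrt_abs, A12)                 # S = |A_11|^{-1/2} A_12
A22_min = S.T @ J @ S                              # minimal completion S^* J S

def completion(A22):
    return np.block([[A11, A12], [A12.T, A22]])

kappa = neg_index(A11)                             # kappa = 1
assert neg_index(completion(A22_min)) == kappa
# Adding any Y >= 0 keeps A_22 = S^*JS + Y in the solution set:
assert neg_index(completion(A22_min + np.eye(2))) == kappa
```

Replacing $A_{11}$ and $A_{12}$ by any other matrices with $A_{11}$ symmetric and invertible leaves the assertions valid, in accordance with the theorem.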
In the case of matrices acting on a finite dimensional Hilbert space, the result with $\kappa>0$ has been proved very recently in the appendix of [@DHS2012], where it was applied in solving indefinite truncated moment problems. In the present paper Theorem \[T:1\] will be one of the main tools for further investigations. Completion to operator blocks with an infinite negative index. -------------------------------------------------------------- The completion result in Theorem \[T:1\] is of general interest already in view of the substantial number of its applications known in the case of nonnegative operators. In this section the completion problem is treated in the case that $\kappa=\infty$. For this purpose some further notions will be introduced. Recall that a subspace $\sM\subset \sH$ is said to be uniformly $A$-negative if there exists a positive constant $\nu>0$ such that $(Af,f)\le -\nu \|f\|^2$ for all $f\in\sM$. It is maximal uniformly $A$-negative if $\sM$ has no proper uniformly $A$-negative extension. The completion problem is now extended by requiring from the completions the following maximality property: $$\label{Max} \text{there exists a subspace $\sM\subset \sH_1$ which is maximal uniformly $A$-negative}.$$ \[T:1ext\] Let $A^0$ be an incomplete block operator of the form \eqref{A0} in the Hilbert space $\sH=\sH_1\oplus\sH_2$. Let $A_{11}=A_{11}^*$ and $A_{21}=A_{12}^*$ be bounded, let $J=\sign(A_{11})$ be the (unitary) signature operator of $A_{11}$, and, in addition, assume that there is a spectral gap $(-\delta,0)\subset \rho(A_{11})$, $\delta>0$. Then: 1. There exists a completion $A\in[\sH]$ of $A^0$ with some operator $A_{22}=A_{22}^*$ satisfying the condition \eqref{Max} if and only if $$ \ran A_{12}\subset\ran |A_{11}|^{1/2}.$$ 2. In this case the operator $S=|A_{11}|^{[-1/2]}A_{12}$, where $|A_{11}|^{[-1/2]}$ denotes the (generalized) Moore-Penrose inverse of $|A_{11}|^{1/2}$, is well defined and $S\in[\sH_2,\sH_1]$.
Moreover, $S^*JS$ is the smallest operator in the solution set $$ \mathcal{A}:={\left\{A_{22}=A_{22}^*\in[\sH_2]:\, A=(A_{ij})_{i,j=1}^{2} \textrm{ satisfies } \eqref{Max} \right\}}$$ and this solution set admits a description as the (semibounded) operator interval given by $$ \mathcal{A}={\left\{A_{22}\in[\sH_2]:\, A_{22}=S^*JS+Y,\, Y=Y^*\ge 0\right\}}.$$ To prove this result suitable modifications in the proof of Theorem \[T:1\] are needed. \(i) First assume that $A_{22}\in\mathcal{A}$ gives a desired completion for $A^0$. If ${\varepsilon}\in(0,\delta)$ then $0\in{\rho}(A_{11}+{\varepsilon})$ and therefore the block operator $(A_{ij})$ satisfies the formula \eqref{E:Silv}. We claim that the condition \eqref{Max} implies the inequality \eqref{E:2} for all sufficiently small values ${\varepsilon}>0$. To see this let $\sM\subset \sH_1$ be a subspace for which the condition \eqref{Max} is satisfied. Then $(A_{11}f,f)\le -\nu \|f\|^2$ for some fixed $\nu>0$ and for all $f\in\sM$. Assume that \eqref{E:2} is not satisfied for some $0<{\varepsilon}_0<\min\{\nu,\delta\}$. Then $((A_{22}+{\varepsilon}_0-A_{21}(A_{11}+{\varepsilon}_0)^{-1}A_{12})v_0,v_0)<0$ holds for some vector $v_0\in\sH_2$. Define $\sL=W_{{\varepsilon}_0}^{-1}(\sM+\spn\{v_0\})$, where $$W_{{\varepsilon}_0}= \begin{pmatrix} I&-(A_{11}+{\varepsilon}_0)^{-1}A_{12}\\ 0&I \end{pmatrix}.$$ Clearly, $W_{{\varepsilon}_0}$ is bounded with bounded inverse and it maps $\sM$ bijectively onto $\sM$, so that $\sL$ is a $1$-dimensional extension of $\sM$. It follows from \eqref{E:Silv} that for all $f\in\sL$, $$(Af,f)+{\varepsilon}_0 \|f\|^2 = \left(\begin{pmatrix} A_{11}+{\varepsilon}_0 &0\\ 0&A_{22}+{\varepsilon}_0 -A_{21}(A_{11}+{\varepsilon}_0)^{-1}A_{12} \end{pmatrix} u,u \right)<0,$$ where $u=W_{{\varepsilon}_0}f \in \sM+\spn\{v_0\}$. Therefore, $\sL$ is a proper uniformly $A$-negative extension of $\sM$; a contradiction, which shows that \eqref{E:2} holds for all $0<{\varepsilon}<\min\{\nu,\delta\}$.
Then, as in the proof of Theorem \[T:1\] it is seen that $\ran A_{12}\subset \ran |A_{11}|^{1/2}$; note that in the estimate \eqref{E:4} $\lambda_1$ is to be replaced by $-\delta$. Conversely, if $\ran A_{12}\subset\ran |A_{11}|^{1/2}$, then $S=|A_{11}|^{[-1/2]}A_{12}\in [\sH_2,\sH_1]$ and the block operator $A$ in \eqref{Amin} gives a completion. To prove that $A$ satisfies \eqref{Max} observe that if $\sM$ is a uniformly $A$-negative subspace in $\sH$, then $\begin{pmatrix} |A_{11}|^{1/2}&JS\\ \end{pmatrix}$ maps it bijectively onto a uniformly $J$-negative subspace in $\sH_1$. The spectral subspace corresponding to the negative spectrum of $A_{11}$ is maximal uniformly $J$-negative in $\sH_1$ and also uniformly $A$-negative in $\sH$. By the above mapping property this subspace must be maximal uniformly $A$-negative in $\sH$. \(ii) If $A_{22}=A_{22}^*$ defines a completion $A\in[\sH]$ of $A^0$ such that \eqref{Max} is satisfied, then by the proof of (i) the inequality \eqref{E:2} holds for all sufficiently small values ${\varepsilon}>0$. Now the minimality property of $S^*JS$ can be obtained in the same manner as in Theorem \[T:1\]. As to the last statement, again for every $Y\in[\sH_2]$, $Y\ge 0$, the block operator $A_Y$ defined in the proof of Theorem \[T:1\] satisfies $A_Y\ge A_{\rm{min}}$. Hence, every uniformly $A_Y$-negative subspace is also uniformly $A_{\rm{min}}$-negative. Now it follows from the formula \eqref{AY} that the spectral subspace corresponding to the negative spectrum of $A_{11}$, which is maximal uniformly $A_{\rm{min}}$-negative, is also maximal uniformly $A_Y$-negative. Hence, $A_Y$ satisfies \eqref{Max} and $A_{22}=S^*JS+Y\in\mathcal{A}$. Some factorizations of operators with finite negative index {#sec2} =========================================================== Theorems \[T:1\] and \[T:1ext\] provide a valuable tool for solving a couple of other problems, which initially do not occur as a completion problem of some symmetric incomplete block operator.
In this section it is shown that Theorem \[T:1\] (a) can be used to characterize the existence of certain $J$-contractive factorizations of operators via a minimal index condition; (b) implies an extension of the well-known Douglas factorization result with a certain specification to the Bognár-Krámli factorization; (c) yields an extension of a factorization result of Shmul’yan for $J$-bicontractions; (d) allows an extension of the classical Sylvester law of inertia of a block operator, which is originally used in characterizing nonnegativity of a bounded block operator via Schur complements. Some simple inertia formulas are now recalled. The factorization $H=B^*EB$ clearly implies that $\nu_\pm(H)\le \nu_\pm(E)$. If $H_1$ and $H_2$ are selfadjoint operators, then $$H_1+H_2=\begin{pmatrix} I \\ I \end{pmatrix}^* \begin{pmatrix} H_1 & 0 \\ 0 & H_2 \end{pmatrix} \begin{pmatrix} I \\ I \end{pmatrix}$$ shows that $\nu_\pm(H_1+H_2)\le \nu_\pm(H_1)+\nu_\pm(H_2)$. Consider the selfadjoint block operator $H\in[\sH_1\oplus\sH_2]$ of the form $$\label{H} H=\begin{pmatrix} A & B^* \\ B & J_2 \end{pmatrix},$$ where $J_2=J_2^*=J_2^{-1}$. Applying the above mentioned inequalities shows that $$\label{minneg} \nu_\pm(A)\le \nu_\pm(A-B^*J_2B)+\nu_\pm(J_2).$$ Assuming that $\nu_-(A-B^*J_2B)$ and $\nu_-(J_2)$ are finite, the question when $\nu_-(A)$ attains its maximum in \eqref{minneg}, or equivalently, when $\nu_-(A-B^*J_2B)$ attains its minimum $\nu_-(A)-\nu_-(J_2)$, turns out to be of particular interest. The next result characterizes this situation as an application of Theorem \[T:1\]. Recall that if $A=J_A |A|$ is the polar decomposition of $A$, then one can interpret $\sH_A=(\cran A,J_A)$ as a Kreĭn space generated on $\cran A$ by the fundamental symmetry $J_A=\sgn(A)$. \[thmB\] Let $A\in[\sH_1]$ be selfadjoint, $B\in[\sH_1,\sH_2]$, $J_2=J_2^*=J_2^{-1}\in[\sH_2]$, and assume that $\nu_-(A),\nu_-(J_2)<\infty$.
If the equality $$\label{min} \nu_-(A) = \nu_-(A-B^*J_2B)+\nu_-(J_2)$$ holds, then $\ran B^*\subset \ran |A|^{1/2}$ and $B^*=|A|^{1/2}K$ for a unique operator $K\in[\sH_2,\sH_A]$ which is $J$-contractive: $J_2-K^*J_A K\ge 0$. Conversely, if $B^*=|A|^{1/2}K$ for some $J$-contractive operator $K\in[\sH_2,\cran A]$, then the equality \eqref{min} is satisfied. Assume that \eqref{min} is satisfied. The factorization $$H=\begin{pmatrix} A & B^* \\ B & J_2 \end{pmatrix} = \begin{pmatrix} I & B^*J_2\\ 0 & I \end{pmatrix} \begin{pmatrix} A-B^* J_2B& 0 \\ 0 & J_2 \end{pmatrix} \begin{pmatrix} I & 0 \\ J_2 B & I \end{pmatrix}$$ shows that $\nu_-(H)=\nu_-(A-B^* J_2B)+\nu_-(J_2)$, which combined with the equality \eqref{min} gives $\nu_-(H)=\nu_-(A)$. Therefore, by Theorem \[T:1\] one has $\ran B^*\subset \ran |A|^{1/2}$ and this is equivalent to the existence of a unique operator $K\in [\sH_2,\cran A]$ such that $B^*=|A|^{1/2}K$; i.e. $K=|A|^{[-1/2]}B^*$. Furthermore, $K^*J_{A}K\leq J_2$ by the minimality property of $K^*J_{A}K$ in Theorem \[T:1\], in other words $K$ is a $J$-contraction. Conversely, if $B^*=|A|^{1/2}K$ for some $J$-contraction $K\in [\sH_2,\cran A]$, then clearly $\ran B^*\subset\ran |A|^{1/2}$. By Theorem \[T:1\] the completion problem for $H^{0}$ has solutions with the minimal solution $S^*J_{A}S$, where $S=|A|^{[-1/2]}B^*=|A|^{[-1/2]}|A|^{1/2}K=K$. Furthermore, by $J$-contractivity of $K$ one has $K^*J_{A}K\le J_2$, i.e. $J_2$ is also a solution and thus $\nu_-(H)=\nu_-(A)$ or, equivalently, the equality \eqref{min} is satisfied. While Theorem \[thmB\] is obtained as a direct consequence of Theorem \[T:1\], it will be shown in the next section that this result yields simple solutions to a wide class of lifting problems for contractions in Hilbert, Pontryagin and Kreĭn space settings. Before deriving the next result some inertia formulas for a class of selfadjoint block operators are recalled.
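The equality \eqref{min} of Theorem \[thmB\] can also be checked numerically in finite dimensions. In the sketch below (an illustrative addition, not part of the original argument; $A$, $J_2$ and the $J$-contractive $K$ are ad hoc choices) the operator $B^*=|A|^{1/2}K$ is built as in the converse part of the theorem, and the inertia equality is then verified.

```python
import numpy as np

def nu_minus(M, tol=1e-10):
    """nu_-(M): number of negative eigenvalues of a symmetric matrix."""
    return int(np.sum(np.linalg.eigvalsh(M) < -tol))

A  = np.diag([-1.5, 2.0, 4.0])     # nu_-(A) = 1
J2 = np.diag([1.0, -1.0])          # nu_-(J2) = 1

w, V = np.linalg.eigh(A)
JA  = V @ np.diag(np.sign(w)) @ V.T           # J_A = sgn(A)
sqA = V @ np.diag(np.sqrt(np.abs(w))) @ V.T   # |A|^{1/2}

# A J-contractive K, i.e. J2 - K^T J_A K >= 0 (verified below).
K = np.array([[0.0, 1.2], [0.5, 0.0], [0.0, 0.0]])
assert nu_minus(J2 - K.T @ JA @ K) == 0

Bstar = sqA @ K                               # B^* = |A|^{1/2} K
schur = A - Bstar @ J2 @ Bstar.T              # A - B^* J2 B
# The inertia equality nu_-(A) = nu_-(A - B^* J2 B) + nu_-(J2):
assert nu_minus(A) == nu_minus(schur) + nu_minus(J2)
```
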
Consider the following two representations $$\begin{split} \begin{pmatrix} J_1& T^*\\ T & J_2 \end{pmatrix}&= \begin{pmatrix} I&0\\ TJ_1&I \end{pmatrix} \begin{pmatrix} J_1&0\\ 0&J_2-TJ_1T^* \end{pmatrix} \begin{pmatrix} I&J_1T^*\\ 0&I \end{pmatrix}\\ &=\begin{pmatrix} I&T^*J_2\\ 0&I \end{pmatrix} \begin{pmatrix} J_1-T^*J_2 T&0\\ 0& J_2 \end{pmatrix} \begin{pmatrix} I&0\\ J_2T&I \end{pmatrix}, \end{split}$$ where $J_i=J_i^*=J_i^{-1}$, $i=1,2$. Since here the triangular operators are bounded with bounded inverse, one concludes that $\ran (J_2-TJ_1T^*)$ is closed if and only if $\ran(J_1-T^*J_2 T)$ is closed. Furthermore, one gets the following inertia formulas; cf. e.g. [@ACG87 Proposition 3.1]. \[inertia\] With the above notations one has $$\nu_\pm(J_1-T^*J_2T)+\nu_\pm(J_2)=\nu_\pm(J_2-TJ_1T^*)+\nu_\pm(J_1),$$ $$\nu_0(J_1-T^*J_2T)=\nu_0(J_2-TJ_1T^*).$$ The next result contains two general factorization results: assertion (i) contains an extension of the well-known Douglas factorization, see [@Douglas; @FW], and assertion (ii) is a specification of the so-called Bognár-Krámli factorization, see [@Bognarbook]: $A=B^*J_2B$ holds for some bounded operator $B$ if and only if $\nu_\pm(J_2)\ge \nu_\pm(A)$. \[BKcor1\] Let $A$, $B$, and $J_2$ be as in Theorem \[thmB\], and assume that $\nu_-(A)=\nu_-(J_2)<\infty$. Then: 1. The inequality $$\label{min2} A\ge B^*J_2 B$$ holds if and only if $B=C|A|^{1/2}$ for some $J$-contractive operator $C\in[\sH_A,\sH_2]$; in this case $C$ is unique and, in addition, $J$-bicontractive, i.e., $J_A-C^*J_2 C\ge 0$ and $J_2-CJ_A C^*\ge 0$. 2. The equality $$\label{min3} A = B^*J_2 B$$ holds if and only if $B=C|A|^{1/2}$ for some $J$-isometric operator $C\in[\sH_A,\sH_2]$; again $C$ is unique. In addition, $C$ is unitary if and only if $\ran B$ is dense in $\sH_2$. \(i) The inequality \eqref{min2} means that $\nu_-(A - B^*J_2 B)=0$. Hence the assumption $\nu_-(A)=\nu_-(J_2)<\infty$ implies the equality \eqref{min}.
Therefore, the desired factorization for $B$ is obtained from Theorem \[thmB\]. Conversely, if $B=C|A|^{1/2}$ for some $J$-contractive operator $C$ then \eqref{min} holds by Theorem \[thmB\] and the assumption $\nu_-(A)=\nu_-(J_2)<\infty$ implies that $\nu_-(A - B^*J_2 B)=0$. The fact that $C$ is actually $J$-bicontractive follows directly from Lemma \[inertia\]. \(ii) Assume that \eqref{min3} holds. Then by part (i) it remains to prove that in the factorization $B=C|A|^{1/2}$ the operator $C$ is $J$-isometric. Substituting $B=C|A|^{1/2}$ into \eqref{min3} gives $$A=|A|^{1/2}C^*J_2C|A|^{1/2}.$$ Since $\dom C,\, \ran C^* \subset \cran A$ and $A=|A|^{1/2}J_A|A|^{1/2}$, the previous identity implies the equality $J_A=C^*J_2C$, i.e., $C$ is $J$-isometric. Conversely, if $C$ is $J$-isometric then clearly \eqref{min3} holds. Since $B=C|A|^{1/2}$ and $C\in[\sH_A,\sH_2]$, it is clear that $B$ has dense range in $\sH_2$ precisely when the range of $C$ is dense in $\sH_2$. The (Kreĭn space) adjoint $C^{[*]}$ is a bounded operator with $\dom C^{[*]}=\sH_2$. By isometry one has $C^{-1}\subset C^{[*]}$, and thus $C^{-1}$ is also bounded, densely defined and closed. Thus, the equality $C^{-1}=C^{[*]}$ prevails, i.e., $C$ is $J$-unitary. Conversely, if $C$ is unitary then $C^{-1}=C^{[*]}$ holds and $\ran C=\dom C^{[*]}=\sH_2$. Consequently, $\ran B=\ran C|A|^{1/2}$ is dense in $\sH_2$. If, in particular, $\nu_-(A)=\nu_-(J_2)=0$ then $0\le B^*B\le A$ and Proposition \[BKcor1\] combined with Theorem \[T:1\] yields the factorization and range inclusion results proved in [@Douglas Theorem 1] with $A$ replaced by $A^*A$. In particular, notice that if $\ran B^*\subset \ran |A|^{1/2}$, then already Theorem \[T:1\] alone implies that $S=|A|^{[-1/2]}B^*$ is bounded and hence $B^*B=|A|^{1/2}SS^*|A|^{1/2}\le \|S\|^2 A$. Assertions in part (ii) of Proposition \[BKcor1\] can be found in the literature with a different proof.
In fact, the first statement in (ii) appears in [@ACG87 Proposition 2.1, Corollary 2.6] while the second statement in (ii) is proved in [@CG89 Corollary 1.3]. Another extension of Douglas’ factorization result can be found in [@Rod07]. For a general treatment of isometric (not necessarily densely defined) operators and isometric relations appearing in the proof of Proposition \[BKcor1\] the reader is referred to [@AIbook], [@DHMS2006 Section 2], and [@DHMS2012]. A slightly different viewpoint on Proposition \[BKcor1\] gives the following statement, which can be viewed as an extension of a theorem by Shmul’yan [@S67] on the factorization of bicontractions on Kreĭn spaces; for a related abstract Leech theorem, see [@DritRov90 Section 3.4]. \[BKcor2\] Let $A\in[\sH_1]$ be selfadjoint, let $B\in[\sH_1,\sH_2]$, and let $J_2=J_2^*=J_2^{-1}\in[\sH_2]$ with $\nu_-(J_2)<\infty$. Then: 1. $$ A\ge B^*J_2 B \quad\text{and}\quad \nu_-(A) = \nu_-(J_2)$$if and only if $B=C|A|^{1/2}$ for some $J$-bicontractive operator $C\in[\sH_A,\sH_2]$; in this case $C$ is unique. 2. $$ A=B^*J_2 B \quad\text{and}\quad \nu_-(A) = \nu_-(J_2)$$if and only if $B=C|A|^{1/2}$ for some $J$-bicontractive operator $C$ which is also $J$-isometric, i.e., $J_A-C^*J_2 C= 0$ and $J_2-CJ_A C^*\ge 0$; again $C$ is unique. Observe that if $C$ is $J$-bicontractive, then an application of Lemma \[inertia\] shows that $\nu_-(J_2)=\nu_-(J_A)=\nu_-(A)$. Now the stated equivalences can be obtained from Proposition \[BKcor1\]. This section is finished with an extension of the classical Sylvester’s criterion, which is actually obtained as a consequence of Theorem \[T:1\]. \[sylvester\] Let $A=(A_{ij})_{i,j=1}^{2}$ be an arbitrary selfadjoint block operator in $\sH=\sH_1\oplus\sH_2$, which satisfies the range inclusion , and let $S=|A_{11}|^{[-1/2]}A_{12}$.
Then $\nu_-(A)<\infty$ if and only if $\nu_-(A_{11})<\infty$ and $\nu_-(A_{22}-S^*JS)<\infty$; in this case $$\nu_-(A)=\nu_-(A_{11})+\nu_-(A_{22}-S^*JS).$$ In particular, $A\ge 0$ if and only if $\ran A_{12}\subset\ran |A_{11}|^{1/2}$, $A_{11}\ge 0$, and $A_{22}-S^*JS\ge 0$. By the assumption $S=|A_{11}|^{[-1/2]}A_{12}$ is an everywhere defined bounded operator and, since $A_{11}=|A_{11}|^{1/2}J|A_{11}|^{1/2}$ (cf. Theorem \[T:1\]), the following equality holds: $$A=\begin{pmatrix} |A_{11}|^{1/2} &0\\ S^*J&I \end{pmatrix} \begin{pmatrix} J & 0\\ 0& A_{22}-S^*JS \end{pmatrix} \begin{pmatrix} |A_{11}|^{1/2} & JS \\ 0& I \end{pmatrix},$$ i.e. $A=B^*EB$ where $E$ stands for the diagonal operator with $\nu_-(E)=\nu_-(A_{11})+\nu_-(A_{22}-S^*JS)$ and the triangular operator $B$ on the right side is bounded and has dense range in $\cran A_{11}\oplus\sH_2$. Clearly, $\nu_-(A)\le \nu_-(E)$ and it remains to prove that if $\nu_-(A)<\infty$ then $\nu_-(A)=\nu_-(E)$. To see this assume that $\nu_-(A)<\nu_-(E)$. We claim that $\ran B$ contains an $E$-negative subspace $\sL$ with dimension $\dim \sL>\nu_-(A)$. Assume the converse and let $\sL\subset \ran B$ be a maximal $E$-negative subspace with $\dim \sL\le \nu_-(A)$. Then $(E\sL)^\perp$ must be $E$-nonnegative, since if $v\perp E\sL$ and $(Ev,v)<0$, then $\sL+\spn\{v\}$ would be a proper $E$-negative extension of $\sL$. Since $E\sL$ is finite-dimensional and $\ran B$ is dense in $\cran A_{11}\oplus\sH_2$, $\ran B$ has dense intersection with $(\cran A_{11}\oplus\sH_2)\ominus E\sL$, and hence the closure of this subspace is also $E$-nonnegative. Consequently, $\nu_-(E)\le\dim \sL\le\nu_-(A)$, a contradiction with the assumption $\nu_-(E)>\nu_-(A)$. This proves the claim that $\ran B$ contains an $E$-negative subspace $\sL$ with $\dim \sL>\nu_-(A)$. However, then the subspace $\sL'=\{u\in \cran A_{11}\oplus\sH_2:Bu\in\sL\}$ satisfies $\dim \sL'\ge \dim \sL$ and, moreover, $\sL'$ is $A$-negative: $(Au,u)=(EBu,Bu)<0$, $u\in\sL'$, $u\neq 0$.
Thus, $\nu_-(A)\ge \dim \sL$, a contradiction with $\dim \sL>\nu_-(A)$. This completes the proof. Proposition \[sylvester\] completes Theorem \[T:1\]: it shows that if $\ran A_{12}\subset\ran |A_{11}|^{1/2}$ then $A_{11}=J|A_{11}|$ and $A_{12}=|A_{11}|^{1/2}S$ imply that $A_{21}|A_{11}|^{[-1/2]}J|A_{11}|^{[-1/2]}A_{12}=S^*JS$. Hence the negative index of $A$ can be calculated by using the following version of a *generalized Schur complement*: $$\label{genSchur} \nu_-(A)=\nu_-(A_{11})+\nu_-(A_{22}-A_{21}|A_{11}|^{[-1/2]}J|A_{11}|^{[-1/2]}A_{12}).$$ The addition made in Proposition \[sylvester\] concerns selfadjoint operators $A_{22}$ that are not solutions to the original completion problem for $A^0$. Lifting of operators with finite negative index {#sec3} =============================================== As a first application of the completion problem solved in Section \[sec1\] it is shown how some lifting results on contractive operators with a finite number of negative squares, established in a series of papers by Arsene, Constantinescu, and Gheondea, see [@AG82; @ACG87; @CG89; @CG92], as well as by Dritschel, see [@Drit90; @DritRov90] (see also further references appearing in these papers), can be derived from Theorem \[T:1\]. For this purpose some standard notations are now introduced. Let $(\sH_1,(\cdot,\cdot)_{1})$ and $(\sH_2,(\cdot,\cdot)_{2})$ be Hilbert spaces and let $J_1$ and $J_2$ be symmetries in $\sH_1$ and $\sH_2$, i.e. $J_i=J_i^*=J_i^{-1}$, so that $(\sH_i,(J_i\cdot,\cdot)_{i})$, $i=1,2$, becomes a Kreĭn space. Then associate with $T\in[\sH_1,\sH_2]$ the corresponding defect and signature operators $$D_T=|J_1-T^*J_2T|^{1/2},\quad J_T=\sign(J_1-T^*J_2T), \quad \sD_T=\cran D_T,$$ where the so-called defect subspace $\sD_T$ can be considered as a Kreĭn space with the fundamental symmetry $J_T$.
Similar notations are used with $T^*$: $$D_{T^*}=|J_2-TJ_1T^*|^{1/2},\quad J_{T^*}=\sign(J_2-TJ_1T^*), \quad \sD_{T^*}=\cran D_{T^*}.$$ By definition $J_TD_T^2=J_1-T^*J_2T$ and $J_TD_T=D_TJ_T$ with analogous identities for $D_{T^*}$ and $J_{T^*}$. In addition, $$\label{eqC3} (J_1-T^*J_2T)J_1T^*=T^*J_2(J_2-TJ_1T^*), \, (J_2-TJ_1T^*)J_2T=TJ_1(J_1-T^*J_2T).$$ Recall that $T\in[\sH_1,\sH_2]$ is said to be a $J$-contraction if $J_1-T^*J_2T\ge 0$, i.e. $\nu_-(J_1-T^*J_2T)=0$. If, in addition, $T^*$ is a $J$-contraction, $T$ is termed a $J$-bicontraction, in which case $\nu_-(J_1)=\nu_-(J_2)$ by Lemma \[inertia\]. In what follows it is assumed that $$\kappa_1:=\nu_-(J_1-T^*J_2T)<\infty,\quad \kappa_2:=\nu_-(J_2-TJ_1T^*)<\infty.$$ In this case Lemma \[inertia\] shows that $$\label{iner01} \nu_-(J_2) = \nu_-(J_1) + \kappa_2-\kappa_1.$$ The aim in this section is to show applicability of Theorem \[T:1\] in establishing formulas for so-called liftings $\wt T$ of $T$ with prescribed negative indices $\wt\kappa_1$ and $\wt\kappa_2$ for the defect subspaces. Given a bounded operator $T\in[\sH_1,\sH_2]$ the problem is to describe all operators $\wt T$ from the extended Kreĭn space $(\sH_1\oplus\sH_1^\prime,J_1\oplus J_1^\prime)$ to the extended Kreĭn space $(\sH_2\oplus\sH_2^\prime,J_2\oplus J_2^\prime)$ such that $$\textbf{$(*)$} \qquad P_2 \wt T\uphar \sH_1 = T \quad\text{and}\quad \nu_-(\wt J_1-\wt T^*\wt J_2 \wt T)=\wt\kappa_1, \quad \nu_-(\wt J_2-\wt T\wt J_1 \wt T^*)=\wt\kappa_2,$$with some fixed values of $\wt\kappa_1,\wt\kappa_2<\infty$. Here $P_i$ stands for the orthogonal projection from $\wt\sH_i=\sH_i\oplus\sH_i^\prime$ onto $\sH_i$ and $\wt J_i=J_i\oplus J_i^\prime$, $i=1,2$.
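The displayed identities and the inertia relation above are easy to confirm on finite matrices; the dimensions, signatures, and random seed in the following sketch are ad hoc choices made only for illustration.

```python
import numpy as np

def nu_minus(M):
    """Number of negative eigenvalues of a symmetric matrix M."""
    return int(np.sum(np.linalg.eigvalsh(M) < -1e-10))

rng = np.random.default_rng(3)
J1 = np.diag([1.0, -1.0, 1.0])           # nu_-(J1) = 1
J2 = np.diag([1.0, 1.0, -1.0, -1.0])     # nu_-(J2) = 2
T = rng.standard_normal((4, 3))          # T in [H1, H2]

N = J1 - T.T @ J2 @ T                    # J_T D_T^2;      kappa_1 = nu_-(N)
M = J2 - T @ J1 @ T.T                    # J_{T*} D_{T*}^2; kappa_2 = nu_-(M)

# the two displayed identities hold exactly:
assert np.allclose(N @ J1 @ T.T, T.T @ J2 @ M)
assert np.allclose(M @ J2 @ T, T @ J1 @ N)

# the inertia relation nu_-(J2) = nu_-(J1) + kappa_2 - kappa_1:
assert nu_minus(J2) == nu_minus(J1) + nu_minus(M) - nu_minus(N)
```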
In addition, it is assumed that the exit spaces are Pontryagin spaces, i.e., that $$\nu_-(J_1^\prime), \nu_-(J_2^\prime) <\infty.$$ Following [@ACG87; @CG89] consider first the following column extension problem: $\textbf{$(*)_c$}$ Give a description of all (column) operators $T_c=\col\begin{pmatrix}T & C \end{pmatrix}\in [\sH_1,\sH_2\oplus\sH_2^\prime]$, such that $\nu_-(J_1-T_c^*\wt J_2T_c)=\wt \kappa_1\,(<\infty)$. Since $J_1-T_c^*\wt J_2T_c=J_1-T^*J_2T-C^*J_2^\prime C$, necessarily (see Section \[sec2\]) $$\wt\kappa_1 \ge \kappa_1-\nu_-(C^*J_2^\prime C)\ge \kappa_1 -\nu_-(J_2^\prime).$$ Moreover, it is clear that $\wt\kappa_2\ge \kappa_2$, since $J_2-TJ_1T^*$ appears as the first diagonal entry of the $2\times 2$ block operator $\wt J_2-T_c J_1T_c^*$ when decomposed w.r.t. $\wt\sH_i=\sH_i\oplus\sH_i^\prime$, $i=1,2$. With the minimal value of $\wt \kappa_1$ all solutions to this problem will now be described by applying Theorem \[T:1\] to an associated $2\times 2$ block operator $T_C$ appearing in the proof below; in fact the result is just a special case of Theorem \[thmB\]. \[corL1\] Let $\wt\kappa_1=\nu_-(J_1-T_c^*\wt J_2T_c)$ and assume that $\wt\kappa_1=\kappa_1-\nu_-(J_2^\prime)(\ge 0)$. Then $\ran C^*\subset \ran D_T$ and the formula $$T_c=\begin{pmatrix}T \\ K^*D_{T} \end{pmatrix}$$ establishes a one-to-one correspondence between the set of all solutions to Problem $\textbf{$(*)_c$}$ and the set of all $J$-contractions $K\in[\sH'_2,\sD_{T}]$.
To make the argument more explicit consider the following block operator $$T_C:=\begin{pmatrix} J_1-T^*J_2T & C^* \\ C & J_2^\prime \end{pmatrix} = \begin{pmatrix} I & C^*J_2^\prime \\ 0 & I \end{pmatrix} \begin{pmatrix} J_1-T_c^*\wt J_2T_c & 0 \\ 0 & J_2^\prime \end{pmatrix} \begin{pmatrix} I & 0 \\ J_2^\prime C & I \end{pmatrix}.$$ Clearly $\nu_-(T_C)=\nu_-(J_1-T_c^*\wt J_2T_c)+\nu_-(J_2^\prime)<\infty$, which combined with $\wt\kappa_1=\kappa_1-\nu_-(J_2^\prime)$ shows that $\nu_-(T_C)=\kappa_1=\nu_-(J_1-T^*J_2T)$. Now, the statement is obtained from Theorem \[T:1\] or, more directly, just by applying Theorem \[thmB\]. \(i) The above proof, which essentially makes use of an associated $2\times 2$ block operator $T_C$ (being a special case of the block operator $H$ behind Theorem \[thmB\]), is new even in the case of Hilbert space contractions. In particular, it shows that the operator $K$ in Lemma \[corL1\] coincides with the operator $S$ that gives the minimal solution $S^*J_{T}S$ to the completion problem associated with $T_C$; the $J$-contractivity of $K$ itself is equivalent to the fact that $T_C$ is also a solution precisely when $\wt\kappa_1=\kappa_1-\nu_-(J_2^\prime)$. \(ii) The existence of a solution to Problem $\textbf{$(*)_c$}$ is proved here using only the condition $\wt\kappa_1=\kappa_1-\nu_-(J_2^\prime)\,(\ge 0)$. The corresponding result in [@CG89 Lemma 2.2] is formulated (and formally also proved) under the additional condition $\wt\kappa_2=\kappa_2$. In the case that $\nu_-(J_1)<\infty$ the equality $\wt\kappa_2=\kappa_2$ follows automatically from the equality $\wt\kappa_1=\kappa_1-\nu_-(J_2^\prime)$: to see this, apply to $T$ and $T_c$, which leads to $\nu_-(J_1)+\kappa_2=\nu_-(J_1)+\wt\kappa_2$, so that $\nu_-(J_1)<\infty$ implies $\kappa_2=\wt\kappa_2$.
Naturally, in Lemma \[corL1\] the condition $\wt\kappa_2=\kappa_2$ follows from the condition $\wt\kappa_1=\kappa_1-\nu_-(J_2^\prime)$ also in the case where $\nu_-(J_1)=\infty$; see Corollary \[k2cor\] below. Finally, it is mentioned that for a Pontryagin space operator $T$ the result in Lemma \[corL1\] was proved in [@ACG87 Lemma 5.2]. In a dual manner we can treat the following row extension problem; again initially considered in [@ACG87; @CG89]: $\textbf{$(*)_r$}$ Give a description of all operators $T_r=\begin{pmatrix}T & R\end{pmatrix}\in [\sH_1\oplus\sH'_1,\sH_2]$, such that $\nu_-(J_2-T_r\wt J_1 T_r^*)=\wt\kappa_2\,(<\infty)$. Analogously to the case of column operators, $J_2-T_r\wt J_1T_r^*=J_2-TJ_1T^*-RJ_1^\prime R^*$ gives the estimate $$ \wt\kappa_2 \ge \kappa_2-\nu_-(RJ_1^\prime R^*) \ge \kappa_2-\nu_-(J_1^\prime).$$Moreover, it is clear that $\wt\kappa_1\ge \kappa_1$. With the minimal value of $\wt \kappa_2$ all solutions to Problem $\textbf{$(*)_r$}$ are established by applying Theorem \[T:1\] to an associated $2\times 2$ block operator $T_R$. \[corL2\] Let $\wt\kappa_2=\nu_-(J_2-T_r\wt J_1T_r^*)$ and assume that $\wt\kappa_2=\kappa_2 - \nu_-(J_1^\prime)(\ge 0)$. Then $\ran R\subset \ran D_{T^*}$ and the formula $$T_r=\begin{pmatrix}T & D_{T^*} B\end{pmatrix}$$ establishes a one-to-one correspondence between the set of all solutions to Problem $\textbf{$(*)_r$}$ and the set of all $J$-contractions $B\in[\sH'_1,\sD_{T^*}]$. To prove the statement via Theorem \[thmB\] (or Theorem \[T:1\]) consider $$T_R:=\begin{pmatrix} J_2-TJ_1T^* & R \\ R^* & J_1^\prime \end{pmatrix} = \begin{pmatrix} I & RJ_1^\prime \\ 0 & I \end{pmatrix} \begin{pmatrix} J_2-T_r\wt J_1T_r^* & 0 \\ 0 & J_1^\prime \end{pmatrix} \begin{pmatrix} I & 0 \\ J_1^\prime R^* & I \end{pmatrix}.$$ Then clearly $\nu_-(T_R)=\nu_-(J_2-T_r\wt J_1T_r^*)+\nu_-(J_1^\prime)$ and hence the assumption $\wt\kappa_2=\kappa_2 - \nu_-(J_1^\prime)$ is equivalent to $\nu_-(T_R)=\kappa_2=\nu_-(J_2-TJ_1T^*)$.
Hence, again the statement follows from Theorem \[thmB\]. Remarks similar to those made after Lemma \[corL1\] can be made here, too. In particular, the corresponding result in [@CG89 Lemma 2.1] is formulated under the additional condition $\wt\kappa_1=\kappa_1$: here this equality will be a consequence of the equality $\wt\kappa_2=\kappa_2 - \nu_-(J_1^\prime)$; cf. Corollary \[k2cor\] below. To prove the main result concerning parametrization of all $2\times 2$ liftings in a larger Kreĭn space with minimal signature for the defect operators an indefinite version of the commutation relation of the form $TD_T=D_{T^*}T$ is needed; this involves the so-called link operators introduced in [@ACG87 Section 4]. We will give a simple proof for the construction of link operators (see [@ACG87 Proposition 4.1]) by applying the Heinz inequality combined with the basic factorization result from [@Douglas]. The first step is formulated in the next lemma, which is connected to a result of M.G. Kreĭn [@Kr47b] concerning continuity of a bounded Banach space operator which is symmetric w.r.t. a continuous definite inner product; the existence of the link operators was proved in [@ACG87] via this result of Kreĭn. Here a statement, analogous to that of Kreĭn, is formulated in pure Hilbert space operator language by using the modulus of the product operator; see [@DritRov90 Lemma B2], where Kreĭn’s result is presented with a proof due to W.T. Reid. \[2norms\] Let $S\in[\sH_1,\sH_2]$ and let $H\in[\sH_2]$ be nonnegative. Then $$HS=(HS)^* \quad\Rightarrow\quad |HS| \le \mu H \text{ for some } \mu <\infty.$$ Since $HS$ is selfadjoint, one obtains $$(HS)^2=HSS^*H \le \mu^2 H^2, \quad \mu = \|S\|<\infty.$$ Now by the Heinz inequality (see e.g. [@BirSol87 Theorem 10.4.2]) we get $$|HS|=(HSS^*H)^{1/2} \le \mu H. \qedhere$$ \[linkoper\] Let $T\in[\sH_1,\sH_2]$ and let $J_1$ and $J_2$ be symmetries in $\sH_1$ and $\sH_2$ as above.
Then there exist unique operators $L_T\in [\sD_T,\sD_{T^*}]$ and $L_{T^*}\in [\sD_{T^*},\sD_T]$ such that $$D_{T^*}L_T=TJ_1 D_T\uphar \sD_T, \quad D_T L_{T^*}=T^*J_2 D_{T^*}\uphar \sD_{T^*};$$ in fact, $L_T=D_{T^*}^{[-1]}TJ_1 D_T\uphar \sD_T$ and $L_{T^*}=D_T^{[-1]}T^*J_2 D_{T^*}\uphar \sD_{T^*}$. Denote $S=J_{T^*}J_2TJ_TJ_1T^*$. Then the identities imply that $$ D_{T^*}^2 S=(J_2-TJ_1T^*)J_2TJ_TJ_1T^* =TJ_1(J_1-T^*J_2T)J_TJ_1T^* =TJ_1D_T^2J_1T^*\ge 0,$$so that $D_{T^*}^2 S$ is nonnegative and, in particular, selfadjoint. By Lemma \[2norms\] with $\mu=\|S\|$ one has $$0\le TJ_1D_T^2J_1T^*=D_{T^*}^2 S \le \mu D_{T^*}^2.$$ This last inequality is equivalent to the factorization $TJ_1D_T\uphar \sD_T=D_{T^*} L_T$ with a unique operator $L_T\in [\sD_T,\sD_{T^*}]$, see [@Douglas Theorem 1], which by means of the Moore-Penrose generalized inverse can be rewritten as indicated. The second formula is obtained by applying the first one to $T^*$. The following identities can be obtained with direct calculations; see [@ACG87 Section 4]: $$\label{Link2} \begin{array}{c} L_T^* J_{T^*}\uphar \sD_{T^*}=J_{T}L_{T^*};\\ (J_T-D_TJ_1D_T)\uphar \sD_T=L_T^*J_{T^*}L_T;\\ (J_{T^*}-D_{T^*}J_2D_{T^*})\uphar \sD_{T^*}=L_{T^*}^* J_T L_{T^*}. \end{array}$$ The next corollary contains the promised identity $\wt\kappa_1=\kappa_1$ under the assumption $\wt\kappa_2=\kappa_2-\nu_-(J_1^\prime)\ge 0$ in Lemma \[corL2\]. Similarly $\wt\kappa_1=\kappa_1-\nu_-(J_2^\prime)$ implies $\wt\kappa_2=\kappa_2$; the general result for the first case can be formulated as follows (and there is a similar result for the latter case). \[k2cor\] Let $R$ be a bounded operator such that $\ran R\subset \ran D_{T^*}$ and let $T_r$ be the corresponding row operator and denote $\wt\kappa_1=\nu_-(\wt J_1-T_r^*J_2T_r)$. Then $R=D_{T^*}B$ for a (unique) bounded operator $B\in[\sH'_1,\sD_{T^*}]$ and $$\wt\kappa_1=\kappa_1+\nu_-(J_1^\prime- B^*J_{T^*} B).$$ In particular, $J$-contractivity of $B$ is equivalent to $\wt\kappa_1=\kappa_1$.
Recall that $\ran R\subset \ran D_{T^*}$ is equivalent to the factorization $R=D_{T^*}B$. By applying the commutation relations in Corollary \[linkoper\] together with the identities one gets the following expression for $ J_{T_r}D_{T_r}^2$: $$\label{Jtr} \begin{array}{rl} J_{T_r}D_{T_r}^2 & = \begin{pmatrix} J_1-T^*J_2T & -T^*J_2D_{T^*}B \\ -B^*D_{T^*} J_2 T & J_1^\prime - B^*D_{T^*} J_2 D_{T^*}B\end{pmatrix} \\ & = \begin{pmatrix} J_TD_T^2 & -D_T L_{T^*}B \\ -B^*L^*_{T^*} D_T & J_BD_{B}^2 + B^*L^*_{T^*} J_T L_{T^*}B\end{pmatrix}. \end{array}$$ Now apply Proposition \[sylvester\] and calculate the Schur complement, cf. , $$J_BD_{B}^2 + B^*L^*_{T^*} J_T L_{T^*}B -B^*L^*_{T^*} D_T (D_T^{[-1]}J_TD_T^{[-1]}) D_T L_{T^*}B =J_BD_{B}^2,$$ to see that $\wt\kappa_1=\nu_-(J_1-T^*J_2T)+\nu_-(J_1^\prime- B^*J_{T^*} B)$. By means of Lemmas \[corL1\], \[corL2\] and the link operators in Corollary \[linkoper\] one can now establish the main result concerning the lifting problem $(*)$. First notice that if Problem $(*)$ has a solution, then by treating $\wt T$ as a row extension of its first column $T_c$ and as a column extension of its first row $T_r$ one gets from the inequalities preceding Lemmas \[corL1\], \[corL2\] the estimates $$\label{iner03} \begin{array}{l} \wt\kappa_1\ge \kappa_1(T_r)-\nu_-(J_2^\prime)\ge \kappa_1-\nu_-(J_2^\prime); \\ \wt\kappa_2\ge \kappa_2(T_c)-\nu_-(J_1^\prime)\ge \kappa_2-\nu_-(J_1^\prime). \end{array}$$ Under the minimal choice of the indices $\wt\kappa_1$ and $\wt\kappa_2$ Problem $(*)$ is already solvable; all solutions are described by the following result, which was initially proved in [@CG89 Theorem 2.3] with the aid of [@ACG87 Theorem 5.3]. Here a different proof is presented, again based on an application of Theorem \[T:1\]. \[Lifthm\] Let $\wt T$ be a bounded operator from $(\sH_1\oplus\sH_1^\prime,J_1\oplus J_1^\prime)$ to $(\sH_2\oplus\sH_2^\prime,J_2\oplus J_2^\prime)$ such that $P_2 \wt T\uphar \sH_1 = T$. 
Assume that $0\leq \kappa_1 - \nu_-(J_2^\prime)=\wt\kappa_1<\infty$ and $0\leq \kappa_2 - \nu_-(J_1^\prime)=\wt\kappa_2<\infty$. Then Problem $(*)$ is solvable and the formula $$\wt T=\begin{pmatrix}T & D_{T^*} \Gamma_1 \\ \Gamma_2 D_T & -\Gamma_2 L_T^* J_{T^*} \Gamma_1 + D_{\Gamma_2^*}\Gamma D_{\Gamma_1}\end{pmatrix}$$ establishes a one-to-one correspondence between the set of all solutions to Problem $(*)$ and the set of triplets $\{\Gamma_1,\Gamma_2,\Gamma\}$ where $\Gamma_1\in[\sH'_1,\sD_{T^*}]$ and $\Gamma_2^*\in[\sH'_2,\sD_T]$ are $J$-contractions and $\Gamma\in[\sD_{\Gamma_1},\sD_{\Gamma_2^*}]$ is a Hilbert space contraction. Assume that there is a solution $\wt T$ to Problem $(*)$ and write it in the form $$\wt T=\begin{pmatrix} T & R \\ C & X \end{pmatrix}$$ with the first column denoted by $T_c$ and first row denoted by $T_r$, and assume that $\wt\kappa_1=\kappa_1 - \nu_-(J_2^\prime)$ and $\wt\kappa_2= \kappa_2 - \nu_-(J_1^\prime)$. Then the estimates show that $\kappa_1=\kappa_1(T_r)$ and $\kappa_2=\kappa_2(T_c)$. Hence Lemma \[corL2\] can be applied by viewing $\wt T$ as a row extension of $T_c$ to get a range inclusion and then from Corollary \[k2cor\] one gets the equality $\wt\kappa_1=\kappa_1(T_c)$. Similarly applying Lemma \[corL1\] and the analog of Corollary \[k2cor\] to the column operator $\wt T$ one gets the equality $\wt\kappa_2=\kappa_2(T_r)$. Thus $\kappa_1(T_c)=\kappa_1 - \nu_-(J_2^\prime)$ and $\kappa_2(T_r)=\kappa_2 -\nu_-(J_1^\prime)$. Consequently, one can apply Lemma \[corL1\] to the first column $T_c$ and Lemma \[corL2\] to the first row $T_r$ to get the stated factorizations $C=\Gamma_2 D_T$ and $R=D_{T^*}\Gamma_1$ with unique $J$-contractions $\Gamma_1$ and $\Gamma_2^*$. To establish a formula for $X$ we proceed by considering the block operator $$H:=\begin{pmatrix} J_{T_r}D_{T_r}^2 & T_{r,2}^* \\ T_{r,2} & J_2^\prime \end{pmatrix},$$ where $T_{r,2}$ denotes the second row of $\wt T$.
It is straightforward to derive the following formula for the Schur complement $$J_{T_r}D_{T_r}^2-T_{r,2}^* J_2^\prime T_{r,2}=\wt J_1-\wt T^*\wt J_2 \wt T.$$ Thus $\nu_-(H)=\wt\kappa_1+\nu_-(J_2^\prime)=\kappa_1=\nu_-(J_{T_r})$ and one can apply Theorem \[T:1\] to get the factorization $T_{r,2}^*=D_{T_r} \wt K$ with a unique $\wt K\in[\sH_2^\prime,\sD_{T_r}]$ satisfying $\wt K^* J_{T_r} \wt K \le J_2^\prime$, i.e., $\wt K$ is a $J$-contraction; see Theorem \[thmB\]. It follows from that $$J_{T_r}D_{T_r}^2 = \begin{pmatrix} D_T & 0 \\ -\Gamma_1^* L^*_{T^*}J_T & D_{\Gamma_1}\end{pmatrix} \begin{pmatrix} J_T & 0 \\ 0 & I_{D_{\Gamma_1}} \end{pmatrix} \begin{pmatrix} D_T & -J_T L_{T^*}\Gamma_1 \\ 0 & D_{\Gamma_1}\end{pmatrix} =:B^*\wh J B.$$ Since here $\nu_-(J_{T_r})=\kappa_1=\nu_-(J_{T})$ and $B$ is a triangular operator whose range is dense in $\sD_T\oplus \sD_{\Gamma_1}$ (the diagonal entries $D_T$ and $D_{\Gamma_1}$ of $B$ have dense ranges by definition), there is a unique Pontryagin space $J$-unitary operator $U$ from $\sD_{T_r}$ onto $\sD_T\oplus \sD_{\Gamma_1}$ such that $B=UD_{T_r}$; see Proposition \[BKcor1\] (ii). It follows that $K^*:=(U^{-1})^*\wt K$ is a $J$-contraction from $\sH_2^\prime$ to $\sD_T\oplus \sD_{\Gamma_1}$ and $KB=\wt K^* D_{T_r}=T_{r,2}$. Now $J_2^\prime-K\wh J K^*\ge 0$ gives $$\label{eqKK} 0\le K_1K_1^*\le J_2^\prime-K_0 J_T K_0^*, $$ where $K=(K_0\,K_1)$ is considered as a row operator, and $T_{r,2}=KB$ reads as $$\Gamma_2D_{T}=K_0 D_T, \quad X=-K_0 J_T L_{T^*}\Gamma_1 + K_1 D_{\Gamma_1}.$$ Since all contractions that are involved are unique, $K_0=\Gamma_2$, $J_2^\prime-K_0 J_T K_0^* = D_{\Gamma_2^*}^2$, and implies that there is a unique Hilbert space contraction $\Gamma\in[\sD_{\Gamma_1},\sD_{\Gamma_2^*}]$ such that $K_1=D_{\Gamma_2^*}\Gamma$. The desired formula for $\wt T$ is proven (cf. ). 
It is clear from the proof that every operator $\wt T$ of the stated form is a solution and that there is a one-to-one correspondence via the triplets $\{\Gamma_1,\Gamma_2,\Gamma\}$ of $J$-contractions. \(i) By replacing $\wt T$ with its adjoint $\wt T^*$ it is clear that all formulas remain the same and are obtained by changing $T$ with $T^*$ and interchanging the roles of the indices $1$ and $2$; see also . This connects the considerations with row and column operators to each other. \(ii) If $\kappa_1=0$ so that $J_1-T^*J_2T\ge 0$, then the above proof becomes slightly simpler since then $J_{T_r}$, $J_T$, and $J_2^\prime$ are identity operators and $\wt K$ is a Hilbert space contraction. Then Theorem \[Lifthm\] gives all contractive liftings of a contraction in a Kreĭn space. If in addition $\kappa_2=0$, then one gets all bicontractive liftings of a bicontraction in a Kreĭn space with Pontryagin spaces as exit spaces. In the special case where the exit spaces are Hilbert spaces ($\nu_-(J_1)=\nu_-(J_2)=0$ and $\kappa_1=\kappa_2=0$) Theorem \[Lifthm\] coincides with [@Drit90 Theorem 3.6]. In fact, the present proof can be seen as a further development of the proof appearing in that paper; see also further references and historical remarks given in [@Drit90; @DritRov90]. Contractive extensions of contractions with minimal negative indices {#sec4} ==================================================================== Let $\sH_1$ be a closed linear subspace of the Hilbert space $\sH$, let $T_{11}=T_{11}^*\in[\sH_1]$ be an operator such that $\nu_-(I-T_{11}^2)=\kappa<\infty$. Denote $$\label{JJpm} J=\sign(I-T_{11}^2), \quad J_+=\sign(I-T_{11}),\, \text{ and }\, J_-=\sign(I+T_{11}),$$ and let $\kappa_+=\nu_-(I-T_{11})$ and $\kappa_-=\nu_-(I+T_{11})$. It is obvious that $J=J_-J_+=J_+J_-$. Moreover, there is an equality $\kappa=\kappa_- +\kappa_+$ as stated in the next lemma.
\[sign\] Let $T=T^*\in[\sH_1]$ be an operator such that $\nu_-(I-T^2)=\kappa<\infty$; then $\nu_-(I-T^2)=\nu_-(I+T)+\nu_-(I-T).$ Let $E_t(\cdot)$ be the resolution of identity of $T$. Then by the spectral mapping theorem the spectral subspace corresponding to the negative spectrum of $I-T^2$ is given by $E_t((-\infty;-1)\cup(1;\infty))=E_t((-\infty;-1))\oplus E_t((1;\infty))$. Consequently, $\nu_-(I-T^2)=\dim E_t((-\infty;-1))+\dim E_t((1;\infty))=\nu_-(I+T)+\nu_-(I-T)$. The next problem concerns the existence and a description of selfadjoint operators $T$ such that $\wt A_+=I+T$ and $\wt A_-=I-T$ solve the corresponding completion problems $$\label{E:A} A_{\pm}^0= \begin{pmatrix} I\pm T_{11}&\pm T_{21}^*\\ \pm T_{21}&\ast \end{pmatrix},$$ under *minimal index conditions* $\nu_-(I+T)=\nu_-(I+T_{11})$, $\nu_-(I-T)=\nu_-(I-T_{11})$, respectively. Observe that if $I\pm T$ provides an arbitrary completion to $A_{\pm}^0$ then clearly $\nu_-(I \pm T)\ge \nu_-(I \pm T_{11})$. Thus by Lemma \[sign\] the two minimal index conditions above are equivalent to the single condition $\nu_-(I-T^2)=\nu_-(I-T_{11}^2)$. Unlike in the case of a selfadjoint contraction $T_{11}$, this problem need not have solutions when $\nu_-(I-T_{11}^2)>0$. It is clear from Theorem \[T:1\] that the conditions $\ran T_{21}^*\subset \ran|I-T_{11}|^{1/2}$ and $\ran T_{21}^*\subset \ran|I+T_{11}|^{1/2}$ are necessary for the existence of solutions; however, alone they are not sufficient. The next theorem gives a general solvability criterion for the completion problem and describes all solutions to this problem. As in the definite case, there are minimal solutions $A_+$ and $A_-$ which are connected to two extreme selfadjoint extensions $T$ of $$\label{Tcol} T_1=\begin{pmatrix} T_{11}\\ T_{21} \end{pmatrix}:\sH_1\to\begin{pmatrix}\sH_1\\\sH_2\end{pmatrix},$$ now with finite negative index $\nu_-(I-T^2)=\nu_-(I-T_{11}^2)>0$.
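Before turning to the theorem, Lemma \[sign\] is immediate to confirm numerically on a diagonal matrix; the entries below are an ad hoc example, not taken from the text.

```python
import numpy as np

def nu_minus(M):
    """Number of negative eigenvalues of a symmetric matrix M."""
    return int(np.sum(np.linalg.eigvalsh(M) < -1e-10))

T = np.diag([-2.0, -0.5, 0.3, 1.5, 4.0])   # selfadjoint; one eigenvalue below -1, two above 1
I = np.eye(5)

assert nu_minus(I + T) == 1                # kappa_- : eigenvalues of T in (-inf, -1)
assert nu_minus(I - T) == 2                # kappa_+ : eigenvalues of T in (1, inf)
assert nu_minus(I - T @ T) == 3            # kappa = kappa_- + kappa_+, cf. Lemma [sign]
```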
The set of all solutions $T$ to the problem will be denoted by $\Ext_{T_1,\kappa}(-1,1)$. \[T:contr\] Let $T_1$ be a symmetric operator as in with $T_{11}=T_{11}^*\in[\sH_1]$ and $\nu_-(I-T_{11}^2)=\kappa<\infty$, and let $J=\sign(I-T_{11}^2)$. Then the completion problem for $A_{\pm}^0$ in has a solution $I\pm T$ for some $T=T^*$ with $\nu_-(I-T^2)=\kappa$ if and only if the following condition is satisfied: $$\label{crit} \nu_-(I-T_{11}^2)=\nu_-(I-T_1^*T_1).$$ If this condition is satisfied then the following facts hold: 1. The completion problems for $A_{\pm}^0$ in have minimal solutions $A_\pm$. 2. The operators $T_m:=A_+-I$ and $T_M:=I-A_-$ belong to $\Ext_{T_1,\kappa}(-1,1)$. 3. The operators $T_m$ and $T_M$ have the block form $$\label{E:T} T_m= \begin{pmatrix} T_{11}&D_{T_{11}}V^*\\ VD_{T_{11}}&-I+V(I-T_{11})JV^* \end{pmatrix},\ T_M= \begin{pmatrix} T_{11}&D_{T_{11}}V^*\\ VD_{T_{11}}&I-V(I+T_{11})JV^* \end{pmatrix},$$ where $D_{T_{11}}:=|I-T_{11}^2|^{1/2}$ and $V$ is given by $V:=\clos(T_{21}D_{T_{11}}^{[-1]})$. 4. The operators $T_m$ and $T_M$ are extremal extensions of $T_1$: $$\label{E:Ext} T\in\Ext_{T_1,\kappa}(-1,1)\ \text{ iff }\ T=T^*\in[\sH],\quad T_m\leq T\leq T_M.$$ 5. The operators $T_m$ and $T_M$ are connected via $$\label{E:5} (-T)_m=-T_M, \quad (-T)_M=-T_m.$$ It is easy to see that $\kappa=\nu_-(I-T_{11}^2)\le \nu_-(I-T_1^*T_1)\le \nu_-(I-T^2)$. Hence the condition $\nu_-(I-T^2)=\kappa$ implies . The sufficiency of this condition is established while proving the assertions (i)–(iii) below. \(i) If the condition is satisfied then $\ran T_{21}^*\subset \ran|I- T_{11}^2|^{1/2}$ by Lemma \[corL1\]. In fact, this inclusion is equivalent to the inclusions $\ran T_{21}^*\subset \ran|I\pm T_{11}|^{1/2}$, which by Theorem \[T:1\] means that both of the completion problems, $A_{\pm}^0$ in , are solvable.
Consequently, the following operators $$\label{E:S pm} S_-=|I+T_{11}|^{[-1/2]}T_{21}^*,\quad S_+=|I-T_{11}|^{[-1/2]}T_{21}^*$$ are well defined and they provide the minimal solutions $A_\pm$ to the completion problems for $A_\pm^0$ in . Notice that the assumption that there is a simultaneous solution $I\pm T$ with a *single* selfadjoint operator $T$ is not yet used here. \(ii) & (iii) The proof of (i) shows that the inclusion $\ran T_{21}^*\subset \ran|I- T_{11}^2|^{1/2}$ holds. This last inclusion alone is equivalent to the existence of a (unique) bounded operator $V^*=D_{T_{11}}^{[-1]}T_{21}^*$ with $\ker V\supset \ker D_{T_{11}}$, such that $T_{21}^*=D_{T_{11}}V^*$. The operators $T_m:=A_+-I$ and $T_M:=I-A_-$ (see proof of (i)) can now be rewritten as in . Observe that $$S_\mp=|I\pm T_{11}|^{[-1/2]}D_{T_{11}}V^*=P_\mp|I\mp T_{11}|^{1/2}V^*=|I\mp T_{11}|^{1/2}P_\mp V^*,$$ where $P_\mp$ are the orthogonal projections onto $$(\ker|I\pm T_{11}|^{1/2})^{\perp}=(\ker|I\pm T_{11}|)^{\perp}={\overline}{\ran}|I\pm T_{11}|={\overline}{\ran}|I\pm T_{11}|^{1/2}.$$ Since $\ker V\supset \ker D_{T_{11}}$ implies ${\overline}{\ran}V^*\subset {\overline}\ran D_{T_{11}}\subset {\overline}{\ran}|I\pm T_{11}|^{1/2}$, it follows that $$S_-=|I-T_{11}|^{1/2}V^*,\quad S_+=|I+T_{11}|^{1/2}V^*.$$ Consequently, see , $$S_-^*J_-S_-=V|I-T_{11}|^{1/2}J_-|I-T_{11}|^{1/2}V^*=V(I-T_{11})JV^*,$$ $$S_+^*J_+S_+=V|I+T_{11}|^{1/2}J_+|I+T_{11}|^{1/2}V^*=V(I+T_{11})JV^*,$$ which implies the representations for $T_m$ and $T_M$ in . Clearly, $T_m$ and $T_M$ are selfadjoint extensions of $T_1$, which satisfy the equalities $$\nu_-(I+T_{m})=\kappa_-,\quad \nu_-(I-T_{M})=\kappa_+.$$ Moreover, it follows from that $$\label{E:6} T_M-T_m= \begin{pmatrix} 0&0\\ 0&2(I-VJV^*) \end{pmatrix}.$$ Now the assumption will be used again. Since $\nu_-(I-T_{1}^*T_{1})=\nu_-(I-T_{11}^2)$ and $T_{21}=VD_{T_{11}}$ it follows from Lemma \[corL1\] that $V^*\in[\sH_2,\sD_{T_{11}}]$ is $J$-contractive: $I-VJV^*\ge 0$.
Therefore, it follows that $T_M\geq T_m$ and $I+T_M\geq I+T_m$; hence, in addition to $I+T_m$, also $I+T_M$ is a solution to the problem $A_{+}^0$ and, in particular, $\nu_-(I+T_M)=\kappa_-=\nu_-(I+T_m)$. Similarly, $I-T_M\le I-T_m$ which implies that $I-T_m$ is also a solution to the problem $A_{-}^0$, in particular, $\nu_-(I-T_m)=\kappa_+=\nu_-(I-T_M)$. Now by applying Lemma \[sign\] we get $$\nu_-(I-T_m^2)=\nu_-(I-T_m)+\nu_-(I+T_m)=\kappa_++\kappa_-=\kappa,$$ $$\nu_-(I-T_M^2)=\nu_-(I-T_M)+\nu_-(I+T_M)=\kappa_++\kappa_-=\kappa.$$ Therefore, $T_m,T_M\in \Ext_{T_1,\kappa}(-1,1)$ which in particular proves that the condition is sufficient for solvability of the completion problem . \(iv) Observe that $T\in\Ext_{T_1,\kappa}(-1,1)$ if and only if $T=T^*\supset T_1$ and $\nu_-(I\pm T)=\kappa_\mp$. By Theorem \[T:1\] this is equivalent to $$\label{E:7} S_-^*J_-S_--I\leq T_{22}\leq I-S_+^*J_+S_+.$$ The inequalities are equivalent to . \(v) The relations follow from and . For a Hilbert space contraction $T_1$ one has $\nu_-(I-T_{11}^2)\le \nu_-(I-T_1^*T_1)=0$, i.e., the criterion is automatically satisfied. In this case Theorem \[T:contr\] has been proved in [@HMS04]. As Theorem \[T:contr\] shows, under the minimal index condition $\nu_-(I-T^2)=\nu_-(I-T_{11}^2)$, the solution set $\Ext_{T_1,\kappa}(-1,1)$ admits the same attractive description as an operator interval determined by the two extreme extensions $T_m$ and $T_M$ as was originally proved by M.G. Kreĭn in his famous paper [@Kr47] when describing all contractive selfadjoint extensions of a Hilbert space contraction. In particular, Theorem \[T:contr\] shows that if there is a solution to the completion problem , i.e. if $T_1$ satisfies the index condition , then all selfadjoint extensions $T$ of $T_1$ satisfying the equality $\nu_-(I-T^2)= \nu_-(I-T_1^*T_1)$ are determined by the operator inequalities $T_m\le T\le T_M$.
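The extreme extensions of Theorem \[T:contr\] can be computed explicitly in small dimensions. In the following sketch the data $T_{11}$, $T_{21}$ are an ad hoc choice satisfying the index condition (they are not taken from the text); the block formulas for $T_m$ and $T_M$ are then evaluated and the asserted properties are checked numerically.

```python
import numpy as np

def nu_minus(M):
    """Number of negative eigenvalues of a symmetric matrix M."""
    return int(np.sum(np.linalg.eigvalsh(M) < -1e-8))

T11 = np.diag([2.0, 0.0])                  # nu_-(I - T11^2) = kappa = 1
T21 = np.array([[0.1, 0.2]])               # H2 is one-dimensional here
T1 = np.vstack([T11, T21])                 # the column operator T_1

kappa = nu_minus(np.eye(2) - T11 @ T11)
assert nu_minus(np.eye(2) - T1.T @ T1) == kappa == 1   # the solvability criterion

d = np.diag(np.eye(2) - T11 @ T11)         # spectrum of I - T11^2 (diagonal case)
J = np.diag(np.sign(d))                    # J  = sign(I - T11^2)
D = np.diag(np.sqrt(np.abs(d)))            # D_{T11} = |I - T11^2|^{1/2}
V = T21 @ np.linalg.inv(D)                 # V = T21 D_{T11}^{[-1]}

I2, I1 = np.eye(2), np.eye(1)
Tm = np.block([[T11, D @ V.T], [V @ D, -I1 + V @ (I2 - T11) @ J @ V.T]])
TM = np.block([[T11, D @ V.T], [V @ D,  I1 - V @ (I2 + T11) @ J @ V.T]])

I3 = np.eye(3)
assert nu_minus(I3 - Tm @ Tm) == kappa     # T_m lies in Ext_{T_1,kappa}(-1,1)
assert nu_minus(I3 - TM @ TM) == kappa     # and so does T_M
assert np.all(np.linalg.eigvalsh(TM - Tm) >= -1e-10)   # T_m <= T_M
```

Note that $I+T_m$ and $I-T_M$ come out singular, reflecting that $T_m$ and $T_M$ are the extreme (minimal-completion) extensions.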
Notice that $T$ belongs to the solution set $\Ext_{T_1,\kappa}(-1,1)$ precisely when $T=T^*\supset T_1$ and $\nu_-(I\pm T)=\kappa_\mp$. This means that every selfadjoint extension of $T_1$ for which $\nu_-(I-T^2)=\nu_-(I-T_1^*T_1)$ admits precisely $\kappa_-$ eigenvalues in the interval $(-\infty,-1)$ and $\kappa_+$ eigenvalues in the interval $(1,\infty)$; in total there are $\kappa=\kappa_- +\kappa_+$ eigenvalues outside the closed interval $[-1,1]$. The fact that the numbers $\kappa_\mp=\nu_-(I\pm T)$ are constant in the solution set $\Ext_{T_1,\kappa}(-1,1)$ is crucial for dealing properly with the Cayley transforms in the next section. A generalization of M.G. Kreĭn’s approach to the extension theory of nonnegative operators {#sec5} ========================================================================================== Some antitonicity theorems for selfadjoint relations ---------------------------------------------------- The notion of inertia of a selfadjoint relation in a Hilbert space is defined by means of its associated spectral measure. In what follows the Hilbert space is assumed to be separable. Let $H$ be a selfadjoint relation in a separable Hilbert space $\sH$ and let $E_t(\cdot)$ be the spectral measure of $H$. The inertia of $H$ is defined as the ordered quadruplet $\sfi(H)=\bigl\{\sfi^+(H),\sfi^-(H),\sfi^0(H),\sfi^\infty(H)\bigr\}$, where $$\begin{split} \sfi^+(H)&=\dim \ran E_t((0,\infty)),\qquad \sfi^-(H)=\dim \ran E_t((-\infty,0)),\\ \sfi^0(H)&=\dim\ker H,\qquad\qquad\quad\, \sfi^\infty(H)=\dim \mul H. \end{split}$$ In particular, for a selfadjoint relation $H$ in $\dC^n$, the quadruplet $\sfi(H)$ consists of the numbers of positive, negative, zero, and infinite eigenvalues of $H$; cf. [@BHSW2014]. Hence, if $H$ is a selfadjoint matrix in $\dC^n$, then ${\sf i}^\infty(H)=0$ and the remaining numbers make up the usual inertia of $H$.
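For a selfadjoint matrix the quadruplet reduces to eigenvalue counts; a minimal illustration with an ad hoc matrix:

```python
import numpy as np

H = np.diag([5.0, -3.0, -1.0, 0.0])       # selfadjoint matrix in C^4
w = np.linalg.eigvalsh(H)
tol = 1e-10
i_plus  = int(np.sum(w >  tol))           # i^+(H)
i_minus = int(np.sum(w < -tol))           # i^-(H)
i_zero  = int(np.sum(np.abs(w) <= tol))   # i^0(H) = dim ker H
# a matrix is an everywhere defined operator, so i^infty(H) = dim mul H = 0
assert (i_plus, i_minus, i_zero) == (1, 2, 1)
```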
The following theorem characterizes the validity of the implication $$H_1\le H_2 \quad\Rightarrow \quad H_2^{-1}\le H_1^{-1}$$ for a pair of bounded selfadjoint operators $H_1$ and $H_2$ having bounded inverses; in the infinite-dimensional case it was proved independently in [@S91; @DM91b; @HaNo0]; cf. also [@HaNo]. Some extensions of this result, where the condition $\min\{\sfi^+_2,\sfi^-_1\}<\infty$ is relaxed, are also contained in [@S91; @HaNo0; @HaNo]. \[antith2\] Let $H_1$ and $H_2$ be bounded and boundedly invertible selfadjoint operators in a separable Hilbert space $\sH$. Let $\sfi(H_j)=\{\sfi^+_j,\sfi^-_j,\sfi^0_j,\sfi^\infty_j\}$ be the inertia of $H_j$, $j=1,2$, and assume that $\min\{\sfi^+_2,\sfi^-_1\}<\infty$ and that $H_1\leq H_2$. Then $$H_2^{-1} \leq H_1^{-1} \quad \textrm{if and only if} \quad {\sf i}(H_1) = {\sf i}(H_2).$$ Very recently two extensions of Theorem \[antith2\] have been established in [@BHSW2014] for a general pair of selfadjoint operators and relations without any invertibility assumptions. For the present purposes we need the second main antitonicity theorem from [@BHSW2014], which reads as follows. \[antinew2\] Let $H_1$ and $H_2$ be selfadjoint relations in a separable Hilbert space $\sH$ which are semibounded from below. Let $\sfi(H_j)=\{\sfi^+_j,\sfi^-_j,\sfi^0_j,\sfi^\infty_j\}$ be the inertia of $H_j$, $j=1,2$, and assume that $\sfi^-_1<\infty$ and that $H_1 \leq H_2$. Then $$H_2^{-1} \leq H_1^{-1} \quad \textrm{if and only if} \quad {\sf i}_1^- = {\sf i}_2^-.$$ The ordering appearing in Theorem \[antinew2\] is defined via $$H_1 \leq H_2 \quad \Leftrightarrow \quad 0\le (H_2-aI)^{-1}\le (H_1-aI)^{-1},$$ where $a<\min\{\mu(H_1),\mu(H_2)\}$ is fixed and $\mu(H_i)\in\dR$ stands for the lower bound of $H_i$, $i=1,2$. Notice that the conditions $H_1 \le H_2$ and $\sfi^-_1<\infty$ imply $\sfi^-_2<\infty$; in particular, these conditions already imply that the inverses $H_1^{-1}$ and $H_2^{-1}$ are also semibounded from below.
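In the matrix case the role of the equal-inertia condition in Theorem \[antith2\] is easy to see numerically. The following sketch (ours, for illustration only) exhibits one pair of invertible selfadjoint matrices where antitonicity of the inverse holds and one where it fails:

```python
import numpy as np

def is_psd(M, tol=1e-10):
    """Check M >= 0 for a selfadjoint matrix M."""
    return bool(np.min(np.linalg.eigvalsh(M)) >= -tol)

# H1 <= H2 with equal inertia (1,1,0): the inverses reverse the order.
H1, H2 = np.diag([1.0, -1.0]), np.diag([2.0, -0.5])
assert is_psd(H2 - H1)                                    # H1 <= H2
assert is_psd(np.linalg.inv(H1) - np.linalg.inv(H2))      # H2^{-1} <= H1^{-1}

# H1 <= H2 with different inertias: antitonicity fails.
H1, H2 = np.diag([-1.0, -1.0]), np.diag([1.0, 1.0])
assert is_psd(H2 - H1)
assert not is_psd(np.linalg.inv(H1) - np.linalg.inv(H2))  # order not reversed
```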
For further facts on the ordering of semibounded selfadjoint operators and relations the reader is referred to [@Kato; @BHSW2014].

Cayley transforms
-----------------

Define the linear fractional transformation $\cC$, taking a linear relation $A$ into a linear relation $\cC(A)$, by $$\label{bilin} \cC(A)=\{\, \{f+f',f-f'\} :\, \wh f=\{f,f'\} \in A\,\}= -I+2(I+A)^{-1}.$$ Clearly, $\cC$ maps the (closed) linear relations one-to-one onto themselves, $\cC^{2}=I$, and $$\label{bilin0} \cC(A)^{-1}=\cC(-A),$$ for every linear relation $A$. Moreover, $$ \begin{split} &\dom \cC(A)=\ran (I+A), \quad \ran \cC(A)= \ran (I-A), \\ &\ker (\cC(A)-I)=\ker A, \quad \ker (\cC(A)+I)=\mul A. \end{split}$$ In addition, $\cC$ preserves closures, adjoints, componentwise sums, orthogonal sums, intersections, and inclusions. The relation $\cC(A)$ is symmetric if and only if $A$ is symmetric. It follows from the definition and $$\label{bilin2} \|f+f'\|^2-\|f-f'\|^2=4\RE(f',f)$$ that $\cC$ gives a one-to-one correspondence between nonnegative (selfadjoint) linear relations and symmetric (respectively, selfadjoint) contractions. Observe the following mapping properties of $\cC$ on the extended real line $\dR\cup\{\pm\infty\}$: $$\label{Hin0} \begin{split} &\cC([0,1])=[0,1];\quad \cC([-1,0])=[1,+\infty];\\ &\cC([1,+\infty])=[-1,0];\quad \cC([-\infty,-1])=[-\infty,-1]. \end{split}$$ If $H$ is a selfadjoint relation, then $$ \sfi^-(I+H)=\sfi^-(\cC(H)+I),\qquad \sfi^-(I-H)=\sfi^-(\cC(H)^{-1}+I), $$ and hence $$\label{Hin1} \begin{split} &\sigma(H)\cap (-\infty,-1)=\sigma(\cC(H))\cap (-\infty,-1),\\ &\sigma(H)\cap (1,+\infty)=\sigma(\cC(H)^{-1})\cap (-\infty,-1)=\sigma(\cC(H))\cap (-1,0); \end{split}$$ which can also be seen from .

M.G. Kreĭn’s approach to the extension theory with a minimal negative index
---------------------------------------------------------------------------

The crucial step in M.G.
Kreĭn’s approach to the extension theory of nonnegative operators is the connection to the selfadjoint contractive extensions of a Hermitian contraction $T$ via the Cayley transform in . The extension of this approach to the present indefinite situation is based on the fact that the Cayley transform still reverses the ordering of selfadjoint extensions, due to the antitonicity result formulated in Theorem \[antinew2\], and on the fact that in Theorem \[T:contr\] $T\in\Ext_{T_1,{\kappa}}(-1,1)$ if and only if $T=T^*\supset T_1$ and $\nu_-(I\pm T)={\kappa}_\mp$. A semibounded symmetric relation $A$ is said to be quasi-nonnegative if the associated form $a(f,f):=(f',f)$, $\{f,f'\}\in A$, has a finite number of negative squares, i.e. every $A$-negative subspace $\sL\subset \dom A$ is finite-dimensional. If the maximal dimension of $A$-negative subspaces is finite and equal to $\kappa\in\dZ_+$, then $A$ is said to be $\kappa$-nonnegative; the more precise notations $\nu_-(a)$, $\nu_-(A)$ are used to indicate the maximal number of negative squares of the form $a$ and the relation $A$, respectively; here $\nu_-(a)=\nu_-(A)$. A selfadjoint extension $\wt A$ of $A$ is said to be a $\kappa$-nonnegative extension of $A$ if $\nu_-(\wt A)=\kappa$. The set of all such extensions will be denoted by $\Ext_{A,\kappa}(0,\infty)$. If $A$ is a closed symmetric relation in the Hilbert space $\sH$ with $\nu_-(A)<\infty$, then the subspace $\sH_1:=\ran(I+A)$ is closed, since the Cayley transform $T_1=\cC(A)$ is a closed bounded symmetric operator in $\sH$ with $\dom T_1=\sH_1$. Let $P_1$ be the orthogonal projection onto $\sH_1$ and let $P_2=I-P_1$. Then the form $$\label{defa1} a_1(f,f):=(P_1f',f), \quad \{f,f'\}\in A,$$ is symmetric and it has a finite number of negative squares. \[a1lemma\] Let $A$ be a closed symmetric relation in $\sH$ with $\nu_-(A)<\infty$ and let $T_1=\cC(A)$.
Then the form $a_1$ is given by $$\label{a1form} a_1(f,f)=a(f,f)+\|P_2 f \|^2$$ with $\nu_-(a_1)\le \nu_-(A)$. Moreover, $$4a_1(f,f)=\|g\|^2-\|T_{11}g\|^2, \quad 4a(f,f)=\|g\|^2-\|T_1g\|^2,$$ where $\{f,f'\}\in A$, $g=f+f'$, and $T_{11}=P_1T_1$. In addition, $T_{21}=P_2T_1$ satisfies $\|T_{21}g\|^2=4\|P_2f\|^2=-4(P_2 f',f)$. The formula shows that if $T_1=\cC(A)$ and $\{f,f'\}\in A$, then $$\|g\|^2-\|T_1g\|^2=4 (f',f)=4 a(f,f), \quad g=f+f'\in \dom T_1= \sH_1.$$ Moreover, $T_{21}g=P_2(f-f')=2P_2f=-2P_2f'$ gives $(P_2f',f)=-\|P_2 f\|^2$ and $$\|T_{21}g \|^2=-4(P_2 f',P_2 f)=-4(P_2f',f).$$ In particular, follows from $$a(f,f)=(P_1f',f)+(P_2f',f)=a_1(f,f)-\|P_2 f\|^2.$$ Finally, combined with $\|T_{21}g\|^2=4\|P_2f\|^2$ leads to $$4a_1(f,f)=\|g\|^2-\|T_1g\|^2+\|T_{21}g\|^2=\|g\|^2-\|T_{11}g\|^2. \qedhere$$ The main result in this section concerns the existence and a description of all selfadjoint extensions $\wt A$ of a symmetric relation $A$ for which $\nu_-(\wt A)<\infty$ attains the minimal value $\nu_-(a_1)$. A criterion for the existence of such a selfadjoint extension is established, in which case all such extensions are described in a manner that is familiar from the case of nonnegative operators. To formulate the result, assume that the selfadjoint quasi-contractive extensions $T_m$ and $T_M$ of $T_1$ as in Theorem \[T:contr\] exist, and denote the corresponding selfadjoint relations $A_F$ and $A_K$ by $$\label{1.28} A_F = X(T_{m})=-I+2(I+T_m)^{-1}, \quad A_K = X(T_{M})=-I+2(I+T_M)^{-1}.$$ \[KreinThm\] Let $A$ be a closed symmetric relation in $\sH$ with $\nu_-(A)<\infty$ and denote $\kappa=\nu_-(a_1)\,(\le \nu_-(A))$, where $a_1$ is given by . Then $\Ext_{A,\kappa}(0,\infty)$ is nonempty if and only if $\nu_-(A)=\kappa$. In this case $A_F$ and $A_K$ are well-defined and they belong to $\Ext_{A,\kappa}(0,\infty)$.
Moreover, the formula $$\label{CTA} {\widetilde{A}}=-I+2(I+T)^{-1}$$ gives a bijective correspondence between the quasi-contractive selfadjoint extensions $T\in\Ext_{T_1,{\kappa}}(-1,1)$ of $T_1$ and the selfadjoint extensions ${\widetilde{A}}={\widetilde{A}}^*\in \Ext_{A,\kappa}(0,\infty)$ of $A$. Furthermore, ${\widetilde{A}}={\widetilde{A}}^*\in \Ext_{A,\kappa}(0,\infty)$ precisely when $$\label{AKAF} A_K\le \wt A\le A_F,$$ or equivalently, when $A_F^{-1}\le \wt A^{-1}\le A_K^{-1}$, or $$\label{resolA} (A_F+I)^{-1}\le ({\widetilde{A}}+I)^{-1}\le (A_K+I)^{-1}.$$ The set $\Ext_{A^{-1},\kappa}(0,\infty)$ is also nonempty and $\wt A\in \Ext_{A,\kappa}(0,\infty)$ if and only if $\wt A^{-1}\in \Ext_{A^{-1},\kappa}(0,\infty)$. The extreme selfadjoint extensions $A_F$ and $A_K$ of $A$ are connected to those of $A^{-1}$ via $$\label{symme} (A^{-1})_F=(A_K)^{-1}, \quad (A^{-1})_K=(A_F)^{-1}.$$ Since $\nu_-(A)<\infty$, the Cayley transform $T_1=\cC(A)$ defines a bounded symmetric operator in $\sH$ with $\sH_1=\dom T_1=\ran (I+A)$. It follows from Lemma \[a1lemma\] that $$\nu_-(A)=\nu_-(a)=\nu_-(I-T_1^*T_1), \quad \nu_-(a_1)=\nu_-(I-T_{11}^2),$$ and therefore the condition $\nu_-(A)=\kappa$ is equivalent to the solvability criterion in Theorem \[T:contr\]. Moreover, $\wt A$ is a selfadjoint extension of $A$ if and only if $T=\cC(\wt A)$ is a selfadjoint extension of $T_1$, and by Lemma \[a1lemma\] the equality $\nu_-(\wt A)=\nu_-(I-T^2)$ holds. Therefore, it follows from Theorem \[T:contr\] that the set $\Ext_{A,\kappa}(0,\infty)$ is nonempty if and only if $\nu_-(A)=\kappa$ and in this case the formula establishes a one-to-one correspondence between the sets $\Ext_{A,\kappa}(0,\infty)$ and $\Ext_{T_1,{\kappa}}(-1,1)$. Next the characterizations and for the set $\Ext_{A,\kappa}(0,\infty)$ are established. Let $\wt A\in \Ext_{A,\kappa}(0,\infty)$ and let $T=\cC(\wt A)$.
According to Theorem \[antinew2\] $T=\cC(\wt A)\in \Ext_{T_1,{\kappa}}(-1,1)$ if and only if $T$ satisfies the inequalities $T_m\le T\le T_M$. It is clear from the formulas and that the inequalities $I+T_m\le I+T\le I+T_M$ are equivalent to the inequalities . On the other hand, $\nu_-(I-T_{11}^2)=\nu_-(I-T^2)$ and hence the indices $\kappa_+=\nu_-(I-T_{11})=\nu_-(I-T)$ and $\kappa_-=\nu_-(I+T_{11})=\nu_-(I+T)$ do not depend on $T=\cC(\wt A)$; cf. . The mapping properties of the Cayley transform imply that the numbers of eigenvalues of $\wt A$ on the open intervals $(-\infty,-1)$ and $(-1,0)$ are also constant and equal to $\kappa_-$ and $\kappa_+$, respectively. In particular, since $\kappa_-=\nu_-(I+T)$ is constant we can apply Theorem \[antith2\] to conclude that the inequalities $I+T_m\le I+T\le I+T_M$ are equivalent to $$(I+T_M)^{-1}\le (I+T)^{-1}\le (I+T_m)^{-1},$$ which due to the formulas and can be rewritten as $A_K+I\le \wt A+I \le A_F+I$, or as $A_K\le \wt A\le A_F$. This proves . Since $\nu_-(\wt A)=\kappa=\kappa_-+\kappa_+$ is also constant, an application of Theorem \[antinew2\] shows that the inequalities are also equivalent to $A_F^{-1}\le \wt A^{-1}\le A_K^{-1}$. As to the inverse $A^{-1}$, notice that $\nu_-(A^{-1})=\nu_-(A)$. Moreover, since $A^{-1}=\cC(-T_1)$ it is clear that $\ran (I+A^{-1})=\dom T_1$ and thus the form associated to $A^{-1}$ via satisfies $a_1^{(-1)}(f',f')=(P_1f,f')=(P_1f',f)=a_1(f,f)$. In particular, $\nu_-(a_1^{(-1)})=\nu_-(a_1)$. Consequently, the equality $\nu_-(A)=\nu_-(a_1)$ is equivalent to the equality $\nu_-(A^{-1})=\nu_-(a_1^{(-1)})$. Furthermore, it is clear that $\wt A\in \Ext_{A,\kappa}(0,\infty)$ if and only if $\wt A^{-1}\in \Ext_{A^{-1},\kappa}(0,\infty)$. Finally, the relations are obtained from , , and .
It follows from Theorem \[KreinThm\] that the extensions ${\widetilde{A}}\in \Ext_{A,\kappa}(0,\infty)$ admit a uniform lower bound $\mu\le \mu(\wt A)\,(\mu\le 0)$. Consequently, the resolvents of these extensions satisfy $$\label{inequ} (A_F+a)^{-1}\le ({\widetilde{A}}+ a)^{-1}\le (A_K+a)^{-1}, \quad a>-\mu.$$ This follows from the formula $$({\widetilde{A}}+a)^{-1} = \frac{1}{a-1}\, I-\frac{2}{(a-1)^2}\, \left(T+\frac{a+1}{a-1}\right)^{-1}$$ and the fact that $T=\cC(\wt A)\in \Ext_{T_1,{\kappa}}(-1,1)$ has precisely $\kappa_-$ eigenvalues below the number $-{(a+1)}/{(a-1)}<-1$, so that the inequalities $T_m\leq T\leq T_M$ in Theorem \[T:contr\] imply the inequalities by Theorem \[antith2\]. The antitonicity Theorems \[antith2\] and \[antinew2\] can also be used as follows. If the inequalities and $A_F^{-1}\le \wt A^{-1}\le A_K^{-1}$ hold, then $\kappa=\nu_-(\wt A)=\nu_-(A_K)=\nu_-(A_F)$ is constant. If, in addition, is satisfied, then it follows from that $\kappa_-=\nu_-(I+\wt A)=\nu_-(I+A_K)=\nu_-(I+A_F)$ is constant, so that also $\kappa_+=\nu_-(I-\wt A)=\nu_-(I-A_K)=\nu_-(I-A_F)$ is constant. However, in this case the equality $\nu_-(a_1)=\nu_-(A)$ need not hold and there can also be selfadjoint extensions $\wt A$ of $A$ with $$\nu_-(\wt A)=\nu_-(A_K)=\nu_-(A_F)>\nu_-(A)\ge \nu_-(a_1),$$ which satisfy neither the inequalities and , nor the equalities $\nu_-(I+\wt A)=\kappa_-$ and $\nu_-(I-\wt A)=\kappa_+$. It is emphasized that the result in Theorem \[KreinThm\] characterizes all selfadjoint extensions in $\Ext_{A,\kappa}(0,\infty)$ under the minimal index condition $\kappa=\nu_-(a_1)=\nu_-(A)$. In the case that $A$ is nonnegative one has automatically $\kappa=\nu_-(a_1)=\nu_-(A)=0$. Therefore, Theorem \[KreinThm\] is a precise generalization of the famous characterization of the class $\Ext_{A}(0,\infty)$ (with $\kappa=0$) due to M.G. Kreĭn [@Kr47] to the case of a finite negative (minimal) index $\kappa>0$.
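The Cayley-transform identities used above are easy to sanity-check on matrices. The following numpy sketch (our illustration; the paper itself works with unbounded relations) verifies that $\cC$ is an involution, that $\cC(T)^{-1}=\cC(-T)$, and the resolvent formula for $({\widetilde{A}}+a)^{-1}$:

```python
import numpy as np

def cayley(T):
    """C(T) = -I + 2 (I + T)^{-1}; defined when -1 is not an eigenvalue of T."""
    I = np.eye(T.shape[0])
    return -I + 2.0 * np.linalg.inv(I + T)

T = np.diag([-0.5, 0.2, 0.9])            # a selfadjoint contraction
A = cayley(T)                            # the corresponding nonnegative matrix
I = np.eye(3)

assert np.allclose(cayley(A), T)         # C is an involution: C(C(T)) = T
assert np.allclose(np.linalg.inv(cayley(T)), cayley(-T))   # C(T)^{-1} = C(-T)

a = 3.0                                  # any a in the resolvent set of -A
lhs = np.linalg.inv(A + a * I)
rhs = I / (a - 1.0) - (2.0 / (a - 1.0) ** 2) * np.linalg.inv(T + ((a + 1.0) / (a - 1.0)) * I)
assert np.allclose(lhs, rhs)             # the resolvent formula above
```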
The selfadjoint extensions $A_F$ and $A_K$ of $A$ are called the Friedrichs (hard) and the Kreĭn-von Neumann (soft) extension, respectively; these notions go back to [@F; @JvN]. The extremal properties of the Friedrichs and Kreĭn-von Neumann extensions were discovered by Kreĭn [@Kr47] in the case when $A$ is a densely defined nonnegative operator. The case when $A\ge 0$ is not densely defined was considered by T. Ando and K. Nishio [@AN], and by E.A. Coddington and H.S.V. de Snoo [@CS]. In the nonnegative case the formulas can be found in [@AN] and [@CS].

Kreĭn’s uniqueness criterion
----------------------------

To establish a generalization of Kreĭn’s uniqueness criterion for the equality $A_F=A_K$ in Theorem \[KreinThm\], i.e., for $\Ext_{A,\kappa}(0,\infty)$ to consist of only one extension, we first derive some general facts on $J$-contractions by means of their commutation properties. Let $\sH_1$ and $\sH_2$ be Hilbert spaces with symmetries $J_1$ and $J_2$, respectively, and let $T\in[\sH_1,\sH_2]$ be a $J$-contraction, i.e., $J_1-T^*J_2T\geq 0$. Let $D_T$ and $D_{T^*}$ be the corresponding defect operators and let $J_T$ and $J_{T^*}$ be their signature operators as defined in Section \[sec3\]. The first lemma connects the kernels of the defect operators $D_T$ and $D_{T^*}$. \[lm0\] Let $T\in [\sH_1,\sH_2]$, let $J_i$ be a symmetry in $\sH_i$, $i=1,2$, and let $D_T$ and $D_{T^*}$ be the defect operators of $T$ and $T^*$, respectively. Then $$\label{lm01} J_2T(\ker D_{T})=\ker D_{T^*}, \quad T^*J_2(\ker D_{T^*})=\ker D_{T}.$$ In particular, $$ \ker D_{T}=\{0\} \mbox{ if and only if } \ker D_{T^*}=\{0\}.$$ It suffices to show the first identity in . If ${\varphi}\in \ker D_T=\ker J_TD_T^2$, then the second identity in implies that $J_2T {\varphi}\in \ker J_{T^*}D_{T^*}^2=\ker D_{T^*}$. Hence, $J_2T(\ker D_{T}) \subset \ker D_{T^*}$. Conversely, let ${\varphi}\in \ker D_{T^*}$.
Then $0=J_{T^*}D_{T^*}^2{\varphi}$ or, equivalently, ${\varphi}= J_2TJ_1T^*{\varphi}$, and here $J_1T^*{\varphi}\in \ker D_T$ by the first identity in . This proves the reverse inclusion. \[lm2\] Let the notations be as in Lemma \[lm0\]. Then $$\ran T \cap \ran D_{T^*}=\ran TJ_1D_T=\ran D_{T^*}L_T,$$ where $L_T$ is the link operator defined in Corollary \[linkoper\]. By the commutation formulas in Corollary \[linkoper\] $\ran TJ_1D_T=\ran D_{T^*}L_T \subset \ran T \cap \ran D_{T^*}$. Hence, it suffices to prove the inclusion $$\ran T \cap \ran D_{T^*} \subset \ran TJ_1D_T.$$ Suppose that $\varphi \in \ran T \cap \ran D_{T^*}$. Then Corollary \[linkoper\] shows that $T^*J_2\varphi = D_T f$ for some $f\in \sD_T$, while the second identity in implies that $$(J_2-TJ_1T^*)J_2{\varphi}=TJ_1D_Tg,$$ for some $g\in \sD_T$. Therefore, $$\varphi=(J_2-TJ_1T^*)J_2\varphi+TJ_1T^*J_2\varphi =TJ_1D_Tg+TJ_1D_Tf=TJ_1D_T(g+f)$$ and this completes the proof. We can now characterize $J$-isometric operators $T\in [\sH_1,\sH_2]$ as follows. \[lm3\] With the notations as in Lemma \[lm0\] the following statements are equivalent: 1. $T$ is $J$-isometric, i.e., $T^*J_2T=J_1$; 2. $\ker T=\{0\}$ and $\ran T \cap \ran D_{T^*} =\{0\}$; 3. for some, and equivalently for every, subspace $\sL$ with $\ran J_2T\subset \sL$ one has $$\label{apu0} \sup_{f\in\sL}\frac{|(f,T\varphi)|}{\|D_{T^*}f\|}=\infty \quad \mbox{for every } \varphi\in \sH_1\backslash\{0\},$$ i.e., there is no constant $0\le C<\infty$ satisfying $|(f,T\varphi)|\le C \|D_{T^*}f\|$ for every $f\in \sL$, if $\varphi\neq 0$. \(i) $\Rightarrow$ (iii) Let $\sL$ be an arbitrary subspace with $\ran J_2T\subset\sL$. Assume that the supremum in is finite for some $\varphi=J_1\psi\in \sH_1$. 
Then there exists $0\le C<\infty$ such that $$|(f,TJ_1\psi)|\le C\|D_{T^*}f\| \quad \mbox{ for every } f\in \sL.$$ Since $\ran J_2T\subset \sL$ and $T$ is $J$-isometric, also the following inequality holds: $$\label{apu02} \|\psi\|^2=(J_1T^*J_2T\psi,\psi) \le C\|D_{T^*}J_2T\psi\|.$$ By taking adjoints (and a zero extension for $L_{T^*}$) in the second identity in Corollary \[linkoper\] it is seen that $D_{T^*}J_2T\psi=L_{T^*}^*D_T \psi=0$, since $T$ is $J$-isometric. Hence implies $\varphi=J_1\psi=0$. Therefore holds for every $\varphi\neq 0$. \(iii) $\Rightarrow$ (ii) Assume that is satisfied with some subspace $\sL$. If (ii) does not hold, then either $\ker T\neq \{0\}$, in which case does not hold for $0\neq \varphi\in \ker T$, or $\ran T\cap \ran D_{T^*}\neq \{0\}$. However, then with $0\neq T\varphi=D_{T^*}h$ the supremum in is finite even if $f$ varies over the whole space $\sH_2$. Thus, if (ii) does not hold, then fails to be true. \(ii) $\Rightarrow$ (i) Let $\ran T \cap \ran D_{T^*} =\{0\}$. Then by Lemma \[lm2\] $TJ_1D_T=0$ and it follows from $\ker{T}=\{0\}$ that $D_T=0$, i.e., $T$ is $J$-isometric. This completes the proof. After these preparations we are ready to prove the analog of Kreĭn’s uniqueness criterion for the equality $T_{m}=T_{M}$ in the case of the quasi-contractions appearing in Theorem \[T:contr\]. \[pr1.2\] Let the Hilbert space $\sH$ be decomposed as $\sH=\sH_1\oplus \sH_2$ and let $T_1 \in [\sH_1,\sH]$ be a symmetric quasi-contraction satisfying the condition in Theorem \[T:contr\]. Then $T_{m}=T_{M}$ if and only if $$\label{1.20} \sup_{f\in \sH_1}\frac{|(T_1f,{\varphi})|^2}{(|I-T_1^*T_1|f,f)}=\infty \quad \mbox{for every } {\varphi}\in\sH_2\setminus \{0\}.$$ Let $J=\sign(I-T_{11}^2)$. According to Theorem \[T:contr\] there is $V\in[\sD_{T_{11}},\sH_2]$ such that $T_{21}=VD_{T_{11}}$; moreover, $V^*$ is a $J$-contraction, i.e., $I-VJV^*\ge 0$.
This implies that $$\label{apu03} (T_1f,{\varphi})=(T_{21}f,{\varphi})=(D_{T_{11}}f,V^*{\varphi}),$$ and a direct calculation shows that $$\label{apu04} I-T^*_1T_1=I-T_{11}^2-T_{21}^*T_{21}=JD_{T_{11}}^2-D_{T_{11}}V^*VD_{T_{11}}= D_{T_{11}}D_VJ_V D_VD_{T_{11}}.$$ By construction $D_V\in[\sD_{T_{11}}]$ and therefore $\ran D_VD_{T_{11}}$ is dense in $\sD_V=\cran D_V$. Furthermore, since $V^*$ is $J$-contractive it follows from Lemma \[inertia\] that $\nu_-(J_V)=\nu_-(J)=\nu_-(I-T_{11}^2)$ and, therefore, the assumption shows that $\nu_-(J_V)=\nu_-(I-T_1^*T_1)$. Now according to Proposition \[BKcor1\] (ii) it follows from that there is a unique $J$-unitary operator $C\in[\sD_{T_1},\sD_V]$ such that $D_VD_{T_{11}}=CD_{T_{1}}$. In view of $T_m=T_M$ if and only if $V^*$ is $J$-isometric. Since $\ran JV^*\subset {\overline}\ran D_{T_{11}}$, it follows from Proposition \[lm3\] that $T:=V^*$ satisfies the condition with $\sL={\overline}\ran D_{T_{11}}$. On the other hand, it follows from and the $J$-unitarity of $C\in[\sD_{T_1},\sD_V]$ that $$\|D_VD_{T_{11}}\|\le \|C\|\, \|D_{T_1}\|,\quad \|D_{T_1}\|\le \|C^{-1}\|\, \|D_VD_{T_{11}}\|.$$ By combining this equivalence between the norms $\|D_{T_1}\|$ and $\|D_VD_{T_{11}}\|$ with the equality one concludes that $V^*$ satisfies the condition precisely when $T_1$ satisfies the condition . This result can be translated to the situation of Theorem \[KreinThm\] via the Cayley transform to get the analog of Kreĭn’s uniqueness criterion for the equality $A_F=A_K$. \[Aunique\] Let $A$ be a closed symmetric relation in $\sH$ satisfying the condition $\nu_-(A)=\nu_-(a_1)<\infty$ in Theorem \[KreinThm\].
Then the equality $A_{F}=A_{K}$ holds if and only if the following condition is fulfilled: $$\label{1.20A} \sup_{g\in \sH_1}\frac{|((A+I)^{-1}g,{\varphi})|^2}{(|\widehat{A}|g,g)}=\infty \quad \mbox{for every } {\varphi}\in\ker (A^*+I)\setminus \{0\},$$ where $\widehat{A}=(I+A)^{-*}A_s(I+A)^{-1}$ is a bounded selfadjoint operator in $\sH_1=\ran (A+I)$. Let $T_1=\cC(A)$ so that $\{f,f'\}\in A$ if and only if $\{f+f',2f\}\in T_1+I$; see . Then with $g=f+f'\in\dom T_1=\sH_1$ and $\varphi\in \sH_2=(\dom T_1)^\perp$ one has $$(T_1g,\varphi)=((T_1+I)g,\varphi)=2((A+I)^{-1}g,\varphi). $$ Let $A_s=(I-P_s)A$ be the operator part of $A$; here $P_s$ stands for the orthogonal projection onto $\mul A=(\dom A^*)^\perp=\ker (T_1+I)$. Then the form $a(f,f)=(f',f)$ associated with $A$ can be rewritten as $a(f,f)=(A_sf,f)$, $f\in \dom A$, and thus $$((I-T_1^*T_1)g,g)=4(f',f)=4(A_s(I+A)^{-1}g,(I+A)^{-1}g),$$ where $2(I+A)^{-1}=T_1+I$ is a bounded operator from $\sH_1$ into $\sH$. Then clearly $\widehat{A}=(I+A)^{-*}A_s(I+A)^{-1}$ is a bounded selfadjoint operator in $\sH_1$ and, moreover, $\nu_-(\widehat{A})=\nu_-(a)=\nu_-(I-T_1^*T_1)$; see Lemma \[a1lemma\]. Thus, it follows from Proposition \[BKcor1\] that there is a $J$-unitary operator $C$ from $\cran \widehat{A}$ into $\sD_{T_1}$ such that $D_{T_1}=C|\widehat{A}|^{1/2}$. As in the proof of Theorem \[KreinThm\] this implies the equivalence of the conditions and . Observe that if $A$ is nonnegative then with $\{f,f'\}\in A$ and $g=f+f'\in\sH_1$, $$((A+I)^{-1}g,\varphi)=(f,\varphi), \quad (A_s(I+A)^{-1}g,(I+A)^{-1}g)=(A_sf,f),$$ and, therefore, in this case the condition can be rewritten as $$\sup_{\{f,f'\}\in A} \frac{|(f,\varphi)|^2}{(f',f)}=\infty \quad \mbox{for every } \varphi \in\ker (A^*+I)\setminus \{0\},$$ the criterion which for a densely defined operator $A$ was obtained in [@Kr47] and which for a nonnegative relation $A$ can be found in [@HMS04; @Ha04].
**Acknowledgements.** The main part of this work was carried out during the second author’s sabbatical year 2013–2014; he is grateful for the Professor Pool grant received from the Emil Aaltonen Foundation. [99]{} Alonso, A. and Simon, B., The Birman-Kreĭn-Vishik theory of self-adjoint extensions of semibounded operators, J. Operator Theory, **4** (1980), 251–270. Anderson, W.N. and Duffin, R.J., Series and Parallel Addition of Matrices, J. Math. Anal. Appl., **26** (1969), 576–594. Anderson, W.N. and Trapp, G.E., Shorted Operators, II, SIAM J. Appl. Math., **28** (1975), 60–71. Ando, T. and Nishio, K., Positive self-adjoint Extensions of Positive Symmetric Operators, Tôhoku Math. J., **22** (1970), 65–75. Arlinskiĭ, Yu.M., Extremal extensions of a $C(\alpha)$-suboperator and their representations, Oper. Theory Adv. Appl., **162** (2006), 47–69. Arlinskiĭ, Yu.M., Hassi, S., and de Snoo, H.S.V., Parametrization of contractive block-operator matrices and passive discrete-time systems, Complex Analysis and Operator Theory, **1** (2007), 211–233. Arsene, Gr. and Gheondea, A., Completing Matrix Contractions, J. Operator Theory, **7** (1982), 179–189. Arsene, Gr., Constantinescu, T., and Gheondea, A., Lifting of Operators and Prescribed Numbers of Negative Squares, Michigan Math. J., **34** (1987), 201–216. Azizov, T.Ya. and Iokhvidov, I.S., *Linear operators in spaces with indefinite metric*, John Wiley and Sons, New York, 1989. Behrndt, J., Hassi, S., de Snoo, H.S.V., and Wietsma, H.L., Antitonicity of the inverse for selfadjoint matrices, operators, and relations, Proc. Amer. Math. Soc. (to appear). Birman, M.S., On the self-adjoint extensions of positive definite operators, Mat. Sb., **38** (1956), 431–450. Birman, M.S. and Solomjak, M.Z., *Spectral theory of selfadjoint operators in Hilbert space*, translated from the 1980 Russian original by S. Khrushchev and V. Peller, Mathematics and its Applications (Soviet Series), D. Reidel Publishing Co., Dordrecht, 1987.
Bognár, J., *Indefinite Inner Product Space*, Springer-Verlag, Berlin, 1974. Brasche, J.F. and Neidhardt, H., Some remarks on Kreĭn’s extension theory, Math. Nachr., **165** (1994), 159–181. Bruk, V.M., On One Class of Boundary Value Problems with a Spectral Parameter in the Boundary Condition, (Russian) Mat. Sbornik, **100**, No.2 (1976), 210–216. Calkin, J.W., Symmetric boundary conditions, Trans. of AMS, **45**, No.3 (1939), 369–442. Coddington, E.A. and de Snoo, H.S.V.: Positive Selfadjoint Extensions of Positive Symmetric Subspaces, Math. Z., **159** (1978), 203–214. Constantinescu, T. and Gheondea, A.: Minimal Signature of Lifting operators. I, J. Operator Theory, **22** (1989), 345–367. Constantinescu, T. and Gheondea, A.: Minimal Signature of Lifting operators. II, J. Funct. Anal., **103** (1992), 317–352. Davis, Ch., Kahan, W.M., and Weinberger, H.F., Norm preserving dilations and their applications to optimal error bounds, SIAM J. Numer. Anal., **19**, no. 3 (1982), 445–469. Derkach, V.A., Hassi, S., Malamud, M.M., and de Snoo, H.S.V., Boundary Relations and their Weyl Families, Trans. Amer. Math. Soc. **358** (2006), 5351–5400. Derkach, V.A., Hassi, S., Malamud, M.M., and de Snoo, H.S.V., Boundary Triplets and Weyl Functions. Recent Developments, in: Operator Methods for Boundary Value Problems, London Mathematical Society, Lecture Notes Series, **404** (2012), 161–220. Derkach, V.A., Hassi, S., and de Snoo, H.S.V., Truncated moment problem in the class of generalized Nevanlinna functions, Math. Nachr., **285**, No. 14-15 (2012), 1741–1769. Derkach, V.A. and Malamud, M.M., Generalized Resolvents and the Boundary Value Problems for Hermitian Operators with Gaps, J. Funct. Anal., **95**, No.1 (1991), 1–95. Derkach, V.A. and Malamud, M.M., On a generalization of the Kreĭn-Stieltjes class of functions. (Russian) Izv. Akad. Nauk Armenii Mat. **26**, No. 2 (1991), 115–137; translation in J. Contemp. Math. Anal., **26**, No. 2 (1991), 16–37. Derkach, V.A. 
and Malamud, M.M., The Extension Theory of Hermitian Operators and the Moment Problem, J. Math. Sci., **73**, No.2 (1995), 141–242. Douglas, R.G., On Majorization, Factorization and Range Inclusion of Operators in Hilbert space, Proc. Amer. Math. Soc., **17** (1966), 413–416. Dritschel, M.A., A lifting theorem for bicontractions on Kreĭn spaces, J. Funct. Anal., **89** (1990), 61–89. Dritschel, M.A. and Rovnyak, J., Extension theorems for contraction operators on Kreĭn spaces. *Extension and interpolation of linear operators and matrix functions*, 221–305, Oper. Theory Adv. Appl., **47**, Birkhäuser, Basel, 1990. Fillmore, P.A. and Williams, J.P., On operator ranges, Adv. Math., **7** (1971), 254–281. Friedrichs, K.O., Spektraltheorie halbbeschränkter Operatoren und Anwendung auf die Spektralzerlegung von Differentialoperatoren, Math. Ann., **109** (1934), 465–487. Gorbachuk, V.I. and Gorbachuk, M.L., *Boundary problems for differential operator equations*, Naukova Dumka, Kiev, 1984 (Russian). Translated and revised from the 1984 Russian original: Mathematics and its Applications (Soviet Series), 48. Kluwer Academic Publishers Group, Dordrecht, 1991. Hassi, S., On the Friedrichs and the Kreĭn-von Neumann extension of nonnegative relations”, In *Contributions to Management Science, Mathematics and Modelling*. Acta Wasaensia, **122** (2004), 37–54. Hassi, S., Malamud, M.M., and de Snoo, H.S.V., On Kreĭn’s Extension Theory of Nonnegative Operators, Math. Nachr., **274/275** (2004), 40–73. Hassi, S. and Nordström, K., Antitonicity of the inverse of selfadjoint operators, Schwerpunktprogramm der Deutschen Forschungsgemeinschaft, Forschungsbericht No. **364**, 1992. Hassi, S. and Nordström, K., Antitonicity of the inverse and $J$-contractivity, Operator Theory Adv. Appl., **61** (1993), 149–161. Kato, T., *Perturbation Theory for Linear Operators*, Springer-Verlag, Berlin, Heidelberg, 1995. 
Kochubeĭ, A.N., Extensions of symmetric operators and symmetric binary relations, Mat. Zametki, **17**, No.1 (1975), 41–48 (Russian). English translation in Math. Notes, **17** (1975), 25–28. Kolmanovich, V.U. and Malamud, M.M., Extensions of Sectorial operators and dual pair of contractions, (Russian) [Manuscript No 4428-85. Deposited at Vses. Nauchn-Issled, Inst. Nauchno-Techn. Informatsii, VINITI 19 04 85, Moscow]{}, R ZH Mat 10B1144, (1985), 1–57. Kreĭn, M.G., On hermitian operators with defect indices $(1,1)$, Dokl. Akad. Nauk SSSR, **43** (1944), 339–342. Kreĭn, M.G., On resolvents of Hermitian operator with deficiency index $(m,m)$, Dokl. Akad. Nauk SSSR, **52** (1946), 657–660. Kreĭn, M.G., Theory of Selfadjoint Extensions of Semibounded Operators and Its Applications, I, Mat. Sb. **20**, No.3 (1947), 431–498. Kreĭn, M.G., Completely continuous linear operators in function spaces with two norms, Akad. Nauk Ukrain. RSR. Zbirnik Prac’ Inst. Mat., No. 9 (1947), 104–129. Kreĭn, M.G. and Langer, H., On Defect Subspaces and Generalized Resolvents of Hermitian Operator in the Space $\Pi_\kappa$, (Russian) Funct. Anal. and Appl., **5**, No.2 (1971), 59–71. Kreĭn, M.G. and Langer, H., On Defect Subspaces and Generalized Resolvents of Hermitian Operator in the Space $\Pi_\kappa$, (Russian) Funct. Anal. and Appl., **5**, No.3 (1971), 54–69. Langer, H. and Textorius, B., On Generalized Resolvents and $Q$-Functions of Symmetric Linear Relations (Subspaces) in Hilbert Space, Pacific J. Math., **72**, No.1 (1977), 135–165. Malamud, M.M., Certain classes of extensions of a lacunary Hermitian operator, Ukrainian Math. J., **44**, No.2 (1992), 190–204. Malamud, M.M., On some classes of extensions of sectorial operators and dual pair of contractions, Oper. Theory Adv. Appl., **124** (2001), 401–448. von Neumann, J., Allgemeine Eigenwerttheorie Hermitescher Funktionaloperatoren, Math. Ann., **102** (1929), 49–131.
Pekarev, E.L., Shorts of operators and some extremal problems, Acta Sci. Math. (Szeged), **56** (1992), 147–163. Rodman, L., A note on indefinite Douglas’ lemma. *Operator theory in inner product spaces*, Oper. Theory Adv. Appl., **175** (2007), 225–229. Shmul’yan, Yu. L., A Hellinger operator integral, (Russian) Mat. Sb. (N.S.), **49**, No.91 (1959), 381–430. Shmul’yan, Yu. L., Division in the class of $J$-expansive operators, (Russian) Mat. Sb. (N.S.) **74**, No.116 (1967), 516–525. Shmul’yan, Yu. L., A question regarding inequalities between Hermitian operators, Mat. Zametki **49** (1991), 138–141 (Russian) \[English translation: Math. Notes, **49** (1991), 423–425\]. Shmul’yan, Yu. L. and Yanovskaya, R.N., Blocks of a contractive operator matrix, Izv. Vyssh. Uchebn. Zaved. Mat., No. 7 (1981), 72–75. Vishik M.I., On general boundary problems for elliptic differential equations, Trans. Moscow Math. Soc., **1** (1952), 186–246. (Russian) \[English translation: Amer. Math. Soc. Transl., **24** (1963), 107–172\].
---
abstract: 'We analyze within mean-field theory as well as numerically a KPZ equation that describes nonequilibrium wetting. Both complete and critical wetting transitions were found and characterized in detail. For one-dimensional substrates the critical wetting temperature is depressed by fluctuations. In addition, we have investigated a region in the space of parameters (temperature and chemical potential) where the wet and nonwet phases coexist. Finite-size scaling analysis of the interfacial detaching times indicates that the finite coexistence region survives in the thermodynamic limit. Within this region we have observed (stable or very long-lived) structures related to spatio-temporal intermittency in other systems. In the interfacial representation these structures exhibit perfect triangular (pyramidal) patterns in one (two) dimensions, that are characterized by their slope and size distribution.'
author:
- 'F. de los Santos'
- 'M.M. Telo da Gama'
- 'M.A. Muñoz'
title: Nonequilibrium wetting transitions with short range forces
---

Introduction
============

When a bulk phase $\alpha$ is placed into contact with a substrate, a layer of a second, coexisting, phase $\beta$ may form if the substrate preferentially adsorbs it. At a [*wetting transition*]{}, the thickness of the $\beta$ layer diverges. Equilibrium wetting has been experimentally observed and theoretically investigated using, among many other techniques, interface displacement models [@margarida; @fisher; @bhl]. Within this approach one considers the local height of the $\alpha \beta$ interface measured from the substrate, $h({\bf x})$, and constructs an effective interface Hamiltonian, ${\cal H}(h)$ [@nota1]. In equilibrium situations, one typically has $${\cal H}(h)=\int_0^\infty d{\bf x}\Bigg[ {1 \over 2} \nu (\nabla h)^2 +V(h)\Bigg].
\label{ham}$$ where $\nu$ is the interfacial tension of the $\alpha \beta$ interface (or the interfacial stiffness if the medium is anisotropic) and $V(h)$ accounts for the interaction between the substrate and the $\alpha \beta$ interface. If all the microscopic interactions are short-ranged, one may take for sufficiently large $h$ at bulk coexistence [@bhl] $$V(h)=b(T)e^{-h}+ce^{-2h}, \label{potential}$$ where $T$ is the temperature, $b(T)$ vanishes linearly as $T-T_W$, $T_W$ being the wetting temperature, and $c>0$ [@nota2]. By minimizing (\[ham\]) one finds [@margarida; @bhl] a [*critical wetting*]{} transition at $b=0$, i.e. the interface height (or equivalently, the wetting layer thickness), $\langle h \rangle$, diverges continuously as $b \sim T-T_W \to b_W=0^-$. Equilibrium critical wetting has been studied for decades and a rich (non-classical) behavior predicted [@margarida; @fisher]. Wetting transitions may also be driven by the chemical potential difference between the $\beta$ and $\alpha$ phases, $\mu$. In this case wetting occurs at any temperature above $T_W$ (i.e. for $b>b_W$) as $\mu =0$ is approached from the $\alpha$ phase. This is always a continuous transition and it is known as [*complete wetting*]{} [@margarida]. A study of complete wetting requires adding a linear term $\mu h$ to the Hamiltonian (\[ham\]). A dynamic model for the growth of wetting layers has been proposed through the Langevin equation [@lipowsky] $$\partial_t h({\bf x},t) = -{\delta {\cal H} \over \delta h}+\eta= \nu \nabla^2h -{\partial V \over \partial h} + \eta, \label{eweq}$$ where $\eta$ is Gaussian white noise with mean and variance $$\begin{aligned} \langle \eta ({\bf x},t)\rangle &=&0, \nonumber \\ \langle \eta({\bf x},t) \eta({\bf x}',t') \rangle &=& 2 D \delta(t-t') \delta ({\bf x}-{\bf x}'). \label{noise}\end{aligned}$$ Equation (\[eweq\]) is an Edwards-Wilkinson (EW) growth equation [@ew; @barabasi] in the presence of an effective interface potential $V(h)$. 
It describes the relaxation of the interfacial height $h$ towards its equilibrium value, i.e. the value of $h$ that minimizes $\cal H$. Within this context $\mu$ can be viewed as an external driving force acting on the interface. Recall that in the absence of the wall, the corresponding equilibrium states for $\mu<0$ and $\mu>0$ are the $\alpha$ and $\beta$ phases, respectively, whereas phase coexistence occurs at $\mu=0$. Equilibrium models, however, are not sufficient to study wetting in Nature, since in a wide range of phenomena ([*e.g.*]{}, growth of thin films, spreading of liquids on solids or crystal growth) thermal equilibrium may not hold. Nonequilibrium wetting transitions have been recently studied by Hinrichsen et al. [@haye1] in a lattice (restricted solid-on-solid) model with dynamics that do not obey detailed balance. The continuum nonequilibrium counterpart of this discrete model is a Kardar-Parisi-Zhang (KPZ) equation in the presence of a bounding potential, whose properties have been analyzed extensively by one of the authors and collaborators [@MN; @genovese]. Clearly this is the most natural extension of the EW equilibrium growth model to non-equilibrium situations. In fact, in the absence of a substrate the KPZ non-linearity, $\lambda (\nabla h)^2$, is generically the most relevant nonequilibrium perturbation to the equilibrium EW equation [@barabasi]. The KPZ nonlinearity is related to lateral growth, and although this mechanism is unlikely to be relevant in simple fluids, it may determine the wetting behavior of systems with anisotropic interactions for which the growth of tilted interfaces depends on their orientation [@wiese]. For instance, it has been shown that crystal growth from atomic beams is described by the KPZ equation [@villain]. From a theoretical point of view a key and ambitious task is that of developing general criteria to establish whether the KPZ non-linear term should be included in a given interfacial model.
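As an aside for readers who wish to experiment, the relaxational dynamics of Eq. (\[eweq\]) with the short-range wall potential (\[potential\]) can be integrated with a straightforward Euler-Maruyama scheme. This is only a minimal sketch with arbitrary illustrative parameter values; it is not the scheme behind any result quoted in this paper.

```python
import numpy as np

def evolve_ew(h, nu=1.0, b=-1.0, c=1.0, D=1.0, dt=1e-3, steps=2000, seed=0):
    """Euler-Maruyama sketch of dh/dt = nu*lap(h) - V'(h) + eta at mu = 0,
    with the short-range wall potential V(h) = b*exp(-h) + c*exp(-2h)."""
    rng = np.random.default_rng(seed)
    h = np.asarray(h, dtype=float).copy()
    for _ in range(steps):
        lap = np.roll(h, 1) + np.roll(h, -1) - 2.0*h   # periodic 1d Laplacian
        dV = -b*np.exp(-h) - 2.0*c*np.exp(-2.0*h)      # V'(h)
        eta = rng.normal(0.0, np.sqrt(2.0*D*dt), size=h.shape)
        h += dt*(nu*lap - dV) + eta
    return h

# b < 0 (below the wetting temperature): the wall binds the interface
h = evolve_ew(np.zeros(256))
```

For $b<0$ the interface fluctuates around the minimum of $V$, while for $b\geq 0$ at bulk coexistence the potential is purely repulsive and the interface unbinds.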
Related works published recently by Müller et al. [@muller], Giada and Marsili [@marsili], Hinrichsen et al. [@haye2], and ourselves [@nos], consider similar non-equilibrium models in the presence of various types of walls. In this paper, we further study the KPZ interfacial equation in the presence of different types of potentials, attractive and repulsive. We focus on the connection of the associated phenomenology with non-equilibrium wetting and depinning transitions. In particular, we will stress that the transitions called “first-order non-equilibrium wetting” in [@haye2] are not wetting transitions but, rather, depinning transitions. Also, we study for the first time the two-dimensional version of this model, and report on new phenomenology. The remainder of this paper is organized as follows. In the next section we introduce our KPZ-like model. A mean-field picture is provided in section III, and its predictions numerically tested in section IV. The conclusions are summarized in the final section. The model ========= The model under study is defined by the Langevin equation $$\partial_t h({\bf x},t)= \nu \nabla^2 h +\lambda (\nabla h)^2 -{\partial V(h)\over \partial h} + \eta, \label{kpzyv}$$ where $V(h)= -(a+1)h + b e^{-h}+c e^{-2h}/2$, $c>0$, $\eta$ obeys (\[noise\]) and $ a+1 =\mu $ is a chemical potential. In the absence of the exponential terms (the limiting wall) the interface moves with a nonzero positive (negative) mean velocity for $\mu$ larger (smaller) than a certain critical value $\mu_c$. In one dimension $\mu_c$ can be found analytically since both the KPZ and EW equations have the same Gaussian steady-state height distribution [@barabasi]. Thus $\mu_c= \lambda \langle(\nabla h)^2\rangle$, which for discrete lattices can be approximated by [@krug] $\mu_c=-(D \lambda) /(2\nu \Lambda)$, $\Lambda$ being the lattice cutoff. 
Note that for $\lambda \not=0$ a nonzero chemical potential is required to balance the force exerted by the nonlinear term on the tilted interfacial regions. For $\lambda =0$ the model reduces to the equilibrium one and $\mu_c=0$ as usual. For negative values of $\lambda$ the interface is (on average) pushed against the wall, while for positive $\lambda$ it is pulled away from the wall. Thus the behavior of the system is determined by the sign of $\lambda$ [@note]. In this paper we will consider $\lambda <0$ only (which corresponds to the case studied using microscopic models [@haye2]). Results for positive values of $\lambda$ will be published elsewhere. It is our purpose to study the effects of a substrate that adsorbs preferentially one of the two phases on the stationary properties of the interface. This is achieved by considering $b$ to be negative in equation (\[kpzyv\]). This same equation, (\[kpzyv\]), has been recently studied by Giada [*et al.*]{} [@marsili] as a generic nonequilibrium continuum model for interfacial growth. However, their choice of control parameters [@marsili] privileges the role of the noise as the driving force of the nonequilibrium transitions. By contrast, motivated by the role of the chemical potential and temperature in equilibrium wetting, we fix the noise intensity and choose $a$ and $b$ as control parameters that are the fields driving critical and complete wetting transitions. To establish the analogy with equilibrium, let us stress that just like in [*equilibrium complete wetting*]{}, [*nonequilibrium complete wetting*]{} occurs when the attractive potential $V(h)$ is not capable of binding the interface, at temperatures above the wetting transition temperature, $b > b_W$, as the chemical potential approaches that of ‘bulk coexistence’, $\mu \to \mu_c$. At this transition the interface begins to move and $\langle h \rangle$ diverges. 
On the other hand, the [*nonequilibrium*]{} analogue of [*critical wetting*]{} corresponds to the unbinding of a bound interface at ‘bulk coexistence’, $\mu = \mu_c$, as $b \to b_W^-$. In order to analyze equation (\[kpzyv\]) it is convenient to perform a Cole-Hopf change of variable $h({\bf x},t)=-\ln n({\bf x},t)$, leading to $$\partial_t n=\nu \nabla^2 n-{\partial V(n)\over \partial n} + n \eta, \label{toral}$$ with $V(n)=an^2/2+ bn^3/3 +c n^4/4$. This describes the interface problem as a diffusion-like equation with [*multiplicative noise*]{} [@MN; @equiv]. In this representation, the unbinding from the wall ($\langle h \rangle \to\infty$) corresponds to a transition into an [*absorbing state*]{} $\langle n \rangle \to 0$. In the following we will use both languages, $h$ and $n$, interchangeably, although the natural description of wetting is in terms of $h$. The case $b>0$ was studied in [@MN; @haye1], while $b<0$ is the case studied in [@nos; @haye2; @marsili; @raul; @muller]. Note that we have made use of Ito calculus, and thus equation (\[toral\]) should be interpreted in the Ito sense [@vankampen]. In general, potentials of the form $b n^{p+2}/(p+2)+c n^{2p+2}/(2p+2)$ with $p>0$ result in equivalent effective Hamiltonians since, when expressed in terms of $h$, $p$ can be eliminated by redefining the height scale. The case $p=2$ (with fixed $b<0$) has been studied in [@raul] in the context of stochastic spatio-temporal intermittency (STI). The unsuspected connections between these two problems are illustrated in the following sections. Mean field ========== In this section we analyze equation (\[toral\]) at the mean field level. We begin by discretizing (\[toral\]) on a regular $d$-dimensional lattice $$\partial_t n_i= {\nu \over 2d} \sum_{j}(n_j-n_i)- {\partial V(n_i) \over \partial n_i} + n_i \eta_i, \label{disc}$$ where $n_i=n(x_i,t)$ and the sum over $j$ runs over the nearest neighbors of $i$.
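Before turning to the Fokker-Planck analysis, one Euler-Maruyama (Ito) update of this discretized equation can be sketched as follows. The parameter values are illustrative only, and the clipping at $n=0$ is our own regularization of the absorbing state, not part of the original model.

```python
import numpy as np

_rng = np.random.default_rng(1)

def step_mn(n, a, b, c=1.0, nu=2.0, D=1.0, dt=1e-3, d=1):
    """One Euler-Maruyama (Ito) update of
    dn_i = [ (nu/2d) * sum_j (n_j - n_i) - V'(n_i) ] dt + n_i dW_i,
    with V(n) = a n^2/2 + b n^3/3 + c n^4/4 (the p = 1 potential)."""
    coupling = (np.roll(n, 1) + np.roll(n, -1) - 2.0*n) / (2.0*d)
    dV = a*n + b*n**2 + c*n**3
    dW = _rng.normal(0.0, np.sqrt(2.0*D*dt), size=n.shape)
    # clip at zero: n = 0 is the absorbing (depinned) state
    return np.maximum(n + dt*(nu*coupling - dV) + n*dW, 0.0)

n = np.ones(64)
for _ in range(100):
    n = step_mn(n, a=1.0, b=1.0)
```

The multiplicative noise term $n_i\,dW_i$ is evaluated at the pre-update value of $n_i$, as the Ito convention requires.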
The Fokker-Planck equation for the one-site stationary probability $P(n_i)$ can be easily worked out. In the mean-field approximation (i.e. substituting the values of the nearest neighbors by the average $n$ value), the stationary Fokker-Planck equation for $D=1$ is $${\partial\over \partial n}\bigg[ \Big({\partial V(n)\over \partial n} +\nu (n-\n)\Big) P_t(n)\bigg]+ {\partial^2\over \partial n^2} \Big(n^2 P_t(n)\Big)=0 \label{fokkerplanck}$$ and its associated solution $$P(n,\n) = N {1 \over n^2} \exp\bigg[-\int_0^n {V'(m) +\nu (m-\n) \over m^2}\, dm\bigg],$$ where the integration constant $N$ is determined by a normalization condition and $\n$ is obtained from the self-consistency requirement $$\n={\int_0^\infty dn \ n P(n,\n) \over \int_0^\infty dn \ P(n,\n)} = F(\n). \label{meanfield}$$ Let us consider two limiting cases where analytic solutions of Eq. (\[meanfield\]) can be worked out. In the zero-dimensional case, or equivalently $\nu=0$, the solution of (\[fokkerplanck\]) reads $$P(n)= N\, {\exp\Big[-\Big( {b \over p}n^p+ {c \over 2p}n^{2p} \Big)\Big] \over n^{a+2}},$$ which, in terms of heights, is $P(h) \sim \exp[(a+1)h - b e^{-h}/p-c e^{-2h}/2p]$ and yields the effective potential $V_{eff}(h)=-\ln P(h)= -(a+1)h + b e^{-h}/p+c e^{-2h}/2p$. Clearly this coincides with the potential in (\[kpzyv\]). A [*complete wetting*]{} transition occurs when approaching $a=-1$ with $b>0$ and critical wetting is found at $a=-1$ with $b=0$. For spatially extended systems, in the $\nu =\infty$ limit, a saddle-point expansion in $\nu$ yields $V'(n)=0$ [@vandenbroeck; @marsili]. Thus, the dynamical behavior in this limit is that of the deterministic mean-field version of (\[fokkerplanck\]): for any $p>0$, there is a line of second order wetting transitions at $a=0$ and $b>0$, and a line of first-order transitions at $a>0$ and $b=-(p+2) \sqrt{ac/(p+1)}$. These lines meet at a tricritical point at the origin.
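Away from these two limits, the self-consistency condition (\[meanfield\]) can be evaluated by numerical quadrature. A minimal sketch for $p=2$ follows, using the closed-form antiderivative $\int (V'(m)+\nu(m-\bar n))/m^2\, dm = (a+\nu)\ln n + b n^2/2 + c n^4/4 + \nu \bar n/n$; the parameter values below are illustrative, not those used elsewhere in the paper.

```python
import numpy as np

def F(nbar, a, b, c=1.0, nu=2.0):
    """Mean-field map <n> = F(nbar) for p = 2, i.e.
    V(n) = a n^2/2 + b n^4/4 + c n^6/6, from the stationary
    single-site distribution P(n, nbar) ~ n^{-2} exp(-phi)."""
    n = np.linspace(1e-3, 8.0, 40001)
    phi = (a + nu)*np.log(n) + b*n**2/2 + c*n**4/4 + nu*nbar/n
    logw = -phi - 2.0*np.log(n)
    w = np.exp(logw - logw.max())   # the constant N cancels in the ratio
    return float((n*w).sum() / w.sum())

# damped fixed-point iteration for the self-consistent order parameter
nbar = 1.0
for _ in range(100):
    nbar = 0.5*nbar + 0.5*F(nbar, a=-0.5, b=1.0)
```

Scanning the fixed points of this map as functions of $a$ and $b$ reproduces the three regimes discussed below.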
For values of $\nu$ other than zero or $\infty$, the self-consistency equation $\n =F(\n)$ has to be solved numerically. Without loss of generality, we set $p=2$, $c=1$, and illustrate in Fig. \[pdandtipos\] the three different regimes: one stable solution at $\n =0$ (dash-dotted line); one unstable solution at $\n =0$ and a stable one at $\n \not= 0$ (solid line); two stable solutions and an unstable one (dashed line). Stable solutions can be identified by a negative slope of $F(\n)-\n$ at the intersection point [@shiino]. A nonzero solution emerging continuously from $\n =0$ as a function of $a$ and $b$, signals a second order transition. This is the case for the dash-dotted and the solid lines in the inset of Fig. \[pdandtipos\]. By contrast, when the nontrivial solution appears discontinuously as a function of $a$ and $b$, the transition is first-order (dash-dotted and dashed lines). The corresponding phase diagram is depicted in Fig. \[pdandtipos\]. The solid line is a second-order phase boundary from non-wet to wet substrates. Between the two dotted lines, the wet and non-wet phases coexist as stationary solutions of the dynamical equation. The three lines join at the tricritical point at $a=b=0$. In order to determine the order parameter critical exponent in mean field approximation, we proceed as in [@genovese]. First, we rewrite (\[meanfield\]) as $$\n^{-1}=- \partial_{\n} \ln \int_0^\infty dt t^a \exp \bigg(-{b \over p} t^p -{t^{2p} \over 2p}\bigg) e^{-\n t}.$$ Next, we introduce a Gaussian transformation and expand the resulting integrals for small $\n$. We find $ \n \sim |a|^{1/p}$, and thus $a_c=0$ and $\beta =1/p$. Beyond mean-field theory ======================== In this section we explore whether the mean-field phase diagram structure survives when the effects of fluctuations are taken into account. 
Mean-field exponents are expected to hold above the upper critical dimension $d_c$, which in the present case, equation (\[toral\]), is known to be $d_c=2$ (corresponding to 3 bulk dimensions and to the weak coupling regime of the KPZ) [@MN]. For positive values of $b$ and $d>2$, the second term $n^{2p+2}$ in the effective potential is irrelevant [@MN] and we are left with $$\partial_t n= \nu \nabla^2 n -an -b n^{p+1}+ n\eta,$$ defining the [*multiplicative-noise*]{} (MN) universality class [@MN]. We have solved (\[toral\]) numerically for different system dimensions. In particular, for a one-dimensional substrate we have considered a system size $L=1000$, $\nu=p=2$, $D=1$ and $c=1.5$. The time step and the mesh size were set to 0.001 and 1, respectively. We started by determining the chemical potential for which the free interface has zero average velocity. For the parameters given above we found $a_c\approx -0.064$. Then we fix $a=a_c$ and calculate $\langle n(t) \rangle$ for different values of $b$ and large $t$. Length and time units are in lattice spacings and Monte Carlo steps, respectively. Critical wetting ---------------- To study critical wetting we set $a=a_c$ and consider small values of $b$ for which an initially pinned interface remains pinned, and progressively increase $b$ until the non-wet phase becomes unstable at $b_W$. The critical point is estimated as the value $b_W$ that maximizes the linear correlation coefficient of $\log \langle n \rangle$ versus $\log|b-b_W|$; the critical exponent is then determined from the corresponding slope (Fig. \[critical\]). It is found that the critical “temperature” is depressed from its mean-field value $b_W=0$ to $b_W= -0.70 \pm 0.01$, with an associated critical exponent $\beta =1.20 \pm 0.01$ (the error in the exponent comes from a least-squares fit). Below (above) that value we find first (second) order depinning transitions, by varying $a$.
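The estimation procedure just described can be illustrated on synthetic data; the values $b_W=-0.7$ and $\beta=1.2$ below are inputs to the test, chosen only to mimic the measured ones.

```python
import numpy as np

def estimate_bw(b, n_avg, candidates):
    """Pick the candidate b_W maximizing the linear correlation of
    log<n> versus log|b - b_W|; the slope at the optimum estimates beta."""
    def corr(bw):
        return abs(np.corrcoef(np.log(np.abs(b - bw)), np.log(n_avg))[0, 1])
    best = max(candidates, key=corr)
    beta = np.polyfit(np.log(np.abs(b - best)), np.log(n_avg), 1)[0]
    return best, beta

# synthetic power-law data <n> ~ |b - b_W|^beta with b_W = -0.7, beta = 1.2
b = np.linspace(-0.6, -0.1, 30)
n_avg = np.abs(b + 0.7)**1.2
bw, beta = estimate_bw(b, n_avg, np.linspace(-0.80, -0.62, 37))
```

On noiseless data the correlation is exactly one at the true $b_W$; on simulation data the maximum is flatter, which is what sets the quoted error bars.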
Therefore, as in mean field, there is a “tricritical” point, joining a line of second order transitions ($b>b_W$) with one of first order transitions ($b<b_W$). [*The critical exponents and universality of this multiplicative-noise tricritical point have not been investigated before*]{}. The finite coexistence region allows us to define critical wetting along a range of different paths, delimited by the dashed lines in the mean-field diagram of Fig. \[pdandtipos\]. We have checked that the value of $\beta$ does not change when the critical point is approached along different paths within this region. Further numerical and analytical studies of this new universality class will be left for future work. Complete wetting, $b >b_W$ case ------------------------------- We consider a one-dimensional substrate, with $b=1>b_W$, let the system evolve to the stationary state and then compute the order parameter $\langle n \rangle$ for different values of $a$ near its critical value. As $a \to a_c$ a continuous transition into an absorbing state $n=0$ is observed; it is the non-equilibrium counterpart of complete wetting. The associated critical exponent is found to be $\beta =1.65 \pm 0.05$, in good agreement with the prediction for the MN class, $\beta=1.5 \pm 0.1$ [@genovese]. Other positive values of $b$ yield similar results. In addition, we have simulated systems above the upper critical dimension, in $d=3$, with $L=25$, $b=5$ and other parameters as in the one-dimensional case. Our best estimate for $\beta$ is $\beta=0.96 \pm 0.05$, indicating that this transition is governed by the weak noise fixed point of the MN class [@MN]. For larger values of the noise amplitude we find a strong coupling transition, in agreement with the theoretical predictions [@MN; @genovese]. Finally, we note that both numerical and renormalization group arguments lead to $\beta=1$ in the weak coupling regime, independently of the value of $p$ [@MN].
This is at odds with the mean-field prediction $\beta=1/p$. Birner et al. [@birner] have recently suggested a transition from $1/p$ to a nonuniversal behavior depending on the ratio of the noise to the strength of the spatial coupling. However, this discrepancy appears to be generic since different types of mean-field approaches yield the same (incorrect) result and the origin of the discrepancy remains unclear [@genovese]. Since the Cole-Hopf transform of the MN equation is the same as KPZ with an additional exponential term, the MN exponents are those of KPZ iff the extra term is an irrelevant term of the KPZ renormalization group flow. Note that the Cole-Hopf transform fixes the value of $\lambda /\nu$ and thus the $\lambda =0$ limit cannot be considered when this transformation is used. In addition the potential $V(h)$ is a relevant term of the EW equation. In this regime adding a non-linear potential is a relevant perturbation and it does indeed change/determine the wetting exponents (cf. the literature on equilibrium wetting [@margarida]). Thus [*EW plus a (non-linear) wetting potential is not equivalent to KPZ in the weak coupling regime plus the same wetting potential*]{}. Depinning transition at $b <b_W$ -------------------------------- As expected, no transition is found as $a \to a_c$ when equation (\[toral\]) is solved numerically for $b<b_W$. Of course, the system undergoes a pinning/depinning transition when crossing the $a_c=0$ boundary line, but this transition is driven by the chemical potential difference rather than by the substrate potential and thus it is unrelated to wetting, where phase coexistence of the “liquid” and “gas” phases is required (i.e. $a=a_c$). A very rich phenomenology associated with these transitions has been found, however. For $b=-4$ we find that the non-wet phase becomes unstable at $a^* \approx 1.3$ and that the wet phase becomes unstable at $a_c \approx -0.064$ (as before).
Consequently, in the range $a_c < a < a^\ast$ both phases coexist. This means that if the interface is initially close to the wall ($n > 1$) it remains pinned, while if it is initially far from the wall ($n \lappeq 1$) it detaches and moves away with a constant velocity. Therefore, the system undergoes a first-order transition as a function of $a$. In order to establish the phase boundaries we have used the following criteria. The stability of the pinned phase may be characterized by the time $\tau$ taken by the interface to depin in the limit $L \to \infty$. $\tau$ can be defined as the time taken by the last site of the interface to detach, $h({\bf x})>0$ or $n({\bf x})<1$ $\forall {\bf x}$. Similarly, we may define $\tau$ as the time characterizing the asymptotic exponential decay of $\langle n(t) \rangle$, where the angular brackets denote averages over independent runs (typically $10^5-10^6$ in our simulations). We have verified that both definitions yield analogous results. As shown in Fig. \[histo\]A, for $a>a^*$, $\tau$ saturates with increasing system size and thus the interface detaches in a finite time. Within the coexistence region we have found two different regimes: close to the stability threshold of the pinned phase there is a narrow stripe $1.22 \lappeq a \lappeq 1.3$ where the detaching time grows approximately as a power law. For $a_c \lappeq a \lappeq 1.22$, $\tau$ grows exponentially with $L$. In both cases $\tau$ diverges as $L \to \infty$, implying that the pinned phase is stable in the thermodynamic limit. Due to the very large characteristic times, we cannot discard the possibility that the power laws are also (asymptotically) exponentials. The study of the asymptotic behavior of the detaching times requires longer simulations, beyond our current computer capabilities.
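The second definition of $\tau$ amounts to a log-linear fit of the decay of $\langle n(t)\rangle$. A minimal sketch on synthetic data follows; the value $\tau=12.5$ and the fitting window are arbitrary illustrations, not measured quantities.

```python
import numpy as np

def detach_time(t, n_avg, t_min=0.0):
    """Extract tau from the asymptotic decay <n(t)> ~ exp(-t/tau)
    via a least-squares fit of log<n> on the window t >= t_min."""
    mask = t >= t_min
    slope = np.polyfit(t[mask], np.log(n_avg[mask]), 1)[0]
    return -1.0/slope

# synthetic exponential decay with an illustrative tau = 12.5
t = np.linspace(0.0, 50.0, 500)
tau = detach_time(t, np.exp(-t/12.5), t_min=10.0)
```

Restricting the fit to $t \geq t_{\min}$ discards the early transient, which is the step that matters when the decay is only asymptotically exponential.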
Finally, the non-monotonic behavior of the characteristic times, as well as the step in the curve for $a=1.28$, may be accounted for by the presence of two different competing mechanisms as described in [@nos]: once a site is detached it pulls out its neighbors which, in turn, pull out their neighbors in a cascade effect until the whole interface is depinned in a time which grows linearly with $L$. This is more likely for small systems, but the probability that a site gets detached increases with the system size. Another way to characterize the power-law regime is to analyze the single-site stationary probability density function (ss-pdf), defined as the average of $n(t)$ over pinned states rather than over all runs. Figure \[histo\] shows the unnormalized ss-pdf for different values of $a$. In the exponential regime ($a<1.22$) the histogram exhibits a maximum at a pinned state with $\langle n \rangle > 0$. In the power-law regime, however, the histogram develops a secondary maximum near $n=0$, indicating that a fraction of the interface depins. As $a$ increases, the secondary maximum, at zero $n$, increases while the maximum, at finite $n$, decreases. At the stability edge ($a^*\approx 1.3$) the histogram changes abruptly into a delta function at $n=0$ and the pinned phase becomes unstable. The differences between the exponential and power-law regimes are also observed in a space-time snapshot of a numerical solution of (\[toral\]). In Fig. \[confn\] we plot the stationary field $n$, for $a=1.28$, exhibiting patterns characteristic of STI [@raul]. The main feature of these patterns is the appearance of depinned patches (absent for values of $a$ in the exponential regime) with a wide range of sizes and lifetimes within the pinned phase. This regime, overlooked in previous studies of nonequilibrium depinning transitions [@haye2; @marsili], seems to correspond to the power-law regime described earlier.
It is therefore restricted to a narrow range between the exponential and the depinned regimes. This finding is at odds with results of previous work claiming that STI is generic in the coexistence region [@raul]. A typical profile in terms of $h$ is shown in Fig. \[confh\]. The depinned interfacial regions form triangles with constant average slope $s$. These triangular droplets are similar to those described in the discrete model of [@haye2]. By taking averages of (\[kpzyv\]), the typical slope, $s$, of the triangular facets is determined through $$|\lambda_R| s^2 =a+1, \label{slopes}$$ where $\lambda_R$ is the renormalized non-linear coefficient of the KPZ equation. In order to verify equation (\[slopes\]) we have fixed $\nu=p=D=1$, $\lambda=-1$ and $a=2.16$. Averaging over 250,000 different triangles, we find an average slope $s=1.781$, while the value of $\lambda_R$ calculated from the tilt-dependent velocity of the depinned interface [@barabasi] yields $\lambda_R=-0.9934$, from which $s=1.784$, in excellent agreement with the previously measured value. We also studied the size distribution of triangles within the power-law regime. Our results correspond to $\nu=p=D=1$, $\lambda=-1$, $L=500$, $b=-4$, and the following values of $a$: $2.12, 2.14, 2.15$, and $2.16$, and are summarized in Fig. \[tridistri\]. $a \approx 2.10$ is the boundary between the power-law and exponential regimes and the pinned phase is unstable for $a^* \approx 2.18$. The maximum size of the depinned regions increases as this instability is approached. Our data suggest an exponential dependence on the size of the triangular base. This indicates that there is a maximum size for the depinned regions and thus rigorous scale invariance (typical of growth driven by a coarsening mechanism) of the STI region is ruled out. More explicitly, the distribution of triangle sizes, $l$, is described very well by the function $\exp [3.44(a-2.176)l]$, implying that the exponential slopes in Fig.
\[tridistri\] are proportional to $a-a^*$. Clearly, triangles with a base smaller than $\sim 2$–$3$ lattice spacings cannot be visualized due to the discretization of equation (\[toral\]). A simple extrapolation indicates that the triangles become imperceptibly small for $a \approx 2.05$, in good agreement with the value obtained for the boundary between the power-law and exponential regimes. Therefore, we cannot rule out the possibility that the triangles are ubiquitous throughout the coexistence region (although not always visible in a discrete numerical simulation), in which case the power-law detaching times should turn into exponentials for large enough times and system sizes. In this case the force exerted by the non-linear KPZ term on the triangular facets against the direction of growth, at late times, guarantees the stability of the pinned phase [@haye2] throughout the finite coexistence region. Finally, we study the phase-coexistence regime in a two-dimensional system to check whether the triangular patterns survive in higher dimensionalities. In particular, we consider a system size $100 \times 100$ and take $a=2$, $b=-4$, well within the coexistence region. We find structures such as those shown in Fig. \[pyramid\]: the triangles become pyramids. Note that the edges of the pyramid bases are parallel to the axes of the discretization lattice. This suggests that the pyramids are lattice artefacts and that a continuum system may exhibit conical structures. Conclusions =========== We have investigated a continuum model for nonequilibrium wetting transitions. The model consists of a KPZ equation in the presence of a short-ranged substrate potential, and is the most natural non-equilibrium extension of the interface displacement models used in equilibrium wetting. It can be mapped into a multiplicative noise problem, enabling simple theoretical calculations at the mean-field level.
Numerical simulations reproduce a phase diagram analogous to that obtained within mean-field, including first as well as second-order phase transitions. In particular, we have found complete wetting and critical wetting transitions, as well as a finite area in the temperature-chemical potential phase diagram where pinned and depinned phases coexist. This finite coexistence region allows us to define critical wetting along a range of paths that are, however, characterized by the same critical exponents. Within this area we identified two regimes. In the first, the lifetime of the pinned phase grows exponentially with increasing system size and its ss-pdf is bell-shaped. The second one exhibits STI, lifetimes consistent with a power-law, and a double-peaked ss-pdf. The main feature of the latter regime is the presence of triangular structures that have been characterized by their slopes and size distributions. An interesting open problem is that of the equilibrium limit of non-equilibrium wetting. The Cole-Hopf transform precludes the limit $\lambda =0$ to be studied using this method. Moreover, we have noted how the behavior of the EW equation in the presence of a wetting potential differs from the weak-noise regime of the MN equation. This leaves the crossover to equilibrium wetting an open challenge. In addition, the effects of long-ranged potentials on the phenomenology described here remain to be investigated. Finally it would be extremely interesting to develop experiments in order to explore the rich, non-equilibrium phenomenology described in the previous sections; liquid-crystals are in our opinion good candidates for this. It is our hope that this work will stimulate experimental studies in this direction. acknowledgments =============== We acknowledge financial support from the E.U. through Contracts No. 
ERBFMRXCT980171 and ERBFMRXCT980183, by the Ministerio de Ciencia y Tecnología (FEDER) under project BFM2001-2841 and from the Fundação para a Ciência e a Tecnologia, contract SFRH/BPD/5654/2001. [99]{} S. Dietrich, in [*Phase Transitions and Critical Phenomena*]{}, vol. 12, edited by C. Domb and J. Lebowitz, Academic Press (1983); D.E. Sullivan and M.M. Telo da Gama, in [*Fluid Interfacial Phenomena*]{}, ed. C.A. Croxton, Wiley, New York (1986). M.E. Fisher, in [*Jerusalem Winter School for Theoretical Physics: Statistical Mechanics of Membranes and Surfaces*]{}, vol. 5, edited by D. Nelson, T. Piran and S. Weinberg, World Scientific (1989). E. Brézin, B.I. Halperin, and S. Leibler, J. Phys. (Paris) [**44**]{}, 775 (1983). The derivation of this functional is far from trivial. Ideally one should constrain the interface, away from its equilibrium flat position, in the configuration $h({\bf x})$ and, using the microscopic Hamiltonian, take a partial trace over the bulk variables. Note also that the displacement model assumes the existence of an interface and thus is only valid below the bulk critical temperature, where distinct “liquid” and “gas” phases are defined. If, instead, one considers long-range (van der Waals) interactions, the potential has the general form [@margarida] $$V(h)=b(T) h^{-m}+c h^{-n}, \qquad n>m>0.$$ R. Lipowsky, J. Phys. A [**18**]{}, L585 (1985). S.F. Edwards and D.R. Wilkinson, Proc. R. Soc. London A [**381**]{}, 17 (1982). A.-L. Barabási and H.E. Stanley, [*Fractal Concepts in Surface Growth*]{}, Cambridge University Press, Cambridge 1995. H. Hinrichsen, R. Livi, D. Mukamel, and A. Politi, Phys. Rev. Lett. [**79**]{}, 2710 (1997). F. de los Santos, M.M. Telo da Gama, and M.A. Muñoz, Europhys. Lett. [**57**]{}, 803 (2002). G. Grinstein, M. A. Muñoz, and Y. Tu, Phys. Rev. Lett. [**76**]{}, 4376 (1996). Y. Tu, G. Grinstein and M. A. Muñoz, Phys. Rev. Lett. [**78**]{}, 274 (1997). M. A. Muñoz and T. Hwa, Europhys. Lett. [**41**]{}, 147 (1998).
T. Birner, K. Lippert, R. Müller, A. Kühnel, and U. Behn, Phys. Rev. E [**65**]{}, 046110 (2002). W. Genovese and M.A. Muñoz, Phys. Rev. E [**60**]{}, 69 (1999). See P. Le Doussal and K. J. Wiese, cond-mat/0208204, and references therein. J. Villain, J. Phys. I [**1**]{}, 19 (1991); A. Pimpinelli and J. Villain, [*The Physics of Crystal Growth*]{}, Cambridge University Press, New York, 1998. R. Müller, K. Lippert, A. Kühnel, and U. Behn, Phys. Rev. E [**56**]{}, 2658 (1997). L. Giada and M. Marsili, Phys. Rev. E [**62**]{}, 6015 (2000). H. Hinrichsen, R. Livi, D. Mukamel, and A. Politi, Phys. Rev. E [**61**]{}, R1032 (2000). J. Krug and P. Meakin, J. Phys. A [**23**]{}, L987 (1990). The influence of the sign of $\lambda$ has been previously investigated in [@haye1; @haye2; @MN]. The equivalence of equations (\[kpzyv\]) and (\[toral\]) is reminiscent of a result in equilibrium wetting establishing that at the mean-field level (dropping the noise and ignoring spatial variations of the order parameter) critical wetting in systems with long-range forces is equivalent to complete wetting in systems with short-range interactions; R. Lipowsky, Phys. Rev. Lett. [**52**]{}, 1429 (1984). M.G. Zimmermann, R. Toral, O. Piro, and M. San Miguel, Phys. Rev. Lett. [**85**]{}, 3612 (2000). N.G. van Kampen, [*Stochastic Processes in Physics and Chemistry*]{}, North Holland (1992). C. Van den Broeck, J.M.R. Parrondo, J. Armero, and A. Hernández-Machado, Phys. Rev. E [**49**]{}, 2639 (1994). M. Shiino, Phys. Rev. A [**36**]{}, 2393 (1987). ![Mean-field phase diagram and typical solutions of the equation $F(\n)=\n$ (temperature in units such that $k_B=1$).[]{data-label="pdandtipos"}](ep8065.fig1.eps){width="10cm"} ![Top: linear correlation coefficient for $\ln n$ as function of $\ln |b-b_W|$ for different values of $b_W$. The maximum gives the best estimate of $b_W=-0.70 \pm 0.01$. Bottom: log-log plot of $\ln n$ as function of $\ln |b-b_W|$ in the vicinity of the critical point.
The line is a least-squares fit; from its slope the critical wetting exponent is found to be $1.20 \pm 0.02$[]{data-label="critical"}](ep8065.fig2.eps){width="8.5cm"} ![(A) Characteristic depinning times and (B) ss-pdf for various representative values of $a$.[]{data-label="histo"}](ep8065.fig3.eps){width="15cm"} ![Configuration in the $n$-representation for $a=1.18$ and $b=-4$. Depinned regions ($n<1$) are colored in dark-grey and pinned ones ($n>1$) in light-gray. 1000 time slices are depicted at intervals of 50 time uints. The system size is $L=500$.[]{data-label="confn"}](ep8065.fig4.eps){width="12cm"} ![Instantaneous configuration of the interface for time slice 400 (marked with a line in Fig. \[confn\]). Parameters as in Fig. \[confn\].[]{data-label="confh"}](ep8065.fig5.eps){width="8cm"} ![Distribution of triangles as a function of the size of the triangular base, for $a=2.12,2.14,2.15$ and 2.16.[]{data-label="tridistri"}](ep8065.fig6.eps){width="15cm"} ![Snapshot of an interface configuration for a $100 \times 100$ system (not all the substrate is shown) and parameters $a=2$ and $b=-4$. []{data-label="pyramid"}](ep8065.fig7.eps){width="9cm"}
--- author: - Toufik Mansour title: 'Restricted permutations by patterns of type $(2,1)$' --- [ LaBRI (UMR 5800), Université Bordeaux 1, 351 cours de la Libération,\ 33405 Talence Cedex, France\ [toufik@labri.fr]{} ]{} Abstract {#abstract .unnumbered} ======== Recently, Babson and Steingrimsson (see [@BS]) introduced generalized permutation patterns that allow the requirement that two adjacent letters in a pattern must be adjacent in the permutation. In this paper we study the generating functions for the number of permutations on $n$ letters avoiding a generalized pattern $ab\mn c$ where $(a,b,c)\in S_3$, and containing a prescribed number of occurrences of a generalized pattern $cd\mn e$ where $(c,d,e)\in S_3$. As a consequence, we derive all the previously known results for this kind of problem, as well as many new results. Introduction ============ [**Classical patterns.**]{} Let $\alpha\in S_n$ and $\tau\in S_k$ be two permutations. We say that $\alpha$ [*contains*]{} $\tau$ if there exists a subsequence $1\leq i_1<i_2<\cdots<i_k\leq n$ such that $(\alpha_{i_1},\dots,\alpha_{i_k})$ is order-isomorphic to $\tau$; in such a context $\tau$ is usually called a [*pattern*]{}. We say $\alpha$ [*avoids*]{} $\tau$, or is $\tau$-[*avoiding*]{}, if such a subsequence does not exist. The set of all $\tau$-avoiding permutations in $S_n$ is denoted $S_n(\tau)$. For an arbitrary finite collection of patterns $T$, we say that $\alpha$ avoids $T$ if $\alpha$ avoids every $\tau\in T$; the corresponding subset of $S_n$ is denoted $S_n(T)$. While the case of permutations avoiding a single pattern has attracted much attention, the case of multiple pattern avoidance remains less investigated. In particular, it is natural, as the next step, to consider permutations avoiding pairs of patterns $\tau_1$, $\tau_2$.
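Since all results in the paper are stated in terms of pattern containment, it may help to make the classical definitions concrete. The following Python sketch is our own illustration (the function names are ours, not the paper's); it tests classical containment by brute force and reproduces the well-known fact that $S_n(123)$ is counted by the Catalan numbers.

```python
from itertools import combinations, permutations

def contains_classical(perm, tau):
    """Classical containment: some (not necessarily contiguous) subsequence
    of perm is order-isomorphic to the pattern tau."""
    k = len(tau)
    for idx in combinations(range(len(perm)), k):
        word = [perm[i] for i in idx]
        if all((word[s] < word[t]) == (tau[s] < tau[t])
               for s in range(k) for t in range(k)):
            return True
    return False

def S_n_avoiding(n, tau):
    """The set S_n(tau) of tau-avoiding permutations of 1..n."""
    return [p for p in permutations(range(1, n + 1))
            if not contains_classical(p, tau)]

# |S_n(123)| is the n-th Catalan number: 1, 1, 2, 5, 14, 42, ...
print([len(S_n_avoiding(n, (1, 2, 3))) for n in range(6)])  # -> [1, 1, 2, 5, 14, 42]
```

This exhaustive test is exponential in $n$, of course; it is only meant as a concrete companion to the definitions for small cases.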
This problem was solved completely for $\tau_1,\tau_2\in S_3$ (see [@SS]), for $\tau_1\in S_3$ and $\tau_2\in S_4$ (see [@W]), and for $\tau_1,\tau_2\in S_4$ (see [@B1; @Km] and references therein). Several recent papers deal with the case $\tau_1\in S_3$, $\tau_2\in S_k$ for various pairs $\tau_1,\tau_2$ (see [@CW; @Kr; @MV3] and references therein). Another natural question is to study permutations avoiding $\tau_1$ and containing $\tau_2$ exactly $t$ times. Such a problem for certain $\tau_1,\tau_2\in S_3$ and $t=1$ was investigated in [@R], and for certain $\tau_1\in S_3$, $\tau_2\in S_k$ in [@RWZ; @MV1; @Kr; @MV3; @MV2; @MV4]. The tools involved in these papers include continued fractions, Chebyshev polynomials, and Dyck paths.\ [**Generalized patterns.**]{} In [@BS] Babson and Steingrimsson introduced generalized permutation patterns that allow the requirement that two adjacent letters in a pattern must be adjacent in the permutation. The idea of Babson and Steingrimsson in introducing these patterns was the study of Mahonian statistics. Following [@C], we define our [*generalized patterns*]{} as words on the letters $1,2,3,\dots$ where two adjacent letters may or may not be separated by a dash. The absence of a dash between two adjacent letters in a pattern indicates that the corresponding letters in the permutation must be adjacent, and in the order (order-isomorphically) given by the pattern. For example, an occurrence of $23\mn1$ in a permutation $\pi=(\pi_1,\pi_2,\cdots,\pi_n)$ is a subword $(\pi_i,\pi_{i+1},\pi_{j})$ where $i+1<j$ such that $\pi_j<\pi_{i}<\pi_{i+1}$. We say $\tau$ is a generalized pattern of type $(2,1)$ if it has the form $ab\mn c$ where $(a,b,c)\in S_3$. There are six generalized patterns of type $(2,1)$, namely $12\mn3$, $13\mn2$, $21\mn3$, $23\mn1$, $31\mn2$, and $32\mn1$. By the complement operation we get three different classes: $\{12\mn3, 32\mn1\}$, $\{13\mn2,31\mn2\}$, and $\{21\mn3, 23\mn1\}$.
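The dashed-pattern definition above can likewise be checked mechanically. The sketch below (ours, not the paper's) counts occurrences of a type-$(2,1)$ pattern $ab\mn c$, i.e. subwords $(\pi_i,\pi_{i+1},\pi_j)$ with $j>i+1$ order-isomorphic to $(a,b,c)$.

```python
def occ_type_21(perm, pat):
    """Number of occurrences of the type-(2,1) generalized pattern
    pat = (a, b, c), read as 'ab-c': subwords (perm[i], perm[i+1], perm[j])
    with j > i + 1 that are order-isomorphic to (a, b, c)."""
    n, count = len(perm), 0
    for i in range(n - 2):
        for j in range(i + 2, n):
            word = (perm[i], perm[i + 1], perm[j])
            if all((word[s] < word[t]) == (pat[s] < pat[t])
                   for s in range(3) for t in range(3)):
                count += 1
    return count

# In 3 4 1 2 the adjacent pair (3,4) followed later by 1 or by 2
# gives two occurrences of 23-1:
print(occ_type_21((3, 4, 1, 2), (2, 3, 1)))  # -> 2
```

A permutation avoids $ab\mn c$ exactly when this count is $0$, which is how the sets studied below can be enumerated for small $n$.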
While the case of permutations avoiding a single pattern has attracted much attention, the case of multiple pattern avoidance remains less investigated. In particular, it is natural, as the next step, to consider permutations avoiding pairs of generalized patterns $\tau_1$, $\tau_2$. This problem was solved completely for $\tau_1,\tau_2$ two generalized patterns of length three with exactly one adjacent pair of letters (see [@CM1]). Using a result of Clarke, Steingr[í]{}msson and Zeng [@ClStZe97 Corollary 11], Claesson and Mansour [@CM2] showed that the distribution of the patterns $2\mn31$ and $31\mn2$ is given by a Stieltjes continued fraction, as follows: \[CSZ\] The following Stieltjes continued fraction expansion holds $$\sum\limits_{n\geq 0}\sum\limits_{\pi\in S_n} x^{1+(12)\pi } y^{(21)\pi} p^{(2\mn31)\pi} q^{(31\mn2)\pi} t^{|\pi|} = \cfrac{1}{1 - \cfrac{x {[\,1\,]_{p,q}} t}{1 - \cfrac{y {[\,1\,]_{p,q}} t}{1 - \cfrac{x {[\,2\,]_{p,q}} t}{1 - \cfrac{y {[\,2\,]_{p,q}} t}{\quad\ddots}}}}}$$ where ${[\,n\,]_{p,q}} = q^{n-1}+pq^{n-2}+\cdots+p^{n-2}q+p^{n-1}$ and $(\tau)\pi$ is the number of occurrences of $\tau$ in $\pi$. In the present paper, as a consequence of [@CM1] (see also [@C; @CM2]), we exhibit a general approach to study the number of permutations avoiding a generalized pattern of type $(2,1)$ and containing a prescribed number of occurrences of a generalized pattern $\tau$ of type $(2,1)$. As a consequence, we derive all the previously known results for this kind of problem, as well as many new results. Avoiding $12\mn3$ ================= Let $f_{\tau;r}(n)$ be the number of all permutations in $S_n(12\mn3)$ containing $\tau$ exactly $r$ times. We denote the corresponding exponential and ordinary generating functions by $\mathcal{F}_{\tau;r}(x)$ and $F_{\tau;r}(x)$ respectively; that is, $\mathcal{F}_{\tau;r}(x)=\sum\limits_{n\geq0}\frac{f_{\tau;r}(n)}{n!}x^n$ and $F_{\tau;r}(x)=\sum\limits_{n\geq0}f_{\tau;r}(n)x^n$.
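The numbers $f_{\tau;r}(n)$ just defined are easy to tabulate by brute force for small $n$, which is useful for checking the closed forms derived in this section. The following sketch (our own illustration) computes $f_{13\mn2;r}(n)$ directly from the definitions.

```python
from itertools import permutations

def avoids_12_3(p):
    """True if p has no occurrence of 12-3: p[i] < p[i+1] < p[j], j > i+1."""
    n = len(p)
    return not any(p[i] < p[i + 1] < p[j]
                   for i in range(n - 2) for j in range(i + 2, n))

def occ_13_2(p):
    """Occurrences of 13-2: p[i] < p[j] < p[i+1] with j > i + 1."""
    n = len(p)
    return sum(1 for i in range(n - 2) for j in range(i + 2, n)
               if p[i] < p[j] < p[i + 1])

def f_13_2(r, n):
    """f_{13-2;r}(n): 12-3-avoiders of length n with exactly r occurrences of 13-2."""
    return sum(1 for p in permutations(range(1, n + 1))
               if avoids_12_3(p) and occ_13_2(p) == r)

print([f_13_2(0, n) for n in range(1, 7)])  # -> [1, 2, 4, 8, 16, 32], i.e. 2^(n-1)
```

The printed values agree with the formula $f_{13\mn2;0}(n)=2^{n-1}$ obtained as an application of Theorem \[th14\].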
We extend the above definitions to $r<0$ by setting $f_{\tau;r}(n)=0$ for any $\tau$. Our present aim is to count the number of permutations avoiding $12\mn 3$ and avoiding (or containing exactly $r$ times) an arbitrary generalized pattern $\tau$; to that end we introduce another notation. Let $f_{\tau;r}(n; i_1,i_2,\ldots,i_j)$ be the number of permutations $\pi\in S_n(12\mn3)$ containing $\tau$ exactly $r$ times such that $\pi_1\pi_2\dots\pi_j=i_1i_2\dots i_j$. The main body of this section is divided into $6$ subsections corresponding to the cases: $\tau$ is a general generalized pattern, $13\mn2$, $21\mn3$, $23\mn1$, $31\mn2$, and $32\mn1$. $\tau$ a generalized pattern of length $k$ ------------------------------------------ Here we study certain cases of $\tau$, where $\tau$ is a generalized pattern of length $k$ without dashes, or with exactly one dash. \[th11\] Let $k\geq 2$, $P_k(x)=\sum\limits_{j=0}^{k-2}\frac{x^j}{j!}$, $G_0(x)=e^x-P_k(x)$, and for $s\geq 1$ let $G_s(x)=G_0(x)\int_0^xG_{s-1}(t)dt$. Then $\mathcal{F}_{(k-1)\dots21k;0}(x)=e^{\int_0^x P_k(t)dt}$, and for $r\geq 1$ $$\mathcal{F}_{(k-1)\dots21k;r}(x)=e^{\int_0^x P_k(t)dt}\int_0^x G_{r-1}(t)dt.$$ Let $\alpha\in S_n(12\mn3)$ such that $\alpha_j=n$; so $\alpha_1>\alpha_2>\dots>\alpha_{j-1}$. Therefore, $\alpha$ contains $\tau=(k-1)\dots21k$ exactly $r$ times if and only if $(\alpha_{j+1},\dots,\alpha_n)$ contains $\tau$ exactly $r$ times if $j\leq k-1$, and contains $\tau$ exactly $r-1$ times if $j\geq k$. Thus $$f_{\tau;r}(n)=\sum\limits_{j=1}^{k-1}\binom{n-1}{j-1}f_{\tau;r}(n-j)+\sum\limits_{j=k}^n\binom{n-1}{j-1}f_{\tau;r-1}(n-j).$$ Multiplying by $x^{n-1}/(n-1)!$ and summing over all $n\geq 1$ we get $$\mathcal{F}'_{(k-1)\dots21k;r}(x)=\sum\limits_{j=0}^{k-2} \frac{x^j}{j!}\left(\mathcal{F}_{(k-1)\dots21k;r}(x)-\mathcal{F}_{(k-1)\dots21k;r-1}(x)\right)+e^x\mathcal{F}_{(k-1)\dots21k;r-1}(x).$$ The rest is easy to check.
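The recurrence in the proof above can be confirmed mechanically for small cases. The sketch below is our own check, for $k=3$ (so $\tau=213$, a dashless pattern, i.e. three adjacent letters order-isomorphic to $213$), comparing both sides of the recurrence by enumerating $S_n(12\mn3)$.

```python
from itertools import permutations
from math import comb

def avoids_12_3(p):
    n = len(p)
    return not any(p[i] < p[i + 1] < p[j]
                   for i in range(n - 2) for j in range(i + 2, n))

def occ_213(p):
    # dashless pattern 213: consecutive triples with p[i+1] < p[i] < p[i+2]
    return sum(1 for i in range(len(p) - 2) if p[i + 1] < p[i] < p[i + 2])

def f(r, n):
    # f_{213;r}(n) by direct enumeration (f = 0 for r < 0 by convention)
    if r < 0:
        return 0
    return sum(1 for p in permutations(range(1, n + 1))
               if avoids_12_3(p) and occ_213(p) == r)

k = 3  # tau = 213
for n in range(1, 8):
    for r in range(3):
        rhs = (sum(comb(n - 1, j - 1) * f(r, n - j) for j in range(1, k)) +
               sum(comb(n - 1, j - 1) * f(r - 1, n - j) for j in range(k, n + 1)))
        assert f(r, n) == rhs
print("recurrence verified for k = 3, n <= 7")
```

For $r=0$ this recurrence collapses to $f(n)=f(n-1)+(n-1)f(n-2)$, the recurrence for the number of involutions, consistent with the discussion following Theorem \[th12\].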
[(see Claesson [@C])]{}\[ex11\] Theorem \[th11\] yields for $r=0$ that $$\mathcal{F}_{(k-1)\dots21k;0}(x)=e^{\frac{x^1}{1!}+\frac{x^2}{2!}+\dots+\frac{x^{k-1}}{(k-1)!}}.$$ Letting $k\rightarrow\infty$, we get that the exponential generating function for the number of $12\mn3$-avoiding permutations in $S_n$ is given by $e^{e^x-1}$. Besides, for given $k\geq 2$, summing in Theorem \[th11\] over all $r\geq 0$ recovers $\sum\limits_{r\geq0}\mathcal{F}_{(k-1)\dots21k;r}(x)=e^{e^x-1}$, as expected. Claesson [@C] (see also [@CM1 Pro. 28]) proved that the number of permutations in $S_n(12\mn3,21\mn3)$ is the same as the number of involutions in $S_n$. The case of varying $k$ is more interesting; as an extension of this result we have the following. \[th12\] For $k\geq 2$, $$\mathcal{F}_{(k-1)\dots21\mn k;0}(x)=e^{\frac{x^1}{1!}+\dots+\frac{x^{k-1}}{(k-1)!}}.$$ Let $\tau=(k-1)\dots 21\mn k$; by the definitions we get $$f_{\tau;0}(n)=\sum\limits_{j=1}^n f_{\tau;0}(n;j),\quad f_{\tau;0}(n;n)=f_{\tau;0}(n-1),\eqno(1)$$ and $$f_{\tau;0}(n;i_1,\dots,i_j)=\sum\limits_{i_{j+1}=1}^{i_j-1}f_{\tau;0}(n;i_1,\dots,i_j,i_{j+1})+f_{\tau;0}(n;i_1,\dots,i_j,n)\eqno(2)$$ for all $n-1\geq i_1>i_2>\dots>i_j\geq 1$. Therefore, since $f_{\tau;0}(n;i_1,\dots,i_j)=0$ for all $n-1>i_1>\dots>i_j\geq1$ where $j\geq k-1$ and since $f_{\tau;0}(n;i_1,\dots,i_j,n)=f_{\tau;0}(n-1-j)$ for all $n-1>i_1>\dots>i_j\geq 1$ where $0\leq j\leq k-2$, we have for all $n\geq 1$ $$f_{\tau;0}(n)=\sum\limits_{j=0}^{k-2}\binom{n-1}{j}f_{\tau;0}(n-1-j).$$ The rest is easy to see, as in the proof of Theorem \[th11\]. In view of Example \[ex11\] and Theorem \[th12\] we get that the number of permutations in $S_n(12\mn3, (k-1)\dots21\mn k)$ is the same as the number of permutations in $S_n(12\mn3, (k-1)\dots21k)$. In addition, $$S_n(12\mn3,(k-1)\dots21\mn k)=S_n(12\mn 3,(k-1)\dots21k),\eqno(3)$$ which can be proved as follows.
Let $\alpha=(\alpha',n,\alpha'')$; since $\alpha$ avoids $12\mn3$ we get that $\alpha'$ is decreasing, so by the principle of induction on the length of $\alpha$ it is easy to see that $\alpha$ avoids $(k-1)\dots21k$ if and only if it avoids $(k-1)\dots21\mn k$.\ In [@CM1] it was shown that the number of permutations in $S_n(12\mn3, 13\mn2)$ is given by $2^{n-1}$. The case of varying $k$ is more interesting; as an extension of this result we have the following. \[th13\] Let $k\geq 3$; then for all $n\geq 1$ $$\begin{array}{ll} f_{(k-2)\dots21k\mn(k-1);0}(n)&=\sum\limits_{j=0}^{k-3}\binom{n-1}{j}f_{(k-2)\dots21k\mn(k-1);0}(n-1-j)+\\ &\quad\quad+\sum\limits_{j=k-2}^{n-1}\binom{n-j+k-4}{k-3}f_{(k-2)\dots21k\mn(k-1);0}(n-1-j). \end{array}$$ Let $\tau=(k-2)\dots21k\mn(k-1)$ and let $n-1\geq i_1>\dots>i_j\geq 1$; then for $j\leq k-3$ $$f_{\tau;0}(n;i_1,\dots,i_j,n)=f_{\tau;0}(n-1-j)$$ and for $j\geq k-2$ $$\begin{array}{ll} f_{\tau;0}(n;i_1,\dots,i_j,n)&=f_{\tau;0}(n;n-1,n-2,\dots,n-(j-k+3),i_{j-k+4},\dots,i_j,n)=\\ &=f_{\tau;0}(n-j). \end{array}$$ Therefore, Equation (1) and Equation (2) yield the desired result. [(see Claesson and Mansour [@CM2])]{} Theorem \[th13\] yields for $k=3$ that the number of permutations in $S_n(12\mn3,13\mn2)$ is given by $2^{n-1}$. As another example, for $k=4$ we get $$\mathcal{F}_{214\mn3;0}(x)=1+\int_0^xe^{2x+x^2/2}dx.$$ As a remark, arguing as in the proof of Equation (3), we have for all $k\geq3$ $$\mathcal{F}_{(k-2)\dots21k\mn(k-1);0}(x)=\mathcal{F}_{(k-2)\dots21\mn k\mn(k-1);0}(x).$$ $\tau=13\mn2$ ------------- \[th14\] Let $r$ be a nonnegative integer. Then $$F_{13\mn2;r}(x)=\frac{1-x}{1-2x}\delta_{r,0}+\frac{x^2}{1-2x}\sum\limits_{d=1}^r \frac{F_{13\mn2;r-d}(x)-\sum\limits_{j=0}^{d-1}f_{13\mn2;r-d}(j)x^j}{(1-x)^d}.$$ Let $r\geq 0$, $b_r(n)=f_{13\mn2;r}(n)$, and let $1\leq i\neq j\leq n$.
If $i<j$, then since the permutations avoid $12\mn3$ we have $b_r(n;i,j)=0$ for $j\leq n-1$ and $b_r(n;i,n)=b_{r-(n-1-i)}(n-2)$, hence $$\sum\limits_{j=i+1}^n b_r(n;i,j) = b_{r-(n-1-i)}(n-1;n-1)=b_{r-(n-1-i)}(n-2).$$ If $i>j$ then by the definitions we have $$b_r(n;i,j) = b_r(n-1;j).$$ Owing to Equation (2) we have shown that, for all $1\leq i \leq n-2$, $$b_r(n;i)=b_{r-(n-1-i)}(n-2)+\sum\limits_{j=1}^{i-1}b_r(n-1;j).\eqno(4)$$ Moreover, it is plain that $$b_r(n;n)=b_r(n;n-1)=b_r(n-1),\eqno(5)$$ and by means of induction we shall show that Equation (4) implies: If $2\leq m \leq n-1$ then $$\begin{array}{l} b_r(n;n-m)=\sum\limits_j (-1)^j\left[ \binom{m-1}{j}+\binom{m-2}{j-1} \right]b_r(n-1-j)+\\ \qquad\qquad\qquad+b_{r-(m-1)}(n-2)- \sum\limits_{d\geq 1}\sum\limits_j(-1)^j\binom{m-2-d}{j}b_{r-d}(n-3-j). \end{array}\eqno(6)$$ First we verify the statement for $m=2$; in this case Equation (6) becomes $$b_r(n;n-2) = b_r(n-1) - 2 b_r(n-2) + b_{r-1}(n-2).$$ Indeed, $$\begin{array}{l} b_r(n;n-m) =\\ \quad\quad = \sum\limits_{j=1}^{n-m-1} b_r(n;n-m,j)+ \sum\limits_{j=n-m+1}^{n}b_r(n;n-m,j)\\ \quad\quad = \sum\limits_{j=1}^{n-m-1} b_r(n-1;j)+ b_{r-(m-1)}(n-1;n-1)\\ \quad\quad = b_r(n-1) - 2 b_r(n-2) + b_{r-(m-1)}(n-2) - \sum\limits_{k=2}^{m-1} b_r(n-1;n-1-k), \end{array}\eqno(7)$$ where the three equalities follow from Equation (2), and Equation (1) together with  (4) and  (5), respectively. Now simply put $m=2$ to obtain Equation (6). Assume that Equation (6) holds for all $k$ such that $2\leq k \leq m-1$. Then, employing the familiar identity $\binom 1 k + \binom 2 k + \cdots +\binom n k = \binom{n+1}{k+1}$, the trailing sum in Equation (7) expands as follows.
Since $$\begin{array}{l} \sum\limits_{k=2}^{m-1}\sum\limits_j(-1)^j\Biggl[\binom{k-1}{j}+\binom{k-2}{j-1}\Biggr] b_r(n-2-j)=\\ \quad=\sum\limits_{j}(-1)^j\Biggl[\binom{m-1}{j+1}+\binom{m-2}{j}\Biggr] b_r(n-2-j) -2b_r(n-2)=\\ \quad\quad=-\sum\limits_{j}(-1)^j\Biggl[\binom{m-1}{j}+\binom{m-2}{j-1}\Biggr]b_r(n-1-j) +b_r(n-1)-2b_r(n-2), \end{array}$$ $$\sum\limits_{k=2}^{m-1}b_{r-(k-1)}(n-3)=\sum\limits_{d\geq1}b_{r-d}(n-3),$$ and $$\begin{array}{l} \sum\limits_{k=2}^{m-1}\sum\limits_{d\geq1}\sum\limits_j(-1)^j\binom{k-2-d}{j}b_{r-d}(n-4-j)=\\ \qquad\qquad\quad=-\sum\limits_{d\geq 1}\sum\limits_j(-1)^j\binom{m-2-d}{j}b_{r-d}(n-3-j) +\sum\limits_{d\geq 1}b_{r-d}(n-3), \end{array}$$ together with Equation (7) we get that Equation (6) holds for $m$; by the principle of induction the universal validity of Equation (6) follows.\ Now, summing $b_r(n;n-m)$ over all $0\leq m\leq n-1$ and using Equations (1),  (5), and  (6), we get $$\begin{array}{l} \sum\limits_j(-1)^j\left[\binom{n-1}{j}+\binom{n-2}{j-1}\right]b_r(n-j)=\\ \qquad\qquad\qquad\qquad=\sum\limits_{d\geq1}\sum\limits_j(-1)^j\binom{n-2-d}{j}b_{r-d}(n-2-j). \end{array}\eqno(8)$$ Using [@CM2 Lem. 7] to translate the above equation into ordinary generating functions, we get $$\begin{array}{l} (1-u)F_{13\mn2;r}\left(\frac{u}{1+u}\right)=\delta_{r,0}+\\ \quad\quad\quad+u^2\sum\limits_{d=1}^r (1+u)^{d-1}\left[ F_{13\mn2;r-d}\left(\frac{u}{1+u}\right)-\sum\limits_{j=0}^{d-1}f_{13\mn2;r-d}(j)\left(\frac{u}{1+u}\right)^j\right]. \end{array}$$ Putting $x=u/(1+u)$ ($u=x/(1-x)$) we get the desired result. As an application of Theorem \[th14\] we get exact formulas for $f_{13\mn2;r}(n)$ for $r=0,1,2,3,4$. For all $n\geq 1$, $$\begin{array}{l} f_{13\mn2;0}(n)=2^{n-1};\\ f_{13\mn2;1}(n)=(n-3)2^{n-2}+1;\\ f_{13\mn2;2}(n)=(n^2-3n -6)2^{n-4}+n;\\ f_{13\mn2;3}(n)=1/3(n^3-31n-18)2^{n-5}+n^2-n+1;\\ f_{13\mn2;4}(n)=1/3(n-1)(n^3+7n^2-546n-312)2^{n-8}+2/3(n-1)(n^2-2n+3).
\end{array}$$ $\tau=21\mn3$ ------------- By the definitions it is easy to obtain the following: \[l161\] Let $n\geq 1$; then $$\begin{array}{l} f_{21\mn3;r}(n)=f_{21\mn3;r}(n-1)+\sum\limits_{i=1}^{n-1}f_{21\mn3;r}(n;i),\\ f_{21\mn3;r}(n;i)=f_{21\mn3;r}(n-2)+\sum\limits_{j=1}^{i-1}f_{21\mn3;r-(n-i)}(n-1;j),\quad\mbox{for} \ 1\leq i\leq n-1. \end{array}$$ Using the above lemma for a given $r$ we can obtain a recurrence for $f_{21\mn3;r}(n)$; here we present the first three cases $r=0,1,2$. \[th16\] For all $n\geq 1$ $$\begin{array}{l} f_{21\mn3;0}(n)=f_{21\mn3;0}(n-1)+(n-1)f_{21\mn3;0}(n-2);\\ \ \\ f_{21\mn3;1}(n)=f_{21\mn3;1}(n-1)+(n-1)f_{21\mn3;1}(n-2)+f_{21\mn3;0}(n-1)-f_{21\mn3;0}(n-2);\\ \ \\ f_{21\mn3;2}(n)=f_{21\mn3;2}(n-1)+(n-1)f_{21\mn3;2}(n-2)+f_{21\mn3;1}(n-1)-f_{21\mn3;1}(n-2)+\\ \qquad\qquad\qquad\qquad\quad\qquad\qquad\qquad\qquad\qquad\qquad+f_{21\mn3;0}(n-1)-2f_{21\mn3;0}(n-2). \end{array}$$ Case $r=0$: Lemma \[l161\] yields $f_{21\mn3;0}(n;i)=f_{21\mn3;0}(n-2)$ for all $1\leq i\leq n-1$, so for all $n\geq 1$ $$f_{21\mn3;0}(n)=f_{21\mn3;0}(n-1)+(n-1)f_{21\mn3;0}(n-2).$$ Case $r=1$: Lemma \[l161\] yields $f_{21\mn3;1}(n;i)=f_{21\mn3;1}(n-2)$ for all $1\leq i\leq n-2$, and $$f_{21\mn3;1}(n;n-1)=\sum\limits_{j=1}^{n-2}f_{21\mn3;0}(n-1;j)+f_{21\mn3;1}(n-2)$$ which is equivalent (using the case $r=0$) to $$f_{21\mn3;1}(n;n-1)=f_{21\mn3;1}(n-2)+f_{21\mn3;0}(n-1)-f_{21\mn3;0}(n-2).$$ Therefore, for all $n\geq 1$ $$f_{21\mn3;1}(n)=f_{21\mn3;1}(n-1)+(n-1)f_{21\mn3;1}(n-2)+f_{21\mn3;0}(n-1)-f_{21\mn3;0}(n-2).$$ Case $r=2$: similar to the cases $r=0,1$. $\tau=23\mn1$ ------------- \[th17\] For any nonnegative integer $r$, $$\begin{array}{l} F_{23\mn1;r}(x)=\frac{\delta_{r,0}}{1-x}+\\ \qquad\qquad\qquad+x^2\sum\limits_{d=0}^r(1-x)^{d-2}\left[F_{23\mn1;r-d} \left(\frac{x}{1-x}\right)-\sum\limits_{j=0}^{d-1}f_{23\mn1;r-d}(j)\left(\frac{x}{1-x}\right)^j\right].
\end{array}$$ By the definitions it is easy to state the following: \[l171\] Let $n\geq 1$; then $$\begin{array}{l} f_{23\mn1;r}(n)=f_{23\mn1;r}(n-1)+\sum\limits_{i=1}^{n-1}f_{23\mn1;r}(n;i),\\ f_{23\mn1;r}(n;i)=f_{23\mn1;r-(i-1)}(n-2)+\sum\limits_{j=1}^{i-1}f_{23\mn1;r}(n-1;j),\quad \mbox{for}\ 1\leq i\leq n-1. \end{array}$$ By the same argument as in the proof of Equation (6), using Lemma \[l171\], we have for all $1\leq m\leq n-1$, $$f_{23\mn1;r}(n;m)=f_{23\mn1;r+1-m}(n-2)+\sum\limits_{d\geq1}\sum\limits_j\binom{m-1-d}{j}f_{23\mn1;r+1-d}(n-3-j).$$ Therefore, summing $f_{23\mn1;r}(n;m)$ over all $1\leq m\leq n-1$, we get from Lemma \[l171\] that for all $n\geq 1$, $$f_{23\mn1;r}(n)=f_{23\mn1;r}(n-1)+\sum\limits_{d=0}^r\sum\limits_{j=0}^{n-2-d}\binom{n-2-d}{j}f_{23\mn1;r-d}(n-2-j).$$ Hence, using [@CM2 Lem. 7] we get the desired result. [(see Claesson and Mansour [@CM1 Pro. 7])]{}\[exxx1\] Theorem \[th17\] for $r=0$ yields $$F_{23\mn1;0}(x)=\frac{1}{1-x}+\frac{x^2}{(1-x)^2}F_{23\mn1;0}\left(\frac{x}{1-x}\right).$$ Iterating this identity infinitely often, we have $$F_{23\mn1;0}(x)=\sum\limits_{k\geq0}\frac{x^{2k}}{p_{k-1}(x)p_{k+1}(x)},$$ where $p_m(x)=\prod_{j=0}^m(1-jx)$. As another example, Theorem \[th17\] for $r=1$ yields (similarly) $$F_{23\mn1;1}(x)=\sum\limits_{d\geq 0}\left[ \frac{x^{2d+2}}{p_{d+1}(x)}\left( \sum\limits_{k\geq0}\frac{x^{2k}}{p_{k+d-1}(x)p_{k+d+1}(x)}-1\right)\right].$$ $\tau=31\mn2$ ------------- By the definitions it is easy to state the following: \[l151\] Let $n\geq 1$; then $$\begin{array}{l} f_{31\mn2;r}(n;n)=\sum\limits_{j=1}^{n-1}f_{31\mn2;r+1-j}(n-1;n-j),\\ f_{31\mn2;r}(n;i)=f_{31\mn2;r}(n-1;n-1)+\sum\limits_{j=1}^{i-1}f_{31\mn2;r-(i-1-j)}(n-1;j),\quad\mbox{for}\ 1\leq i\leq n-1. \end{array}$$ \[th15\] Let $r$ be a nonnegative integer; then $f_{31\mn2;r}(n)$ is a polynomial in $n$ of degree at most $2r+2$ with coefficients in $\mathcal{Q}$, for all $n\geq 0$.
Using Lemma \[l151\] for $r=0$ we obtain, first, that $f_{31\mn2;0}(n;n)=1$ and $f_{31\mn2;0}(n;1)=1$, and second that $f_{31\mn2;0}(n;j)=j$ for all $1\leq j\leq n-1$. Hence, for all $n\geq 0$, $$f_{31\mn2;0}(n)=\binom{n}{2}+1.$$ Now, assume that $f_{31\mn2;d}(n;j)$ is a polynomial of degree at most $2d+1$ with coefficients in $\mathcal{Q}$ for all $1\leq j\leq n$, where $d=0,1,2,\dots,r-1$. Then Lemma \[l151\] together with the induction hypothesis implies, first, that $f_{31\mn2;r}(n;n)$ and $f_{31\mn2;r}(n;1)$ are polynomials of degree at most $2r$, and then that $f_{31\mn2;r}(n;j)$ is a polynomial of degree at most $2r+1$. So, by the principle of induction on $r$, we get that $f_{31\mn2;r}(n;j)$ is a polynomial of degree at most $2r+1$ with coefficients in $\mathcal{Q}$ for all $r\geq 0$. Hence, since $f_{31\mn2;r}(n)=\sum\limits_{j=1}^n f_{31\mn2;r}(n;j)$, we get the desired result. As an application of Theorem  \[th15\], using the initial values of the sequence $f_{31\mn2;r}(n)$, we have exact formulas for $f_{31\mn2;r}(n)$ where $r=0,1,2,3$. For all $n\geq 0$, $$\begin{array}{l} f_{31\mn2;0}(n)=1+n(n-1)/2;\\ f_{31\mn2;1}(n)=n(n-1)(n-2)(3n-5)/24;\\ f_{31\mn2;2}(n)=n(n-1)(n-2)(n-3)(5n^2-3n-38)/720;\\ f_{31\mn2;3}(n)=n(n-1)(n-2)(n-3)(n-4)(7n^3+10n^2+205n-1142)/40320. \end{array}$$ $\tau=32\mn1$ ------------- By the definitions it is easy to state the following: \[l152\] Let $n\geq 1$; then $$\begin{array}{l} f_{32\mn1;r}(n;n)=\sum\limits_{j=1}^{n-1}f_{32\mn1;r+1-j}(n-1;j),\\ f_{32\mn1;r}(n;i)=f_{32\mn1;r}(n-1;n-1)+\sum\limits_{j=1}^{i-1}f_{32\mn1;r+1-j}(n-1;j),\quad\mbox{for}\ 1\leq i\leq n-1. \end{array}$$ \[th15b\] Let $r$ be a nonnegative integer; then $f_{32\mn1;r}(n)$ is a polynomial of degree at most $r+1$ with coefficients in $\mathcal{Q}$, for all $n\geq r+2$. Using Lemma \[l152\] for $r=0$ we obtain, first, that $f_{32\mn1;0}(n;n)=1$ and $f_{32\mn1;0}(n;1)=1$, and second that $f_{32\mn1;0}(n;j)=2$ for all $2\leq j\leq n-1$.
Hence, for all $n\geq 2$, $$f_{32\mn1;0}(n)=2n-2.$$ Let $n\geq r+2$ and let us assume that $f_{32\mn1;d}(n;j)$ is a polynomial of degree at most $d$ with coefficients in $\mathcal{Q}$ for all $1\leq j\leq n$, where $d=0,1,2,\dots,r-1$. Lemma \[l152\] together with the induction hypothesis implies, first, that $f_{32\mn1;r}(n;n)$ and $f_{32\mn1;r}(n;1)$ are polynomials of degree at most $r$ with coefficients in $\mathcal{Q}$, and then that $f_{32\mn1;r}(n;j)$, where $2\leq j\leq n-1$, is a polynomial of degree at most $r$ with coefficients in $\mathcal{Q}$. So, by the principle of induction on $r$, we get that $f_{32\mn1;r}(n;j)$ is a polynomial of degree at most $r$ with coefficients in $\mathcal{Q}$ for all $r\geq 0$. Hence, since $f_{32\mn1;r}(n)=\sum\limits_{j=1}^n f_{32\mn1;r}(n;j)$, we get the desired result. As an application of Theorem  \[th15b\], using the initial values of the sequence $f_{32\mn1;r}(n)$, we get exact formulas for $f_{32\mn1;r}(n)$ for $r=0,1,2,3$. $$\begin{array}{ll} \mbox{For all}\ n\geq 2,\ & f_{32\mn1;0}(n)=2n-2;\\ \mbox{For all}\ n\geq 3,\ & f_{32\mn1;1}(n)=(n-3)(2n-1);\\ \mbox{For all}\ n\geq 4,\ & f_{32\mn1;2}(n)=(n-4)(n^2-3n+1);\\ \mbox{For all}\ n\geq 5,\ & f_{32\mn1;3}(n)=(n-5)(2n^3-13n^2+47n-6)/6. \end{array}$$ Avoiding $13\mn2$ ================= Let $g_{\tau;r}(n)$ be the number of all permutations in $S_n(13\mn2)$ containing $\tau$ exactly $r$ times. We denote the corresponding ordinary generating function by $G_{\tau;r}(x)$; that is, $G_{\tau;r}(x)=\sum\limits_{n\geq0}g_{\tau;r}(n)x^n$. We extend the above definitions to $r<0$ by setting $g_{\tau;r}(n)=0$ for any $\tau$. In the current section, our present aim is to count the number of permutations avoiding $13\mn2$ and containing $\tau$ exactly $r$ times, where $\tau$ is a generalized pattern of type $(2,1)$; to that end we introduce another notation.
Let $g_{\tau;r}(n; i_1,i_2,\ldots,i_j)$ be the number of permutations $\pi\in S_n(13\mn2)$ containing $\tau$ exactly $r$ times such that $\pi_1\pi_2\dots\pi_j=i_1i_2\dots i_j$. The main body of this section is divided into three subsections corresponding to the cases where $\tau$ is $12\mn3$; $21\mn3$; and $23\mn1$, $31\mn2$, or $32\mn1$. $\tau=12\mn3$ ------------- \[th21\] Let $r$ be any nonnegative integer; then there exist polynomials $p_r(n)$ and $q_{r-1}(n)$ of degree at most $r$ and $r-1$ respectively, with coefficients in $\mathcal{Q}$, such that for all $n\geq 1$, $$g_{12\mn3;r}(n)=p_r(n)\cdot 2^n+q_{r-1}(n).$$ Let $r\geq 0$, and let $1\leq i\neq j\leq n$. If $i<j$, then since the permutations avoid $13\mn2$ we have $g_{12\mn3;r}(n;i,j)=0$ for $i+2\leq j\leq n$ and $g_{12\mn3;r}(n;i,i+1)=g_{12\mn3;r-(n-1-i)}(n-1;i)$, hence $$\sum\limits_{j=i+1}^n g_{12\mn3;r}(n;i,j)=g_{12\mn3;r-(n-1-i)}(n-1;i).$$ If $i>j$ then by the definitions we have $$g_{12\mn3;r}(n;i,j)= g_{12\mn3;r}(n-1;j).$$ Owing to the definitions we have shown that, for all $1\leq i\leq n-2$, $$g_{12\mn3;r}(n;i)=g_{12\mn3;r-(n-1-i)}(n-1;i)+\sum\limits_{j=1}^{i-1}g_{12\mn3;r}(n-1;j).\eqno(1')$$ Moreover, it is plain that $$g_{12\mn3;r}(n;n)=g_{12\mn3;r}(n;n-1)=g_{12\mn3;r}(n-1),\eqno(2')$$ and for all $1\leq j\leq n-r-2$ $$g_{12\mn3;r}(n;j)=0.\eqno(3')$$ Now we are ready to prove the theorem. Let $r=0$; by Equation (3’) we get $g_{12\mn3;0}(n;j)=0$ for all $j\leq n-2$ and by Equation (2’) we have $g_{12\mn3;0}(n;n-1)=g_{12\mn3;0}(n;n)=g_{12\mn3;0}(n-1)$, so $g_{12\mn3;0}(n)=2^{n-1}$. Therefore, the theorem holds for $r=0$.
Let $r\geq 1$, and let us assume that for all $0\leq m\leq s-1$ and all $0\leq s\leq r-1$ there exist polynomials $p_{m}(n)$ and $q_{m-1}(n)$ of degree at most $m$ and $m-1$ respectively, with coefficients in $\mathcal{Q}$, such that $g_{12\mn3;s}(n;n-s-1+m)=p_{m}(n)2^n+q_{m-1}(n)$, and that there exist polynomials $v_s(n)$ and $u_{s-1}(n)$ of degree at most $s$ and $s-1$ respectively, with coefficients in $\mathcal{Q}$, such that $g_{12\mn3;s}(n-m)=v_s(n)2^n+u_{s-1}(n)$ where $m=0,1$. So, using Equation (1’) for $m=0,1,\dots,r-1$, the induction hypothesis implies that there exist polynomials $a_{m}(n)$ and $b_{m-1}(n)$ of degree at most $m$ and $m-1$ respectively, with coefficients in $\mathcal{Q}$, such that $$g_{12\mn3;r}(n;n-r-1+m)=a_{m}(n)2^n+b_{m-1}(n).$$ Besides, owing to Equations (1’), (2’), and  (3’) we have shown that $$g_{12\mn3;r}(n)=2g_{12\mn3;r}(n-1)+\sum\limits_{j=2}^{r+1}g_{12\mn3;r}(n;n-j),$$ which means that $g_{12\mn3;r}(n)$ is given by $v_r(n)2^n+u_{r-1}(n)$ and $g_{12\mn3;r}(n;n)=g_{12\mn3;r}(n;n-1)=g_{12\mn3;r}(n-1)$. Therefore, the statement holds for $s=r$. Hence, by the principle of induction on $r$, the theorem holds. As an application of Theorem \[th21\], using the initial values of the sequence $g_{12\mn3;r}(n)$, we obtain exact formulas for $g_{12\mn3;r}(n)$ for $r=0,1,2,3$. For all $n\geq 1$; $$\begin{array}{l} g_{12\mn3;0}(n)=2^{n-1};\\ g_{12\mn3;1}(n)=(n-3)2^{n-2}+1;\\ g_{12\mn3;2}(n)=(n^2-11n+34)2^{n-4}-n-2;\\ g_{12\mn3;3}(n)=1/3(n^3-24n^2+257n-954)2^{n-5}+n^2+4n+10. \end{array}$$ $\tau=21\mn3$ ------------- \[th22\] Let $r$ be any nonnegative integer.
Then there exists a polynomial $p_r(n)$ of degree at most $r$ with coefficients in $\mathcal{Q}$, such that for all $n\geq r$ $$g_{21\mn3;r}(n)=p_r(n)\cdot 2^n.$$ By the definitions it is easy to state \[l221\] Let $n\geq 1$; then $$\begin{array}{l} g_{21\mn3;r}(n)=g_{21\mn3;r}(n-1)+\sum\limits_{i=1}^{n-1}g_{21\mn3;r}(n;i),\\ g_{21\mn3;r}(n;i)=g_{21\mn3;r}(n-1;i)+\sum\limits_{j=1}^{i-1}g_{21\mn3;r-(n-i)}(n-1;j),\quad\mbox{for}\ 1\leq i\leq n-1. \end{array}$$ For $r=0$, Lemma \[l221\] implies the following. First $g_{21\mn3;0}(n;m)=2^{m-2}$ for all $m\geq 2$, and second $g_{21\mn3;0}(n;1)=1$. Hence $g_{21\mn3;0}(n)=2^{n-1}$, so the theorem holds for $r=0$. Let $r\geq1$ and assume that for $2\leq m\leq n-1$ the expression $\sum\limits_{j=1}^m g_{21\mn3;d}(n;j)$ is given by $q_d^m(n)2^m$, where $q_{d}^m(n)$ is a polynomial of degree at most $d$ with coefficients in $\mathcal{Q}$, for all $0\leq d\leq r-1$. Lemma \[l221\] yields $$\begin{array}{ll} g_{21\mn3;r}(n;1)&=g_{21\mn3;r}(n-1;1),\\ g_{21\mn3;r}(n;2)&=g_{21\mn3;r}(n-1;2),\\ \qquad\qquad\qquad\qquad\vdots\\ g_{21\mn3;r}(n;n-r-1)&=g_{21\mn3;r}(n-1;n-r-1),\\ g_{21\mn3;r}(n;n-r+1)&=g_{21\mn3;r}(n-1;n-r+1)+\sum\limits_{j=1}^{n-r}g_{21\mn3;1}(n-1;j),\\ \qquad\qquad\qquad\qquad\vdots\\ g_{21\mn3;r}(n;n-1)&=g_{21\mn3;r}(n-1;n-1)+\sum\limits_{j=1}^{n-2}g_{21\mn3;r-1}(n-1;j),\\ g_{21\mn3;r}(n;n)&=g_{21\mn3;r}(n-1), \end{array}$$ which together with the induction hypothesis implies for $2\leq m\leq n-1$ $$\sum\limits_{j=1}^m g_{21\mn3;r}(n;j)=\sum\limits_{j=1}^mg_{21\mn3;r}(n-1;j)+q_{r-1}^m(n)2^m,$$ where $q_{r-1}^m(n)$ is a polynomial of degree at most $r-1$ with coefficients in $\mathcal{Q}$. Therefore, for $2\leq m\leq n-1$, $\sum\limits_{j=1}^mg_{21\mn3;r}(n;j)$ can be expressed by $q_r^m(n)2^m$ where $q_r^m(n)$ is a polynomial of degree at most $r$ with coefficients in $\mathcal{Q}$.
Hence, using Lemma \[l221\] we get that there exists a polynomial $a_r(n)$ of degree at most $r$ with coefficients in $\mathcal{Q}$ such that $$g_{21\mn3;r}(n)=g_{21\mn3;r}(n-1)+a_r(n)2^n,$$ so the theorem holds. Using Theorem \[th22\] with the initial values of the sequence $g_{21\mn3;r}(n)$ for $r=0,1,2,3$ we get $\begin{array}{lll} {\rm (i)} &\mbox{For all}\ n\geq1, &g_{21\mn3;0}(n)=2^{n-1};\\ {\rm (ii)} &\mbox{For all}\ n\geq2, &g_{21\mn3;1}(n)=(n-2)2^{n-3};\\ {\rm (iii)} &\mbox{For all}\ n\geq3,& g_{21\mn3;2}(n)=(n^2+n-12)2^{n-6};\\ {\rm (iv)} & \mbox{For all}\ n\geq4,& g_{21\mn3;3}(n)=1/3(n-4)(n^2+13n+6)2^{n-8}.\\ \end{array}$ $\tau=23\mn1$, $\tau=31\mn2$, or $\tau=32\mn1$ ---------------------------------------------- Similarly, the argument of the proof of Theorem \[th22\], combined with the principle of induction, yields \[th23\] Let $r$ be any nonnegative integer. Then [(i)]{} there exists a polynomial $p_{r-1}(n)$ of degree at most $r-1$ with coefficients in $\mathcal{Q}$ and a constant $c$, such that for all $n\geq r$ $$g_{23\mn1;r}(n)=c\cdot2^n+p_{r-1}(n).$$ [(ii)]{} there exist polynomials $p_r(n)$ and $q_{2r-2}(n)$ of degree at most $r$ and $2r-2$ respectively, with coefficients in $\mathcal{Q}$, such that for all $n\geq 1$ $$g_{31\mn2;r}(n)=p_r(n)2^n+q_{2r-2}(n).$$ [(iii)]{} there exists a polynomial $p_{r+2}(n)$ of degree at most $r+2$ with coefficients in $\mathcal{Q}$ such that for all $n\geq r$ $$g_{32\mn1;r}(n)=p_{r+2}(n).$$ Using Theorem \[th23\] with the initial values of the sequences $g_{23\mn1;r}(n)$, $g_{31\mn2;r}(n)$ and $g_{32\mn1;r}(n)$ for $r=0,1,2,3,4$ we get the following: $\begin{array}{lll} {\rm (i)} & \mbox{For all}\ n\geq1,& g_{23\mn1;0}(n)=2^{n-1};\\ {\rm (ii)} & \mbox{For all}\ n\geq2,& g_{23\mn1;1}(n)=2^{n-2}-1;\\ {\rm (iii)}& \mbox{For all}\ n\geq3,& g_{23\mn1;2}(n)=2^{n-1}-n-1;\\ {\rm (iv)} & \mbox{For all}\ n\geq4,& g_{23\mn1;3}(n)=5\cdot2^{n-3}-1/2(n^2-n+8);\\ {\rm (v)} & \mbox{For all}\ n\geq5,&
g_{23\mn1;4}(n)=2^{n}-1/6(n+1)(n^2-4n+24). \end{array}$ For all $n\geq 1$; $\begin{array}{ll} {\rm (i)} & g_{31\mn2;0}(n)=2^{n-1};\\ {\rm (ii)} & g_{31\mn2;1}(n)=(n-3)2^{n-2}+1;\\ {\rm (iii)} & g_{31\mn2;2}(n)=(n^2-3n-14)2^{n-4}+1/2(n^2+n+12);\\ {\rm (iv)} & g_{31\mn2;3}(n)=1/3(n^3-55n-90)2^{n-5}+1/12(n^4+11n^2+12n+12). \end{array}$ For all $n\geq 1$; $\begin{array}{ll} {\rm (i)} & g_{32\mn1;0}(n)=1/2n(n-1)+1;\\ {\rm (ii)} & g_{32\mn1;1}(n)=1/6(n-1)(n-2)(2n-3);\\ {\rm (iii)} & g_{32\mn1;2}(n)=1/6(n-2)(n-3)(2n-5);\\ {\rm (iv)} & g_{32\mn1;3}(n)=1/8(n-3)(n^3-3n^2-10n+32);\\ {\rm (v)} & g_{32\mn1;4}(n)=1/24(n-4)(3n^3-10n^2-55n+198). \end{array}$ Avoiding $21\mn3$ ================= Let $h_{\tau;r}(n)$ be the number of all permutations in $S_n(21\mn3)$ containing $\tau$ exactly $r$ times. The corresponding exponential and ordinary generating functions are denoted by $\mathcal{H}_{\tau;r}(x)$ and $H_{\tau;r}(x)$ respectively; that is, $\mathcal{H}_{\tau;r}(x)=\sum\limits_{n\geq0}\frac{h_{\tau;r}(n)}{n!}x^n$ and $H_{\tau;r}(x)=\sum\limits_{n\geq0}h_{\tau;r}(n)x^n$. The above definitions are extended to $r<0$ by setting $h_{\tau;r}(n)=0$ for any $\tau$. In the current section, our present aim is to count the number of permutations avoiding $21\mn3$ and containing $\tau$ exactly $r$ times, where $\tau$ is a generalized pattern of type $(2,1)$; to that end we introduce another notation. Let $h_{\tau;r}(n; i_1,i_2,\ldots,i_j)$ be the number of permutations $\pi\in S_n(21\mn3)$ containing $\tau$ exactly $r$ times such that $\pi_1\pi_2\dots\pi_j=i_1i_2\dots i_j$. The main body of the current section is divided into five subsections corresponding to the cases where $\tau$ is a general generalized pattern; $12\mn3$; $13\mn2$ or $31\mn2$; $23\mn1$; and $32\mn1$.
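As in the previous sections, the quantities $h_{\tau;r}(n)$ can be tabulated by brute force for small $n$. The sketch below (our own illustration) computes $h_{12\mn3;0}(n)$ and compares it with the number of involutions in $S_n$, matching the equality between these two counting sequences discussed in this section.

```python
from itertools import permutations

def avoids_21_3(p):
    """True if p has no occurrence of 21-3: p[i+1] < p[i] < p[j], j > i+1."""
    n = len(p)
    return not any(p[i + 1] < p[i] < p[j]
                   for i in range(n - 2) for j in range(i + 2, n))

def occ_12_3(p):
    """Occurrences of 12-3: p[i] < p[i+1] < p[j] with j > i + 1."""
    n = len(p)
    return sum(1 for i in range(n - 2) for j in range(i + 2, n)
               if p[i] < p[i + 1] < p[j])

def h_12_3(r, n):
    """h_{12-3;r}(n): 21-3-avoiders of length n with exactly r occurrences of 12-3."""
    return sum(1 for p in permutations(range(1, n + 1))
               if avoids_21_3(p) and occ_12_3(p) == r)

def involutions(n):
    return sum(1 for p in permutations(range(n))
               if all(p[p[i]] == i for i in range(n)))

print([h_12_3(0, n) for n in range(7)])    # -> [1, 1, 2, 4, 10, 26, 76]
print([involutions(n) for n in range(7)])  # -> [1, 1, 2, 4, 10, 26, 76]
```

Both sequences satisfy the recurrence $a(n)=a(n-1)+(n-1)a(n-2)$, which is exactly the $r=0$ case of Theorem \[th35\].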
$\tau$ is a general generalized pattern --------------------------------------- Here we study certain cases of $\tau$, where $\tau$ is a generalized pattern of length $k$ without dashes, or with exactly one dash.\ First of all let us define a bijection $\Phi$ between the set $S_n(12\mn3)$ and the set $S_n(21\mn3)$ as follows. Let $\pi=(\pi',n,\pi'')$, where $n$ is the maximal element of $\pi$, be any $12\mn3$-avoiding permutation of $n$ elements; we define by induction $$\Phi(\pi)=(R(\pi'),n,\Phi(\pi'')),$$ where $R(\pi')$ is the reversal of $\pi'$. Since $\pi$ is a $12\mn3$-avoiding permutation and $\pi_j=n$, we have $\pi_1>\dots>\pi_{j-1}$, so by induction on the length of $\pi$ we get that $\Phi(\pi)$ is a $21\mn3$-avoiding permutation. Also, it is easy to see by induction that $\Phi^{-1}=\Phi$, hence $\Phi$ is a bijection. \[th31\] For all $k\geq 1$, $$\mathcal{H}_{12\dots(k-1)k;0}(x)=\mathcal{F}_{(k-1)\dots21k;0}(x),\qquad \mathcal{H}_{12\dots(k-1)k;1}(x)=\mathcal{F}_{(k-1)\dots21k;1}(x).$$ Using the bijection $\Phi:S_n(12\mn3)\rightarrow S_n(21\mn3)$ we get the desired result: a permutation $\pi\in S_n(12\mn3)$ contains $(k-1)\dots1k$ exactly $r$ ($r=0,1$) times if and only if the permutation $\Phi(\pi)$ contains $12\dots(k-1)k$ exactly $r$ times. [(see Claesson [@C])]{}\[ex31\] Theorem \[th11\] and Theorem \[th31\] yield for $r=0$ that $$\mathcal{H}_{12\dots k;0}(x)=e^{\frac{x^1}{1!}+\frac{x^2}{2!}+\dots+\frac{x^{k-1}}{(k-1)!}}.$$ Letting $k\rightarrow\infty$, we get that the exponential generating function for the number of $21\mn3$-avoiding permutations in $S_n$ is given by $e^{e^x-1}$. In [@C; @CM1] it was proved that the number of permutations in $S_n(21\mn3)$ avoiding $12\mn3$ equals the number of involutions in $S_n$. The case of varying $k$ is more interesting. The following results extend those above; the proofs follow immediately from the bijection $\Phi$.
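The bijection $\Phi$ can be implemented directly from its recursive definition. The sketch below (illustrative Python, not part of the original paper) checks on small $n$ that $\Phi$ is an involution mapping $S_n(12\mn3)$ onto $S_n(21\mn3)$:

```python
from itertools import permutations

def occ(p, pat):
    """Occurrences of a type-(2,1) pattern xy-z at positions i, i+1, j (j > i+1)."""
    def ranks(t):
        s = sorted(t)
        return tuple(s.index(v) + 1 for v in t)
    m = len(p)
    return sum(1 for i in range(m - 2) for j in range(i + 2, m)
               if ranks((p[i], p[i + 1], p[j])) == pat)

def phi(p):
    """Phi((p', n, p'')) = (R(p'), n, Phi(p'')), where n = max(p)."""
    if not p:
        return p
    m = p.index(max(p))
    return p[:m][::-1] + (p[m],) + phi(p[m + 1:])

n = 6
s_12_3 = [p for p in permutations(range(1, n + 1)) if occ(p, (1, 2, 3)) == 0]
s_21_3 = [p for p in permutations(range(1, n + 1)) if occ(p, (2, 1, 3)) == 0]
image = sorted(phi(p) for p in s_12_3)
```

The same enumerator also allows the pattern-preservation claims in this subsection to be verified numerically on small cases.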
\[th32\] For $k\geq 1$, $$\begin{array}{l} \mathcal{H}_{12\dots(k-1)\mn k;0}(x)=e^{\frac{x^1}{1!}+\dots+\frac{x^{k-1}}{(k-1)!}};\\ \mathcal{H}_{12\dots(k-2)k\mn(k-1);0}(x)=\mathcal{H}_{12\dots(k-2)\mn k\mn(k-1);0}(x) =\mathcal{F}_{(k-2)\dots21k\mn(k-1);0}(x). \end{array}$$ Using the bijection $\Phi$ we easily get further results as follows. \(i) The number of permutations in $S_n$ containing $12\mn3$ exactly once equals the number of permutations containing $21\mn3$ exactly once; \(ii) The number of permutations in $S_n$ containing $12\mn3$ once and containing $(k-1)\dots21\mn k$ (resp. $(k-1)\dots21k$) exactly $r=0,1$ times equals the number of permutations in $S_n$ containing $21\mn3$ once and containing $12\dots(k-1)\mn k$ (resp. $12\dots(k-1)k$) exactly $r=0,1$ times. $\tau=12\mn3$ ------------- \[th35\] For all $n\geq 1$, $$\begin{array}{l} h_{12\mn3;0}(n)=h_{12\mn3;0}(n-1)+(n-1)h_{12\mn3;0}(n-2);\\ \ \\ h_{12\mn3;1}(n)=h_{12\mn3;1}(n-1)+(n-1)h_{12\mn3;1}(n-2)+(n-2)h_{12\mn3;0}(n-3);\\ \ \\ h_{12\mn3;2}(n)=h_{12\mn3;2}(n-1)+(n-1)h_{12\mn3;2}(n-2)+(n-2)h_{12\mn3;1}(n-3)+\\ \qquad\qquad+(n-3)h_{12\mn3;0}(n-3). \end{array}$$ Immediately, the definitions yield the following statement: \[l351\] Let $n\geq 1$; then $$\begin{array}{l} h_{12\mn3;r}(n)=h_{12\mn3;r}(n-1)+\sum\limits_{j=1}^{n-1}h_{12\mn3;r}(n;j),\\ h_{12\mn3;r}(n;j)=h_{12\mn3;r}(n-2)+\sum\limits_{i=j}^{n-1}h_{12\mn3;r-(n-i-1)}(n-1;i),\quad\mbox{for}\ 1\leq j\leq n-1.
\end{array}$$ Case $r=0$: Lemma \[l351\] yields $h_{12\mn3;0}(n;j)=h_{12\mn3;0}(n-2)$ for all $1\leq j\leq n-1$, hence $$h_{12\mn3;0}(n)=h_{12\mn3;0}(n-1)+(n-1)h_{12\mn3;0}(n-2).$$ Case $r=1$: Lemma \[l351\] yields $$h_{12\mn3;1}(n;j)=h_{12\mn3;1}(n-2)+h_{12\mn3;0}(n-1;n-2)$$ for all $1\leq j\leq n-2$, and $h_{12\mn3;1}(n;n-1)=h_{12\mn3;1}(n-2)$, which means that $$h_{12\mn3;1}(n)=h_{12\mn3;1}(n-1)+(n-1)h_{12\mn3;1}(n-2)+(n-2)h_{12\mn3;0}(n-3).$$ Case $r=2$: similar to the cases above. $\tau=13\mn2$ ------------- \[th32a\] Let $r$ be a nonnegative integer. Then $$H_{13\mn2;r}(x)=\frac{1-x}{1-2x}\delta_{r,0}+\frac{x^2}{1-2x}\sum\limits_{d=1}^r \frac{H_{13\mn2;r-d}(x)-\sum\limits_{j=0}^{d-1}h_{13\mn2;r-d}(j)x^j}{(1-x)^d}.$$ The definitions imply $h_{13\mn2;r}(n;n)=h_{13\mn2;r}(n-1)$, and for $1\leq j\leq n-1$, $$h_{13\mn2;r}(n;j)=\sum\limits_{i=j}^{n-1} h_{13\mn2;r+i-j}(n-1;j).$$ By induction it is easy to obtain, for $1\leq m\leq n-1$, $$h_{13\mn2;r}(n;n-m)=\sum\limits_{j=0}^{m-1}\binom{m-1}{j}h_{13\mn2;r-j}(n-1-m+j).$$ Now, summing $h_{13\mn2;r}(n;n-m)$ over all $0\leq m\leq n-1$, we get $$h_{13\mn2;r}(n)=h_{13\mn2;r}(n-1)+\sum\limits_{d=0}^r\sum\limits_{j=0}^{n-2-d}\binom{n-2-j}{d}h_{13\mn2;r-d}(j+d).$$ To obtain the desired result, we translate the last equation into ordinary generating functions using [@CM2, Lem. 7]. In view of Theorem \[th12\] and Theorem \[th32a\] we have: the number of permutations in $S_n(12\mn3)$ containing $13\mn2$ exactly $r$ times equals the number of permutations in $S_n(21\mn3)$ containing $13\mn2$ exactly $r$ times. To verify this by a bijective combinatorial proof, let $\pi$ be any $12\mn3$-avoiding permutation; it is easy to see that $\pi=(\pi_1,\dots,\pi_{j-1},n,\pi')$ where $\pi_1>\dots>\pi_{j-1}$, so the number of occurrences of $13\mn2$ in $\pi$ is given by $N:=n-1-\pi_{j-1}-(j-2)+N'$, where $N'$ is the number of occurrences of $13\mn2$ in $\pi'$.
On the other hand, let $\beta=\Phi(\pi)$; by the definition of $\Phi$ together with the induction hypothesis (induction on the length of $\pi$) we get that $\beta$ contains the same number $N$ of occurrences of $13\mn2$. Hence, by induction, the bijection $\Phi$ preserves the number of occurrences of $13\mn2$, and $H_{13\mn2;r}(x)=F_{13\mn2;r}(x)$ for all $r\geq 0$. $\tau=23\mn1$ ------------- \[th36\] Let $r$ be any nonnegative integer. Then there exists a polynomial $p_{r}(n)$ of degree at most $r$ with coefficients in $\mathcal{Q}$ such that for all $n\geq r$ $$h_{23\mn1;r}(n)=p_{r}(n)2^n.$$ Immediately, the definitions give \[l361\] Let $n\geq 1$; then $$\begin{array}{l} h_{23\mn1;r}(n)=h_{23\mn1;r}(n-1)+\sum\limits_{j=1}^{n-1}h_{23\mn1;r}(n;j),\\ h_{23\mn1;r}(n;j)=\sum\limits_{i=j}^{n-1}h_{23\mn1;r-(j-1)}(n-1;i),\quad\mbox{for}\ 1\leq j\leq n-1. \end{array}$$ Hence, $h_{23\mn1;r}(n;n)=h_{23\mn1;r}(n;1)=h_{23\mn1;r}(n-1)$, $h_{23\mn1;r}(n;j)=0$ for all $r+2\leq j\leq n-1$, and $$h_{23\mn1;r}(n;j)=h_{23\mn1;r}(n;j-1)-h_{23\mn1;r+1-j}(n-1;j-1)$$ for all $2\leq j\leq r+1$. Assume, for all $0\leq d\leq r-1$, that $h_{23\mn1;d}(n)$ can be expressed as $p_d(n)2^n$ and that $h_{23\mn1;d}(n;j)$ can be expressed as $p_{d-1}(n)2^n$ for $2\leq j\leq r+1$. The statement is trivial for $r=0$, and induction together with the above relations immediately gives the desired result. Theorem \[th36\] with the initial values of the sequences $h_{23\mn1;r}(n)$ for $r=0,1,2,3$ yields $$\begin{array}{l} {\rm (i)}\ \mbox{For all}\ n\geq1,\ \ h_{23\mn1;0}(n)=2^{n-1};\\ {\rm (ii)}\ \mbox{For all}\ n\geq2,\ \ h_{23\mn1;1}(n)=(n-2)2^{n-3};\\ {\rm (iii)}\ \mbox{For all}\ n\geq3,\ \ h_{23\mn1;2}(n)=(n-3)(n+8)2^{n-6};\\ {\rm (iv)}\ \mbox{For all}\ n\geq4,\ \ h_{23\mn1;3}(n)=1/3(n-4)(n^2+25n+42)2^{n-8}. \end{array}$$ $\tau=31\mn2$ ------------- \[th32b\] Let $r$ be a nonnegative integer.
Then $$H_{31\mn2;r}(x)=\frac{1-x}{1-2x}\delta_{r,0}+\frac{x^2}{1-2x}\sum\limits_{d=1}^r \frac{H_{31\mn2;r-d}(x)-\sum\limits_{j=0}^{d-1}h_{31\mn2;r-d}(j)x^j}{(1-x)^d}.$$ The definitions imply $h_{31\mn2;r}(n;1)=h_{31\mn2;r}(n-1)$, and for $2\leq j\leq n-1$, $$h_{31\mn2;r}(n;j)=\sum\limits_{i=j}^{n-1}h_{31\mn2;r}(n-1;i)=h_{31\mn2;r}(n-1)-\sum\limits_{i=1}^{j-1}h_{31\mn2;r}(n-1;i).$$ By induction it is easy to obtain, for $1\leq m\leq n-1$, $$h_{31\mn2;r}(n;m)=\sum\limits_{j=0}^{m-1}(-1)^j\binom{m-1}{j}h_{31\mn2;r}(n-1-j).$$ Arguing as in Theorem \[th12\] (or Theorem \[th32a\]), using the above equation together with the identity (easily checked from the definitions) $$h_{31\mn2;r}(n;n)=\sum\limits_{j=0}^r h_{31\mn2;r-j}(n-1;n-1-j),$$ we get the desired result. Again, we have $H_{31\mn2;r}(x)=F_{13\mn2;r}(x)$ for all $r\geq 0$. But here we failed to find a combinatorial explanation of the fact that the number of permutations in $S_n(12\mn3)$ containing $13\mn2$ exactly $r$ times equals the number of permutations in $S_n(21\mn3)$ containing $31\mn2$ exactly $r$ times. $\tau=32\mn1$ ------------- \[th37\] For all $n\geq 1$ and $r\geq 0$, $$\begin{array}{l} h_{32\mn1;r}(n)=\sum\limits_{j=0}^{n-2}(-1)^j\binom{n-1}{j+1}h_{32\mn1;r}(n-1-j)+\\ \qquad\qquad+\sum\limits_{j=1}^{r+1}\sum\limits_{i=0}^{j-1}(-1)^i\binom{j-1}{i}h_{32\mn1;r+1-j}(n-2-i).
\end{array}$$ Immediately, the definitions yield \[l371\] Let $n\geq 1$; then $$\begin{array}{l} h_{32\mn1;r}(n)=h_{32\mn1;r}(n-1)+\sum\limits_{j=2}^{n}h_{32\mn1;r}(n;j),\\ h_{32\mn1;r}(n;j)=h_{32\mn1;r}(n-1)-\sum\limits_{i=1}^{j-1}h_{32\mn1;r}(n-1;i), \quad\mbox{for}\ 1\leq j\leq n-1, \end{array}$$ and $$h_{32\mn1;r}(n;n)=h_{32\mn1;r}(n-1;1)+h_{32\mn1;r-1}(n-1;2)+\dots+h_{32\mn1;0}(n-1;r+1).$$ By induction, using Lemma \[l371\], we get that for all $1\leq m\leq n-1$ $$h_{32\mn1;r}(n;m)=\sum\limits_{i=0}^{m-1}(-1)^i\binom{m-1}{i}h_{32\mn1;r}(n-1-i).$$ On the other hand, using the third equality of Lemma \[l371\] and then the first, we get the desired result. For example, Theorem \[th37\] with [@CM2, Lem. 7] (as in Example \[exxx1\]) yields exact formulas for $H_{32\mn1;r}(x)$ where $r=0,1$ (see Claesson and Mansour [@CM1] for the case $r=0$): $$\begin{array}{l} H_{32\mn1;0}(x)=\sum\limits_{k\geq0}\frac{x^{2k}}{p_{k-1}(x)p_{k+1}(x)};\\ H_{32\mn1;1}(x)=\sum\limits_{n\geq0}\left[\frac{x^2(1-(n+2)x)}{1-(n+1)x}\sum\limits_{k\geq0}\frac{x^{2(k+n)}}{p_{n+k}(x)p_{n+k+2}(x)}\right], \end{array}$$ where $p_d(x)=\prod_{j=0}^d(1-jx)$. Further results =============== The first possibility to extend the above results is to fix the numbers of occurrences of two generalized patterns of type $(2,1)$. For example, the number of permutations in $S_n$ containing $12\mn3$ exactly once and containing $13\mn2$ exactly once is given by $$(n^2-7n+14)2^{n-3}-2$$ for all $n\geq1$. As another example, the number of permutations in $S_n$ containing $12\mn3$ exactly twice and containing $13\mn2$ twice is given by $$(n^4-18n^3+163n^2-826n+1832)2^{n-7}-4n-14$$ for all $n\geq 1$. These results can be extended as follows.
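The first of the two closed forms above can be confirmed by direct enumeration. The brute-force sketch below (illustrative Python, not from the original paper) counts the permutations of $[n]$ by their numbers of occurrences of $12\mn3$ and $13\mn2$:

```python
from itertools import permutations

def occ(p, pat):
    """Occurrences of a type-(2,1) generalized pattern xy-z at
    positions i, i+1, j with j > i+1."""
    def ranks(t):
        s = sorted(t)
        return tuple(s.index(v) + 1 for v in t)
    m = len(p)
    return sum(1 for i in range(m - 2) for j in range(i + 2, m)
               if ranks((p[i], p[i + 1], p[j])) == pat)

def a(n, r, s):
    """Permutations of [n] with r occurrences of 12-3 and s occurrences of 13-2."""
    return sum(1 for p in permutations(range(1, n + 1))
               if occ(p, (1, 2, 3)) == r and occ(p, (1, 3, 2)) == s)
```

For $n\leq 7$ this reproduces $(n^2-7n+14)2^{n-3}-2$, i.e. $0,0,0,2,14,62,222$.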
\[ext1\] Let us denote the number of permutations in $S_n$ containing $12\mn3$ exactly $r$ times and containing $13\mn2$ exactly $s$ times by $a_n^{r,s}$; then there exist polynomials $p(n)$ and $q(n)$ of degrees at most $r+s+1-\delta_{r,0}-\delta_{s,0}$ and $r+s-\delta_{r,0}-\delta_{s,0}$, respectively, such that for all $n\geq 1$ $$a_n^{r,s}=p(n)2^n+q(n).$$ Another direction in which to extend the results of the above sections is to restrict by more than two patterns. For example, the number of permutations in $S_n(12\mn3,13\mn2,21\mn3)$ is given by the $(n+1)$th Fibonacci number (see [@CM1]). Again, this result can be extended as follows. \[ext2\] \ (i) The ordinary generating function for the number of permutations in $S_n(12\mn3$, $21\mn3)$ containing $13\mn2$ exactly $r\geq1$ times is given by $$\frac{x^2(1-x)^{r-1}}{(1-x-x^2)^{r+1}}$$ and for $r=0$ is given by $\frac{1}{1-x-x^2}$.\  \ (ii) The ordinary generating function for the number of permutations in $S_n(12\mn3$, $21\mn3)$ containing $23\mn1$ exactly $r\geq1$ times is given by $$\frac{x^2(1-x)^{r-1}}{(1-x-x^2)^{r+1}}$$ and for $r=0$ is given by $\frac{1}{1-x-x^2}$. Theorem \[ext2\] suggests that there should exist a bijection between the set of $\{12\mn3,21\mn3\}$-avoiding permutations containing $13\mn2$ exactly $r$ times and the set of $\{12\mn3,21\mn3\}$-avoiding permutations containing $23\mn1$ exactly $r$ times, for any $r\geq0$. However, we failed to produce such a bijection, and finding it remains a challenging open question. [99]{} , The permutation classes equinumerous to the smooth class, [*Electron. J. Combin.*]{} [**5**]{} (1998), \#R31. , Generalized permutation patterns and a classification of the Mahonian statistics, [*Séminaire Lotharingien de Combinatoire*]{}, B44b:18pp, (2000). , Generalised pattern avoidance, [*European Journal of Combinatorics*]{} [**22**]{} (2001), 961–973.
, Permutations avoiding a pair of generalized patterns of length three with exactly one dash, preprint CO/0107044. , Counting occurrences of a pattern of type $(1,2)$ or $(2,1)$ in permutations, [*Adv. in Appl. Math.*]{}, to appear (2002), preprint CO/0105073. , New [E]{}uler-[M]{}ahonian statistics on permutations and words, [*Adv. in Appl. Math.*]{} [**18**]{}(3) (1997), 237–270. , Forbidden subsequences and Chebyshev polynomials, [*Discr. Math.*]{} [**204**]{} (1999), 119–128. , Permutations with restricted patterns and Dyck paths, [*Adv. in Appl. Math.*]{} [**27**]{} (2001), 510–530. , Permutations with forbidden subsequences and a generalized Schröder number, [*Disc. Math.*]{} [**218**]{} (2000), 121–130. , Restricted permutations, continued fractions, and Chebyshev polynomials, [*The Electronic Journal of Combinatorics*]{} [**7**]{} (2000), \#R17. , Restricted 132-avoiding permutations, [*Adv. Appl. Math.*]{} [**26**]{} (2001), 258–269. , Layered restrictions and Chebychev polynomials, [*Annals of Combinatorics*]{}, to appear (2001), preprint CO/0008173. , Restricted permutations and Chebychev polynomials, [*Séminaire Lotharingien de Combinatoire*]{} [**47**]{} (2002), Article B47c. , Restricted permutations, [*European Journal of Combinatorics*]{} [**6**]{} (1985), 383–406. , Permutations containing and avoiding $123$ and $132$ patterns, [*Disc. Math. and Theo. Comp. Sci.*]{} [**3**]{} (1999), 151–154. , Permutation patterns and continued fractions, [*Electron. J. Combin.*]{} [**6**]{} (1999), \#R38. , Generating trees and forbidden subsequences, [*Disc. Math.*]{} [**157**]{} (1996), 363–372.
--- abstract: 'We report low resolution near infrared spectroscopic observations of the eruptive star FU Orionis using the Integral Field Spectrograph Project 1640 installed at the Palomar Hale telescope. This work focuses on elucidating the nature of the faint source, located $0.5''$ south of FU Ori, and identified in 2003 as FU Ori S. We first use our observations in conjunction with published data to demonstrate that the two stars are indeed physically associated and form a true binary pair. We then proceed to extract J and H band spectro-photometry using the damped LOCI algorithm, a reduction method tailored for high contrast science with an IFS. This is the first communication reporting the high accuracy of this technique, pioneered by the Project 1640 team, on a faint astronomical source. We use our low resolution near infrared spectrum in conjunction with $10.2$ micron interferometric data to constrain the infrared excess of FU Ori S. We then focus on estimating the bulk physical properties of FU Ori S. Our models lead to estimates of a heavily reddened object, $A_V =8-12$, with an effective temperature of $\sim$ 4000-6500 K. Finally we put these results in the context of the FU Ori N-S system and argue that our analysis provides evidence that FU Ori S might be the more massive component of this binary system.' author: - 'Laurent Pueyo, Lynne Hillenbrand, Gautam Vasisht, Ben R. Oppenheimer, John D. Monnier, Sasha Hinkley, Justin Crepp, Lewis C. Roberts Jr, Douglas Brenner, Neil Zimmerman, Ian Parry, Charles Beichman, Richard Dekany, Mike Shao, Rick Burruss, Eric Cady, Jenny Roberts, Rémi Soummer' title: Constraining mass ratio and extinction in the FU Orionis binary system with infrared integral field spectroscopy --- Introduction ============ FU Orionis is the prototype of a small class of rare, eruptive young stars.
“FUOr” outbursts are generally interpreted as episodes of significantly enhanced accretion in the early stages of disk evolution in young low-mass T Tauri stars. In 1936, the apparent magnitude of FU Orionis brightened by 6 magnitudes and the object has been slowly fading since then [@1966VA......8..109H; @1977ApJ...217..693H; @1985PAZh...11..846K; @1988ApJ...325..231K; @2005MNRAS.361..942C]. Only a few other sources have been identified as undergoing similar eruptive phenomena, with V1057 Cyg and V1515 Cyg [@1991ApJ...383..664K; @1975PASP...87..379L; @1977PASP...89..704L] serving, along with FU Orionis, as the classical examples. In the optical, the absorption spectra of FUOr objects resemble broadened F-G type supergiants along with strong wind/outflow signatures in Balmer and other lines (e.g. CaII, Na I, K I, Li I); and in the near-infrared they have spectral absorption features similar to broadened K-M supergiant atmospheres. They generally lack the rich emission-line spectrum that is characteristic of rapidly accreting young low-mass stars. FUOr objects also exhibit strong infrared through millimeter excesses, consistent with the presence of an accretion disk and/or a circumstellar envelope of dust. Based on these diagnostics, about a dozen additional sources have been identified as “FU Ori-like”, with a growing number of them known as wide binary systems (e.g. L1551 IRS5, @1998Natur.395..355R, RNO 1B/C [@1993AJ....105.1505K], AR 6A/B [@2003AJ....126.2936A]), including FU Ori [@2004ApJ...601L..83W]. For a more exhaustive description of the observational properties of FU Ori stars, we direct the reader towards the review papers by @2010vaoa.conf...19R. The accretion disk model for FUOr systems, and FU Orionis itself in particular, has been constrained using interferometric measurements and developed to a sophisticated level by @2007ApJ...669..483Z [@2010ApJ...713.1134Z].
These authors consider broad-band photometry, spectrophotometry, and high dispersion spectroscopy to derive a model size for the outbursting region of the inner disk extending outward to 0.5-1.0 AU, with an accretion rate of $\dot{M} = 2\times 10^{-4}~M_{\odot} \; yr^{-1}$ onto a central star of mass $0.3 M_{\odot}$. Interferometric measurements by @2011arXiv1106.1440E, building on those of @2006ApJ...641..547M, confirm the rotating accretion disk model for this source. Chandra characterization of FU Orionis unveiled a hot, heavily absorbed, variable emission component; the variability is thought to be the signature of coronal emission in the close vicinity of the star, seen through the accreting gas [@2010ApJ...722.1654S]. This interpretation seems consistent with the recently reported presence of a strong magnetic field in the innermost region of the accretion disk by @2005Natur.438..466D, and reinforces the explanation of the outburst which invokes a major increase in the surface brightness of an accretion disk because of a sudden increase in the accretion rate. An alternative to the accretion disk model for FUOrs is the hypothesis of a fast-rotating G-supergiant photosphere. @2003ApJ...595..384H also showed that the optical and infra-red spectroscopic features of fast-rotating G-supergiants can reproduce FUOr observations if the “boxy” line profile shapes can be attributed to core line emission rather than to a rotating disk. In an effort to reconcile both models, @2007AstL...33..755K recently proposed a modified accretion disk model with a puffed inner edge that is consistent with the HST/STIS spectrum of FU Orionis. @Hartmann2012 however fit the @2007ApJ...669..483Z disk model with $A_V = 1.5$ mag to the same UV spectrum, though @2007ApJ...669..483Z comment on the blue excess shortward of 2500-2600 Å.
Since the star/disk source is also embedded in a dust shell, extinction plays a crucial role in the spectral characterization of FU Orionis at short wavelengths, and @2007AstL...33..755K suggested using the known companion to FU Orionis as a reference to obtain a model-independent estimate of the optical extinction. FU Ori S, a stellar companion located $0.4''$ to the southwest of FU Ori, was first discovered by @2004ApJ...601L..83W using adaptive optics imaging at K-band. @2004ApJ...601L..83W argued, based on statistical arguments using J and K colors, that the discovered object was most likely gravitationally bound to FU Ori. Follow-up AO observations by [@2004ApJ...608L..65R] did not establish common proper motion due to the large uncertainties on the proper motion of FU Orionis (a result of the large distance to FU Ori, $\sim 450$ pc [@1977MNRAS.181..657M]) but provided further evidence of the young age of FU Ori S based on infrared excess inferred from JHK'L' colors. Spectral line diagnostics in the K-band suggest a late G or K spectral type if the inference of [@2004ApJ...608L..65R] is correct that Na I and Ca I are present in their spectrum while CO absorption is lacking. Speckle interferometry observations at $0.8$ micron by [@2008AstBu..63..357K] find $A_{V} =2.2$ mag towards FU Ori S, provided the star is of spectral type G9 or later. The temperature of the best-fitting (unreddened) blackbody to the K-band continuum was $\sim 2500$ K and, based on this, it was suggested that FU Ori S exhibits a considerable infrared excess. L-band photometry is also available from these authors. @2012AJ....143...55B recently reported high spectral resolution J, H, and K band ($R \sim 3000$) spectra of FU Ori S, obtained with the Gemini NIFS integral field spectrograph. Based on line diagnostics they showed that FU Ori S was a highly accreting K5-type star and suggested that this object might be the more massive component of the system.
N-band interferometric measurements by @2009ApJ...700..491M detected the companion, which confirms a substantial infrared excess, but no flux density was reported. [@2010ApJ...722.1654S] identified a centroid offset towards FU Ori S in the soft X-ray component of Chandra observations, and argued that the most likely explanation was that FU Ori S was a weak soft X-ray emitter. All of the evidence is consistent with FU Ori S being a young low-mass, perhaps K-type, T-Tauri star. Precision astrometry on follow-up AO observations of FU Ori, with a baseline longer than the time elapsed between the @2004ApJ...601L..83W and [@2004ApJ...608L..65R] epochs, can firmly establish whether or not FU Ori and FU Ori S are gravitationally bound. Obtaining a well-constrained SED of this object in these spectral regions would allow characterization of FU Ori S independently of FU Ori N. In particular, SED measurements covering the spectral region expected to be dominated by stellar (as opposed to circumstellar) flux can provide an extinction estimate. This measurement may or may not be applicable to both components of the binary (FU Ori N as well as FU Ori S) depending on the geometry of the circumstellar and circumbinary material. Independent knowledge of the extinction to FU Ori N is necessary to validate the findings based on accretion disk models [@2007AstL...33..755K; @2007ApJ...669..483Z] in which the extinction is either a derived parameter of the models or a required input to them. Understanding the extinction therefore provides helpful insights concerning the nature of the primary (FU Ori) star as well as the geometry of its surroundings, including the companion. In an effort to further elucidate the nature of FU Ori S by quantifying the spectral type and the reddening, and to assist in interpretation of the dust shell encircling the FU Orionis N/S system, we observed this source with the Project 1640 (P1640) Integral Field Spectrograph (IFS).
We first summarize our observations and data reduction, and establish the binary nature of the FU Orionis system based on our data. Then we reconstruct the $0.8$ to $10$ micron SED using previously published observations and discuss our photometric points in the context of the literature. Based on this reconstructed SED we seek to characterize the near infrared emission of FU Ori S. Our models lead to estimates of a heavily reddened object, A$_V$=8-12, with an effective temperature of $\sim$ 4000-6500 K. We furthermore quantify the amplitude of the infrared excess produced by circumstellar dust around FU Ori S, using our J and H SED as a photometric baseline for interferometric data published by [@2009ApJ...700..491M], reprocessed for the purpose of this paper. We finally put these results in the context of the FU Ori N-S system and argue that our analysis provides evidence that FU Ori S might be the hotter and therefore more massive component. Observations and data reduction =============================== Observations ------------ FU Ori was observed at Palomar on March 17, 2009 within the $\sim 4" \times 4"$ field of view of the P1640 Integral Field Spectrograph. The IFS design prioritizes, for a given field of view, fine spatial sampling over spectral resolution (when compared to other microlens-based IFS such as OSIRIS; @mcelwain-2007-656). This design provides the chromatic information necessary to discriminate faint point sources from optical artifacts [@2002ApJ...578..543S; @JustinSpeckle] in high contrast observations. Specifically, the IFS features a moderate spectral resolution ($R \sim 45$) and high spatial resolution ($\lambda/D=$45-72 mas). Details of the instrument can be found in [@2011PASP..123...74H]. The data consist of twelve exposures of 127 seconds with the primary star occulted (behind the coronagraphic mask) and two 2-second exposures with the primary star unobscured (off the coronagraphic mask).
Each exposure produces a set of $250 \times 250$ spectra on the infrared detector, each spectrum corresponding to the dispersed image of a micro-lens in the focal plane of the IFS. These raw data are then converted to a cube of $23$ image slices, each slice corresponding to the image of the focal plane microlens array at a given wavelength. The wavelength solution of the spectrograph is derived using a set of calibration frames obtained prior to and during the observing sequence [@NeilPipeline]. Determination of the spectrograph response in the particular case of FU Ori is detailed in §\[sec::SpectralCalibration\]. The FU Ori system is seen through an optical train consisting of the Palomar Adaptive Optics system (PALAO), the Apodised Pupil Lyot Coronagraph [@2009ApJ...695..695S] and the P1640 spectrograph [@2011PASP..123...74H]. The calibrated data (data cubes with speckles that have not been suppressed by post-processing), with slices integrated over the J and H bands, are shown in the top panels of Fig. \[fig::ImagesFuOri\]. We detect FU Ori S at a separation of $0.491'' \pm 0.007''$ and a position angle of $161.2^{\circ} \pm 1.1^{\circ}$ with respect to FU Ori N. The contrast sensitivity of the data reaches $\Delta M_H =6.5$ mag and $\Delta M_J = 6.2$ mag for companions at $0.5''$ separation, somewhat worse than typical P1640 contrast curves (e.g. @JustinSpeckle) due to the early stage of the data. While the companion FU Ori S can be identified in H band, it is as bright as the speckles in J and thus can only be distinguished from them using the chromatic diversity provided by the IFS. Recent upgrades of the Palomar high contrast near-infrared system, including a new Adaptive Optics system [@dekany:62720G] and an interferometric wavefront calibration system [@2010lyot.confE..51V; @wallace:74400S], will further reduce the speckle noise level.
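This chromatic diversity is what makes the speckle/companion separation possible: speckles are diffractive artifacts whose radial position scales with wavelength, while a true companion stays fixed on the detector. The toy model below (synthetic 1-D radial profiles, illustrative only — it is not the damped-LOCI least-squares method actually used by the P1640 pipeline) shows the underlying rescale-and-subtract idea:

```python
import numpy as np

def gauss(r, r0, sig=3.0):
    return np.exp(-0.5 * ((r - r0) / sig) ** 2)

lam0 = 1.25                                  # reference wavelength (micron)
lams = np.linspace(1.10, 1.75, 23)           # 23 channels, as in the P1640 cube
r = np.arange(200, dtype=float)              # radial coordinate (pixels)

# Synthetic radial cuts: one speckle whose radius scales as lambda,
# plus a faint companion fixed at r = 120 pixels.
cube = np.array([gauss(r, 60.0 * lam / lam0) + 0.3 * gauss(r, 120.0)
                 for lam in lams])

# Magnify each slice radially by lam0/lam: speckles now line up at r = 60,
# while the companion lands at a different radius in every slice.
rescaled = np.array([np.interp(r * lam / lam0, r, sl)
                     for sl, lam in zip(cube, lams)])
speckle_model = np.median(rescaled, axis=0)  # companion averages out

# Scale the model back to each slice's frame and subtract.
resid = cube - np.array([np.interp(r * lam0 / lam, r, speckle_model)
                         for lam in lams])
```

After the subtraction the speckle is strongly suppressed in every slice while the companion flux at its fixed radius survives, which is the diagnostic exploited by the reduction described next.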
Direct extraction of the FU Ori S spectrum using the raw slices that were integrated to produce the top panel of Fig. \[fig::ImagesFuOri\] leads to overestimating the companion’s spectro-photometry because of the speckles’ outward motion at the location of the object as the wavelength increases. To mitigate this effect we sought to improve the Signal-to-Noise Ratio (SNR) on FU Ori S using the P1640 speckle calibration pipeline presented in [@JustinSpeckle]. This method takes advantage of the chromatic diversity of the IFS [@2002ApJ...578..543S] and combines it with optimal Point Spread Function (PSF) subtraction algorithms (e.g. LOCI; @2007ApJ...660..770L). The processed images are shown in the bottom panels of Fig. \[fig::ImagesFuOri\] and demonstrate that our reduction approach considerably increases the SNR of FU Ori S. ![[*Top*]{}: band averaged P1640 images of the FU Orionis system; FU Ori N is occulted by the coronagraphic mask. [*Top Left*]{}: J band. [*Top Right*]{}: H band. FU Ori S can be identified in the raw H band images, but is as bright as the speckles in J band. Photometric estimates based on the raw IFS cube will overestimate the flux of FU Ori S because of speckle contamination. [*Bottom*]{}: reduced images; the detectability of FU Ori S is significantly enhanced.[]{data-label="fig::ImagesFuOri"}](fig1.pdf){width="8cm"} IFS spectral calibration {#sec::SpectralCalibration} ------------------------ Calibrating simultaneously the wavelength solution, the atmospheric dispersion and the dispersion intrinsic to the instrument with the low spectral resolution ($R \sim 45$) of P1640 is a delicate exercise. Indeed, since the telluric sky lines are averaged over each wavelength channel, they cannot be used as a reference to derive the wavelength solution [@2004PASP..116..362C]. We thus first calibrate the wavelength solution off-line, using a tunable laser source, and follow the procedure detailed in [@NeilPipeline].
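The flux calibration described next reduces to a division: the recorded spectrum is the true spectrum multiplied by a spectral response function, which can be derived from a star of known spectrum and divided out. The schematic below uses made-up numbers (none of these values come from the paper) purely to illustrate the arithmetic:

```python
import numpy as np

wav = np.linspace(1.10, 1.75, 23)            # channel centers (micron)

# Hypothetical smooth instrument + atmosphere response, and two spectra.
srf_true = 0.8 * np.exp(-0.5 * ((wav - 1.4) / 0.5) ** 2)
calib_true = 1.0 + 0.3 * wav                 # known spectrum of the calibrator
target_true = 2.0 - 0.5 * wav                # spectrum we want to recover

calib_obs = calib_true * srf_true            # what the detector records
target_obs = target_true * srf_true

srf = calib_obs / calib_true                 # derived response function
target_cal = target_obs / srf                # calibrated target spectrum
```

In practice the derived response is only valid when the calibrator is observed under conditions (notably airmass) close to those of the target, which is the point of the comparison carried out below.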
Moreover, the relatively small field of view of P1640 prevents us from simultaneously obtaining observations of a calibration star in a science exposure, and the presence of the coronagraph prevents us from using the primary star as the calibrator. We call the Spectral Response Function (SRF) the wavelength-dependent relationship between the spectrum of an astronomical source and its counterpart seen by the P1640 detector. In order to derive an accurate SRF, we use the non-coronagraphic images of FU Ori N that were acquired right before the coronagraphic observation of FU Ori S, combined with the published spectrum of FU Ori in @1996AJ....112.2184G. To do so we first need to establish that the J and H SED of FU Ori N has not varied since the 1994 epoch published by @1996AJ....112.2184G. We first derive three spectral response functions using three well-characterized stars ($HD104860$ of spectral type F8V, $HD87696$ / A7V, and $HD109011$ / K2V) that were observed the same night as FU Orionis. We then compare our observed spectrum of FU Ori N, normalized in turn by each of these three response functions. Fig. \[fig::SpectrumCalibrator\] illustrates this comparison and shows excellent agreement for the calibrator within $0.02$ airmass of FU Orionis (HD109011). From this agreement we conclude that the $R\sim 45$ P1640 J and H SED slopes of FU Ori N have not varied since 1994, and thus combine the data from @1996AJ....112.2184G with our non-coronagraphic images to derive the final spectral response function we use to characterize FU Ori S, as illustrated in the bottom right panel of Fig. \[fig::SpectrumCalibrator\]. Note that the effect of using a calibrator at an airmass that differs from the FU Orionis system is most severe in the blue and red ends of the SED and in the water absorption band between J and H. Since the airmass change occurring during our coronagraphic observing sequence is $0.02$ (i.e.
identical to the difference between the FU Ori and the HD109011 exposures, bottom left of Fig. \[fig::SpectrumCalibrator\]), it is very difficult to constrain the spectral calibration uncertainties in these regions of the SED. We thus chose to discard these points in our analysis of the SED of FU Ori S. ![Determination of the atmospheric and instrumental Spectral Response Function. [*Circles*]{}: P1640 spectrum of FU Ori N, obtained from non-coronagraphic images normalized using three different spectral response functions. [*Solid Line*]{}: IRTF spectrum of FU Ori N, from @1996AJ....112.2184G.
[*Top*]{}: SRF calculated using HD104860, chi-squared difference with the IRTF spectrum $12.5$ [*Second from top*]{}: SRF calculated using HD87696, chi-squared difference with the IRTF spectrum $11.2$ [*Second from bottom Left*]{}: SRF calculated using HD109011, chi-squared difference with the IRTF spectrum $4.6$, note that most of the mismatch resides in the water absorption bands. The P1640 spectrum exhibits excellent agreement with the IRTF spectrum when the air-mass of the calibrator star is within $0.02$ from FU Ori. We thus conclude that the SED of FU Orionis has not varied since the 1994 @1996AJ....112.2184G and use the non coronagraphic images of FU Ori N to derive the SRF. [*Bottom* ]{}: The final SFR is computed using FU Ori N as a reference star.[]{data-label="fig::SpectrumCalibrator"}](fig2a.pdf "fig:"){width="6cm"} ![Determination of the atmospheric and instrumental Spectral Response Function. [*Circles*]{}: P1640 spectrum of FU Ori N, obtained from non-coronagraphic images normalized using three different spectral response functions. [*Solid Line*]{}: IRTF spectrum of FU Ori N, from @1996AJ....112.2184G . [*Top*]{}: SRF calculated using HD104860, chi-squared difference with the IRTF spectrum $12.5$ [*Second from top*]{}: SRF calculated using HD87696, chi-squared difference with the IRTF spectrum $11.2$ [*Second from bottom Left*]{}: SRF calculated using HD109011, chi-squared difference with the IRTF spectrum $4.6$, note that most of the mismatch resides in the water absorption bands. The P1640 spectrum exhibits excellent agreement with the IRTF spectrum when the air-mass of the calibrator star is within $0.02$ from FU Ori. We thus conclude that the SED of FU Orionis has not varied since the 1994 @1996AJ....112.2184G and use the non coronagraphic images of FU Ori N to derive the SRF. 
[*Bottom* ]{}: The final SFR is computed using FU Ori N as a reference star.[]{data-label="fig::SpectrumCalibrator"}](fig2b.pdf "fig:"){width="6cm"} ![Determination of the atmospheric and instrumental Spectral Response Function. [*Circles*]{}: P1640 spectrum of FU Ori N, obtained from non-coronagraphic images normalized using three different spectral response functions. [*Solid Line*]{}: IRTF spectrum of FU Ori N, from @1996AJ....112.2184G . [*Top*]{}: SRF calculated using HD104860, chi-squared difference with the IRTF spectrum $12.5$ [*Second from top*]{}: SRF calculated using HD87696, chi-squared difference with the IRTF spectrum $11.2$ [*Second from bottom Left*]{}: SRF calculated using HD109011, chi-squared difference with the IRTF spectrum $4.6$, note that most of the mismatch resides in the water absorption bands. The P1640 spectrum exhibits excellent agreement with the IRTF spectrum when the air-mass of the calibrator star is within $0.02$ from FU Ori. We thus conclude that the SED of FU Orionis has not varied since the 1994 @1996AJ....112.2184G and use the non coronagraphic images of FU Ori N to derive the SRF. [*Bottom* ]{}: The final SFR is computed using FU Ori N as a reference star.[]{data-label="fig::SpectrumCalibrator"}](fig2c.pdf "fig:"){width="6cm"} ![Determination of the atmospheric and instrumental Spectral Response Function. [*Circles*]{}: P1640 spectrum of FU Ori N, obtained from non-coronagraphic images normalized using three different spectral response functions. [*Solid Line*]{}: IRTF spectrum of FU Ori N, from @1996AJ....112.2184G . [*Top*]{}: SRF calculated using HD104860, chi-squared difference with the IRTF spectrum $12.5$ [*Second from top*]{}: SRF calculated using HD87696, chi-squared difference with the IRTF spectrum $11.2$ [*Second from bottom Left*]{}: SRF calculated using HD109011, chi-squared difference with the IRTF spectrum $4.6$, note that most of the mismatch resides in the water absorption bands. 
The P1640 spectrum exhibits excellent agreement with the IRTF spectrum when the air-mass of the calibrator star is within $0.02$ from FU Ori. We thus conclude that the SED of FU Orionis has not varied since the 1994 @1996AJ....112.2184G and use the non coronagraphic images of FU Ori N to derive the SRF. [*Bottom* ]{}: The final SFR is computed using FU Ori N as a reference star.[]{data-label="fig::SpectrumCalibrator"}](fig2d.pdf "fig:"){width="6cm"} ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Extraction of the spectrum of FU Ori S {#sec::SpectralExtraction} -------------------------------------- To mitigate the contamination of the spectro-photometry of FUOri S by residual speckles, we first seek to calibrate these residual quasi-static optical artifacts using an aggressive PSF subtraction algorithm, the Locally Optimized Combination of Images approach [@JustinSpeckle; @2007ApJ...660..770L]. 
For a given location in the image, LOCI creates a synthetic reference PSF, a weighted sum of images drawn from a collection of reference frames, based on a least-squares fit of the neighboring speckles. In order to preserve as much flux from the companion as possible, the least-squares fit is calculated over a large “optimization” region of the image, with an area expressed as $N_{A}$ PSF cores, while the actual subtraction is carried out in a smaller “subtraction” zone. As discussed in @LaurentSpeckle, direct photometry on images processed using classical LOCI implementations, such as the reduced images in the bottom panel of Fig. \[fig::ImagesFuOri\], can produce biases in the spectro-photometry. We quantified this effect by extracting a “zeroth-order” spectrum of FU Ori S from the reduced data cubes of Fig. \[fig::ImagesFuOri\], and used this spectrum to inject a set of synthetic companions into our dataset. The top panel of Fig. \[fig::ExtractedBias\] shows, for a typical azimuthal orientation, the injected synthetic spectrum and a series of extracted spectra after LOCI reduction. Each extracted spectrum was estimated using a reduction with a given area of least-squares minimization, $N_A$. Such a reduction strategy leads to underestimating the flux of the companion by a factor of two to three. Moreover, this bias is wavelength dependent. As presented in @LaurentSpeckle, this flux depletion is a combination of a least-squares bias common to all LOCI implementations (which can be identified as a “grey” gain that alters neither spectral features nor the SED slope) and a spectral cross-talk term specific to integral field spectrographs. We circumvented this problem by using the “damped LOCI” (d-LOCI) approach introduced in @LaurentSpeckle. d-LOCI relies on a least-squares approach similar to the one described above but adds to the underlying quadratic cost function a supplemental penalty term that scales with the flux of the discovered companion.
This preserves the flux from faint sources and leads to unbiased SED estimates even in the case of a companion buried under speckles. The results with synthetic companions are shown in the bottom panel of Fig. \[fig::ExtractedBias\]. Injected and extracted spectra agree very well for a wide range of d-LOCI parameters. We further explored potential reduction biases by varying other algorithm parameters that scale with the contamination across spectral channels. d-LOCI reductions yield consistent results for a wide range of parameters, with a bias smaller than 5 percent. We thus conclude that our final spectrum is unbiased at the $\sim 5$ percent level. In Figure \[fig::CompareSEDs\] we present our final spectrum of FU Ori S and, for comparison, that of FU Ori N. We derived the error bars as the root mean square of the sum of three separate terms: the error on the spectral response function (e.g. the photometric scatter in the non-coronagraphic images), the scatter of the bias estimated on synthetic companions over the ensemble of LOCI parameters explored (smaller than $5$ percent), and the photometric scatter in the reduced images. The spectrum of FU Ori S rises to the red throughout the J and H bands and appears to be featureless.
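The least-squares step at the heart of LOCI can be sketched in a few lines. This is a toy illustration only, with array names of our own choosing: the optimization/subtraction zone geometry, the reference-frame selection, and the d-LOCI penalty term are all omitted.

```python
import numpy as np

def loci_coefficients(target, references):
    """Weights c_k minimizing ||target - sum_k c_k * ref_k||^2 over an
    optimization region, with each frame flattened to a pixel vector."""
    A = np.column_stack([r.ravel() for r in references])
    coeffs, *_ = np.linalg.lstsq(A, target.ravel(), rcond=None)
    return coeffs

rng = np.random.default_rng(0)
references = [rng.normal(size=(8, 8)) for _ in range(4)]

# A target built as an exact combination of the references is recovered
# (and therefore subtracted) perfectly by the fit.
truth = np.array([0.5, -1.0, 2.0, 0.25])
target = sum(w * r for w, r in zip(truth, references))

coeffs = loci_coefficients(target, references)
synthetic_psf = sum(c * r for c, r in zip(coeffs, references))
residual = target - synthetic_psf
```

In the real pipeline the fit is performed over the larger optimization region while the subtraction uses the smaller zone, which is what decouples the companion flux from the fit; d-LOCI further penalizes solutions that absorb companion flux.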
![Quantification of LOCI and d-LOCI bias using synthetic companions. The spectra are shown before calibration by the Spectral Response Function. In both panels the dashed line corresponds to the synthetic companion injected in the coronagraphic PSFs. This “zeroth order” spectrum was obtained using aperture photometry on preliminary LOCI-reduced images, renormalized to match the band-averaged photometry of the raw data. The top panel shows the synthetic spectrum extracted after LOCI: while the images exhibit the high SNR illustrated in Fig. \[fig::ImagesFuOri\], the spectro-photometric signal clearly exhibits an algorithmic flux depletion. Moreover, this bias depends upon the reduction parameters. We alleviate this problem using a d-LOCI approach, bottom panel, which exhibits only a small bias over a wide range of reduction parameters.[]{data-label="fig::ExtractedBias"}](fig3a.pdf "fig:"){width="8cm"} ![](fig3b.pdf "fig:"){width="8cm"}

![J and H band Spectral Energy Distribution of the FU Orionis system. [*Blue Circles*]{}: P1640 spectrum of the Northern component.
[*Red Squares*]{}: P1640 spectro-photometry of the Southern component.[]{data-label="fig::CompareSEDs"}](fig4.pdf){width="8cm"}

Physical association of the two components of the FU Orionis system
===================================================================

Proper Motion of FU Orionis
---------------------------

The proper motion of FU Ori N is reported both in the Carlsberg Meridian Catalog [^1], as pm RA $= 0.116''/$year $\pm\, 0.104$ and pm DEC $= 0.0864''/$year $\pm\, 0.0115$, and in the PPMXL proper motion catalog, as pm RA $= 0.0144''/$year $\pm\, 0.0108$ and pm DEC $= 0.0721''/$year $\pm\, 0.0108$. Although the proper motion estimates in right ascension have large fractional errors, the proper motion in declination is highly significant. If we adopt a distance of $450$ pc to FU Orionis, these two estimates yield surprisingly large transverse velocities, $183$ km/s and $139$ km/s respectively. Tracing back the trajectory of FU Orionis over $\sim 1$ Myr using these values places this object in the T-Tauri star-forming region M78. While there exists a multitude of observational evidence for the young age of FU Orionis, its location, seemingly isolated from any star-forming cluster, can be seen as somewhat puzzling. This large proper motion could explain this paradox: in this scenario FU Ori could be a runaway star from M78. However, the large ejection velocity required by this scenario can only be the result of an unlikely, very close encounter early in the history of FU Orionis. Moreover, such transverse velocities do not seem compatible with the radial velocity of FU Ori N measured by @2003ApJ...595..384H, which is consistent with the molecular cloud velocity. Alternatively, these proper motion estimates, based on optical images, could be biased by the extended optical nebulae surrounding FU Orionis.
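The transverse velocities above follow from the standard conversion $v_t\,[\mathrm{km/s}] = 4.74\;\mu\,[''/\mathrm{yr}]\; d\,[\mathrm{pc}]$, where 4.74 km/s corresponds to 1 AU/yr. A minimal sketch, using the declination proper motions only; small differences from the quoted values presumably reflect rounding and the exact combination of proper-motion components used:

```python
def transverse_velocity(mu_arcsec_per_yr, distance_pc):
    """v_t in km/s from a proper motion (arcsec/yr) and a distance (pc).
    The factor 4.74 km/s is one AU per year."""
    return 4.74 * mu_arcsec_per_yr * distance_pc

# Carlsberg pm DEC at the adopted 450 pc distance:
v_carlsberg = transverse_velocity(0.0864, 450.0)  # ~184 km/s
# 2MASS-to-WISE pm DEC:
v_infrared = transverse_velocity(0.025, 450.0)    # ~53 km/s
```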
We further explored this issue and derived a proper motion using astrometric estimates based solely on infrared observations, since at longer wavelengths FU Ori is a true point source. To do so, we take advantage of the $\sim 10$ year baseline between the 2MASS and WISE observations. Using the 2MASS point source catalog and the WISE preliminary catalog, we obtain pm RA $= 0.033''/$year $\pm\, 0.14$ and pm DEC $= 0.025''/$year $\pm\, 0.053$. While the mean value of this proper motion estimate yields a smaller transverse velocity of $56$ km/s, the large declination uncertainty in the preliminary WISE point source catalog, due to a systematic bias in WISE astrometric estimates that will be corrected in a future release, prevents us from reaching a definitive conclusion regarding the true proper motion of FU Orionis. For the purpose of demonstrating that FU Ori North and South are co-moving, we will use the smallest of these three proper motion values, the infrared-to-infrared 2MASS-to-WISE estimate.

Physical association of FU Ori S and FU Ori N
---------------------------------------------

As the one-year time baseline between the epochs presented in @2004ApJ...608L..65R and @2004ApJ...601L..83W was not sufficiently long to establish or rule out physical association between FU Ori North and South, we explore this question using the 2009 epoch from P1640. In our data we detect FU Ori S at a separation of $0.491'' \pm 0.007$ and a position angle of $161.2^{\circ} \pm 1.1$ with respect to FU Ori N. The astrometric pupil plane grid, necessary to carry out astrometry on occulted coronagraphic images [@2006ApJ...647..620S; @2010ApJ...712..421H; @2011ApJ...726..104H], was not in the optical path during the observing sequence of FU Orionis. Astrometric estimates were thus carried out using the un-occulted reddest images, in which the companion is visible, albeit at very low SNR.
Each of the five images chosen in the H band was analysed with the program FITSTARS, which uses an iterative blind deconvolution that fits the locations of delta functions and their relative intensities to the data. The FITSTARS program was presented in [@tenBrummelaar1996; @tenBrummelaar2000]. After discarding any data that failed to converge to a physical solution, the position angle and separation were computed as weighted averages of the individual values. The weights were set equal to the inverse of the RMS residual of each fit, a standard output of the FITSTARS program. The error bars for position angle and separation were set equal to the weighted standard deviation of the results. To increase confidence in our astrometric point, we included in our common proper motion analysis a second P1640 epoch, obtained in 2011 with the astrometric calibration grid in the optical train. We also complemented our observations with the 2005 epoch obtained as part of the interferometric survey presented in @2009ApJ...700..491M, re-processed with particular care to extract the flux from FU Ori S. The result of our analysis is summarized in Figure \[fig::FUOriProperMotion\], which includes previously published points; the uncertainties in the relative position of the two components and in the proper motion of FU Ori N have been combined into an uncertainty on the position of FU Ori S. The configuration of this system is somewhat cumbersome, as the largest source of uncertainty is the lack of a well-constrained proper motion estimate for FU Ori N. Even in the presence of such large uncertainty, Figure \[fig::FUOriProperMotion\] shows that the combination of the five epochs clearly establishes that FU Ori N and FU Ori S are co-moving. We thus conclude that these two objects are physically associated. The relative positions of the binary pair across the six epochs in Figure \[fig::FUOriProperMotion\] are given in Table \[tab::FuAstrometry\].
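The weighting scheme described above (weights equal to the inverse RMS residual of each FITSTARS fit, error bars from the weighted standard deviation) can be sketched as follows; the separations and residuals below are illustrative, not our measurements:

```python
import numpy as np

def weighted_mean_and_std(values, rms_residuals):
    """Weighted average with w_i = 1/RMS_i, plus the weighted standard
    deviation used as the error bar."""
    values = np.asarray(values, dtype=float)
    w = 1.0 / np.asarray(rms_residuals, dtype=float)
    mean = np.sum(w * values) / np.sum(w)
    var = np.sum(w * (values - mean) ** 2) / np.sum(w)
    return mean, np.sqrt(var)

# Hypothetical separations (arcsec) and fit residuals from five frames:
seps = [0.490, 0.493, 0.489, 0.492, 0.491]
rms = [1.0, 2.0, 1.0, 1.5, 1.0]
sep, sep_err = weighted_mean_and_std(seps, rms)
```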
Under the assumption of a face-on orbit and a $0.4 \; M_{\odot}$ mass for FU Ori N, the largest orbital motion detectable over the 9 year baseline provided by these points is $26$ micro-arcseconds in the NS direction if FU Ori S is massless, and $45$ micro-arcseconds in the NS direction if the mass of FU Ori S is twice that of FU Ori N. In any case, the astrometric uncertainties in Table \[tab::FuAstrometry\] are much larger than these optimistic projected relative motions, and we conclude that the scatter in these values is not representative of any significant orbital motion.

![Proper motion of the FU Orionis system. The blue dots represent the location of FU Ori N across the five epochs considered. The stellar motion was calculated using the astrometric difference between the WISE and 2MASS epochs. The uncertainties on this motion have been combined with the uncertainty in the relative position of FU Ori S with respect to FU Ori N to estimate the confidence regions of the on-sky location of FU Ori S (orange ellipses). For clarity, we arbitrarily used the 2005 epoch as the origin of time for the calculation of the uncertainties in proper motion, and the 2002 epoch as the zero point of our diagram. Since the confidence regions of the 2002 and 2003 epochs overlap, common proper motion could not be established at the time. However, the data points introduced in this paper provide much greater temporal leverage and demonstrate that FU Ori N and FU Ori S are physically associated.[]{data-label="fig::FUOriProperMotion"}](fig5.pdf){width="9cm"}

Reconstruction of the 0.8-10 $\mu$m Spectral Energy Distribution of FU Ori S
============================================================================

Consistency with published J and H band observations of FU Ori S
----------------------------------------------------------------

We first focused on comparing the P1640 band-averaged photometry with the estimates obtained by @2004ApJ...608L..65R.
By integrating the data points in Fig. \[fig::CompareSEDs\] over the J and H bandpasses, we found magnitude differences between FU Ori N and FU Ori S of $\Delta m_{J} = 5.6$ mag and $\Delta m_{H} = 4.9$ mag, significantly larger than the values estimated by Reipurth & Aspin, respectively $4.5$ mag and $4.3$ mag. We thus closely inspected the imaging data from @2004ApJ...608L..65R. These images were obtained using the Infrared Camera and Spectrograph (IRCS) at the Subaru Telescope and are publicly available on the SMOKA archival system [^2]. They consist of a series of images in which the brightest star is saturated, and a series of photometric calibration frames in which the brightest star is attenuated by a $1/100$ neutral density filter. The left panel of Fig. \[fig::IRCSImages\] illustrates the raw J band IRCS data; the FU Ori S component is detected at $SNR \sim 3$ and is heavily embedded in the residual adaptive optics halo. We can reproduce the magnitude differences reported in @2004ApJ...608L..65R by carrying out a Gaussian photometric fit on these images. However, in an effort to further separate the photometric contributions of the AO halo and FU Ori S itself, we proceeded to subtract from each exposure a centro-symmetric image of itself (right panel of Fig. \[fig::IRCSImages\], similar to Fig. 1 in @2004ApJ...608L..65R) before carrying out the photometric estimation. This process led to substantially larger estimates of the magnitude difference, $\Delta m_{J} = 5.3$ mag and $\Delta m_{H} = 5.0$ mag. These values better match our P1640 observations, and are consistent within generous error bars. We then carried out the same procedure on the IRCS K’ and L’ band images of FU Ori S. Since the AO halo decreases with wavelength, these long-wavelength images exhibit $SNR >10$, and we found that both methods outlined in Fig. \[fig::IRCSImages\] yield estimates consistent with the values published by @2004ApJ...608L..65R.
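The band-averaged magnitude differences quoted above follow from integrating each spectrum over the bandpass and converting the flux ratio to magnitudes. A minimal sketch, with flat toy spectra and no filter transmission curve:

```python
import numpy as np

def band_flux(flux, wavelengths):
    """Trapezoidal integral of a spectrum over the bandpass."""
    return np.sum(0.5 * (flux[1:] + flux[:-1]) * np.diff(wavelengths))

def delta_magnitude(flux_primary, flux_companion, wavelengths):
    """Band-averaged Delta m = -2.5 log10(F_companion / F_primary)."""
    ratio = band_flux(flux_companion, wavelengths) / band_flux(flux_primary, wavelengths)
    return -2.5 * np.log10(ratio)

wl = np.linspace(1.5, 1.8, 50)                    # an H-like bandpass, microns
primary = np.ones_like(wl)
companion = np.full_like(wl, 10 ** (-0.4 * 4.9))  # flat spectrum, 4.9 mag fainter
dm = delta_magnitude(primary, companion, wl)      # recovers 4.9
```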
The results of our photometric analysis are summarized in Table \[tab::FuPhotometry\]. Note that the main conclusion of @2004ApJ...608L..65R, namely that FU Ori S is a young star very likely to be associated with FU Ori N, relied on red J-H colors. It is thus not impacted by our significantly revised photometric estimates, which indicate an even redder object. @2012AJ....143...55B recently reported high spectral resolution ($R \sim 3000$) J, H, and K band spectra of FU Ori S, obtained with the Gemini NIFS integral field spectrograph. They reported a positive SED slope in J and a negative slope in H, while our estimates, Fig. \[fig::CompareSEDs\], yield positive slopes in both bands. Although this could be the result of extreme near-infrared color variability of FU Ori S, we have argued against such variability for FU Ori N in section 2.2, and indeed there may be a technical explanation for the difference in spectral slope between the @2012AJ....143...55B observations and ours. While P1640 and NIFS are both integral field spectrographs, their design tradeoffs are radically different. NIFS prioritizes spectral resolution ($R \sim 3000$) over spatial resolution ($\sim 10$ angular resolution units per spaxel at the shortest wavelength). P1640 was designed specifically to explore an orthogonal parameter space, with fine spatial sampling in order to mitigate speckle contamination ($R \sim 45$, $>2$ spaxels per unit of angular resolution at the shortest wavelength). In order to investigate the H band slope discrepancy, we carried out aperture photometry on our P1640 data without any speckle suppression. This led to an SED estimate similar to the synthetic spectrum shown in Fig. \[fig::ExtractedBias\], which mimics the SED profile in @2012AJ....143...55B. We thus conclude that the discrepancy in H band slope could be due to mid-spatial-frequency speckle contamination in NIFS, which cannot be calibrated because of the large angular extent of the NIFS spaxels.
Alternatively, it could be the result of genuine near-infrared variability of FU Ori S, which would require further monitoring of this system. Since the characterization of FU Ori S in @2012AJ....143...55B is based on line diagnostics that take advantage of the high spectral resolution, revising their estimated SED slope would not alter their conclusions.

![In an effort to reconcile the P1640 photometry with the IRCS data we reprocessed the 2003 epoch published in [@2004ApJ...608L..65R]. [**Left:**]{} raw IRCS PSF. FU Ori S is embedded within the adaptive optics halo. [**Right:**]{} self-calibrated AO PSF, obtained by subtracting from each frame its centro-symmetric image. In the raw image the halo contaminates the photometry of FU Ori S and our Gaussian fitting analysis yields estimates similar to [@2004ApJ...608L..65R]. The same analysis on the self-calibrated AO PSF, where the halo contamination has been mitigated, yields photometric estimates consistent with the values obtained on P1640 data.[]{data-label="fig::IRCSImages"}](fig6.pdf){width="10cm"}

Assembled Spectral Energy Distribution of FU Ori S
--------------------------------------------------

We complemented our J and H SED derived from P1640 data with published measurements spanning $0.8 \; \mu m$ to $10 \; \mu m$. The $0.8 \; \mu m$ point was obtained using the magnitude difference of $\Delta m_{0.8} = 3.96 \pm 0.28$ reported from speckle imaging by @2008AstBu..63..357K. It was placed within the overall SED of FU Ori S using an extrapolation to $0.8 \; \mu m$ of the SED of FU Ori N published in . We used the K band spectrum published by @2004ApJ...608L..65R, scaled to match the K photometry obtained from our analysis of the IRCS data. Comparison of the @2004ApJ...608L..65R and the Beck & Aspin spectral slopes through the K band shows that they are similar, if not identical, outside of the CO bandhead region. The IRCS L band photometry we derived was also incorporated in our SED reconstruction.
Finally, the $10.2$ micron N band point was obtained using a non-redundant aperture configuration of the Keck segment-tilting experiment, as reported in @2009ApJ...700..491M, but with special attention in the data reduction, for our purposes, to the presence of two point sources. FU Ori N and FU Ori S are separated by $475 \pm 10$ mas at a position angle of $163 \pm 2$ deg; the N-band flux ratio is $\sim 10 \pm 1.5$. Neither component of the binary is itself spatially resolved in the 10 $\mu$m data. To the best of our knowledge there is no other ultraviolet, optical, infrared, or sub-mm data that spatially resolves the pair and could therefore be incorporated into our analysis.

Analysis of the Spectrum and Spectral Energy Distribution of FU Orionis S
=========================================================================

![Overall Spectral Energy Distribution of the FU Orionis system with best-fit models. The N-band point is well above any conceivable reddened stellar photosphere, and must therefore be due to circumstellar dust around FU Ori S. The colored curves are NextGen model atmospheres at the representative effective temperature and extinction pairs $T_{eff}=6500$, $A_V=12$ (blue); $T_{eff}=5500$, $A_V=11$ (cyan); $T_{eff}=4500$, $A_V=9$ (green); and $T_{eff}=3500$, $A_{V}=9$ (magenta), all with surface gravity $\log g = 4.5$. The red line is a 450 K blackbody with the same extinction as the matched photosphere. The sums of the photospheric and excess contributions are represented by the black lines, which are normalized to the data at the P1640 $1.7 \; \mu$m point. The L-band and N-band points can be fit adequately by adding a single reddened blackbody of temperature $\sim 480$ K to the reddened photospheres.[]{data-label="fig::grid"}](fig7.pdf){width="7cm"}

Considering first the spectral line information available in our $R\sim 45$ P1640 data, the J- and H-band spectrum lacks any absorption features.
This constrains the temperature to be warmer than $\sim$4000 K, since at cooler temperatures molecular absorption from CO and $H_2O$ would become apparent even at the low spectral resolution of P1640 and in the presence of high extinction. Meanwhile, at warmer temperatures narrow atomic features may be present, but would not be visible at the P1640 resolution. The published $R\sim 1500$ K-band spectrum from @2004ApJ...608L..65R hints at the presence of NaI at 2.206 $\mu$m and perhaps CaI at 2.26 $\mu$m, which the authors invoke to advocate for a temperature corresponding to a G or K spectral type. This is consistent with their otherwise featureless K-band spectrum, which implies a temperature warmer than $\sim$4500 K based on the absence of CO absorption, and cooler than $\sim$7000 K due to the absence of Br$\gamma$ absorption (though an actively accreting young star could have both CO and Br$\gamma$ absorption filled in to the continuum level without these lines appearing explicitly in emission). Higher signal-to-noise and higher spectral resolution K-band data recently published by @2012AJ....143...55B more clearly exhibit the photospheric absorption features in FU Ori S, and suggest a spectral type of K5. Notably, these authors find both the CO bandhead and Br$\gamma$ in [*emission*]{}, whereas the earlier @2004ApJ...608L..65R spectrum showed no emission in these regions. It is somewhat unusual for a young stellar object to have both clearly present photospheric absorption features and strong CO emission. The overall SED of FU Ori S from R- to J- to H-band is quite red, see Fig. \[fig::grid\]. Considering, in addition, the K-, L-, and N-bands, the SED is also seen to be broader than a photospheric spectrum. An analysis of the available SED was carried out using both blackbody fitting and comparison to stellar atmospheres in order to derive rough temperature and extinction estimates.
We find that the N-band point is well above any conceivable reddened stellar photosphere, and must therefore be due to circumstellar dust around FU Ori S. The L-band point is also likely in excess of the stellar photosphere, though for very cool, highly reddened atmospheres an L-band excess is not required. Without a more fully populated spectral energy distribution the disk cannot be characterized in terms of mass or size, but only in terms of the hottest dust closest to the photosphere. The L-band and N-band points can be fit adequately by adding to the reddened photosphere described below a single reddened blackbody of temperature $450-480 \; K$. The de-reddened luminosity of FU Ori N is dominated by its hot inner disk and has been estimated at $226 L_{\odot}$ [@2007ApJ...669..483Z]. Combining effective temperature estimates based on our reconstructed SED (see below) with a stellar radius of $\sim 1 R_{\odot}$, we find that the de-reddened luminosity of FU Ori S is $\sim 0.6 L_{\odot}$. As a consequence, assuming a distance of $225$ AU between the two components, we find that for separations $<10$ AU from the southern component the dusty material around FU Ori S is mostly heated by FU Ori S, while further away heating from FU Ori N becomes dominant. As the temperature of this dust, as per our SED estimate, is $\sim 480$ K, it most likely lies within the inner AU of the southern component and is heated by FU Ori S. Moving blueward, the origin of the K-band flux is more ambiguous. The shape of both published K-band spectra (excluding the CO bandhead region, which has a different slope in the @2004ApJ...608L..65R vs. the @2012AJ....143...55B spectra, perhaps indicating the onset of CO emission) is consistent with an un-reddened blackbody of roughly 2200-2500 K. The shape of the JH region P1640 data, however, requires a much more extincted spectrum, for example $A_V=7$ mag for the same 2500 K blackbody.
This model then does not fit the published K-band spectral shape, though it does pass through the mean K-band flux. Considering alternate blackbodies, $A_V=10$ mag is required for a $\sim$5000 K blackbody to fit the JH spectral region, but again this does not fit the K-band spectral shape very well, though it does still match the mean K-band flux. Model atmosphere fitting in the JHK wavelength regime introduces sensitivity to spectral signatures that would arise in cool atmospheres. For this purpose we used the NextGen [@2001ApJ...556..357A] models released in 2005[^3]. As seen in Fig. \[fig::gridZoom\], there is a general trade-off between extinction and temperature through the P1640 J-band and H-band region, with the $T = 6500, 5500, 4500$, and $3500$ K models best fit with $A_V = 12, 11, 9$, and $9$ mag, respectively, as illustrated. Cooler temperatures exhibit broad molecular absorption features throughout the J and H bands, which as noted above are not seen. The relatively featureless K-band spectrum additionally rules out the $3500$ K model, for the same reason that even cooler models are ruled out by the J- and H-band data alone. The ability of a model with given temperature and extinction to match both the steep positive slope through the J- and H-band region and the negative slope through the K-band depends on the wavelength at which the model is normalized to the data. For example, a normalization at the far red end of the H-band permits a wider range of temperature-plus-extinction models to pass within the error bars through the R-band photometry and the J- and H-band spectrophotometry, and to come close to matching the K-band slope, than can be fit to the same $\chi^2$-based confidence level when normalizing at shorter wavelengths. Shorter-wavelength H-band or even J-band normalization allows the red end of the H-band as well as the K-band to serve as a discriminant between models; see Fig. \[fig::gridZoom\].
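The normalization dependence described above can be made concrete with a small sketch. Everything here is illustrative: the wavelength grid, fluxes, and errors are made up, and this is not the fitting code used for the paper.

```python
import numpy as np

def chi2_with_normalization(wav, data, err, model, anchor_um):
    """Scale `model` to match `data` exactly at the grid point nearest
    `anchor_um`, then return chi^2 of the scaled model against the data."""
    i = int(np.argmin(np.abs(wav - anchor_um)))
    scaled = model * (data[i] / model[i])
    return float(np.sum(((data - scaled) / err) ** 2))

# toy spectra: the chi^2 of the same model changes with the anchor choice
wav = np.linspace(1.1, 1.75, 50)       # synthetic J+H grid, microns
data = 1.0 + 2.0 * (wav - 1.1)         # steep red data slope
err = np.full_like(wav, 0.05)
model = 1.0 + 1.5 * (wav - 1.1)        # shallower model slope
chi2_blue = chi2_with_normalization(wav, data, err, model, 1.15)
chi2_red = chi2_with_normalization(wav, data, err, model, 1.70)
```

When model and data slopes differ, pinning the model at one end of the band redistributes the residuals toward the other end, so the anchor wavelength changes which models survive a given $\chi^2$ cut.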
Overall, we find the best fit for an atmosphere with $T=4000$ K and $A_V=8$ mag, normalized at either the short end of H or the long end of J. This effective temperature is consistent with @2012AJ....143...55B [@2004ApJ...608L..65R]; our P1640 observations provide tighter constraints on this quantity along with an estimate of the extinction along the line of sight to FU Ori S.

New Results on FU Ori S in the Context of FU Ori N
==================================================

![Zoom on the region of the SED probed by P1640. Same colors as Fig. \[fig::grid\]. [*Top*]{}: all the models are scaled to match the P1640 spectrum at $1.65 \; \mu$m. [*Bottom*]{}: all the models are scaled to match the P1640 spectrum at $1.4 \; \mu$m. Agreement between data and models depends on the wavelength at which the models are normalized. We find the best fit for an atmosphere with $T=4000 \; K$ and $A_V=8$ mag, normalized at either the short end of H or the long end of J.[]{data-label="fig::gridZoom"}](fig8a.pdf "fig:"){width="7cm"}

![Zoom on the region of the SED probed by P1640. Same colors as Fig. \[fig::grid\]. [*Top*]{}: all the models are scaled to match the P1640 spectrum at $1.65 \; \mu$m. [*Bottom*]{}: all the models are scaled to match the P1640 spectrum at $1.4 \; \mu$m. Agreement between data and models depends on the wavelength at which the models are normalized. We find the best fit for an atmosphere with $T=4000 \; K$ and $A_V=8$ mag, normalized at either the short end of H or the long end of J.[]{data-label="fig::gridZoom"}](fig8b.pdf "fig:"){width="8cm"}

In this paper we have established that FU Ori S is physically related to FU Ori N and that the pair forms a true binary system. The P1640 astrometry over two epochs, combined with literature information, establishes common proper motion of the pair. Our spatially resolved spectrophotometry in JH, in combination with the limited amount of data at other wavelengths that spatially resolves the binary, enables for the first time an assessment of the N/S pair of young stars in the FU Orionis system. The northern component is a long-recognized FU Ori star that is visible at optical wavelengths and well studied in the context of the accretion-disk-dominated paradigm for FU Ori objects, the class for which it is the prototype. Previous modelling work has estimated that the underlying star is a 0.3 to 0.5 $M_\odot$ star behind $A_V=1.5-2.4$ mag. The southern component, which we have characterized here, appears to be the more heavily reddened ($A_V=8-12$) component of the system, and perhaps even the slightly hotter ($\sim$4000-6500 K) and therefore more massive ($>0.5 M_\odot$) component, depending on the adopted temperature.
Both components have infrared excess, though there are no spatially resolved flux measurements beyond those at 10.7 $\mu$m that are newly reported here. Notable is the much higher extinction required in order to explain the optical to near-infrared spectral energy distribution of FU Ori S, compared to the more moderate reddening inferred from other studies towards FU Ori N. This difference should be put in perspective with extinction estimates obtained using X-ray observations of this system. Two separate line-of-sight absorptions are needed to fit the FU Ori X-ray spectrum with thermal emission models. Building upon earlier work with XMM-Newton [@2006ApJ...643..995S], @2010ApJ...722.1654S used Chandra ACIS-S data to show that the FU Ori spectrum is fit well with (1) a highly absorbed hot plasma component ($kT \simeq 4.5$ keV; $N_H \simeq 10^{23} \; cm^{-2}$) that is spatially coincident with the primary, and (2) a cooler, less absorbed thermal plasma ($kT \simeq 0.5$ keV; $N_H \simeq 10^{22} \; cm^{-2}$) which shows a small centroid offset towards FU Ori S. They suggest that the latter soft component is most likely due to FU Ori S, which contributes at least $50 \%$ of the soft X-ray counts. Table \[tab::FuExtinction\] reports the estimated optical extinctions for the FU Ori N-S pair (from @2007ApJ...669..483Z and this work, respectively), converts them to hydrogen column densities, and compares these indirect $N_H$ estimates with the direct X-ray measurements of @2010ApJ...722.1654S. The estimated hydrogen column densities in the direction of FU Ori S obtained using the two methods are of the same order of magnitude and consistent within generous error bars. The independent estimate derived from our observations thus reinforces the notion of a cool plasma embedding FU Ori S. Moreover, it could potentially be useful to externally constrain the soft X-ray component in the global fit presented in @2010ApJ...722.1654S.
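The extinction-to-column-density conversion behind Table \[tab::FuExtinction\] is a one-line relation. The sketch below assumes the commonly quoted Galactic gas-to-dust ratio $N_H \approx 2 \times 10^{21} A_V$ cm$^{-2}$; the exact coefficient varies between studies, and this value reproduces the tabulated numbers only to within rounding.

```python
# assumed standard Galactic gas-to-dust relation, coefficient ~2e21 cm^-2 mag^-1
AV_TO_NH = 2.0e21

def av_to_nh(a_v_mag):
    """Hydrogen column density (cm^-2) implied by a visual extinction (mag)."""
    return AV_TO_NH * a_v_mag

nh_south = av_to_nh(10.0)  # FU Ori S, A_V ~ 10 mag -> ~2e22 cm^-2
nh_north = av_to_nh(2.0)   # FU Ori N, A_V ~ 2 mag  -> ~0.4e22 cm^-2
```

The interest of the table is then the comparison of these dust-based columns with the direct X-ray columns: they agree for FU Ori S but disagree by more than an order of magnitude for FU Ori N.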
Such an exercise would lead to fitting the Chandra counts with a large hydrogen column density for the hard X-ray counts, even larger than the values presented in @2010ApJ...722.1654S. This would further reveal a discrepancy between optical extinction measurements and X-ray absorption in the direction of FU Ori N even larger than the factor $>10$ exhibited in Table \[tab::FuExtinction\]. This excess X-ray absorption indicates a large gas-to-dust ratio that could be explained by non-uniformities in the geometry of FU Ori N's disk [@2007AstL...33..755K; @1996ApJ...461..933K] or by the presence of winds. A full understanding of this paradigm will require further research, and we refer the reader to @2010ApJ...722.1654S for a thorough discussion of potential avenues to identify such phenomena. The infrared excess, established here and implied by the data in @2004ApJ...608L..65R, as well as the CO plus BrG emission reported by @2012AJ....143...55B (though mysteriously not present in the @2004ApJ...608L..65R spectrum), strongly suggests that FU Ori S is also a young star/disk system. However, the lack of cool molecular features and the implied temperature of at least $4000$ K, and perhaps more than $5000$ K, actually render FU Ori S the more massive object, in other words the primary of the system. The observed optical and infrared brightness ratios can then be explained by both the excess luminosity of FU Ori N due to its outburst state since the 1936 eruption, and the high extinction along the line of sight towards FU Ori S. Dust in the hot inner disk around FU Ori N cannot account for this extinction, since the pair separation of $\sim 225$ AU is larger than published estimates of the outer radius of this disk [@2007ApJ...669..483Z].
Perhaps, however, the pair is embedded in the same circumbinary envelope, with higher extinction towards the S component than towards the N component arising in either a clumpy envelope or the foreground cloud present along the line of sight. Indeed, the pair could be a binary in the process of fragmentation. The source may thus be analogous to the Z CMa binary system, where the optically visible source is also a candidate FU Ori star and the embedded companion is the more massive object, called a Herbig Ae/Be star in the literature. The Z CMa system is discussed in a separate paper from the P1640 collaboration [@HinkleyInPrep]. @2004ApJ...608L..65R [@1996MNRAS.278L..23C; @2005MNRAS.361..942C] and @1992ApJ...401L..31B have proposed binary interactions as a suitable trigger for the FU Ori outbursts, causing gravitational instabilities that raise accretion rates by 3-4 orders of magnitude over normal T Tauri quiescent accretion levels. Problems with this origin of the outbursts include the short predicted rise times, the higher-order multiplicity suggested as necessary for aperiodic instabilities (rather than the mere binarity that is observed), and finally that the gravitational instability would affect the entire disk, not just the inner fraction of an AU that is thought to be participating in the outburst. Note that in our case the separation between FU Ori N and S is too wide for FU Ori S to be responsible for the 1936 outburst.

Conclusion
==========

We have presented near-infrared integral field spectrograph observations of FU Orionis: our data spatially resolve the FU Ori N and FU Ori S binary components throughout the J- and H-bands, at R$\sim$45. Our astrometric analysis unambiguously establishes common proper motion: FU Ori N and S form a binary pair. Our spectral energy distribution analysis suggests that the fainter southern source in this system might actually be the more massive component.
Our observations allowed us to retrieve the SED of FU Ori S, which was thus far poorly constrained. In order to carry out unbiased spectro-photometric estimates in the presence of speckles we applied the damped LOCI algorithm [@LaurentSpeckle], a reduction method specifically designed for high contrast science with an IFS. This is the first communication reporting the high accuracy, on a faint astronomical source, of this technique pioneered by the Project 1640 team. We combined the P1640 data with all other spatially resolved information (R-band, K-band spectroscopy, L-band, and N-band) available from the literature or from re-analysis of previously existing data. The L- and N-band points are well above any conceivable reddened stellar photosphere, and must therefore be due to circumstellar dust around FU Ori S, whose average temperature we estimated at $\sim$480 K. The spectral energy distribution of FU Ori S is very red from optical wavelengths through the H-band, with a turnover thereafter such that the K-band spectral slope is blue. This combined data set is best explained by an underlying star in the $4000-5000$ K temperature range behind $8-12$ mag of visual extinction. This value of the extinction is in good agreement with estimates obtained using previously published X-ray measurements. In particular, our estimate is consistent with the hydrogen column density derived using the Chandra soft X-ray counts, which have been mostly attributed to FU Ori S [@2010ApJ...722.1654S]. Our independent constraint on the extinction in the direction of FU Ori S helps in turn to further confirm the anomalously large excess hard X-ray absorption towards FU Ori N. The high visual extinction along the line of sight to FU Ori S is most likely due to the geometry of a potential circumbinary dust shell, and is moreover responsible for the relatively high contrast ratio with respect to FU Ori N.
Our SED analysis allows us to combine estimates of visual extinction and effective temperature, and yields values of the latter nearly twice as large as previously published. FU Ori S is thus much more massive than previously believed and, with a lower bound of $0.5 M_{\odot}$ on its mass, it is probably the more massive component of this system. This source may thus be analogous to the Z CMa binary system. A more detailed comparative analysis of these two objects, putting in perspective the quiescent and outburst near-infrared SEDs of each component in both FU Orionis and Z CMa, is a promising avenue to better unravel the role of binarity in eruptive protostars.

Acknowledgements {#acknowledgements .unnumbered}
================

Project 1640 is funded by National Science Foundation grants AST-0520822, AST-0804417, and AST-0908484. This work was partially funded through the NASA ROSES Origins of Solar Systems Grant NMO710830/102190 and NSF grant AST-0908497. The adaptive optics program at Palomar is supported by NSF grants AST-0619922 and AST-1007046. Some of the research described in this publication was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration. LP was supported by an appointment to the NASA Postdoctoral Program at JPL, Caltech, administered by Oak Ridge Associated Universities through a contract with NASA. LP and SH performed this work in part under contract with the California Institute of Technology (Caltech), funded by NASA through the Sagan Fellowship Program. This work was based in part on data collected at the Subaru Telescope and obtained from the SMOKA archive, which is operated by the Astronomy Data Center, National Astronomical Observatory of Japan.
  Epoch           $\Delta RA$             $\Delta DEC$               Reference
  --------------- ----------------------- -------------------------- ----------------------
  Oct 27th 2002   $0.16'' \pm 0.03''$     $-0.47'' \pm 0.03''$       @2004ApJ...601L..83W
  Dec 15th 2003   $0.147'' \pm 0.006''$   $-0.470'' \pm 0.006''$     @2004ApJ...608L..65R
  Feb 20th 2005   $0.139'' \pm 0.017''$   $-0.454'' \pm 0.017''$     @2009ApJ...700..491M
  Jan 23rd 2008   $0.137'' \pm 0.008''$   $-0.474'' \pm 0.008''$     @2008AstBu..63..357K
  Mar 17th 2009   $0.158'' \pm 0.009''$   $-0.465'' \pm 0.009''$     This work
  Dec 12th 2011   $0.15'' \pm 0.02''$     $-0.45'' \pm 0.02''$       This work

  : RELATIVE ASTROMETRY OF THE FU ORIONIS SYSTEM \[tab::FuAstrometry\]

  Filter   $\lambda_c$ ($\mu$m)   FU Ori N (mag)       FU Ori S (mag)     Instrument
  -------- ---------------------- -------------------- ------------------ -------------------------------
  J        $1.250$                $6.519 \pm 0.015$    $11.98 \pm 0.19$   P1640 $^{(a)}$
  J        $1.250$                $6.519 \pm 0.015$    $11.69 \pm 0.22$   IRCS $^{(b)}$
  H        $1.635$                $5.699 \pm 0.029$    $10.36 \pm 0.17$   P1640 $^{(a)}$
  H        $1.635$                $5.699 \pm 0.029$    $10.14 \pm 0.19$   IRCS $^{(b)}$
  K'       $2.121$                $5.259 \pm 0.023$    $9.35 \pm 0.15$    IRCS $^{(b)}$
  L'       $3.770$                $4.180 \pm 0.039$    $8.09 \pm 0.16$    IRCS $^{(b)}$
  N        $10.7$                 $2.2 \pm 0.02$       $4.7 \pm 0.13$     Keck segment masking $^{(c)}$

  : PHOTOMETRY OF THE FU ORIONIS SYSTEM \[tab::FuPhotometry\]

  Line of sight   $A_V$ (mag) $^{(a)}$   $N_H = f(A_V)$ ($10^{22}$ cm$^{-2}$) $^{(b)}$   $N_H$ ($10^{22}$ cm$^{-2}$) $^{(c)}$
  --------------- ---------------------- ----------------------------------------------- --------------------------------------
  FU Ori S        $10 \pm 2$             $2 \pm 0.4$                                     $1.1 \pm 0.5$
  FU Ori N        $2 \pm 0.5$            $0.39 \pm 0.11$                                 $10 \pm 5$

  : EXTINCTION IN THE FU ORIONIS BINARY PAIR \[tab::FuExtinction\]

[32]{} , F. and [Hauschildt]{}, P. H. and [Alexander]{}, D. R. and [Tamanai]{}, A. and [Schweitzer]{}, A. 2001, , 556, 357-372 , C. & [Reipurth]{}, B. 2003, , 126, 2936 ,[Beck]{}, T. L.
and [Aspin]{}, C., 2012, , 143, 55+ Bonnell, I., & Bastien, P. 1992, , 401, L31 Clarke, C. J., & Syer, D. 1996, , 278, L23 Clarke, C., Lodato, G., Melnikov, S. Y., & Ibrahimov, M. A. 2005, , 361, 942 . 2011, , 729, 132-+ , M. C., [Vacca]{}, W. D., & [Rayner]{}, J. T. 2004, , 116, 362 Dekany, R., Bouchez, A., Britton, M., Velur, V., Troy, M., Shelton, J. C., & Roberts, J. 2006, in (SPIE), 62720G , J.-F., [Paletou]{}, F., [Bouvier]{}, J., & [Ferreira]{}, J. 2005, , 438, 466 Eisner, J. A., & Hillenbrand, L. A. 2011, arXiv:1106.1440 , D. W. and [Irwin]{}, M. J. and [Helmer]{}, L., 2002, , 395, 347-356 Forgan, D., & Rice, K. 2010, , 402, 1349 , P., 1975, , 198, 95-101 , T. P. & [Lada]{}, C. J. 1996, , 112, 2184 , L. & [Kenyon]{}, S. J. 1996, , 34, 207 Hartmann, L., Zhu, Z., & Calvet, N. 2011, arXiv:1106.3343 , G. H. 1966, Vistas in Astronomy, 8, 109 —. 1977, , 217, 693 , G. H., [Petrov]{}, P. P., & [Duemmler]{}, R. 2003, , 595, 384 , S., [Oppenheimer]{}, B. R., [Zimmerman]{}, N., [Brenner]{}, D., [Parry]{}, I. R., [Crepp]{}, J. R., [Vasisht]{}, G., [Ligon]{}, E., [King]{}, D., [Soummer]{}, R., [Sivaramakrishnan]{}, A., [Beichman]{}, C., [Shao]{}, M., [Roberts]{}, L. C., [Bouchez]{}, A., [Dekany]{}, R., [Pueyo]{}, L., [Roberts]{}, J. E., [Lockhart]{}, T., [Zhai]{}, C., [Shelton]{}, C., & [Burruss]{}, R. 2011, , 123, 74 , S. and [Monnier]{}, J. D. and [Oppenheimer]{}, B. R., [Roberts]{}, Jr., L. C., [Ireland]{}, M. and [Zimmerman]{}, N., [Brenner]{}, D. and [Parry]{}, I. R., [Martinache]{}, F., [Lai]{}, O., [Soummer]{}, R., [Sivaramakrishnan]{}, A., [Beichman]{}, C., [Hillenbrand]{}, L., [Zhao]{}, M., [Lloyd]{}, J. P., [Bernat]{}, D., [Vasisht]{}, G., [Crepp]{}, J. R., [Pueyo]{}, L., [Shao]{}, M., [Perrin]{}, M. D., [King]{}, D. L., [Bouchez]{}, A., [Roberts]{}, J. E., [Dekany]{}, R. and [Burruss]{}, R., 2011, , 726, 104 , S., [Oppenheimer]{}, B. R., [Brenner]{}, D., [Zimmerman]{}, N., [Roberts]{}, Jr., L. C., [Parry]{}, I.
R.,[Soummer]{}, R., [Sivaramakrishnan]{}, A., [Simon]{}, M., [Perrin]{}, M. D., [King]{}, D. L., [Lloyd]{}, J. P., [Bouchez]{}, A., [Roberts]{}, J. E., [Dekany]{}, R., [Beichman]{}, C.,[Hillenbrand]{}, L., [Burruss]{}, R., [Shao]{}, M., [Vasisht]{}, G., 2010 , 712, 421-428 , S. et al., 2012 , S. J., [Hartmann]{}, L., [Gomez]{}, M., [Carr]{}, J. S., & [Tokunaga]{}, A. 1993, , 105, 1505 , S. J., [Hartmann]{}, L., & [Hewett]{}, R. 1988, , 325, 231 , S. J. & [Hartmann]{}, L. W. 1991, , 383, 664 , W. and [Lin]{}, D. N. C., ,  1996, 461, 933+ , E. A. & [Petrov]{}, P. P. 1985, Pis ma Astronomicheskii Zhurnal, 11, 846 , A. S., [Lamzin]{}, S. A., [Errico]{}, L., & [Vittone]{}, A. 2007, Astronomy Letters, 33, 755 , A. S., [Malogolovets]{}, E. V., & [Lamzin]{}, S. A. 2008, Astrophysical Bulletin, 63, 357 , D., [Marois]{}, C., [Doyon]{}, R., [Nadeau]{}, D., & [Artigau]{}, [É]{}. 2007, , 660, 770 , A. U. 1975, , 87, 379 —. 1977, , 89, 704 , F., [Lachaume]{}, R., [Berger]{}, J.-P., [Colavita]{}, M. M., [di Folco]{}, E., [Eisner]{}, J. A., [Lane]{}, B. F., [Millan-Gabet]{}, R., [S[é]{}gransan]{}, D., & [Traub]{}, W. A. 2005, , 437, 627 McElwain, M. W., Metchev, S. A., Larkin, J. E., Barczys, M., Iserlohe, C., Krabbe, A., Quierrenbach, A., Weiss, J., & Wright, S. 2007, The Astrophysical Journal, 656, 505 Millan-Gabet, R., Monnier, J. D., Akeson, R. L., et al. 2006, , 641, 547 Monnier, J. D., Tuthill, P. G., Ireland, M., Cohen, R., Tannirkulam, A., & Perrin, M. D. 2009, , 700, 491 , P. and [Penston]{}, M. V., 1977, , 181, 657-665 Pfalzner, S. 2008, , 492, 735 , [Justin Crepp]{}, [Douglas Brenner]{}, [Ben R. Oppenheimer]{}, [Neil Zimmerman]{}, [Sasha Hinkley]{}, [Ian Parry]{}, [David King]{}, [Gautam Vasisht]{}, [Charles Beichman]{}, [Lynne Hillenbrand]{}, [Richard Dekany]{}, [Mike Shao]{}, [Rick Burruss]{}, [Lewis C.Roberts]{}, [Antonin Bouchez]{}, [Jenny Roberts]{}, & [Remi Soummer]{} 2012, , 199, 6 Reipurth, B., & Aspin, C. 2004, , 608, L65 Reipurth, B., & Aspin, C. 
2010, Evolution of Cosmic Objects through their Physical Activity, 19 , L. F., [D’Alessio]{}, P., [Wilner]{}, D. J., [Ho]{}, P. T. P., [Torrelles]{}, J. M., [Curiel]{}, S., [G[ó]{}mez]{}, Y., [Lizano]{}, S., [Pedlar]{}, A., [Cant[ó]{}]{}, J., & [Raga]{}, A. C. 1998, , 395, 355 , A. and [Oppenheimer]{}, B. R., , 2006, 647, 620-629 , S. L. and [Briggs]{}, K. R. and [G[ü]{}del]{}, M., ,  2006, 643, 995-1002 , S. L., [G[ü]{}del]{}, M., [Briggs]{}, K. R., & [Lamzin]{}, S. A. 2010, , 722, 1654 , R., [Pueyo]{}, L., [Ferrari]{}, A., [Aime]{}, C., [Sivaramakrishnan]{}, A., & [Yaitskova]{}, N. 2009, , 695, 695 , W. B. & [Ford]{}, H. C. 2002, , 578, 543 ten Brummelaar, T.A., Mason, B.D., Bagnuolo, Jr. W.G., Hartkopf, W.I., McAlister, H.A., Turner, N.H. 1996, , 112, 1180 ten Brummelaar, T.A., Mason, B.D., McAlister, H.A. Roberts, Jr. L.C., Turner, N.H., Hartkopf, W.I., Bagnuolo Jr., W.G. 2000, , 119, 2403 , G., [Ligon]{}, L., [Roberts]{}, L., [Shao]{}, M., [Zhai]{}, C., [Oppenheimer]{}, B. R., [Hinkley]{}, S., & [Parry]{}, I. 2010, in Proceedings of the conference In the Spirit of Lyot 2010: Direct Detection of Exoplanets and Circumstellar Disks. October 25 - 29, 2010. University of Paris Diderot, Paris, France. Edited by Anthony Boccaletti. , M. H. and [Montmerle]{}, T. and [Grosso]{}, N. and [Feigelson]{}, E. D. and [Verstraete]{}, L. and [Ozawa]{}, H.,  2003, , 408, 581-599 J. Kent Wallace, Rick Burruss, Laurent Pueyo, Remi Soummer, Chris Shelton, Randall Bartos, Felipe Fregoso , Bijan Nemati, Paul Best, John Angione, Procs SPIE,  2009, Vol 7440. , H., [Apai]{}, D., [Henning]{}, T., & [Pascucci]{}, I. 2004, , 601, L83 Zhu, Z., Hartmann, L., Calvet, N., et al. 2007, , 669, 483 , Z., [Espaillat]{}, C., [Hinkle]{}, K., [Hernandez]{}, J., [Hartmann]{}, L. and [Calvet]{}, N., “[The Differential Rotation of FU Ori]{}”,, 2009, 694, L64-L68 Zhu, Z., Hartmann, L., Gammie, C. F., et al. 2010, , 713, 1134 Zimmerman, N., Brenner, D., Oppenheimer, B. R., Parry, I. 
R., Hinkley, S., Hunt, S., & Roberts, R. 2011, PASP, 123, 904

[^1]: http://archive.ast.cam.ac.uk/camc/

[^2]: http://smoka.nao.ac.jp/

[^3]: http://phoenix.ens-lyon.fr/Grids/NextGen/SPECTRA/
---
abstract: 'On convex co-compact hyperbolic surfaces $X=\Gamma \backslash {\mathbb{H}^2}$, we investigate the behavior of the nodal curves of real-valued Eisenstein series $F_\lambda(z,\xi)$, where $\lambda$ is the spectral parameter and $\xi$ the direction at infinity. Eisenstein series are (non-$L^2$) eigenfunctions of the Laplacian $\Delta_X$ satisfying $\Delta_X F_\lambda=(\frac{1}{4}+\lambda^2)F_\lambda$. As $\lambda$ goes to infinity (the high energy limit), we show that, for generic $\xi$, the number of intersections of the nodal lines with any compact geodesic segment grows like $\lambda$, up to multiplicative constants. Applications to the number of nodal domains inside the convex core of the surface are then derived.'
address:
- |
  McGill University\
  Department of Mathematics and Statistics\
  805 Sherbrooke Street West\
  Montreal, Quebec, Canada H3A0B9
- |
  Frédéric Naud\
  Laboratoire d’Analyse non-linéaire et Géométrie\
  Université d’Avignon, 33 rue Louis Pasteur\
  84000 Avignon\
  France.
author:
- Dmitry Jakobson
- Frédéric Naud
title: On the nodal lines of Eisenstein series on Schottky surfaces
---

[Introduction]{}

Let ${\mathbb{H}^2}$ be the hyperbolic plane endowed with the usual metric of constant negative curvature $-1$. Assume that $\Gamma$ is a convex co-compact group of isometries, i.e. a Schottky group with no parabolic elements, and denote by $X=\Gamma\backslash {\mathbb{H}^2}$ the quotient surface. Such a surface has infinite area and its ends are hyperbolic funnels. Let $\Delta_X$ denote the hyperbolic Laplacian on $X$. Its $L^2$-spectrum has been described completely by Lax and Phillips in [@LP2]. The half-line $[1/4, +\infty)$ is the continuous spectrum, and it contains no embedded eigenvalues. Let $\delta(\Gamma)$ be the [*Hausdorff*]{} dimension of the limit set $\Lambda(\Gamma)$ of $\Gamma$.
The limit set $\Lambda(\Gamma)$ is defined as the set of accumulation points in $\partial {\mathbb{H}^2}$ of the orbit of any point $z \in {\mathbb{H}^2}$ under the action of $\Gamma$: $$\Lambda(\Gamma):=\overline{\Gamma.z}\cap \partial {\mathbb{H}^2}.$$ The rest of the spectrum (the point spectrum) is empty if $\delta\leq {{\textstyle{\frac{1}{2}}}}$, and finite and starting at $\delta(1-\delta)$ if $\delta>{{\textstyle{\frac{1}{2}}}}$. The fact that the bottom of the spectrum is related to the dimension $\delta$ is due to Patterson [@Patterson1]. One way to parametrize the continuous spectrum is through the so-called Eisenstein series. Before we can give a formal definition of Eisenstein series, let us recall that under the above assumptions, when $\Gamma$ is non-elementary (i.e. $X$ is not a hyperbolic cylinder), $X$ can be decomposed as $$X=X_0\cup \mathcal{F}_1\cup \ldots \cup \mathcal{F}_{n_f},$$ where $X_0$ is a compact surface with geodesic boundary and $\mathcal{F}_1,\ldots ,\mathcal{F}_{n_f}$ are the funnels. Each funnel $\mathcal{F}_j$ is isometric to the cylinder $$(0,2]_\rho \times ({\mathbb{R}}/\ell_j{\mathbb{Z}})_\theta,$$ endowed with the conformally compact metric $$ds^2=\frac{d\rho^2+(1+\rho^2/4)^2\,d\theta^2}{\rho^2},$$ where $\rho=0$ corresponds to infinity and $\ell_j$ is the length of the geodesic boundary at $\rho=2$. Let $R_X(s;z,w)$ denote the Schwartz kernel of the resolvent $(\Delta_X-s(1-s))^{-1}$, which by Mazzeo-Melrose [@MazzMel] has a meromorphic continuation (in $s$) to the whole complex plane. Then if $s$ is not a pole, the limit (using the above coordinates in the funnel) $$E_s(z,\xi):=\lim_{\rho\rightarrow 0} \rho^{-s}R_X(s;z,(\rho,\xi))$$ exists and defines an [*eigenfunction*]{} of the Laplacian, $\Delta_X E_s(z,\xi)=s(1-s) E_s(z,\xi)$, parametrized by a point $\xi$ at infinity, called an [*Eisenstein series*]{}.
In particular, if $s=1/2+i\lambda$, we have $$\Delta_X E_s(z,\xi)=\left(\frac{1}{4}+\lambda^2 \right) E_s(z,\xi).$$ These Eisenstein series, like their analogs in the finite volume case, provide an explicit spectral resolution of the Laplacian [@Borthwick]: $$2\lambda\, d\Pi_X(\lambda,z,z')=\frac{\vert C(1/2+i\lambda) \vert^2}{2\pi} \sum_{j=1}^{n_f} \int_0^{\ell_j} E_{1/2+i\lambda}(z,\xi) E_{1/2-i\lambda}(z',\xi)d\xi,$$ where $$C(s)=\frac{2^{-s}}{\sqrt{\pi}} \frac{\Gamma(s)}{\Gamma(s-1/2)}.$$ In this paper we want to investigate the zero sets of [*high energy Eisenstein series*]{}, so we consider real-valued Eisenstein functions, i.e. we take real parts (which are again eigenfunctions) and set, for all $z\in X$ and $\xi$ a direction at infinity, $$F_\lambda(z,\xi):={{\rm Re}}\left(E_{{{\textstyle{\frac{1}{2}}}}+i\lambda}(z,\xi)\right).$$ Below we show a plot in the [*Poincaré disc model*]{} for a symmetric two-generator Schottky group with $\xi=i$ and $\lambda=30$. ![image](Biseisenstein.pdf) It is a natural question to investigate the shape and behaviour of the zero sets (also called nodal lines in dimension $2$) of $F_\lambda(z,\xi)$ as the frequency $\lambda$ goes to infinity. For genuine $L^2$-eigenfunctions on compact manifolds there is a tremendous amount of work in that direction, and we refer the reader to the recent survey [@Zelditchsurvey]. However, in the non-compact, infinite-volume case this seems, to our knowledge, to be the very first related work. Numerical experiments show that the nodal lines exhibit a mixed behaviour: a horocyclic shape close to infinity (as depicted in the above picture), while in the compact core they look more like those of a genuine high energy eigenfunction; we refer the reader to $\S 5$ for a high energy plot with $\lambda=150$. Even in the case when $\Gamma$ is [*elementary*]{}, the numerics show a highly [*non-trivial*]{} nodal structure.
Below we plot the Eisenstein series $F_\lambda(z,\xi)$ in the Poincaré half-plane for $\xi=2.5$, $\lambda=40$. The group is generated by $z\mapsto e^{\ell} z$ with $\ell=1.5$. ![image](Elementary2.pdf) In the case when $\delta(\Gamma)<1/2$, the lift to ${\mathbb{H}^2}$ of $F_\lambda(z,\xi)$ admits a convergent series expression (in the unit disc model); see [@GuiNaud3], Lemma 5, for a proof of that fact: $$F_\lambda(z,\xi)= \sum_{\gamma\in \Gamma} \left ( \frac{1-\vert \gamma z\vert^2}{\vert \gamma z-\xi\vert^2} \right)^{1/2} \cos\left(\lambda \log \left( \frac{1-\vert \gamma z\vert^2}{\vert \gamma z-\xi\vert^2}\right)\right),$$ where $z \in{\mathbb{H}^2}$ and $\xi \in \partial {\mathbb{H}^2}$ belongs to the domain of discontinuity of $\Gamma$, that is, $\partial {\mathbb{H}^2}\setminus \Lambda(\Gamma)$. For each term in this sum, the phase function has its level sets on horocycles based at the point $\gamma^{-1} \xi$ at infinity, so $F_\lambda$ is basically a superposition of [*hyperbolic plane waves*]{}. Because $F_\lambda$ is an eigenfunction of an elliptic operator with real analytic coefficients (the hyperbolic Laplacian), it is automatically a real analytic function. The nodal sets $\mathcal{N}_ \lambda(\xi)$ are defined as usual by $$\mathcal{N}_ \lambda(\xi):=\{z\in X\ :\ F_\lambda(z,\xi)=0\}.$$ These sets are real analytic curves (with possible isolated singular points) and are therefore rectifiable. Let $\sigma$ denote the length measure induced on $\mathcal{N}_ \lambda(\xi)$; then by translating almost verbatim the arguments of Donnelly-Fefferman [@DoFeff] (whose proof is purely local), one obtains that for all compact $K\subset X$ with non-empty interior, there exists $C_K>0$ such that as $\lambda \rightarrow \infty$, $$C_K^{-1}\lambda \leq \sigma( \mathcal{N}_ \lambda(\xi)\cap K)\leq C_K \lambda.$$ In this paper we go beyond this by proving the following result.
Given a geodesic $\mathcal{C}$, we will define a notion of $\xi$-non-symmetry ($\xi$-NS), see $\S 3$, which rules out cases where the geodesic $\mathcal{C}$ is an axis of symmetry for certain geodesics related to $\xi$. The following holds. \[main\] Assume that $\delta(\Gamma)< {{\textstyle{\frac{1}{2}}}}$. Let $\mathcal{C}$ be a geodesic which satisfies $\xi$-NS. Then for every non-empty compact segment $\mathcal{C}_0 \subset \mathcal{C}$, one can find a constant $C_0$ such that as $\lambda$ goes to infinity, we have $$C_0^{-1} \lambda \leq \# (\mathcal{N}_ \lambda(\xi) \cap \mathcal{C}_0) \leq C_0 \lambda.$$ The above statement is non-empty: for every geodesic $\mathcal{C}$, $\xi$-NS is satisfied for almost all directions $\xi$ at infinity, see $\S 3$. The upper bound is actually valid in greater generality, for real analytic curves and generic $\xi$; see the comments in $\S 3$. We point out that several recent papers also focus on proving upper and lower bounds on the number of intersections of nodal lines with geodesic segments. On compact non-positively curved surfaces with boundary, Jung and Zelditch [@JZ1] show that $\# (\mathcal{N}_ \lambda \cap \mathcal{C}_0)$ goes to infinity as $\lambda$ goes to infinity when $\mathcal{C}$ is a boundary curve. On the other hand, a similar statement holds [@JZ2] on a negatively curved surface (without boundary) when $\mathcal{C}$ satisfies a non-symmetry condition. On the modular surface $\mathrm{PSL}_2({\mathbb{Z}}) \backslash {\mathbb{H}^2}$, Jung [@Jung1] obtains effective lower bounds of the type $$C_0^{-1} \lambda_k^{{{\textstyle{\frac{1}{2}}}}-\epsilon} \leq \# (\mathcal{N}_ {\lambda_k} \cap \mathcal{C}_0)$$ for the Maass-Hecke eigenfunctions (with discrete spectral parameter $\lambda_k$), for a large portion of the $\lambda_k$'s, when $\mathcal{C}$ is a vertical geodesic segment in the modular domain.
In [@GRS], Ghosh, Reznikov and Sarnak, assuming Lindelöf's hypothesis, obtain a related lower bound $$C_0^{-1} \lambda^{\frac{1}{12}-\epsilon} \leq \# (\mathcal{N}_ {\lambda} \cap \mathcal{C}_0),$$ for all $\lambda$ large enough. On the flat $2$-torus, Bourgain and Rudnick [@BR1] were able to show that for [*non geodesic*]{} curves, $$\# (\mathcal{N}_ {\lambda} \cap \mathcal{C}_0) \geq C \lambda^{1-\epsilon}.$$ It seems to us that Theorem \[main\] is the only optimal counting result so far. Of course our setup of infinite volume helps us somehow, although there are some different technical difficulties to overcome. Theorem \[main\] is a consequence of the following (restriction) equidistribution result, which is of interest in itself. \[main2\] Let $\Gamma$ be a convex co-compact group with $\delta(\Gamma)<{{\textstyle{\frac{1}{2}}}}$. Let $\mathcal{C}_0$ be a finite length geodesic segment of a geodesic satisfying $\xi$-NS. Then for all $\varphi \in C^1(\mathcal{C}_0)$, we have $$\lim_{\lambda \rightarrow +\infty} \int_{\mathcal{C}_0} (F_\lambda(x,\xi))^2 \varphi(x) d\sigma(x)= {{\textstyle{\frac{1}{2}}}}\int_{\mathcal{C}_0} E_1(x,\xi)\varphi(x) d\sigma(x),$$ where $E_1(z,\xi)$ is the positive harmonic Eisenstein series at $s=1$, and $\sigma$ stands for the length measure. More generally, the same statement holds for every compact real-analytic curve $\mathcal{C}_0$, for a generic choice of $\xi$. The above theorem is a “restriction” version of the main equidistribution result of [@GuiNaud3], and this is where the $\xi$-non symmetry assumption is required. Similar equidistribution restriction results are known on compact manifolds (so-called “QER”) when the geodesic flow is ergodic, also under a non symmetry assumption, see for example Toth-Zelditch [@TZ1]. We also refer to the paper of Dyatlov-Zworski [@DZ1] for a semi-classical framework that generalizes the preceding results. See also Bourgain-Rudnick [@BR1] for related results on the torus.
Most of the above mentioned works are motivated by the study of nodal domains. It is a notoriously challenging problem to count them, and [@JZ1; @JZ2; @GRS] provide the very first (deterministic) examples of eigenfunctions where one is actually able to show that the number of nodal domains goes to infinity at high frequency. As a corollary of Theorem \[main\], we prove the following. Assume that $\Gamma$ is non-elementary, let $X_0$ denote the convex core of $X$ (the compact part with funnels removed) and let $M_\xi(\lambda)$ be the number of (open) connected components of $$\mathrm{Int}(X_0)\setminus \mathcal{N}_ \lambda(\xi).$$ \[domains\] Under the above hypotheses, for almost all $\xi$, there exists a constant $C>0$ such that for all $\lambda \geq 1$, we have $$M_\xi(\lambda) \leq C\lambda^2.$$ If $\Gamma$ is elementary, i.e. $\Gamma\backslash {\mathbb{H}^2}$ is a hyperbolic cylinder, let $\mathcal{C}_0$ denote the unique closed geodesic in $X=\Gamma\backslash {\mathbb{H}^2}$, and let $\mathcal{C}(r)$ be the [*collar*]{} of size $r>0$: $$\mathcal{C}(r):=\{ z\in X\ :\ \mathrm{dist}(z,\mathcal{C}_0)\leq r \},$$ and let $M_\xi(\lambda)$ denote again the number of (open) connected components of $$\mathrm{Int}(\mathcal{C}(r))\setminus \mathcal{N}_ \lambda(\xi).$$ \[collar\] Using the above notations, for almost all $\xi$, there exists a constant $C>0$ such that for all $\lambda \geq 1$, we have $$M_\xi(\lambda) \leq C\lambda^2.$$ It is important to notice that these upper bounds, which are analogs of Courant's nodal domain theorem, are [*not obvious*]{} facts: eigenfunctions $F_\lambda(z,\xi)$ do not satisfy any boundary condition on $\partial X_0$ or $\partial \mathcal{C}(r)$. It is tempting to believe that this bound is optimal, but we have no serious clue so far. The plan of the paper is as follows. In $\S 2$ we recall some basic facts about hyperbolic plane waves and Eisenstein series.
In $\S 3$ we prove Theorem \[main2\] and a result on the asymptotic average on a geodesic segment (Proposition \[period\]). Theorem \[main2\] will be used for both the lower and upper bounds in the proof of Theorem \[main\], while Proposition \[period\] is critical for the lower bound, see $\S 4$ for details. We point out that while the lower bounds and the equidistribution result rely on elementary real analysis (stationary and non-stationary phase principles for oscillatory integrals), the upper bound requires some complex analysis. This problem is already present in the compact setting, where the upper bound of Donnelly-Fefferman has not yet been proved in the $C^\infty$ category (Yau's conjecture). Because the methods we use here are fairly elementary and robust, we expect this set of results to extend to variable curvature cases, with a negative pressure condition, as long as some analyticity is available. [**Acknowledgments.**]{} This work was mostly done while FN was a member of UMI 3457 at université de Montréal, supported by CNRS funding. Both authors are supported by ANR “blanc” GeRaSic. DJ is also supported by NSERC, FQRNT and a Peter Redpath Fellowship. [Basic estimates and convergence]{} In this section we gather various basic estimates that will be required later on. We start with some facts on Busemann functions that will be used frequently throughout the paper. Busemann functions ------------------ We will mostly work with the unit disc model $${\mathbb{H}^2}={ \mathbb{D}}=\{z\in {\mathbb{C}}\ :\ \vert z \vert<1 \},$$ endowed with the metric $$ds^2=\frac{4dzd\overline{z}}{(1-\vert z \vert^2)^2}.$$ The hyperbolic distance between $0$ and $z$ is given by $$\label{disthyp} d(0,z)=\log\left( \frac{1+\vert z\vert}{1-\vert z \vert} \right).$$ Given two points $z,w \in {\mathbb{H}^2}$ and $\xi \in \partial {\mathbb{H}^2}$, i.e.
$\vert \xi\vert=1$, the [*Busemann function*]{} $B_\xi(z,w)$ is defined by $$B_\xi(z,w)=\lim_{t\rightarrow +\infty} d(z,\xi_t)-d(w,\xi_t),$$ where $t\mapsto \xi_t$ is a geodesic ray converging to $\xi$ as $t\rightarrow +\infty$. From that definition one deduces several standard properties of Busemann functions, which can be checked easily. - For all $z,w,y$, we have $B_\xi(z,y)=B_\xi(z,w)+B_\xi(w,y).$ - For all $z,w$, $B_\xi(z,w)=-B_\xi(w,z).$ - For every isometry $g$ of the hyperbolic plane, we have $B_\xi(gz,gw)=B_{g^{-1}\xi}(z,w).$ - The following closed formula holds: $$B_\xi(z,w)=\log\left( \frac{(1-\vert w\vert^2)\vert z-\xi \vert^2}{ (1-\vert z\vert^2)\vert w-\xi \vert^2 } \right).$$ The level sets of $z\mapsto B_\xi(z,w)$ are [*horocycles*]{} based at $\xi$. The hyperbolic analogs of monochromatic plane waves are functions of the form $$z\mapsto e^{i\lambda B_\xi(0,z)}.$$ It is shown in [@GuiNaud3] that if $\delta(\Gamma)<{{\textstyle{\frac{1}{2}}}}$, then the generalized eigenfunctions $E_{1/2+i\lambda}(z,\xi)$, which are a priori defined through analytic continuation, admit the convergent series formula $$E_{{{\textstyle{\frac{1}{2}}}}+i\lambda}(z,\xi)=\sum_{\gamma \in \Gamma} e^{({{\textstyle{\frac{1}{2}}}}+i\lambda)B_\xi(0,\gamma z)}.$$ In particular, the formula for the (real) Eisenstein series becomes $$F_\lambda(z,\xi)= \sum_{\gamma\in \Gamma} e^{\frac{1}{2} B_\xi(0,\gamma z)}\cos(\lambda B_\xi(0,\gamma z) ).$$ We start with a simple estimate. \[est1\] Assume that $z\in K$, where $K$ is a compact subset of ${\mathbb{H}^2}$. Then for all multi-indices $\alpha=(\alpha_1,\ldots,\alpha_N)$, there exists a constant $C(K,\alpha)$ such that $$\vert \partial_\alpha \left( e^{{{\textstyle{\frac{1}{2}}}}B_\xi(0,\gamma z)} \right) \vert \leq e^{-{{\textstyle{\frac{1}{2}}}}d(0,\gamma 0)} C(K,\alpha),$$ where $z=x_1+ix_2$ and $ \partial_\alpha=\frac{\partial}{\partial x_{\alpha_1}} \ldots \frac{\partial}{\partial x_{\alpha_N}}$. [*Proof*]{}.
We only compute the first derivatives; the rest follows by an easy induction. Writing $$e^{{{\textstyle{\frac{1}{2}}}}B_\xi(0,\gamma z)} =e^{{{\textstyle{\frac{1}{2}}}}( B_\xi(0,\gamma 0)+B_{\gamma ^{-1} \xi}(0,z) )},$$ we have for $j=1,2$: $$\partial_j \left( e^{{{\textstyle{\frac{1}{2}}}}B_\xi(0,\gamma z)} \right) = \left ( - \frac{ x_j}{1-\vert z\vert^2} + \frac{ (\gamma^{-1}\xi)_j-x_j }{\vert \gamma^{-1} \xi -z\vert^2} \right) e^{{{\textstyle{\frac{1}{2}}}}B_{\gamma^{-1}\xi}(0,z)}\, e^{{{\textstyle{\frac{1}{2}}}}B_\xi(0,\gamma 0)}.$$ Now remark that by formula (\[disthyp\]) we have $$B_\xi(0,\gamma 0)=\log\left( \frac{1-\vert \gamma 0 \vert^2}{\vert \gamma 0 -\xi \vert ^2} \right)= -d(0,\gamma 0)+2 \log\left( \frac{1+\vert \gamma 0 \vert}{\vert \gamma 0 -\xi \vert } \right).$$ Because $\xi$ is not in the limit set of $\Gamma$, the distance $\vert \gamma 0 -\xi \vert$ is uniformly bounded from below, so there exists a constant $C_1>0$ such that $$B_\xi(0,\gamma 0)\leq -d(0,\gamma 0)+C_1.$$ Since $z$ is confined to a compact set $K$ and $\gamma^{-1}\xi$ remains on the unit circle, the prefactor and $e^{{{\textstyle{\frac{1}{2}}}}B_{\gamma^{-1}\xi}(0,z)}$ are uniformly bounded, and the proof is done. $\square$ This simple estimate implies that if $\delta(\Gamma)< {{\textstyle{\frac{1}{2}}}}$, then the series defining $F_\lambda(z,\xi)$ (and its derivatives) are uniformly convergent on every compact subset of ${\mathbb{H}^2}$. Indeed, we recall that the [*Poincaré series*]{} $$P_\Gamma(s)=\sum_{\gamma \in \Gamma} e^{-s d(0,\gamma 0)}$$ converges for all $s>\delta(\Gamma)$, see [@Patterson1]. Therefore, $F_\lambda(z,\xi)$ is a $C^\infty$ function, which is not surprising.
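The Busemann identities used above follow from the closed formula, and they are easy to double-check numerically. Below is a small sketch (the function name `busemann` and the sample points are ours, purely for illustration) verifying the cocycle property, antisymmetry, and equivariance under a rotation of the disc:

```python
import cmath
import math

def busemann(xi, z, w):
    # Disc-model closed formula: B_xi(z, w) = log( (1-|w|^2) |z-xi|^2 / ((1-|z|^2) |w-xi|^2) )
    return math.log(((1 - abs(w) ** 2) * abs(z - xi) ** 2)
                    / ((1 - abs(z) ** 2) * abs(w - xi) ** 2))

xi = cmath.exp(0.7j)                              # a boundary point, |xi| = 1
z, w, y = 0.2 + 0.1j, -0.3 + 0.4j, 0.5 - 0.2j     # interior points

# cocycle property: B_xi(z, y) = B_xi(z, w) + B_xi(w, y)
assert abs(busemann(xi, z, y) - busemann(xi, z, w) - busemann(xi, w, y)) < 1e-12
# antisymmetry: B_xi(z, w) = -B_xi(w, z)
assert abs(busemann(xi, z, w) + busemann(xi, w, z)) < 1e-12
# equivariance for the rotation g(u) = e^{it} u: B_xi(gz, gw) = B_{g^{-1} xi}(z, w)
t = 1.3
g = lambda u: cmath.exp(1j * t) * u
assert abs(busemann(xi, g(z), g(w)) - busemann(cmath.exp(-1j * t) * xi, z, w)) < 1e-12
```

The rotation case covers the general isometry identity since the closed formula only involves moduli, which Moebius maps of the disc distort in a way that cancels between numerator and denominator.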
From this elementary estimate, we readily deduce the following consequence, which is worth highlighting: for every compact set $K\subset {\mathbb{H}^2}$, there exists $C_K>0$ independent of $\lambda\gg 1$ such that $$\label{Linfini} \Vert F_\lambda \Vert_{L^\infty(K)}\leq C_K.$$ Notice that in the compact or finite volume case, $L^\infty$ norms of high energy eigenfunctions are usually not expected to be bounded: for example, Maass wave forms ($L^2$ eigenfunctions) on the modular surface $PSL_2({\mathbb{Z}})\backslash {\mathbb{H}^2}$ are not $L^\infty$ bounded, see [@LuoSarnak]. Restriction Theorems ==================== In this section we prove the main equidistribution theorem for restriction to geodesics (and more) stated in the introduction. Most of the results will rest on repeated applications of stationary and non-stationary phase formulas. All the computations will be done in the disc model, but of course the results do not depend on the choice of a particular model for ${\mathbb{H}^2}$. Asymptotic average on a geodesic segment ---------------------------------------- Geodesics in the disc model will be parametrized in the following way. Let $g$ be a Moebius map of the unit disc; then the image of $$g:(-1,+1) \rightarrow {\mathbb{H}^2}$$ is a geodesic. We denote it by $\mathcal{C}_g$. Conversely, all geodesics of the disc can be viewed that way: given a geodesic $\mathcal{C}$ and a point $z_0\in \mathcal{C}$, there exists a Moebius transform $g$ such that $g(0)=z_0$ and $g((-1,+1))=\mathcal{C}$. What we first prove is the following. \[period\] Assume that $\delta(\Gamma) <{{\textstyle{\frac{1}{2}}}}$. Let $\mathcal{C}_{r_0}$ be a geodesic segment in ${\mathbb{H}^2}$, parametrized by $g:[-r_0,+r_0]\rightarrow {\mathbb{H}^2}$.
Then there exists a non-empty open interval $J\subset [-r_0,+r_0]$ and $C>0$ such that as $\lambda$ goes to infinity, we have $$\sup_{\alpha<\beta \in J} \left \vert \int_{\alpha}^{\beta} F_\lambda(g(r),\xi) dr\right \vert\leq \frac{C}{\lambda}.$$ [*Proof.*]{} Since $\delta<{{\textstyle{\frac{1}{2}}}}$, using the representation of $F_\lambda$ as a convergent series we are left with estimating the sum $$\sum_{\gamma \in \Gamma} \int_{\alpha}^\beta e^{({{\textstyle{\frac{1}{2}}}}+i\lambda)B_\xi(0,\gamma g(r))}dr,$$ where $\alpha<\beta\in J\subset [-r_0,+r_0]$ and $J$ has to be chosen. The choice of $J$ will follow from a careful analysis of the stationary points of the phase $B_\xi(0,\gamma g(r))$. Writing $$B_\xi(0,\gamma g(r))=B_\xi(0,\gamma g(0))+B_{g^{-1}\gamma^{-1}(\xi) }(0,r),$$ we deduce that $$\label{derive1} \frac{d}{dr}( B_\xi(0,\gamma g(r)))=2\frac{r^2a_\gamma-2r+a_\gamma}{(1-r^2)((r-a_\gamma)^2+b_\gamma^2)},$$ where we have set $a_\gamma={{\rm Re}}(g^{-1}\gamma^{-1}(\xi)),\ b_\gamma={{\rm Im}}(g^{-1}\gamma^{-1}(\xi))$. The critical points are then given by $$r^{\pm}_\gamma=\frac{1}{a_\gamma}\left( 1 \pm \sqrt{1-a_\gamma^2} \right)$$ if $a_\gamma \neq 0$, and by $0$ otherwise. Remark that only $r_\gamma^-$ can be a critical point ($r_\gamma^+$ is outside the disc) and, if $a_\gamma \neq 0$, we have $$\left \vert \frac{d}{dr}( B_\xi(0,\gamma g(r))) \right \vert \geq {{\textstyle{\frac{1}{2}}}}\vert a_\gamma \vert \vert r-r_\gamma^+\vert \vert r-r_\gamma^-\vert$$ $$\geq {{\textstyle{\frac{1}{2}}}}\vert r-r_\gamma^-\vert (1-r_0).$$ More precisely, consider the continuous, injective map $$F:[-1,+1]\rightarrow [-1,+1]$$ defined by $$F(x)=\frac{x}{1+\sqrt{1-x^2}};$$ then $F(a_\gamma)$ is the unique possible critical point of the phase. In all cases, we have a lower bound for the derivative: for all $r\in [-r_0,+r_0]$, $$\left \vert \frac{d}{dr}( B_\xi(0,\gamma g(r))) \right \vert \geq C(r_0)\vert r-F(a_\gamma) \vert,$$ where $C(r_0)$ is uniform in $\gamma$.
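The algebra behind the critical point formula is easy to get wrong, so here is a quick numerical sanity check (variable names ours) that $F(a)$ coincides with $r_\gamma^-$, annihilates the numerator $a r^2-2r+a$ of (\[derive1\]), and lies inside the disc while $r_\gamma^+$ does not:

```python
import math

def F(x):
    # F(x) = x / (1 + sqrt(1 - x^2)): the critical point lying inside (-1, +1)
    return x / (1 + math.sqrt(1 - x ** 2))

for a in (0.9, 0.3, -0.6, 1e-4):
    r = F(a)
    # r solves the critical-point equation a r^2 - 2 r + a = 0 ...
    assert abs(a * r ** 2 - 2 * r + a) < 1e-12
    # ... agrees with r_gamma^- = (1 - sqrt(1 - a^2)) / a ...
    assert abs(r - (1 - math.sqrt(1 - a ** 2)) / a) < 1e-9
    # ... and stays in the disc, while r_gamma^+ = (1 + sqrt(1 - a^2)) / a does not
    assert abs(r) < 1 <= abs((1 + math.sqrt(1 - a ** 2)) / a)
```

The expression $x/(1+\sqrt{1-x^2})$ is also the numerically stable way of evaluating $r_\gamma^-$: the form $(1-\sqrt{1-a^2})/a$ suffers from cancellation for small $a$.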
The goal is now to find a non-empty interval that is uniformly away from all the critical points. Let us consider $$K:=\overline{\bigcup_{\gamma \in \Gamma} g^{-1}\circ \gamma^{-1}( \xi)} \subset \partial {\mathbb{H}^2},$$ and set for all $z\in \partial {\mathbb{H}^2}$, $\widetilde{F}(z)=F({{\rm Re}}(z))$. Then define $$\mathcal{B}:=\widetilde{F}(K)\cap [-r_0,+r_0];$$ then $\mathcal{B}$ is a compact subset of $[-r_0,+r_0]$ which contains all the possible critical points of the phases. Observe now that, because $F$ is injective and continuous, there exists $\eta>0$ such that $$\mathcal{B}=\widetilde{F}(K\setminus(D(-1,\eta)\cup D(+1,\eta))),$$ where $D(z,\eta):=\{\vert w\vert=1\ :\ \vert z-w\vert <\eta \}$. Since $F$ is smooth away from $-1$ and $+1$, we deduce that $$\mathrm{dim}_H(\mathcal{B})\leq \mathrm{dim}_H(K),$$ where $\mathrm{dim}_H$ stands for the Hausdorff dimension. Because we have $$K=g^{-1}( \overline{\bigcup_{\gamma \in \Gamma} \gamma( \xi)} )$$ and since the set of accumulation points of the orbit $\Gamma. \xi$ is exactly the limit set $\Lambda(\Gamma)$, we deduce that $$\mathrm{dim}_H(\mathcal{B})\leq \delta(\Gamma)<1.$$ As a consequence, $[-r_0,+r_0]\setminus \mathcal{B}$ has non-empty interior. We therefore pick an interval $J\subset [-r_0,+r_0]$ such that $$\overline{J}\cap \mathcal{B}=\emptyset.$$ On this interval $J$ all points are uniformly away from the “bad critical set” $\mathcal{B}$. This will allow us to use the following version of the non-stationary phase estimate. \[nonstat\] Let $I$ be a compact interval. Let $\Phi\in C^2(I)$ and $\varphi \in C^1(I)$. Assume that for all $x\in I$, $\Phi'(x)\neq 0$.
Then one can find a constant $M(I)$ such that for all $a<b\in I$ and all $\lambda \geq 1$ we have $$\left \vert \int_{a}^{b} e^{i\lambda \Phi(x)} \varphi(x)dx\right \vert \leq M(I) \frac{\max(1,\Vert \Phi'' \Vert_{C^0(I)}) \Vert \varphi \Vert_{C^1(I)}}{\lambda (\inf_I \vert \Phi' \vert)^2 }.$$ The proof of this fact is elementary: just integrate by parts. We now apply the above non-stationary principle to each term in the sum $$\sum_{\gamma \in \Gamma} \int_{\alpha}^\beta e^{({{\textstyle{\frac{1}{2}}}}+i\lambda)B_\xi(0,\gamma g(r))}dr.$$ We use the fact that for all $r\in J$, $$\left \vert \frac{d}{dr}( B_\xi(0,\gamma g(r))) \right \vert \geq C(J)>0,$$ and apply Lemma \[est1\] to deduce that uniformly in $\gamma$, $$\int_{\alpha}^\beta e^{({{\textstyle{\frac{1}{2}}}}+i\lambda)B_\xi(0,\gamma g(r))}dr=O\left ( \frac{e^{-{{\textstyle{\frac{1}{2}}}}d(0,\gamma 0)}}{\lambda}\right).$$ One also has to check that $\Vert \frac{d^2}{dr^2}( B_\xi(0,\gamma g(r))) \Vert_{C^0(J)}$ is uniformly bounded from above, which follows easily from formula (\[derive1\]). The end of the proof follows from the convergence of Poincaré series. $\square$ The condition of $\xi$-non symmetry ----------------------------------- We will first state the definition on the universal cover. Given two different points $\eta_1 \neq \eta_2 \in S^1:=\partial {\mathbb{H}^2}$, we denote by $\mathcal{C}_{\eta_1,\eta_2}$ the unique (non-oriented) geodesic in ${\mathbb{H}^2}$ whose endpoints are $\eta_1,\eta_2$. Let $\xi \in \partial {\mathbb{H}^2}\setminus \Lambda(\Gamma)$. Let $\mathcal{C}_g:=\mathcal{C}_{g(-1),g(+1)}=g([-1,+1])$ be a parametrized geodesic as above. We say that $\mathcal{C}_{g}$ is $\xi$-non symmetric ($\xi$-NS) iff 1. $\forall\ \gamma_1\neq \gamma_2\in \Gamma,\ \mathcal{C}_{g}\ \mathrm{and}\ \mathcal{C}_{\gamma_1 \xi,\gamma_2 \xi}\ \mathrm{are\ non\ orthogonal}.$ 2. $\forall \gamma_1\neq \gamma_2\in \Gamma,\ \mathcal{C}_{g}\neq \mathcal{C}_{\gamma_1 \xi,\gamma_2 \xi}$. [**Remark**]{}.
Condition $(1)$ implies that for all $\gamma_1\neq \gamma_2$, we have $${{\rm Re}}(g^{-1}\gamma_1(\xi)-g^{-1}\gamma_2(\xi)) \neq 0.$$ Indeed, we cannot have $g^{-1}\gamma_1(\xi)=g^{-1}\gamma_2(\xi)$, otherwise we would have $$\gamma_1^{-1}\circ \gamma_2 (\xi)=\xi,$$ which is impossible outside the limit set (recall that $\xi \not \in \Lambda(\Gamma)$). Therefore $${{\rm Re}}(g^{-1}\gamma_1(\xi)-g^{-1}\gamma_2(\xi)) =0 \Rightarrow \overline{g^{-1}(\gamma_1 \xi)}=g^{-1}(\gamma_2 \xi),$$ which by conformal invariance of angles implies that $$\mathcal{C}_{g}\ \perp \mathcal{C}_{\gamma_1 \xi,\gamma_2 \xi}.$$ Note that condition $(2)$ is always fulfilled if $\mathcal{C}_g$ is a [*trapped geodesic*]{}, i.e. both endpoints $g(1)$ and $g(-1)$ belong to the limit set $\Lambda(\Gamma)$. We prove below that these conditions have full measure with respect to $\xi$. Let $\mathcal{C}$ be a geodesic. Then for Lebesgue almost all $\xi\in S^1\setminus \Lambda(\Gamma)$, $\mathcal{C}$ satisfies $\xi$-NS. [*Proof*]{}. We assume that $\mathcal{C}$ is parametrized by $$\mathcal{C}=g([-1,+1]),$$ for some Moebius map $g$. First we remark that either $g(1)$ belongs to $\Lambda(\Gamma)$, in which case $(2)$ is automatically satisfied, or $g(1)\not \in \Lambda(\Gamma)$ and its orbit under the action of $\Gamma$ is discrete in $S^1\setminus \Lambda(\Gamma)$. Therefore, $(2)$ is satisfied if we choose $\xi$ to belong to $$S^1\setminus \left ( \Lambda(\Gamma)\cup \bigcup_{\gamma \in \Gamma} \{\gamma g(1) \} \right),$$ which is a set of full measure in $S^1\setminus \Lambda(\Gamma)$. If $(1)$ is violated for some $\gamma_1,\gamma_2 \in \Gamma$, we must have $$\overline{g^{-1}(\gamma_1 \xi)}=g^{-1}(\gamma_2 \xi).$$ This identity can hold for only finitely many $\xi \in S^1$.
Indeed, if $h_1,h_2$ are two (orientation preserving) isometries of the hyperbolic disc, the equation $$\label{imp} \overline{h_1(\xi)}=h_2(\xi)$$ has at most two solutions in $S^1=\partial {\mathbb{H}^2}$: for orientation reasons this equality cannot hold identically on $S^1$, and any solution of (\[imp\]) is a root of a non-zero polynomial of degree at most $2$. We therefore have to remove from $S^1\setminus \Lambda(\Gamma)$ a countable set of possible solutions to make sure that $(1)$ is satisfied. In a nutshell, both $(1)$ and $(2)$ are satisfied for all $\xi \in S^1\setminus \Lambda(\Gamma)$ except for a countable set, and the proof is done. $\square$ On the quotient surface $X=\Gamma \backslash {\mathbb{H}^2}$, the condition $\xi$-NS translates as follows. Given a geodesic $\mathcal{C}$ on the surface, it satisfies $\xi$-NS if $\mathcal{C}$ is [*never equal or orthogonal to geodesics that start and end at $\xi$ (at infinity)*]{}. Indeed, geodesics that start and end at $\xi$ are lifted on ${\mathbb{H}^2}$ to geodesics whose endpoints are equal to $\xi$, mod $\Gamma$, that is, geodesics of the type $\mathcal{C}_{\gamma_1 \xi,\gamma_2 \xi}$, for some $\gamma_1\neq \gamma_2\in \Gamma$. Proof of the equidistribution result on geodesics ------------------------------------------------- The goal of this subsection is to prove the following fact, which straightforwardly implies Theorem \[main2\]. \[equi2\] Assume that $\Gamma$ is a convex co-compact group with $\delta(\Gamma)<{{\textstyle{\frac{1}{2}}}}$. Let $\mathcal{C}=g([-1,+1])$ be a geodesic satisfying $\xi$-NS. Then for all $0<r_0<1$ and all $\varphi \in C^1([-r_0,+r_0])$, $$\lim_{\lambda \rightarrow +\infty} \int_{-r_0}^{+r_0} \left ( F_\lambda(g(r),\xi)\right)^2 \varphi(r)dr={{\textstyle{\frac{1}{2}}}}\int_{-r_0}^{+r_0} E_1(g(r),\xi)\varphi(r) dr.$$ [*Proof*]{}.
We start by writing $$\left( F_\lambda(z,\xi)\right)^2={{\textstyle{\frac{1}{2}}}}\vert E_{1/2+i\lambda}(z,\xi) \vert^2+{{\textstyle{\frac{1}{2}}}}{{\rm Re}}\left( (E_{1/2+i\lambda}(z,\xi))^2 \right),$$ so that we have to investigate $$\int_{-r_0}^{+r_0} \left ( F_\lambda(g(r),\xi)\right)^2 \varphi(r)dr ={{\textstyle{\frac{1}{2}}}}\underbrace{\int_{-r_0}^{+r_0} \vert E_{1/2+i\lambda}(g(r),\xi) \vert^2 \varphi(r)dr}_{I_1(\lambda)}$$ $$+{{\textstyle{\frac{1}{2}}}}\underbrace{{{\rm Re}}\left (\int_{-r_0}^{+r_0} \left ( E_{1/2+i\lambda}(g(r),\xi)\right)^2 \varphi(r)dr \right)}_{I_2(\lambda)} .$$ We will first analyze $I_1(\lambda)$. By uniform convergence we can write $$I_1(\lambda)=\sum_{\gamma_1,\gamma_2 \in \Gamma } \int_{-r_0}^{+r_0} e^{{{\textstyle{\frac{1}{2}}}}(B_\xi(0,\gamma_1g(r) )+ B_\xi(0,\gamma_2g(r) ) )} e^{i\lambda \Phi_{\gamma_1,\gamma_2}(r)}\varphi(r) dr,$$ where $$\Phi_{\gamma_1,\gamma_2}(r)=B_\xi(0,\gamma_1g(r) )-B_\xi(0,\gamma_2g(r) ).$$ Writing $$\Phi_{\gamma_1,\gamma_2}(r)=B_\xi(0,\gamma_1g(0) )-B_\xi(0,\gamma_2g(0) )+ B_{g^{-1}\gamma_1^{-1}\xi}(0,r)-B_{g^{-1}\gamma_2^{-1}\xi}(0,r),$$ we deduce that $$\frac{d}{dr} \left ( \Phi_{\gamma_1,\gamma_2}(r) \right) =2\frac{{{\rm Re}}( g^{-1}\gamma_2^{-1}\xi-g^{-1}\gamma_1^{-1}\xi)(r^2-1)} {\vert r-g^{-1}\gamma_2^{-1}\xi\vert^2\vert r- g^{-1}\gamma_1^{-1}\xi \vert^2},$$ and therefore $$\inf_{[-r_0,+r_0]} \left \vert \frac{d}{dr} \left ( \Phi_{\gamma_1,\gamma_2}(r) \right) \right \vert \geq \frac{1-r_0^2}{16} \vert {{\rm Re}}( g^{-1}\gamma_2^{-1}\xi-g^{-1}\gamma_1^{-1}\xi) \vert.$$ Because we are assuming property $\xi$-NS, part $(1)$, we know that for all $\gamma_1\neq \gamma_2$ this lower bound cannot vanish. This will allow us to apply the non-stationary phase Lemma \[nonstat\] to the off-diagonal sums above.
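The $O(1/\lambda)$ decay provided by Lemma \[nonstat\] is easy to observe numerically. The sketch below (the model phase, the quadrature routine and the constant $10$ are illustrative choices of ours) integrates an oscillatory integral with a derivative bounded away from zero and checks the bound $M/\lambda$:

```python
import cmath

def osc_integral(phase, lam, a, b, n=20000):
    # composite trapezoidal rule for the oscillatory integral of e^{i lam Phi} over [a, b]
    h = (b - a) / n
    total = 0.5 * (cmath.exp(1j * lam * phase(a)) + cmath.exp(1j * lam * phase(b)))
    for k in range(1, n):
        total += cmath.exp(1j * lam * phase(a + k * h))
    return total * h

phi = lambda x: x + 0.3 * x ** 2   # Phi'(x) = 1 + 0.6 x >= 0.4 on [-1, 1]: no stationary point
for lam in (50, 100, 200, 400):
    # integration by parts predicts a bound M / lam; M = 10 is comfortable for this phase
    assert abs(osc_integral(phi, lam, -1.0, 1.0)) <= 10.0 / lam
```

For this phase, one integration by parts gives the explicit bound $\frac{1}{\lambda}\big(\frac{1}{\Phi'(1)}+\frac{1}{\Phi'(-1)}+\int_{-1}^{1}\frac{|\Phi''|}{(\Phi')^2}\big)=5/\lambda$, so the asserted constant is safe.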
More precisely, we have $$I_1(\lambda)=\sum_{\gamma \in \Gamma} \int_{-r_0}^{+r_0} e^{B_\xi(0,\gamma g(r) )}\varphi(r) dr$$ $$+\sum_{\gamma_1\neq \gamma_2}\int_{-r_0}^{+r_0} e^{{{\textstyle{\frac{1}{2}}}}(B_\xi(0,\gamma_1g(r) )+ B_\xi(0,\gamma_2g(r) ) )} e^{i\lambda \Phi_{\gamma_1,\gamma_2}(r)}\varphi(r) dr,$$ where we can write again by uniform convergence $$\sum_{\gamma \in \Gamma} \int_{-r_0}^{+r_0} e^{B_\xi(0,\gamma g(r) )}\varphi(r) dr =\int_{-r_0}^{+r_0} E_1(g(r),\xi)\varphi(r) dr.$$ It is important to notice that $E_1(z,\xi)$ is a [*positive non-vanishing harmonic function*]{} on the unit disc which satisfies the trivial lower bound (given by the identity term in the sum): $$E_1(z,\xi)\geq \frac{1-\vert z\vert^2}{\vert z-\xi\vert^2}.$$ To complete the asymptotic analysis of $I_1(\lambda)$, we therefore have to show that the off-diagonal contribution goes to zero as $\lambda$ goes to infinity. Let us write $$\sum_{\gamma_1\neq \gamma_2}\int_{-r_0}^{+r_0} e^{{{\textstyle{\frac{1}{2}}}}(B_\xi(0,\gamma_1g(r) )+ B_\xi(0,\gamma_2g(r) ) )} e^{i\lambda \Phi_{\gamma_1,\gamma_2}(r)}\varphi(r) dr = \sum_{\gamma_1\neq \gamma_2} I_{\gamma_1,\gamma_2}(\lambda).$$ By Lemma \[est1\], we have uniformly in $\lambda$, $$\label{basicest1} \vert I_{\gamma_1,\gamma_2}(\lambda) \vert \leq C(r_0) e^{-{{\textstyle{\frac{1}{2}}}}d(0,\gamma_1 0)-{{\textstyle{\frac{1}{2}}}}d(0,\gamma_2 0)}$$ while by the above analysis of the phases $\Phi_{\gamma_1,\gamma_2}$ and Lemma \[nonstat\], we do have for all $\gamma_1\neq \gamma_2$, $$\label{nonstatest1} \vert I_{\gamma_1,\gamma_2}(\lambda) \vert=O\left ( \frac{1}{\lambda} \right).$$ Because we have $$\sum_{\gamma_1,\gamma_2} e^{-{{\textstyle{\frac{1}{2}}}}d(0,\gamma_1 0)-{{\textstyle{\frac{1}{2}}}}d(0,\gamma_2 0)}<+\infty$$ we can deduce that $$\lim_{\lambda\rightarrow +\infty} \sum_{\gamma_1\neq \gamma_2} I_{\gamma_1,\gamma_2}(\lambda)=0.$$ Indeed, fix $\epsilon>0$, and choose $T$ so large that $$C(r_0)\times \sum_{\gamma_1\neq \gamma_2\atop
d(0,\gamma_1 0)\geq T\ \mathrm{or}\ d(0,\gamma_2 0)\geq T} e^{-{{\textstyle{\frac{1}{2}}}}d(0,\gamma_1 0)-{{\textstyle{\frac{1}{2}}}}d(0,\gamma_2 0)}\leq \frac{\epsilon}{2},$$ where $C(r_0)$ is the constant in estimate (\[basicest1\]). Writing $$\left \vert \sum_{\gamma_1\neq \gamma_2} I_{\gamma_1,\gamma_2}(\lambda) \right \vert \leq \frac{\epsilon}{2}+\sum_{\gamma_1\neq \gamma_2\atop d(0,\gamma_1 0)< T\ \mathrm{and}\ d(0,\gamma_2 0)<T} \vert I_{\gamma_1,\gamma_2}(\lambda) \vert,$$ and using (\[nonstatest1\]), we can choose $\lambda_0$ so large that for all $\lambda$ with $\lambda\geq \lambda_0$, $$\sum_{\gamma_1\neq \gamma_2\atop d(0,\gamma_1 0)< T\ \mathrm{and}\ d(0,\gamma_2 0)<T} \vert I_{\gamma_1,\gamma_2}(\lambda) \vert\leq \frac{\epsilon}{2},$$ and we are done. Next we move on to the analysis of $I_2(\lambda)$. Again using uniform convergence, we have $$\int_{-r_0}^{+r_0} \left ( E_{1/2+i\lambda}(g(r),\xi)\right)^2 \varphi(r)dr =\sum_{\gamma_1,\gamma_2} J_{\gamma_1,\gamma_2}(\lambda),$$ where $$J_{\gamma_1,\gamma_2}(\lambda)=\int_{-r_0}^{+r_0} e^{{{\textstyle{\frac{1}{2}}}}(B_\xi(0,\gamma_1g(r) )+ B_\xi(0,\gamma_2g(r) ) )} e^{i\lambda \Theta_{\gamma_1,\gamma_2}(r)}\varphi(r) dr,$$ with $$\Theta_{\gamma_1,\gamma_2}(r)=B_\xi(0,\gamma_1g(r) )+B_\xi(0,\gamma_2g(r)).$$ Using the same tricks as above, one can compute $$\frac{d}{dr} \left ( \Theta_{\gamma_1,\gamma_2}(r) \right)=2\frac{r^2a_{\gamma_1}-2r+a_{\gamma_1}}{(1-r^2)\vert r- g^{-1}\gamma_1^{-1}\xi \vert^2} +2\frac{r^2a_{\gamma_2}-2r+a_{\gamma_2}}{(1-r^2)\vert r- g^{-1}\gamma_2^{-1}\xi \vert^2}$$ $$=\frac{2}{1-r^2} \frac{P_{\gamma_1,\gamma_2}(r)}{\vert r-g^{-1}\gamma_2^{-1}\xi\vert^2\vert r- g^{-1}\gamma_1^{-1}\xi \vert^2},$$ where $$P_{\gamma,\gamma'}(r)=(a_{\gamma}+a_{\gamma'})r^4-4(a_{\gamma}a_{\gamma'}+1)r^3 +6(a_{\gamma}+a_{\gamma'})r^2 -4(a_{\gamma}a_{\gamma'}+1)r+a_{\gamma}+a_{\gamma'},$$ and $a_{\gamma} ={{\rm Re}}(g^{-1}\gamma^{-1} \xi )$.
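Since this quartic is easy to get wrong, here is a quick numerical confirmation (function names and sample boundary points are ours) of the per-term derivative formula (\[derive1\]) and of the identity for $\Theta'$, with middle coefficient $6(a_{\gamma}+a_{\gamma'})r^2$:

```python
import cmath
import math

def dB(r, xi):
    # d/dr of B_xi(0, r) along the real diameter, formula (derive1)
    a = xi.real
    return 2 * (a * r ** 2 - 2 * r + a) / ((1 - r ** 2) * abs(r - xi) ** 2)

def P(r, a1, a2):
    # quartic numerator of Theta', with middle coefficient 6 (a1 + a2)
    return ((a1 + a2) * r ** 4 - 4 * (a1 * a2 + 1) * r ** 3
            + 6 * (a1 + a2) * r ** 2 - 4 * (a1 * a2 + 1) * r + (a1 + a2))

B = lambda s, xi: math.log((1 - s * s) / abs(s - xi) ** 2)   # B_xi(0, s), s real

xi1, xi2 = cmath.exp(0.4j), cmath.exp(-1.1j)   # two points on the unit circle
a1, a2 = xi1.real, xi2.real
for r in (-0.5, 0.0, 0.3, 0.7):
    # check (derive1) against a centered finite difference of B_xi(0, r)
    h = 1e-6
    assert abs((B(r + h, xi1) - B(r - h, xi1)) / (2 * h) - dB(r, xi1)) < 1e-5
    # check that the sum of the two terms equals 2 P / ((1 - r^2) |r - xi1|^2 |r - xi2|^2)
    lhs = dB(r, xi1) + dB(r, xi2)
    rhs = 2 * P(r, a1, a2) / ((1 - r ** 2) * abs(r - xi1) ** 2 * abs(r - xi2) ** 2)
    assert abs(lhs - rhs) < 1e-12
```

The identity uses $\vert r-\xi\vert^2=r^2-2\,{\rm Re}(\xi)r+1$ for real $r$ and $\vert\xi\vert=1$; expanding the two cross products gives the coefficients above.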
Therefore we get the lower bound $$\left \vert \frac{d}{dr} ( \Theta_{\gamma_1,\gamma_2}(r)) \right \vert \geq \frac{1}{8}\vert P_{\gamma_1,\gamma_2}(r) \vert.$$ A key observation is that this [*polynomial always has degree $3$ or $4$*]{}. Indeed, if we had $$a_{\gamma_1}+a_{\gamma_2}=0\ \mathrm{and}\ a_{\gamma_1}a_{\gamma_2}+1=0,$$ then $(a_{\gamma_1},a_{\gamma_2})\in \{ (1,-1);(-1,1) \}$, which would mean that either $$\gamma^{-1}_1 \xi=g(-1),\ \gamma^{-1}_2 \xi=g(1)$$ or $$\gamma^{-1}_1 \xi=g(1),\ \gamma^{-1}_2 \xi=g(-1).$$ This is not possible because of condition $(2)$ in $\xi$-NS. To conclude the proof, we will need the following Van der Corput-type lemma, to deal with the possibly highly degenerate stationary phases. \[stat\] Let $I$ be a compact non-trivial interval and let $F\in C^2(I)$, $\varphi \in C^1(I)$. Assume that for all $x \in I$, we have $$\vert F'(x) \vert \geq C \vert P(x)\vert,$$ where $P(x)$ is a non-zero polynomial of degree $d$. Then as $\lambda$ goes to infinity, we have $$\int_I e^{i\lambda F(x)}\varphi(x) dx=O\left( \lambda^{-\frac{1}{2d+1}} \right).$$ [*Proof*]{}. Let $P(x)=a_0+a_1x+\ldots+a_d x^d$, with $a_d\neq 0$. Let $x_1,x_2,\ldots,x_d \in {\mathbb{C}}$ be the roots of $P(x)$, so that we can write $$\label{poly1} P(x)=a_d(x-x_1)\ldots (x-x_d).$$ Let $\epsilon>0$ be a small parameter, to be specified later on, and set $$I(\epsilon):=\{x\in I\ :\ \forall\ j=1,\ldots,d,\ \vert x-x_j\vert \geq \epsilon\}.$$ Then for all $\epsilon>0$ small enough, $I(\epsilon)$ is a finite union of closed intervals $$I(\epsilon)=\bigcup_{\ell=1}^{d'} I_\ell(\epsilon),$$ with $d'\leq d$ independent of $\epsilon$.
On each interval $I_\ell(\epsilon)$, $F'$ does not vanish, so that we can integrate by parts: $$\int_{I_\ell(\epsilon)} e^{i\lambda F(x)} \varphi(x)dx=\frac{1}{i\lambda} \left [ e^{i\lambda F(x)} \frac{\varphi(x)}{F'(x)} \right]_{\partial I_\ell(\epsilon)}$$ $$- \frac{1}{i\lambda} \int_{I_\ell(\epsilon)} e^{i\lambda F(x)} \frac{d}{dx}\left ( \frac{ \varphi(x)}{F'(x)} \right )dx.$$ Notice that by (\[poly1\]), we have for all $x\in I(\epsilon)$, $$\vert F'(x)\vert\geq C\vert a_d\vert \epsilon^d,$$ which yields for all $\lambda\geq 1$ and all $\epsilon$ small, $$\left \vert \int_{I(\epsilon)} e^{i\lambda F(x)} \varphi(x)dx \right \vert \leq \frac{\widetilde{C}}{\lambda \epsilon^{2d}},$$ where $\widetilde{C}$ is independent of $\lambda,\epsilon$. Writing $$\int_{I} e^{i\lambda F(x)} \varphi(x)dx=\int_{I(\epsilon)} e^{i\lambda F(x)} \varphi(x)dx+\int_{I\setminus I(\epsilon)} e^{i\lambda F(x)} \varphi(x)dx$$ $$=O(\epsilon)+O\left ( \frac{1}{\lambda \epsilon^{2d}} \right ),$$ we then choose $$\epsilon=\lambda^{-\frac{1}{2d+1}},$$ and the proof is done. $\square$ Note that the rate of decay estimated above is far from optimal, but it is enough for our purpose. We can now finish the proof of the equidistribution theorem. By Lemma \[est1\], we have uniformly in $\lambda$, $$\vert J_{\gamma_1,\gamma_2}(\lambda) \vert \leq C(r_0) e^{-{{\textstyle{\frac{1}{2}}}}d(0,\gamma_1 0)-{{\textstyle{\frac{1}{2}}}}d(0,\gamma_2 0)}$$ while Lemma \[stat\] and the computation of $\Theta'_{\gamma_1,\gamma_2}(r)$ above show that individually, as $\lambda$ goes to $+\infty$, $$\vert J_{\gamma_1,\gamma_2}(\lambda) \vert=O\left( \lambda^{-\frac{1}{9}} \right).$$ The same arguments as above then yield $$\lim_{\lambda\rightarrow +\infty} I_2(\lambda)=0,$$ finishing the proof of Theorem \[equi2\].
$\square$ Equidistribution on real analytic curves ---------------------------------------- In this section, we explain in a nutshell how the above equidistribution theorem on geodesics can be extended to all [*real analytic*]{} curves, for almost all $\xi$. The ideas are very similar to the above proof, but the price to pay to obtain a result at this level of generality is that the generic conditions on $\xi$ no longer have a simple geometric interpretation as in the $\xi$-NS statement. We have chosen to include details on this generalization because it could be useful in some situations. Without loss of generality, we will assume that $g$ is a Moebius map of the unit disc and that $\ell:[-r_0,+r_0]\rightarrow {\mathbb{H}^2}$ is a [*real analytic*]{} complex valued map with $\ell(0)=0$ and $\ell'(r)\neq 0$ for all $r\in [-r_0,+r_0]$. We will consider the map $$g\circ \ell: [-r_0,+r_0]\rightarrow {\mathbb{H}^2}$$ as a parametrized curve on which we want to prove the same statement as above. Following the exact same lines, we need to analyze the two phase functions $$\Phi_{\gamma_1,\gamma_2}(r)=B_\xi(0,\gamma_1g(\ell (r)) )-B_\xi(0,\gamma_2g( \ell (r)) ),$$ $$\Theta_{\gamma_1,\gamma_2}(r)=B_\xi(0,\gamma_1g(\ell (r)) )+B_\xi(0,\gamma_2g( \ell (r)) ).$$ Carrying out the same computations as in the geodesic case, we have $$\Phi_{\gamma_1,\gamma_2}'(r) =-2{{\rm Re}}\left( \frac{\ell'(r)(g^{-1}\gamma_2^{-1}\xi-g^{-1}\gamma_1^{-1}\xi)} {( \ell(r)-g^{-1}\gamma_2^{-1}\xi)( \ell(r)- g^{-1}\gamma_1^{-1}\xi )}\right).$$ We will show that $\Phi_{\gamma_1,\gamma_2}'$ is not identically vanishing for generic $\xi$. Evaluating the above formula at $r=0$ yields $$\Phi_{\gamma_1,\gamma_2}'(0) =-2{{\rm Re}}\left( \overline{\ell'(0)}(g^{-1}\gamma_2^{-1}\xi-g^{-1}\gamma_1^{-1}\xi)\right).$$ Since we are assuming $\gamma_1\neq \gamma_2$, we can use the exact same ideas as before to show that $$\Phi_{\gamma_1,\gamma_2}'(0)\neq 0$$ for a set of $\xi$ with full measure in the discontinuity set.
Being a real-analytic, non identically vanishing function, $\Phi_{\gamma_1,\gamma_2}'(r)$ has a holomorphic extension to an open complex domain $$[-r_0,+r_0]\subset \Omega \subset {\mathbb{C}}$$ and by further shrinking $\Omega$ we can assume that it has finitely many zeros $z_1,\ldots,z_d$ (repeated with multiplicity) in $\Omega$. The map $$z\mapsto \frac{\Phi_{\gamma_1,\gamma_2}'(z)}{\prod_{j=1}^d(z-z_j)}$$ is holomorphic and non-vanishing on $\Omega$, and therefore there exists $C>0$ such that for all $r\in [-r_0,+r_0]$, $$\vert \Phi_{\gamma_1,\gamma_2}'(r) \vert \geq C \left\vert \prod_{j=1}^d(r-z_j) \right \vert.$$ We can then apply Lemma \[stat\] to show that for generic $\xi$ and all $\gamma_1\neq \gamma_2$, $$\lim_{\lambda \rightarrow +\infty} I_{\gamma_1,\gamma_2}(\lambda)=0.$$ We now need to treat the second phase function $\Theta_{\gamma_1,\gamma_2}(r)$. Performing similar calculations, we have $$\Theta_{\gamma_1,\gamma_2}'(r) =2{{\rm Re}}\left( \ell'(r)\frac{\vert \ell(r)\vert^2(\xi_1+\xi_2)-2\ell(r)-2\overline{\ell(r)}\xi_1\xi_2+\xi_1+\xi_2} {(1-\vert \ell(r)\vert^2)(\ell(r)-\xi_1)(\ell(r)-\xi_2)}\right),$$ where we have set for simplicity $\xi_1= g^{-1}\gamma_1^{-1}\xi$, $\xi_2=g^{-1}\gamma_2^{-1}\xi $. We obtain for $r=0$, $$\Theta_{\gamma_1,\gamma_2}'(0) =2{{\rm Re}}\left( \overline{\ell'(0)}(g^{-1}\gamma_2^{-1}\xi+g^{-1}\gamma_1^{-1}\xi)\right).$$ We want to show once again that for a generic choice of $\xi$, this is not $0$. First, remark that we cannot have, for all $\xi \in S^1$, $$g^{-1}\gamma_2^{-1}\xi=-g^{-1}\gamma_1^{-1}\xi.$$ Indeed, such an identity would imply (by analytic continuation) that for all $z\in {\mathbb{H}^2}$, $$g^{-1}\gamma_2^{-1}\gamma_1 g(z)=-z.$$ If $\gamma_1=\gamma_2$ we clearly have a contradiction, while if $\gamma_1\neq \gamma_2$ this formula would show that $ \gamma_2^{-1}\gamma_1 $ is an [*elliptic isometry*]{}, simply because it is conjugate to $z\mapsto -z$, which is elliptic.
Because $\Gamma$ is a convex co-compact group whose elements are all hyperbolic (except the identity), we have again a contradiction. Therefore $$g^{-1}\gamma_2^{-1}\xi=-g^{-1}\gamma_1^{-1}\xi$$ can hold for at most two points in $S^1$. By removing a countable set of the discontinuity domain, we can rule out this case. We are left with the case $$\overline{\ell'(0)}g^{-1}\gamma_2^{-1}\xi=-\ell'(0)\overline{g^{-1}\gamma_1^{-1}\xi},$$ which can be treated as in the previous section by using an orientation argument. Discarding another countable set of points, we can make sure that for all $\gamma_1,\gamma_2 \in \Gamma$, $$\Theta_{\gamma_1,\gamma_2}'(0)\neq 0.$$ We can then use the same arguments as before and apply Lemma \[stat\] to get decay of the oscillatory integrals $$\lim_{\lambda \rightarrow +\infty} J_{\gamma_1,\gamma_2}(\lambda)=0.$$ To conclude this section we point out that it is unclear to us whether Proposition \[period\] holds for general analytic curves, which prevents us from extending the lower bound of Theorem \[main\] to analytic curves. However, as pointed out in the next section, the argument works without major modification for the upper bound, extending the upper bound of Theorem \[main\] to real-analytic curves. Counting intersections of nodal lines with geodesics ==================================================== In this section we prove Theorem \[main\], using the previous equidistribution result. We assume that $\mathcal{C}=g([-1,+1])$ is a fixed geodesic satisfying $\xi$-NS, and that $\delta(\Gamma)<{{\textstyle{\frac{1}{2}}}}$ as we did before. The lower bound --------------- Let $\mathcal{C}_0\subset \mathcal{C}$ be a geodesic segment given by $\mathcal{C}_0=g([-r_0,+r_0])$. We pick $J\subset [-r_0,+r_0]$ so that the conclusion of Proposition \[period\] holds.
Since we have $$\int_J \vert F_\lambda(g(r),\xi) \vert dr\geq \Vert F_\lambda \Vert_{L^\infty(g(J))}^{-1} \int_{J} (F_\lambda(g(r),\xi))^2 dr,$$ remembering the bound (\[Linfini\]), we can use Theorem \[equi2\], which says that $$\lim_{\lambda \rightarrow \infty} \int_J (F_\lambda(g(r),\xi))^2 dr={{\textstyle{\frac{1}{2}}}}\int_J E_1(g(r),\xi)dr,$$ to conclude that one can find $C>0$ such that for all $\lambda$ large, $$\int_J \vert F_\lambda(g(r),\xi) \vert dr\geq C.$$ Let $N(\lambda) \geq 0$ be the number of zeros of $r\mapsto F_\lambda(g(r),\xi)$ in the interval $\mathrm{Int}(J)$. By writing $$J=\bigcup_{\ell=0}^{N(\lambda)} J_\ell,$$ where $r\mapsto F_\lambda(g(r),\xi)$ has constant sign on each $J_\ell$, we deduce by Proposition \[period\] that $$0<C\leq \int_J \vert F_\lambda(g(r),\xi) \vert dr = \sum_{\ell=0}^{N(\lambda)} \left \vert \int_{J_\ell} F_\lambda(g(r),\xi) dr \right \vert$$ $$\leq (N(\lambda)+1) \sup_{\alpha<\beta \in J} \left \vert \int_{\alpha}^{\beta} F_\lambda(g(r),\xi) dr\right \vert\leq \frac{\widetilde{C}(N(\lambda)+1)}{\lambda},$$ which implies that for all $\lambda$ large enough, $$N(\lambda)\geq C' \lambda,$$ and the proof of the lower bound is done. The upper bound --------------- As we said in the introduction, we will need to use analyticity to prove the upper bound on the number of intersections of nodal lines with geodesics. We will therefore start by proving the following fact, which is a way to “complexify” restrictions of eigenfunctions $F_\lambda$ to geodesics. \[complexify1\] Let $\mathcal{C}=g([-1,+1])$ be a geodesic. Then for all $\lambda \in {\mathbb{R}}$, the map $z\mapsto F_\lambda(g(z),\xi)$, defined on $(-1,+1)$, admits a holomorphic extension to the unit disc ${ \mathbb{D}}$, which is denoted by $\widetilde{F}_{\lambda,g}(z,\xi)$.
Moreover, for every compact subset $K\subset { \mathbb{D}}$, there exist $\beta_K, C_K>0$ such that for all $\lambda\geq 0$, we have $$\sup_{z \in K} \vert \widetilde{F}_{\lambda,g}(z,\xi) \vert\leq C_K e^{\beta_K \lambda}.$$ [*Proof*]{}. We recall that for all $r\in (-1,+1)$, we have the convergent series expansion $$F_\lambda(g(r),\xi)=\sum_{\gamma \in \Gamma} e^{{{\textstyle{\frac{1}{2}}}}B_\xi(0,\gamma g(r))}\cos(\lambda B_\xi(0,\gamma g(r))).$$ Since we have $$B_\xi(0,\gamma g(r))=B_\xi(0,\gamma g(0))+B_{g^{-1}\gamma^{-1}\xi}(0,r),$$ it is enough to continue analytically $$r\mapsto B_{g^{-1}\gamma^{-1}\xi}(0,r)=\log \left ( \frac{1-r^2}{\vert r- g^{-1}\gamma^{-1}\xi \vert^2} \right).$$ We set for simplicity $\eta:=g^{-1}\gamma^{-1}\xi$ and for all $z\in (-1,+1)$, $$G_\eta(z):=\frac{1-z^2}{\vert z- \eta \vert^2}=\frac{1-z^2}{(z-\eta)(z-\overline{\eta})}.$$ Clearly $G_\eta(z)$ extends holomorphically to the unit disc ${ \mathbb{D}}$, where it does not vanish. We can therefore define a complex logarithm by setting for all $z\in { \mathbb{D}}$ $$\label{complexlog} \mathbb{L}(G_\eta)(z):= \int_0^z \frac{G'_\eta(\zeta)}{G_\eta(\zeta)}d\zeta=z\int_0^1\frac{G'_\eta(zt)}{G_\eta(zt)}dt.$$ We obtain a holomorphic function $\mathbb{L}(G_\eta)(z)$ on ${ \mathbb{D}}$ which has the following properties: - $\forall\ r\in (-1,+1),\ \mathbb{L}(G_\eta)(r)=\log G_\eta(r)=B_\eta(0,r)$. - $\forall z\in { \mathbb{D}},\ e^{\mathbb{L}(G_\eta)(z)}=G_\eta(z)$. By using formula (\[complexlog\]), one can check that for all $0<r_1<1$, $$\sup_{\vert z\vert \leq r_1} \left \vert \mathbb{L}(G_\eta)(z) \right \vert \leq C(r_1),$$ where $C(r_1)$ is uniform in $\eta:=g^{-1}\gamma^{-1}\xi$.
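The integral formula (\[complexlog\]) is also convenient numerically. The short sketch below (a toy check with an arbitrary boundary point $\eta$ and a made-up sample point, not part of the proof) evaluates $\mathbb{L}(G_\eta)$ by the trapezoid rule and verifies that $e^{\mathbb{L}(G_\eta)(z)}=G_\eta(z)$; note that $G_\eta(0)=1$ since $\vert\eta\vert=1$, so no additive constant is needed:

```python
import cmath

ETA = cmath.exp(1j * cmath.pi / 3)  # an arbitrary point on the unit circle

def G(z):
    """G_eta(z) = (1 - z^2)/((z - eta)(z - conj(eta))); zero- and pole-free on the disc."""
    return (1 - z * z) / ((z - ETA) * (z - ETA.conjugate()))

def dlogG(z):
    """Logarithmic derivative G'/G, from the explicit factorization of G."""
    return -2 * z / (1 - z * z) - 1 / (z - ETA) - 1 / (z - ETA.conjugate())

def L(z, n=20_000):
    """L(G_eta)(z) = z * int_0^1 (G'/G)(z*t) dt, by the composite trapezoid rule."""
    h = 1.0 / n
    total = 0.5 * (dlogG(0.0) + dlogG(z))
    for k in range(1, n):
        total += dlogG(z * (k * h))
    return z * total * h

z = 0.3 + 0.2j
print(abs(cmath.exp(L(z)) - G(z)))  # ~0: L is a genuine logarithm of G on the disc
```

On the real segment the same function reduces to the ordinary logarithm, e.g. $\mathbb{L}(G_\eta)(0.5)=\log G_\eta(0.5)$.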
Writing $$\cos\left(\lambda B_\xi(0,\gamma g(0)) +\lambda \mathbb{L}(G_\eta)(z)\right)$$ $$=\cos\left (\lambda B_\xi(0,\gamma g(0)))\cos(\lambda \mathbb{L}(G_\eta)(z)\right) -\sin(\lambda B_\xi(0,\gamma g(0)))\sin(\lambda \mathbb{L}(G_\eta)(z)),$$ and using the bounds, valid for all $z\in {\mathbb{C}}$, $$\vert \cos(z) \vert \leq 2 e^{\vert {{\rm Im}}(z) \vert},\ \vert \sin(z) \vert \leq 2 e^{\vert {{\rm Im}}(z) \vert},$$ we deduce that for all $\vert z\vert\leq r_1$ and $\lambda \geq 0$, $$\left \vert \cos\left(\lambda B_\xi(0,\gamma g(0)) +\lambda \mathbb{L}(G_\eta)(z)\right) \right \vert \leq \widetilde{C}(r_1) e^{\beta_{r_1}\lambda}.$$ Combining this last bound with Lemma \[est1\] shows uniform convergence on $$\{ \vert z\vert \leq r_1\}$$ of the above series, hence holomorphy and the claimed bound. $\square$ Notice that for a more general real-analytic curve, a similar statement follows straightforwardly, with the difference that it will hold on a smaller domain $\Omega \subset { \mathbb{D}}$. The rest of the proof of the upper bound on the number of intersections of nodal lines with $\mathcal{C}_0 \subset \mathcal{C}$ will follow from Theorem \[equi2\] combined with Jensen’s formula. The version of Jensen’s formula we will use is the following. \[Jensen\] Let $f$ be a holomorphic function on the open disc $D(w,R)$, and assume that $f(w)\neq 0$. Let $N_f(r)$ denote the number of zeros of $f$ in the closed disc $\overline{D}(w,r)$. For all $\widetilde{r}<r<R$, we have $$N_f(\widetilde{r})\leq \frac{1}{\log(r/\widetilde{r})} \left ( \frac{1}{2\pi} \int_0^{2\pi} \log \vert f(w+re^{i\theta})\vert d\theta-\log\vert f(w)\vert \right).$$ For a reference on Jensen’s formula, we refer the reader to the classics, for example Titchmarsh [@Tit]. Let $\mathcal{C}_0=g([-r_0,+r_0])$ be a geodesic segment as above, with $0<r_0<1$.
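This zero-counting inequality is straightforward to check numerically. In the sketch below (a toy example with a made-up polynomial, unrelated to $\widetilde{F}_{\lambda,g}$) the circle average of $\log\vert f\vert$ is computed by the trapezoid rule and the resulting bound is compared with the actual zero count:

```python
import cmath
import math

ZEROS = [0.3 + 0j, -0.2 + 0.1j, 2.0 + 0j]  # made-up zeros; the last lies far outside

def f(z):
    p = 1.0 + 0j
    for a in ZEROS:
        p *= z - a
    return p

def jensen_bound(w, r_small, r_big, n=20_000):
    """Right-hand side of the lemma for N_f(r_small): the average of log|f| over
    the circle |z - w| = r_big, minus log|f(w)|, divided by log(r_big/r_small)."""
    avg = 0.0
    for k in range(n):
        theta = 2.0 * math.pi * k / n
        avg += math.log(abs(f(w + r_big * cmath.exp(1j * theta))))
    avg /= n
    return (avg - math.log(abs(f(w)))) / math.log(r_big / r_small)

n_zeros = sum(1 for a in ZEROS if abs(a) <= 0.5)  # N_f(0.5) = 2
print(n_zeros, jensen_bound(0j, 0.5, 0.8))        # bound ~4.80, indeed >= 2
```

The bound is of course not sharp, but for the counting argument below only its order of magnitude in $\lambda$ matters.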
Fix $\epsilon>0$ so small that $r_0+3\epsilon <1$ and set $$r_1=r_0+\epsilon,\ r_2=r_0+2\epsilon,\ r_3=r_0+3\epsilon.$$ If $D(w,r)$ denotes the complex open disc with center $w$ and radius $r$, we then have for all $x\in [-\epsilon,+\epsilon]$ $$D(0,r_0)\subset D(x,r_1)\subset D(x,r_2)\subset \overline{D(0,r_3)}\subset { \mathbb{D}}.$$ Let $N(\lambda)$ denote the number of zeros of $r\mapsto F_\lambda(g(r),\xi)$ in the interval $[-r_0,+r_0]$. By applying Theorem \[equi2\] on the short interval $[-\epsilon,+\epsilon]$, we have $$\lim_{\lambda \rightarrow +\infty} \int_{-\epsilon}^{+\epsilon} (F_\lambda(g(r),\xi))^2 dr= {{\textstyle{\frac{1}{2}}}}\int_{-\epsilon}^{+\epsilon} E_1(g(r),\xi)dr,$$ which shows that for all $\lambda$ large enough we have $$0<C_\epsilon:={{\textstyle{\frac{1}{2}}}}\left ( \frac{1}{2\epsilon} \int_{-\epsilon}^{+\epsilon} E_1(g(r),\xi)dr \right )^{1/2}\leq \sup_{r\in [-\epsilon,+\epsilon]} \vert F_\lambda(g(r),\xi) \vert.$$ For all $\lambda$ large, we denote by $x_\lambda \in [-\epsilon,+\epsilon]$ a point such that $$\vert F_\lambda(g(x_\lambda),\xi) \vert= \sup_{r\in [-\epsilon,+\epsilon]} \vert F_\lambda(g(r),\xi) \vert.$$ Applying Jensen’s formula to $\widetilde{F}_{\lambda,g}(z,\xi)$ on $D(x_\lambda,r_1)\subset D(x_\lambda,r_2)$, we have $$N(\lambda)\leq \frac{1}{\log(r_2/r_1)} \left( \frac{1}{2\pi} \int_0^{2\pi} \log \vert \widetilde{F}_{\lambda,g}(x_\lambda+r_2 e^{i\theta},\xi)\vert d\theta \right)$$ $$- \frac{1}{\log(r_2/r_1)}\left (\log\vert F_\lambda(g(x_\lambda),\xi) \vert \right)$$ $$\leq \frac{1}{\log(r_2/r_1)} \left ( \sup_{\vert z \vert \leq r_3}\log \vert \widetilde{F}_{\lambda,g}(z,\xi)\vert +\log(C_\epsilon^{-1}) \right ).$$ Using the estimate of Proposition \[complexify1\], we then deduce that as $\lambda$ goes to infinity, $$N(\lambda)=O(\lambda),$$ and the proof is completed.
$\square$ To deal with more general real-analytic curves which extend holomorphically to a smaller domain $\Omega\subset { \mathbb{D}}$, we just need to replicate the same argument with several discs instead of a single one. We omit the details for simplicity. Counting nodal domains ====================== The non-elementary case ----------------------- Let us introduce some notation. We assume in this section that $\Gamma$ is non-elementary. We will work on the universal cover ${\mathbb{H}^2}$, so that the convex core $X_0$ is the image under the covering ${\mathbb{H}^2}\rightarrow \Gamma \backslash {\mathbb{H}^2}$ of a compact geodesic polygon $$\mathcal{P}\subset {\mathbb{H}^2}.$$ The polygon $\mathcal{P}$ has finitely many sides which are geodesic segments; see the picture below for an example of such a polygon in ${\mathbb{H}^2}={ \mathbb{D}}$, where the gray hyperbolic octagon is $\mathcal{P}$. ![image](Nielsen2.png) We choose $\xi \in S^1\setminus \Lambda(\Gamma)$ such that the upper bound of Theorem \[main\] is valid on the full boundary $\partial \mathcal{P}$, which can be done for a set of full measure. We recall that the nodal domains of $F_\lambda(z,\xi):{\mathbb{H}^2}\rightarrow {\mathbb{R}}$ are by definition the connected components of $${\mathbb{H}^2}\setminus \{F_\lambda(z,\xi)=0\}.$$ The nodal domains $\mathcal{D}$ which do intersect $\mathcal{P}$ fall into two categories. Either $$\overline{\mathcal{D}}\cap \partial \mathcal{P}\neq \emptyset,$$ and thanks to Theorem \[main\] there are at most $O(\lambda)$ of them, or we have $$\overline{\mathcal{D}}\subset \mathrm{Int}(\mathcal{P}).$$ In that case, since $F_\lambda$ has constant sign on $\mathcal{D}$, the eigenvalue $$\mu=1/4+\lambda^2$$ must be the first eigenvalue of the hyperbolic Laplacian $\Delta_{{\mathbb{H}^2}}$ on $\mathcal{D}$ for the [*Dirichlet boundary problem*]{}: $$\left \{ \begin{array}{ccc} \Delta_{{\mathbb{H}^2}} \psi&=&\mu \psi \\ \psi=0& \mathrm{on}& \partial \mathcal{D}.
\end{array} \right.$$ Let $\lambda_1(\mathcal{D})$ denote the smallest eigenvalue for the above Dirichlet problem. We will use the following key lower bound. \[FK1\] Fix $\epsilon_0>0$; then there exists $C_0>0$ such that for every domain $\Omega\subset {\mathbb{H}^2}$ with $\mathrm{Vol}(\Omega)\leq \epsilon_0$ $$\lambda_1(\Omega) \geq \frac{C_0}{\mathrm{Vol}(\Omega)}.$$ [*Proof.*]{} We first use the Faber-Krahn inequality for domains in ${\mathbb{H}^2}$, see Chavel [@Chavel] p. 87. It is valid on simply connected spaces of constant curvature. If $\Omega$ is a compact domain of ${\mathbb{H}^2}$ with piecewise $C^\infty$ boundary, then the first Dirichlet eigenvalue $\lambda_1(\Omega)$ of the Laplacian satisfies $\lambda_1(\Omega)\geq \lambda_1(D)$, where $D$ is a geodesic disc with the same (hyperbolic) volume. The game is now to prove a lower bound for the first eigenvalue on a disc $D$ of the hyperbolic plane, for small values of the radius. We use the disc model for ${\mathbb{H}^2}$ and can assume that $D=D(0,r)$ (euclidean disc) is centered at $0$. By the min-max principle, we have $$\lambda_1(D(0,r))=\inf_{\varphi\neq 0 \in C_0^\infty(D)} \frac{\int_D \varphi (\Delta_{{\mathbb{H}^2}}\varphi) d\mathrm{Vol}}{\int_D \varphi^2 d\mathrm{Vol}}.$$ But we have $$\int_D \varphi (\Delta_{{\mathbb{H}^2}}\varphi) d\mathrm{Vol}=\int_D \varphi (\Delta \varphi) dm,$$ where $m$ is the Lebesgue measure and $\Delta$ the positive euclidean Laplacian, while $$\int_D \varphi^2 d\mathrm{Vol}=\int_D \varphi^2(z) \frac{4dm(z)}{(1-\vert z \vert^2)^2} \leq \frac{4}{(1-r_0^2)^2} \int_D \varphi^2(z) dm(z),$$ as long as $r\leq r_0<1$. We therefore have $$\lambda_1(D(0,r))\geq \frac{(1-r_0^2)^2}{4} \lambda_1^{\mathrm{euc}}(D(0,r)),$$ where $\lambda_1^{\mathrm{euc}}$ denotes the first Dirichlet eigenvalue for the [*euclidean*]{} Laplacian.
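For concreteness, $\lambda_1^{\mathrm{euc}}(D(0,1))$ is known explicitly: it equals $j_{0,1}^2\approx 5.783$, where $j_{0,1}\approx 2.405$ is the first positive zero of the Bessel function $J_0$. A quick numerical aside (power series plus bisection; the proof itself only uses the positivity of this constant):

```python
import math

def J0(x):
    """Bessel function J_0 via its power series; ample accuracy for 0 <= x <= 5."""
    term, total = 1.0, 1.0
    for k in range(1, 40):
        term *= -(x * x / 4.0) / (k * k)
        total += term
    return total

# First positive zero j_{0,1} of J_0, located by bisection on [2, 3],
# where J0 changes sign (J0(2) > 0 > J0(3)).
lo, hi = 2.0, 3.0
for _ in range(80):
    mid = 0.5 * (lo + hi)
    if J0(lo) * J0(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
j01 = 0.5 * (lo + hi)
lam1_unit_disc = j01 ** 2  # first Dirichlet eigenvalue of the euclidean unit disc
print(j01, lam1_unit_disc)  # ~2.40483, ~5.78319
```

Combined with the $r$-scaling of the eigenvalue, this pins down the constant $C_0$ in the lemma explicitly.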
A simple change of coordinates in the min-max then shows that $$\lambda_1^{\mathrm{euc}}(D(0,r))\geq \frac{\lambda_1^{\mathrm{euc}}(D(0,1))}{r^2}.$$ Using the formula for the hyperbolic area of $D(0,r)$ $$\mathrm{Vol}(\Omega)=\mathrm{Vol}(D(0,r)) =\frac{4\pi r^2}{1-r^2}$$ shows that $$\lambda_1(\Omega)\geq \frac{\pi (1-r_0^2)^2 \lambda_1^{\mathrm{euc}}(D(0,1))}{\mathrm{Vol}(\Omega)},$$ and the claim is proved. $\square$ Going back to the proof of the upper bound, let $(\mathcal{D}_i)_{i\in I}$ be the (finite) collection of nodal domains $\mathcal{D}_i$ that are inside $\mathrm{Int}(\mathcal{P})$. By volume comparison, we have $$\mathrm{Vol}(\cup_{i\in I}\mathcal{D}_i)=\sum_{i\in I} \mathrm{Vol}(\mathcal{D}_i) \leq \mathrm{Vol}(\mathcal{P}).$$ Let $J\subset I$ be the set of indices such that for all $j\in J$, $\mathrm{Vol}(\mathcal{D}_j)\leq \epsilon_0$. By Proposition \[FK1\] we get $$\frac{\#(J)C_0}{1/4+\lambda^2}\leq \mathrm{Vol}(\mathcal{P}),$$ which obviously shows that $\#(J)=O(\lambda^2)$. Similarly we have $$\#(I\setminus J) \leq \epsilon_0^{-1}\mathrm{Vol}(\mathcal{P})=O(1).$$ As a conclusion we have shown that the total number of nodal domains that intersect $\mathcal{P}$ is $O(\lambda^2)$, thus completing the proof of the upper bound. The cylinder case ----------------- Here we assume that $\Gamma$ is an elementary group so that $X=\Gamma\backslash {\mathbb{H}^2}$ is a hyperbolic cylinder. We denote by $\mathcal{C}_0$ the unique closed geodesic on $X$. Fix $r>0$. The collar $$\mathcal{C}(r):=\{ z\in X\ :\ \mathrm{dist}(z,\mathcal{C}_0)\leq r \}$$ is the image under the projection $\Pi:{\mathbb{H}^2}\rightarrow X$ of a domain $\mathcal{P}$ in ${\mathbb{H}^2}$ whose boundary is piecewise circular (not totally geodesic).
![image](Collar.png) More precisely, we can (up to a conjugation by an isometry) assume that $\mathcal{C}_0$ lifts in ${\mathbb{H}^2}$ to the segment $(-1,+1)$, so that by a classical formula (see Beardon [@Beardon], p.163) we have $$\mathrm{dist}(z,\mathcal{C}_0)\leq r \Leftrightarrow \frac{2\vert {{\rm Im}}(z) \vert}{1-\vert z\vert^2} \leq \sinh(r).$$ If $\mathcal{C}_0$ is the axis of a hyperbolic isometry $\gamma_0$ and $\Gamma$ is the group generated by $\gamma_0$, then a fundamental domain for the action of $\Gamma$ is provided by the domain of ${\mathbb{H}^2}$ which is outside the isometric circles of $\gamma_0$ and $\gamma_0^{-1}$. The domain $\mathcal{P}$ is then the grey region depicted in the previous picture, which corresponds to the intersection of the collar (in ${\mathbb{H}^2}$) with a fundamental domain. Since $\partial \mathcal{P}$ is piecewise real analytic, we can adopt the exact same strategy as in the previous proof, by choosing $\xi$ such that Theorem \[main\] applies, and by arguing the same way, depending on the type of nodal domain. Lower bounds and open questions ------------------------------- Our first remark is that by straightforwardly adapting the combinatorial arguments used in [@GRS; @JZ1; @JZ2] we can obtain a lower bound for the number of connected components of $X_0\setminus \mathcal{N}_\lambda$, for generic $\xi$, which says that for large $\lambda$, $$M_\xi(\lambda)\geq C^{-1} \lambda.$$ Clearly the main input here is the lower bound given by Theorem \[main\] and the graph theoretic arguments from [@JZ2], which are a generalization of the more elementary ideas pioneered in [@GRS]. However, that kind of lower bound is rather [*irrelevant*]{}, because we cannot rule out the fact that these connected components could very well come from a [*single*]{} nodal domain which would intersect the convex core $X_0$ several times.
These issues are already present on compact manifolds, where one has to use either symmetries or boundary conditions to rule out these pathologies. From the numerics one can formulate the following list of open questions which seem to be relevant. - It seems that for compact sets $K$ with non-empty interior which are [*in the vicinity of $\xi$*]{}, the number $M_K(\lambda)$ of nodal domains that intersect $K$ obeys the growth rate $M_K(\lambda) \asymp \lambda$. - Is the number of compact nodal domains finite? Do compact nodal sets remain in a compact part of the surface, uniformly in $\lambda$? - Prove that compact nodal domains exist if $\lambda$ is large enough, starting with the elementary group case. - Prove or disprove that the number of compact nodal domains inside the convex core is, as $\lambda\rightarrow +\infty$, greater than $C\lambda^2$, for some $C>0$. This question could be tested numerically. - In the plot below we have found for $\lambda=150$ some examples of non-simply connected compact nodal domains. Can the topology be arbitrary? ![image](Nodal5.pdf)\ [10]{} Alan F. Beardon. [*The geometry of discrete groups*]{}, volume [**91**]{} of [*Graduate Texts in Mathematics*]{}. Springer-Verlag, New York, 1995. Corrected reprint of the 1983 original. David Borthwick. [*Spectral theory of infinite-area hyperbolic surfaces*]{}, volume 256 of [*Progress in Mathematics*]{}. Birkhäuser Boston Inc., Boston, MA, 2007. Jean Bourgain and Ze[é]{}v Rudnick. Restriction of toral eigenfunctions to hypersurfaces and nodal sets. [*Geom. Funct. Anal.*]{}, 22(4):878–937, 2012. Isaac Chavel. [*Eigenvalues in Riemannian geometry*]{}, volume 115 of [*Pure and Applied Mathematics*]{}. Academic Press, Inc., Orlando, FL, 1984. Including a chapter by Burton Randol, with an appendix by Jozef Dodziuk. Harold Donnelly and Charles Fefferman. Nodal sets of eigenfunctions on [R]{}iemannian manifolds. [*Invent. Math.*]{}, 93(1):161–183, 1988. Semyon Dyatlov and Maciej Zworski. Quantum ergodicity for restrictions to hypersurfaces. [*Nonlinearity*]{}, 26(1):35–52, 2013. Amit Ghosh, Andre Reznikov, and Peter Sarnak. Nodal domains of [M]{}aass forms [I]{}. Preprint, 2012.
Colin Guillarmou and Fr[é]{}d[é]{}ric Naud. Equidistribution of [E]{}isenstein series on convex co-compact hyperbolic manifolds. [*Amer. J. Math.*]{}, 136(2):445–479, 2014. Junehyuk Jung. Quantitative quantum ergodicity and the nodal domains of [M]{}aass-[H]{}ecke cusp forms. Preprint, 2013. Junehyuk Jung and Steve Zelditch. Number of nodal domains and singular points of eigenfunctions of negatively curved surfaces with an isometric involution. Preprint, 2013. Junehyuk Jung and Steve Zelditch. Number of nodal domains of eigenfunctions on non-positively curved surfaces with concave boundary. Preprint, 2014. Peter D. Lax and Ralph S. Phillips. Translation representation for automorphic solutions of the non-[E]{}uclidean wave equation [I]{}, [II]{}, [III]{}. [*Comm. Pure Appl. Math.*]{}, [**37,38**]{}:303–328, 779–813, 179–208, 1984, 1985. W. Luo and P. Sarnak. Quantum ergodicity of eigenfunctions on [$\text{SL}_2({\mathbb{Z}})\backslash {\mathbb{H}^2}$]{}. [*Publ. Math. Inst. Hautes Études Sci.*]{}, [**81**]{}:207–237, 1995. Rafe R. Mazzeo and Richard B. Melrose. Meromorphic extension of the resolvent on complete spaces with asymptotically constant negative curvature. [*J. Funct. Anal.*]{}, [**75**]{}(2):260–310, 1987. S. J. Patterson. The limit set of a [F]{}uchsian group. [*Acta Math.*]{}, [**136**]{}(3-4):241–273, 1976. E. C. Titchmarsh. [*The theory of functions*]{}. Oxford University Press, second edition, 1932. John A. Toth and Steve Zelditch. Quantum ergodic restriction theorems: manifolds without boundary. [*Geom. Funct. Anal.*]{}, 23(2):715–775, 2013. Steve Zelditch. Eigenfunctions and nodal sets. In [*Surveys in differential geometry. [G]{}eometry and topology*]{}, volume 18 of [*Surv. Differ. Geom.*]{}, pages 237–308. Int. Press, Somerville, MA, 2013.
--- abstract: 'Neutrino event generators are an essential tool needed for the extraction of neutrino mixing parameters, the mass hierarchy and a CP violating phase from long-baseline experiments. In this article I first describe the theoretical basis and the approximations needed to get to any of the generators. I also discuss the strengths and limitations of theoretical models used to describe semi-inclusive neutrino-nucleus reactions. I then confront present-day generators with this theoretical basis by detailed discussions of the various reaction processes. Then, as examples, I show results of the generator GiBUU for various experiments, for lepton semi-inclusive cross sections as well as particle spectra. I also discuss features of these cross sections in terms of the various reaction components, with predictions for DUNE. Finally, I argue for the need for a new neutrino generator that respects our present-day knowledge of both nuclear theory and nuclear reactions and is as much state-of-the-art as the experimental equipment. I outline some necessary requirements for such a new generator.' address: 'Institut fuer Theoretische Physik, Universitaet Giessen, Giessen, Germany' author: - Ulrich Mosel bibliography: - 'nuclear.bib' --- [*Keywords*]{}: neutrino interactions, electroweak interactions, nuclei, long-baseline experiments, neutrino event generators\ Contents {#contents .unnumbered} ======== Introduction ============ Electron scattering on nuclei has been an active field of nuclear physics research over many decades. It has increased our knowledge about the response of a nuclear many-body system to the electromagnetic interaction [@Boffi:1993gs].
At relatively low energies (10s of MeV) collective excitations of the nucleus are dominant, at higher energies (100 MeV) quasielastic reactions on individual nucleons become essential [@Benhar:2006wy], at still higher energies (100s of MeV) one enters the regime of nucleon resonance excitations and finally, at the highest energies (10s of GeV), the reactions explore the Deep Inelastic Scattering (DIS) regime [@Bianchi:2007fz]. Originally unexpected phenomena such as 2p2h excitations [@Dekker:1991ph], in-medium spectral functions [@Benhar:1994hw; @CiofidegliAtti:1995qe], short-range correlations [@Alvioli:2008as] and spectroscopic factors [@Radici:2002ut] and the change of parton distributions inside the nucleus, as manifested in the EMC effect [@Hen:2016kwk], have been explored. For all of these features not only the incoming beam energy but also the momentum transfer is important. It is, therefore, natural to extend these studies to reactions with neutrinos, where the axial coupling, typical for weak interactions, offers a new degree of freedom to explore [@Bernard:2001rs]. Indeed, such studies were originally motivated by the interest in the axial response of nuclear many-body systems. In the first such studies, nearly 50 years ago, the nucleus was described as an unbound system of freely moving nucleons with their momenta determined by the Fermi-gas distribution [@Smith:1972xh; @LlewellynSmith:1971zm]. First steps beyond that simple model were studies where the nucleus was described as a bound system with a mean-field potential [@Donnelly:1984yr; @Alberico:1997vh; @Amaro:2011qb; @Meucci:2015bea] and excitations were treated by the Random Phase Approximation (RPA) [@Kolbe:1995af; @Nieves:2004wx; @Botrugno:2005kn; @Pandey:2014tza]. More recently, even ab-initio calculations of the electroweak response of nuclei have become possible [@Lovato:2017cux].
In general, neutrino-induced reactions exhibit the same characteristics as the electron-induced reactions. The only difference is the additional presence of an axial amplitude in all processes; the final state interactions are the same [@Mosel:2016cwa] if the incoming kinematical conditions (energy- and momentum-transfer) are identical. Even though these two types of experiments, electron-induced and neutrino-induced ones, are so similar, there is a very essential difference between them. In electron-induced reactions the incoming beam energy is very accurately known and the momentum transfer can be measured with the help of magnetic spectrometers. Neither of these observables is available for neutrinos. Because neutrino beams are produced through the secondary decay of pions and kaons, first produced in a p+A reaction, their energies are not sharp, but smeared out over a wide range. For example, for the Deep Underground Neutrino Experiment (DUNE) [@DUNE] the energy distribution peaks at about 2 GeV, but has long tails all the way down to zero energy and up to 30 GeV. In a charged current (CC) reaction the outgoing lepton’s energy and angle can be measured, but since the incoming energy is not known, the momentum transfer is also experimentally not available. This is per se a challenge for any comparison of theory with experimental results since the theory calculations have to be performed at many different energies. It also presents a challenge to theory because even lepton semi-inclusive cross sections cannot be easily separated according to the first interaction process since the smearing of the incoming energy automatically brings with it a smearing of the energy- and momentum-transfer.
For example, even at the lower energies of the T2K experiment (beam energy peaking at about 0.75 GeV [@t2k]) true quasielastic (QE) scattering on a single nucleon cannot be separated from events involving 2p2h interactions or those in which first a pion was created that was subsequently reabsorbed. Thus, any theoretical description of experimental data requires a simultaneous and consistent treatment of several different elementary interaction processes. Presently running (T2K [@t2k], NOvA [@nova]) or planned (T2HK [@T2HK], DUNE [@DUNE]) oscillation experiments aim at a precise determination of the neutrino mixing parameters, of a CP violating phase and of the mass ordering of neutrinos; they all use nuclei as targets. The oscillation formulas used to extract these quantities from the data all involve the incoming neutrino energy. This incoming energy must be reconstructed from the measured final state of the neutrino-nucleus reaction. Because of experimental acceptance cuts and the entanglement of different elementary processes this reconstruction is less than trivial. It involves an often wide extrapolation from the actually measured final state to the full final state. #### Energy reconstruction There are two methods in use to reconstruct the energy of the incoming neutrino from final state properties: - [**Kinematical Method**]{} In this method one uses the fact that the incoming energy of a neutrino interacting in a CCQE process with a neutron at rest is entirely determined by the kinematics of the outgoing lepton. If this method is used with nuclear targets then there are two complications: - First, the neutron is not free and at rest, but it is bound in a nucleus and Fermi-moving. This alone already leads to a smearing of the reconstructed energy and a shift with respect to the free-nucleon case. - Also, the method in principle requires identifying the interaction process as true QE scattering.
- This, however, is impossible in a nuclear target, where pion production followed by reabsorption can always take place. This effect leads to a low-energy tail in the reconstructed energy [@Leitner:2010kp]. Therefore, the method is expected to work best at lower energies, where pion production is not yet dominant. - In addition, detector acceptances may limit the necessary QE identification. - [**Calorimetric Method**]{} In this method one measures the energies of all particles in the final state and reconstructs the energy from that information. Problems with this method arise because detectors are not perfect, have detection thresholds and may miss certain particles, e.g. neutrons, altogether. The energy then has to be reconstructed from a final state that is only partially known. This method is mostly used in higher-energy experiments where the complications due to inelastic excitations and pion production make the kinematical method less reliable. An illustrative example of the typical errors in energy reconstruction is shown in the upper part of Figure \[fig:t2kflux-oscillations\]. For this example the kinematical method was used. ![[**Top:**]{} QE-like event distributions in the T2K flux for electron neutrino appearance experiments, as obtained from GiBUU. The dashed curves give the event distributions as function of the true incoming neutrino energy, the solid curves those as a function of reconstructed energies. The oscillated event curves have been multiplied by a factor of 10 to enhance the visibility of the difference. [**Bottom:**]{} Sensitivity of QE-like event distributions on the CP violating phase. The solid (red) curve is the same as the one in the upper part. Both figures taken from [@Lalakulich:2012hs].[]{data-label="fig:t2kflux-oscillations"}](T2Kflux-oscillations-Che-before-after-mue.pdf "fig:"){width="0.7\linewidth"} ![image](T2Kflux-oscillations-Che-mue-deltaCP.pdf "fig:"){width="0.7\linewidth"} Here a generator, in this case GiBUU [@gibuu; @Buss:2011mx], has been used to generate millions of events as a function of ’true’ energy. These events were then analyzed and the energy was reconstructed by using the so-called kinematical method. The peak of the oscillated event distribution lies at about 0.65 GeV; here the true and the reconstructed curves differ by about 25%. This discrepancy is just as large as the sensitivity of the electron appearance signal to the CP violating phase $\delta_{CP}$. This is illustrated for exactly the same reaction (T2K flux on $^{16}$O) in the lower part of Figure \[fig:t2kflux-oscillations\]. At the peak of the oscillation signal the curves corresponding to three different values of $\delta_{CP}$ differ by about 25%, i.e. just by about the same amount as the error in the energy reconstruction. The influence of this error on neutrino mixing parameters has been quantified in Refs. [@Coloma:2013rqa; @Ankowski:2015jya]. In both methods for the energy reconstruction the actually measured energy has to be extrapolated to the true one. To perform this extrapolation and reconstruction, so-called neutrino generators have been constructed early on. These generators try to take not only the initial neutrino-nucleon interaction into account, but also the quite essential final state interactions.
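To make the kinematical method concrete: for CCQE scattering $\nu_\mu + n \rightarrow \mu^- + p$ on a free neutron at rest, four-momentum conservation determines the neutrino energy from the muon energy and angle alone. Below is a minimal sketch of this standard estimator in its free-nucleon form without a binding-energy correction (the function name and the illustrative numbers are mine; generators and experiments use variants that also include binding energy):

```python
import math

M_N, M_P, M_MU = 0.93957, 0.93827, 0.10566  # neutron, proton, muon masses in GeV

def e_nu_reconstructed(e_mu, cos_theta):
    """Kinematical CCQE neutrino-energy estimator for a free neutron at rest,
    obtained by putting the outgoing proton on its mass shell."""
    p_mu = math.sqrt(e_mu ** 2 - M_MU ** 2)
    num = M_P ** 2 - M_N ** 2 - M_MU ** 2 + 2.0 * M_N * e_mu
    den = 2.0 * (M_N - e_mu + p_mu * cos_theta)
    return num / den

# Self-consistency check: pick E_nu = 1 GeV and a 20 degree muon angle, solve the
# same two-body kinematics for the muon energy by bisection (the estimator is
# monotone in e_mu on this bracket), then reconstruct E_nu from the muon alone.
e_nu_true, cos_t = 1.0, math.cos(math.radians(20.0))
lo, hi = M_MU + 1e-6, e_nu_true
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if e_nu_reconstructed(mid, cos_t) < e_nu_true:
        lo = mid
    else:
        hi = mid
e_mu = 0.5 * (lo + hi)
print(e_mu, e_nu_reconstructed(e_mu, cos_t))  # reconstruction returns ~1.0 GeV
```

In a nucleus, Fermi motion and binding smear and shift this estimator, which is exactly the first complication listed above.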
A good review of generators presently used by experiments is given in [@Gallagher:2018pdg]. There it is pointed out that generators are also needed for simulations of practical importance, e.g. for acceptance studies and for handling the effects of the typically quite large, extended targets. While the generator NEUT [@Hayato:2002sd] is primarily being used by the T2K experiment, the generator GENIE [@GENIE; @Andreopoulos:2009rq] has become widely used by groups connected to Fermilab experiments, such as MicroBooNE, NOvA and MINERvA. In addition, a generator named NuWro [@Golan:2012rfa] is being used for comparisons of experiment with calculations. The transport-theoretical framework GiBUU [@gibuu; @Buss:2011mx] for general nuclear reactions can also be used as a generator of neutrino interactions with nuclei. #### Review outline Obviously, in all these generators cross sections both for the initial neutrino-nucleon reaction and for the hadron-hadron reactions in the final state are essential [@Katori:2016yel]. Unfortunately, the few data that exist on elementary targets, such as p and D, are all at least 30 years old and carry large uncertainties. Our knowledge about these cross sections has been discussed in a number of fairly recent reviews [@Conrad:1997ne; @Gallagher:2011zza; @Formaggio:2013kya; @Mahn:2018mai]. In this article I will, therefore, not repeat the discussions of cross sections. Instead I first give a short outline of the theoretical basis for [*any*]{} generator and discuss the approximations that go into the presently used ones. I will then go through the various subprocesses (QE, resonance excitation, DIS, ...) and confront the inner workings of the generators with present-day nuclear physics knowledge about these processes. The main motivation for this critical discussion is my conviction that only a theoretically up-to-date and consistent generator can provide the reliability needed when used for new targets or in new energy regimes.
While all of these discussions are generally valid I will then illustrate features of neutrino-nucleus cross sections with the help of a specific generator, GiBUU. Towards the end of this critical review I will argue that, in view of the upcoming high-precision experiments, a new well-founded, high-precision generator is needed that is free of many of the shortcomings of the presently used ones. Foundations of generators {#Generators} ========================= In this chapter I briefly discuss the theoretical basis of *all* generators. Most of the generators treat the hadrons as billiard balls following classical trajectories. It is, therefore, essential to understand under which circumstances such a treatment can be justified. In the following subsection I merely summarize the essential steps necessary to get to a well-founded transport equation; more details can be found in [@Buss:2011mx; @Kad-Baym:1962; @Danielewicz:1982kk; @Danielewicz:1982ca]. Short derivation of a general transport equation ------------------------------------------------ The dynamical development of any quantum mechanical many-body system is determined by an infinite set of coupled equations for the Green’s functions: the one-particle Green’s function depends on the two-particle one, the two-particle Green’s function depends on the higher-order ones, and so on. All the higher-order Green’s functions can formally be included in a self-energy. The dynamics of the correlated many-body system is then determined by a Dyson equation for the single-particle Green’s function which contains the (quite complicated) self-energy. In addition, interaction vertices of single particles can be dressed. Now approximations are introduced: - The first, and most important, approximation is to truncate the hierarchy of coupled Green’s functions by neglecting all higher-order correlations and keeping only the single-particle Green’s function.
An improvement is the so-called Dirac-Brueckner-Hartree-Fock (DBHF) approach, in which some two-body correlations are taken into account. The self-energies are modeled, e.g. by an energy-density functional theory or the relativistic mean field theory. - It is, furthermore, assumed that all particles move locally in a homogeneous medium, with corresponding self-energies (potentials). This is the so-called local-density approximation. - The damping terms, i.e. the widths, of all the particles in the medium are small relative to the mass gap. In nuclear physics this is well fulfilled since spectral functions of nucleons inside nuclei are very narrow compared to the total mass [@Benhar:1992cnb]. Closely connected with the local-density approximation is the ’gradient approximation’ in which it is assumed that the Green’s functions $G(x,x')$ are rapidly oscillating functions of the relative coordinates $x - x'$ while their variation with the center-of-mass coordinate $X = (x + x')/2$ is small[^1]. This is the case if the medium itself is nearly homogeneous. For a nuclear system it implies that this approximation becomes better the heavier the nucleus is; in heavy nuclei the homogeneous volume part prevails over the inhomogeneous surface region. For a lepton-nucleus reaction the starting points are the single-particle Green’s functions for the nucleons in the target and the incoming lepton. In a homogeneous system it is advantageous to introduce the so-called Wigner transforms of a single-particle Green’s function $$\begin{aligned} G^<_{\alpha \beta}(x,p) &=& \int d^4\xi\, {\rm e}^{i p_\mu\xi^\mu}\, ({\rm i}) \left< \bar \psi_\beta(x+\xi/2) \psi_\alpha (x-\xi/2) \right> \nonumber \\ G^>_{\alpha \beta}(x,p) &=& \int d^4\xi\, {\rm e}^{i p_\mu\xi^\mu}\, ({\rm -i}) \left< \psi_\alpha(x-\xi/2) \bar \psi_\beta (x+\xi/2) \right>\end{aligned}$$ which depend on the Dirac indices. These are the objects that determine the time-evolution in a lepton-nucleus reaction.
They are nothing but the relativistic one-body density matrices. By tracing over the Dirac indices (i.e. concentrating on the spin-averaged behavior) one obtains a vector current density $$\label{FV} F^\mu_V(x,p)= -{\rm i\, tr}\left(G^<(x,p)\gamma^\mu\right) ~.$$ The time development of the vector current density for Dirac particles, i.e. leptons and nucleons, is then given by $$\begin{aligned} \label{transp} \partial_\mu F_V^\mu(x,p) &-& {\rm tr}\left[\Re \Sigma^{\rm ret}(x,p), -{\rm i} G^<(x,p)\right]_{\rm PB} \nonumber \\ &+& {\rm tr} \left[\Re G^{\rm ret}(x,p),-{\rm i} \Sigma^<(x,p)\right]_{\rm PB} = C(x,p) ~.\end{aligned}$$ with $$C(x,p) = {\rm tr} \left[\Sigma^<(x,p)G^>(x,p) - \Sigma^> (x,p)G^<(x,p)\right] ~.$$ In Eq. (\[transp\]) the symbol $\left[\dots\right]_{\rm PB}$ stands for the Poisson bracket $$\left[S,G\right]_{\rm PB} = \frac{\partial S}{\partial p_\mu} \frac{\partial G}{\partial x^\mu} - \frac{\partial S}{\partial x^\mu} \frac{\partial G}{\partial p_\mu}$$ and the quantities $\Sigma$ in (\[transp\]) are self-energies which represent the potentials and are thus essential ingredients of the Hamiltonian. In a homogeneous system of fermions one can relate the two propagators $G^>$ and $G^<$ to each other $$\begin{aligned} \label{G><} {\rm i} G^<(x,p) = + 2 f(x,p)\, \Im G^{\rm ret}(x,p) \nonumber \\ {\rm i} G^>(x,p) = -2 (1 - f(x,p)) \,\Im G^{\rm ret}(x,p) ~,\end{aligned}$$ where $f(x,p)$ is a Lorentz-scalar function. The trace over the imaginary part of the retarded propagator in (\[G&gt;&lt;\]) is – up to some numerical factors – just the single particle spectral function $A(x,p)$.
Reducing the vector current density $F_V^\mu$ to a scalar density $F$ by means of $F_V^\mu = ({p^*}^\mu/E^*) F$, where $p^*$ and $E^*$ are the momentum and the energy for a particle with self-energies, one has from (\[FV\]) $$\label{def_F} F(x,p) = 2 \pi g f(x,p) A(x,p) ~.$$ The function $F$ is the actual density distribution function in the eight-dimensional phase space $(x,p)$. It thus describes the time-development also of off-shell particles. Since it also contains the spectral function, $F$ is often called the ’spectral phase space density’ whereas $f(x,p)$ is just the ’phase space density’. The factor $g$ is a spin-degeneracy factor. The equation of motion for $F_V$ can now be converted into one for $F$. It becomes $$\label{BUU} \mathcal{D}F(x,p) + {\rm tr}\, \left[\Re G^{\rm ret}(x,p), - {\rm i} \Sigma^<(x,p)\right]_{\rm PB} = C(x,p)$$ with $$\label{drift} \mathcal{D}F = \left[p_0 - H,F \right]_{\rm PB} ~.$$ This so-called ’drift term’ (\[drift\]) originates in the first Poisson bracket of Eq. (\[transp\]). $H$ is the single-particle Hamiltonian which involves Lorentz-scalar and -vector mean-field potentials (including in particular the Coulomb field); it is obtained from Eq. (\[transp\]) by identifying the self-energy given there with a potential. The physics content of the second term on the left hand side of Eq. (\[BUU\]) is not obvious. It is also not easy to handle numerically since the term does not explicitly contain the spectral phase space density $F$. A major simplification was achieved by Botermans and Malfliet [@Botermans:1990qi] who showed that this term can be evaluated under the assumptions of local equilibrium in phase space and the gradient approximation. The equation of motion for $F$ then becomes $$\label{GiBUU} \mathcal{D}F(x,p) - {\rm tr} \Big\{ \Gamma(x,p)f(x,p),\Re G^{\rm ret}(x,p) \Big\}_{\rm PB} = C(x,p) ~.$$ Now the second term on the lhs is proportional to $F$, thus simplifying its practical evaluation.
The quantity $\Gamma(x,p)$ is the imaginary part (width) of the retarded self-energy. This shows that this term is connected to the in-medium width and is essential for off-shell transport: its presence ensures that, e.g., the in-medium spectral function of a nucleon becomes a $\delta$-function when the nucleon leaves the nucleus. Using Eqs. (\[G&gt;&lt;\]) and (\[def\_F\]) the term $C(x,p)$ on the rhs of Eq. (\[BUU\]) becomes $$C(x,p) = 2 \pi g \, {\rm tr} \Big\{ \left[ \Sigma^>(x,p) f(x,p) - \Sigma^<(x,p) (1 - f(x,p) ) \right] \, A(x,p) \Big\} ~.$$ This term has the typical structure of a loss term (first term in the brackets) that is proportional to the phase-space density of the interacting particle and a gain term (second term) that takes the Pauli principle into account. The self-energies $\Sigma^{\stackrel{>}{<}}$ contain the transition probabilities for both processes. $C(x,p)$ thus represents a collision term that takes into account that interactions with other particles can either deplete a specific phase-space volume or populate it; in the latter case the Pauli principle (for fermions) is taken into account by the factors $(1 - f)$. Eq. (\[BUU\]) without the second (off-shell transport) term has the form of the Boltzmann-Uehling-Uhlenbeck (BUU) equation. Eq. (\[GiBUU\]) represents the center piece of a practical off-shell transport theory; it is, e.g., encoded in the generator GiBUU [@gibuu; @Buss:2011mx]. For each particle there is one such equation to be solved and they are all coupled through the collision term, on one hand, and by the mean field potentials in $H$ (to which all particles contribute), on the other hand. If particles, such as e.g. pions, are created in a collision the corresponding equation for them has to be added to the initial system of equations [^2].
For an explicit example consider the case of a neutrino-nucleus interaction: initially there is one such equation for the incoming neutrino with a $\delta$-function like spectral function ($\Gamma=0$) and [*A*]{} equations which contain the spectral phase-space densities of the [*A*]{} nucleons in the nuclear ground state; their spectral functions are contained in $F$ and in the Botermans-Malfliet off-shell transport term. At this point it is worthwhile to point out that the theory developed so far as expressed in Eq. (\[BUU\]) is fully relativistic and the equations of motion are covariant. ### Initial conditions The initial conditions for the integrations of the transport equation of motion (\[GiBUU\]) are determined by the spectral phase-space density at time $t=0$ $$\label{spectrphsp} F(x,p)_{t=0} = 2 \pi g f({\ensuremath{\boldsymbol{x}}},0,p) A({\ensuremath{\boldsymbol{x}}},0,p) ~,$$ which is fully determined by the Wigner transform of the one-body density matrix. This density matrix could be obtained from any nuclear many-body theory. Numerical methods ----------------- The generalized BUU equation (\[GiBUU\]) can be solved numerically by using the test-particle technique, i.e., the continuous Wigner function is replaced by an ensemble of test particles represented by $\delta$-functions, $$F(x,p)= \lim_{n(t)\to \infty}\frac{{\left( 2\pi \right) }^4}{N} \sum_{j=1}^{n(t)} \delta[{\ensuremath{\boldsymbol{x}}}-{\ensuremath{\boldsymbol{x}}}_j(t)] \delta[{\ensuremath{\boldsymbol{p}}}-{\ensuremath{\boldsymbol{p}}}_j(t)] \delta[p^0-p^0_j(t)]~, \label{eq:testparticleansatz}$$ where $n(t)$ denotes the number of test particles at time $t$, and ${\ensuremath{\boldsymbol{x}}}_j(t)$ and $p_j(t)$ are the coordinates and the four-momenta of test particle $j$ at time $t$.
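In practice the test-particle initialization amounts to sampling positions from the nuclear density and momenta from the corresponding Fermi sphere. A minimal sketch for a constant-density ('hard sphere') profile; realistic codes use, e.g., Woods-Saxon profiles, and the function name and defaults here are illustrative:

```python
import numpy as np

HBARC = 0.1973  # GeV fm

def init_test_particles(radius_fm, a_nucleons, n_ensembles,
                        rho0=0.168, seed=0):
    """Sample n_ensembles * a_nucleons test particles: positions
    uniform in a sphere (constant density rho0 in fm^-3), momenta
    uniform inside the Fermi sphere of the matching Fermi momentum."""
    rng = np.random.default_rng(seed)
    n = n_ensembles * a_nucleons
    # positions: uniform in a sphere of the given radius
    r = radius_fm * rng.random(n) ** (1.0 / 3.0)
    costh = rng.uniform(-1.0, 1.0, n)
    phi = rng.uniform(0.0, 2.0 * np.pi, n)
    sinth = np.sqrt(1.0 - costh**2)
    x = np.stack([r * sinth * np.cos(phi),
                  r * sinth * np.sin(phi),
                  r * costh], axis=1)
    # Fermi momentum for spin-isospin degeneracy g = 4
    p_f = HBARC * (6.0 * np.pi**2 * rho0 / 4.0) ** (1.0 / 3.0)
    p = p_f * rng.random(n) ** (1.0 / 3.0)          # momentum moduli
    costh_p = rng.uniform(-1.0, 1.0, n)
    phi_p = rng.uniform(0.0, 2.0 * np.pi, n)
    sinth_p = np.sqrt(1.0 - costh_p**2)
    mom = np.stack([p * sinth_p * np.cos(phi_p),
                    p * sinth_p * np.sin(phi_p),
                    p * costh_p], axis=1)
    return x, mom  # positions in fm, momenta in GeV/c
```

For $\rho_0 = 0.168\ \mathrm{fm}^{-3}$ this gives the familiar Fermi momentum of about 0.27 GeV/c.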
As the phase-space density changes in time due to both collisions and the mean-field dynamics, the number of test particles also changes throughout the simulation: in the collision term, test particles are destroyed and new ones are created, for example when a pion is absorbed or produced. At $t=0$ one starts with $n(0)=N \cdot A$ test particles, where $A$ is the number of physical particles and $N$ is the number of ensembles (test particles per physical particle). More details about the numerical treatment of the Vlasov and collision dynamics can be found in [@Buss:2011mx]. While this method is well established for the drift term of the BUU equation the collision term requires some more refinement. Here one has often just used a geometrical argument to relate a cross section between two particles, $\sigma = \pi d^2$, to an interaction distance $d$. This recipe poses a problem when the energies of the interacting particles become relativistic since the distance seen from either one of the two particles may be different in their respective rest frames: there are two different proper times for the two particles involved whereas the equation itself contains only the laboratory time. However, in the context of heavy-ion collisions approximate schemes have been developed that minimize this problem; these same methods can also be used in neutrino generators. Other schemes involve interactions between all particles in a given phase-space cell [@Lang:1993]; any relativistic deficiencies can be minimized that way. The actual choice of a particular reaction channel is then done by a cross-section weighted random decision. Approximations -------------- ### Quasiparticle approximation In the quasiparticle approximation one neglects the width of the single particle spectral function of all particles.
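The geometrical collision criterion just described can be written down in a few lines (with the usual conversion 1 mb = 0.1 fm²); this is a sketch of the basic recipe, not of the refined relativistic prescriptions mentioned above:

```python
import math

def geometric_collision(sigma_mb, b_fm):
    """Geometrical collision criterion of cascade codes: two particles
    collide during a time step if their impact parameter b satisfies
    pi * b^2 < sigma, i.e. b < d = sqrt(sigma / pi)."""
    sigma_fm2 = 0.1 * sigma_mb        # 1 mb = 0.1 fm^2
    d = math.sqrt(sigma_fm2 / math.pi)
    return b_fm < d
```

For a typical NN cross section of 40 mb the interaction distance is $d \approx 1.13$ fm, comparable to the average nucleon-nucleon spacing, which is one reason why the refinements mentioned above matter.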
This gives $$\label{QPapprox} F(x,p) = 2 \pi g \, \delta[p_0 - E(x,\mathbf{p})]\, f(x,\mathbf{p}) ~.$$ Here $E(x,\mathbf{p})$ is the energy of a particle in the mean field, which depends on $x$ and $\mathbf{p}$, and $g$ is the spin-isospin degeneracy. In this approximation the off-shell transport term in (\[GiBUU\]) disappears and, for a $2 \rightarrow 2'$ collision, the equation becomes $$\label{QPBUU} \begin{split} \Big[ \partial_t + & ({\ensuremath{\boldsymbol{\nabla}}}_{{\ensuremath{\boldsymbol{p}}}} E_{{\ensuremath{\boldsymbol{p}}}}) \cdot {\ensuremath{\boldsymbol{\nabla}}}_{{\ensuremath{\boldsymbol{x}}}} - ({\ensuremath{\boldsymbol{\nabla}}}_{{\ensuremath{\boldsymbol{x}}}} E_{{\ensuremath{\boldsymbol{p}}}}) \cdot {\ensuremath{\boldsymbol{\nabla}}}_{{\ensuremath{\boldsymbol{p}}}} \Big ] f(x,{\ensuremath{\boldsymbol{p}}}) = \frac{g}{2} \int \frac{ {\mathrm{d}}^3 {\ensuremath{\boldsymbol{p}}}_2 \; {\mathrm{d}}^3 {\ensuremath{\boldsymbol{p}}}_1' \; {\mathrm{d}}^3 {\ensuremath{\boldsymbol{p}}}_2'}{(2 \pi)^9} \frac{m_{{\ensuremath{\boldsymbol{p}}}}^* m_{{\ensuremath{\boldsymbol{p}}}_2}^* m_{{\ensuremath{\boldsymbol{p}}}_1'}^* m_{{\ensuremath{\boldsymbol{p}}}_2'}^*}{E_{{\ensuremath{\boldsymbol{p}}}}^* E_{{\ensuremath{\boldsymbol{p}}}_2}^* E_{{\ensuremath{\boldsymbol{p}}}_1'}^* E_{{\ensuremath{\boldsymbol{p}}}_2'}^*} \\ & \times (2 \pi)^4 \delta^{(3)}({\ensuremath{\boldsymbol{p}}}+{\ensuremath{\boldsymbol{p}}}_2 - {\ensuremath{\boldsymbol{p}}}_1' - {\ensuremath{\boldsymbol{p}}}_2') \delta(E_{{\ensuremath{\boldsymbol{p}}}}+E_{{\ensuremath{\boldsymbol{p}}}_2}-E_{{\ensuremath{\boldsymbol{p}}}_1'}-E_{{\ensuremath{\boldsymbol{p}}}_2'}) \times \overline{|\mathfrak{M}_{p\,p_2 \to p_1'\,p_2'}|^2}\\ & \times [ f(x,{\ensuremath{\boldsymbol{p}}}_1') f(x,{\ensuremath{\boldsymbol{p}}}_2') \overline{f}(x,{\ensuremath{\boldsymbol{p}}}) \overline{f}(x,{\ensuremath{\boldsymbol{p}}}_2) - f(x,{\ensuremath{\boldsymbol{p}}}) f(x,{\ensuremath{\boldsymbol{p}}}_2)
\overline{f}(x,{\ensuremath{\boldsymbol{p}}}_1') \overline{f}(x,{\ensuremath{\boldsymbol{p}}}_2')] \end{split}$$ with $E_p=E(x,\mathbf{p})$. The functions $\bar f = 1 - f$ contain the effects of the Pauli principle. The transition probability averaged over spins of initial particles and summed over spins of final particles is denoted by $\overline{|\mathfrak{M}_{p\,p_2 \to p_1'\,p_2'}|^2}$. It has to be calculated with final states that contain the effects of the same potential as the one in the drift term. The stars “\*” denote in-medium masses and energies that involve potentials. The corresponding expressions for other collisions, such as $2 \rightarrow 3$, or the decay of a resonance $1 \rightarrow 2 + 3$ can be found in [@Buss:2011mx]. The quasiparticle approximation describes a system of particles that move in a potential well. This is thus obviously a reasonable description not only of a nuclear ground state, but also of the final state interactions that take place in this same potential. The phase-space distributions $f(x,\mathbf{p})$ are the same in the drift term as in the collision term. If many different reaction channels are open, e.g. at T2K energies CCQE scattering and $\Delta$ resonance excitation, the collision term consists of a sum of terms for the various reaction processes. It is essential that for each individual reaction channel the initial ground state distribution $f$ of the nucleons is the same. In the quasiparticle approximation one neglects the in-medium spectral function of particles. For nucleons this essentially amounts to neglecting their short-range correlations that are known to lead to a broadening of the nucleon’s spectral function. For particles that are unstable already in vacuum, i.e. that have a free width (e.g.
the $\Delta$ resonance), one has two possibilities. One can treat these particles only as intermediate, virtual excitations that contribute to the transition matrix elements in the collision term but are never explicitly propagated. For very broad, i.e. very short-lived, resonances this is a reasonable assumption. Alternatively, a resonance such as the $\Delta$ lives long enough to be propagated as an actual particle. This propagation could then be handled by propagating $\Delta$s with different masses, but it requires some knowledge about $\Delta N$ interactions in the collision term. All neutrino generators so far work in the quasiparticle approximation, although GiBUU also allows for off-shell propagation. ### Frozen approximation The mean field potentials contained in the Hamiltonian $H$ in (\[QPBUU\]) depend self-consistently (in a Hartree-Fock sense) on the phase-space distributions of the target nucleons. If nucleons are being knocked out by the incoming neutrino then the mean field also changes. Taking care of this time-dependent change of the target structure requires some numerical expense. A significant computational simplification can be reached by assuming that the interaction is not violent enough to disrupt the whole nucleus, but allows just for the emission of a few ($\approxeq 2 - 3$) nucleons for nuclei with a mass number $A >12$. In this case it is reasonable to assume that the nuclear density distribution does not change significantly with time (“frozen approximation”, sometimes also called “perturbative particle method”). Since at the same time the number of ejected particles is relatively small, collisions take place only between already ejected particles and the frozen target nucleons, but not between ejected particles. This approximation obviously becomes better the heavier the target nucleus and the lower the incoming neutrino energy.
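The final state treatment in the frozen approximation can be sketched in a few lines: an ejected particle is propagated on a straight line through the fixed density, with collision points sampled from the mean free path $\lambda = 1/(\rho\sigma)$. This is an illustrative toy (constant density, no Pauli blocking, no in-medium cross sections), not the implementation of any particular generator:

```python
import math
import random

def propagate_frozen(x0, direction, sigma_mb, rho_fm3=0.168,
                     radius_fm=4.0, rng=None):
    """Propagate an ejected particle on a straight line through a
    frozen, constant-density nucleus (hard sphere of given radius).
    Path lengths between collisions are sampled from an exponential
    distribution with mean free path lambda = 1 / (rho * sigma).
    Returns the number of collision points inside the nucleus.
    direction must be a unit vector."""
    rng = rng or random.Random(1)
    lam = 1.0 / (rho_fm3 * 0.1 * sigma_mb)       # mean free path in fm
    x = list(x0)
    n_coll = 0
    while sum(c * c for c in x) < radius_fm**2:
        step = -lam * math.log(1.0 - rng.random())   # exponential path
        x = [c + step * d for c, d in zip(x, direction)]
        if sum(c * c for c in x) < radius_fm**2:
            n_coll += 1
    return n_coll
```

For $\sigma = 40$ mb at saturation density the mean free path is about 1.5 fm, so a nucleon starting at the center of a medium-sized nucleus undergoes a few collisions on average before escaping.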
The frozen approximation is used in all the generators, but its actual implementation differs considerably. Whereas standard generators freeze the density and then decide about final state interactions by means of a mean free path in that density, in GiBUU collisions between outgoing hadrons and the target nucleons are handled by picking out a target nucleon with its binding energy and its momentum in the Fermi sea. In this case the Fermi sea occupation is not changed during the time-development of the reaction. ### Free particle approximation {#sect:QPapprox} Both GiBUU and FLUKA [@Battistoni:2009zzb] have potentials for the nucleons implemented so that the nuclear ground states are actually bound and some of the nucleon-nucleon effects are already incorporated in the potential. As a consequence the effects of residual interactions (e.g. RPA) are diminished. A price one has to pay for the presence of potentials is computer time. In between collisions the nucleons move on trajectories that are determined by the potentials; these trajectories have to be integrated numerically if potentials (including Coulomb) are present. The widely used neutrino generators GENIE and NEUT (as well as NuWro) do not contain any binding potentials. In these generators the nuclear ground state is not bound and the system of nucleons, initialized with a momentum distribution of either the global or the local Fermi gas model, would fly apart if the nucleons were propagated from time $t=0$ on. Target nucleons and ejected nucleons are thus being treated on a very different basis. While the former are essentially frozen, the latter are being propagated from collision to collision in the final state interaction phase. The phase-space distributions of all nucleons are always those of free nucleons with free dispersion relations connecting energy and momentum.
This makes it numerically simple to follow the nucleons in the final state because in between collisions they move on straight-line trajectories. Binding energy effects are introduced at the end by fitting an overall binding energy parameter to final state energy distributions. In the free particle approximation the structure of equation (\[QPBUU\]) is quite transparent. For free on-shell particles one has [@Buss:2011mx] $$\begin{aligned} \label{QPeq} H &=& p^2/(2M) \nonumber \\ F(x,p) &=& 2\pi g \delta(p_0 - E) f(x,\mathbf{p})\end{aligned}$$ where $g$ is a spin-isospin degeneracy factor. Inserting this into (\[QPBUU\]) gives $$\label{Gentransp} \left(\partial_t + \frac{\mathbf{p}}{M} \cdot \mathbf{\nabla_x}\right) f(x,\mathbf{p}) = C(x,\mathbf{p}) ~.$$ In this equation all potentials and spectral functions are neglected; apart from the collision term it describes the free motion of particles. It is the equation used in most simple Monte-Carlo based event generators. Setting the function $f(\mathbf{x,p}) \sim \delta(\mathbf{x} - \mathbf{x(t)})\, \delta(\mathbf{p} - \mathbf{p}(t))$ then just gives the trajectories ($\mathbf{x(t)},\mathbf{p}(t)$) of freely moving particles (in the absence of the collision term). Together with the frozen approximation this free particle approximation allows one to rewrite the collision term from one involving two-body collisions to one that just involves a mean free path in a fixed density. With that ingredient Eq. (\[Gentransp\]) is the one that is being solved by the standard event generators. Factorization in $\nu A$ reactions ---------------------------------- In a neutrino-nucleus reaction the incoming neutrino first reacts with one (or two) nucleons which are bound inside the target nucleus and are Fermi-moving. The reaction products of this very first reaction then traverse the nuclear volume until they leave the nucleus on their way to the detector.
This time-development suggests invoking the so-called factorization of the whole reaction into a first, initial process and a final state interaction process. The factorization is, however, not perfect. The wave functions of the outgoing nucleons from the first, initial interaction, and thus the cross sections for this initial interaction, are influenced by the potentials present in that final state. The subsequent propagation of particles then takes place in this very same potential. This is not the case in generators which decouple the first interaction from the final state ones, for example by using different modules for initial interactions and final state propagation, often taken from quite different models. For example, in some versions of GENIE, NEUT and NuWro the spectral functions of nucleons are taken into account in the description of the initial state for the very first interaction. These spectral functions contain implicitly information on potentials and off-shellness. However, the outgoing nucleons move freely on straight lines, i.e. without experiencing a potential, and only their energies are corrected by some constant binding energy. This is obviously not consistent. A consistent theory requires using the full off-shell transport for these ’collision-broadened’ nucleons if one is interested not only in semi-inclusive cross sections, but also in quantities such as the final momentum-distribution of the hit nucleon. While factorization between initial and final state interactions is not exact, a careful choice of observables can minimize the coupling between both stages of the collision. This is the central idea behind the proposal by Lu et al. [@Lu:2015hea; @Lu:2015tcr] who have shown that transverse kinematic imbalances of final state leptons and hadrons decouple to some extent from the incoming channel.
Preparation of the target ground state -------------------------------------- The nuclear ground state plays an important role for neutrino-nucleus interactions. In most generators it is simply assumed to be described by a free Fermi gas, either global or local. This ansatz neglects all effects of nuclear binding; the nucleus, if left alone, would simply ’evaporate’. In the nuclear many-body approach [@Benhar:2006wy] all the effects of a binding potential are contained in the spectral function, even though the potentials themselves cannot be easily determined. The scaling models, on the other hand, do not really need a ground state if the scaling function has been determined from experiment. In the SUSA approach, however, it is explicitly calculated from a relativistic mean field theory [@Gonzalez-Jimenez:2014eqa]; its properties then are used in calculating the scaling function. In these calculations the potential is momentum-dependent. From studies of p-A scattering one knows that the nucleon-nucleus potentials are momentum-dependent such that at small momenta ($< p_F$) they produce binding and at larger kinetic energies of about 300 MeV they disappear [@Cooper:1993nx]. In GiBUU the ground state is prepared by first calculating for a given realistic density distribution a mean field potential from a density- and momentum-dependent energy-density functional, originally proposed for the description of heavy-ion reactions [@Welke:1988zz]. The potential obtained from it is given by $$\label{U(p)} U[\rho,p] = A \frac{\rho}{\rho_0} + B \left(\frac{\rho}{\rho_0}\right)^\tau + 2 \frac{C}{\rho_0} g \int \frac{d^3p'}{(2\pi)^3} \, \frac{f(\vec{r},\vec{p}')}{1 + \left(\frac{\vec{p} - \vec{p}'}{\Lambda}\right)^2}$$ which is explicitly momentum-dependent. Here $\rho_0$ is the nuclear equilibrium density and $f(\vec{r},\vec{p}')$ is the nuclear phase-space density with $g$ being the spin-isospin degeneracy.
If a local Fermi-gas model is used it reads $$f(\vec{r},\vec{p}') = \Theta[p_F(\vec{r}) - |\vec{p}\,'|] \quad {\rm with} \quad p_F(\vec{r}) = \left(\frac{6\pi^2}{g} \rho(\vec{r})\right)^{1/3} ~;$$ it is consistent with the spectral phase-space density defined in Eq. (\[spectrphsp\]) for a local Fermi gas momentum distribution, neglecting any in-medium width of the nucleons. By an iterative procedure the Fermi-energy is kept constant over the nuclear volume; the binding is fixed to -8 MeV for all nuclei. The ground state potential is thus by construction momentum-dependent. [^3] This momentum dependence is also present for unbound states and thus affects any processes with outgoing nucleons. In particular, this is also so for QE scattering. Typical potentials are shown in Figure \[fig:Npot\] in their dependence on momentum $p$. The potentials EQS5 and EQS14 agree with each other in the kinetic energy range $100 < T < 300$ MeV. For lower kinetic energies, corresponding to lower energy transfers, EQS5 gives a better description of the data (see the results shown in [@Gallmeister:2016dnq]). The momentum dependence of the ground state potential and of the potential seen by ejected nucleons is thus consistently obtained from one and the same theory. In addition to the nuclear potential just discussed, for charged particles there is also a Coulomb potential present, which is implemented in GiBUU, but in none of the other generators. Both potentials together affect mainly low momentum particles. They also cause a deviation of final-state particles’ trajectories from the straight lines which are usually assumed in standard generators. ### RPA correlations In calculations that start from an unbound initial state with a local Fermi gas momentum distribution it was found that RPA correlations play a major role in the region of the QE peak [@Nieves:2011yp]. Here, at $Q^2 \approx 0.2$ GeV$^2$, uncorrelated calculations were found to describe the data already reasonably well.
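The momentum dependence of Eq. (\[U(p)\]) can be made concrete by evaluating the integral by Monte Carlo for a zero-temperature local Fermi gas. The parameter values below are illustrative 'soft momentum-dependent' numbers of the kind used in the heavy-ion literature, not the tuned GiBUU parameter set:

```python
import numpy as np

HBARC = 0.1973  # GeV fm

def potential_U(p, rho, rho0=0.168,
                A=-0.110, B=0.141, tau=1.24, C=-0.065, lam=0.422,
                n_mc=200_000, seed=2):
    """Monte-Carlo evaluation of the momentum-dependent mean-field
    potential U[rho, p] for a zero-temperature local Fermi gas,
    f(r, p') = Theta(p_F - |p'|).  Since g * int d^3p'/(2pi)^3 over
    the occupied Fermi sphere equals rho, the integral reduces to
    rho times the Fermi-sphere average of the momentum kernel.
    p and lam in GeV/c, rho in fm^-3, A, B, C in GeV; returns GeV."""
    g = 4                                          # spin-isospin degeneracy
    p_f = HBARC * (6.0 * np.pi**2 * rho / g) ** (1.0 / 3.0)
    rng = np.random.default_rng(seed)
    # sample p' uniformly inside the Fermi sphere
    pp = p_f * rng.random(n_mc) ** (1.0 / 3.0)
    costh = rng.uniform(-1.0, 1.0, n_mc)
    dp2 = p**2 + pp**2 - 2.0 * p * pp * costh      # |p - p'|^2
    kernel = np.mean(1.0 / (1.0 + dp2 / lam**2))
    return A * rho / rho0 + B * (rho / rho0)**tau + 2.0 * C / rho0 * rho * kernel
```

With these illustrative numbers one obtains about -75 MeV for a nucleon at rest at saturation density; the potential rises and changes sign with increasing momentum, in line with the p-A phenomenology described in the text.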
Adding RPA effects was then found to lower the cross section by about 25%; after further adding the 2p2h contributions, to be discussed in a later subsection, agreement with the data was again achieved. Only recently the Ghent group has shown that this strong lowering caused by the RPA correlations is mostly an artifact due to the use of an unphysical ground state in these calculations [@Pandey:2016jju]; this was later also confirmed in an independent calculation [@Nieves:2017lij]. In Continuum RPA (CRPA) calculations this group showed that RPA effects overall are significantly smaller and play a significant role only at small $Q^2$ and small energy transfers ($\approx 10$s of MeV) when a realistic ground state potential is used. This result provides a justification for using mean field potentials in generators without any RPA correlations. Fig. \[fig:minervame-Ghent-GiBUU\] shows a comparison of the double-differential cross section for the QE scattering of the MicroBooNE flux on $^{40}$Ar, calculated on one hand within CRPA [@VanDessel:2017ery] and on the other hand – without any RPA correlations – within GiBUU. The overall agreement between the models is quite good and illustrates the small influence of RPA correlations on double-differential semi-inclusive cross sections. ![Double-differential QE cross section for outgoing muons in a reaction of neutrinos in the booster neutrino beam with $^{40}$Ar. The numbers in the upper parts of each subfigure give the cosine of the muon scattering angle with respect to the neutrino beam. All cross sections are given per nucleon. The solid curve gives the results of a CRPA calculation [@VanDessel:2017ery], the dash-dotted curve those obtained with GiBUU.[]{data-label="fig:minervame-Ghent-GiBUU"}](Ghent-GiBUU-comparison.pdf){width="0.9\linewidth"} Since in the calculations of Ref.
[@Nieves:2011yp] the strong RPA effect was nearly canceled by an equally strong effect, with opposite sign, of 2p2h interactions to reach agreement with experiment, one could speculate that the 2p2h contributions were also overestimated in calculations that start with a free, non-interacting ground state. ### Spectral functions Nuclear many-body theory obtains the hole spectral function $\mathcal{P}_h(\mathbf{p},E)$ (SF) as the imaginary part of the hole propagator: it contains all the information about the energy-momentum distribution of bound nucleons [@Benhar:2006wy]. In the semiclassical formalism developed here the same information is contained in $F(x,p)$ at time $t=0$, i.e. at the time of the first contact. The hole spectral function expressed by $F$ is then given by an integral over the nuclear volume $$\label{Pvolint} \mathcal{P}_h(\mathbf{p},E) = \int\limits_{\rm nucleus} \!\!\!{\rm d}^3x \, F(\mathbf{x},t=0,\mathbf{p},E) ~.$$ The function $F(\mathbf{x},t=0,\mathbf{p},E)$ here can come in principle from any sophisticated nuclear many-body theory. It is directly related to the Wigner transform of the one-body density matrix. In the quasiparticle approximation, relevant for use in generators, the initial $F$ is given by $$\label{QPapprox1} F(\mathbf{x},t=0,\mathbf{p},E) = 2 \pi g \, \delta[E - \tilde{E}(\mathbf{x},\mathbf{p})]\, f(\mathbf{x},t=0,\mathbf{p},E) ~.$$ This quasiparticle approximation – used together with the global Fermi gas momentum distribution – has been criticized because it leads to ’spiky’, $\delta$-function shaped energy-momentum distributions.
Indeed, for free particles without any potential and with a global Fermi gas momentum distribution we have $$\begin{aligned} \tilde{E}(x,\mathbf{p}) &=& \sqrt{\mathbf{p}^2 + m^2} \nonumber\\ f(\mathbf{x},t=0,\mathbf{p},\tilde{E}) &=& \Theta\left[p_\mathrm{F} - |\mathbf{p}|\right] \Theta(\tilde{E})\end{aligned}$$ Then the energy $\tilde{E}$, and therefore also $F$, no longer depends on $\mathbf{x}$ so that the spiky behavior is also present after integration over the nuclear volume in Eq. (\[Pvolint\]). If there is a potential present, however, then the hole spectral function is much better behaved even if the momentum distribution is given by a Fermi-gas approximation. I illustrate this here for a local Fermi gas bound in a scalar potential. The hole spectral function is now given by $$\mathcal{P}_h(\mathbf{p},E) = 2\pi g \int\limits_{\rm nucleus} \!\!\!{\rm d}^3x \,\Theta\left[p_\mathrm{F}(\mathbf{x}) - |\mathbf{p}|\right] \Theta(E)\, \delta\left(E - m + \sqrt{\mathbf{p}^2 + {m^*}^2(\mathbf{x},\mathbf{p})}\right)~,$$ where $p_F(\mathbf{x}) \sim \rho(\mathbf{x})^{1/3}$ is the local Fermi momentum (local Fermi gas) and $E$ is the hole energy, taken to be positive. For simplicity it is assumed that in this spectral function all effects of the nucleon potential are contained in the effective mass $m^*$ [@Buss:2011mx] which can depend on location and momentum of the nucleon. The corresponding *momentum distribution* approximates that obtained in state-of-the-art nuclear many-body theory calculations quite well (see Figure 4 in [@Alvarez-Ruso:2014bla]); its *energy distribution* no longer contains the $\delta$-function spikes of a free Fermi gas because of the $\mathbf{x}$-dependence of the potential in $m^*$ and the integration over ${\rm d}^3x$. This can be seen in Figs. 8 and 9 in Ref. [@Alberico:1997jg] which show semi-classical spectral functions calculated as discussed above.
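The smearing of the energy distribution by an $\mathbf{x}$-dependent potential can be checked numerically. Below is a sketch that samples the removal energy $E = m - \sqrt{\mathbf{p}^2 + {m^*}^2(\mathbf{x})}$ for a local Fermi gas in a Woods-Saxon density; the scalar-field strength $s_0$ and the geometry parameters are illustrative, not fitted values:

```python
import numpy as np

M_N = 0.939     # GeV
HBARC = 0.1973  # GeV fm

def removal_energy_samples(p, s0=-0.060, rho0=0.168, R=3.0, a=0.55,
                           n_mc=100_000, seed=3):
    """Sample the semi-classical removal-energy distribution for a
    nucleon of momentum p (GeV/c) bound in a Woods-Saxon density,
    with all potential effects lumped into an effective mass
    m*(x) = m + s0 * rho(x)/rho0 (illustrative parametrization).
    Only points with p < p_F(x) (occupied) and E > 0 contribute."""
    rng = rng_obj = np.random.default_rng(seed)
    r_max = R + 4.0 * a
    r = r_max * rng.random(n_mc) ** (1.0 / 3.0)      # uniform in sphere
    rho = rho0 / (1.0 + np.exp((r - R) / a))         # Woods-Saxon profile
    p_f = HBARC * (6.0 * np.pi**2 * rho / 4.0) ** (1.0 / 3.0)
    occ = p < p_f                                    # Theta(p_F(x) - |p|)
    m_star = M_N + s0 * rho[occ] / rho0
    e = M_N - np.sqrt(p**2 + m_star**2)
    return e[e > 0.0]                                # Theta(E)
```

The resulting removal energies are spread over several tens of MeV instead of sitting at a single spike, illustrating the point made above about the role of the $\mathbf{x}$-dependence of the potential.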
The SF thus obtained is quite similar to the one obtained from NMBT for the nuclear matter background [@Benhar:1994hw], but it does not contain the wiggly structure caused by shell effects that is present in the final NMBT SF. These shell effects may have an influence on exclusive electron-induced reactions. For neutrino-induced reactions, however, the effect will be minor because of the inherent smearing of all observables over incoming energies due to the flux distribution of any neutrino long-baseline beam. Alberico et al. [@Alberico:1997jg] have given a quite detailed discussion of this semi-classical approach to exclusive electron scattering. The main result of that study was that the potential binding the nucleons has a significant impact on exclusive cross sections, which are, in addition, quite sensitive to final state interactions through the mean field. Both of these points are in contrast to the basic assumptions in standard neutrino event generators, which neglect mean field potentials both in the initial and the final state of the reaction.

Reaction types
==============

Coherent interactions
---------------------

An incoming neutrino can interact with the nucleus in many different ways. It can, first of all, coherently interact with all nucleons. This process happens if the momentum transfer is small and can lead either to elastic neutrino scattering (in a Neutral Current (NC) event) [@Akimov:2017ade] or to coherent pion production, where the pion carries off the charge of the $W$ in a CC event [@AlvarezRuso:2011zz]. In both reaction types the target nucleus remains in its ground state. Such processes can be described by a coherent sum over the individual nucleon amplitudes [@Leitner:2009ph] and thus depend crucially on phase coherence between all nucleons. This coherence cannot be described by the quantum-kinetic transport equations (or any MC-based generator), which describe the incoherent time-evolution of single-particle phase-space densities.
Thus, coherent processes fall outside the validity of any semi-classical description and have to be added ’by hand’ to any generator. Theories for the description of coherent processes have been developed for incoming on-shell photons [@Peters:1998mb], and experimental results for these are available [@Krusche:2002iq]. For neutrino-induced coherent processes the situation is more complicated since experiments usually cannot see de-excitation photons from the target nucleus; one can thus not be sure that the target is left in its ground state. Theoretical investigations of coherent neutrino-induced pion production exploit the fact that the incoming gauge boson ’sees’ a coherent superposition of all nucleons and therefore work with the overall form factor of the target nucleus [@AlvarezRuso:2011zz; @Hernandez:2009vm]. This is clearly an advantage in particular at higher energies, where fully microscopic calculations such as the one in [@Leitner:2009ph] are not feasible because of the large number of contributing intermediate states.

Semi-inclusive cross sections
-----------------------------

The transport equations describe the full event evolution, from the very first, initial interaction of the incoming neutrino with one (or more) nucleons through the final state interactions up to the asymptotically free final state consisting of a target remnant and outgoing particles. Lepton semi-inclusive cross sections as a function of the outgoing lepton’s energy and angle (or, equivalently, squared four-momentum transfer $Q^2$ and energy transfer $\omega$) are then easily obtained by stopping the time-evolution after the first initial interaction. They are defined as the sum of cross sections for all microscopic initial processes. Knowledge about the ultimate fate of the initially struck particles is not required for determining that quantity, but their final state wave function enters into the transition amplitude.
Such semi-inclusive cross sections are obviously a necessary test for all neutrino generators. In the theoretical framework outlined above they are obtained by summing over all reaction processes in the first time-step; the further time-development of the reaction is irrelevant for these inclusive cross sections. For example, for the semi-inclusive QE scattering cross section one has $$\label{inclXsect} {\rm d} \sigma^{\nu A}_{\rm QE}(E_\nu,Q^2,\omega) = \int \frac{{\rm d}^3p}{(2\pi)^3} \frac{dE}{2\pi}\,\mathcal{P}_h(\mathbf{p},E) f_{\rm corr}\, {\rm d}\sigma^{\rm med}_{\rm QE}(E_\nu,E,Q^2,\omega) \, P_{\rm PB} (\mathbf{p + q}) ~.$$ Here ${\rm d}\sigma^{\rm med}$ is a possibly medium-dressed semi-inclusive cross section on a nucleon that depends both on the energy of the incoming neutrino and on the energy of the hit nucleon, and $f_{\rm corr} = (k \cdot p)/(k^0p^0)$ is a flux correction factor that transforms the flux in the nucleon rest frame into that in the nuclear rest frame. The momenta $k$ and $p$ denote the four-momenta of the neutrino and the nucleon, respectively, and $P_{\rm PB}$ describes the Pauli-blocking and any final state potential or spectral function effects. The cross section ${\rm d} \sigma$ on the lhs of Equation (\[inclXsect\]) depends on $Q^2$ and $\omega$ and can thus be used to calculate the semi-inclusive cross section. Final state interactions enter into the inclusive cross sections only through the final states needed to calculate the transition amplitude in the first, initial interaction. The interactions that the produced particles experience when they traverse the nucleus are irrelevant. From these discussions it is clear that all generators, i.e. all theories and codes that lead to a full final state event, are also able to describe inclusive and lepton semi-inclusive cross sections. The converse, however, is not true.
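The flux correction factor appearing in Eq. (\[inclXsect\]) is simple to evaluate. The short sketch below (with illustrative kinematics) shows that a nucleon at rest gives $f_{\rm corr}=1$, while a nucleon moving against the beam sees an enhanced flux:

```python
import numpy as np

def flux_correction(k, p):
    """Flux factor f_corr = (k.p)/(k^0 p^0) from Eq. (inclXsect):
    converts the flux in the struck nucleon's rest frame into that in
    the nuclear rest frame. k, p are four-vectors (E, px, py, pz)."""
    k, p = np.asarray(k, float), np.asarray(p, float)
    kdotp = k[0] * p[0] - np.dot(k[1:], p[1:])   # Minkowski product
    return kdotp / (k[0] * p[0])

E_nu = 1.0                                        # GeV, massless neutrino along z
k = [E_nu, 0.0, 0.0, E_nu]
p_rest = [0.938, 0.0, 0.0, 0.0]                   # nucleon at rest
p_moving = [np.hypot(0.938, 0.2), 0.0, 0.0, -0.2] # nucleon moving against beam
print(flux_correction(k, p_rest))    # 1.0
print(flux_correction(k, p_moving))  # > 1: enhanced relative flux
```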
Theories that describe the inclusive or semi-inclusive (as a function of the outgoing lepton’s energy and angle) cross sections very well, but inherently integrate over all final momenta of the first interaction, do not give any information about the subsequent dynamical evolution of the system. Into this category fall the so-called scaling method and the nuclear many-body theories. Both will be discussed in the following sections.

### Scaling models

It was observed quite early on that electron scattering data on nuclei show ’scaling’ over a kinematical range that roughly covers the low-$\omega$ side of the QE peak up to its maximum [@Day:1990mf]. ’Scaling’ here means that the ratios of the nuclear data over the nucleon data in the QE region are described by a function $F(y)$ of the single variable $y(q,\omega)$, which depends on the momentum transfer $q$ and the energy transfer $\omega$. Later on, the scaling function was extended to include more detailed information on the particular target nucleus by introducing the Fermi momentum $p_F$ and a shift parameter to fit the strength and the position of the QE peak [@Caballero:2010fn]. With these fit parameters an excellent description of electron data in the QE-peak region could be obtained [@Antonov:2006md]. This is not too surprising since the width of the QE peak is determined by $p_F$ and its position can be shifted away from the free peak position by a potential. While all these analyses relied on data, models were also developed to calculate the scaling function starting from nuclear theory. This led to the quite successful SUSAv2 model [@Gonzalez-Jimenez:2014eqa; @Megias:2018ujz], which combines a relativistic mean field description of the target nucleus with a calculation using the Relativistic Plane Wave Impulse Approximation (RPWIA) at higher energies.
An open problem here is still the matching of the two different models (RMF vs. RPWIA), which causes difficulties in maintaining relativistic energy and momentum conservation as well as gauge invariance. Nevertheless, the SUSA model is an excellent tool to calculate lepton-induced inclusive and semi-inclusive cross sections on nuclei; semi-inclusive here refers to the outgoing lepton. Its main strength lies in the description of semi-inclusive cross sections in the QE region. Inelastic excitations have to be added in by using phenomenological fits to single-nucleon inelastic structure functions. The method does not give any information on the final state of the reaction, except for the outgoing lepton’s properties, and thus cannot be used in any generator without invoking further assumptions on the states of the outgoing nucleons.

### Methods from nuclear many-body theory

Short-range correlations in nuclei cause a broadening of the nucleon spectral function, which in vacuum is just given by a $\delta$ function. Nuclear many-body theory (NMBT) has made it possible to calculate that function for the nuclear hole states in the nuclear ground state to high precision [@Benhar:1994hw]. By invoking the impulse approximation one then uses this hole spectral function to describe the initial state in a QE scattering event; the final state in this method is usually assumed to be that of a free nucleon (impulse approximation) [@Benhar:2006wy]. The calculations thus contain effects in the nuclear ground state that go beyond the mean-field approximation, e.g. short-range correlations. The lepton semi-inclusive cross sections are then just given by Eq. (\[inclXsect\]) with the hole spectral function obtained from NMBT, by summing the total cross sections for all individual processes.
Thus the reliability of this method also depends on the ability of the theory to describe, besides QE scattering, also pion production (for the T2K energy regime) as well as higher-lying excitations and DIS processes (for the DUNE flux). In presently available calculations these inelastic excitations are usually taken from fits to inclusive inelastic responses. So far most of the results are available only for electron-induced reactions [@Ankowski:2014yfa]; results for neutrino-induced reactions are mostly missing. Nevertheless, the generators GENIE, NEUT and NuWro have implemented a so-called spectral function option into their description of QE scattering. In this option the very first, initial interaction is described by a cross section for QE scattering obtained by using a spectral function. Such a procedure is dubious since it uses a very different ground state for the QE scattering than for the other processes. Furthermore, in these generators the potential for the outgoing particles is absent and there is, therefore, a discontinuity between the initial state potential, hidden in the spectral function, and the final state potential. The method can thus work only in a limited kinematic range where the outgoing nucleon’s kinetic energy is about 250 - 300 MeV, so that the momentum-dependent potential nearly vanishes (see Figure \[fig:Npot\]). More recently, nuclear many-body theories have had a remarkable success in describing the nuclear ground state and low-lying excitations starting from ab initio interactions [@Carlson:2014vla; @Lovato:2014eva]. Their main strength lies in calculating lepton semi-inclusive cross sections at relatively low energies, since they become more accurate the closer they stay to the nuclear ground state. There these calculations have the potential to describe not only the inclusive contributions of true one-particle QE events but also admixtures of 2p2h events and short-range correlations (SRC) that could overlap with the former.
Indeed, they have recently reproduced experimental results for the semi-inclusive QE response in NC events on light nuclei [@Lovato:2014eva]. The method, so far, does not yield any hole spectral functions and can thus not be combined with the impulse approximation, as in the NMBT calculations, to overcome the limitations of its non-relativistic character. It also does not contain any inelastic excitations, nor does it give any information on the final state of the reaction, except for the outgoing lepton’s properties, and thus cannot be used in any generator without invoking further assumptions.

### Comparison with experiment

A quite general difficulty, even for describing semi-inclusive cross sections with these methods, is that experimental neutrino data always contain a superposition of many different coupled reaction channels. In electron-induced reactions the measurable energy transfer can be used to distinguish at least the bulk parts of QE scattering and resonance excitations, for example. In neutrino-induced reactions, however, the beam energy and thus also the energy transfer is smeared (see discussion in the introduction). Thus, any experimentally observed events in the QE region are always mixtures of different reaction processes with very similar final, asymptotic states. At the lower beam energy of the T2K experiment, for example, QE and pion production through the $\Delta$ resonance overlap. This implies that the QE cross section alone cannot be measured in neutrino-induced reactions. This is also true for so-called 0-pion (sometimes also called QE-like) events without any pions in the final state. In this case pions could have been first produced through $\Delta$ excitation and then reabsorbed through FSI. Detailed analyses show that the latter events amount to about 10% in the T2K energy range ($\approx 700$ MeV) [@Mosel:2017ssx].
Theories that work well for electrons in describing the semi-inclusive cross sections around the QE peak thus cannot describe experimental neutrino-induced cross sections, even if these are only for semi-inclusive or 0-pion events.

Quasielastic interactions
-------------------------

The simplest process that can take place in a neutrino-nucleus reaction is that of quasielastic (QE) scattering, in which the incoming neutrino interacts with just one nucleon. In theoretical descriptions one usually assumes that the cross section for that process on a Fermi-moving nucleon is the same as that on a free nucleon, except for a necessary Lorentz transformation to the moving nucleon’s rest frame (“impulse approximation”). Assuming a Fermi gas momentum distribution for the initial nucleons and free movement for the final state particles allows one to give an analytical expression for this cross section [@LlewellynSmith:1971zm; @Smith:1972xh]. Possible off-shell effects are often treated by a shift of the energy transfer [@Benhar:2006wy]. In the early work by Smith and Moniz [@Smith:1972xh] the assumption was hidden in that cross section that the binding energy of the hit nucleon before and after the collision can be neglected; only possible Pauli-blocking of the final state is taken into account. Binding energy effects are then simulated by a change of the energy of the final state nucleon, which is assumed to be free [@Bodek:2018lmc]; sometimes also the energy transfer is modified [@Benhar:2006wy], mainly to correct for the target recoil. Using the so-called impulse approximation, not just inclusive cross sections but also properties of the outgoing nucleons can be calculated. This impulse approximation makes it possible to use the spectral functions obtained from nuclear many-body theory for a description of the initial state while the final state is still assumed to be free.
In terms of the general structure of the transport equations in Section \[Generators\] this procedure corresponds to a mixture of the full theory as outlined there and the quasiparticle approximation without potentials. The very first interaction is described by the interaction of the incoming neutrino with correlated and bound target nucleons described by broadened spectral functions. The subsequent transport of this first final state through the nuclear environment then proceeds as if the nucleons were unbound and $\delta$-function like in their energy-momentum dispersion relation. In these calculations an initial state potential is inherent in the spectral function even though its precise value is not known. Assuming then a free outgoing nucleon implicitly introduces a momentum dependence in the potential. From an analysis of $pA$ reactions [@Cooper:1993nx] it is known that the nucleon-nucleus potential is momentum dependent such that it is attractive ($\approx -50$ MeV) at low momenta ($< p_F$), whereas it becomes very small at larger momenta ($\approx 500$ MeV) (see Figure \[fig:Npot\]). Such a momentum dependence of the single-particle potential has been known to affect the position of the QE peak [@Rosenfelder:1978qt; @OConnell:1988htk]. In [@Ankowski:2014yfa] the authors have demonstrated that such a potential has a major influence on the location of the QE peak, in particular at low momentum transfers where the momentum dependence of the potential is strongest. Indeed, at the lowest $Q^2$ the effect of the final state potential, which is introduced from the outside, is dramatic and shifts the QE peak significantly.

2p2h processes
--------------

From earlier studies with electrons it was well known that incoming electrons can also interact with two nucleons at the same time in so-called 2p2h processes [@Dekker:1991ph; @DePace:2003xu].
They tend to fill in the so-called ’dip region’ between the QE peak and the $\Delta$ peak in semi-inclusive cross sections as a function of energy transfer. It was recognized by M. Ericson and her collaborators [@Delorme:1985ps; @Marteau:1999jp] that such processes can also play a role in neutrino-induced reactions, in particular if only the outgoing lepton is observed. This knowledge was rediscovered after the data of the MiniBooNE experiment showed a surplus of so-called quasielastic-like events as a function of reconstructed neutrino energy [@AguilarArevalo:2010zc]. The surplus could be explained quite well just by these 2p2h processes [@Nieves:2011yp; @Martini:2009uj; @Martini:2010ex; @Martini:2011wp; @Nieves:2011pp]. The models used in this work involved various assumptions, such as a non-relativistic treatment, an unbound local Fermi gas for the ground state, and a restriction of the underlying elementary processes to the $\Delta$ resonance region. The latter limits the application of such models to neutrino energies of less than about 1 GeV. In the following subsections some presently used models for the 2p2h contribution in generators are discussed. A common shortcoming of all of them is that they give only semi-inclusive cross sections for the 2p2h channel. For use in a generator an additional assumption thus has to be made about the momentum distributions (energy and angle) of the final state particles. Usually a uniform phase-space occupation is imposed, which can be formulated very easily in the two-particle center-of-mass system, followed by a boost to the laboratory system [@Lalakulich:2012ac].

### Microscopic 2p2h contribution

#### Lyon-Valencia model

The calculations first used to explain the MiniBooNE data used various approximations.
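As an aside, the uniform phase-space prescription mentioned above — an isotropic back-to-back pair in the two-particle center-of-mass frame, boosted to the laboratory — can be sketched as follows (a minimal illustration; the example total four-momentum is an assumption):

```python
import numpy as np

rng = np.random.default_rng(0)
M = 0.938  # nucleon mass [GeV]

def boost(p4, beta):
    """Boost four-vector p4 = (E, px, py, pz) with velocity vector beta."""
    b2 = np.dot(beta, beta)
    if b2 < 1e-16:
        return p4.copy()
    gamma = 1.0 / np.sqrt(1.0 - b2)
    bp = np.dot(beta, p4[1:])
    E = gamma * (p4[0] + bp)
    pvec = p4[1:] + ((gamma - 1.0) * bp / b2 + gamma * p4[0]) * beta
    return np.concatenate(([E], pvec))

def sample_2N(P_tot):
    """Uniform two-nucleon phase space: isotropic back-to-back pair in the
    two-particle CM frame, boosted to the lab with the pair's velocity."""
    s = P_tot[0]**2 - np.dot(P_tot[1:], P_tot[1:])   # invariant mass squared
    W = np.sqrt(s)
    pstar = np.sqrt(max(W**2 / 4.0 - M**2, 0.0))     # CM momentum of each nucleon
    cth = rng.uniform(-1.0, 1.0)                      # isotropic direction
    phi = rng.uniform(0.0, 2.0 * np.pi)
    sth = np.sqrt(1.0 - cth**2)
    n = np.array([sth * np.cos(phi), sth * np.sin(phi), cth])
    Estar = W / 2.0
    p1 = np.concatenate(([Estar],  pstar * n))
    p2 = np.concatenate(([Estar], -pstar * n))
    beta = P_tot[1:] / P_tot[0]                       # CM velocity in the lab
    return boost(p1, beta), boost(p2, beta)

# illustrative total four-momentum: two nucleons at rest plus an
# energy-momentum transfer (omega, 0, 0, q) = (0.3, 0, 0, 0.5) GeV
P = np.array([2.0 * M + 0.3, 0.0, 0.0, 0.5])
p1, p2 = sample_2N(P)
print(p1 + p2)   # reproduces P: energy-momentum conservation
```

The boost of the pair guarantees exact energy-momentum conservation event by event, while the isotropic CM direction implements the assumed uniform phase-space occupation.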
The calculations reported in [@Martini:2009uj; @Martini:2010ex; @Martini:2011wp] were non-relativistic and involved further approximations, such as the neglect of longitudinal contributions to the vector-axial vector interference term. The calculations reported in [@Nieves:2011yp], on the other hand, were relativistic, but involved approximations in evaluating the momentum-space integrals and in neglecting the direct-exchange interference terms in the matrix elements. Both models start from an unbound ground state assuming a local Fermi-gas momentum distribution. Intrinsic excitations of the nucleon are limited to the $\Delta$ resonance; this limits the applicability of these models to the MicroBooNE and T2K energy regime. The model does not provide the energy-momentum distributions of outgoing nucleons. For use in a generator it thus has to be supplemented with assumptions about these distributions. The calculations of the Valencia group [@Nieves:2011yp] have found their way into the generators GENIE [@Schwehr:2016pvn] and NEUT as options. The model of [@Nieves:2011yp] was found to severely underestimate the 2p2h strength in the dip region [@Rodrigues:2015hik], which led the MINERvA collaboration to increase the flux-folded strength by multiplying it with a 2D correction function amounting to an overall factor of 1.53. A comparison of the original Valencia result or the tuned version in GENIE with electron data and with neutrino data from another experiment would be desirable. Contrary to the MINERvA tune, this readjustment of the 2p2h strength should take place for the 2p2h structure functions before flux integration.

#### MEC model {#Megias2p2h}

A calculation of the 2p2h contributions that is free of most of the approximations just mentioned was performed in Refs. [@Simo:2014wka; @Megias:2014qva]. These authors evaluated all the relevant diagrams involving 2p2h interactions in the nucleon and $\Delta$ energy regime, using a somewhat simplified $NN$ interaction.
The calculation is fully relativistic and includes all the interference terms. Earlier calculations had assumed that the 2p2h contribution is purely transverse. The new calculations by Megias et al. verify that for electrons. For neutrinos they also obtain a longitudinal contribution, although the latter is small relative to the transverse one [@Megias:2018ujz; @Megias:2014qva]. An open problem in these calculations is that only the real part of the $\Delta$ propagator, which appears in all the relevant graphs, is taken into account. The agreement with electron data, usually assumed to be an essential test of a description of neutrino-nucleus data, is quite bad if the full propagator is used. An advantage of this method is that it predicts the relative ratios of pn pairs vs. pp pairs in the outgoing state [@RuizSimo:2016ikw]. This is particularly interesting because electron-induced experiments show a clear enhancement of pn vs. pp pair ejection [@Duer:2018sxh]. This effect is usually ascribed to the dominance of pn pairs vs. pp pairs in nuclear matter. The calculations of Ruiz Simo et al. [@Simo:2016imi] show that in a region that is dominated by the meson exchange current (MEC) at least a part of the observed effect is due to the actual interaction process. The authors have added the microscopic 2p2h cross section calculated as just described to the SUSA description of QE scattering and obtain impressive agreement with semi-inclusive electron and neutrino data [@Megias:2016lke; @Megias:2016fjk] in the QE and dip region. The MEC model evaluates the two-body current with intermediate states up to the $\Delta$ resonance. This is sufficient for the T2K and MicroBooNE energy regime. It does not give any information on the momentum distribution of the outgoing particles and thus cannot be used in a generator without further additional assumptions on the final state of the 2p2h process.
#### NMBT 2p2h excitations

In [@Rocco:2015cil] the spectral function formalism was extended to include also 2p2h excitations by using standard two-body currents. Results were shown in that paper for electron-induced semi-inclusive cross sections. In a very recent paper [@Rocco:2018mwt] results of this method have also been shown for some fixed-energy neutrino reactions. Only semi-inclusive cross sections could be calculated. The results can thus not be used in a generator without further additional assumptions on the final state of the 2p2h process.

### Empirical 2p2h contribution {#GiBUU2p2h}

An alternative to the microscopic calculations of 2p2h contributions is to take these directly from an analysis of semi-inclusive electron scattering data. An analysis by Bosted et al. and Christy [@Bosted:2012qc; @Christy:2015] indeed did extract the structure function for these processes directly from data in a wide kinematical range ($0 < Q^2 < 10$ GeV$^2$, $0.9 < W < 3$ GeV) that goes well beyond the resonance region and implicitly includes not only MEC, but also SRC and DIS components. The starting assumption for this extraction was that these 2p2h effects are purely transverse. The parameterized structure function $W^e_1(Q^2,\omega)$ thus obtained can then be used directly in calculations of electron cross sections. In GiBUU these structure functions have been combined with the other reaction processes and good agreement with semi-inclusive electron data is obtained [@Gallmeister:2016dnq].
Under the assumption that also for neutrinos the dominant 2p2h cross-section contribution ($\sigma^{2p2h}$) is transverse, and that lepton masses can be neglected, the corresponding cross section can be written in terms of only two neutrino structure functions, $W_1^\nu$ and $W_3^\nu$: $$\begin{aligned} \label{LPnu} \frac{d^2\sigma^{2p2h}}{d\Omega dE'} &=& \frac{G^2}{2 \pi^2} E'^2 \cos^2 \frac{\theta}{2} \,\left[2W_1^\nu \left(\frac{Q^2}{2\mathbf{q}^2} + \tan^2\frac{\theta}{2} \right) \right. \nonumber \\ & & \mbox{}\left. \mp W_3^\nu \frac{E + E'}{M} \tan^2\frac{\theta}{2}\right] ~.\end{aligned}$$ Here $G$ is the weak coupling constant; $E'$ and $\theta$ are the outgoing lepton energy and angle, respectively; $E$ is the incoming neutrino energy; and $M$ is the nucleon mass. Walecka et al. [@Walecka:1975; @OConnell:1972edu] have derived a connection between the electron and the neutrino structure functions for 1p processes. In the version used by the Lyon group [@Marteau:1999kt] for 2p2h processes it reads $$\begin{aligned} \label{eq:W1} W_1^\nu &=& \left( 1 + \frac{G_A^2(Q^2)}{G_M^2(Q^2)} \frac{\mathbf{q}^2}{\omega^2} \right)\, W_1^e \,2(\mathcal{T} + 1)\end{aligned}$$ Here $G_M(Q^2)$ is the magnetic coupling form factor, $G_A(Q^2)$ the axial coupling form factor, and $Q^2$ the squared four-momentum transfer, $Q^2 = \mathbf{q}^2-\omega^2$. The structure of $W_1^\nu$ is transparent: to the vector-vector interaction in $W_1^e$ an axial-axial interaction term $\sim G_A^2(Q^2)$ is added; the axial coupling is related to the vector coupling by an empirical factor $\mathbf{q}^2/\omega^2$ [@Marteau:1999kt]. The extra factor 2 is due to the fact that neutrinos are left-handed only. Finally, a factor $(\mathcal{T} + 1)$ appears, where $\mathcal{T}$ is the isospin of the target nucleus.
A similar structure shows up in the V-A interference structure function $$\label{eq:W3} W_3^\nu = 2 \frac{G_A}{G_M} \frac{\mathbf{q}^2}{\omega^2}\, W_1^e \, 2(\mathcal{T} + 1) ~.$$ Exactly this form has also been used in all calculations by the Lyon group [@Martini:2009uj]. The isospin factor in (\[eq:W1\]) and (\[eq:W3\]) is derived under the assumption that neutrinos populate the isobaric analogue states of those reached in electron scattering. The Wigner-Eckart theorem then allows one to connect the transition matrix elements for electrons ($\sim \tau_3$) with those for neutrinos ($\sim \tau_\pm$). This connection was originally derived by Walecka [@Walecka:1975] for single-particle processes, but it also holds for the 2p2h processes considered here because the relevant transition operators can again be expressed in terms of irreducible tensors of SU(2) [@Simo:2016ikv]. As already mentioned, this connection depends on the assumption that neutrino processes excite just the isobaric analogues of states reached in electron scattering. It would thus be very interesting to verify the presence of this isospin factor in actual data. So far most neutrino data have been obtained for the $\mathcal{T}=0$ nuclei C and O. It will, therefore, be very interesting to see the effects of this factor for the isospin-asymmetric nucleus $^{40}$Ar, for which $\mathcal{T}=2$. Verifying this experimentally will require a very good knowledge of the incoming neutrino flux and small other uncertainties (see discussion in [@Gallmeister:2016dnq; @Dolan:2018sbb]). Equations (\[eq:W1\]) and (\[eq:W3\]) relate both the neutrino structure function $W_1^\nu$ and the interference structure function $W_3^\nu$ to just one other function, the structure function $W_1^e(Q^2,\omega)$, determined from electron data.
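As a numerical illustration of Eqs. (\[eq:W1\]) and (\[eq:W3\]), the sketch below maps a given $W_1^e$ to $W_1^\nu$ and $W_3^\nu$. The dipole forms and parameter values assumed here for $G_A$ and $G_M$ are illustrative only (the text does not specify them); the relation $\mathbf{q}^2 = Q^2 + \omega^2$ follows from the definition of $Q^2$ above:

```python
def ga(Q2, gA0=1.267, MA=1.0):
    """Axial form factor: an assumed dipole (gA0 and MA in GeV are
    standard illustrative values, not taken from the text)."""
    return gA0 / (1.0 + Q2 / MA**2) ** 2

def gm(Q2, mu=4.706, MV=0.84):
    """Magnetic form factor: an assumed dipole."""
    return mu / (1.0 + Q2 / MV**2) ** 2

def neutrino_sf(W1e, Q2, omega, T=0):
    """Eqs. (eq:W1) and (eq:W3): neutrino structure functions from the
    electron 2p2h structure function W1e, with q^2 = Q^2 + omega^2 and
    the isospin factor 2(T + 1)."""
    q2 = Q2 + omega**2
    r = ga(Q2) / gm(Q2)
    iso = 2.0 * (T + 1)
    W1nu = (1.0 + r**2 * q2 / omega**2) * W1e * iso
    W3nu = 2.0 * r * (q2 / omega**2) * W1e * iso
    return W1nu, W3nu

W1nu, W3nu = neutrino_sf(W1e=0.5, Q2=0.3, omega=0.3, T=0)
print(W1nu, W3nu)
# for a T=2 target such as 40Ar both are enhanced by a factor of 3
# relative to a T=0 target like C or O
```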
Parameterizing the latter, either by the Bosted et al. fit or by some other ansatz, as a function of $Q^2$ and $\omega$ then determines the electron, neutrino and antineutrino cross sections consistently [@Gallmeister:2016dnq]. This phenomenological model does not make predictions about the magnitude of pp pair vs. pn pair ejection since the phenomenological analysis on which the model is based did not take any final state information into account. Instead, in GiBUU, which uses this phenomenological description, the isospin composition of the pairs is entirely determined by statistical ratios. Because of the wide kinematical range of the data that were used to extract $W_1^e$, the model is applicable to experiments with high incoming energy (MINERvA, NOvA, DUNE). Since it is based on an empirical analysis of semi-inclusive 2p2h processes, the model does not give any information on the final state. In GiBUU it is, therefore, supplemented with the assumption that the energy and momentum distributions of the two outgoing nucleons are determined by phase space.

Pion production
---------------

In neutrino-nucleus reactions pion production plays a major role. In particular at the higher energies of the MINERvA or DUNE experiments pion production, through resonances or DIS, makes up about 1/2 - 2/3 of the total interaction cross section, and even at T2K it amounts to up to 1/3 of the total. Therefore it must be under quantitative control in the generators used to extract cross sections and neutrino mixing properties from such experiments. Very recently, two comparisons of the generators NEUT (with charged pion-nucleus data) [@PinzonGuerra:2018rju] and GENIE (with MINERvA neutrino-induced pion production data) [@Stowell:2019zsh] have shown that major discrepancies of these generators with the data exist that cannot be tuned away.
Neutrino-induced pion production on nuclei can - similar to the QE scattering description - be treated by describing the pion production on single nucleons that are bound and Fermi-moving. For the lower energy transfers pion production proceeds predominantly through the $\Delta$ resonance. The binding energy correction is then usually handled by assigning a complex density-dependent self-energy to the $\Delta$, which takes care of such effects as Pauli-blocking and collisional broadening. The pion production cross section in the resonance region is then obtained as a coherent sum over resonance and background amplitudes [@Leitner:2006ww; @Leitner:2006sp; @Hernandez:2010bx]. The models of Leitner et al. [@Leitner:2006ww; @Leitner:2006sp] and Hernandez et al. [@Hernandez:2010bx] are essentially identical and differ only in their treatment of the background amplitude.

### Resonance amplitudes

The resonance amplitudes are determined by the nucleon-resonance transition currents; the corresponding interaction vertices involve form factors. The number of these form factors is tied to the spin of the resonances. Thus, spin-1/2 resonances are connected with two form factors and spin-3/2 ones, such as the $\Delta$ resonance, with four such form factors. This is true for electromagnetic interactions, where only vector couplings appear. For neutrinos axial couplings are present in addition, which require the same number of form factors again. Thus, for the case of a spin-isospin 3/2-3/2 resonance such as the $\Delta$, there are four vector form factors and four axial ones. In the standard neutrino generators GENIE and NEUT the so-called Rein-Sehgal model [@Rein:1980wg] for the form factors for resonance excitation is still used, although that model is known to fail in its description of electron scattering data [@Graczyk:2007bc; @Leitner:2008fg]. It is not used in any electron-induced studies of nucleon resonances.
A better way is to obtain the vector form factors $C_i^V(Q^2)$ ($i=3,\ldots,6$), which are directly related to the electromagnetic transition form factors, from the measured helicity amplitudes (see [@Leitner:2009zz]). The helicity amplitudes are determined in, e.g., the MAID analysis [@Drechsel:2007if]; the connection between these helicity amplitudes and the vector form factors is given in [@Leitner:2009zz]. Current conservation imposes the constraint that one of the vector form factors vanishes ($C_6^V=0$). This is the approach followed in GiBUU. For the axial form factors the situation is less well determined since the quality of the neutrino data is not sufficient to determine all four axial form factors $C_i^A(Q^2)$. Already in Ref.  it was noticed that $C_5^A$ gives the dominant contribution. $C_6^A$ can be related to $C_5^A$ by PCAC [@Lalakulich:2005cs], leaving also in the axial sector only three form factors. In addition, $C_3^A$ is set to zero based on an old analysis by Adler [@Adler:1968tw], whereas $C_4^A$ is linked to $C_5^A$. Based on these relations all theoretical analyses have so far used the axial form factor $C^A_5(Q^2)$ (see Eq. (18) in [@Leitner:2006ww]) with various parameterizations. The latter usually go beyond that of a simple dipole [@Leitner:2006ww; @Leitner:2006sp; @Lalakulich:2005cs; @Hernandez:2010bx] and have been obtained by fitting the neutrino pion-production data on an elementary target available from two experiments performed in the mid-1980s at Argonne National Laboratory (ANL) [@Radecky:1981fn] and at Brookhaven National Laboratory (BNL) [@Kitagaki:1986ct]. Both experiments had also extracted various invariant mass distributions from their data. The analysis of these invariant mass data together with the experimental $d\sigma/dQ^2$ distributions then led the authors of [@Lalakulich:2010ss] to conclude that probably the BNL data [@Kitagaki:1986ct] were too high.
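For orientation, the simplest of these parameterizations, a plain dipole for $C_5^A$ together with the link $C_4^A = -C_5^A/4$, can be sketched as follows. The normalization $C_5^A(0) \approx 1.2$ (the off-diagonal Goldberger-Treiman value) and the axial mass used here are illustrative assumptions, not the fitted values of the references:

```python
def c5a(Q2, c5a0=1.2, MA=0.95):
    """Plain dipole ansatz for the dominant N-Delta axial form factor.
    c5a0 ~ 1.2 follows from the off-diagonal Goldberger-Treiman relation;
    MA [GeV] is a fit parameter. Both values are illustrative only."""
    return c5a0 / (1.0 + Q2 / MA**2) ** 2

def c4a(Q2):
    """C4^A linked to C5^A via the commonly used relation C4^A = -C5^A/4."""
    return -c5a(Q2) / 4.0

print(c5a(0.0), c4a(0.0))   # 1.2 -0.3
```

The parameterizations actually fitted to the ANL and BNL $d\sigma/dQ^2$ data modify this simple dipole shape, which is why the normalization of those data matters so much.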
That the BNL data were indeed too high has been confirmed by a reanalysis of the old data by Wilkinson et al. [@Wilkinson:2014yfa], who used the QE data obtained in the same experiment for a new flux calibration. After that flux recalibration the BNL data agree with the ANL data within experimental uncertainties. There remains an uncertainty, however, that is connected with possible final state effects in the extraction of pion production cross sections on the nucleon from data obtained with a deuterium target [@Wu:2014rga]. In a very recent calculation that covers a wider kinematical regime, Nakamura et al. have tried to correct the old bubble chamber data on pion production for Fermi motion and final state effects [@Nakamura:2018ntd]. They find, in particular at lower neutrino energies, corrections of the order of 10-20%. New, more precise measurements on elementary targets are needed to verify this result. ### Background amplitudes A complication in determining the form factors for neutrino-induced excitations of nucleon resonances from comparison with data is due to background contributions to the observed cross section. For *electro*-production of pions, $t$-channel processes provide a background to the resonance contribution. The total cross section is then given by the coherent sum of $s$-channel and $t$-channel amplitudes. Analogously, also for *neutrino* interactions there is a background contribution due to Born-type diagrams in which the incoming $W^+$ (for neutrino-induced CC pion production) interacts with the nucleon line, $W N \rightarrow N' \pi$. The Lagrangian for this latter interaction can be obtained from effective field theory, valid for low energies up to about the $\Delta$ region [@Lalakulich:2010ss; @Hernandez:2007qq; @Alvarez-Ruso:2015eva]. The description of the $\Delta$ region obtained with this model is quite good, but for the higher-lying resonances one has to resort to modeling both the resonance and the background contributions.
Since for the higher-lying resonances there is even less experimental information available, all models simply use dipole parameterizations for the resonance transition form factors, with the strength obtained from PCAC [@Lalakulich:2010ss]. Much more ambitious, but also significantly more involved, is the dynamical coupled-channel model of photo-, electro- and weak pion production developed in Ref. [@Nakamura:2015rta], which has been applied to all resonances with invariant masses up to 2.1 GeV. In this model background and resonance contributions emerge from the same Lagrangian, and thus the relative phase between resonance and background amplitudes is fixed. Furthermore, not only pion but also other meson production channels can be predicted, for example the important two-pion production as well as kaon and eta production. These elementary data on the nucleon are an essential input into calculations for production on nuclei. Representing a coherent sum of amplitudes in a semiclassical, fundamentally incoherent generator poses a problem that requires some practical approximation. In GiBUU, for example, the resonance part alone is handled by exciting nucleon resonances that are then propagated as new particles until they decay again. The sum of the background part and the interference part is lumped together into a background term; it is assumed that pions that are due to this ’background’ are produced immediately. GENIE instead uses the Rein-Sehgal model for the resonance excitation and adds a tuned fit to average total cross sections as background. NuWro uses a similar procedure. ### Pion production and absorption Pion production and pion absorption are closely linked through basic quantum mechanical constraints. This can be clearly seen in an energy regime where only the nucleon resonances are essential and DIS does not yet contribute significantly, i.e. in the energy regime of T2K and MicroBooNE.
Here the resonance ($\Delta$) contribution to pion production proceeds via $$\label{Wpiprod} W^+ + N \rightarrow \Delta \rightarrow \pi + N' ~,$$ whereas pion absorption proceeds through the same resonance $$\label{piabs} \pi + N \rightarrow \Delta \qquad \Rightarrow \qquad \Delta + N \rightarrow N' + N'' ~.$$ In both processes the very same $\pi N \Delta$ vertex appears, once in the $\Delta$ production and once in its decay. The generators GENIE, NEUT and NuWro all violate this detailed-balance (time-reversal) constraint by using quite different models for pion production and absorption. For example, in these standard generators the production is described by resonance excitations within the Rein-Sehgal model, whereas pion reabsorption is handled by a very different model, mostly the pion-absorption cascade of the Valencia group [@Salcedo:1987md]. This is particularly dangerous if then, as usual in the generators, tuning parameters are introduced that allow one to tune pion production and pion absorption independently of each other. This obviously introduces artificial degrees of freedom. The amount of ’stuck-pion events’, i.e. events in which a pion was first produced and then reabsorbed in the same target nucleus, is most sensitive to any imbalance between pion production and pion absorption. The generator NuWro uses a model with a formation-time parameter for the emitted pion even in the $\Delta$ resonance region [@Golan:2012wx]. This gives additional tuning degrees of freedom for pion absorption which, however, have no physical basis: in the resonance region the time development of pion production is governed by the resonance width; there is no room for additional parameters. Finally, it is worthwhile to point out that descriptions of pion production in generators can be checked against the many available data on photo- and electro-production of pions on nuclei [@Krusche:2004uw; @Krusche:2004zc; @Clasie:2007aa; @ElFassi:2012nr].
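The statement above that the time development of pion production is governed by the resonance width can be made quantitative: the lifetime of the $\Delta$ is simply $\tau = \hbar/\Gamma$. A rough estimate, assuming the free $\Delta$ width of about 117 MeV:

```python
# Rough estimate of the Delta lifetime from its free width,
# tau = hbar / Gamma. Gamma ~ 0.117 GeV (assumed free Delta width).
HBAR_GEV_S = 6.582e-25   # hbar in GeV*s
C_FM_PER_S = 3.0e23      # speed of light in fm/s (approximate)

def lifetime(width_gev):
    """Resonance lifetime in seconds from its width in GeV."""
    return HBAR_GEV_S / width_gev

tau = lifetime(0.117)        # a few times 1e-24 s
# Distance travelled by a Delta with gamma*beta ~ 1 in that time:
path_fm = tau * C_FM_PER_S   # of order the internucleon spacing
```

The resulting propagation distance of order 1-2 fm, comparable to the internucleon spacing, is what leaves no room for an additional formation-time parameter in the resonance region.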
Unlike any other generator, GiBUU, which respects the stringent connection between pion production and pion absorption in the resonance region, has been applied to many of them [@Hombach:1994gb; @Effenberger:1996im; @Effenberger:1999jc; @Lehr:1999zr; @LehrDiss:2003; @Muhlich:2004zj]. For neutrino-induced reactions the GiBUU results for pion production are generally in agreement with experimental data [@Gallmeister:2016dnq; @Mosel:2017ssx; @Lalakulich:2010ss; @Mosel:2015tja; @Mosel:2017nzk; @Mosel:2017zwq]. Deep inelastic scattering ------------------------- Neutrino physicists have traditionally defined all reactions connected with the emission of more than one pion as Deep Inelastic Scattering (DIS). This is an oversimplification, since over the last 30 years the study of nucleon excitations has shown that up to invariant masses of about 2 GeV there are many resonances that also decay into two and even more pions; the two-pion threshold opens at a mass of about 1.5 GeV. Only above about 2 GeV do the individual nucleon resonances start to overlap and the DIS regime set in. Furthermore, nucleon resonance excitations and DIS have very different $Q^2$ dependencies, such that DIS is connected with events with $Q^2 > 1$ GeV$^2$. The semi-inclusive cross sections for DIS can be expressed in terms of structure functions [@Leader:1996]. In the pQCD regime, i.e. $Q^2 > 1$ GeV$^2$, these structure functions can be written down in terms of parton distribution functions. In regions where pQCD does not yet provide the correct description, parameterizations of these structure functions have been obtained by fits to semi-inclusive cross section data. The high-energy event generator PYTHIA [@Sjostrand:2006za] is then often used to actually provide the mass and energy distributions of the final state. When using this framework for neutrino-nucleus reactions one is faced with the complication that the target nucleon is bound, i.e. off shell.
Various schemes have been investigated to deal with this problem; it was found that these effects play only a small, but visible, role at intermediate energies [@Lalakulich:2012gm]. Also for DIS reactions electro-production data provide an excellent testing ground for generators. The HERMES data, taken with a lepton beam at 28 GeV and 12 GeV incoming lepton energy on nuclear targets up to Xe [@Airapetian:2007vu; @Airapetian:2011jp], but also the JLab data taken by Brooks et al. [@Accardi:2009qv], are particularly relevant for such tests. These data have been analyzed with an early version of GiBUU [@Gallmeister:2007an]. This latter study has also shown that the often used prescription to forbid any interactions within a so-called ’formation time’ is unrealistic and not in agreement with the data from HERMES and the EMC experiment. Final state interactions ======================== In the preceding sections I have already mentioned the importance of final state interactions (FSI), e.g. in connection with the final state potential in QE processes or in connection with pion reabsorption. These final state interactions are due to hadron-hadron interactions. They are thus independent of the electroweak nature of the initial interaction. In this section I now summarize some of the methods used in generators to describe FSI. This description can necessarily only be rather superficial, since there often exist no detailed write-ups of the physics used in these generators and their algorithms. A notable exception is GiBUU, for which both the physics and many details of the numerical implementation have been extensively documented [@Buss:2011mx]. Final state interactions can be split into different categories: - The final state wavefunction for the very first, initial interaction is affected by a potential for the outgoing hadrons. This FSI obviously affects the initial transition rate.
- This same potential acts not only in the initial interaction but also during all of the following cascade. Particles then move on possibly complicated trajectories, which can have an influence on observed final state angular distributions. - Another kind of FSI is that which an initially produced particle experiences when it collides with other nucleons inside the target on its way out of the nucleus to the detector. In such collisions both elastic and inelastic scattering, as well as particle production, can take place, possibly connected also with charge transfer. The cross sections for these processes can be affected by the nuclear medium, with the changes depending on the local density and the momentum of the interacting particles [@Li:1993rwa]. A problem that is often not discussed is connected with the final state of the target remnant. The nucleons in the target nucleus carry energy and momentum which have to enter into the overall energy conservation of the primary interaction, both in the initial and the final state. This also holds for a final state collision between the primarily ejected nucleon and a target nucleon, but in the usually used frozen configuration this is often not taken into account. One problem that one faces in this description is that the neutrino ’illuminates’ the whole target nucleus and, therefore, any kicked-out particles (nucleons) or produced particles (e.g. pions) can start their way through the nuclear target at any density. This is very different from a reaction in which an incoming particle hits a nucleus from the outside. For example, $\pi A$ absorption data are sensitive to the overall absorption; they do not, however, give any information on the mean free path inside the nucleus as long as that mean free path is smaller than the nuclear diameter.
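The mean-free-path argument can be sketched numerically: with a local density $\rho$ and an in-medium cross section $\sigma$, the mean free path is $\lambda = 1/(\rho\sigma)$, and the collision probability over a cascade step of length $\Delta s$ is $P = 1 - \exp(-\Delta s/\lambda)$. The numbers below are illustrative assumptions, not GiBUU input.

```python
import math

# Mean free path lambda = 1/(rho * sigma) and per-step collision
# probability in a cascade; the density and cross section values
# are illustrative assumptions only.
RHO0 = 0.16       # nuclear saturation density in fm^-3
SIGMA = 7.0       # assumed in-medium hadron-nucleon cross section
                  # in fm^2 (= 70 mb)

def mean_free_path(rho=RHO0, sigma=SIGMA):
    """Mean free path in fm."""
    return 1.0 / (rho * sigma)

def collision_prob(step_fm, rho=RHO0, sigma=SIGMA):
    """Probability of a collision within one cascade step."""
    return 1.0 - math.exp(-step_fm / mean_free_path(rho, sigma))

lam = mean_free_path()   # ~1 fm, well below a nuclear diameter of 5-7 fm
```

With a mean free path of about 1 fm, a particle produced in the nuclear interior essentially always rescatters before leaving the nucleus, which is why the starting-point distribution matters so much.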
A significantly better check is provided by electro- or photo-production of pions on nuclei ($\gamma A \to \pi A^*$), since the incoming photons illuminate the whole nuclear volume, just as neutrinos do. Unfortunately, no such comparisons of standard generator results with photo-production data are available, the notable exception being again GiBUU [@Krusche:2004zc; @Krusche:2004uw]. In the following discussion I will briefly go through some of the models used in generators[^4]. 1. [**Effective Models**]{} - GENIE hA In this model the cascade is reduced to a single step in which e.g. pions are impinging on an iron nucleus. Their absorption is calculated and then scaled to other nuclei. - GENIE hA 2014 uses the same oversimplification, only the $A$-scaling has been improved. This model, which is still the default in GENIE, goes back to the INTRANUKE program developed about 25 years ago; it is “simple and empirical, data-driven” in the words of the GENIE manual, but is really quite outdated and misses the relevant physics: a particle + A reaction does not describe a particle being produced inside the nucleus and then cascading through it. 2. [**Cascade models**]{} - GENIE hN is a “full intranuclear cascade” model, according to the manual of v. 3.0, but no details are given. Obviously pion production and absorption are treated independently of each other. Hadrons move freely, without potentials. - NuWro The treatment of FSI in NuWro is similar to that in GENIE hN and thus the same criticisms apply. - FLUKA [@FLUKA] is a model that has been widely used for all sorts of nuclear and hadron interaction studies. It also has a neutrino option [@Battistoni:2009zzb], but so far has not been used to analyze or reproduce neutrino data from ongoing experiments. - NEUT Nucleon beam scattering data are used to tune the nucleon FSI.
The final state interactions of different particles, such as pions, are handled by introducing tunable multiplicative factors that are different for different physics processes. For pions the FSI were originally handled by simple attenuation factors. More modern versions of the code use the Valencia cascade for the FSI of pions; this violates detailed balance, as discussed above. 3. [**Transport Models**]{} The GiBUU transport model starts the outgoing particles inside the nuclear volume at their point of creation and then propagates them out of the nucleus. It respects the detailed balance constraints on pion production and absorption in the resonance region. The relevant cross sections for both processes are calculated within one and the same theoretical model (see App. B1 in [@Buss:2011mx]). The pion final state interactions are then handled by a full, relativistically correct cascade. Calculations for neutrino-nucleus reactions have so far used the frozen configuration approximation. GiBUU also allows treating the target nucleons dynamically; this could become important when at high incoming energies the target nucleus is significantly disrupted. Tuning of generators ==================== Finally, a comment is in order on tuning the generators to data. This is a widespread practice among experimentalists using neutrino generators such as GENIE and NEUT. Cross sections and potentials are crucial inputs to all generators and they all carry with them some experimental uncertainties; varying them within their experimental error bars is well justified. Such fits of physics parameters to data could actually help to decrease the experimental uncertainties on elementary in-medium cross sections. On the other hand, often also unphysical parameters are tuned. An example is the use of different momentum distributions (global vs. local Fermi gas) in describing different elementary processes, or the ’brute force’ change of physics input.
An example for the latter is provided by the tuning of 2p2h cross sections in the MINERvA experiment [@Rodrigues:2015hik]. The GENIE version used for that analysis originally started with a description of 2p2h processes taken from the Valencia model [@Nieves:2011pp]. After comparing with data it was found that the 2p2h contribution provided by that model is too weak, and its integrated strength had to be increased by about 53% [@Ruterbories:2018gub]. This increase was not achieved by changing coupling constants or similar physics parameters, but instead by fitting a 2D function with free parameters directly to the data. A very recent analysis of the MINERvA pion production data has shown that even allowing tuning of GENIE ingredients does not give a satisfactory fit to the data [@Stowell:2019zsh]. Similarly, an analysis of pion-nucleus data using the generator NEUT has shown significant discrepancies between data and generator results [@PinzonGuerra:2018rju]. One may wonder, however, about the ultimate conclusion from these studies, which probably just showed that incorrect physics models cannot be fitted to data. GiBUU, on the other hand, which contains a consistent pion production model in the resonance region, has successfully reproduced many pion data without any special tune [@Mosel:2017ssx; @Mosel:2017zwq]. Generator results ================= In this section I use some results of the generator GiBUU to illustrate some properties of neutrino-nucleus interactions in different energy regimes and for different targets. I will start with semi-inclusive cross sections and then also show some more exclusive observables for the experiments T2K, MINERvA, MicroBooNE and DUNE. All results shown in this section were obtained with the 2019 version of GiBUU [@gibuu], running in the quasiparticle approximation described in Sect. \[sect:QPapprox\].
No special tune was used; all calculations for different incoming flux distributions and for different targets were obtained with one and the same set of input parameters, ’out of the box’. Comparisons with electron data ------------------------------ Even though the community in general agrees that checks of generators against electron data are necessary, there exist very few published comparisons of generator results with such data. For NEUT there are no published comparisons available; for GENIE only very recently some comparisons have become available [@Ankowski:2019; @Ashkenazi:2018], which indicate serious discrepancies with data throughout the inelastic excitation region. For NuWro only some preliminary results exist [@Zmuda:2015twa]. For GiBUU there is a long list of comparisons with both electron and photon data available [@Mosel:2018qmv; @Gallmeister:2016dnq; @Gallmeister:2007an; @Effenberger:1996rc; @Lehr:1999zr; @Lehr:2001an; @Lehr:2003ka; @Lehr:2003km; @Buss:2006vh; @Buss:2006yk; @Buss:2007ar; @Buss:2008sj; @Gallmeister:2010wn; @Hombach:1994gb; @Muhlich:2003tj; @Muhlich:2002tu; @Kaskulov:2008ej; @Kaskulov:2011ab], covering both inclusive and semi-inclusive particle production data obtained in electro-nuclear and photo-nuclear reactions. Semi-inclusive cross sections ----------------------------- ### T2K The mean beam energy at T2K is about 650 MeV, so one expects that here QE scattering and pion production through the $\Delta$ resonance are dominant. This is indeed borne out in the semi-inclusive cross section shown in Fig. \[fig:T2K-incl\]. ![Double-differential cross section per nucleon for outgoing muons for the T2K neutrino beam hitting a $^{12}C$ target at the near detector. The different curves depict the contributions of CCQE scattering, $\Delta$ excitation and 2p2h processes to the cross section as a function of the outgoing muon kinetic energy, as indicated in the figure.
The numbers in the upper part of each subfigure give the cosine of the muon scattering angle with respect to the neutrino beam. All cross sections are given per nucleon.[]{data-label="fig:T2K-incl"}](T2K-incl-multiplot.pdf){width="0.8\linewidth"} While all angular bins look very similar, with a QE scattering peak at about $T_\mu = 0.5$ GeV, the most forward bin ($\cos\,\theta=0.975$) shows a different behavior, with a long, flat shoulder out to higher $T_\mu$. This behavior already shows up in the individual QE and $\Delta$ contributions shown separately in the figure. It is due to the higher energy tail in the incoming neutrino flux. In addition, DIS also starts to contribute at about $T_\mu = 1$ GeV (not explicitly shown in the figure). Note that the 2p2h contribution is essentially absent in the most forward bin. This reflects the transverse character of this reaction type in GiBUU. ### MicroBooNE MicroBooNE is an experiment that runs in the Booster Neutrino Beam with an $^{40}$Ar target. It has $4\pi$ coverage; therefore I show in Fig. \[fig:microb-incl-multiplot\] the double differential cross section over the full angular range. ![Double-differential cross section per nucleon for outgoing muons for the MicroBooNE experiment with a $^{40}Ar$ target. The different curves depict the various contributions to the cross section as a function of the outgoing muon kinetic energy, as indicated in the figure. The numbers in the upper part of each subfigure give the cosine of the muon scattering angle with respect to the neutrino beam. All cross sections are given per nucleon.[]{data-label="fig:microb-incl-multiplot"}](MicroB-incl-multiplot.pdf){width="0.8\linewidth"} The cross section is now presented in wider bins than the ones shown before for the T2K experiment. As a consequence, the first bin is centered less forward.
The flat behavior of the total cross section in the forward bin for T2K thus does not show up here in the plot of the double differential distribution for MicroBooNE. Otherwise, the results are quite similar, with the exception of the 2p2h contribution, which is now larger than for T2K, reflecting the non-zero isospin of the target nucleus $^{40}$Ar ($\mathcal{T}=2$). It will be most interesting to see if this increase in the 2p2h strength is indeed borne out by the data that are presently being taken [@Adams:2019iqc]. A verification would give direct information on the states that are excited in a neutrino-nucleus 2p2h reaction, since the isospin factor emerges under the assumption that neutrino-induced reactions populate the isobaric analogue states of electron scattering experiments. ### MINERvA LE and ME The MINERvA experiment has run at a higher energy, but with targets similar to those used in the T2K ND detector. The mean beam energy in its lower energy (LE) configuration is about 3.5 GeV, so one expects a larger contribution of resonance excitations and even DIS in this case. This can indeed be seen in the lepton semi-inclusive double-differential cross sections as a function of muon kinetic energy in various angular bins in Fig. \[fig:minervale-incl-multiplot\]. The experiment has acceptance cuts: muons with energies below about 1.5 GeV or with angles larger than 20 degrees are not detected. These cuts are also used in the figure. ![Double-differential cross section per nucleon for outgoing muons for the MINERvA lower energy neutrino beam hitting a $^{12}C$ target. The different curves depict the various contributions to the cross section as a function of the outgoing muon kinetic energy, as indicated in the figure. The numbers in the upper part of each subfigure give the cosine of the muon scattering angle with respect to the neutrino beam.
All cross sections are given per nucleon.[]{data-label="fig:minervale-incl-multiplot"}](MINERvA_LE-fullincl-multiplot_dEdcos.pdf){width="0.8\linewidth"} One sees that the cross sections are strongly forward peaked. In the most forward bin ($\cos\,\theta=0.995$) QE and $\Delta$ excitation nearly completely overlap. Both together make up more than 3/4 of the total cross section in that bin. The 2p2h contribution is comparatively negligible (about 5% of the total); at the peak of the total cross section the 2p2h contribution is the smallest of all. DIS accounts for less than 10% in the peak region of that bin, but it has a long high-energy tail[^5]. For muon kinetic energies above about 4 GeV the DIS contribution becomes dominant. In the next angular bin ($\cos\,\theta=0.985$) DIS accounts for nearly all of the cross section for $T_\mu > 4$ GeV. This defines an optimal kinematical region for studies of DIS in neutrino-induced reactions. Figure \[fig:minervadsigmadq2LE\] gives the $Q^2$ distribution, where $Q^2$ here is calculated from $Q^2 = 4 E_\nu E_\mu \sin^2(\theta_\mu/2)$. ![$Q^2$ distribution per nucleon for inclusive and for 0-pion events in the MINERvA LE beam hitting a $^{12}C$ target.[]{data-label="fig:minervadsigmadq2LE"}](MINERvA_LE_dsigmadQ2.pdf){width="0.7\linewidth"} Since $Q^2$ cannot be measured directly, but must be reconstructed, this is a less direct observable than the double differential cross sections. Nevertheless it is interesting to see that the inclusive $Q^2$ distribution reaches out to fairly large $Q^2$, while the distribution for 0-pion events dies out at $Q^2 \approx 1.5 \:{\rm GeV}^2$. The latter reflects the fact that in 0-pion events resonance excitations and DIS, which both dominantly decay into a nucleon and pions, are suppressed. DIS events are connected with momentum transfers $Q^2 > 1$ GeV$^2$, where they contribute significantly to the inclusive cross section.
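The reconstruction formula quoted above is easily evaluated; in the sketch below the beam energy, muon energy and scattering angle are assumed values chosen only for illustration.

```python
import math

# Q^2 reconstruction from the measured muon kinematics,
# Q^2 = 4 * E_nu * E_mu * sin^2(theta_mu / 2), muon mass neglected.
def q2_reconstructed(e_nu, e_mu, theta_mu):
    """Energies in GeV, angle in radians; returns Q^2 in GeV^2."""
    return 4.0 * e_nu * e_mu * math.sin(theta_mu / 2.0) ** 2

# Illustrative example: a 3.5 GeV neutrino producing a 2.5 GeV muon
# at 10 degrees (values assumed for illustration only):
q2 = q2_reconstructed(3.5, 2.5, math.radians(10.0))   # ~0.27 GeV^2
```

Note that $E_\nu$ itself is not measured but reconstructed, which is the main reason why such $Q^2$ distributions are less direct observables than the double differential cross sections.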
Presently, the MINERvA experiment is also analyzing data from a so-called medium energy (ME) run, where the flux peaks at about 5.75 GeV. If one looks at the same distributions as before, now for the medium energy beam, in Fig. \[fig:minervame-incl-multiplot\], one sees similar shapes for the overall cross sections. Again $\Delta$ excitation and QE scattering give about equal contributions to the cross section in the most forward bin. In that bin ($\cos\,\theta = 0.995$) the DIS contribution is already larger under the peak, but its shoulder is not as visible as in Fig. \[fig:minervale-incl-multiplot\] because the other components are broader. It becomes dominant in the next angular bin ($\cos\,\theta = 0.985$), and from $\cos\,\theta = 0.975$ on it accounts for nearly all the cross section. Choosing only events with an outgoing muon angle $> 10$ degrees thus enriches the DIS events significantly. ![Same as Fig. \[fig:minervale-incl-multiplot\] for the MINERvA medium energy neutrino beam.[]{data-label="fig:minervame-incl-multiplot"}](MINERvA_ME-fullincl-multiplot_dEdcos.pdf){width="0.8\linewidth"} The $Q^2$ distribution (Fig. \[fig:minervadsigmadq2ME\]) is similar to the one for the LE flux. Again, restricting the events to those with zero outgoing pions cuts the cross section by nearly a factor of 2 at small $Q^2$ and brings the cross section down to nearly zero at $Q^2 \approx 2.5$ GeV$^2$. ![$Q^2$ distribution per nucleon for inclusive and for 0-pion events in the MINERvA ME beam hitting a $^{12}C$ target.[]{data-label="fig:minervadsigmadq2ME"}](MINERvA_ME_dsigmadQ2.pdf){width="0.7\linewidth"} ### ArgoNeuT The data from ArgoNeuT are particularly interesting because this experiment was the first to use an Ar target in a higher energy beam. Fig. \[fig:Argodd\] shows the double-differential cross section for a neutrino beam; this flux peaks at about 6 GeV with a long, hard tail out to larger energies.
![Lepton semi-inclusive double differential cross section in the forward region for the ArgoNeuT experiment. The numbers in each bin give the central $\cos\,\theta$ value of the angular bin. Some contributions from different reaction mechanisms are indicated in the figure. []{data-label="fig:Argodd"}](ArgoN-multiplot.pdf){width="0.8\linewidth"} It is noticeable that now even in the most forward bin ($\cos\,\theta = 0.995$) the largest contribution comes from DIS, which contributes about a factor of three more than the QE and $\Delta$ processes. This is due to the fact that DIS cross sections depend approximately linearly on the incoming neutrino energy and the ArgoNeuT flux has significant strength also at higher energies. These Ar data could in principle serve as a testing ground for the isospin dependence of the 2p2h component. However, compared to the dominant DIS contribution this component is so small, even for $\mathcal{T}=2$, that there is hardly any sensitivity to its strength. Fig. \[fig:Argopmu\] gives the muon momentum spectrum for the same experiment. Shown are results for a calculation with $\mathcal{T}=2$; a calculation with $\mathcal{T}=0$ gives nearly the same curve because the overall contribution of 2p2h processes is small. Also shown is the spectrum for 0-pion events. This cross section is quite flat and significantly smaller than the inclusive one. The latter feature is due to the dominance of DIS in the fully inclusive cross section and to the fact that DIS events nearly always lead to pions in the final state. ![Semi-inclusive muon momentum spectrum for the ArgoNeuT experiment. The dashed curve gives the cross section for $0 \pi$ events. Data are from [@Acciarri:2014eit].[]{data-label="fig:Argopmu"}](ArgoN_pmu.pdf){width="0.8\linewidth"} ### DUNE Finally, we discuss the semi-inclusive cross section for the DUNE near detector (ND). DUNE will work with a neutrino flux similar to the MINERvA LE flux, but with $^{40}$Ar as a target.
On the one hand, we thus expect a very similar behavior as for MINERvA, in particular a strong forward peaking of the cross section. On the other hand, the isospin of $^{40}$Ar is $\mathcal{T}=2$; this leads to an overall enhancement of the 2p2h contribution by a factor of 3. It will be interesting to look for any observable consequences of that isospin change. All calculations for the DUNE ND were performed with the flux from [@DUNEFlux]. That the double differential cross section is indeed quite similar to the one obtained for the MINERvA LE run can be seen in Fig. 15 of Ref. [@Gallmeister:2016dnq]. We therefore show here only the most forward angles, in Fig. \[fig:DUNEincl\]. ![Same as Fig. \[fig:minervale-incl-multiplot\] for the DUNE near detector in the 2017 flux.[]{data-label="fig:DUNEincl"}](DUNE-multiplot.pdf){width="0.8\linewidth"} Most noticeable is the significantly larger (than in the MINERvA LE run) contribution of 2p2h excitations, which comes about because the target nucleus Ar has $\mathcal{T}=2$ whereas in the MINERvA experiment the target is C with $\mathcal{T}=0$. This difference leads to an enhancement of the 2p2h cross section by a factor of 3 relative to the one on C. DIS is not as large as in the ArgoNeuT experiment because the incoming flux does not have the sizeable high energy tail that the ArgoNeuT flux had. Semi-inclusive cross sections for hadrons ----------------------------------------- From the preceding discussions it is clear that already the lepton semi-inclusive cross sections are made up of various quite different reaction mechanisms, the most important ones being QE scattering and pion production (through resonances and DIS). This is a challenge even for theories that describe inclusive cross sections, such as GFMC and SuSA-inspired models, since they usually cannot handle the elastic and the inelastic processes without further approximations.
In this subsection I will now illustrate some features of semi-inclusive cross sections for processes with outgoing hadrons in the DUNE Near Detector (ND). These can, for example, be spectra of certain given particles while an integration over all other degrees of freedom is performed. This is the actual strength of generators such as GiBUU, which yield information about the complete final state. ### Transparency Before coming to the DUNE cross sections for outgoing nucleons it is interesting to look at a comparison with the transparency of nuclei for outgoing nucleons that was measured in electron-induced reactions for different targets and over a wide range of momentum transfer $Q^2$. Experimentally, the transparency was defined as the ratio of the number of nucleons in a given angular interval, chosen to be approximately symmetric around the momentum of the virtual photon, to the same number expected if there were no final state interactions. In the experimental work the latter number was obtained from Glauber or Distorted Wave Impulse Approximation calculations. The theoretical transparencies were, on the other hand, consistently obtained by running GiBUU with and without final state interactions. The comparison of GiBUU results, obtained already in 2001, with the data is shown in Fig. \[fig:transp\]. ![Transparency ratio for C, Fe and Au compared to data (open symbols) from JLAB [@Abbott:1997bc; @Garrow:2001di] and SLAC [@ONeill:1994znv]. The full symbols represent the GiBUU results. Taken from [@Lehr:2001an; @LehrDiss:2003].[]{data-label="fig:transp"}](transparency.pdf){width="0.8\linewidth"} The agreement is obviously quite good. Both the $Q^2$-dependence and the $A$-dependence are described very well in a calculation that used no ’unusual’ effects, such as color transparency.
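A back-of-the-envelope version of such a transparency can be obtained by averaging the attenuation factor $e^{-\ell/\lambda}$ over production points in a homogeneous sphere, with $\ell$ the straight-line exit distance. This is only a crude sketch (the radius and the mean free path below are assumed values), not the GiBUU transport calculation or a Glauber calculation.

```python
import math

# Crude nuclear transparency: average exp(-path/lam) over production
# points in a homogeneous sphere of radius radius_fm, with the nucleon
# escaping along +z. Radius and mean free path are assumed values.
def transparency(radius_fm, lam_fm, n=60):
    """Grid average of the attenuation factor over the sphere."""
    total, weight = 0.0, 0.0
    for i in range(n):
        for j in range(n):
            b = (i + 0.5) / n * radius_fm            # impact parameter
            z = (2.0 * (j + 0.5) / n - 1.0) * radius_fm
            if b * b + z * z > radius_fm ** 2:
                continue                              # outside sphere
            path = math.sqrt(radius_fm ** 2 - b * b) - z  # exit distance
            total += b * math.exp(-path / lam_fm)    # b: cylindrical weight
            weight += b
    return total / weight

t_c = transparency(2.7, 2.0)   # carbon-sized sphere, assumed lam = 2 fm
```

Even this toy estimate reproduces the qualitative features of the data: the transparency is well below 1 and decreases with increasing nuclear size, while the detailed $Q^2$ dependence requires the momentum-dependent in-medium cross sections of the full transport calculation.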
The structures in these curves are mostly explained by experimental constraints on the intervals over which the experimental transparencies were determined [@LehrDiss:2003]; the peak at $Q^2 \approx 1$ GeV$^2$ reflects a minimum in the nucleon-nucleon interaction cross section at a proton momentum of about 0.7 GeV. A more detailed discussion of these results can be found in [@LehrDiss:2003].

### Particle spectra

#### Nucleons

Fig. \[fig:DUNEdsigmadtn\] shows the kinetic energy spectra of protons and neutrons expected in the neutrino beam for the DUNE experiment. Plotted are both the spectra for events with one and only one outgoing nucleon and those for multi-p or multi-n events[^6]. While the former are small and quite flat as a function of nucleon kinetic energy, the multi-nucleon events exhibit a steep rise in their spectra below about 0.3 GeV [^7]. This rise is due to final state interactions: while at the end of the first, initial neutrino-nucleus interaction there may be only one outgoing nucleon, its collisions with other nucleons cause an ’avalanche’ of nucleons. Energy conservation then requires that all these secondary particles carry lower energies.

![Kinetic energy spectra for protons and neutrons in the DUNE ND. Shown are results both for single and for multi-nucleon events. The single events contain one and only one nucleon of the given isospin. The Multi events contain at least one nucleon of the given isospin and any number of nucleons with the same or a different isospin.[]{data-label="fig:DUNEdsigmadtn"}](DUNE_dsigmadTN.pdf){width="0.8\linewidth"}

This steep rise also implies that care has to be taken when a calorimetric energy reconstruction is performed. Typically, the detectors have a lower detection threshold of about 50 MeV [@Acciarri:2015uup]. This means that a large part of the nucleon ejection cross section is not visible. This is even more so in detectors that do not see outgoing neutrons in the final state.
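The invisible part of such a spectrum can be estimated by integrating it below the detection cutoff. A sketch with a purely illustrative spectrum shape (the exponential stand-in is not a GiBUU result; grid and shape parameters are arbitrary):

```python
import numpy as np

def fraction_below_threshold(dsigma_dT, T, T_cut):
    """Fraction of the integrated cross section dsigma/dT that lies
    below the kinetic-energy threshold T_cut (same units as grid T)."""
    dT = T[1] - T[0]                      # uniform grid assumed
    total = dsigma_dT.sum() * dT
    below = dsigma_dT[T < T_cut].sum() * dT
    return below / total

# illustrative steeply falling stand-in for the multi-nucleon spectrum
T = np.linspace(0.0, 1.0, 2001)           # nucleon kinetic energy in GeV
spectrum = np.exp(-T / 0.15)              # arbitrary units
lost = fraction_below_threshold(spectrum, T, T_cut=0.05)
print(f"fraction below a 50 MeV threshold: {lost:.2f}")
```

For a real estimate the toy spectrum would be replaced by the generator output; the point is only that a 50 MeV cutoff on a spectrum rising steeply towards low energies removes a sizeable fraction of the yield.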
#### Pions

The following two figures, \[fig:DUNEdsigmadtpi\] and \[fig:DUNEsigmadthetapi\], give the pion kinetic energy and angular distributions.

![Kinetic energy spectra of incoherently produced pions for all charges in the DUNE ND. Shown are the cross sections both for the single and the multi-meson events.[]{data-label="fig:DUNEdsigmadtpi"}](DUNE_dsigmadTpi.pdf){width="0.8\linewidth"}

![Angular distribution of incoherently produced pions of all charges both for single- and for multi-meson events in the DUNE ND.[]{data-label="fig:DUNEsigmadthetapi"}](DUNE_dsigmadtheta_pi_1.pdf){width="0.8\linewidth"}

The pion spectra show the typical behavior known from lower energies, with a strong peak at about 0.1 GeV and a flattening around 0.25 GeV before the cross section continues to fall again towards larger kinetic energies. The flat region is due to pion reabsorption through the $\Delta$ resonance. As expected for a neutrino beam, $\pi^+$ production dominates, but $\pi^0$ is close, reflecting the larger number of neutrons compared to protons in an Ar target and the presence of charge transfer reactions. Even $\pi^-$ amounts to about 1/2 of $\pi^+$, due to the strong presence of DIS. The figures also show that – because of the strong DIS component – the multi-pion cross sections are much larger than the single-pion ones. All cross sections are forward peaked. Earlier theoretical studies have shown that pions with kinetic energies below about 0.03 GeV cannot reliably be described by semiclassical methods, but require a quantum-mechanical treatment [@Buss:2006vh]. The DUNE experiment was originally assumed to have a threshold of about 100 MeV kinetic energy [@Acciarri:2015uup]. This is just about where the peak of the cross section is located. Thus, the detector cuts out a non-negligible part of the cross section if the cutoff cannot be lowered.

#### Strange baryons

Fig. \[fig:DUNEdsigmadts\] gives spectra for strange baryons.
In this case the single and multiple production cross sections lie on top of each other. This just reflects the lower probability (and higher thresholds) for strangeness production. Strangeness is produced mainly through DIS processes, often in connection with other mesons.

![Spectra of strange baryons. Shown are both the spectra for multi-strange and for single-strange events in the DUNE ND. The ’single’ events contain one and only one baryon of the given flavor. The ’Multi’ events contain at least one strange baryon of the given flavor together with any number of other baryons.[]{data-label="fig:DUNEdsigmadts"}](DUNE_dsigmadTS.pdf){width="0.8\linewidth"}

It is seen that $\Lambda$ production prevails. The increase of the spectrum towards low kinetic energies is reminiscent of that observed above in the nucleon spectra. Indeed, the origin of that rise is again to be found in the FSI, where the produced $\Lambda$ collides with the target nucleons and thereby loses energy.

#### Strange mesons

The same is true for strange meson production (Fig. \[fig:DUNEdsigmadtsmeson\]). There only $K^+$ and $K^0$ play any essential role; all the other species are much suppressed.

![Spectra of strange mesons, both for single and for multi-meson events in the DUNE ND.[]{data-label="fig:DUNEdsigmadtsmeson"}](DUNE_dsigmadTsmeson.pdf){width="0.8\linewidth"}

Summary and Conclusion
======================

Generators are essential tools that make nuclear theory directly applicable to the description of neutrino-nucleus reactions. If the agreement with neutrino-nucleus reaction data is good, then these generators can be relied on to do the ’backwards computation’ necessary to determine the incoming neutrino energy in a broad-band oscillation experiment, which in turn is needed for the extraction of neutrino mixing parameters, the mass hierarchy and a possibly CP-violating phase $\delta_{CP}$ from long-baseline oscillation experiments.
In view of the importance of generators for neutrino long-baseline experiments it is surprising to see that often these generators are being used by experimenters in a ’black box mode’, without much knowledge of and concern about their inner workings. This information is indeed hard to come by, since none of the generators GENIE, NEUT or NuWro comes with a detailed and comprehensive documentation of its physics content and the algorithms used [^8]. For cases where there is some text, e.g. for the treatment of FSI, the discussion is superficial. The reason is probably that the generators often incorporate quite old code fragments and methods for which the basic knowledge and the original authors are no longer available. This is different for GiBUU, which represents a consistent framework of theory and code, is well documented [@Buss:2011mx], and most of its primary authors are still active in the field. For testing generators, besides the neutrino data themselves, data from electro-nuclear or photo-nuclear reactions are useful and necessary. Their initial reactions are identical to the vector part of neutrino-induced reactions, and the final state interactions are the same if the incoming kinematics (energy and momentum transfer) are identical. Checking generators against electron or photon data is thus an indispensable requirement. So far, only a few generators can actually be used to describe electro- or photo-nuclear interactions. There are no published results from NEUT or NuWro available for such reactions, and the results obtained with GENIE are quite unsatisfactory for energy transfers beyond the QE peak [@Ankowski:2019]. Checks of semi-inclusive cross sections with GiBUU can be found in Refs. [@Gallmeister:2016dnq; @Mosel:2017ssx; @Mosel:2018qmv], where they can be seen to work quite well. Agreement with such data is a necessary prerequisite for any generator, but it is not sufficient since all these electron data are semi-inclusive ones.
In addition, electro-nuclear pion [@Kaskulov:2008ej] and $\rho$ [@Gallmeister:2010wn] production data must be used for a further check, as well as the many data from photonuclear pion production on nuclei [@Krusche:2004zc; @Krusche:2004uw; @Mertens:2008np; @Nanova:2010sy]. In 2017 a group of authors interested in the interplay of experimental and theoretical neutrino-nucleus physics published the following list of ’general challenges facing the community’ [@Alvarez-Ruso:2017oui]:

1. The development of a unified model of nuclear structure giving the initial kinematics and dynamics of nucleons bound in the nucleus.

2. Modeling neutrino-bound-nucleon cross sections not only at the lepton semi-inclusive cross section level, but also in the full phase space for all the exclusive channels that are kinematically allowed.

3. Improving our understanding of the role played by nucleon-nucleon correlations in interactions and implementing this understanding in MC generators, in order to avoid double counting.

4. Improving models of final state interactions, which may call for further experimental input from other communities such as pion-nucleus scattering.

5. Expressing these improvements of the nuclear model in terms that can be successfully incorporated in the simulation of neutrino events by neutrino event generators.

All of these points are indeed quite relevant. They reflect shortcomings of the generators widely used by most neutrino physics experimenters. They do not, however, reflect open physics problems, nor do they reflect the absence of a practical solution. Contrary to the impression one could have gotten from the discussions in Ref. [@Alvarez-Ruso:2017oui], these challenges have actually been tackled by nuclear theorists and solved about 20 years ago. The solutions have found their way into the practical implementation in the generator GiBUU. I therefore now add some comments to the points above:

1.
In contrast to all the other presently used generators, GiBUU does have the target nucleons bound in the nucleus. In its present version the phase-space distribution of these particles is semi-classical (local Thomas-Fermi gas in a mean-field potential), but this initial state could be replaced by any more refined model, e.g. from nuclear many-body theory. The agreement with data reached already with the present model indicates that presently available neutrino data are not very sensitive to details of the initial phase-space distributions and spectral functions. This is so because, on one hand, the smearing over incoming energies, necessarily present in all neutrino experiments, smears out all quantum-mechanical phases. On the other hand, the presently available neutrino data still carry large uncertainties. Choosing particularly sensitive observables may also change the situation in the future.

2. Generators have to provide this information on the final state, and indeed all the widely used generators do that, at the expense, however, of patching up one model for the initial interactions with another, different model for the final state interactions. Models built for lepton semi-inclusive cross sections, such as the GFMC calculations and the SuSA model calculations, cannot give that information.

3. There is so far no indication of the presence of nucleon-nucleon short-range correlations in any neutrino data. In electron experiments such correlations show up in quite exclusive reactions that fix energy and momentum transfer in reactions restricted to one or two outgoing nucleons. Such exclusivity cannot be achieved in neutrino experiments, simply because of the broad energy distribution in the incoming beam, which naturally also leads to a broad smearing over energy transfers. Observables that are reasonably insensitive to the incoming energy may offer a way to more exclusivity.

4.
Final state interactions indeed have to be checked against data obtained in other experiments. Pion-nucleus data have been used for generator checks; equally essential are particle production data from pA experiments. As discussed earlier, however, more stringent would be checks against photo- and electro-nuclear data, since photons populate the whole nucleus whereas incoming pions suffer strong initial state interactions.

5. This is not a future program: all of these points are implemented in GiBUU.

Future developments
-------------------

The ongoing and planned long-baseline neutrino experiments all strive for a precision-dominated determination of neutrino mixing parameters. These parameters can be obtained from the experimental observations only by means of a generator, which has to be as up-to-date as the experimental hardware is. One cannot stress this point strongly enough: without a reliable state-of-the-art generator the experiments cannot reach their goals. It is thus time to combine scientific expertise from nuclear theory with resources mainly from the high-energy experimental community to construct a new generator which could be built on the experience gained with GiBUU, but also on some of the QGP generators. This new generator has to fulfill the following physics requirements:

- Foremost, it has to be built on consistent nuclear theory, with one and the same ground state for all reaction processes. This requirement removes artificial and unphysical tuning degrees of freedom.

- It has to include potentials, both nuclear and Coulomb, from the outset. The most obvious feature of nuclei is that they are bound; generators should respect that in the preparation of the ground state. The potentials are well determined from other nuclear physics experiments; there is then no more freedom to introduce artificial binding energy parameters.
Potentials will necessarily increase the computing times; this is the (small) price one has to pay for a realistic nuclear physics scenario.

- The starting point for any such generator should be transport theory. Transport theory is no longer some esoteric theoretical ’dream’; it is well established in other fields of physics as well as in nuclear physics, where all the top-level experiments searching for the quark-gluon plasma (QGP) at RHIC and LHC use generators built on transport theory.

- If sophisticated spectral functions of bound nucleons actually become essential for the description of neutrino-nucleus reactions, then modern transport theory provides the only consistent method to perform the off-shell transport necessary to bring nucleons back to their mass shell once they leave the nucleus. The implementation of off-shell transport in actual codes is one of the major achievements of transport theory during the last 20 years [@Leupold:1999ga; @Buss:2011mx; @Cassing:2008nn]. It has found its way into the transport codes used by the QGP community.

From a more practical point of view any new generator should have the following properties in addition:

- The new generator must be well-documented, both in its physics content and its algorithms. In addition to a well-structured, complete documentation, a thorough comment structure in the code is necessary. After all, the neutrino generator would have to be used over the next 20 - 30 years, most probably also by physicists who were not involved in the writing of the original code. Maintenance of the code is then possible only if documentation and comment structure are available.

- The new generator should allow for easy variations of essential parameters. Elementary cross sections are an essential input into any generator, and these cross sections carry experimental uncertainties. It must therefore be possible to vary them within their uncertainties to see the effect on the final oscillation parameters.
To be distinguished from that is the tuning of unphysical or redundant parameters, which can, for example, appear when different subprocesses are described by very different theories and general principles such as time-reversal invariance are not respected.

- The code also must have a modular structure, with well-defined interfaces, that allows easier modification if new theories or algorithms for individual subprocesses become available. Special care must be taken, however, that the individual modules respect the intrinsic connections between different processes, such as pion production and absorption. There has to be some scientific supervisory structure that guarantees that the various processes are described on a consistent basis. A ’platform model’ in which various different groups provide alternative descriptions of individual processes may work for technical problems such as determining detector efficiencies. It cannot work for a theoretical understanding of data and for the ultimate task of extracting the relevant neutrino properties from long-baseline experiments.

- Existing generators contain important parts which deal with the complications caused by the broad neutrino beam and the extended target sizes; both are features not present in nuclear physics experiments and outside of the expertise of nuclear theorists. These parts of the codes are essential and should be taken over into any new generator.

Any such new generator has to be checked against data not only from electro-nuclear experiments, but also from neutrino-nucleus interaction experiments. Since generators are used in the experiment for a determination of detector efficiencies, there is the danger of a ’logical loop’ in which a generator is first used to get the data points and then is used again to compare the data with. This is now common practice in neutrino physics experimental groups.
Instead, a ’nuclear physics model’ should be implemented in which programs such as GEANT are used to determine purely experimental properties, such as efficiencies and thresholds, but the final comparison with data takes place with ’real theory’. Any future, newly built neutrino generator will have to be tested against data obtained nowadays in experiments such as MINERvA or the ND experiments in T2K or NOvA. This test will be difficult if the published data depend on generator [*X*]{} version [*y.z*]{} for which no documentation exists. Care must, therefore, be taken that the data contain as little ’generator contamination’ as possible. This is, for example, not the case if cuts on incoming neutrino energies or on invariant masses are imposed on the published data, since such cuts can only be imposed by means of a generator.

I gratefully acknowledge many extremely helpful and productive discussions with Kai Gallmeister, both about the physics and the inner workings of GiBUU. I am also indebted to some of my experimental colleagues for explaining to me details of data and experiments. Here, Debbie Harris, Xianguo Lu and Kevin McFarland have been invaluable discussion partners. I am also grateful to Luis Alvarez-Ruso for a careful reading of the manuscript.

References {#references .unnumbered}
==========

[^1]: Here $x$ is the space-time four-vector

[^2]: For bosons, e.g. pions, the equation actually looks slightly different, see [@Buss:2011mx]

[^3]: The parameters appearing in this potential are given for two different parameter sets in a Table in Ref. [@Mosel:2018qmv].

[^4]: For GENIE I rely on the manual, dated March 13, 2018, to be found at https://genie-docdb.pp.rl.ac.uk/cgi-bin/ShowDocument?docid=2

[^5]: In GiBUU all events connected with nucleon excitations above an invariant mass of 2 GeV are identified as ’DIS’

[^6]: The cross section shown in Fig.
\[fig:DUNEdsigmadtn\] for Multi events is that for events with one of the nucleons with the given isospin and others with any isospin present. The kinetic energy is that of any one nucleon with the given isospin in such an event

[^7]: Kinetic energies below about 0.02 - 0.04 GeV cannot be trusted in a semiclassical theory because of general quantum-mechanical effects becoming essential [@Buss:2006vh].

[^8]: For GENIE even the latest manual of version 3 [@GENIE:2018] often contains only headers, followed by empty pages.
---
abstract: 'Mean-field theory tells us that the classical critical exponent of the susceptibility is twice that of the magnetization. However, the linear response theory based on the Vlasov equation, which is naturally introduced by the mean-field nature, makes the former exponent half of the latter for families of quasistationary states having second-order phase transitions in the Hamiltonian mean-field model and its variants. We clarify that this strange exponent is due to the existence of Casimir invariants, which trap the system in a quasistationary state for a time scale diverging with the system size. The theoretical prediction is numerically confirmed by $N$-body simulations for the equilibrium states and a family of quasistationary states.'
author:
- Shun Ogawa
- Aurelio Patelli
- 'Yoshiyuki Y. Yamaguchi'
title: 'Non-mean-field critical exponent in a mean-field model : Dynamics versus statistical mechanics'
---

Introduction {#sec:introduction}
============

Are the critical exponents of an isolated dynamical system the same as those computed via statistical mechanics? We tackle this question by dealing with a ferromagnetic-like model in the mean-field universality class and considering the critical exponents of the zero-field susceptibility. The isothermal susceptibility, $\chi^{\rm T}$, can be obtained by using standard methods of statistical mechanics, while the susceptibility of an isolated system, $\chi^{\rm I}$, can be derived from linear response theory [@Kubo]. These two susceptibilities satisfy the inequality $\chi^{\rm I}\leq\chi^{\rm T}$ [@chis1; @chis2], which is derived by considering the existence of invariants [@mazur-69; @suzuki-70]. This implies that the exponents, $\gamma^{\rm T}$ and $\gamma^{\rm I}$, with which the two susceptibilities diverge at the critical point, satisfy $\gamma^{\rm I}\leq\gamma^{\rm T}$. Is it possible that $\gamma^{\rm I}$ is strictly smaller than $\gamma^{\rm T}$?
A difficulty in answering this question is that the susceptibility of an isolated system cannot be easily evaluated. In this article we show how kinetic theory [@balescu; @nicholson] can effectively answer the initial question in systems of the mean-field type, using a recently developed version of linear response theory [@LinearResponseI; @LinearResponseII] based on the Vlasov equation. Many different physical systems can be described by kinetic theory, including self-gravitating systems, plasmas and fluids [@balescu; @binney; @nicholson]. For $N$-particle systems with long-range interactions [@campa-dauxois-ruffo-09], both perturbative approaches [@balescu] and the rigorous mean-field limit [@spohn-book; @braun] lead to a description of the system in the continuum, $N\to\infty$, limit in terms of the Vlasov equation. This equation governs the time evolution of the single-particle distribution function and has an infinity of stationary solutions. For instance, all distribution functions that depend on the phase-space variables only through the single-particle energy do not evolve in time, as proven by Jeans [@Jeans-1919]. On the long time scale, the system is described by appropriate kinetic equations which include “collisional” (finite-$N$) effects, like the Landau and Balescu-Lenard equations, and evolves towards Boltzmann-Gibbs (BG) equilibrium. However, since the relaxation time scale diverges with $N$ [@physicaa], the early evolution of the system is well described by the Vlasov equation. Therefore, the use of the linear response theory developed in [@LinearResponseI; @LinearResponseII] is appropriate in the large-$N$ limit. In order to perform explicit calculations of susceptibilities, it is convenient to consider the so-called Hamiltonian Mean-Field (HMF) model [@campa-dauxois-ruffo-09; @spohn; @inagaki-konishi-93; @antoni-ruffo-95]. This model describes the motion of $N$ particles on a circle interacting with an attractive cosine potential.
The BG equilibrium solution of this model displays a high-energy phase where the particles are uniformly distributed on the circle and a low-energy phase where the particles form a cluster. The two phases are separated by a second-order phase transition point at which the susceptibility diverges with the classical mean-field exponents. On the other hand, in the mean-field limit, the time evolution of the single-particle distribution function of the HMF model is exactly described by the Vlasov equation. Moreover, a BG homogeneous state is a stationary solution of this equation, which loses its stability at an energy which coincides with the second-order phase transition energy [@inagaki-konishi-93]. Below this energy, the BG inhomogeneous state is also a stable stationary state of the Vlasov equation. Stable stationary states almost do not evolve even in a system with finite but large $N$, and are called quasistationary states [@physicaa; @physicaaII]. The long-lasting quasistationary states, therefore, show nonequilibrium phase transitions, and a phase diagram has been theoretically drawn for a set of initial states with the aid of a nonequilibrium statistical mechanics [@antoniazzi-07]. In this article, we perform a detailed analysis of the scaling laws of the susceptibility around the critical point of (non)equilibrium phase transitions for quasistationary states. We remark that the BG equilibrium states are themselves quasistationary states, and hence the obtained scaling laws are valid even for the BG equilibrium states. This article is organized as follows. We introduce the HMF model and the corresponding Vlasov system in Sec. \[sec:HMFmodel\]. The scaling of the Vlasov susceptibility is analyzed in Sec. \[sec:scaling\], and the theoretical prediction is numerically confirmed in Sec. \[sec:numerics\]. A generalization from the HMF model is discussed in Sec. \[sec:generalization\]. Sec. \[sec:summary\] is devoted to summary and discussions.
Hamiltonian mean-field model {#sec:HMFmodel}
============================

The Hamiltonian function of the HMF model reads $$\begin{split} H_{N} & = \sum_{i=1}^{N} \left[ \dfrac{p_{i}^{2}}{2} + \sum_{j=1}^{N} \dfrac{1-\cos(q_{i}-q_{j})}{2N} - h\Theta(t) \cos (q_{i}-\phi) \right] \end{split}$$ where $h$ and $\phi$ are respectively the modulus and the phase of the external magnetic vector $(h\cos\phi,h\sin\phi)$, and $\Theta(t)$ is the Heaviside step function. The magnetization vector $({\langle M_{x} \rangle}_{N},{\langle M_{y} \rangle}_{N})$ is defined by $$({\langle M_{x} \rangle}_{N},{\langle M_{y} \rangle}_{N}) = \dfrac{1}{N} \sum_{j=1}^{N} (\cos q_{j},\sin q_{j}),$$ where ${\langle \cdot \rangle}_{N}$ represents the average over the $N$ particles. The isolated system ($h=0$) has rotational symmetry; therefore, we consider $\phi=0$ and ${\langle M_{y} \rangle}_{N}=0$ without loss of generality. As a consequence, we call the $x$-axis the direction of the spontaneous magnetization. The corresponding effective one-particle Hamiltonian of the HMF model is $$\label{eq:Hf} H_{h}[f_{h}](q,p,t) = \dfrac{p^{2}}{2} - {\langle M \rangle}_{h}\cos q - h\Theta(t)\cos q,$$ where the magnetization observable is $M(q)=\cos q$ and the brackets $\langle\cdots\rangle_h$ denote the average with respect to the single-particle distribution $f_h$. The distribution $f_{h}$ evolves following the Vlasov equation $${\dfrac{\partial f_{h}}{\partial t}} + \{ H_{h}[f_{h}], f_{h} \} = 0, \label{eq:vlasov}$$ where $\{\cdot,\cdot\}$ is the Poisson bracket defined by $$\{a,b\} = {\dfrac{\partial a}{\partial p}} {\dfrac{\partial b}{\partial q}} - {\dfrac{\partial a}{\partial q}} {\dfrac{\partial b}{\partial p}}.$$ We note that the magnetization ${\langle M \rangle}_{h}$ appearing in $H_{h}$, Eq. \[eq:Hf\], is determined self-consistently to satisfy the equation $$\label{eq:self-consistent} {\langle M \rangle}_{h}= \iint \cos q f_{h} dqdp.$$ Let us consider the case in which the external field is turned off.
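As a concrete illustration of the self-consistency condition with the field turned off: for the Boltzmann weight $F(E;b)=e^{-bE}$ the $p$-integrals cancel in the ratio defining $\langle M\rangle_{0}$, leaving a pure $q$-average that can be iterated to a fixed point. A minimal numerical sketch (grid size, tolerance, and starting guess are illustrative choices, not from the paper):

```python
import numpy as np

def magnetization(b, n_grid=4096, tol=1e-12, max_iter=5000):
    """Fixed-point solution of the self-consistency equation
    <M>_0 = <cos q>_{f_0} for the Boltzmann weight exp(-b*H_0);
    the p-integrals cancel in the ratio, leaving a q-average only."""
    q = np.linspace(-np.pi, np.pi, n_grid, endpoint=False)
    m = 0.5  # finite initial guess so the nonzero branch can be reached
    for _ in range(max_iter):
        w = np.exp(b * m * np.cos(q))
        m_new = (np.cos(q) * w).sum() / w.sum()
        if abs(m_new - m) < tol:
            return m_new
        m = m_new
    return m

print(magnetization(1.5))  # b < 2: iteration collapses onto <M>_0 = 0
print(magnetization(2.5))  # b > 2: spontaneous magnetization, ~0.59
```

For $b<2$ only the homogeneous solution $\langle M\rangle_{0}=0$ survives, while for $b>2$ a spontaneous magnetization appears, consistent with the canonical critical point $b_{\rm c}^{\rm cano}=2$.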
In that case all the stationary states are in Jeans’ class, and are functions of $H_{0}$ specified by $$\label{eq:f0} f_{0}(q,p;b,a_{1},\cdots,a_{n}) = \dfrac{F(H_{0}(q,p);b,a_{1},\cdots,a_{n})} {{{\langle {\langle F(H_{0}(q,p);b,a_{1},\cdots,a_{n}) \rangle} \rangle}}},$$ where $${{\langle {\langle \varphi(q,p) \rangle} \rangle}}=\iint \varphi(q,p) dqdp.$$ For instance, in the canonical equilibrium, the function $F$ of the energy is $F(E;b)=e^{-bE}$, where $b$ is the inverse temperature. For the sake of simplicity, we consider only one independent parameter $b$; the other parameters $a_{1},\cdots,a_{n}$ depend on $b$. The effective Hamiltonian of any one-dimensional system in a stationary state is integrable, and the angle-action variables $(\theta,J)$ [@BOY10] can be introduced accordingly. The Hamiltonian $H_{0}[f_{0}]$ and the distribution function $f_{0}$ then depend only on the action $J$.

Scaling of the Vlasov susceptibility {#sec:scaling}
====================================

The Vlasov susceptibility is given by the linear response theory [@LinearResponseI; @LinearResponseII], and reads $$\label{eq:chi-Vlasov} \chi^{\rm V}(b) = \dfrac{1-D^{\rm V}(b)}{D^{\rm V}(b)},$$ where $D^{\rm V}$ is the stability functional; $D^{\rm V}(b)>0$ implies the stability of the state [@physicaa; @ogawa13].
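As a quick consistency check of this response formula: for the canonical weight $F(E;b)=e^{-bE}$ in the homogeneous phase one has $\langle\langle F'\rangle\rangle/\langle\langle F\rangle\rangle=-b$, so the stability functional reduces to $1-b/2$ and $\chi^{\rm V}(b)=b/(2-b)$, diverging at $b_{\rm c}=2$. A short numerical sketch (numpy only; the momentum grid is an illustrative choice):

```python
import numpy as np

def chi_vlasov_homogeneous(b, p_max=12.0, n_p=20001):
    """chi^V = (1 - D)/D with D = 1 + <<F'>>/(2 <<F>>) for a homogeneous
    canonical state F = exp(-b p^2/2); the q-integrals cancel in the ratio."""
    p = np.linspace(-p_max, p_max, n_p)
    F = np.exp(-b * p**2 / 2.0)
    dF_dE = -b * F                 # F' = dF/dE with E = p^2/2
    D = 1.0 + 0.5 * dF_dE.sum() / F.sum()
    return (1.0 - D) / D

for b in (0.5, 1.0, 1.9):
    print(b, chi_vlasov_homogeneous(b), b / (2.0 - b))  # columns agree
```

Because $F'=-bF$ pointwise, the grid drops out of the ratio and the check is exact up to rounding.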
This functional can be decomposed in two terms $$\label{eq:D-Vlasov} D^{\rm V}(b) = D^{\rm V}_{1}(b) + D^{\rm V}_{2}(b),$$ where the first one is $$D^{\rm V}_{1}(b) = 1 + \dfrac{{{\langle {\langle F'(H_{0}(J);b){\langle \cos^{2}q \rangle}_{J} \rangle} \rangle}}} {{{\langle {\langle F(H_{0}(J);b) \rangle} \rangle}}},$$ while the second one is $$D^{\rm V}_{2}(b) = - \dfrac{{{\langle {\langle F'(H_{0}(J);b){\langle \cos q \rangle}_{J}^{2} \rangle} \rangle}}} {{{\langle {\langle F(H_{0}(J);b) \rangle} \rangle}}}.$$ The prime means the derivative $F'=dF/dE$ and ${\langle \cdots \rangle}_{J}$ represents the average with fixed $J$, i.e., $$\label{eq:avetheta} {\langle \varphi(\theta,J) \rangle}_{J} = \dfrac{1}{2\pi} \int_{-\pi}^{\pi} \varphi(\theta,J)~ d\theta.$$ For a homogeneous distribution, we have $q=\theta$ and $D^{\rm V}_{2}$ vanishes. The stability functional in this case is $$\label{eq:D-Vlasov-homo} D^{\rm V}_{\rm homo}(b) = 1 + \dfrac{1}{2} \dfrac{{{\langle {\langle F'(p^{2}/2;b) \rangle} \rangle}}}{{{\langle {\langle F(p^{2}/2;b) \rangle} \rangle}}}.$$ For instance, for the canonical equilibrium we obtain that the susceptibility is $\chi^{\rm V}=b/(2-b)$ and its critical point is $b_{\rm c}^{\rm cano}=2$. Let us introduce three assumptions in order to obtain Jeans’ distributions which describe continuous phase transitions at $b=b_{\rm c}$: (I) The homogeneous states are stable for $b<b_{\rm c}$ and the inhomogeneous states are stable for $b>b_{\rm c}$. (II) The magnetization ${\langle M \rangle}_{0}$ is a continuous function of $b$. (III) The solution of the self-consistency equation gives an unstable homogeneous branch for $b>b_{\rm c}$. As a consequence, the stability functional is positive for any $b (\neq b_{\rm c})$, and $${\langle M \rangle}_{0} = \left\{ \begin{array}{ll} 0 & \quad b\leq b_{\rm c}, \\ (b-b_{\rm c})^{\beta} &\quad b\agt b_{\rm c}.
\end{array} \right.$$ Moreover, on the homogeneous branches of both sides, the stability functional reads $$\label{eq:Dhomo-cond-1} D^{\rm V}_{\rm homo}(b) = \left\{ \begin{array}{ll} c_{+}(b)(b_{\rm c}-b)^{\Gamma_{+}}, & b\alt b_{\rm c}, \\ -c_{-}(b)(b-b_{\rm c})^{\Gamma_{-}}, & b\agt b_{\rm c} \end{array} \right.$$ where the $c_{\pm}(b)$ are positive and the sign $\pm$ distinguishes the under-critical and over-critical regions. The exponents $\Gamma_{\pm}$ depend on the choice of the parameter $b$ of the distribution. In general we can consider a parametrization such that $\Gamma_{\pm}=1$. The critical exponents, $\gamma^{\rm V}_{\pm}$, of the susceptibility depend on the behavior of the stability functional $D^{\rm V}(b)\to 0$ close to the critical point $b_{\rm c}$. Equation \[eq:Dhomo-cond-1\] gives $\gamma^{\rm V}_{+}=\Gamma_{+}$ when the state of the system is homogeneous, which is equal to the classical exponent. In the following, we show the non-classical relation $\gamma^{\rm V}_{-}=\beta/2$ settled in the inhomogeneous phase, where $\beta=\Gamma_{-}/2$ in the case of ferromagnetic mean-field systems. Let us start by showing the relation $\beta=\Gamma_{-}/2$. Around the critical point, $b\gtrsim b_{\rm c}$, the magnetization ${\langle M \rangle}_{0}$ is small by assumption (II), and we expand $F(H_{0})$ around the critical point as $$\label{eq:F-expand} F(H_{0};b) = \sum_{n=0}^{\infty} \dfrac{(-{\langle M \rangle}_{0}\cos q)^{n}}{n!} F^{(n)}(p^{2}/2;b),$$ where $F^{(n)}$ is the $n$-th derivative of $F$ and we assumed that $f_{0}$ depends on ${\langle M \rangle}_{0}$ through $H_{0}$ only.
For such distributions, the self-consistency equation becomes $$A_{1}(b){\langle M \rangle}_{0} + A_{3}(b){\langle M \rangle}_{0}^{3} + O({\langle M \rangle}_{0}^{5}) = 0,$$ where $$\begin{split} & A_{1}(b) = 1 - \dfrac{B_{1}}{B_{0}} = D^{\rm V}_{\rm homo}(b), \\ & A_{3}(b) = \dfrac{B_{1}B_{2}-B_{0}B_{3}}{B_{0}^{2}}, \end{split}$$ and $B_{n}~(n=0,1,2,\cdots)$ are defined by $$B_{n}(b) = \dfrac{(-1)^{n}}{n!} \iint F^{(n)}(p^{2}/2;b) \cos^{2\lceil n/2 \rceil}q\, dqdp,$$ with $\lceil x \rceil=\min\{m\in\mathbb{Z}|m\geq x\}$. We further assume that $B_{n}(b_{\rm c})\neq 0$ for any $n$. The non-zero solution of the self-consistency equation gives the scaling $${\langle M \rangle}_{0} = \sqrt{\dfrac{c_{-}(b)}{A_{3}(b)}}~ (b-b_{\rm c})^{\Gamma_{-}/2}$$ whenever $A_{3}(b)>0$, which implies the existence of Jeans’ inhomogeneous states. We therefore get the relation $\beta=\Gamma_{-}/2$. To prove the main relation $\gamma^{\rm V}_{-}=\beta/2$, we separately estimate $D^{\rm V}_{1}$ and $D^{\rm V}_{2}$. To evaluate the behavior of the first term, we remark that ${{\langle {\langle F'(H_{0};b){\langle \cos^{2}q \rangle}_{J} \rangle} \rangle}}={{\langle {\langle F'(H_{0};b)\cos^{2}q \rangle} \rangle}}$. Using the expansion \[eq:F-expand\], the first component of the stability functional scales as $D^{\rm V}_{1}(b)\sim (b-b_{\rm c})^{\Gamma_{-}}$ for $b>b_{\rm c}$. The second component is given by [@LinearResponseII] $$D^{\rm V}_{2}(b) = 16\sqrt{{\langle M \rangle}_{0}}~(I_{1}+I_{2}),$$ where $$I_{1} = - \int_{0}^{1} \left[ \dfrac{2E(k)}{K(k)} - 1 \right]^{2} kK(k)\frac{F'({\langle M \rangle}_{0}(2k^{2}-1))}{{{\langle {\langle F(H_0) \rangle} \rangle}}} dk,$$ $$I_{2} = - \int_{1}^{\infty} \left[ \dfrac{2k^{2}E(1/k)}{K(1/k)} - 2k^{2}+1 \right]^{2} K(1/k) \frac{F'({\langle M \rangle}_{0}(2k^{2}-1))}{{{\langle {\langle F(H_0) \rangle} \rangle}}}dk,$$ and $K$ and $E$ are, respectively, the complete elliptic integrals of the first and second kind.
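The scaling $\beta=\Gamma_-/2$ can also be checked numerically. The sketch below (illustrative only) uses the canonical family $F(E;b)=e^{-bE}$, for which the Gaussian momentum integral cancels in the ratio and the self-consistency equation reduces to ${\langle M \rangle}_0 = \langle\langle \cos q\, F\rangle\rangle/\langle\langle F\rangle\rangle$; solving it by bisection just above $b_{\rm c}=2$ and extracting a log-log slope recovers $\beta\simeq 1/2$ (this parametrization has $\Gamma_-=1$):

```python
import numpy as np

# uniform periodic angle grid (no duplicated endpoint)
Q = np.linspace(-np.pi, np.pi, 4000, endpoint=False)

def cons(M, b):
    # right-hand side of the self-consistency equation
    # <M>_0 = <<cos q F>>/<<F>> for F = exp(-b H0), H0 = p^2/2 - M cos q;
    # the Gaussian p-integral cancels in the ratio
    w = np.exp(b * M * np.cos(Q))
    return float((np.cos(Q) * w).sum() / w.sum())

def magnetization(b):
    # bisection on cons(M, b) - M = 0, valid in the supercritical branch b > 2
    lo, hi = 1e-9, 1.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if cons(mid, b) > mid:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# log-log slope of <M>_0 against (b - b_c) just above b_c = 2
b1, b2 = 2.005, 2.02
beta = np.log(magnetization(b2) / magnetization(b1)) / np.log((b2 - 2) / (b1 - 2))
print(beta)  # close to 1/2
```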
The integrals $I_{1}$ and $I_{2}$ converge to non-zero constants in general, even in the limit ${\langle M \rangle}_{0}\to 0$. Hence, the second part scales as $$D^{\rm V}_{2}(b)\sim\sqrt{{\langle M \rangle}_{0}}\sim (b-b_{\rm c})^{\beta/2}.$$ Close to the critical point the second component $D^{\rm V}_{2}$ dominates, since it goes to zero more slowly than the first one, $D^{\rm V}_{1}$. Consequently, the Vlasov critical exponents for Jeans’ distributions are $$\label{eq:scaling-relation} \gamma^{\rm V}_{-}=\beta/2=1/4, \quad \gamma^{\rm V}_{+}=1,$$ when $\Gamma_\pm=1$. We stress that the exponent $\gamma^{\rm V}_{-}=\beta/2=1/4$ differs from the classical $\gamma_{-}=1$ [@chavanis-11]. We explain that the [*strange*]{} exponent $\gamma^{\rm V}_{-}=\beta/2$ is due to the infinitely many invariants of the Vlasov equation, called Casimirs. A Casimir is a functional of the distribution function, $\int s(f)dqdp$, where $s$ is any smooth function. It is an integral of motion of the Vlasov dynamics whenever the distribution solves the Vlasov equation itself. As a consequence, every Casimir introduces a conservation law, and the second component of the stability functional, which gives the [*strange*]{} exponent, accounts for all of them. The variation of the distribution $\delta f=f_{h}-f_{0}$ satisfies $$0 = \iint [s(f_{0}+\delta f)-s(f_{0})] dqdp = \int s'(f_{0}(J)) \widetilde{\delta f}_{0}(J) dJ$$ up to the linear order, where $\widetilde{\delta f}_{0}(J)$ is the Fourier zero mode of $\delta f$ with respect to the angle $\theta$. This constraint must hold for any smooth function $s$, and hence $\widetilde{\delta f}_{0}(J)=0$ [@ogawa13].
Let us derive $f_{h}$ from the test function with the external field, $$\label{eq:gh} g_{h}(q,p;b) = \dfrac{F_{h}(H_{h}(q,p;b))}{{{\langle {\langle F_{h}(H_{h}(q,p;b)) \rangle} \rangle}}},$$ where $F_{h}$ is a family of functions of energy and is expanded as $$F_{h} = F + h G + O(h^{2}).$$ By the definition of the susceptibility $\chi^{\rm V}$, the magnetization ${\langle M \rangle}_{h}$ is written as ${\langle M \rangle}_{h} = {\langle M \rangle}_{0} + h \chi^{\rm V} + O(h^{2})$, and hence $$H_{h} = H_{0} + h\psi(q) + O(h^{2}),$$ where $$\psi(q) = - (\chi^{\rm V}+1)\cos q + O(h^{2}).$$ Substituting the above two expansions into $g_{h}$, and ignoring the term of order $O(h^{2})$, we have $$g_{h} = f_{0} + h \left[ \dfrac{F'(H_{0})\psi + G(H_{0})}{{{\langle {\langle F(H_{0}) \rangle} \rangle}}} - \dfrac{{{\langle {\langle F'(H_{0})\psi+G(H_{0}) \rangle} \rangle}} F(H_{0})}{{{\langle {\langle F(H_{0}) \rangle} \rangle}}^{2}} \right].$$ Subtracting the Fourier zero mode from $g_{h}-f_{0}$, the variation must satisfy $\delta f= g_{h}-f_{0} - {\langle g_{h}-f_{0} \rangle}_{J}=g_{h}-{\langle g_{h} \rangle}_{J}$ and hence $$f_{h} = f_{0} - \dfrac{h(\chi^{\rm V}+1)}{{{\langle {\langle F(H_{0}) \rangle} \rangle}}} F'(H_{0}) \left( \cos q - {\langle \cos q \rangle}_{J} \right),$$ where we used ${\langle F(H_{0}) \rangle}_{J}=F(H_{0})$ and ${\langle G(H_{0}) \rangle}_{J}=G(H_{0})$ since $H_{0}$ depends on the action $J$ only. Multiplying by $M(q)=\cos q$ and integrating over the $\mu$ space, we get the Vlasov susceptibility and the stability functional \[eq:D-Vlasov\]. Following these results, we propose a scenario of relaxation as follows [@physicaa; @physicaaII]: When the external field is switched on, the system gets trapped in a QSS to keep the Casimirs invariant, and this trapping gives the [*strange*]{} critical exponent $\gamma_{-}^{\rm V}=\beta/2$.
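For completeness, the step from $f_h$ to the susceptibility can be reconstructed as follows (our rewriting, using the definitions of $D^{\rm V}_{1,2}$ above and the fact that $F'(H_0)$ depends on $J$ only, so that $\langle\langle F'\,{\langle\cos q\rangle}_J\cos q\rangle\rangle=\langle\langle F'\,{\langle\cos q\rangle}_J^2\rangle\rangle$):

```latex
\frac{{\langle M \rangle}_h - {\langle M \rangle}_0}{h}
 = -\frac{\chi^{\rm V}+1}{{\langle\langle F(H_0)\rangle\rangle}}
   \Bigl[ \langle\langle F'(H_0)\cos^{2} q\rangle\rangle
        - \langle\langle F'(H_0)\,{\langle \cos q\rangle}_J^{2}\rangle\rangle \Bigr]
 = (\chi^{\rm V}+1)\bigl(1-D^{\rm V}\bigr).
```

Identifying the left-hand side with $\chi^{\rm V}$ then gives $\chi^{\rm V}=(1-D^{\rm V})/D^{\rm V}$, the same form that reappears below for the generalized lattice models.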
However, the Vlasov dynamics is not the true dynamics for finite systems; thus the Casimirs are not exactly conserved but evolve on a time-scale which diverges with $N$. Consequently, the system goes to the Boltzmann-Gibbs equilibrium, recovering the classical exponent after the equilibration. Numerical tests {#sec:numerics} =============== The Vlasov exponent is verified by $N$-body simulations, which are performed with a fourth-order symplectic integrator [@yoshida-93] with the time step $\Delta t=0.1$. We compute the susceptibility for two families of Jeans’ class states. One is the thermal equilibrium $$F(E) = e^{-E/T},$$ whose control parameter is $b=1/T$ and whose critical point is $T_{\rm c}=1/2$. The other is of Fermi-Dirac type, $$F(E)=\dfrac{1}{e^{(E-\mu)/T}+1}$$ with fixed $T=1/5$, whose control parameter is $b=-\mu$ and whose critical point is $\mu_{\rm c}\simeq 0.239346$. The latter is an example of a family of out-of-equilibrium quasistationary states (QSSs), and the critical exponent $\beta$ is confirmed as $1/2$ by solving the self-consistency equation. The Fermi-Dirac-type families are obtained, approximately at least, by starting from waterbag initial states. The values of the parameters $\mu$ and $T$ are controlled by suitably choosing the waterbag initial states with the aid of nonequilibrium statistical mechanics [@antoniazzi-07]. For both cases, the Vlasov predictions are in good agreement with the $N$-body simulations for time-scales shorter than the equilibration one, as shown in Fig.\[fig:exponent\]. ![(color online) Susceptibilities as functions of the normalized parameter $(b_{\rm c}-b)/b_{\rm c}$ in log-log plot. Lines report theoretical predictions of the isothermal $\chi^{\rm T}$ (green broken), the isoentropic $\chi^{\rm S}$ (orange dashed) and the Vlasov $\chi^{\rm V}$ (red lower solid) susceptibilities for the thermal equilibrium family.
We remark that $\chi^{\rm S}$ is computed explicitly by using the exact solution in the microcanonical statistics [@campa-dauxois-ruffo-09] and by taking the invariance of the entropy during the quasistatic adiabatic process into account. The Vlasov susceptibility for a QSS family of the Fermi-Dirac type is also reported (blue upper solid). Points are computed in $N$-body simulations and represent $(M_{h}-M_{0})/h$, where $M_{h}$ is the time average over the period $t\in [0,500]$. $N=10^{6}$ and $h=10^{-2}$ for the thermal equilibrium family (purple square), and $N=10^{7}$ and $h=10^{-3}$ for the QSS family (light blue cross). For each $b$, $10$ points are plotted corresponding to $10$ realizations. []{data-label="fig:exponent"}](fig_chi.eps){width="9cm"} The scenario of relaxation proposed at the end of Sec.\[sec:scaling\] is examined by direct $N$-body simulations, shown in Fig. \[fig:Nbody\]. For $t<0$, the system is at equilibrium with a temperature $T=0.499<1/2=T_{\rm c}$. The external field with a small magnitude $h=0.01$ is switched on at $t=0$, and the system jumps to the QSS predicted by the linear response theory based on the Vlasov equation. In the long time regime the Casimirs are no longer invariant, due to the presence of rare collisions [@balescu], and the system goes towards equilibrium. Simulations indicate that the time-scale of relaxation from the QSS to equilibrium grows linearly with $N$, as found for isolated inhomogeneous QSSs in Ref. [@debuyl-mukamel-ruffo-11]. ![(color online) $N$-body simulations in the HMF model with external field. (a) Short time evolutions of magnetization. (b) Long time evolutions. The horizontal axis is in the logarithmic scale, which is scaled as $\log_{10}(t/N)$ in the inset. $N=10^{3}~(100)$, $10^{4}~(10)$ and $10^{5}~(1)$, where the number in parentheses is the number of realizations over which the orbits are averaged.
The system is in thermal equilibrium with $T=0.499$ in $t<0$, and the external field turns on with $h=0.01$ at $t=0$. The direction of the magnetization vector $(M_{x},M_{y})$ is reset to the $x$-direction at $t=0$. In each panel, three horizontal lines represent the equilibrium level with the quasistatic adiabatic susceptibility (upper), the QSS level predicted by the Vlasov linear response theory (middle) and the thermal equilibrium level without external field (lower).[]{data-label="fig:Nbody"}](fig_Mshort.eps "fig:"){width="9cm"} ![](fig_Mlong.eps "fig:"){width="9cm"} Generalization of systems {#sec:generalization} ========================= For simplicity, we have concentrated on the HMF model, but the present theory can be applied to generalized systems.
Let us consider the Hamiltonian $$\label{eq:HMFgeneral} \begin{split} H_{N} & = \sum_{i=1}^{N} \dfrac{p_{i}^{2}}{2} +\dfrac{1}{2} \sum_{i,j=1}^{N} K_{N}(r_{i}-r_{j})\left( 1- \cos (q_{i}-q_{j})\right) \\ & - \sum_{i=1}^{N} h_{r_{i}}(t) \Theta(t) \cos q_{i}, \end{split}$$ where $r_{i}$ is the $i$-th lattice point on the one-dimensional lattice, $r_{i+1}-r_{i}=1$, the lattice has the periodic boundary condition obtained by identifying $r_{0}$ with $r_{N}$, and the factor $K_{N}(r)$ is even, non-negative and satisfies [@Kac-1963] $$\sum_{i=1}^{N} K_{N}(r_{i}) = 1.$$ Taking the limit $N\to\infty$ so that $$K(r) = \lim_{N\to\infty} N K_{N}(Nr)$$ and $$\label{eq:Kint} \int_{-1/2}^{1/2} K(r) dr = 1 ,$$ we get the effective one-particle Hamiltonian $$H_{h}[f] = \dfrac{p^{2}}{2} + V_{r}[f](q,t) - h_{r}(t)\cos q,$$ where $$\begin{split} V_{r}[f](q,t) = -& \int_{-1/2}^{1/2} dr' K(r-r') \\ &\times \iint \cos (q-q') f(q',p',r',t) dq'dp'. \end{split}$$ The single-body distribution $f(q,p,r,t)$ evolves according to the Vlasov equation [@alphaHMF-Vlasov] $$\frac{\partial f}{\partial t} + \left\{H_{h}[f], f \right\} = 0.$$ We consider the linear response for the uniform stable stationary configuration $f_{0}(q,p)$, which does not depend on the lattice point $r$. The external field modifies the state from $f_{0}(q,p)$ to $f_{0}(q,p)+f_{1}(q,p,r,t)$.
We define the modification of the magnetization depending on $r$ as $$M_{r}^{1}(t) = \iint \cos q f_{1}(q,p,r,t) dqdp.$$ Using the periodicity of the lattice, we expand the modification $M_{r}^{1}(t)$, the factor $K(r)$ and the external field $h_{r}(t)$ as $$M_{r}^{1}(t) = \sum_{n\in\mathbb{Z}} \tilde{M}_{n}^{1}(t) e^{2\pi inr}, \quad K(r) = \sum_{n\in\mathbb{Z}} \tilde{K}_{n}e^{2\pi inr},$$ and $$h_{r}(t) = \sum_{n\in\mathbb{Z}} \tilde{h}_{n}(t)e^{2\pi inr}.$$ The Laplace transform with respect to time $t$ gives $$M_{r}^{1}(t) = \sum_{n\in\mathbb{Z}} \dfrac{e^{2\pi inr}}{2\pi} \int_{\Gamma} \dfrac{F(\omega)}{1-\tilde{K}_{n}F(\omega)} \hat{h}_{n}(\omega) e^{-i\omega t} d\omega,$$ where $\Gamma$ is the Bromwich contour, $\hat{h}_{n}(\omega)$ is the Laplace transform of $\tilde{h}_{n}(t)$, $$F(\omega) = \int_{0}^{\infty} dt e^{i\omega t} \iint \cos q_{t} \{ \cos q, f_{0} \} dqdp,$$ and $q_{t}$ is the solution to the canonical equation associated with the Hamiltonian $H_{0}[f_{0}]$, which has zero external field. Setting the external field as $h_{r}(t)\to h~(t\to\infty)$, we have $\tilde{h}_{0}\to h$ and $\tilde{h}_{n}\to 0~(n\neq 0)$. As discussed in [@LinearResponseII], the surviving response is provided by the pole of $\hat{h}_{0}(\omega)$ at $\omega=0$, and the other poles give dampings by the stability assumption on $f_{0}$. The linear response is hence written in terms of the same stability functional $D^{\rm V}$ as in the HMF model [@LinearResponseII]: $$M_{r}^{1}(t) \to \dfrac{F(0)}{1-F(0)}~h = \dfrac{1-D^{\rm V}}{D^{\rm V}}~h,$$ where we used the fact $\tilde{K}_{0}=1$ from Eq. \[eq:Kint\]. From the above expression, we conclude that the Vlasov susceptibility and the critical exponents $\gamma_{\pm}^{\rm V}$ in this system are the same as those in the HMF model, for uniform stable stationary configurations.
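The role of the normalization in fixing $\tilde K_0=1$, and the suppression of the higher Fourier modes, can be illustrated with a short sketch (using a hypothetical exponential kernel chosen by us purely for illustration):

```python
import numpy as np

# hypothetical smooth interaction kernel on the ring r in [-1/2, 1/2),
# normalized so that its integral is 1 (the condition fixing K~_0 = 1)
dr = 1e-4
r = np.arange(-0.5, 0.5, dr)
K = np.exp(-np.abs(r) / 0.1)
K /= K.sum() * dr

def k_tilde(n):
    # Fourier coefficient K~_n = int K(r) exp(-2 pi i n r) dr
    return (K * np.exp(-2j * np.pi * n * r)).sum() * dr

print(k_tilde(0).real)  # 1 by normalization
print(abs(k_tilde(1)))  # higher modes are strictly smaller and decay with n
```

Because $|\tilde K_n|<\tilde K_0=1$ for $n\neq 0$, the uniform $n=0$ mode carries the surviving long-time response discussed above.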
Summary and discussions {#sec:summary} ======================= We investigated the critical exponent of the susceptibility in the Hamiltonian mean-field (HMF) model, which is a mean-field ferromagnetic model and is approximately described by the Vlasov dynamics. The classical mean-field theory gives the critical exponent $1$ both in the high- and low-energy phases, but the linear response theory for Vlasov systems reveals that, in the low-energy phase, the exponent is half the magnetization exponent and is typically $1/4$. This scaling is obtained not only in thermal equilibrium states, but also in one-parameter families of quasistationary states of the Jeans type, when the families have continuous phase transitions. Apart from the HMF model, the present theory can be applied to uniform stable stationary configurations of generalized systems, whose interaction depends on the distance between the lattice points on which the particles sit. Several remarks are in order. The first remark is about the validity of the linear response theory close to critical points. The theory assumes that $\delta f$ vanishes as $h\to0$ while satisfying the condition $|h\chi^{\rm V}|\ll{\langle M \rangle}_{0}$. The Vlasov susceptibility can therefore be computed by use of the Vlasov linear theory even for a large $\chi^{\rm V}$, since it is computed in the limit $h\to 0$. The second remark is on the spectrum analysis used to compute susceptibilities [@patelli-ruffo-12] in the inhomogeneous phase. This method does not account for all of the integrals of motion, but can be used to describe approximations of the linear theory for non-integrable systems. The third remark is on the critical exponents in the homogeneous phase. In the homogeneous equilibrium, the two susceptibilities satisfy $\chi^T = \chi^{\rm V} = T_{\rm c}/(T- T_{\rm c})$ for $T > T_{\rm c}$.
Then, the isolated system shows the classical exponent, although the dynamics keeps an infinite number of Casimir invariants. Thus, Casimir constraints do not always bring about the [*strange*]{} critical exponent; whether they do depends on the initial equilibrium state. Another remark is that the existence of invariants may break some thermodynamic laws. Indeed, the local temperature in isolated crystalline clusters is not uniform, owing to the conservation of angular and translational momenta [@niiyama-07]. We remark on other studies of the critical exponents in the Vlasov framework. Based on the theory of unstable manifolds of the Vlasov-Poisson equation [@crawford-95], Ivanov et al. [@ivanov-05] found numerically that scaling laws are different from the ones predicted by the classical theory. However, they start from unstable spatially homogeneous Maxwell distributions, and no critical exponents have been discussed in the literature for stable states and QSSs. We end this article with a remark on experimental observations. Dynamical systems can get trapped in QSSs; therefore, measurements in experimental setups should show the Vlasov prediction for systems with a large enough number of particles. The authors thank S. Ruffo and C. Nardini for useful discussions and M. M. Sano and M. Suzuki for informing us of Ref. [@suzuki-70]. This work is inspired by discussions in the workshop held in Centre Blaise Pascal, ENS-Lyon, August 2012. SO acknowledges the support of Grants for Excellent Graduate Schools, the hospitality of the University of Florence, and the JSPS Research Fellowships for Young Scientists (Grant No. 254728). YYY acknowledges the support of Grant-in-Aid for Scientific Research (C) 23560069. [99]{} R. Kubo, J. Phys. Soc. Jpn. [**12**]{}, 570 (1957). H. Falk, Phys. Rev. [**165**]{}, 602 (1968). R. M. Wilcox, Phys. Rev. [**174**]{}, 624 (1968). P. Mazur, Physica [**43**]{}, 533 (1969). M. Suzuki, Physica [**51**]{}, 277 (1971). R.
Balescu, Equilibrium and Nonequilibrium Statistical Mechanics, Wiley, New York, (1975). D.R. Nicholson, Introduction to Plasma Theory, John Wiley, (1983). A. Patelli, S. Gupta, C. Nardini, and S. Ruffo, Phys. Rev. E [**85**]{}, 021133 (2012). S. Ogawa and Y. Y. Yamaguchi, Phys. Rev. E [**85**]{}, 061115 (2012). J. Binney, S. Tremaine, Galactic Dynamics, Princeton Series in Astrophysics, (1987). A. Campa, T. Dauxois, and S. Ruffo, Phys. Rep. [**480**]{}, 57 (2009). H. Spohn, Large Scale Dynamics of Interacting Particles, Springer-Verlag, Heidelberg, (1991). W. Braun and K. Hepp, Commun. Math. Phys. [**56**]{} 101 (1977). J. H. Jeans, Mon. Not. R. Astron. Soc. [**76**]{}, 71 (1915). Y. Y. Yamaguchi, J. Barré, F. Bouchet, T. Dauxois, and S. Ruffo, Physica A [**337**]{}, 36 (2004). J. Messer and H. Spohn, J. Stat. Phys. [**29**]{} 561 (1982). S. Inagaki and T. Konishi, Publ. Astron. Soc. Jpn. [**45**]{}, 733 (1993). M. Antoni and S. Ruffo, Phys. Rev. E [**52**]{}, 2361 (1995). J. Barré, F. Bouchet, T. Dauxois, S. Ruffo, and Y. Y. Yamaguchi, Physica A [**365**]{}, 177 (2006). A. Antoniazzi, D. Fanelli, S. Ruffo, and Y. Y. Yamaguchi, Phys. Rev. Lett. [**99**]{}, 040601 (2007). J. Barré, A. Olivetti, and Y. Y. Yamaguchi, J. Stat. Mech. P08002 (2010). S. Ogawa, Phys. Rev. E [**87**]{}, 062107 (2013). P. H. Chavanis, Eur. Phys. J. B [**80**]{}, 275 (2011). P. de Buyl, D. Mukamel, and S. Ruffo, Phys. Rev. E [**84**]{}, 061151 (2011). H. Yoshida, Celest. Mech. Dyn. Astron. [**56**]{}, 27 (1993). M. Kac, G. E. Uhlenbeck, and P. C. Hemmer, J. Math. Phys. [**4**]{}, 216 (1963). R. Bachelard, T. Dauxois, G. De Ninno, S. Ruffo, and F. Staniscia, Phys. Rev. E [**83**]{}, 061132 (2011). A. Patelli and S. Ruffo, in preparation. T. Niiyama, Y. Shimizu, T. R. Kobayashi, T. Okushima, and K. S. Ikeda, Phys. Rev. Lett. [**99**]{}, 014102 (2007). J. D. Crawford, Phys. Plasmas [**2**]{}, 97 (1995). A. V. Ivanov, S. V. Vladimirov, and P. A. Robinson, Phys. Rev. E [**71**]{}, 056406 (2005).
--- abstract: 'We report new STAR measurements of the single-spin asymmetries $A_L$ for $W^+$ and $W^-$ bosons produced in polarized proton–proton collisions at $\sqrt{s}$ = 510GeV as a function of the decay-positron and decay-electron pseudorapidity. The data were obtained in 2013 and correspond to an integrated luminosity of 250 pb$^{-1}$. The results are combined with previous results obtained with 86 pb$^{-1}$. A comparison with theoretical expectations based on polarized lepton-nucleon deep-inelastic scattering and prior polarized proton–proton data suggests a difference between the $\bar{u}$ and $\bar{d}$ quark helicity distributions for $0.05 < x < 0.25$. In addition, we report new results for the double-spin asymmetries $A_{LL}$ for $W^\pm$, as well as $A_L$ for $Z/\gamma^*$ production and subsequent decay into electron–positron pairs.' author: - 'J. Adam' - 'L. Adamczyk' - 'J. R. Adams' - 'J. K. Adkins' - 'G. Agakishiev' - 'M. M. Aggarwal' - 'Z. Ahammed' - 'I. Alekseev' - 'D. M. Anderson' - 'R. Aoyama' - 'A. Aparin' - 'D. Arkhipkin' - 'E. C. Aschenauer' - 'M. U. Ashraf' - 'F. Atetalla' - 'A. Attri' - 'G. S. Averichev' - 'V. Bairathi' - 'K. Barish' - 'A. J. Bassill' - 'A. Behera' - 'R. Bellwied' - 'A. Bhasin' - 'A. K. Bhati' - 'J. Bielcik' - 'J. Bielcikova' - 'L. C. Bland' - 'I. G. Bordyuzhin' - 'J. D. Brandenburg' - 'A. V. Brandin' - 'D. Brown' - 'J. Bryslawskyj' - 'I. Bunzarov' - 'J. Butterworth' - 'H. Caines' - 'M. Calder[ó]{}n de la Barca S[á]{}nchez' - 'D. Cebra' - 'I. Chakaberia' - 'P. Chaloupka' - 'B. K. Chan' - 'F-H. Chang' - 'Z. Chang' - 'N. Chankova-Bunzarova' - 'A. Chatterjee' - 'S. Chattopadhyay' - 'J. H. Chen' - 'X. Chen' - 'J. Cheng' - 'M. Cherney' - 'W. Christie' - 'G. Contin' - 'H. J. Crawford' - 'M. Csanad' - 'S. Das' - 'T. G. Dedovich' - 'I. M. Deppner' - 'A. A. Derevschikov' - 'L. Didenko' - 'C. Dilks' - 'X. Dong' - 'J. L. Drachenberg' - 'J. C. Dunlop' - 'T. Edmonds' - 'L. G. Efimov' - 'N. Elsey' - 'J. Engelage' - 'G. Eppley' - 'R. 
Esha' - 'S. Esumi' - 'O. Evdokimov' - 'J. Ewigleben' - 'O. Eyser' - 'R. Fatemi' - 'S. Fazio' - 'P. Federic' - 'J. Fedorisin' - 'Y. Feng' - 'P. Filip' - 'E. Finch' - 'Y. Fisyak' - 'C. E. Flores' - 'L. Fulek' - 'C. A. Gagliardi' - 'T. Galatyuk' - 'F. Geurts' - 'A. Gibson' - 'D. Grosnick' - 'D. S. Gunarathne' - 'A. Gupta' - 'W. Guryn' - 'A. I. Hamad' - 'A. Hamed' - 'A. Harlenderova' - 'J. W. Harris' - 'L. He' - 'S. Heppelmann' - 'S. Heppelmann' - 'N. Herrmann' - 'L. Holub' - 'Y. Hong' - 'S. Horvat' - 'B. Huang' - 'H. Z. Huang' - 'S. L. Huang' - 'T. Huang' - 'X.  Huang' - 'T. J. Humanic' - 'P. Huo' - 'G. Igo' - 'W. W. Jacobs' - 'A. Jentsch' - 'J. Jia' - 'K. Jiang' - 'S. Jowzaee' - 'X. Ju' - 'E. G. Judd' - 'S. Kabana' - 'S. Kagamaster' - 'D. Kalinkin' - 'K. Kang' - 'D. Kapukchyan' - 'K. Kauder' - 'H. W. Ke' - 'D. Keane' - 'A. Kechechyan' - 'M. Kelsey' - 'D. P. Kikoła ' - 'C. Kim' - 'T. A. Kinghorn' - 'I. Kisel' - 'A. Kisiel' - 'M. Kocan' - 'L. Kochenda' - 'L. K. Kosarzewski' - 'A. F. Kraishan' - 'L. Kramarik' - 'P. Kravtsov' - 'K. Krueger' - 'N. Kulathunga Mudiyanselage' - 'L. Kumar' - 'R. Kunnawalkam Elayavalli' - 'J. Kvapil' - 'J. H. Kwasizur' - 'R. Lacey' - 'J. M. Landgraf' - 'J. Lauret' - 'A. Lebedev' - 'R. Lednicky' - 'J. H. Lee' - 'C. Li' - 'W. Li' - 'W. Li' - 'X. Li' - 'Y. Li' - 'Y. Liang' - 'R. Licenik' - 'J. Lidrych' - 'T. Lin' - 'A. Lipiec' - 'M. A. Lisa' - 'F. Liu' - 'H. Liu' - 'P. Liu' - 'P. Liu' - 'X. Liu' - 'Y. Liu' - 'Z. Liu' - 'T. Ljubicic' - 'W. J. Llope' - 'M. Lomnitz' - 'R. S. Longacre' - 'S. Luo' - 'X. Luo' - 'G. L. Ma' - 'L. Ma' - 'R. Ma' - 'Y. G. Ma' - 'N. Magdy' - 'R. Majka' - 'D. Mallick' - 'S. Margetis' - 'C. Markert' - 'H. S. Matis' - 'O. Matonoha' - 'J. A. Mazer' - 'K. Meehan' - 'J. C. Mei' - 'N. G. Minaev' - 'S. Mioduszewski' - 'D. Mishra' - 'B. Mohanty' - 'M. M. Mondal' - 'I. Mooney' - 'Z. Moravcova' - 'D. A. Morozov' - 'Md. Nasim' - 'K. Nayak' - 'J. M. Nelson' - 'D. B. Nemes' - 'M. Nie' - 'G. Nigmatkulov' - 'T. Niida' - 'L. V. Nogach' - 'T. 
Nonaka' - 'G. Odyniec' - 'A. Ogawa' - 'K. Oh' - 'S. Oh' - 'V. A. Okorokov' - 'D. Olvitt Jr.' - 'B. S. Page' - 'R. Pak' - 'Y. Panebratsev' - 'B. Pawlik' - 'H. Pei' - 'C. Perkins' - 'R. L. Pinter' - 'J. Pluta' - 'J. Porter' - 'M. Posik' - 'N. K. Pruthi' - 'M. Przybycien' - 'J. Putschke' - 'A. Quintero' - 'S. K. Radhakrishnan' - 'S. Ramachandran' - 'R. L. Ray' - 'R. Reed' - 'H. G. Ritter' - 'J. B. Roberts' - 'O. V. Rogachevskiy' - 'J. L. Romero' - 'L. Ruan' - 'J. Rusnak' - 'O. Rusnakova' - 'N. R. Sahoo' - 'P. K. Sahu' - 'S. Salur' - 'J. Sandweiss' - 'J. Schambach' - 'A. M. Schmah' - 'W. B. Schmidke' - 'N. Schmitz' - 'B. R. Schweid' - 'F. Seck' - 'J. Seger' - 'M. Sergeeva' - 'R. Seto' - 'P. Seyboth' - 'N. Shah' - 'E. Shahaliev' - 'P. V. Shanmuganathan' - 'M. Shao' - 'W. Q. Shen' - 'S. S. Shi' - 'Q. Y. Shou' - 'E. P. Sichtermann' - 'S. Siejka' - 'R. Sikora' - 'M. Simko' - 'J. Singh' - 'S. Singha' - 'D. Smirnov' - 'N. Smirnov' - 'W. Solyst' - 'P. Sorensen' - 'H. M. Spinka' - 'B. Srivastava' - 'T. D. S. Stanislaus' - 'D. J. Stewart' - 'M. Strikhanov' - 'B. Stringfellow' - 'A. A. P. Suaide' - 'T. Sugiura' - 'M. Sumbera' - 'B. Summa' - 'X. M. Sun' - 'Y. Sun' - 'Y. Sun' - 'B. Surrow' - 'D. N. Svirida' - 'P. Szymanski' - 'A. H. Tang' - 'Z. Tang' - 'A. Taranenko' - 'T. Tarnowsky' - 'J. H. Thomas' - 'A. R. Timmins' - 'T. Todoroki' - 'M. Tokarev' - 'C. A. Tomkiel' - 'S. Trentalange' - 'R. E. Tribble' - 'P. Tribedy' - 'S. K. Tripathy' - 'O. D. Tsai' - 'B. Tu' - 'T. Ullrich' - 'D. G. Underwood' - 'I. Upsal' - 'G. Van Buren' - 'J. Vanek' - 'A. N. Vasiliev' - 'I. Vassiliev' - 'F. Videb[æ]{}k' - 'S. Vokal' - 'S. A. Voloshin' - 'A. Vossen' - 'F. Wang' - 'G. Wang' - 'P. Wang' - 'Y. Wang' - 'Y. Wang' - 'J. C. Webb' - 'L. Wen' - 'G. D. Westfall' - 'H. Wieman' - 'S. W. Wissink' - 'R. Witt' - 'Y. Wu' - 'Z. G. Xiao' - 'G. Xie' - 'W. Xie' - 'H. Xu' - 'N. Xu' - 'Q. H. Xu' - 'Y. F. Xu' - 'Z. Xu' - 'C. Yang' - 'Q. Yang' - 'S. Yang' - 'Y. Yang' - 'Z. Ye' - 'Z. Ye' - 'L. Yi' - 'K. Yip' - 'I. -K. 
Yoo' - 'H. Zbroszczyk' - 'W. Zha' - 'D. Zhang' - 'J. Zhang' - 'L. Zhang' - 'S. Zhang' - 'S. Zhang' - 'X. P. Zhang' - 'Y. Zhang' - 'Z. Zhang' - 'J. Zhao' - 'C. Zhong' - 'C. Zhou' - 'X. Zhu' - 'Z. Zhu' - 'M. Zurek' - 'M. Zyzak' title: 'Measurement of the longitudinal spin asymmetries for weak boson production in proton–proton collisions at $\sqrt{s}$ = 510GeV' --- Understanding the spin structure of the proton in terms of its quark, antiquark, and gluon constituents is of fundamental interest. This description is commonly done using polarized parton distribution functions (PDFs), which can be determined using perturbative QCD techniques and global analyses of data from polarized deep-inelastic lepton-nucleon scattering (DIS) experiments and from high-energy polarized proton–proton scattering experiments at the Relativistic Heavy-Ion Collider (RHIC). Recent examples of such PDFs are given in Refs. [@Nocera:2014gqa; @deFlorian:2014yva]. The data from leptonic $W$-decays in polarized proton–proton collisions at RHIC [@Aggarwal:2010vc; @Adamczyk:2014xyw; @Adare:2010xa; @Adare:2015gsd; @Adare:2018csm] provide constraints in these global analyses, which now show a flavor asymmetry in the light sea-quark polarizations for parton momentum fractions, $0.05 < x < 0.25,$ at hard perturbative scales. The existence of such an asymmetry in the polarized PDFs has been searched for directly in semi-inclusive DIS experiments [@Adeva:1998qz; @Airapetian:2004zf; @Alekseev:2010ub] but had thus far been established only in the case of the unpolarized PDFs. There, Drell-Yan measurements [@Baldit:1994jk; @Towell:2001nh] and DIS measurements [@Arneodo:1996qe; @Ackerstaff:1998sr], in particular, have reported large enhancements in the ratio of $\bar{d}$ over $\bar{u}$ antiquark distributions. This has provided a strong impetus for theoretical modeling [@Kumano:1997cy] and renewed measurement [@Reimer:2011zza]. Considerable progress is being made also in lattice-QCD [@Lin:2017snn]. 
The leptonic $W^+\rightarrow e^+ \nu$ and $W^-\rightarrow e^- \bar{\nu}$ decay channels provide sensitivity to the helicity distributions of the quarks, $\Delta u$ and $\Delta d$, and antiquarks, $\Delta\bar{u}$ and $\Delta\bar{d}$, that is free of uncertainties associated with non-perturbative fragmentation. The cross-sections are well described [@STAR:2011aa]. The primary observable is the longitudinal single-spin asymmetry $A_L \equiv (\sigma_+-\sigma_- )/ (\sigma_++\sigma_-)$ where $\sigma_{+(-)}$ is the cross-section when the helicity of the polarized proton beam is positive (negative). At leading order, $$A_L^{W^+}(y_W) \propto \frac{ \Delta \bar{d}(x_1) u(x_2) - \Delta u(x_1) \bar{d}(x_2) }{\bar{d}(x_1) u(x_2) + u(x_1) \bar{d}(x_2)}, \label{Eqn:ALWp}$$ $$A_L^{W^-}(y_W) \propto \frac{\Delta \bar{u}(x_1) d(x_2)-\Delta d(x_1) \bar{u}(x_2)}{\bar{u}(x_1)d(x_2) + d(x_1) \bar{u}(x_2)}, \label{Eqn:ALWn}$$ where $x_1~(x_2)$ is the momentum fraction carried by the colliding quark or antiquark in the polarized (unpolarized) beam. $A_L^{W^+}$ ($A_L^{W^-}$) approaches $-\Delta u/u$ ($-\Delta d/d$) in the very forward region of $W$ rapidity, $y_W \gg 0$, and $\Delta \bar{d}/\bar{d}$ ($\Delta \bar{u}/\bar{u}$) in the very backward region of $W$ rapidity, $y_W \ll 0$. The observed positron and electron pseudorapidities, $\eta_e$, are related to $y_W$ and to the decay angle of the positron and electron in the $W$ rest frame [@Bunce:2000uv]. Higher-order corrections to $A_L(\eta_e)$ are known [@Ringer:2015oaa; @Nadolsky2003; @deFlorian:2010aa] and have been incorporated into the aforementioned global analyses. In this Rapid Communication, we report new measurements of the single-spin asymmetries for decay positrons and electrons from $W^\pm$ bosons produced in longitudinally-polarized proton–proton collisions at a center-of-mass energy of $\sqrt{s}$ = 510 GeV.
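The forward-rapidity limit of the leading-order expression for $A_L^{W^+}$ can be made concrete with a small sketch. The parametrizations below are toy functions invented purely for illustration (they are NOT fitted parton distributions); they only encode the generic feature that the valence quark dominates at large $x$ while the sea falls off steeply, which is what drives $A_L^{W^+}\to-\Delta u/u$ for $x_1 \gg x_2$:

```python
# toy parametrizations (NOT real PDFs): valence-like u dominates at large x,
# sea-like dbar falls off much more steeply
def u(x):     return (1 - x) ** 3 / x ** 0.5
def dbar(x):  return (1 - x) ** 7 / x
def du(x):    return x * u(x)           # toy helicity distribution Delta u
def ddbar(x): return -0.1 * dbar(x)     # toy Delta d-bar

def al_wplus(x1, x2):
    # leading-order A_L^{W+} (up to an overall positive factor), as in the text
    num = ddbar(x1) * u(x2) - du(x1) * dbar(x2)
    den = dbar(x1) * u(x2) + u(x1) * dbar(x2)
    return num / den

# forward region, x1 >> x2: the valence term dominates numerator and
# denominator, so A_L^{W+} approaches -Delta u(x1)/u(x1)
print(al_wplus(0.5, 0.05), -du(0.5) / u(0.5))
```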
In addition, we report new results for the double-spin asymmetries $A_{LL}$ for $W^\pm$ and $A_L$ for $Z/\gamma^*$ production. The data were recorded in the year 2013 by the STAR collaboration and correspond to an integrated luminosity of about 250 pb$^{-1}$. The polarizations of the two incident proton beams were measured using Coulomb-nuclear interference proton–carbon polarimeters, which were calibrated with a polarized hydrogen gas-jet target [@RHICpol]. The luminosity-weighted beam polarization was $P = 0.56$, with a relative scale uncertainty of 3.3% for the single-beam polarization and 6.4% for the product of the polarizations from both beams. The figure of merit for single-spin asymmetry measurements, $P^2\mathcal{L}$, is higher by a factor of three for the 2013 data compared to the results [@Adamczyk:2014xyw] from the 2011 and 2012 data. This measurement and analysis made use of essentially the same apparatus and techniques as described in Refs. [@Aggarwal:2010vc; @STAR:2011aa; @Adamczyk:2014xyw]. As before, the subsystems of the STAR detector [@NIM] used in this measurement are the Time Projection Chamber [@TPC] (TPC), which provides charged particle tracking for pseudorapidities $|\eta| \lesssim 1.3$, and the Barrel [@BEMC] and Endcap [@EEMC] Electromagnetic Calorimeters (BEMC, EEMC). These lead-scintillator sampling calorimeters are segmented into optically isolated towers that cover the full azimuthal angle, $\phi$, for mid and forward pseudorapidity, $|\eta|<1.0$ and $1.1<\eta < 2.0$, respectively. They provide the online triggering requirements to initiate the data recording. The trigger accepted events if a transverse energy $E_T > 12\ (10)\, \mathrm{GeV}$ was observed in a region $\Delta\eta\times\Delta\phi \simeq 0.1 \times 0.1$ of the BEMC (EEMC). Events were kept in the analysis if their collision vertex along the beam axis, determined from tracks reconstructed in the TPC, was within $\pm\,100$ cm of the center of the STAR detector.
The vertex distribution along the beam axis was approximately Gaussian with an RMS width of 47cm. The $W^\pm$ bosons were detected via their decay into positrons and electrons, $W^+\rightarrow e^+ \nu$ and $W^-\rightarrow e^- \bar{\nu}$. These events are characterized by an isolated $e^+$ or $e^-$ with high transverse momentum, $p_T$, accompanied by a high $p_T$ neutrino, $\nu$, or antineutrino, $\bar{\nu}$. Since the $\nu$ and $\bar{\nu}$ escape detection, this leads to a characteristically large $p_T$ imbalance in these events. Candidate $W$-decay positrons or electrons were identified at mid-rapidity (forward rapidity) by a high $p_T$ TPC track associated with the primary event collision vertex pointing to a matching tower cluster in the BEMC (EEMC) with high energy. Candidate tracks at mid-rapidity (forward rapidity) were required to have at least 15(5) TPC hits to ensure good track quality, and the ratio of the number of hits in the fit to the number of possible hits was required to be more than 0.51 to avoid splitting tracks. A threshold was imposed on the transverse momentum of the particle track, $p_T > 10\,(7)\,\mathrm{GeV/}c$. Of the four possible 2 $\times$ 2 calorimeter tower clusters containing the tower that was hit at its front face by the high-$p_T$ positron or electron, the cluster with the largest total energy was used to determine the positron or electron transverse energy, $E^e_T$. This energy was required to exceed 14 GeV. The distance between the track and the center position of the tower cluster was required to be less than 7(10)cm at the front face of the BEMC (EEMC). Unlike background events, signal events have a characteristic isolated transverse energy deposit from the decay positron or electron of about 40GeV, approximately half the $W$ mass, and a large imbalance in the total observed transverse energy as mentioned above. 
QCD backgrounds were suppressed using selections based on kinematic and topological differences between leptonic $W$-decay events and QCD processes. To identify isolated high-$p_T$ decay positrons or electrons, and discriminate against jets, the ratio of $E^e_T$ to the total energy in a $4\times4$ BEMC (EEMC) cluster centered on and including the candidate $2\times2$ tower cluster was required to be greater than $95\,(96)\%$. In addition, the ratio of $E^e_T$ to the transverse energy $E^{\Delta R < 0.7}_T$ in a cone of radius $\Delta R = \sqrt{\Delta\eta^2 + \Delta\phi^2} < 0.7$ around the candidate track was required to be greater than 88%. The transverse energy $E^{\Delta R < 0.7}_T$ was determined by summing the BEMC and EEMC $E_T$ and the TPC track $p_T$ within the cone. This selection thus suppressed jet-like events. In addition, in the EEMC acceptance, an isolation cut based on the energy deposited in the two layers of the EEMC Shower Maximum Detector (ESMD) [@EEMC] was used. The ESMD can be used to measure the transverse profile of the electromagnetic shower and thereby discriminate between the narrow transverse profile of an isolated (signal) positron or electron shower and the typically wider distribution observed in QCD (background) events. This was done by requiring that the ratio of the total energy deposited in ESMD strips within $\pm$1.5cm of the central strip pointed to by a TPC track to the energy deposited in strips within $\pm$10cm, $R_\mathrm{ESMD}$, was greater than 0.7. In addition, the characteristic transverse energy imbalance of signal events was used to further suppress backgrounds. A $p_T$-balance vector, $\vec{p}_T^{\ \mathrm{bal}}$, defined as the vector sum of the candidate decay positron or electron $\vec{p}^{\ e}_T$ vector plus the sum of the $\vec{p}_T$ vectors for all reconstructed jets whose axes are outside a cone of radius $\Delta R$ = 0.7 around the candidate decay positron or electron, was computed for each event. 
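For concreteness, these isolation requirements can be collected into a short filter. The sketch below is not STAR analysis code; the candidate fields (`et_e`, `et_4x4`, `et_cone`) are hypothetical names of our own, and the thresholds are the BEMC values quoted above:

```python
def passes_bemc_isolation(cand):
    """Apply the BEMC isolation cuts quoted in the text (sketch only).

    cand: dict with hypothetical fields
      et_e    -- 2x2 cluster transverse energy E_T^e (GeV)
      et_4x4  -- transverse energy of the enclosing 4x4 cluster (GeV)
      et_cone -- E_T in a Delta R < 0.7 cone around the track (GeV)
    """
    # E_T^e must carry more than 95% of the 4x4 cluster energy (BEMC)
    if cand["et_e"] < 0.95 * cand["et_4x4"]:
        return False
    # E_T^e must carry more than 88% of the energy in the Delta R < 0.7 cone
    if cand["et_e"] < 0.88 * cand["et_cone"]:
        return False
    return True
```

The EEMC version would use 0.96 in place of 0.95 and additionally require $R_\mathrm{ESMD} > 0.7$.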
Jets were reconstructed for this purpose using an anti-$k_T$ algorithm [@Cacciari:2008gp] with a resolution parameter $R = 0.6$ from towers (tracks) with $E_T$ ($p_T$) $> 0.2$ GeV($/c$). Reconstructed jets were required to have $p_T > 3.5$ GeV$/c$. A scalar signed $p_T$-balance variable, defined as $(\vec{p}_T^{\ e}\cdot\,\vec{p}^{\ \mathrm{bal}}_T)/|\vec{p}^{\ e}_T|$, was then computed and required to be larger than 14 (20)GeV/$c$ for candidate events in the BEMC (EEMC) to be retained in the analysis. Complementary to the signed $p_T$-balance cut, it was required that the total transverse energy opposite in azimuth to the candidate positron or electron in the BEMC, $-0.7 < \Delta\phi - \pi < 0.7$, did not exceed 11GeV. This further reduced QCD dijet background in cases when a sizable fraction of the energy of one of the jets was not observed due to detector effects. Candidate positrons or electrons that passed the above selection cuts were then sorted by charge-sign, determined from the curvature of the TPC tracks in the solenoidal magnetic field. Figure \[fig:charge\]a (b) shows the distribution of the reconstructed charge-sign, $Q = \pm 1$, multiplied by the ratio of $E^e_T$ observed in the BEMC (EEMC) to $p^e_T$ determined with the TPC for events in the signal region, $25 < E_T^e < 50\,\mathrm{GeV}$. The relative yields of the $W^+$ and $W^-$ follow the pseudorapidity dependence of the cross-section ratio. The distributions were each fitted with two double-Gaussian template shapes, determined from a Monte Carlo simulated $W$ sample, to estimate the reconstructed charge-sign purity. The amplitudes of the Gaussians were fitted to the data, as was the central position of the narrower Gaussian in each of the templates. The remaining parameters were fixed by studies in which simulated $W^+\rightarrow e^+ \nu$ and $W^-\rightarrow e^- \bar{\nu}$ events were embedded (cf. the paragraph below) in zero-bias data. 
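The balance construction amounts to a few lines of vector arithmetic in the transverse plane. The following sketch (our own illustration, not STAR software) computes the scalar signed $p_T$-balance variable defined above:

```python
import math

def signed_pt_balance(pt_e, jets):
    """Scalar signed p_T-balance (sketch).

    pt_e : (px, py) of the candidate positron/electron in the transverse plane
    jets : list of (px, py) for reconstructed jets whose axes lie outside
           Delta R = 0.7 of the candidate
    Returns (p_e . p_bal) / |p_e|, the quantity cut at 14 (20) GeV/c.
    """
    # p_T-balance vector: candidate plus all recoiling jets
    bal_x = pt_e[0] + sum(j[0] for j in jets)
    bal_y = pt_e[1] + sum(j[1] for j in jets)
    # project onto the candidate direction
    return (pt_e[0] * bal_x + pt_e[1] * bal_y) / math.hypot(pt_e[0], pt_e[1])
```

An isolated 40GeV/$c$ candidate with no recoiling jets yields a balance equal to its own $p_T$, well above threshold, while a back-to-back dijet topology yields a value near zero.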
The hatched regions, $|Q\cdot E_T/p_T| < 0.4$ and $|Q\cdot E_T/p_T| > 1.8$, were excluded to remove tracks with poorly reconstructed $p_T$ and to reduce contamination from events with opposite charge-sign. This contamination is negligible at mid-rapidity, but increases to 9.6% and 12.0% for $W^+$ and $W^-$ candidate events, respectively, in the EEMC region. The forward $A_L$ values were corrected for this contamination using the asymmetries observed in the data. ![\[fig:charge\] Distribution of the product of $Q$, the TPC reconstructed charge-sign, and $E_T/p_T$ in the BEMC (a) and EEMC (b) regions. The positron (red) and electron (blue) candidate events have been fitted with double-Gaussian distributions. The excluded regions are marked by hatched shades.](chargeSep_run13){width="8.6cm"} Figure \[fig:bemc\] shows the distributions of $W^+$ and $W^-$ yields as a function of $E^e_T$ for the four central $\eta_e$ intervals considered in this analysis, along with the estimated residual background contributions from electroweak and QCD processes. The residual electroweak backgrounds are predominantly due to $W^\pm\rightarrow\tau^\pm\nu_\tau$ and $Z/{\gamma^*}\rightarrow e^+ e^-$. These contributions were estimated from Monte Carlo simulations, using events generated with <span style="font-variant:small-caps;">pythia 6.4.28</span> [@Sjostrand:2006za] and the “Perugia 0" tune [@Skands:2010ak] that passed through a <span style="font-variant:small-caps;">geant 3</span> [@Brun:1994aa] model of the STAR detector, and were subsequently embedded into STAR zero-bias data. The simulated samples were normalized to the $W$ data using the known integrated luminosity. The <span style="font-variant:small-caps;">tauola</span> package was used for the polarized $\tau^\pm$ decay [@Golonka:2003xt]. Residual QCD dijet background in which one of the jets pointed to uninstrumented pseudorapidity regions was estimated using two separate procedures. 
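The double-Gaussian template used in the charge-sign purity fit described above is simply the sum of a narrow and a wide Gaussian. The sketch below is our own illustration; the parameter values in the test are arbitrary, not the fitted ones:

```python
import math

def gauss(x, mu, sigma):
    """Normalized Gaussian density."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def double_gauss(x, a_narrow, mu_narrow, s_narrow, a_wide, mu_wide, s_wide):
    """Double-Gaussian template: a narrow core on top of a wide component.

    In the fit described in the text, the two amplitudes and the mean of
    the narrow Gaussian float, while the remaining shape parameters are
    fixed from embedded W -> e nu simulation.
    """
    return (a_narrow * gauss(x, mu_narrow, s_narrow)
            + a_wide * gauss(x, mu_wide, s_wide))
```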
The contribution from $e^\pm$ candidate events with an opposite-side jet fragment in the uninstrumented region $-2 < \eta < -1.1$ was estimated by studying such data in the EEMC, which instruments the region $1.1 < \eta < 2$. This is referred to as the “Second EEMC” procedure. Residual background from the uninstrumented region $|\eta|>2$ was estimated by studying events that satisfy all isolation criteria, but do not satisfy the cuts on the scalar signed $p_T$-balance variable. This is referred to as the “Data-driven QCD” procedure. To assess the background remaining in the signal region, the $E_T$ distribution of this background-dominated sample was normalized, for $E_T$ values between 14GeV and 18GeV, to the signal candidate distribution that remained after all other background contributions had been removed. Additional aspects of both procedures are described in Refs. [@Aggarwal:2010vc; @STAR:2011aa]. ![\[fig:bemc\] $E_T^e$ distribution of electron (top) and positron (bottom) candidates (black crosses), background contributions, and sum of backgrounds and $W \rightarrow e\nu$ MC signal (red-dashed) in the BEMC region.](STAR-bemcW8_2013p1p2){width="17.8cm"} Figure \[fig:eemc\] shows the charge-separated distributions in the EEMC region as a function of the signed $p_{T}$-balance variable, together with the estimated residual background contributions. Residual electroweak backgrounds for these regions were estimated in the same way as for the mid-rapidity data. Residual QCD backgrounds were estimated using the ESMD, where the isolation parameter $R_\mathrm{ESMD}$ was required to be less than 0.6 for QCD background events. The shape was determined for each charge-sign separately and normalized to the measured yield in the region where the signed $p_T$-balance variable was between $-8$ and 8GeV/$c$. This region is dominated by QCD backgrounds. 
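The normalization step shared by both background procedures, scaling a background-dominated template to the data (after subtracting the other background contributions) in a control window, can be sketched as follows. The binned-histogram representation and the function name are our own:

```python
def normalize_template(template, data, other_bkg, window):
    """Scale a background template to data in a control window (sketch).

    template, data, other_bkg : dicts {bin_center: counts}
    window : (lo, hi) control region, e.g. 14-18 GeV in E_T
    Returns a copy of the template scaled so that, in the window, it
    matches the data minus the other background contributions.
    """
    lo, hi = window
    t = sum(v for x, v in template.items() if lo <= x < hi)
    d = sum(data[x] - other_bkg.get(x, 0.0) for x in data if lo <= x < hi)
    scale = d / t
    return {x: scale * v for x, v in template.items()}
```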
![\[fig:eemc\] Signed $p_{T}$-balance distribution for electron (left) and positron (right) candidates (black crosses), background contributions, and sum of backgrounds and $W \rightarrow e\nu$ MC signal (red-dashed) in the EEMC region.](STAR-eemcW2_2013p1p2){width="8.6cm"} At RHIC, there are four helicity configurations for the two longitudinally-polarized proton beams: $++$, $+-$, $-+$, and $--$. The data from these four configurations can be combined such that the net polarization for one beam effectively averages to zero, while maintaining high polarization in the other. The longitudinal single-spin asymmetry $A_L$ for the combination in which the first beam is polarized and the second carries no net polarization was determined from: $$A_L = \frac{1}{\beta P}\,\frac{R_{++}N_{++}+R_{+-}N_{+-}-R_{-+}N_{-+}-R_{--}N_{--}}{R_{++}N_{++}+R_{+-}N_{+-}+R_{-+}N_{-+}+R_{--}N_{--}},$$ where $\beta$ is the signal purity, $P$ is the average beam polarization, and $R$ and $N$ are the normalizations for relative luminosity and the raw $W^\pm$ yields, respectively, for the helicity configurations indicated by the subscripts. The relative luminosities were obtained from a large QCD sample that exhibits no significant single-spin asymmetry. Typical values were between 0.993 and 1.009. The purity was evaluated from the aforementioned signal and background contributions and was found to be between 83% and 98%. $A_L$ was determined in a similar way for the combination in which the second beam is polarized and the first carries no net polarization, and the values for the two combinations were then combined. ![\[fig:money\] Longitudinal single-spin asymmetries, $A_L$, for $W^\pm$ production as a function of the positron or electron pseudorapidity, $\eta_e$, separately for the STAR 2011+2012 (black squares) and 2013 (red diamonds) data samples for 25 $< E_T^e <$ 50GeV. 
The 2011+2012 results have been offset to slightly smaller $\eta$ values for clarity. Shown also are the final asymmetries for high-energy decay leptons from $W$ and $Z/\gamma^*$ production from the PHENIX central arms as a function of $\eta_e$ and from the muon-arms as a function of $\eta_\mu$ with their statistical and systematic uncertainties [@Adare:2015gsd; @Adare:2018csm]. ](moneyPlot2013_wtPRL12_wtPH.pdf){width="8.6cm"} The $A_L$ results for $W^+$ and $W^-$ from the data sample recorded by STAR in 2013 are shown in Fig. \[fig:money\] as a function of $\eta_e$. The vertical error bars show the size of the statistical uncertainties, including those associated with the correction for the wrong charge-sign in the case of the points at $|\eta_e| \simeq$ 1.2. The previously published STAR data [@Adamczyk:2014xyw] are shown for comparison. Shown also are the $A_L$ data on high-energy forward decay muons and mid-rapidity positrons or electrons from combined $W$ and $Z/\gamma^*$ production by the PHENIX experiment with their statistical and systematic uncertainties as a function of $\eta_\mu$ and $\eta_e$, respectively [@Adare:2015gsd; @Adare:2018csm]. The size of systematic uncertainties associated with BEMC and EEMC gain calibrations (5% variation) and the data-driven QCD background are indicated by the boxes. The gray band shown along the $A_L$ = 0 line indicates the size of the systematic uncertainty from the determination of relative luminosity, and is correlated among all the points. The 3.3% relative systematic uncertainty from beam polarization measurement is not shown. Table \[tab:WAL2013\] gives the results for $A_L$, as well as for the longitudinal double-spin asymmetry $A_{LL} \equiv (\sigma_{++}+\sigma_{--}-\sigma_{+-}-\sigma_{-+})/(\sigma_{++}+\sigma_{--}+\sigma_{+-}+\sigma_{-+})$, where the subscripts denote the helicity configurations. 
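The $A_L$ formula above, and the analogous estimator for $A_{LL}$, can be written compactly. The sketch below is our own illustration; in particular, the $A_{LL}$ estimator is written by analogy with the quoted $A_L$ formula and is not spelled out in the text:

```python
def single_spin_asymmetry(N, R, beta, P):
    """A_L estimator from the formula above (sketch).

    N, R : dicts keyed by the helicity configurations '++', '+-', '-+', '--',
           holding the raw yields and relative-luminosity normalizations.
    beta : signal purity, P : average polarization of the (first) beam.
    """
    num = (R['++'] * N['++'] + R['+-'] * N['+-']
           - R['-+'] * N['-+'] - R['--'] * N['--'])
    den = (R['++'] * N['++'] + R['+-'] * N['+-']
           + R['-+'] * N['-+'] + R['--'] * N['--'])
    return num / (beta * P * den)

def double_spin_asymmetry(N, R, beta, P1, P2):
    """A_LL estimator (our analogy): same-helicity configurations enter
    with a positive sign, and both beam polarizations divide the result."""
    num = (R['++'] * N['++'] + R['--'] * N['--']
           - R['+-'] * N['+-'] - R['-+'] * N['-+'])
    den = (R['++'] * N['++'] + R['+-'] * N['+-']
           + R['-+'] * N['-+'] + R['--'] * N['--'])
    return num / (beta * P1 * P2 * den)
```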
The new $W^\pm$ $A_{LL}$ data are consistent with previously published STAR data [@Adamczyk:2014xyw] and have better precision. $W^\pm$ $A_{LL}$ is sensitive to quark and antiquark polarizations, albeit less so than $A_L$, and has been proposed for tests of consistency and positivity constraints [@Chang:2014jba; @Kang:2011qz].

         $\eta_e$   $A_L$ (2013)                     $A_L$ (2011–2013)                $A_{LL}$ (2013)                  $A_{LL}$ (2011–2013)
  ------ --------- -------------------------------- -------------------------------- -------------------------------- --------------------------------
  $W^+$   -1.24     -0.493 $\pm$ 0.181 $\pm$ 0.022   -0.312 $\pm$ 0.145 $\pm$ 0.017                                    
          -0.72     -0.255 $\pm$ 0.035 $\pm$ 0.016   -0.251 $\pm$ 0.030 $\pm$ 0.014   –                                –
          -0.25     -0.327 $\pm$ 0.027 $\pm$ 0.014   -0.331 $\pm$ 0.023 $\pm$ 0.014                                    
           0.25     -0.406 $\pm$ 0.027 $\pm$ 0.016   -0.412 $\pm$ 0.023 $\pm$ 0.016    0.039 $\pm$ 0.049 $\pm$ 0.014    0.016 $\pm$ 0.042 $\pm$ 0.011
           0.72     -0.557 $\pm$ 0.034 $\pm$ 0.024   -0.534 $\pm$ 0.029 $\pm$ 0.022    0.049 $\pm$ 0.063 $\pm$ 0.014    0.072 $\pm$ 0.054 $\pm$ 0.011
           1.24     -0.365 $\pm$ 0.183 $\pm$ 0.023   -0.482 $\pm$ 0.140 $\pm$ 0.022   -0.052 $\pm$ 0.331 $\pm$ 0.044    0.000 $\pm$ 0.262 $\pm$ 0.028
  $W^-$   -1.27      0.269 $\pm$ 0.185 $\pm$ 0.010    0.241 $\pm$ 0.146 $\pm$ 0.010                                    
          -0.74      0.264 $\pm$ 0.060 $\pm$ 0.010    0.260 $\pm$ 0.051 $\pm$ 0.010   –                                –
          -0.27      0.282 $\pm$ 0.066 $\pm$ 0.010    0.281 $\pm$ 0.056 $\pm$ 0.011                                    
           0.27      0.254 $\pm$ 0.066 $\pm$ 0.010    0.239 $\pm$ 0.056 $\pm$ 0.010    0.067 $\pm$ 0.120 $\pm$ 0.025   -0.012 $\pm$ 0.101 $\pm$ 0.019
           0.74      0.383 $\pm$ 0.059 $\pm$ 0.015    0.385 $\pm$ 0.051 $\pm$ 0.014   -0.096 $\pm$ 0.107 $\pm$ 0.026   -0.028 $\pm$ 0.092 $\pm$ 0.020
           1.27      0.218 $\pm$ 0.185 $\pm$ 0.009    0.205 $\pm$ 0.148 $\pm$ 0.009   -0.133 $\pm$ 0.331 $\pm$ 0.061   -0.147 $\pm$ 0.260 $\pm$ 0.038
  ------ --------- -------------------------------- -------------------------------- -------------------------------- --------------------------------

The new $W^\pm$ $A_L$ data are consistent with the previously published results, and have statistical uncertainties that are 40–50% smaller. 
The combined STAR data are shown in Fig. \[fig:moneyTh\] and compared with expectations based on the DSSV14 [@deFlorian:2014yva], NNPDFpol1.1 [@Nocera:2014gqa] and BS15 [@Bourrely:2015] PDFs evaluated using the next-to-leading order <span style="font-variant:small-caps;">CHE</span> [@deFlorian:2010aa] and fully resummed <span style="font-variant:small-caps;">RHICBOS</span> [@Nadolsky2003] codes. The NNPDFpol1.1 analysis, unlike DSSV14 and BS15, includes the STAR 2011+2012 $W^\pm$ data [@Adamczyk:2014xyw], which reduces in particular the uncertainties for $W^-$ expectations at negative $\eta$. To assess the impact, the STAR 2013 data were used in the reweighting procedure of Refs. [@Ball:2010gb; @Ball:2011gg] with the 100 publicly available NNPDFpol1.1 PDFs. The results from this reweighting, taking into account the total uncertainties of the STAR 2013 data and their correlations [@Sup:mat], are shown in Fig. \[fig:moneyTh\] as the blue hatched bands. The NNPDFpol1.1 uncertainties [@Nocera:2014gqa] are shown as the green bands for comparison. Figure \[fig:PolSea\] shows the corresponding differences of the light sea-quark polarizations versus $x$ at a scale of $Q^2 = 10\,(\mathrm{GeV}/c)^2$. The data confirm the existence of a sizable, positive $\Delta\bar{u}$ in the range $0.05 < x < 0.25$ [@Adamczyk:2014xyw] and the existence of a flavor asymmetry in the polarized quark sea. ![\[fig:moneyTh\]Longitudinal single-spin asymmetries, $A_L$, for $W^\pm$ production as a function of the positron or electron pseudorapidity, $\eta_e$, for the combined STAR 2011+2012 and 2013 data samples for 25 $< E_T^e <$ 50GeV (points) in comparison to theory expectations (curves and bands) described in the text.](moneyPlotComb){width="8.6cm"} ![\[fig:PolSea\]The difference of the light sea-quark polarizations as a function of $x$ at a scale of $Q^2$ = 10(GeV/$c$)$^2$. 
The green band shows the NNPDFpol1.1 results [@Nocera:2014gqa] and the blue hatched band shows the corresponding distribution after the STAR 2013 $W^\pm$ data are included by reweighting.](PolSeaAsy.pdf){width="8.6cm"} In addition, $A_L$ was determined for $Z/\gamma^*$ production from a sample of 274 electron–positron pairs with 70 $<m_{e^+e^-}<$ 110GeV/$c^2$. The $e^+$ and $e^-$ were each required to be isolated, have $|\eta_e|<1.1$, and $E^e_T >$ 14GeV. The result, $A^{Z/\gamma^*}_L = -0.04 \pm 0.07$, is consistent with that in Ref. [@Adamczyk:2014xyw] but with half the statistical uncertainty. The systematic uncertainty is negligible compared to the statistical uncertainty. This result is also consistent with theoretical expectations, $A^{Z/\gamma^*}_L = -0.08$ from DSSV14 [@deFlorian:2014yva] and $A^{Z/\gamma^*}_L =-0.04$ from NNPDFpol1.1 [@Nocera:2014gqa]. In summary, we report new STAR measurements of longitudinal single-spin and double-spin asymmetries for $W^\pm$ and single-spin asymmetry for $Z/\gamma^*$ bosons produced in polarized proton–proton collisions at $\sqrt{s}$ = 510GeV. The production of weak bosons in these collisions and their subsequent leptonic decay is a unique process to delineate the quark and antiquark polarizations in the proton by flavor. The $A_L$ data for $W^+$ and $W^-$, combined with previously published STAR results, show a significant preference for $\Delta\bar{u}(x,Q^2) > \Delta\bar{d}(x,Q^2)$ in the fractional momentum range 0.05 $< x <$ 0.25 at a scale of $Q^2 = 10\,(\mathrm{GeV}/c)^2$. This is opposite to the flavor asymmetry observed in the spin-averaged quark-sea distributions. We thank the RHIC Operations Group and RCF at BNL, the NERSC Center at LBNL, and the Open Science Grid consortium for providing resources and support. This work was supported in part by the Office of Nuclear Physics within the U.S. DOE Office of Science, the U.S. 
National Science Foundation, the Ministry of High Education and Science of the Russian Federation, National Natural Science Foundation of China, Chinese Academy of Science, the Ministry of Science and Technology of China and the Chinese Ministry of Education, the National Research Foundation of Korea, Czech Science Foundation and Ministry of Education, Youth and Sports of the Czech Republic, Department of Atomic Energy and Department of Science and Technology of the Government of India, the National Science Centre of Poland, the Ministry of Science, Education and Sports of the Republic of Croatia, RosAtom of Russia and German Bundesministerium für Bildung, Wissenschaft, Forschung and Technologie (BMBF) and the Helmholtz Association. [9]{} E. R. Nocera [*et al.*]{} (NNPDF Collaboration), Nucl. Phys. [**B887**]{}, 276 (2014). D. de Florian, R. Sassot, M. Stratmann, and W. Vogelsang, Phys. Rev. Lett.  [**113**]{}, 012001 (2014). M. M. Aggarwal [*et al.*]{} (STAR Collaboration), Phys. Rev. Lett.  [**106**]{}, 062002 (2011). L. Adamczyk [*et al.*]{} (STAR Collaboration), Phys. Rev. Lett.  [**113**]{}, 072301 (2014). A. Adare [*et al.*]{} (PHENIX Collaboration), Phys. Rev. Lett.  [**106**]{}, 062001 (2011). A. Adare [*et al.*]{} (PHENIX Collaboration), Phys. Rev. D [**93**]{}, 051103 (2016). A. Adare [*et al.*]{} (PHENIX Collaboration), Phys. Rev. D [**98**]{}, 032007 (2018). B. Adeva [*et al.*]{} (Spin Muon Collaboration), Phys. Lett. B [**420**]{}, 180 (1998). A. Airapetian [*et al.*]{} (HERMES Collaboration), Phys. Rev. D [**71**]{}, 012003 (2005). M. G. Alekseev [*et al.*]{} (COMPASS Collaboration), Phys. Lett. B [**693**]{}, 227 (2010). A. Baldit [*et al.*]{} (NA51 Collaboration), Phys. Lett. B [**332**]{}, 244 (1994). R. S. Towell [*et al.*]{} (NuSea Collaboration), Phys. Rev. D [**64**]{}, 052002 (2001). M. Arneodo [*et al.*]{} (New Muon Collaboration), Nucl. Phys. [**B483**]{}, 3 (1997). K. Ackerstaff [*et al.*]{} (HERMES Collaboration), Phys. Rev. Lett.  
[**81**]{}, 5519 (1998). S. Kumano, Phys. Rept.  [**303**]{}, 183 (1998) and references therein. P. E. Reimer (Fermilab SeaQuest Collaboration), J. Phys. Conf. Ser.  [**295**]{}, 012011 (2011). H. W. Lin [*et al.*]{}, Prog. Part. Nucl. Phys.  [**100**]{}, 107 (2018) and references therein. L. Adamczyk [*et al.*]{} (STAR Collaboration), Phys. Rev. D [**85**]{}, 092010 (2012). G. Bunce, N. Saito, J. Soffer, and W. Vogelsang, Ann. Rev. Nucl. Part. Sci.  [**50**]{}, 525 (2000). F. Ringer and W. Vogelsang, Phys. Rev. D [**91**]{}, 094033 (2015). D. de Florian and W. Vogelsang, Phys. Rev. D [**81**]{}, 094020 (2010). P. M. Nadolsky and C. Yuan, Nucl. Phys. [**B666**]{}, 31 (2003). RHIC Polarimetry Group, RHIC/CAD Accelerator Physics Note [**490**]{} (2013). K.H. Ackermann [*et al.*]{}, Nucl. Instrum. Meth. A [**499**]{}, 624 (2003). M. Anderson [*et al.*]{}, Nucl. Instrum. Meth. A [**499**]{}, 659 (2003). M. Beddo [*et al.*]{}, Nucl. Instrum. Meth. A [**499**]{}, 725 (2003). C.E. Allgower [*et al.*]{}, Nucl. Instrum. Meth. A [**499**]{}, 740 (2003). M. Cacciari, G. P. Salam, and G. Soyez, J. High Energy Phys. 04 (2008) 063. T. Sjostrand, S. Mrenna, and P. Z. Skands, J. High Energy Phys. 05 (2006) 026. P. Z. Skands, Phys. Rev. D [**82**]{}, 074018 (2010). R. Brun [*et al.*]{}, doi:10.17181/CERN.MUHF.DMJ1. P. Golonka [*et al.*]{}, Comput. Phys. Commun.  [**174**]{}, 818 (2006). C. Bourrely and J. Soffer, Nucl. Phys. [**A941**]{}, 307 (2015). R. D. Ball [*et al.*]{} (NNPDF Collaboration), Nucl. Phys. [**B849**]{}, 112 (2011) \[*Errata: ibid* [**B854**]{}, 926 (2012) and [**B855**]{}, 927 (2012)\]. R. D. Ball [*et al.*]{} (NNPDF Collaboration), Nucl. Phys. [**B855**]{}, 608 (2012). W. C. Chang and J. C. Peng, Prog. Part. Nucl. Phys. [**79**]{}, 95 (2014). Z.-B. Kang and J. Soffer, Phys. Rev. D [**83**]{}, 114020 (2011). See Supplemental Material at \[URL will be inserted by publisher\] for the correlation matrix.
--- abstract: 'This paper is concerned with learning transferable forward models for push manipulation that can be applied to novel contexts, and with improving the quality of prediction when critical information is available. We propose to learn a parametric internal model for push interactions that, as in humans, enables a robot to predict the outcome of a physical interaction even in novel contexts. Given a desired push action, humans are capable of identifying where to place their finger on a new object so as to produce a predictable motion of the object. We achieve the same behaviour by factorising the learning into two parts. First, we learn a set of local contact models to represent the geometrical relations between the robot pusher, the object, and the environment. Then we learn a set of parametric local motion models to predict how these contacts change throughout a push. The sets of contact and motion models together constitute our internal model. By adjusting the shapes of the distributions over the physical parameters, we modify the internal model’s response. Uniform distributions yield coarse estimates when no information is available about the novel context. We call this an unbiased predictor. A more accurate predictor can be learned for a specific environment/object pair (e.g. low friction/high mass), called a biased predictor. The effectiveness of our approach is demonstrated in a simulated environment in which a Pioneer 3-DX robot equipped with a bumper needs to predict a push outcome for an object in a novel context, and we support those results with a proof of concept on a real robot. We train on two objects (a cube and a cylinder) for a total of 24,000 pushes in various conditions, and test on six objects encompassing a variety of shapes, sizes, and physical parameters for a total of 14,400 predicted push outcomes. 
Our experimental results show that both biased and unbiased predictors can reliably produce predictions in line with the outcomes of a carefully tuned physics simulator.' author: - 'Rhys Howard${}^{1}$, Claudio Zito${}^{1,2,\dagger}$[^1] [^2] [^3]' bibliography: - 'references.bib' title: Learning Transferable Push Manipulation Skills in Novel Contexts --- [Howard : Learning Transferable Push Manipulation Skills in Novel Contexts]{} learning transferable skills, push manipulation, prediction, forward models for physical interaction. [^1]: $^{\dagger}$Corresponding author [Claudio.Zito@tii.ae]{} [^2]: $^{1}$Intelligent Robotics Lab, School of Computer Science, University of Birmingham, B15 2TT, Birmingham, United Kingdom [^3]: $^{2}$Technology Innovation Institute, Abu Dhabi, UAE
--- abstract: 'This paper addresses the linear independence of T-splines that correspond to refinements of three-dimensional tensor-product meshes. We give an abstract definition of analysis-suitability, and prove that it is equivalent to dual-compatibility, which guarantees linear independence of the T-spline blending functions. In addition, we present a local refinement algorithm that generates analysis-suitable meshes and has linear computational complexity in terms of the number of marked and generated mesh elements.' author: - 'Philipp Morgenstern[^1]' title: 'Globally structured 3D analysis-suitable T-splines: definition, linear independence and $m$-graded local refinement' ---

**Keywords:** Isogeometric Analysis, trivariate T-Splines, Analysis-Suitability, Dual-Compatibility, Adaptive mesh refinement.

Introduction
============

T-splines [@SZBN:2003] have been introduced as a free-form geometric technology and were the first tool of interest in Adaptive Isogeometric Analysis (IGA). Although they are still among the most common techniques in Computer Aided Design, T-splines pose algorithmic difficulties that have motivated a wide range of alternative approaches to mesh-adaptive splines, such as hierarchical B-splines [@Forsey:Bartels:1988; @KVZB:2014], THB-splines [@GJS:2012], LR splines [@DLP:2013], and hierarchical T-splines [@ESLT:2015], amongst many others. One major difficulty in using T-splines for analysis has been pointed out by Buffa, Cho and Sangalli [@BCS:2010], who showed that general T-spline meshes can induce linearly dependent T-spline blending functions. This prohibits the use of T-splines as a basis for analytical purposes such as solving a discretized partial differential equation. This insight motivated the research on T-meshes that guarantee the linear independence of the corresponding T-spline blending functions, referred to as *analysis-suitable T-meshes*. 
Analysis-suitability has been characterized in terms of topological mesh properties [@ZSHS:2012] and, in an alternative approach, through the equivalent concept of Dual-Compatibility [@BBCS:2012]. While Dual-Compatibility has been characterized in arbitrary dimensions [@BBSV:2014], Analysis-Suitability as a topological criterion for linear independence of the T-spline functions is only available in the two-dimensional setting. In this paper, we introduce analysis-suitable T-splines for those 3D meshes that are refinements of tensor-product meshes, and propose an algorithm for their local refinement, based on our preliminary work in [@Morgenstern:Peterseim:2015]. In addition, we generalize the algorithm from [@Morgenstern:Peterseim:2015] by introducing a grading parameter $m$ that represents the number of children in a single element's refinement. This allows the user to fully control how local the refinement shall be. Choosing $m$ large yields meshes with very local refinement, while a small $m$ will cause more widespread refinement. The former yields a smaller number of degrees of freedom, while the latter reduces the overlap of the basis functions and hence provides sparser Galerkin and collocation matrices. This paper is organized as follows. Section \[sec: refinement\] defines the initial mesh and basic refinement steps and introduces our new refinement algorithm. Section \[sec: adm meshes\] then characterizes the class of ‘admissible meshes’ generated by this algorithm. In Section \[sec: Tspline def\] we give a brief definition of trivariate odd-degree T-splines. In Section \[sec: AS\] we give an abstract definition of Analysis-Suitability in the 3D setting and prove that all admissible meshes are analysis-suitable. In Section \[sec: DC\] we define dual-compatible meshes, and prove that analysis-suitability and dual-compatibility are equivalent, and that all dual-compatible meshes provide linearly independent T-spline functions. 
(Figure \[fig: overview\] illustrates this “long way” to linear independence.) Section \[sec: complexity\] proves linear complexity of the refinement procedure, and conclusions and an outlook to future work are finally given in Section \[sec: conclusions\].

                             Symbol                                     Section
  -------------------------- ------------------------------------------ ---------------------
  refinement algorithm       ${\operatorname{ref}^{\mathbf p,m}}$       \[sec: refinement\]
  admissible meshes          ${\mathbb A^{\mathbf p,m}}$                \[sec: adm meshes\]
  analysis-suitable meshes   ${\mathbb{A\negmedspace S}^{\mathbf p}}$   \[sec: AS\]
  dual-compatible meshes     ${\mathbb{D\negmedspace C}^{\mathbf p}}$   \[sec: DC\]

$${\operatorname{ref}^{\mathbf p,m}}({\mathbb A^{\mathbf p,m}})\stackrel{\text{Theorem~\ref{thm: ref works}}}\subseteq {\mathbb A^{\mathbf p,m}}\stackrel{\text{Theorem~\ref{thm: A in AS}}}\subseteq {\mathbb{A\negmedspace S}^{\mathbf p}}\stackrel{\text{Theorem~\ref{thm: AS in DC}}}{\mathrel{\rule{0pt}{\eqheight}=}}{\mathbb{D\negmedspace C}^{\mathbf p}}\stackrel{\text{Theorem~\ref{thm: DC has dual basis}}}\subseteq \left[\parbox{3.25cm}{\centering \small meshes with linearly independent T-splines}\right]$$

Adaptive mesh refinement {#sec: refinement}
========================

This section defines the new refinement algorithm and characterizes the class of meshes which are generated by this algorithm. The algorithm is essentially a 3D version of the one introduced in [@Morgenstern:Peterseim:2015], with the additional feature of variable grading. The initial mesh is assumed to have a very simple structure. In the context of IGA, the partitioned rectangular domain is referred to as the *index domain*. That is, we assume that the *physical domain* (on which, e.g., a PDE is to be solved) is obtained by a continuous map from the active region (cf. Section \[sec: DC\]), which is a subset of the index domain. Throughout this paper, we focus on the mesh refinement only, and therefore we will only consider the index domain. 
For the parametrization and refinement of the T-spline blending functions, we refer to [@SLSH:2012]. Given $\tilde X,\tilde Y,\tilde Z\in\mathbb N$, the initial mesh ${\mathcal G}_0$ is a tensor product mesh consisting of closed cubes (also denoted *elements*) with side length 1, i.e., $${\mathcal G}_0{\coloneqq}\Bigl\{[x-1,x]\times[y-1,y]\times[z-1,z]\mid x\in\{1,\dots,\tilde X\},y\in\{1,\dots,\tilde Y\},z\in\{1,\dots,\tilde Z\}\Bigr\}.$$ The domain partitioned by ${\mathcal G}_0$ is denoted by $\Omega{\coloneqq}(0,\tilde X)\times (0,\tilde Y)\times(0,\tilde Z)$. The key property of the refinement algorithm will be that refinement of an element $K$ is allowed only if elements in a certain neighbourhood are sufficiently fine. The size of this neighbourhood, which is called the $(\mathbf p,m)$-patch and defined through the definitions below, depends on the size of $K$, the polynomial degree $\mathbf p=(p_1,p_2,p_3)$ of the T-spline functions, and the grading parameter $m$. For the sake of legibility, we assume that $p_1,p_2,p_3$ are odd and greater than or equal to 3. (For comments on even polynomial degrees, see Section \[sec: conclusions\].) The *level* of an element $K$ is defined by $$\ell(K){\coloneqq}-\log_m|K|,$$ where $m$ is the manually chosen grading parameter, i.e., the number of children in a single element's refinement, and $|K|$ denotes the volume of $K$. This implies that all elements of the initial mesh have level zero and that the refinement of an element $K$ yields $m$ elements of level $\ell(K)+1$. 
\[df: distance\] Given $x\in{\overline{\rule{0pt}{8pt}\smash\Omega}}$ and an element $K$, we define their distance as the componentwise absolute value of the difference between $x$ and the midpoint of $K$, $$\begin{aligned} \operatorname{Dist}(K,x)&{\coloneqq}\operatorname{abs}\bigl(\operatorname{mid}(K)-x\bigr)\ \in{\mathbb R}^3,\\ \text{with}\quad\operatorname{abs}(y) &{\coloneqq}\bigl(\lvert y_1\rvert, \lvert y_2\rvert, \lvert y_3\rvert\bigr).\end{aligned}$$ For two elements $K_1,K_2$, we define the shorthand notation $$\operatorname{Dist}(K_1,K_2){\coloneqq}\operatorname{abs}\bigl(\operatorname{mid}(K_1)-\operatorname{mid}(K_2)\bigr).$$ \[df: magic patch\] Given an element $K$, a grading parameter $m\ge2$ and the polynomial degree $\mathbf p=(p_1,p_2,p_3)$, we define the open environment $$\begin{aligned} {U^{\mathbf p,m}}(K)&{\coloneqq}\{x\in\Omega\mid\operatorname{Dist}(K,x)<{\operatorname{\mathbf D}^{\mathbf p,m}}(\ell(K))\}, \shortintertext{where} {\operatorname{\mathbf D}^{\mathbf p,m}}(k)&{\coloneqq}\begin{cases}m^{-k/3}\,\bigl(p_1+\tfrac32,p_2+\tfrac32,p_3+\tfrac32\bigr)&\text{if }k=0\bmod3, \\[.7ex] m^{-(k-1)/3}\,\bigl(\tfrac{p_1+3/2}m,p_2+\tfrac32,p_3+\tfrac32\bigr)&\text{if }k=1\bmod3, \\[.7ex] m^{-(k-2)/3}\,\bigl(\tfrac{p_1+3/2}m,\tfrac{p_2+3/2}m,p_3+\tfrac32\bigr)&\text{if }k=2\bmod3.\end{cases} \intertext{The $(\mathbf p,m)$-patch of $K$ is defined as the set of all elements that intersect with environment of $K$,} {{\mathcal G}^{\mathbf p,m}(K)} &{\coloneqq}\{K'\in{\mathcal G}\mid K'\cap{U^{\mathbf p,m}}(K)\neq\emptyset\}.\end{aligned}$$ Note as a technical detail that this definition does *not* require that $K\in{\mathcal G}$. See also Figure \[fig: magic patch examples\] for examples. By definition, the size of the $(\mathbf p,m)$-patch of an element $K$ scales linearly with the size of $K$ and with the polynomial degree $\mathbf p$. 
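The case distinction in ${\operatorname{\mathbf D}^{\mathbf p,m}}(k)$ is easy to mis-implement, so a small sketch may help. The code below (our own illustration, not part of the paper) evaluates ${\operatorname{\mathbf D}^{\mathbf p,m}}(k)$ and the componentwise distance of Definition \[df: distance\]:

```python
def patch_radius(k, p, m):
    """Evaluate the vector D^{p,m}(k) from the patch definition (sketch).

    k : level of the element, p = (p1, p2, p3) : odd polynomial degrees,
    m : grading parameter (m >= 2).  The common factor m^(-floor(k/3))
    multiplies a base vector whose first (k mod 3) components are
    additionally divided by m, reproducing the three cases above.
    """
    base = [q + 1.5 for q in p]
    r, s = divmod(k, 3)
    for i in range(s):
        base[i] /= m  # directions already refined in the current x-y-z cycle
    scale = float(m) ** (-r)
    return tuple(scale * x for x in base)

def dist(mid1, mid2):
    """Componentwise distance between two element midpoints (Definition above)."""
    return tuple(abs(a - b) for a, b in zip(mid1, mid2))
```

An element $K'$ then belongs to the $(\mathbf p,m)$-patch of $K$ when some point of $K'$ lies componentwise closer to $\operatorname{mid}(K)$ than `patch_radius(level(K), p, m)`.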
Since ${\operatorname{\mathbf D}^{\mathbf p,m}}(k)$ is decreasing in $m$, choosing $m$ large will cause small $(\mathbf p,m)$-patches and hence more localized refinement. ![Examples for the $(\mathbf p,m)$-patch of an element $K$, for $\mathbf p=(3,3,3)$, $m=3$ and $\ell(K)=2,3,4$.[]{data-label="fig: magic patch examples"}](refex_12 "fig:"){width=".3\textwidth"}![Examples for the $(\mathbf p,m)$-patch of an element $K$, for $\mathbf p=(3,3,3)$, $m=3$ and $\ell(K)=2,3,4$.[]{data-label="fig: magic patch examples"}](refex_20 "fig:"){width=".3\textwidth"}![Examples for the $(\mathbf p,m)$-patch of an element $K$, for $\mathbf p=(3,3,3)$, $m=3$ and $\ell(K)=2,3,4$.[]{data-label="fig: magic patch examples"}](refex_30 "fig:"){width=".3\textwidth"} In the subsequent definitions, we will give a detailed description of the elementary subdivision steps and then present the new refinement algorithm. Given an arbitrary element $K=[x,x+\tilde x]\times[y,y+\tilde y]\times[z,z+\tilde z]$, where $x, y,z, \tilde x,\tilde y,\tilde z\in\mathbb{R}$ and $\tilde x,\tilde y,\tilde z>0$, we define the operators $$\begin{aligned} \operatorname{subdiv}_\mathrm x(K) &{\coloneqq}\bigl\{\,[x+\tfrac {j-1}m\tilde x,x+\tfrac jm\tilde x]\times[y,y+\tilde y]\times[z,z+\tilde z]\mid j\in\{1,\dots,m\}\bigr\},\\ \enspace\operatorname{subdiv}_\mathrm y(K) &{\coloneqq}\bigl\{\,[x,x+\tilde x]\times[y+\tfrac {j-1}m\tilde y,y+\tfrac jm\tilde y]\times[z,z+\tilde z]\mid j\in\{1,\dots,m\}\bigr\}, \\\text{and}\enspace \enspace\operatorname{subdiv}_\mathrm z(K) &{\coloneqq}\bigl\{\,[x,x+\tilde x]\times[y,y+\tilde y]\times[z+\tfrac {j-1}m\tilde z,z+\tfrac jm\tilde z]\mid j\in\{1,\dots,m\}\bigr\}.\end{aligned}$$ These operators will be used for $x$-, $y$-, and $z$-orthogonal subdivisions in the refinement procedure. Their output is illustrated in Figure \[fig: elemref\]. 
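The subdivision operators, together with the level-dependent choice of splitting direction from Definition \[df: subdivision\] below, can be sketched as follows (hypothetical helper names, same box representation as before):

```python
# Illustrative sketch only: elements as ((x0, x1), (y0, y1), (z0, z1)).

def subdiv_axis(K, axis, m):
    """Split K into m equal children orthogonally to axis (0 = x, 1 = y, 2 = z)."""
    lo, hi = K[axis]
    w = (hi - lo) / m
    out = []
    for j in range(m):
        child = list(K)
        child[axis] = (lo + j * w, lo + (j + 1) * w)
        out.append(tuple(child))
    return out

def children(K, m, lK):
    """Level-dependent subdivision: x-, y-, z-orthogonal for lK = 0, 1, 2 mod 3."""
    return subdiv_axis(K, lK % 3, m)

def subdiv(G, K, m, lK):
    """Replace K in the mesh G by its m children."""
    return [Kp for Kp in G if Kp != K] + children(K, m, lK)
```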
\[df: subdivision\]Given a mesh ${\mathcal G}$ and an element $K\in{\mathcal G}$, we denote by $\operatorname{subdiv}({\mathcal G},K)$ the mesh that results from a level-dependent subdivision of $K$, $$\begin{aligned} \operatorname{subdiv}({\mathcal G},K)&{\coloneqq}{\mathcal G}\setminus\{K\}\cup\operatorname{child}(K),\\ \text{with}\enspace\operatorname{child}(K)&{\coloneqq}\begin{cases}\operatorname{subdiv}_\mathrm x(K)&\text{if }\ell(K)=0\bmod3,\\\operatorname{subdiv}_\mathrm y(K)&\text{if }\ell(K)=1\bmod3,\\\operatorname{subdiv}_\mathrm z(K)&\text{if }\ell(K)=2\bmod3.\end{cases}\end{aligned}$$ ![Elementary subdivision routines for $m=3$: $x$-orthogonal subdivision of an element with level 0 (left), $y$-orthogonal subdivision of an element with level 1 (middle), and $z$-orthogonal subdivision of an element with level 2 (right).[]{data-label="fig: elemref"}](elemref_1 "fig:")![Elementary subdivision routines for $m=3$: $x$-orthogonal subdivision of an element with level 0 (left), $y$-orthogonal subdivision of an element with level 1 (middle), and $z$-orthogonal subdivision of an element with level 2 (right).[]{data-label="fig: elemref"}](elemref_2 "fig:") ![Elementary subdivision routines for $m=3$: $x$-orthogonal subdivision of an element with level 0 (left), $y$-orthogonal subdivision of an element with level 1 (middle), and $z$-orthogonal subdivision of an element with level 2 (right).[]{data-label="fig: elemref"}](elemref_3 "fig:") We introduce the shorthand notation $\operatorname{subdiv}({\mathcal G},{\mathcal M})$ for the subdivision of several elements ${\mathcal M}=\{K_1,\dots,K_J\}\subseteq{\mathcal G}$, defined by successive subdivisions in an arbitrary order, $$\operatorname{subdiv}({\mathcal G},{\mathcal M}){\coloneqq}\operatorname{subdiv}(\operatorname{subdiv}(\dots\operatorname{subdiv}({\mathcal G},K_1),\dots),K_J).$$ We will now define the new refinement algorithm through the subdivision of a superset ${\operatorname{clos}^{\mathbf p,m}}({\mathcal 
G},{\mathcal M})$ of the marked elements ${\mathcal M}$. In the remaining part of this section, we characterize the class of meshes generated by this refinement algorithm. \[alg: closure\] Given a mesh ${\mathcal G}$ and a set of marked elements ${\mathcal M}\subseteq{\mathcal G}$ to be refined, the *closure* ${\operatorname{clos}^{\mathbf p,m}}({\mathcal G},{\mathcal M})$ of ${\mathcal M}$ is computed as follows. ${\overset{\mspace{9mu}\textstyle\sim}{\smash{{\mathcal M}}\rule{0pt}{.8ex}}}{\coloneqq}{\mathcal M}$ ${\overset{\mspace{9mu}\textstyle\sim}{\smash{{\mathcal M}}\rule{0pt}{.8ex}}}{\coloneqq}{\overset{\mspace{9mu}\textstyle\sim}{\smash{{\mathcal M}}\rule{0pt}{.8ex}}}\cup\bigl\{K'\in{{\mathcal G}^{\mathbf p,m}(K)}\mid \ell(K')<\ell(K)\bigr\}$ ${\operatorname{clos}^{\mathbf p,m}}({\mathcal G},{\mathcal M})={\overset{\mspace{9mu}\textstyle\sim}{\smash{{\mathcal M}}\rule{0pt}{.8ex}}}$ \[alg: refinement\] Given a mesh ${\mathcal G}$ and a set of marked elements ${\mathcal M}\subseteq{\mathcal G}$ to be refined, ${\operatorname{ref}^{\mathbf p,m}}({\mathcal G},{\mathcal M})$ is defined by $${\operatorname{ref}^{\mathbf p,m}}({\mathcal G},{\mathcal M}){\coloneqq}\operatorname{subdiv}({\mathcal G},{\operatorname{clos}^{\mathbf p,m}}({\mathcal G},{\mathcal M})).$$ An example of this algorithm is given in Figure \[fig: refinement algorithm\]. ![Example for Algorithm \[alg: refinement\], with $\mathbf p=(3,3,3)$, $m=3$ and ${\mathcal M}=\{K\}$ with $\ell(K)=2$. In the first iteration of the **for**-loop, all coarser (level 1) elements in the $(\mathbf p,m)$-patch of $K$ are marked as well. In the second iteration, all coarser (level 0) “neighbours” of those elements are also marked. Since there are no elements that are coarser than level 0, the third iteration does not change anything. Hence the **for**-loop ends, and all marked elements are subdivided in the directions that correspond to their levels.
[]{data-label="fig: refinement algorithm"}](refex_11 "fig:"){width=".25\textwidth"}![[]{data-label="fig: refinement algorithm"}](refex_13 "fig:"){width=".25\textwidth"}![[]{data-label="fig: refinement algorithm"}](refex_15 "fig:"){width=".25\textwidth"}\ ![[]{data-label="fig: refinement algorithm"}](refex_15 "fig:"){width=".25\textwidth"}![[]{data-label="fig: refinement algorithm"}](refex_18 "fig:"){width=".25\textwidth"} [.3]{} ![Refinement examples for $\mathbf p=(3,3,3)$ and different choices of $m$. In all cases, the initial mesh consists of $4\times5\times8$ cubes of size $1\times1\times1$, and is refined by marking the lower left front corner element repeatedly until it is of the size $\tfrac1{16}\times\tfrac1{16}\times\tfrac1{16}$. []{data-label="fig: refex"}](wuerfel1 "fig:"){width="\textwidth"} [.3]{} ![[]{data-label="fig: refex"}](wuerfel2 "fig:"){width="\textwidth"} [.3]{} ![[]{data-label="fig: refex"}](wuerfel3 "fig:"){width="\textwidth"} Consider an initial mesh that consists of $4\times5\times8$ cubes of size $1\times1\times1$.
We refine the mesh by marking the lower left front corner element repeatedly until it is of the size $\tfrac1{16}\times\tfrac1{16}\times\tfrac1{16}$. The resulting meshes for different choices of $m$ are illustrated in Figure \[fig: refex\], and the results are listed below.

  Figure               $m$   \#refinement steps   \#elements
  -------------------- ----- -------------------- ------------
  \[sfig: refex 2\]    2     12                   10728
  \[sfig: refex 4\]    4     6                    3175
  \[sfig: refex 16\]   16    3                    1030

Admissible meshes {#sec: adm meshes} ================= In the subsequent definitions, we introduce a class of admissible meshes. We will then prove that this class coincides with the meshes generated by Algorithm \[alg: refinement\]. \[df: adm. subdivision\]Given a mesh ${\mathcal G}$ and an element $K\in{\mathcal G}$, the subdivision of $K$ is called *$(\mathbf p,m)$-admissible* if all $K'\in{{\mathcal G}^{\mathbf p,m}(K)}$ satisfy $\ell(K')\ge\ell(K)$. In the case of several elements ${\mathcal M}=\{K_1,\dots,K_J\}\subseteq{\mathcal G}$, the subdivision $\operatorname{subdiv}({\mathcal G},{\mathcal M})$ is $(\mathbf p,m)$-admissible if there is an ordering $(\sigma(1),\dots,\sigma(J))$ (that is, a permutation $\sigma$ of $\{1,\dots,J\}$) such that $$\operatorname{subdiv}({\mathcal G},{\mathcal M})=\operatorname{subdiv}(\operatorname{subdiv}(\dots\operatorname{subdiv}({\mathcal G},K_{\sigma(1)}),\dots),K_{\sigma(J)})$$ is a concatenation of $(\mathbf p,m)$-admissible subdivisions. \[df: admissible mesh\] A refinement ${\mathcal G}$ of ${\mathcal G}_0$ is *$(\mathbf p,m)$-admissible* if there is a sequence of meshes ${\mathcal G}_1,\dots,{\mathcal G}_J={\mathcal G}$ and markings ${\mathcal M}_j\subseteq{\mathcal G}_j$ for $j=0,\dots,J-1$, such that ${\mathcal G}_{j+1}=\operatorname{subdiv}({\mathcal G}_j,{\mathcal M}_j)$ is a $(\mathbf p,m)$-admissible subdivision for all $j=0,\dots,J-1$.
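The admissibility test of Definition \[df: adm. subdivision\] and the closure loop of Algorithm \[alg: closure\] can be sketched together in Python; the helpers repeat our hypothetical box representation so that the fragment is self-contained, and are not the authors' implementation.

```python
# Illustrative sketch only: elements as ((x0, x1), (y0, y1), (z0, z1)).
import math

def mid(K):
    return tuple((lo + hi) / 2 for lo, hi in K)

def level(K, m):
    vol = math.prod(hi - lo for lo, hi in K)
    return round(-math.log(vol) / math.log(m))

def D(p, m, k):
    q, r = divmod(k, 3)
    d = [(pi + 1.5) / m**q for pi in p]
    for i in range(r):
        d[i] /= m
    return tuple(d)

def patch(G, K, p, m):
    """All elements of G meeting the open environment U(K)."""
    c, d = mid(K), D(p, m, level(K, m))
    return [Kp for Kp in G
            if all(hi > ci - di and lo < ci + di
                   for (lo, hi), ci, di in zip(Kp, c, d))]

def is_admissible_subdivision(G, K, p, m):
    """Subdividing K is admissible iff no patch element is strictly coarser."""
    return all(level(Kp, m) >= level(K, m) for Kp in patch(G, K, p, m))

def closure(G, M, p, m):
    """Fixed point of Algorithm [alg: closure]: keep adding coarser patch elements."""
    Mt = set(M)
    changed = True
    while changed:
        changed = False
        for K in list(Mt):
            for Kp in patch(G, K, p, m):
                if level(Kp, m) < level(K, m) and Kp not in Mt:
                    Mt.add(Kp)
                    changed = True
    return Mt
```

In a uniform mesh the closure adds nothing; a fine element next to a coarser patch neighbour fails the admissibility test, and the closure marks that neighbour as well.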
The set of all $(\mathbf p,m)$-admissible meshes, that is, the initial mesh and its $(\mathbf p,m)$-admissible refinements, is denoted by ${\mathbb A^{\mathbf p,m}}$. For the sake of legibility, we write ‘admissible’ instead of ‘$(\mathbf p,m)$-admissible’ throughout the rest of this paper. \[thm: ref works\] Any admissible mesh ${\mathcal G}$ and any set of marked elements ${\mathcal M}\subseteq{\mathcal G}$ satisfy $${\operatorname{ref}^{\mathbf p,m}}({\mathcal G},{\mathcal M})\in{\mathbb A^{\mathbf p,m}}.$$ The proof of Theorem \[thm: ref works\] given at the end of this section relies on the subsequent results. \[lma: magic patches are nested\] Given an admissible mesh ${\mathcal G}$ and two nested elements $K\subseteq\hat K$ with $K,\hat K\in\operatorname*{{\textstyle\bigcup}}{\mathbb A^{\mathbf p,m}}$, the corresponding $(\mathbf p,m)$-patches are nested in the sense ${{\mathcal G}^{\mathbf p,m}(K)}\subseteq{{\mathcal G}^{\mathbf p,m}(\hat K)}$. The proof is given in Appendix \[apx: magic patches are nested\]. \[lma: levels change slowly\] Given $K\in{\mathcal G}\in{\mathbb A^{\mathbf p,m}}$, any $K'\in{{\mathcal G}^{\mathbf p,m}(K)}$ satisfies $\ell(K')\ge\ell(K)-1$. The proof is given in Appendix \[apx: levels change slowly\]. Given the mesh ${\mathcal G}\in{\mathbb A^{\mathbf p,m}}$ and marked elements ${\mathcal M}\subseteq{\mathcal G}$ to be refined, we have to show that there is a sequence of meshes that are subsequent admissible refinements, with ${\mathcal G}$ being the first and ${\operatorname{ref}^{\mathbf p,m}}({\mathcal G},{\mathcal M})$ the last mesh in that sequence.
Set ${\overset{\mspace{9mu}\textstyle\sim}{\smash{{\mathcal M}}\rule{0pt}{.8ex}}}{\coloneqq}{\operatorname{clos}^{\mathbf p,m}}({\mathcal G},{\mathcal M})$ and $$\begin{aligned} {2} \notag\overline L&{\coloneqq}\max\ell({\overset{\mspace{9mu}\textstyle\sim}{\smash{{\mathcal M}}\rule{0pt}{.8ex}}}),\quad\underline L{\coloneqq}\min\ell({\overset{\mspace{9mu}\textstyle\sim}{\smash{{\mathcal M}}\rule{0pt}{.8ex}}})&&\\ \notag{\mathcal M}_j&{\coloneqq}\bigl\{K\in{\overset{\mspace{9mu}\textstyle\sim}{\smash{{\mathcal M}}\rule{0pt}{.8ex}}}\mid\ell(K)=j\bigr\}&&\text{for}\enspace j=\underline L,\dots,\overline L\\ \label{eq: ref works 1}{\mathcal G}_{\underline L}&{\coloneqq}{\mathcal G},\quad {\mathcal G}_{j+1}{\coloneqq}\operatorname{subdiv}({\mathcal G}_j,{\mathcal M}_j)&\enspace&\text{for}\enspace j=\underline L,\dots,\overline L.\end{aligned}$$ It follows that ${\operatorname{ref}^{\mathbf p,m}}({\mathcal G},{\mathcal M})={\mathcal G}_{\overline L+1}$. We will show by induction over $j$ that all subdivisions in \[eq: ref works 1\] are admissible. For the first step $j=\underline L$, we know $\{K'\in{\overset{\mspace{9mu}\textstyle\sim}{\smash{{\mathcal M}}\rule{0pt}{.8ex}}}\mid\ell(K')<\underline L\}=\emptyset$ and, by construction of ${\overset{\mspace{9mu}\textstyle\sim}{\smash{{\mathcal M}}\rule{0pt}{.8ex}}}$, that $\{K'\in{{\mathcal G}^{\mathbf p,m}(K)}\mid\ell(K')<\ell(K)\}\subseteq{\overset{\mspace{9mu}\textstyle\sim}{\smash{{\mathcal M}}\rule{0pt}{.8ex}}}$ holds for each $K\in{\mathcal M}_{\underline L}$. Together with $\ell(K)=\underline L$, it follows for any $K\in{\mathcal M}_{\underline L}$ that there is no $K'\in{{\mathcal G}^{\mathbf p,m}(K)}$ with $\ell(K')<\ell(K)$.
That is, the subdivisions of all $K\in{\mathcal M}_{\underline L}$ are admissible independently of their order, and hence $\operatorname{subdiv}({\mathcal G}_{\underline L},{\mathcal M}_{\underline L})$ is admissible. Consider an arbitrary step $j\in\{\underline L,\dots,\overline L\}$ and assume that ${\mathcal G}_{\underline L},\dots,{\mathcal G}_j$ are admissible meshes. Assume for contradiction that there is $K\in{\mathcal M}_j$ whose subdivision is not admissible, i.e., there exists $K'\in\smash{{{\mathcal G}_j^{\mathbf p,m}(K)}}$ with $\ell(K')<\ell(K)$ and consequently $K'\notin{\overset{\mspace{9mu}\textstyle\sim}{\smash{{\mathcal M}}\rule{0pt}{.8ex}}}$, because $K'$ has not been refined yet. It follows from the closure Algorithm \[alg: closure\] that $K'\notin{\mathcal G}$. Hence, there is $\hat K\in{\mathcal G}$ such that $K'\subset\hat K$. We have $\ell(\hat K)<\ell(K')<\ell(K)$, which implies $\ell(\hat K)<\ell(K)-1$. Note that $K\in{\mathcal G}$ because ${\mathcal M}_j\subseteq{\overset{\mspace{9mu}\textstyle\sim}{\smash{{\mathcal M}}\rule{0pt}{.8ex}}}\subseteq{\mathcal G}$. From $K'\in{{\mathcal G}_j^{\mathbf p,m}(K)}$, it follows by definition that $K'\cap{U^{\mathbf p,m}}(K)\neq\emptyset$, and $K'\subset\hat K$ yields $\hat K\cap{U^{\mathbf p,m}}(K)\neq\emptyset$ and hence $\hat K\in{{\mathcal G}^{\mathbf p,m}(K)}$. Together with $\ell(\hat K)<\ell(K)-1$, Lemma \[lma: levels change slowly\] implies that ${\mathcal G}$ is not admissible, which contradicts the assumption. T-spline definition {#sec: Tspline def} =================== In this section, we define trivariate T-spline functions corresponding to a given admissible mesh. We roughly follow the definitions from [@Morgenstern:Peterseim:2015].
For each element $K=[x,x+\tilde x]\times[y,y+\tilde y]\times[z,z+\tilde z]$, the corresponding set of vertices is denoted by $${\mathcal N}(K){\coloneqq}\{x,x+\tilde x\}\times\{y,y+\tilde y\}\times\{z,z+\tilde z\}.$$ We refer to the elements of ${\mathcal N}{\coloneqq}\bigcup_{K\in{\mathcal G}}{\mathcal N}(K)$ as *nodes*. We define the *active region* $${\mathcal{AR}}{\coloneqq}\Bigl[{\mathchoice{\bigl\lceil\tfrac{p_1}{2}\bigr\rceil}{\lceil\frac{p_1}{2}\rceil}{\lceil\frac{p_1}{2}\rceil}{\lceil\frac{p_1}{2}\rceil}},\tilde X-{\mathchoice{\bigl\lceil\tfrac{p_1}{2}\bigr\rceil}{\lceil\frac{p_1}{2}\rceil}{\lceil\frac{p_1}{2}\rceil}{\lceil\frac{p_1}{2}\rceil}}\Bigr]\times\Bigl[{\mathchoice{\bigl\lceil\tfrac{p_2}{2}\bigr\rceil}{\lceil\frac{p_2}{2}\rceil}{\lceil\frac{p_2}{2}\rceil}{\lceil\frac{p_2}{2}\rceil}},\tilde Y-{\mathchoice{\bigl\lceil\tfrac{p_2}{2}\bigr\rceil}{\lceil\frac{p_2}{2}\rceil}{\lceil\frac{p_2}{2}\rceil}{\lceil\frac{p_2}{2}\rceil}}\Bigr]\times\Bigl[{\mathchoice{\bigl\lceil\tfrac{p_3}{2}\bigr\rceil}{\lceil\frac{p_3}{2}\rceil}{\lceil\frac{p_3}{2}\rceil}{\lceil\frac{p_3}{2}\rceil}},\tilde Z-{\mathchoice{\bigl\lceil\tfrac{p_3}{2}\bigr\rceil}{\lceil\frac{p_3}{2}\rceil}{\lceil\frac{p_3}{2}\rceil}{\lceil\frac{p_3}{2}\rceil}}\Bigr]$$ and the set of *active nodes* ${\mathcal N}_A{\coloneqq}{\mathcal N}\cap{\mathcal{AR}}$. Given a mesh ${\mathcal G}$, denote the union of all closed $x$-orthogonal element faces by ${\Xi_\mathrm x}{\coloneqq}\operatorname*{{\textstyle\bigcup}}_{K\in{\mathcal G}}{\Xi_\mathrm x}(K)$, with $$\begin{aligned} {\Xi_\mathrm x}(K) &{\coloneqq}\{x,x+\tilde x\}\times[y,y+\tilde y]\times [z,z+\tilde z]\\ \text{for any } K &= [x,x+\tilde x]\times[y,y+\tilde y]\times [z,z+\tilde z]\in{\mathcal G}.\end{aligned}$$ We call ${\Xi_\mathrm x}$ the *$x$-orthogonal skeleton*. Analogously, we denote the $y$-orthogonal skeleton by ${\Xi_\mathrm y}$, and the $z$-orthogonal skeleton by ${\Xi_\mathrm z}$. 
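A short sketch of the vertex set ${\mathcal N}(K)$, the active region, and the $x$-orthogonal skeleton membership test; the helper names are hypothetical, and `in_x_skeleton` simply checks the defining condition pointwise.

```python
# Illustrative sketch only: elements as ((x0, x1), (y0, y1), (z0, z1)).
from itertools import product
import math

def vertices(K):
    """N(K): the eight corners of the box K."""
    return set(product(*K))

def active_region(p, X, Y, Z):
    """AR = [ceil(p1/2), X - ceil(p1/2)] x ... as a triple of intervals."""
    h = [math.ceil(pi / 2) for pi in p]
    return ((h[0], X - h[0]), (h[1], Y - h[1]), (h[2], Z - h[2]))

def in_x_skeleton(G, x, y, z):
    """(x, y, z) lies in Xi_x iff it lies on an x-orthogonal face of some element."""
    return any(x in K[0]                     # x equals a face coordinate of K
               and K[1][0] <= y <= K[1][1]
               and K[2][0] <= z <= K[2][1]
               for K in G)
```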
![$x$-orthogonal, $y$-orthogonal and $z$-orthogonal skeleton of the final mesh from Figure \[fig: refinement algorithm\].[]{data-label="fig: Xi_xyz"}](Xi_1 "fig:"){width=".275\textwidth"} ![[]{data-label="fig: Xi_xyz"}](Xi_2 "fig:"){width=".275\textwidth"} ![[]{data-label="fig: Xi_xyz"}](Xi_3 "fig:"){width=".275\textwidth"} For any $x,y,z\in{\mathbb R}$, we define $$\begin{aligned} {2} \operatorname{\pmb{\mathrm X}}(y,z) &{\coloneqq}\bigl\{\tilde x\in[0,&\tilde X]&\mid(\tilde x,y,z)\in{\Xi_\mathrm x}\bigr\},\\ \operatorname{\pmb{\mathrm Y}}(x,z) &{\coloneqq}\bigl\{\tilde y\in[0,&\tilde Y]&\mid(x,\tilde y,z)\in{\Xi_\mathrm y}\bigr\},\\ \operatorname{\pmb{\mathrm Z}}(x,y) &{\coloneqq}\bigl\{\tilde z\in[0,&\tilde Z]&\mid(x,y,\tilde z)\in{\Xi_\mathrm z}\bigr\}.\end{aligned}$$ Note that in an admissible mesh, the entries $\bigl\{ 0,\dots,{\mathchoice{\bigl\lceil\tfrac{p_1}{2}\bigr\rceil}{\lceil\frac{p_1}{2}\rceil}{\lceil\frac{p_1}{2}\rceil}{\lceil\frac{p_1}{2}\rceil}}-1,\enspace \tilde X-{\mathchoice{\bigl\lceil\tfrac{p_1}{2}\bigr\rceil}{\lceil\frac{p_1}{2}\rceil}{\lceil\frac{p_1}{2}\rceil}{\lceil\frac{p_1}{2}\rceil}}+1,\dots,\tilde X\bigr\}$ are always included in $\operatorname{\pmb{\mathrm X}}(y,z)$ (and analogously for $\operatorname{\pmb{\mathrm Y}}(x,z)$ and $\operatorname{\pmb{\mathrm Z}}(x,y)$). To each active node $v=(v_1,v_2,v_3)\in{\mathcal N}_A$, we associate a local index vector $\operatorname{\pmb{\mathrm x}}(v)\in{\mathbb R}^{p_1+2}$, which is obtained by taking the unique $p_1+2$ consecutive elements in $\operatorname{\pmb{\mathrm X}}(v_2,v_3)$ having $v_1$ as their $\tfrac{p_1+3}2$-th (that is, the middle) entry.
We analogously define $\operatorname{\pmb{\mathrm y}}(v)\in{\mathbb R}^{p_2+2}$ and $\operatorname{\pmb{\mathrm z}}(v)\in{\mathbb R}^{p_3+2}$. We associate to each active node $v\in{\mathcal N}_A$ a trivariate B-spline function, referred to as *T-spline blending function*, defined as the product of the B-spline functions on the corresponding local index vectors, $$B_v(x,y,z)\coloneqq N_{\operatorname{\pmb{\mathrm x}}(v)}(x) \cdot N_{\operatorname{\pmb{\mathrm y}}(v)}(y)\cdot N_{\operatorname{\pmb{\mathrm z}}(v)}(z).$$ Analysis-Suitability {#sec: AS} ==================== In this section, we give an abstract definition of Analysis-Suitability. Instead of using T-junction extensions as in the 2D case, we define *perturbed regions* through the intersection of particular T-spline supports. Analysis-Suitability is then defined as the absence of intersections between these perturbed regions. This idea is comparable to the 2D case, where Analysis-Suitability is defined as the absence of intersections between T-junction extensions. Subsequent to these definitions, we prove that all previously defined admissible meshes are analysis-suitable. \[df: perturbed regions\]For $q,r,s\in{\mathbb R}$ define the slices $$\begin{aligned} {{\mathcal S}_\mathrm x}(q) &{\coloneqq}\left\{(\tilde x,\tilde y,\tilde z)\in{\mathcal{AR}}\mid \tilde x=q\right\},\\ {{\mathcal S}_\mathrm y}(r) &{\coloneqq}\left\{(\tilde x,\tilde y,\tilde z)\in{\mathcal{AR}}\mid \tilde y=r\right\},\\ {{\mathcal S}_\mathrm z}(s) &{\coloneqq}\left\{(\tilde x,\tilde y,\tilde z)\in{\mathcal{AR}}\mid \tilde z=s\right\}. \intertext{Moreover, we denote by} {{\mathcal N}_\mathrm x}(q) &{\coloneqq}\left\{(v_1,v_2,v_3)\in{\mathcal N}_A\mid(q,v_2,v_3)\in{\Xi_\mathrm x}\right\} \intertext{the set of all nodes whose projection onto the slice ${{\mathcal S}_\mathrm x}(q)$ lies in some element's face. 
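For illustration, the extraction of the local index vector $\operatorname{\pmb{\mathrm x}}(v)$ from a sorted knot set can be sketched for one coordinate direction; `local_index_vector` is a hypothetical helper that assumes $v$ occurs in the knot set with at least $(p+1)/2$ entries on either side.

```python
def local_index_vector(knots, v, p):
    """The unique p + 2 consecutive entries of the sorted knot set that have
    v as their middle, i.e. (p+3)/2-th, entry (p odd)."""
    ks = sorted(knots)
    i = ks.index(v)
    half = (p + 1) // 2        # entries on each side of v; total length p + 2
    return ks[i - half : i + half + 1]
```

For odd $p$ the vector has odd length $p+2$, so the middle entry is well-defined.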
Define analogously} {{\mathcal N}_\mathrm y}(r) &{\coloneqq}\left\{(v_1,v_2,v_3)\in{\mathcal N}_A\mid(v_1,r,v_3)\in{\Xi_\mathrm y}\right\},\\ {{\mathcal N}_\mathrm z}(s) &{\coloneqq}\left\{(v_1,v_2,v_3)\in{\mathcal N}_A\mid(v_1,v_2,s)\in{\Xi_\mathrm z}\right\}.\\ \intertext{For any $q,r,s\in{\mathbb R}$ we define \emph{slice perturbations}} {{\mathcal R}_\mathrm x}(q) &{\coloneqq}{{\mathcal S}_\mathrm x}(q)\cap{{\makebox[2em]{$\displaystyle \bigcup_{v\in{{\mathcal N}_\mathrm x}(q)}$}}}\operatorname{supp}B_v\ \cap {{\makebox[2em]{$\displaystyle \bigcup_{v\in{\mathcal N}_A\setminus{{\mathcal N}_\mathrm x}(q)}$}}}\operatorname{supp}B_v,\\ {{\mathcal R}_\mathrm y}(r) &{\coloneqq}{{\mathcal S}_\mathrm y}(r)\cap{{\makebox[2em]{$\displaystyle \bigcup_{v\in{{\mathcal N}_\mathrm y}(r)}$}}}\operatorname{supp}B_v\ \cap {{\makebox[2em]{$\displaystyle \bigcup_{v\in{\mathcal N}_A\setminus{{\mathcal N}_\mathrm y}(r)}$}}}\operatorname{supp}B_v,\\ {{\mathcal R}_\mathrm z}(s) &{\coloneqq}{{\mathcal S}_\mathrm z}(s)\cap{{\makebox[2em]{$\displaystyle \bigcup_{v\in{{\mathcal N}_\mathrm z}(s)}$}}}\operatorname{supp}B_v\ \cap {{\makebox[2em]{$\displaystyle \bigcup_{v\in{\mathcal N}_A\setminus{{\mathcal N}_\mathrm z}(s)}$}}}\operatorname{supp}B_v.\end{aligned}$$ The *perturbed regions* ${{\mathcal R}_\mathrm x}$, ${{\mathcal R}_\mathrm y}$, ${{\mathcal R}_\mathrm z}$ are defined by $${{\mathcal R}_\mathrm x}{\coloneqq}\bigcup_{q\in{\mathbb R}}{{\mathcal R}_\mathrm x}(q),\quad{{\mathcal R}_\mathrm y}{\coloneqq}\bigcup_{r\in{\mathbb R}}{{\mathcal R}_\mathrm y}(r),\quad{{\mathcal R}_\mathrm z}{\coloneqq}\bigcup_{s\in{\mathbb R}}{{\mathcal R}_\mathrm z}(s).$$ In a uniform mesh, the perturbed regions are empty. In a non-uniform mesh, the perturbed regions are a superset of all hanging nodes and edges (that is, all kinds of 3D T-junctions). See Figure \[fig: AS example 1\] for a 2D visualization of these definitions. 
\[df: AS\]A given mesh ${\mathcal G}$ is *analysis-suitable* if the above-defined perturbed regions do not intersect, i.e., if $${{\mathcal R}_\mathrm x}\cap{{\mathcal R}_\mathrm y}={{\mathcal R}_\mathrm y}\cap {{\mathcal R}_\mathrm z}={{\mathcal R}_\mathrm z}\cap {{\mathcal R}_\mathrm x}= \emptyset.$$ The set of analysis-suitable meshes is denoted by ${\mathbb{A\negmedspace S}^{\mathbf p}}$. When applied in the two-dimensional case, the above definitions may yield perturbed regions that are larger than the T-junction extensions from [@ZSHS:2012; @BBCS:2012] (see Fig. \[fig: AS example 2\]). However, this occurs only in meshes that are not analysis-suitable, and the 2D version of Definition \[df: AS\] is, regarding refinements of tensor-product meshes, equivalent to the classical definition of analysis-suitability.

[Figures \[fig: AS example 1\] and \[fig: AS example 2\]: 2D illustrations of the slice ${{\mathcal S}_\mathrm x}(q)$, the node sets ${{\mathcal N}_\mathrm x}(q)$ and ${\mathcal N}_A\setminus{{\mathcal N}_\mathrm x}(q)$, the unions of the corresponding supports, and the resulting slice perturbation ${{\mathcal R}_\mathrm x}(q)$.]

\[thm: A in AS\]${\mathbb A^{\mathbf p,m}}\subseteq{\mathbb{A\negmedspace S}^{\mathbf p}}$ for all $m\ge 2$. We prove the claim by induction over admissible subdivisions. Assume $K\in{\mathcal G}\in{\mathbb A^{\mathbf p,m}}\cap{\mathbb{A\negmedspace S}^{\mathbf p}}$ and let $\hat{\mathcal G}{\coloneqq}\operatorname{subdiv}({\mathcal G},K)\in{\mathbb A^{\mathbf p,m}}$ be an admissible subdivision of ${\mathcal G}$. We have to show that $\hat{\mathcal G}\in{\mathbb{A\negmedspace S}^{\mathbf p}}$. We assume without loss of generality that $\ell(K)=0\bmod 3$. Hence subdividing $K$ adds $m-1$ faces to the mesh, which are $x$-orthogonal.
Set $K\eqqcolon[x,x+\tilde x]\times[y,y+\tilde y]\times[z,z+\tilde z]$ and ${\overset{\textstyle\sim}{\smash{\Xi}\rule{0pt}{.8ex}}}{\coloneqq}\{x+\tfrac jm\tilde x\mid j\in\{1,\dots, m-1\}\}$. Then the skeletons of $\hat{\mathcal G}$ are given by $${\hat{\rule{0pt}{8pt}\Xi}_\mathrm x}={\Xi_\mathrm x}\cup{\overset{\textstyle\sim}{\smash{\Xi}\rule{0pt}{.8ex}}}\times[y,y+\tilde y]\times[z,z+\tilde z] ,\quad {\hat{\rule{0pt}{8pt}\Xi}_\mathrm y}={\Xi_\mathrm y},\quad{\hat{\rule{0pt}{8pt}\Xi}_\mathrm z}={\Xi_\mathrm z}.$$ Let $\hat v\in\hat{\mathcal N}_A\setminus{\mathcal N}_A$ be a new active node. Using the local quasi-uniformity from Lemma \[lma: levels change slowly\], it can be verified that ${{\mathcal R}_\mathrm y}(r)\cap\operatorname{supp}B_{\hat v}=\emptyset$ for all $r\in{\mathbb R}$ with $\hat v\in\hat{{\mathcal N}_\mathrm y}(r)$. Consequently, $\hat{{\mathcal R}_\mathrm y}={{\mathcal R}_\mathrm y}$ and analogously $\hat {{\mathcal R}_\mathrm z}={{\mathcal R}_\mathrm z}$. Moreover, $\hat{{\mathcal R}_\mathrm x}(q)={{\mathcal R}_\mathrm x}(q)$ for all $q\notin{\overset{\textstyle\sim}{\smash{\Xi}\rule{0pt}{.8ex}}}$. It remains to characterize $$\hat{{\mathcal R}_\mathrm x}(\xi) = {{\mathcal S}_\mathrm x}(\xi)\cap{{\makebox[2em]{$\displaystyle \bigcup_{v\in\hat{{\mathcal N}_\mathrm x}(\xi)}$}}}\operatorname{supp}B_v\cap{{\makebox[2em]{$\displaystyle \bigcup_{v\in\hat{\mathcal N}_A\setminus\hat{{\mathcal N}_\mathrm x}(\xi)}$}}}\operatorname{supp}B_v$$ for any $\xi\in{\overset{\textstyle\sim}{\smash{\Xi}\rule{0pt}{.8ex}}}$. 
With $$\label{eq: A in AS: active nodes} \begin{aligned} \hat{{\mathcal N}_\mathrm x}(\xi)&={{\mathcal N}_\mathrm x}(\xi)\cup\hat{\mathcal N}_A\setminus{\mathcal N}_A\\\text{and}\enspace\hat{\mathcal N}_A\setminus\hat{{\mathcal N}_\mathrm x}(\xi)&={\mathcal N}_A\setminus\hat{{\mathcal N}_\mathrm x}(\xi)={\mathcal N}_A\setminus{{\mathcal N}_\mathrm x}(\xi), \end{aligned}$$ it follows $$\begin{aligned} \hat{{\mathcal R}_\mathrm x}(\xi) &= {{\mathcal S}_\mathrm x}(\xi)\cap{{\makebox[2em]{$\displaystyle \bigcup_{v\in\hat{{\mathcal N}_\mathrm x}(\xi)}$}}}\operatorname{supp}B_v\ \cap {{\makebox[2em]{$\displaystyle \bigcup_{v\in\hat{\mathcal N}_A\setminus\hat{{\mathcal N}_\mathrm x}(\xi)}$}}}\operatorname{supp}B_v,\\ &{\mathrel{{\makebox[1.4ex]{$\displaystyle \stackrel{\eqref{eq: A in AS: active nodes}}{=}$}}}} {{\mathcal S}_\mathrm x}(\xi)\cap\Bigl({{\makebox[2em]{$\displaystyle \bigcup_{v\in{{\mathcal N}_\mathrm x}(\xi)}$}}}\operatorname{supp}B_v\cup{{\makebox[2em]{$\displaystyle \bigcup_{v\in\hat{\mathcal N}_A\setminus{\mathcal N}_A}$}}}\operatorname{supp}B_v\Bigr)\cap{{\makebox[2em]{$\displaystyle \bigcup_{v\in{\mathcal N}_A\setminus{{\mathcal N}_\mathrm x}(\xi)}$}}}\operatorname{supp}B_v\\ &= {{\mathcal R}_\mathrm x}(\xi) \cup \Bigl(\underbrace{{{\mathcal S}_\mathrm x}(\xi)\ \cap{{\makebox[2em]{$\displaystyle \bigcup_{\hat v\in\hat{\mathcal N}_A\setminus{\mathcal N}_A}$}}}\operatorname{supp}B_{\hat v}}_\Sigma\ \cap{{\makebox[2em]{$\displaystyle \bigcup_{v\in{\mathcal N}_A\setminus{{\mathcal N}_\mathrm x}(\xi)}$}}}\operatorname{supp}B_v\Bigr).\end{aligned}$$ We will prove below that $\Sigma\cap\hat{{\mathcal R}_\mathrm z}=\Sigma\cap\hat{{\mathcal R}_\mathrm y}=\emptyset$. See Figures \[fig: Sx-levels\] and \[fig: Sx-perturbations\] for an example with $\ell(K)=3$ and $m=2$. Assume for contradiction that there is $s\in{\mathbb R}$ with $\hat{{\mathcal R}_\mathrm z}(s)\cap\Sigma\neq\emptyset$. 
Then there exist $v\in\hat{{\mathcal N}_\mathrm z}(s)$ and $w\in\hat{\mathcal N}_A\setminus\hat{{\mathcal N}_\mathrm z}(s)$ such that $$\label{eq: A in AS: eqn to contradict} {{\mathcal S}_\mathrm z}(s)\cap \operatorname{supp}B_v\cap\operatorname{supp}B_w\cap\Sigma\neq\emptyset.$$ Since the subdivision of $K$ is admissible, we know that all elements in ${{\mathcal G}^{\mathbf p,m}(K)}$ are at least of level $\ell(K)$. This implies that all those elements are of equal or smaller size than $K$. Denote $\operatorname{mid}(K)\eqqcolon(\sigma,\nu,\tau)$ and $\varepsilon{\coloneqq}\tfrac{m^{-\ell(K)/3}}2$. It follows that $$\label{eq: A in AS: Sigma in patch} \Sigma\subseteq\operatorname*{{\textstyle\bigcup}}{{\mathcal G}^{\mathbf p,m}(K)},$$ and with $$\hat{\mathcal N}_A\setminus{\mathcal N}_A\ \subset\ [\sigma-\varepsilon,\sigma+\varepsilon]\times[\nu-\varepsilon,\nu+\varepsilon]\times[\tau-\varepsilon,\tau+\varepsilon],$$ we get more precisely $$\label{eq: A in AS: def Sigma} \Sigma\subseteq\left\{\xi\right\}\times\bigl[\nu-\varepsilon(p_2+2),\nu+\varepsilon(p_2+2)\bigr]\times\bigl[\tau-\varepsilon(p_3+2),\tau+\varepsilon(p_3+2)\bigr].$$ The second-order patch ${{\mathcal G}^{\mathbf p,m}({{\mathcal G}^{\mathbf p,m}(K)})}{\coloneqq}\bigcup_{K'\in{{\mathcal G}^{\mathbf p,m}(K)}}{{\mathcal G}^{\mathbf p,m}(K')}$ consists of elements that may be larger in $z$-direction, but are of same or smaller size than $K$ in $x$- and $y$-direction. For $w=(w_1,w_2,w_3)$, Equation \[eq: A in AS: eqn to contradict\] implies $\operatorname{supp}B_w\cap\Sigma\neq\emptyset$, and we conclude from \[eq: A in AS: def Sigma\] that $$\label{eq: A in AS: w is near Sigma} (w_1,w_2)\ \in\ \bigl[\xi-\varepsilon(p_1+1),\xi+\varepsilon(p_1+1)\bigr]\times\bigl[\nu-\varepsilon(2p_2+3),\nu+\varepsilon(2p_2+3)\bigr].$$ We assume that there is no element in ${\mathcal G}$ with level higher than $\ell(K)+1$. 
This assumption is legitimate, since every admissible mesh can be reproduced by a sequence of level-increasing admissible subdivisions; see [@Morgenstern:Peterseim:2015 Proposition 4.3] for a detailed construction. This assumption implies that the $z$-orthogonal skeleton ${\Xi_\mathrm z}$ is a subset of the $z$-orthogonal skeleton of a uniform $(\ell(K)+1)$-leveled mesh, $${\Xi_\mathrm z}({\mathcal G})\subseteq{\Xi_\mathrm z}({\mathcal G_{u\mid \ell(K)+1}}), \label{eq: A in AS: xiz in xiz-uni}$$ and with $\min\ell({{\mathcal G}^{\mathbf p,m}(K)})=\ell(K)$, we have even equality on the patch ${{\mathcal G}^{\mathbf p,m}(K)}$, $$\label{eq: A in AS: xiz-patch is xiz-uni-patch} {\Xi_\mathrm z}\bigl({{\mathcal G}^{\mathbf p,m}(K)}\bigr)={\Xi_\mathrm z}\bigl({{\mathcal G_{u\mid \ell(K)}}^{\mathbf p,m}(K)}\bigr)={\Xi_\mathrm z}\bigl({{\mathcal G_{u\mid \ell(K)+1}}^{\mathbf p,m}(K)}\bigr),$$ using the notation ${\Xi_\mathrm z}\bigl({{\mathcal G}^{\mathbf p,m}(K)}\bigr){\coloneqq}{\Xi_\mathrm z}({\mathcal G})\cap\operatorname*{{\textstyle\bigcup}}{{\mathcal G}^{\mathbf p,m}(K)}$. Since $v=(v_1,v_2,v_3)\in\hat{{\mathcal N}_\mathrm z}(s)$, we know that $(v_1,v_2,s)\in{\Xi_\mathrm z}$, which means that there are elements in ${\mathcal G}$ that have $z$-orthogonal faces at the $z$-coordinate $s$, i.e., ${{\mathcal S}_\mathrm z}(s)\cap{\Xi_\mathrm z}({\mathcal G})\neq\emptyset$. With \eqref{eq: A in AS: xiz in xiz-uni} we get ${{\mathcal S}_\mathrm z}(s)\cap {\Xi_\mathrm z}({\mathcal G_{u\mid \ell(K)+1}}) \neq\emptyset$.
Since ${\mathcal G_{u\mid \ell(K)+1}}$ is a tensor-product mesh, its $z$-orthogonal skeleton consists of global domain slices, which yields ${{\mathcal S}_\mathrm z}(s)\subseteq {\Xi_\mathrm z}({\mathcal G_{u\mid \ell(K)+1}}).$ The restriction to the patch ${{\mathcal G}^{\mathbf p,m}(K)}$ yields $$\label{eq: A in AS: Szs-GpK in xiz-GpK} {{\mathcal S}_\mathrm z}(s)\cap\operatorname*{{\textstyle\bigcup}}{{\mathcal G}^{\mathbf p,m}(K)}\subseteq {\Xi_\mathrm z}\bigl({{\mathcal G_{u\mid \ell(K)+1}}^{\mathbf p,m}(K)}\bigr) \stackrel{\eqref{eq: A in AS: xiz-patch is xiz-uni-patch}}={\Xi_\mathrm z}\bigl({{\mathcal G}^{\mathbf p,m}(K)}\bigr)\subseteq{\Xi_\mathrm z}({\mathcal G}).$$ Equation \eqref{eq: A in AS: eqn to contradict} implies that ${{\mathcal S}_\mathrm z}(s)\cap\Sigma\neq\emptyset$, and with \eqref{eq: A in AS: Sigma in patch} we get that ${{\mathcal S}_\mathrm z}(s)\cap\operatorname*{{\textstyle\bigcup}}{{\mathcal G}^{\mathbf p,m}(K)}\neq\emptyset$. Hence $$\begin{aligned} {{\mathcal S}_\mathrm z}(s)\cap\operatorname*{{\textstyle\bigcup}}{{\mathcal G}^{\mathbf p,m}(K)}\ &\supseteq\ {{\mathcal S}_\mathrm z}(s)\cap{U^{\mathbf p,m}}(K) \notag \\ &=\ \bigl[\xi-\varepsilon(2p_1+3),\xi+\varepsilon(2p_1+3)\bigr]\times\bigl[\nu-\varepsilon(2p_2+3),\nu+\varepsilon(2p_2+3)\bigr]\times\{s\}.\end{aligned}$$ Since $w\notin\hat{{\mathcal N}_\mathrm z}(s)$, we know by definition that $(w_1,w_2,s)\notin{\Xi_\mathrm z}$. Then it follows from \eqref{eq: A in AS: Szs-GpK in xiz-GpK} that $(w_1,w_2,s)\notin{{\mathcal S}_\mathrm z}(s)\cap\operatorname*{{\textstyle\bigcup}}{{\mathcal G}^{\mathbf p,m}(K)}$, and hence $$(w_1,w_2)\ \notin\ \bigl[\xi-\varepsilon(2p_1+3),\xi+\varepsilon(2p_1+3)\bigr]\times\bigl[\nu-\varepsilon(2p_2+3),\nu+\varepsilon(2p_2+3)\bigr],$$ in contradiction to \eqref{eq: A in AS: w is near Sigma}. This proves that $\hat{{\mathcal R}_\mathrm z}\cap\Sigma=\emptyset$. Similar arguments prove that $\Sigma\cap\hat{{\mathcal R}_\mathrm y}=\emptyset$, which concludes the proof.
Dual-Compatibility {#sec: DC} ================== This section recalls the concept of Dual-Compatibility, which is a sufficient criterion for linear independence of the T-spline functions, based on dual functionals. We follow the ideas of [@BBSV:2014] for the definitions and for the proof of linear independence. In addition, we prove that all analysis-suitable (and hence all admissible) meshes are dual-compatible and thereby generalize a 2D result from [@BBCS:2012].
\[prp:overlap\]Given the local index vector $X=(x_1,\dots,x_{p+2})$, there exists an $L^2$-functional $\lambda_{X}$ with $\operatorname{supp}\lambda_X=\operatorname{supp}N_X$ such that for any $\tilde X=(\tilde x_1,\dots,\tilde x_{p+2})$ satisfying $$\label{eq: overlap} \begin{alignedat}{4} \forall\ x&\in\{x_1,\dots,x_{p+2}\}&&:&\quad \tilde x_1&\leq x\leq \tilde x_{p+2}&&\Rightarrow x\in \{\tilde x_1,\dots,\tilde x_{p+2}\}\\ \text{and}\quad\forall\ \tilde x&\in\{\tilde x_1,\dots,\tilde x_{p+2}\}&&:&\quad x_1&\leq \tilde x\leq x_{p+2}&&\Rightarrow \tilde x\in \{x_1,\dots,x_{p+2}\}, \end{alignedat}$$ follows $\lambda_{X}(N_{\tilde X}) = \delta_{X\tilde X}$. Following [@Schumaker:2007], we construct a dual functional on the same local knot vector $X$ which we denote by $\lambda_{X}:L^2\bigl([0,1]\bigr)\to{\mathbb R}$. For details, see [@Schumaker:2007 Theorem 4.34, 4.37, and 4.41]. Let $y_j=\cos\bigl(\tfrac{p-j+1}{p+1}\pi\bigr)$ for $j=0,\dots,p+1$. Using divided differences, the perfect B-spline of order $p+1$ is defined by $$B^*_{p+1}(x){\coloneqq}(p+1)\,(-1)^{p+1}\bigl[y_0,\dots,y_{p+1}\bigr]\left((x-\bullet)_+\right)^p$$ and satisfies (amongst other things) $\int_{-1}^1B^*_{p+1}(x){\,\mathrm d}x=1$ as depicted in Figure \[pic: perfect B-spline\]. 
![Plot of the perfect B-splines $B^*_4$ (solid), $B^*_6$ (dotted), $B^*_{10}$ (dashed) and the corresponding antiderivatives.[]{data-label="pic: perfect B-spline"}](perfBS){width=".5\textwidth"} Set $$G_{X}(x) {\coloneqq}\int_{-1}^{\tfrac{2x-{x_1}-{x_{p+2}}}{{x_{p+2}}-{x_1}}}B^*_{p+1}(t){\,\mathrm d}t \quad\text{for }{x_1}\le x\le {x_{p+2}}$$ and $$\phi_{X}(x)=\tfrac1{p!}\left(x-{x_2}\right)\cdots\bigl(x-{x_{p+1}}\bigr).$$ We define the dual functional by $$\label{eq:def_lambda} \lambda_{X}(f) = \int_{{x_1}}^{{x_{p+2}}}\mspace{-9mu}f\, D^{p+1}(G_{X}\,\phi_{X}){\,\mathrm d}x\quad\text{for all }f\in L^2\bigl([0,1]\bigr).$$ Note in particular that for all $f\in L^2({\mathbb R})$ with $f|_{[x_1,x_{p+2}]}=0$ it follows that $\lambda_{X}(f)=0$. If \eqref{eq: overlap} holds, then the claim follows by construction, see [@Schumaker:2007 Theorem 4.41]. We say that two index vectors satisfying \eqref{eq: overlap} *overlap*. In order to define the set of T-spline blending functions of which we desire linear independence, we construct local index vectors for each active node. We define the functional $\lambda_v$ by $$\lambda_v(B_{w}){\coloneqq}\lambda_{\operatorname{\pmb{\mathrm x}}(v)}(N_{\operatorname{\pmb{\mathrm x}}(w)})\cdot\lambda_{\operatorname{\pmb{\mathrm y}}(v)}(N_{\operatorname{\pmb{\mathrm y}}(w)})\cdot\lambda_{\operatorname{\pmb{\mathrm z}}(v)}(N_{\operatorname{\pmb{\mathrm z}}(w)})$$ using the one-dimensional functional $\lambda_X$ defined in \eqref{eq:def_lambda}. \[df: partial overlap\]We say that a pair of nodes $v,w\in{\mathcal N}$ *partially overlap* if their index vectors overlap in at least two out of three dimensions; that is, if (at least) two of the pairs $$\bigl(\operatorname{\pmb{\mathrm x}}(v),\operatorname{\pmb{\mathrm x}}(w)\bigr),\ \bigl(\operatorname{\pmb{\mathrm y}}(v),\operatorname{\pmb{\mathrm y}}(w)\bigr),\ \bigl(\operatorname{\pmb{\mathrm z}}(v),\operatorname{\pmb{\mathrm z}}(w)\bigr)$$ overlap in the sense of Proposition \[prp:overlap\].
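The overlap relation \eqref{eq: overlap} and the partial-overlap test of Definition \[df: partial overlap\] are purely combinatorial conditions on the local index vectors, so they can be checked directly. The following is a minimal Python sketch (the function names are our own and not taken from any T-spline library; index vectors are passed as sorted tuples):

```python
def overlap(X, Y):
    """Mutual-containment test of Proposition [prp:overlap]: every entry
    of X that lies between the first and last entry of Y must itself be
    an entry of Y, and vice versa."""
    def one_way(A, B):
        return all(a in B for a in A if B[0] <= a <= B[-1])
    return one_way(X, Y) and one_way(Y, X)

def partially_overlap(v_vecs, w_vecs):
    """Definition [df: partial overlap]: v_vecs and w_vecs are the triples
    of local index vectors (x, y, z) of two nodes; the nodes partially
    overlap if at least two of the three pairs of vectors overlap."""
    return sum(overlap(a, b) for a, b in zip(v_vecs, w_vecs)) >= 2

assert overlap((0, 1, 2, 3, 4), (2, 3, 4, 5, 6))
# 1.5 lies inside the range of the first vector but is not one of its knots:
assert not overlap((0, 1, 2, 3, 4), (1.5, 2, 3, 4, 5))
```

Note that two index vectors with disjoint ranges overlap vacuously; this is consistent with Proposition \[prp:overlap\], since the corresponding supports are disjoint and $\lambda_X(N_{\tilde X})=0=\delta_{X\tilde X}$ in that case.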
A mesh ${\mathcal G}$ is *dual-compatible (DC)* if any two active nodes $v,w\in{\mathcal N}_A$ with $\bigl|\operatorname{supp}B_v \cap\operatorname{supp}B_w\bigr|>0$ partially overlap. The set of dual-compatible meshes is denoted by ${\mathbb{D\negmedspace C}^{\mathbf p}}$. Definition \[df: partial overlap\] above implies the definition of *partial overlap* given in [@BBSV:2014 Def. 7.1], but the two are not equivalent: the definition given in [@BBSV:2014] is more general, and the corresponding mesh classes are nested in the sense ${\mathbb{D\negmedspace C}^{\mathbf p}}\subseteq{\mathbb{D\negmedspace C}^{\mathbf p}}_{\text{\cite{BBSV:2014}}}$. However, we do have equivalence of these definitions in the two-dimensional setting. The following lemma states that the perturbed regions from Definition \[df: perturbed regions\] indicate non-overlapping knot vectors, and it is applied in the proof of Theorem \[thm: AS in DC\] below. \[lma: perturbed regions indicate non-overlapping\]Let $q\in[0,\tilde X]$ and $v_1,v_2\in{\mathcal N}_A$. If $v_1\in{{\mathcal N}_\mathrm x}(q)\notni v_2$ and ${{\mathcal S}_\mathrm x}(q)\cap\operatorname{supp}B_{v_1}\cap\operatorname{supp}B_{v_2}\neq\emptyset$, then $\operatorname{\pmb{\mathrm x}}(v_1)$ and $\operatorname{\pmb{\mathrm x}}(v_2)$ do not overlap in the sense of \eqref{eq: overlap}. This holds analogously for ${{\mathcal N}_\mathrm y}(r),r\in[0,\tilde Y]$ and ${{\mathcal N}_\mathrm z}(s),s\in[0,\tilde Z]$. Let $v_1=(x_1,y_1,z_1)$. From $v_1\in{{\mathcal N}_\mathrm x}(q)$ and Definition \[df: perturbed regions\], we conclude that $(q,y_1,z_1)\in{\Xi_\mathrm x}$, and hence $q\in\operatorname{\pmb{\mathrm X}}(y_1,z_1)$. Let $\operatorname{\pmb{\mathrm x}}(v_1)=(x_1^1,\dots,x_1^{p_1+2})$ be the local $x$-direction knot vector associated to $v_1$; then ${{\mathcal S}_\mathrm x}(q)\cap\operatorname{supp}B_{v_1}\neq\emptyset$ implies that $x_1^1\le q\le x_1^{p_1+2}$. This and $q\in\operatorname{\pmb{\mathrm X}}(y_1,z_1)$ yield $q\in\operatorname{\pmb{\mathrm x}}(v_1)$. Let $v_2=(x_2,y_2,z_2)$.
From $v_2\notin{{\mathcal N}_\mathrm x}(q)$, we get $(q,y_2,z_2)\notin{\Xi_\mathrm x}$, hence $q\notin\operatorname{\pmb{\mathrm X}}(y_2,z_2)$, and in particular $q\notin\operatorname{\pmb{\mathrm x}}(v_2)$. Let $\operatorname{\pmb{\mathrm x}}(v_2)=(x_2^1,\dots,x_2^{p_1+2})$ be the local knot vector associated to $v_2$; then ${{\mathcal S}_\mathrm x}(q)\cap\operatorname{supp}B_{v_2}\neq\emptyset$ implies that $x_2^1\le q\le x_2^{p_1+2}$. Together with $\operatorname{\pmb{\mathrm x}}(v_1)\ni q\notin\operatorname{\pmb{\mathrm x}}(v_2)$, we see that $\operatorname{\pmb{\mathrm x}}(v_1)$ and $\operatorname{\pmb{\mathrm x}}(v_2)$ do not overlap. \[thm: AS in DC\]${\mathbb{A\negmedspace S}^{\mathbf p}}={\mathbb{D\negmedspace C}^{\mathbf p}}$. “$\subseteq$”. Assume for contradiction a mesh ${\mathcal G}$ which is not DC; then there exist active nodes $v,w\in{\mathcal N}_A$ with $\bigl|\operatorname{supp}B_v \cap\operatorname{supp}B_w\bigr|>0$ that do not overlap in two dimensions, without loss of generality $x$ and $y$. We will show that there exist two slice perturbations ${{\mathcal R}_\mathrm x}(q)$ and ${{\mathcal R}_\mathrm y}(r)$ with nonempty intersection. We denote $v=(v_1,v_2,v_3)$, $w=(w_1,w_2,w_3)$ and $\operatorname{\pmb{\mathrm x}}(v)=(x^v_1,\dots,x^v_{p_1+2})$. The elements of $\operatorname{\pmb{\mathrm y}}(v),\operatorname{\pmb{\mathrm x}}(w),\operatorname{\pmb{\mathrm y}}(w)$ are denoted analogously. Moreover we define $$\begin{aligned} {2} x{_{\mathrm m}}&{\coloneqq}\max(x^v_1,x^w_1),&\enspace x{_{\mathrm M}}&{\coloneqq}\min(x^v_{p_1+2},x^w_{p_1+2})\\ y{_{\mathrm m}}&{\coloneqq}\max(y^v_1,y^w_1),& y{_{\mathrm M}}&{\coloneqq}\min(y^v_{p_2+2},y^w_{p_2+2})\\ z{_{\mathrm m}}&{\coloneqq}\max(z^v_1,z^w_1),& z{_{\mathrm M}}&{\coloneqq}\min(z^v_{p_3+2},z^w_{p_3+2})\end{aligned}$$ and note that $$\begin{aligned} \operatorname{supp}B_v \cap \operatorname{supp}B_w = [x{_{\mathrm m}},x{_{\mathrm M}}]\times[y{_{\mathrm m}},y{_{\mathrm M}}]\times[z{_{\mathrm m}},z{_{\mathrm M}}].\end{aligned}$$ Since $\operatorname{\pmb{\mathrm x}}(v)$ and $\operatorname{\pmb{\mathrm x}}(w)$ do not overlap, there exists $q\in[x{_{\mathrm m}},x{_{\mathrm M}}]$ with either $\operatorname{\pmb{\mathrm x}}(v)\ni q\notin\operatorname{\pmb{\mathrm x}}(w)$ or $\operatorname{\pmb{\mathrm x}}(v)\notni q\in\operatorname{\pmb{\mathrm x}}(w)$. Without loss of generality we assume $\operatorname{\pmb{\mathrm x}}(v)\ni q\notin\operatorname{\pmb{\mathrm x}}(w)$.
Since $\operatorname{\pmb{\mathrm x}}(v)\subseteq \operatorname{\pmb{\mathrm X}}(v_2,v_3)$, it follows by definition that $(q,v_2,v_3)\in{\Xi_\mathrm x}$ and hence $v\in{{\mathcal N}_\mathrm x}(q)$. Since $q\notin\operatorname{\pmb{\mathrm x}}(w)$ and hence $(q,w_2,w_3)\notin{\Xi_\mathrm x}$, it follows that $w\notin{{\mathcal N}_\mathrm x}(q)$. Then $$\begin{aligned} {{\mathcal R}_\mathrm x}(q) &= {{\mathcal S}_\mathrm x}(q)\ \cap\ {\makebox[2em]{$\displaystyle \bigcup_{v'\in{{\mathcal N}_\mathrm x}(q)} $}} \operatorname{supp}B_{v'}\ \cap\ {\makebox[2em]{$\displaystyle \bigcup_{v'\in{\mathcal N}_A\smallsetminus{{\mathcal N}_\mathrm x}(q)} $}} \operatorname{supp}B_{v'}\\ &\supseteq {{\mathcal S}_\mathrm x}(q)\ \cap \ \operatorname{supp}B_v\ \cap \ \operatorname{supp}B_w\\ &= \{q\}\times[y{_{\mathrm m}},y{_{\mathrm M}}]\times[z{_{\mathrm m}},z{_{\mathrm M}}]. \intertext{Analogously, for some $r\in[y{_{\mathrm m}},y{_{\mathrm M}}]$ such that exactly one of $v,w$ lies in ${{\mathcal N}_\mathrm y}(r)$, we have} {{\mathcal R}_\mathrm y}(r) &\supseteq [x{_{\mathrm m}},x{_{\mathrm M}}]\times\{r\}\times[z{_{\mathrm m}},z{_{\mathrm M}}] \intertext{and hence} {{\mathcal R}_\mathrm x}(q) \cap {{\mathcal R}_\mathrm y}(r) &\supseteq \{q\}\times\{r\}\times[z{_{\mathrm m}},z{_{\mathrm M}}]\neq\emptyset, \end{aligned}$$ which means that the mesh ${\mathcal G}$ is not analysis-suitable. “$\supseteq$”. Assume for contradiction that the mesh is not analysis-suitable, and w.l.o.g. that there are $q\in[0,\tilde X]$, $r\in[0,\tilde Y]$ and a point $v$ such that ${{\mathcal R}_\mathrm x}(q)\cap{{\mathcal R}_\mathrm y}(r)\supseteq\{v\}\neq\emptyset$.
Definition \[df: perturbed regions\] implies that there exist $v_1,v_2,v_3,v_4\in{\mathcal N}_A$ with $v_1\in{{\mathcal N}_\mathrm x}(q)\notni v_2$ and $v_3\in{{\mathcal N}_\mathrm y}(r)\notni v_4$ such that $$v\ \in\ {{\mathcal S}_\mathrm x}(q)\cap{{\mathcal S}_\mathrm y}(r)\cap\operatorname{supp}B_{v_1}\cap\operatorname{supp}B_{v_2}\cap\operatorname{supp}B_{v_3}\cap\operatorname{supp}B_{v_4}.$$ Lemma \[lma: perturbed regions indicate non-overlapping\] yields that $\operatorname{\pmb{\mathrm x}}(v_1)$ and $\operatorname{\pmb{\mathrm x}}(v_2)$ do not overlap, and that $\operatorname{\pmb{\mathrm y}}(v_3)$ and $\operatorname{\pmb{\mathrm y}}(v_4)$ do not overlap. *Case 1.* If $v_1\in{{\mathcal N}_\mathrm y}(r)\notni v_2$, or $v_1\notin{{\mathcal N}_\mathrm y}(r)\ni v_2$, then $v_1$ and $v_2$ do not partially overlap. *Case 2.* If $v_1\in{{\mathcal N}_\mathrm y}(r)$ and $v_4\notin{{\mathcal N}_\mathrm x}(q)$, then $v_1$ and $v_4$ do not partially overlap. *Case 3.* If $v_1\notin{{\mathcal N}_\mathrm y}(r)$ and $v_3\notin{{\mathcal N}_\mathrm x}(q)$, then $v_1$ and $v_3$ do not partially overlap. *Case 4.* If $v_2\in{{\mathcal N}_\mathrm y}(r)$ and $v_4\in{{\mathcal N}_\mathrm x}(q)$, then $v_2$ and $v_4$ do not partially overlap. *Case 5.* If $v_2\notin{{\mathcal N}_\mathrm y}(r)$ and $v_3\in{{\mathcal N}_\mathrm x}(q)$, then $v_2$ and $v_3$ do not partially overlap. In all cases (see the table in Fig. \[tb: cases\]), the mesh is not dual-compatible. This concludes the proof.
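The five cases depend only on the four memberships $v_1,v_2\in{{\mathcal N}_\mathrm y}(r)$ and $v_3,v_4\in{{\mathcal N}_\mathrm x}(q)$, so the claim that every configuration is covered by at least one case can be checked exhaustively. A short Python sketch of this check (our own illustration of the case analysis, not code from the paper):

```python
from itertools import product

def applicable_cases(v1_in_Ny, v2_in_Ny, v3_in_Nx, v4_in_Nx):
    """Return the list of cases from the proof that apply to one of the
    16 possible membership configurations."""
    cases = []
    if v1_in_Ny != v2_in_Ny:            # case 1: exactly one of v1, v2 in N_y(r)
        cases.append(1)
    if v1_in_Ny and not v4_in_Nx:       # case 2
        cases.append(2)
    if not v1_in_Ny and not v3_in_Nx:   # case 3
        cases.append(3)
    if v2_in_Ny and v4_in_Nx:           # case 4
        cases.append(4)
    if not v2_in_Ny and v3_in_Nx:       # case 5
        cases.append(5)
    return cases

# every one of the 16 configurations is covered by at least one case:
assert all(applicable_cases(*bits) for bits in product((True, False), repeat=4))
```

Evaluating `applicable_cases` on all sixteen truth assignments reproduces the rows of the case table.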
| $v_1\in{{\mathcal N}_\mathrm y}(r)$ | $v_2\in{{\mathcal N}_\mathrm y}(r)$ | $v_3\in{{\mathcal N}_\mathrm x}(q)$ | $v_4\in{{\mathcal N}_\mathrm x}(q)$ | case(s) |
|:---:|:---:|:---:|:---:|:---:|
| $\mathsf{true}$ | $\mathsf{true}$ | $\mathsf{true}$ | $\mathsf{true}$ | 4 |
| $\mathsf{true}$ | $\mathsf{true}$ | $\mathsf{true}$ | $\mathsf{false}$ | 2 |
| $\mathsf{true}$ | $\mathsf{true}$ | $\mathsf{false}$ | $\mathsf{true}$ | 4 |
| $\mathsf{true}$ | $\mathsf{true}$ | $\mathsf{false}$ | $\mathsf{false}$ | 2 |
| $\mathsf{true}$ | $\mathsf{false}$ | $\mathsf{true}$ | $\mathsf{true}$ | 1, 5 |
| $\mathsf{true}$ | $\mathsf{false}$ | $\mathsf{true}$ | $\mathsf{false}$ | 1, 2, 5 |
| $\mathsf{true}$ | $\mathsf{false}$ | $\mathsf{false}$ | $\mathsf{true}$ | 1 |
| $\mathsf{true}$ | $\mathsf{false}$ | $\mathsf{false}$ | $\mathsf{false}$ | 1, 2 |
| $\mathsf{false}$ | $\mathsf{true}$ | $\mathsf{true}$ | $\mathsf{true}$ | 1, 4 |
| $\mathsf{false}$ | $\mathsf{true}$ | $\mathsf{true}$ | $\mathsf{false}$ | 1 |
| $\mathsf{false}$ | $\mathsf{true}$ | $\mathsf{false}$ | $\mathsf{true}$ | 1, 3, 4 |
| $\mathsf{false}$ | $\mathsf{true}$ | $\mathsf{false}$ | $\mathsf{false}$ | 1, 3 |
| $\mathsf{false}$ | $\mathsf{false}$ | $\mathsf{true}$ | $\mathsf{true}$ | 5 |
| $\mathsf{false}$ | $\mathsf{false}$ | $\mathsf{true}$ | $\mathsf{false}$ | 5 |
| $\mathsf{false}$ | $\mathsf{false}$ | $\mathsf{false}$ | $\mathsf{true}$ | 3 |
| $\mathsf{false}$ | $\mathsf{false}$ | $\mathsf{false}$ | $\mathsf{false}$ | 3 |

\[thm: DC has dual basis\] Let ${\mathcal G}$ be a DC T-mesh.
Then the set of functionals $\{\lambda_v\mid v\in{\mathcal N}_A\}$ is a set of dual functionals for the set $\{B_v\mid v\in{\mathcal N}_A\}$. The proof below follows the ideas of [@BBCS:2012 Proposition 5.1] and [@BBSV:2014 Proposition 7.3]. Let $v,w\in{\mathcal N}_A$. We need to show that $$\label{eq:DC_claim} \lambda_v(B_w) = \delta_{vw},$$ with $\delta$ representing the Kronecker symbol. If $\operatorname{supp}B_v$ and $\operatorname{supp}B_w$ are disjoint (or have an intersection of empty interior), then at least one of the pairs $$\bigl(\operatorname{supp}(N_{\operatorname{\pmb{\mathrm x}}(v)}),\operatorname{supp}(N_{\operatorname{\pmb{\mathrm x}}(w)})\bigr),\ \bigl(\operatorname{supp}(N_{\operatorname{\pmb{\mathrm y}}(v)}),\operatorname{supp}(N_{\operatorname{\pmb{\mathrm y}}(w)})\bigr),\ \bigl(\operatorname{supp}(N_{\operatorname{\pmb{\mathrm z}}(v)}),\operatorname{supp}(N_{\operatorname{\pmb{\mathrm z}}(w)})\bigr)$$ has an intersection with empty interior. Assume w.l.o.g. that $\left|\operatorname{supp}(N_{\operatorname{\pmb{\mathrm x}}(v)})\cap\operatorname{supp}(N_{\operatorname{\pmb{\mathrm x}}(w)})\right|=0$, then $$\lambda_v(B_w) = \underbrace{\lambda_{\operatorname{\pmb{\mathrm x}}(v)}(N_{\operatorname{\pmb{\mathrm x}}(w)})}_0\cdot \lambda_{\operatorname{\pmb{\mathrm y}}(v)}(N_{\operatorname{\pmb{\mathrm y}}(w)})\cdot \lambda_{\operatorname{\pmb{\mathrm z}}(v)}(N_{\operatorname{\pmb{\mathrm z}}(w)})=0.$$ Assume that $\operatorname{supp}B_v$ and $\operatorname{supp}B_w$ have an intersection with nonempty interior. Since the mesh ${\mathcal G}$ is DC, the two nodes overlap in at least two dimensions. Without loss of generality we may assume the index vectors $\bigl(\operatorname{\pmb{\mathrm x}}(v),\operatorname{\pmb{\mathrm x}}(w)\bigr)$ and $\bigl(\operatorname{\pmb{\mathrm y}}(v),\operatorname{\pmb{\mathrm y}}(w)\bigr)$ overlap. 
Proposition \[prp:overlap\] yields $$\lambda_{\operatorname{\pmb{\mathrm x}}(v)}(N_{\operatorname{\pmb{\mathrm x}}(w)}) = \delta_{v_1w_1} \enspace\text{and}\enspace \lambda_{\operatorname{\pmb{\mathrm y}}(v)}(N_{\operatorname{\pmb{\mathrm y}}(w)}) = \delta_{v_2w_2} .$$ The above identities immediately prove \eqref{eq:DC_claim} if $v_1\neq w_1$ or $v_2\neq w_2$. If, on the contrary, $v_1=w_1$ and $v_2=w_2$, then $v$ and $w$ are aligned in $z$-direction, that is, $\operatorname{\pmb{\mathrm z}}(v)$ and $\operatorname{\pmb{\mathrm z}}(w)$ are both vectors of $p_3+2$ consecutive indices from the same index set $\operatorname{\pmb{\mathrm Z}}(v_1,v_2)=\operatorname{\pmb{\mathrm Z}}(w_1,w_2)$. Hence $v$ and $w$ must overlap also in $z$-direction. Again, Proposition \[prp:overlap\] yields $$\lambda_{\operatorname{\pmb{\mathrm z}}(v)}(N_{\operatorname{\pmb{\mathrm z}}(w)}) = \delta_{v_3w_3},$$ which concludes the proof. Let ${\mathcal G}$ be a DC T-mesh. Then the set $\{B_v\mid v\in{\mathcal N}_A\}$ is linearly independent. Assume $$\sum_{v\in{\mathcal N}_A}c_vB_v=0$$ for some coefficients $\{c_v\}_{v\in{\mathcal N}_A}\subseteq{\mathbb R}$. Then, for any $w\in{\mathcal N}_A$, applying $\lambda_w$ to the sum, using linearity and Theorem \[thm: DC has dual basis\], we get $$c_w = \lambda_w\,\Bigl(\sum_{v\in{\mathcal N}_A}c_vB_v\Bigr)=0.$$ [\ ]{} Linear Complexity {#sec: complexity} ================= This section is devoted to a complexity estimate in the style of a famous estimate for the Newest Vertex Bisection on triangular meshes given by Binev, Dahmen and DeVore [@BDV:2004] and, in an alternative version, by Stevenson [@Stevenson:2007]. Linear complexity of the refinement procedure is an indispensable criterion for optimal convergence rates in the Adaptive Finite Element Method (see e.g. [@BDV:2004; @Stevenson:2007; @CFPP:2014] and [@Buffa:Giannelli:2015 Conclusions]).
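The closure mechanism behind such complexity estimates can be illustrated by a one-dimensional toy analogue. The sketch below is our own construction (dyadic bisection with a one-level grading rule), not the paper's three-dimensional $m$-graded algorithm: splitting a marked interval first forces coarser neighbors to be split, yet the total number of splits remains proportional to the number of marked intervals.

```python
from fractions import Fraction

def split(mesh, i, count):
    """Replace interval i by its two children of the next level."""
    s, l = mesh[i]
    mesh[i:i + 1] = [(s, l + 1), (s + Fraction(1, 2 ** (l + 1)), l + 1)]
    count[0] += 1

def refine(mesh, start, count):
    """Split the interval with left endpoint `start`.  Any neighbor that is
    coarser than the interval itself is refined first (closure), so that
    the levels of adjacent intervals never differ by more than one."""
    while True:
        i = next(j for j, (s, _) in enumerate(mesh) if s == start)
        level = mesh[i][1]
        coarse = [n for n in (i - 1, i + 1)
                  if 0 <= n < len(mesh) and mesh[n][1] < level]
        if not coarse:
            split(mesh, i, count)
            return
        refine(mesh, mesh[coarse[0]][0], count)  # closure step

# resolve a "corner singularity" at 0: mark the leftmost interval 30 times
mesh, count = [(Fraction(0), 0)], [0]
for _ in range(30):
    refine(mesh, Fraction(0), count)
```

In this corner-marking scenario every mark causes exactly one split; marking a coarse interval next to much finer ones triggers a short closure chain, and bounding the accumulated closure overhead is exactly what the constant $C_{\mathbf p,m}$ accounts for in three dimensions.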
The estimate and its proof follow our own work [@Morgenstern:Peterseim:2015; @BGMP:2016], which we generalize now to three dimensions and $m$-graded refinement. The estimate reads as follows. \[thm: complexity\] Any sequence of admissible meshes ${\mathcal G}_0,{\mathcal G}_1,\dots,{\mathcal G}_J$ with $${\mathcal G}_j={\operatorname{ref}^{\mathbf p,m}}({\mathcal G}_{j-1},{\mathcal M}_{j-1}),\quad{\mathcal M}_{j-1}\subseteq{\mathcal G}_{j-1}\quad\text{for}\enspace j\in\{1,\dots,J\}$$ satisfies $$\left|{\mathcal G}_J\setminus{\mathcal G}_0\right|\ \le\ C_{\mathbf p,m}\sum_{j=0}^{J-1}|{\mathcal M}_j|\ ,$$ with $C_{\mathbf p,m}=\tfrac{m^{1/3}}{1-m^{-1/3}}\,\bigl(4d_1+1\bigr)\,\bigl(4d_2+m^{1/3}\bigr)\,\bigl(4d_3+m^{2/3}\bigr)$ and $d_1,d_2,d_3$ from Lemma \[lma: K1 in refMS =&gt; K2 in S\] below. \[lma: K1 in refMS =&gt; K2 in S\] Given ${\mathcal M}\subseteq{\mathcal G}\in{\mathbb A^{\mathbf p,m}}$ and $K\in{\operatorname{ref}^{\mathbf p,m}}({\mathcal G},{\mathcal M})\setminus{\mathcal G}$, there exists $K'\in {\mathcal M}$ such that $\ell(K)\le\ell(K')+1$ and $$\operatorname{Dist}(K,K')\le m^{-\ell(K)/3}(d_1,d_2,d_3),$$ with “$\le$” understood componentwise and constants $$\begin{aligned} d_1&{\coloneqq}\tfrac1{1-m^{-1/3}} \,\bigl(p_1+\tfrac{3+m^{1/3}}2+\tfrac{m^{1/3}-1}{m^2}\bigr),\\ d_2&{\coloneqq}\tfrac{m^{1/3}}{1-m^{-1/3}}\,\bigl(p_2+\tfrac{3+m^{1/3}}2+\tfrac{m^{1/3}-1}{m^2}\bigr),\\ d_3&{\coloneqq}\tfrac{m^{2/3}}{1-m^{-1/3}}\,\bigl(p_3+\tfrac{3+m^{1/3}}2+\tfrac{m^{1/3}-1}{m^2}\bigr).\end{aligned}$$ The proof is given in Appendix \[apx: K1 in refMS =&gt; K2 in S\].   
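For concrete degrees and grading parameters, the constants of Theorem \[thm: complexity\] and Lemma \[lma: K1 in refMS =&gt; K2 in S\] can be evaluated directly from the formulas above. A small Python sketch (our own; the degrees $\mathbf p=(3,3,3)$ and $m=2$ are example values), which also confirms the geometric-series identity $\sum_{k\ge0}m^{-k/3}=\tfrac{1}{1-m^{-1/3}}$ used later in the proof:

```python
def complexity_constants(p, m):
    """Evaluate d1, d2, d3 and C_{p,m} from the statements above.

    p: triple of polynomial degrees (p1, p2, p3); m: grading parameter."""
    r = m ** (1.0 / 3.0)              # m^{1/3}
    g = 1.0 / (1.0 - 1.0 / r)         # 1 / (1 - m^{-1/3})
    base = lambda pi: pi + (3 + r) / 2 + (r - 1) / m ** 2
    d1 = g * base(p[0])
    d2 = g * r * base(p[1])
    d3 = g * r ** 2 * base(p[2])
    c = g * r * (4 * d1 + 1) * (4 * d2 + r) * (4 * d3 + r ** 2)
    return d1, d2, d3, c

d1, d2, d3, C = complexity_constants((3, 3, 3), 2)  # example values
# geometric series used in the proof: sum_{k>=0} m^{-k/3} = 1/(1 - m^{-1/3})
assert abs(sum(2 ** (-k / 3) for k in range(400)) - 1 / (1 - 2 ** (-1 / 3))) < 1e-9
```

For these example values $C_{\mathbf p,m}$ is in the millions, which matches the later remark that the theoretical constant is very large compared to the ratios observed in experiments.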
[**()****]{}For $K\in\operatorname*{{\textstyle\bigcup}}{\mathbb A^{\mathbf p,m}}$ and ${\tilde K}\in{\mathcal M}{\coloneqq}{\mathcal M}_0\cup\dots\cup{\mathcal M}_{J-1}$, define $\lambda(K,{\tilde K})$ by $$\lambda(K,{\tilde K}){\coloneqq}\begin{cases}m^{(\ell(K)-\ell({\tilde K}))/3}&\text{if }\ell(K)\le\ell({\tilde K})+1\text{ and }\operatorname{Dist}(K,{\tilde K})\le 2m^{-\ell(K)/3}(d_1,d_2,d_3),\\[.3em]0&\text{otherwise.}\end{cases}$$ [**()***Main idea of the proof.*]{}$$\begin{aligned} {2} \left|{\mathcal G}_J\setminus{\mathcal G}_0\right| &= {\makebox[2.5em]{$\displaystyle \sum_{K\in{\mathcal G}_J\setminus{\mathcal G}_0}$}}1 &&{\mathrel{{\makebox[1.4ex]{$\displaystyle \stackrel{{\textbf{(\ref{sum_lambda > 1})}}}{\le}$}}}}\sum_{K\in{\mathcal G}_J\setminus{\mathcal G}_0}\sum_{{\tilde K}\in{\mathcal M}}\lambda(K,{\tilde K}) \\ &{\mathrel{{\makebox[1.4ex]{$\displaystyle \stackrel{{\textbf{(\ref{summe aller lambdas beschraenkt})}}}{\le}$}}}}\sum_{{\tilde K}\in{\mathcal M}} C_{\mathbf p,m} &&=C_{\mathbf p,m}\,\sum_{j=0}^{J-1} |{\mathcal M}_j|.\end{aligned}$$ [**()***Each $K\in{\mathcal G}_J\setminus{\mathcal G}_0$ satisfies $$\sum_{{\tilde K}\in{\mathcal M}}\lambda(K,{\tilde K})\ \ge\ 1.$$*]{}\[sum\_lambda &gt; 1\]Consider $K\in{\mathcal G}_J\setminus{\mathcal G}_0$. Let $j_1<J$ be such that $K\in{\mathcal G}_{j_1+1}\setminus{\mathcal G}_{j_1}$. Lemma \[lma: K1 in refMS =&gt; K2 in S\] states the existence of $K_1\in{\mathcal M}_{j_1}$ with $\operatorname{Dist}(K,K_1)\le m^{-\ell(K)/3}(d_1,d_2,d_3)$ and $\ell(K)\le\ell(K_1)+1$. Hence $\lambda(K,K_1)=m^{(\ell(K)-\ell(K_1))/3}>0$.
The repeated use of Lemma \[lma: K1 in refMS =&gt; K2 in S\] yields $j_1>j_2>j_3>\dots$ and $K_2,K_3,\dots$ with $K_{i-1}\in{\mathcal G}_{j_i+1}\setminus{\mathcal G}_{j_i}$ and $K_i\in{\mathcal M}_{j_i}$ such that $$\label{eq: complexity -last} \operatorname{Dist}(K_{i-1},K_i)\le m^{-\ell(K_{i-1})/3}(d_1,d_2,d_3)\enspace\text{and}\enspace\ell(K_{i-1})\le\ell(K_i)+1.$$ We apply Lemma \[lma: K1 in refMS =&gt; K2 in S\] repeatedly as long as $\lambda(K,K_i)>0$ and $\ell(K_i)>0$, and we stop at the first index $L$ with $\lambda(K,K_L)=0$ or $\ell(K_L)=0$. If $\ell(K_L)=0$ and $\lambda(K,K_L)>0$, then $$\sum_{{\tilde K}\in{\mathcal M}}\lambda(K,{\tilde K})\ge\lambda(K,K_L)=m^{(\ell(K)-\ell(K_L))/3}\ge m^{1/3}.$$ If $\lambda(K,K_L)=0$ because $\ell(K)>\ell(K_L)+1$, then \eqref{eq: complexity -last} yields $\ell(K_{L-1})\le\ell(K_L)+1<\ell(K)$ and hence $$\sum_{{\tilde K}\in{\mathcal M}}\lambda(K,{\tilde K})\ge\lambda(K,K_{L-1})=m^{(\ell(K)-\ell(K_{L-1}))/3}\ge m^{1/3}.$$ If $\lambda(K,K_L)=0$ because $\operatorname{Dist}(K,K_L)>2m^{-\ell(K)/3}(d_1,d_2,d_3)$, then a triangle inequality shows $$\begin{aligned} 2m^{-\ell(K)/3}(d_1,d_2,d_3) &< \operatorname{Dist}(K,K_1)+\sum_{i=1}^{L-1}\operatorname{Dist}(K_i,K_{i+1}) \\&\le\ m^{-\ell(K)/3}(d_1,d_2,d_3)+\sum_{i=1}^{L-1} m^{-\ell(K_i)/3}(d_1,d_2,d_3),\end{aligned}$$ and hence $\smash{\displaystyle m^{-\ell(K)/3} \le\sum_{i=1}^{L-1} m^{-\ell(K_i)/3}}$. The proof is concluded with $$1\ \le\ \sum_{i=1}^{L-1} m^{(\ell(K)-\ell(K_i))/3}\ =\ \sum_{i=1}^{L-1} \lambda(K,K_i)\ \le\ \sum_{{\tilde K}\in{\mathcal M}}\lambda(K,{\tilde K}).$$ [**()***For all $j\in\{0,\dots,J-1\}$ and ${\tilde K}\in{\mathcal M}_j$ holds $$\sum_{K\in{\mathcal G}_J\setminus{\mathcal G}_0}\lambda(K,{\tilde K})\ \le\ \tfrac{m^{1/3}}{1-m^{-1/3}}\,\bigl(4d_1+1\bigr)\,\bigl(4d_2+m^{1/3}\bigr)\,\bigl(4d_3+m^{2/3}\bigr)\ =\ C_{\mathbf p,m}\ .$$*]{}\[summe aller lambdas beschraenkt\]This is shown as follows.
By definition of $\lambda$, we have $$\begin{aligned} {\makebox[1cm]{$\displaystyle \sum_{K\in{\mathcal G}_J\setminus{\mathcal G}_0}$}}\lambda(K,{\tilde K}) &\le {\makebox[1cm]{$\displaystyle \sum_{K\in\bigcup{\mathbb A^{\mathbf p,m}}\setminus{\mathcal G}_0}$}}\lambda(K,{\tilde K})\\ &= \sum_{j=1}^{\ell({\tilde K})+1}m^{(j-\ell({\tilde K}))/3}\,\#\underbrace{\bigl\{K\in\operatorname*{{\textstyle\bigcup}}{\mathbb A^{\mathbf p,m}}\mid\ell(K)=j\text{ and }\operatorname{Dist}(K,{\tilde K})\le 2m^{-j/3}(d_1,d_2,d_3)\bigr\}}_B.\end{aligned}$$ By definition of the level, $\ell(K)=j$ implies $|K|=m^{-j}$, and hence $m^j\left|\operatorname*{{\textstyle\bigcup}}B\right|$ is an upper bound for $\#B$. The cuboidal set $\operatorname*{{\textstyle\bigcup}}B$ is the union of all admissible elements of level $j$ having their midpoints inside a cuboid of size $$4m^{-j/3}d_1\,\times\,4m^{-j/3}d_2\,\times\,4m^{-j/3}d_3.$$ An admissible element of level $j$ is not bigger than $m^{-j/3}\,\times\,m^{(1-j)/3}\,\times\,m^{(2-j)/3}$. Together, we have $$\bigl|\operatorname*{{\textstyle\bigcup}}B\bigr|\le m^{-j}\,\bigl(4d_1+1\bigr)\,\bigl(4d_2+m^{1/3}\bigr)\,\bigl(4d_3+m^{2/3}\bigr),$$ and hence $\#B\le \bigl(4d_1+1\bigr)\,\bigl(4d_2+m^{1/3}\bigr)\,\bigl(4d_3+m^{2/3}\bigr)$. An index substitution $k{\coloneqq}1-j+\ell({\tilde K})$ proves the claim with $$\sum_{j=1}^{\ell({\tilde K})+1}m^{(j-\ell({\tilde K}))/3}=\sum_{k=0}^{\ell({\tilde K})}m^{(1-k)/3}<m^{1/3}\sum_{k=0}^\infty m^{-k/3}=\tfrac{m^{1/3}}{1-m^{-1/3}}.$$[\ ]{} An experiment on $C_{\mathbf p,m}$ {#an-experiment-on-c_mathbf-pm .unnumbered} ---------------------------------- The constant $C_{\mathbf p,m}$ arising from this theory is very large; however, we observed much smaller ratios of refined to marked elements in the experiment (in all cases less than $\tfrac{C_{\mathbf p,m}}{3000}$, see Figure \[fig: complexity constants\]).
Starting from a $5\times5\times5$ mesh, we applied the refinement algorithm with only one corner element marked, always sticking to the same corner. This is realistic when resolving a singularity of the solution of a discretized PDE. An advantage of larger grading parameters could not be observed for random refinement spread all over the domain. ![The complexity constant $C_{\mathbf p,m}$ in theory (left) and experiment (right). The values of $C'_m$ were taken from an experiment illustrated in Figure \[fig: testing complexity\].[]{data-label="fig: complexity constants"}](Cpm_1 "fig:"){width=".45\textwidth"}![The complexity constant $C_{\mathbf p,m}$ in theory (left) and experiment (right). The values of $C'_m$ were taken from an experiment illustrated in Figure \[fig: testing complexity\].[]{data-label="fig: complexity constants"}](Cpm_2 "fig:"){width=".45\textwidth"} ![Estimation of the experimental constants $C'_m$ for $m=2,\dots,5$.[]{data-label="fig: testing complexity"}](Cpm_4){width=".70\textwidth"} Conclusions & Outlook {#sec: conclusions} ===================== We have generalized the concept of Analysis-Suitability to three-dimensional meshes that originate from tensor-product initial meshes, and proved that it guarantees linear independence of the T-spline blending functions. We introduced a local refinement algorithm with adjustable mesh grading, and proved that it has linear complexity in the sense that the overhead for preserving Analysis-Suitability is essentially bounded by the number of marked elements. We expect that these results also generalize to even-degree and mixed-degree T-splines. In order to achieve this, a universal definition of anchor elements is needed, based on the techniques from [@BBSV:2013]. Open questions that have not been investigated in this paper concern the overlay (that is, the coarsest common refinement of two meshes), the nesting behavior of the T-spline spaces, and more general meshes.
As in our preliminary work [@Morgenstern:Peterseim:2015], we expect that the overlay has a bounded cardinality in terms of the two overlaid meshes, and that it is also an admissible mesh. Nestedness of T-spline spaces is not evident in general [@Li:Scott:2014], but we expect nestedness for the meshes generated by the proposed refinement algorithm. A first step in this issue will be a characterization of three-dimensional meshes that induce nested T-spline spaces. A generalization of this paper to a more general class of meshes will most likely require a manifold representation of the mesh, and use recent results on Dual-Compatibility in spline manifolds [@STV:2015]. T. Sederberg, J. Zheng, A. Bakenov, and A. Nasri, *[T-Splines and T-NURCCs]{}*, ACM Trans. Graph. **22** (2003), no. 3, 477–484. D. R. Forsey and R. H. Bartels, *Hierarchical [B]{}-spline refinement*, Comput. Graphics **22** (1988), 205–212. G. Kuru, C. Verhoosel, K. van der Zee, and E. van Brummelen, *Goal-adaptive isogeometric analysis with hierarchical splines*, Comput. Methods Appl. Mech. Engrg. **270** (2014), 270 – 292. C. Giannelli, B. Jüttler, and H. Speleers, *[THB]{}–splines: the truncated basis for hierarchical splines*, Comput. Aided Geom. Design **29** (2012), 485–498. T. Dokken, T. Lyche, and K. Pettersen, *Polynomial splines over locally refined box-partitions*, Comput. Aided Geom. Design **30** (2013), no. 3, 331 – 356. E. J. Evans, M. A. Scott, X. Li, and D. C. Thomas, *[Hierarchical T-splines: Analysis-suitability, Bézier extraction, and application as an adaptive basis for isogeometric analysis]{}*, Comput. Methods Appl. Mech. Engrg. **284** (2015), 1–20. A. Buffa, D. Cho, and G. Sangalli, *[Linear independence of the T-spline blending functions associated with some particular T-meshes]{}*, Comput. Methods Appl. Mech. Engrg. **199** (2010), no. 23–24, 1437 – 1445. X. Li, J. Zheng, T. Sederberg, T.
Hughes, and M. Scott, *[On Linear Independence of T-spline Blending Functions]{}*, Comput. Aided Geom. Des. **29** (2012), no. 1, 63–76. L. B. da Veiga, A. Buffa, D. Cho, and G. Sangalli, *[Analysis-Suitable T-splines are Dual-Compatible]{}*, Comput. Methods Appl. Mech. Engrg. **249–252** (2012), 42–51, Higher Order Finite Element and Isogeometric Methods. L. B. da Veiga, A. Buffa, G. Sangalli, and R. Vázquez, *Mathematical analysis of variational isogeometric methods*, Acta Numerica **23** (2014), 157–287. P. Morgenstern and D. Peterseim, *[Analysis-suitable adaptive [T]{}-mesh refinement with linear complexity]{}*, Comput. Aided Geom. Design **34** (2015), 50–66. M. Scott, X. Li, T. Sederberg, and T. Hughes, *Local refinement of analysis-suitable [T]{}-splines*, Comput. Methods Appl. Mech. Engrg. **213–216** (2012), 206–222. L. Schumaker, *[Spline Functions: Basic Theory]{}*, 3rd ed., Cambridge Mathematical Library, Cambridge Univ. Press, Cambridge, 2007. P. Binev, W. Dahmen, and R. DeVore, *[Adaptive Finite Element Methods with convergence rates]{}*, Numer. Math. **97** (2004), no. 2, 219–268. R. Stevenson, *Optimality of a standard adaptive finite element method*, Found. Comput. Math. **7** (2007), no. 2, 245–269. C. Carstensen, M. Feischl, M. Page, and D. Praetorius, *Axioms of adaptivity*, Comput. Math. Appl. **67** (2014), no. 6, 1195–1253. A. Buffa and C. Giannelli, *Adaptive isogeometric methods with hierarchical splines: error estimator and convergence*, ArXiv e-prints (2015). A. Buffa, C. Giannelli, P. Morgenstern, and D. Peterseim, *Complexity of hierarchical refinement for a class of admissible mesh configurations*, Comput. Aided Geom. Design (2016), in press. L. B. da Veiga, A. Buffa, G. Sangalli, and R. Vázquez, *[Analysis-suitable T-splines of arbitrary degree: definition, linear independence and approximation properties]{}*, Math. Models Methods Appl. Sci. **23** (2013), no. 11, 1979–2003. X. Li and M. A.
Scott, *[Analysis-suitable T-splines: Characterization, refineability, and approximation]{}*, Math. Models Methods Appl. Sci. **24** (2014), no. 06, 1141–1164. G. [Sangalli]{}, T. [Takacs]{}, and R. [V[á]{}zquez]{}, *[Unstructured spline spaces for isogeometric analysis based on spline manifolds]{}*, ArXiv e-prints (2015). Minor proofs ============ Proof of Lemma \[lma: magic patches are nested\] {#apx: magic patches are nested} ------------------------------------------------ If $K=\hat K$, the claim is trivially fulfilled. If otherwise $K\subsetneqq\hat K$, we consider the following two cases. *Case 1.* Assume that $\ell(K)=\ell(\hat K)+1$. Since $K=[x,x+\tilde x]\times[y,y+\tilde y]\times[z,z+\tilde z]$ is the result of successive subdivisions of a unit cube, it holds that $$\label{eq: df size} \operatorname{size}(\ell(K)){\coloneqq}(\tilde x,\tilde y, \tilde z) =\begin{cases}m^{-\ell(K)/3}\left(1,1,1\right)&\text{if }\ell(K)=0\bmod 3,\\m^{-(\ell(K)-1)/3}\bigl(\tfrac1m,1,1\bigr)&\text{if }\ell(K)=1\bmod 3,\\m^{-(\ell(K)-2)/3}\bigl(\tfrac1m,\tfrac1m,1\bigr)&\text{if }\ell(K)=2\bmod 3.\end{cases}$$ Since $K$ results from the subdivision of $\hat K$, we also have that $$\label{eq: parent dist} \operatorname{Dist}(K,\hat K)=\begin{cases}\bigl(m^{-(\ell(\hat K)+6)/3},0,0\bigr)&\text{if }\ell(\hat K)=0\bmod 3,\\\bigl(0,m^{-(\ell(\hat K)+5)/3},0\bigr)&\text{if }\ell(\hat K)=1\bmod 3,\\\bigl(0,0,m^{-(\ell(\hat K)+4)/3}\bigr)&\text{if }\ell(\hat K)=2\bmod 3.\end{cases}$$ Recall that $${\operatorname{\mathbf D}^{\mathbf p,m}}(k){\coloneqq}\begin{cases}m^{-k/3}\,\bigl(p_1+\tfrac32,p_2+\tfrac32,p_3+\tfrac32\bigr)&\text{if }k=0\bmod3, \\[.7ex] m^{-(k-1)/3}\,\bigl(\tfrac{p_1+3/2}m,p_2+\tfrac32,p_3+\tfrac32\bigr)&\text{if }k=1\bmod3, \\[.7ex] m^{-(k-2)/3}\,\bigl(\tfrac{p_1+3/2}m,\tfrac{p_2+3/2}m,p_3+\tfrac32\bigr)&\text{if }k=2\bmod3.\end{cases}$$ We rewrite in the form $$\label{eq: child dist} \operatorname{Dist}(K,\hat 
K)=\begin{cases}\bigl(0,0,m^{-(\ell(K)+3)/3}\bigr)&\text{if }\ell(K)=0\bmod 3,\\\bigl(m^{-(\ell(K)+5)/3},0,0\bigr)&\text{if }\ell(K)=1\bmod 3,\\\bigl(0,m^{-(\ell(K)+4)/3},0\bigr)&\text{if }\ell(K)=2\bmod 3\end{cases}$$ and observe that ${\operatorname{\mathbf D}^{\mathbf p,m}}(\ell(K))+\operatorname{Dist}(K,\hat K)\le{\operatorname{\mathbf D}^{\mathbf p,m}}(\ell(K)-1)={\operatorname{\mathbf D}^{\mathbf p,m}}(\ell(\hat K))$. Case 1 is concluded with $$\begin{aligned} {U^{\mathbf p,m}}(K) &= \{x\in\Omega\mid\operatorname{Dist}(K,x)\le{\operatorname{\mathbf D}^{\mathbf p,m}}(\ell(K))\}\\ &\subseteq \{x\in\Omega\mid\operatorname{Dist}(\hat K,x)\le{\operatorname{\mathbf D}^{\mathbf p,m}}(\ell(K)) + \operatorname{Dist}(K,\hat K)\}\\ &\subseteq{U^{\mathbf p,m}}(\hat K),\end{aligned}$$ and consequently ${{\mathcal G}^{\mathbf p,m}(K)}\subseteq{{\mathcal G}^{\mathbf p,m}(\hat K)}$. *Case 2.* Consider $K\subset\hat K$ with $\ell(K)>\ell(\hat K)+1$. Then there is a sequence $$K=K_0\subset K_1\subset\dots\subset K_J=\hat K$$ such that $K_{j-1}\in\operatorname{child}(K_j)$ for $j=1,\dots,J$. Case 1 yields $${{\mathcal G}^{\mathbf p,m}(K)}\subseteq{{\mathcal G}^{\mathbf p,m}(K_1)}\subseteq\dots\subseteq{{\mathcal G}^{\mathbf p,m}(\hat K)}.$$[\ ]{} Proof of Lemma \[lma: levels change slowly\] {#apx: levels change slowly} -------------------------------------------- For $\ell(K)=0$, the assertion is always true. For $\ell(K)>0$, consider the parent $\hat K$ of $K$ (i.e., the unique element $\hat K\in\operatorname*{{\textstyle\bigcup}}{\mathbb A^{\mathbf p,m}}$ with $K\in\operatorname{child}(\hat K)$). Since ${\mathcal G}$ is admissible, there are admissible meshes ${\mathcal G}_0,\dots,{\mathcal G}_J={\mathcal G}$ and some $j\in\{0,\dots,J-1\}$ such that $K\in{\mathcal G}_{j+1}=\operatorname{subdiv}({\mathcal G}_j,\{\hat K\})$.
The admissibility ${\mathcal G}_{j+1}\in{\mathbb A^{\mathbf p,m}}$ implies that any $K'\in{{\mathcal G}_j^{\mathbf p,m}(\hat K)}$ satisfies $\ell(K')\ge\ell(\hat K)=\ell(K)-1$. Since levels do not decrease during refinement, we get $$\begin{aligned} \ell(K)-1\le\min\ell({{\mathcal G}_j^{\mathbf p,m}(\hat K)})&\le\min\ell({{\mathcal G}^{\mathbf p,m}(\hat K)})\label{eq: levels change slooowly}\\ \notag&{\mathrel{{\makebox[1.4ex]{$\displaystyle \stackrel{\text{Lemma~\ref{lma: magic patches are nested}}}{\le}$}}}}\enspace\min\ell({{\mathcal G}^{\mathbf p,m}(K)}).\end{aligned}$$ Proof of Lemma \[lma: K1 in refMS => K2 in S\] {#apx: K1 in refMS => K2 in S} ------------------------------------------------- The coefficient ${\operatorname{\mathbf D}^{\mathbf p,m}}(k)$ from Definition \[df: magic patch\] is bounded by $$\label{eq: D bounded} {\operatorname{\mathbf D}^{\mathbf p,m}}(k)\le m^{-k/3}\,\underbrace{\Bigl(p_1+\tfrac32,\ m^{1/3}\bigl(p_2+\tfrac32\bigr),\ m^{2/3}\bigl(p_3+\tfrac32\bigr)\Bigr)}_{\mathbf{\tilde p}}\quad\text{for all }k\in\mathbb N.$$ Recall $\operatorname{size}(k)$ from \eqref{eq: df size} and note that it is decreasing and bounded by $$\label{eq: size bounded} \operatorname{size}(k)\le m^{-k/3}\,\bigl(1,m^{1/3},m^{2/3}\bigr).$$ Hence for $\tilde K\in{\mathcal G}\in{\mathbb A^{\mathbf p,m}}$ and $\tilde K'\in{{\mathcal G}^{\mathbf p,m}(\tilde K)}$, there is $x\in \tilde K'\cap{U^{\mathbf p,m}}(\tilde K)$, and therefore $$\begin{aligned} \operatorname{Dist}(\tilde K,\tilde K') &\le \operatorname{Dist}(\tilde K,x) + \operatorname{Dist}(\tilde K',x) \notag\\ &\le \operatorname{Dist}(\tilde K,x) + \tfrac12\operatorname{size}(\ell(\tilde K')) \notag\\ &{\mathrel{{\makebox[1.4ex]{$\displaystyle \stackrel{\text{Lemma~\ref{lma: levels change slowly}}}{\le}$}}}}\quad\operatorname{Dist}(\tilde K,x) + \tfrac12\operatorname{size}(\ell(\tilde K)-1) \notag\\ &{\mathrel{{\makebox[1.4ex]{$\displaystyle \stackrel{\eqref{eq: size bounded}}{\le}$}}}} m^{-\ell(\tilde K)/3}\,\mathbf{\tilde p}
+ m^{-\ell(\tilde K)/3}\underbrace{\bigl(\tfrac{m^{1/3}}2,\tfrac{m^{2/3}}2,\tfrac m2\bigr)}_{\textstyle\mathbf s} \notag\\ \label{eq: magic radius} &\le m^{-\ell(\tilde K)/3}\left(\mathbf{\tilde p} + \mathbf s\right).\end{aligned}$$ The existence of $K\in{\operatorname{ref}^{\mathbf p,m}}({\mathcal G},{\mathcal M})\setminus{\mathcal G}$ means that Algorithm \[alg: refinement\] subdivides $K'=K_J,K_{J-1},\dots,K_0$ such that $K_{j-1}\in{{\mathcal G}^{\mathbf p,m}(K_j)}$ and $\ell(K_{j-1})<\ell(K_j)$ for $j=J,\dots,1$, having $K'\in {\mathcal M}$ and $K\in\operatorname{child}(K_0)$, with ‘$\operatorname{child}$’ from Definition \[df: subdivision\]. Lemma \[lma: levels change slowly\] yields $\ell(K_{j-1})=\ell(K_j)-1$ for $j=J,\dots,1$, which gives the estimate $$\begin{aligned} {2} \operatorname{Dist}(K',K_0)\enspace&\le\enspace \sum_{j=1}^J\operatorname{Dist}(K_j,K_{j-1}) &\enspace &{\mathrel{{\makebox[1.4ex]{$\displaystyle \stackrel{\eqref{eq: magic radius}}{\le}$}}}} \enspace \sum_{j=1}^Jm^{-\ell(K_j)/3}\,(\mathbf{\tilde p}+\mathbf s) \\[1ex] &= \sum_{j=1}^Jm^{-(\ell(K_0)+j)/3}\,(\mathbf{\tilde p}+\mathbf s) & &< m^{-\ell(K_0)/3}\,(\mathbf{\tilde p}+\mathbf s)\sum_{j=1}^\infty m^{-j/3} \\[1ex] &= \frac{m^{-1/3-\ell(K_0)/3}}{1-m^{-1/3}}\,(\mathbf{\tilde p}+\mathbf s) & &= \frac{m^{-\ell(K)/3}}{1-m^{-1/3}}\,(\mathbf{\tilde p}+\mathbf s).\end{aligned}$$ From \eqref{eq: child dist} we get $$\operatorname{Dist}(K_0,K)\le \bigl(m^{-(\ell(K)+5)/3},m^{-(\ell(K)+4)/3},m^{-(\ell(K)+3)/3}\bigr).$$ This and a triangle inequality conclude the proof. [^1]: Rheinische Friedrich-Wilhelms-Universität Bonn, Wegelerstr.
6, 53115 Bonn, Germany / +4922873-60153 / [<morgenstern@ins.uni.bonn.de>]{} The author gratefully acknowledges support by the Deutsche Forschungsgemeinschaft in the Priority Program 1748 “Reliable simulation techniques in solid mechanics: Development of non-standard discretization methods, mechanical and mathematical analysis” under the project “Adaptive isogeometric modeling of propagating strong discontinuities in heterogeneous materials”.
--- author: - | Dany Vanbeveren\ \ Astrophysical Institute, Vrije Universiteit Brussel, Pleinlaan 2, 1050 Brussels, Belgium\ dvbevere@vub.ac.be\ and\ GroupT Leuven Engineering College,\ Association KU Leuven, Vessaliusstraat 13, 3000 Leuven, Belgium title: '**Signatures of binary evolution processes in massive stars**' --- “The multi-wavelength view of hot, massive stars”; 39$^{\rm th}$ Liège Int. Astroph. Coll., 12-16 July 2010 [ Before binary components interact, they evolve as single stars do. We therefore first critically discuss massive single star processes which affect their evolution, stellar wind mass loss and rotation in particular. Next we consider binary processes and focus on the effect of rotation on binary evolution and on the mass transfer during Roche lobe overflow. The third part highlights the importance of close pairs for the comprehension of the evolution of stellar populations in starburst regions.]{} Introduction ============ Massive stars are among the most important objects in the Universe and many (most?) of them are formed in binaries. A selection of observational and theoretical facts that illustrate the importance of binaries and the evolution of massive and very massive stars in clusters, with special emphasis on massive binaries, has been summarized in two recent review papers (Vanbeveren, 2009 = paper I, and 2010). The present written version of the Liège binary review can be considered as an addendum to both papers. The evolution of massive single stars ===================================== Before one or two massive stars in a binary start to interact, they evolve as single stars do. We therefore first briefly discuss massive single star evolution. The effect of stellar wind mass loss ------------------------------------ The evolution of a massive star depends significantly on its mass loss by stellar wind.
We may distinguish 4 stellar wind mass loss phases: the OB-type phase, the luminous blue variable (LBV) phase, the Wolf-Rayet (WR) phase and the red supergiant (RSG) phase. They were discussed in paper I, but let me focus a bit more on the RSG phase. ### The RSG stellar wind Single stars with an initial mass $\le$ 30-40 M$_\odot$ become RSGs and therefore the RSG stellar wind dominates their further evolution. Most stellar evolutionary codes use the RSG wind formalism proposed by de Jager et al. (1988); however, this formalism predicts the real RSG wind rates to no better than perhaps a factor of 5-10. These rates determine whether or not the massive star will lose most of its hydrogen rich layers, i.e. whether or not it will become a WR star. This means that a large uncertainty factor in the RSG rates may seriously affect the theoretically predicted population of WR stars. It was shown in Vanbeveren (1991) that a 35% increase of the RSG rates (compared to the de Jager rates) is sufficient to obtain correspondence between the observed and theoretically predicted WR population in the Solar neighbourhood. This was worked out in more detail in Vanbeveren (1996) and in Vanbeveren et al. (2007). Note that the Padua group (Salasnich et al., 1999) also advocated evolutionary computations of massive stars with larger RSG rates. Interestingly, Yoon & Cantiello (2010) recently studied the evolution of massive stars with pulsation driven super-winds during the RSG phase. As far as the effect on massive star evolution is concerned, they essentially arrive at conclusions similar to ours.
Let me finally remark that all the massive star population studies performed in Brussels since 1996 account for these higher RSG rates (in particular the population of the different SN types, De Donder & Vanbeveren (2003); the WR population, Vanbeveren et al., 1998a, Van Bever & Vanbeveren, 2000, 2003; binaries and the chemical evolution of the Galaxy, De Donder & Vanbeveren, 2004, etc.). The effect of rotation ---------------------- The effects of rotation on the evolution of massive single stars have been studied very intensively in the past 2 decades by the Geneva group (see the contribution of George Meynet in the present proceedings). One may distinguish three main effects: rotating stars have larger convective cores than non-rotating stars; rotational mixing is responsible for the transport of interior matter up to the surface; and the faster the rotation, the higher the stellar wind mass loss rate. ### Rotation, convective cores and rotational mixing The faster the initial rotation of a massive single star, the larger the convective core, i.e. the effect of rotation on massive star cores is similar to convective core overshooting. Based on the observed rotation velocities of Galactic O-type stars one concludes that they are born with an initial rotational velocity $\approx$ 200-300 km/s (however, see also section 3.1). The calculations of the Geneva team then lead us to conclude that on average moderate convective core overshooting mimics the average effect of rotation on convective cores. Due to rotational mixing, matter from the interior may reach the surface layers. This mixing process only slightly modifies the overall evolution of the massive star, but it may alter the chemical abundances of the surface layers, and a comparison with observed abundances may decide upon the effectiveness of rotational mixing. Unfortunately, the correspondence between theoretical prediction and observations is rather poor (Hunter et al., 2008).
Binaries may help (Langer et al., 2008) but magnetic fields may be needed as well (see the contribution of I. Brott in the present proceedings). A general warning may be appropriate: comparing the Hunter diagram with theoretical prediction involves population synthesis. In Brussels we have been studying massive star and binary populations for about two decades and we frequently found poor correspondence with observations, but in many cases, after second thoughts, we concluded that observational bias was one of the main reasons. Selecting well observed stars/binaries and trying to explain them may be at least as useful as overall population synthesis. OBN binaries have been known for quite some time and I would like to call your attention to the interesting system HD 163181. It is a BN0.5Ia + OBN binary (Hutchings, 1975; Josephs et al., 2001) with a period P = 12 days and masses 13 M$_\odot$ + 22 M$_\odot$. The 13 M$_\odot$ primary has the luminosity of a 30 M$_\odot$ main sequence star and is at least 1.5 mag brighter than the 22 M$_\odot$ secondary. Binary evolution then reveals that the primary must be a core helium burning star at the end of RLOF (Vanbeveren et al., 1998b). The nitrogen enhancements are most probably due to binary mass exchange and more observations of this system may prove to be very instructive. Interestingly, the 13 M$_\odot$ mass loser has all the properties of WR stars in binaries (except maybe the T$_{\rm eff}$) but it is not a WR star. We suspect that in the very near future the supergiant will turn into a WR star. An embarrassing problem of the present rotating massive star evolutionary models is that they produce pulsars that are spinning too fast by at least an order of magnitude (Heger et al., 2000) and also here the coupling between rotation and magnetic fields may be the solution. Some massive stars are known to rotate very fast, close to break up (the Be stars but also some O-type stars).
Due to the combined action of convective core growth and rotational mixing, stars that rotate close to the critical velocity are expected to evolve quasi-homogeneously (Maeder, 1987) and their evolution is quite different from the ’normal’ evolution of massive stars. An important question is how these stars became extremely rapid rotators. Many rapid rotators are known to be binary components or former binary components, and therefore we will come back to this in section 3.1. ### Rotation and the stellar wind mass loss rate One of the main evolutionary effects of rotation is related to the effect of rotation on the stellar mass loss. A first attempt to link rotation and mass loss was proposed by Langer (1997), but Glatzel (1998) showed that the proposed relation may not be correct because it does not account for the effect of gravity darkening (von Zeipel, 1924). An alternative and attractive formalism has been derived by Maeder and Meynet (2000), where the effect of gravity darkening was taken into account. This relation demonstrates that for most of the massive stars (with an average initial rotational velocity of $\approx$ 200-300 km/s) the increase of the stellar wind mass loss with respect to the non-rotating case is very modest. The increase is significant for stars with a large Eddington factor $\Gamma$ (e.g., stars with an initial mass $\ge$ 40 M$_\odot$) that are rotating close to critical. However, the following remarks are appropriate: there are no observations yet to support the relation proposed by Maeder and Meynet (Puls et al., 2010). Moreover, one may wonder whether rotation can be a significant mass loss driver at all, since even at critical rotation, the rotational energy is at most half the escape energy of a massive star (Owocki, 2010, private communication).
The evolution of massive binaries ================================= The main differences between single star evolution and the evolution of the same star when it is a binary component are related to the Roche lobe overflow (RLOF) process and to binary processes which determine the rotation rate of the star. Rotation and binaries --------------------- Due to tidal interaction the massive primaries of most binaries are expected to be slow rotators. Only in very short period systems (P = 1-2 days) can it be expected that massive primaries are rapid rotators for which the evolution proceeds quasi-homogeneously (De Mink, 2010). In binaries where the RLOF of the primary is accompanied by mass transfer towards and mass accretion onto the secondary, the secondary spins up and very rapidly reaches rotational velocities close to the critical one (Packet, 1981). This happens in systems where the RLOF occurs when the outer layers of the primary are mainly in radiative equilibrium (Case A / Case Br systems). Population synthesis predicts that many Be stars are formed this way (Pols and Marinus, 1994; Van Bever and Vanbeveren, 1998). The latter two studies illustrate that one may expect many Be stars with a subdwarf (sdO) or white dwarf (WD) companion. The high temperature of these companions makes them very hard to detect and this may be the reason why so few are known at present ($\phi$ Per is an exception, see section 3.3.1). The Be-components in Be-X-ray binaries form an interesting subclass of the Be sample because here we have every reason to believe that binary action has been important in the formation of the Be star. Many single Be stars are also expected to form via binary mass transfer. The reason is that the supernova explosion of a massive primary disrupts the binary in most cases.
The fact that many Be stars have a neutron star companion thus implies that even more single Be stars have had a past similar to that of the Be stars in the Be-X-ray binaries. The optical components of the standard high mass X-ray binaries are former binary secondaries where mass and angular momentum accretion may have occurred. The mass and helium discrepancy for single stars discussed by Herrero et al. (1992) is also visible in the optical component of the X-ray binary Vela X-1 (Vanbeveren et al., 1993) and we proposed [*the accretion induced full mixing model*]{}. The idea was the following: due to mass and angular momentum accretion, mass gainers spin up. This may induce efficient mixing. We simulated this possibility with our evolutionary code by fully mixing the mass gainer and, after the mass transfer phase, following the further evolution of the mixed star in a normal way. In this way we were able to explain the helium and mass discrepancy in Vela X-1. The more sophisticated mass gainer models of Cantiello et al. (2007) demonstrate that our simplified models are not too bad. ![The probable v$_{rot}$ distribution of O-type stars using the data of Penny (1996) (from Vanbeveren et al., 1998).\[fig\_1\]](Figure1Vanbeveren.pdf){width="10cm"} Starting from the observed $v_{\rm rot}\sin i$ data of O-type stars of Penny (1996), Figure 1 shows the probable distribution of rotational velocities $v_{\rm rot}$ of O-type stars (from Vanbeveren et al., 1998a) and illustrates that many O-type stars are relatively slow rotators, corresponding to an initial average rotational velocity for O-type stars of $\le$ 200 km/s, for which indeed the effect of rotation on their evolution is rather modest. The figure also shows that there is a subset of very rapid rotators. However, many of these rapid rotators are runaway stars (they have a space velocity $\ge$ 30 km/s) and this may indicate that these stars were former binary components, i.e.
they became rapid rotators due to the mass transfer process in a binary and they became runaways due to the supernova explosion of their companion, or they became rapidly rotating runaways due to stellar dynamics in dense stellar clusters, in which case they were former binary members as well but their formation was governed by star merging. An interesting test bed for this type of process may be $\zeta$ Pup, which is indeed a rapidly rotating runaway. Note that Mokiem et al. (2006) obtained rotational velocities of 21 OB dwarfs in the SMC and concluded that the average $v_{\rm rot}$=160-190 km/s. Since massive dwarfs are stars close to the zero age main sequence, this average value is indicative of the average initial rotation velocity of OB-type stars. Note that this value is not significantly different from the initial average value of Galactic O-type stars whereas, as in the Galactic sample, the most rapid rotators in the SMC are runaway stars. All in all, it looks to me that rotation is important for massive star evolution, but perhaps mainly within the framework of binaries in combination with stellar dynamics in young dense clusters. The Roche lobe overflow process ------------------------------- When the RLOF starts while the mass loser has a convective envelope (Case Bc and Case C), the mass loss process happens on the dynamical timescale and a common envelope forms. It can be expected that the common envelope is lost as a superwind where most of the energy is supplied by orbital decay, and that it stops when the two components merge or, when merging of the two stars can be avoided, when most (but not all) of the hydrogen rich layers of the mass loser are removed. This phase is so rapid that it is unlikely that mass accretion plays an important role for the evolution of the secondary star and therefore the latter may not become a rapid rotator.
When the RLOF starts while the mass loser has a radiative envelope (Case A and Case Br), the mass loss process happens on the Kelvin-Helmholtz time scale of the loser, and when the initial mass of the gainer is not too much smaller than the initial mass of the loser, mass transfer and mass accretion become possible. The evolution of the mass loser in Case Br and in most of the Case A massive binaries is very straightforward: due to RLOF the redward evolution of the loser is avoided (i.e., massive primaries in Case A or Case Br binaries do not become red supergiants). The RLOF stops when the loser has lost most (but not all) of its hydrogen rich layers and helium starts burning in the core. At that moment the loser resembles a WR-like star (when the mass is large enough the WR-like star is expected to be a genuine WR star with hydrogen in its atmosphere, typically X = 0.2-0.3). The evolution of the mass gainer in Case A and Case Br binaries is governed by mass and angular momentum accretion, and rotation plays a very important role (see the previous subsection). An important question is whether or not the RLOF in Case A or Case Br binaries is quasi-conservative. Let me first remark that removing matter from a binary at a rate similar to the rate at which the primary loses mass requires a lot of energy, much more than the intrinsic radiation energy of the two components, which is in most cases only sufficient to drive a modest stellar wind. The Utrecht group promoted a massive binary model where extensive mass loss from the system happens when the mass gainer has been spun up by mass and angular momentum transfer and reaches a rotational velocity close to the critical one (Petrovic et al., 2005; De Mink, 2010). However, as discussed already in subsection 2.2.2, rotation is not an efficient mass loss driver whereas even at break up, the rotational energy is at least a factor 2 too small compared to the required escape energy. Van Rensbergen et al.
(2008) proposed the following model: the gas stream during RLOF forms a hot spot either on the surface of the star when the gas stream hits the mass gainer directly, or on the Keplerian disc when mass transfer proceeds via a disc. The radiation energy of the hot spot in combination with the rotational energy of the spun-up mass gainer can then drive mass out of the binary. The following illustrates that this model probably cannot remove from the binary a significant fraction of the mass lost by the loser. For the sake of simplicity, let us neglect rotation because, as stated before, rotation is not an efficient mass loss driver whereas even at critical break up, the rotation energy is too small compared to the escape energy of a massive star. The radiation energy $L_{acc}$ generated by the accretion of the gas stream is given by $$L_{acc} = G\frac{\dot{M}_{acc}\,M}{R}$$ where we neglect the fact that the gas stream does not originate at infinity but at the first Lagrangian point (it can readily be checked that this assumption does not significantly alter our main conclusions). $\dot{M}_{acc}$ is the mass accretion rate, M and R are the mass and the radius of the gainer. When $\eta$ is the fraction of $L_{acc}$ that is effectively transformed into escape energy, it follows that $$\eta\ L_{acc} = \frac{1}{2}\dot{M}_{out} v^{2}_{esc}$$ with $\dot{M}_{out}$ the binary mass loss rate and $v_{esc}$ the binary escape velocity. In order to get a first order estimate of the binary mass loss rate, we can replace $v_{esc}$ by the escape velocity of the mass loser only. The foregoing equations then result in $$\dot{M}_{out}=\eta\ \dot{M}_{acc}$$ Detailed hydrodynamic calculations of stellar winds reveal that the efficiency factor for converting radiation energy into kinetic energy is of the order of 0.01 to 0.001 (Nugis & Lamers, 2002).
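The orders of magnitude in this estimate are easy to check numerically. The following minimal sketch uses illustrative values for the gainer's mass, radius and accretion rate (these numbers are assumptions chosen for illustration, not taken from a specific system) and, as a first-order estimate, identifies $v_{esc}$ with the escape velocity of the gainer, so that M and R cancel and equation (3) follows:

```python
G = 6.674e-8     # gravitational constant, CGS (cm^3 g^-1 s^-2)
MSUN = 1.989e33  # solar mass in g
RSUN = 6.957e10  # solar radius in cm
YEAR = 3.156e7   # year in s
LSUN = 3.828e33  # solar luminosity in erg/s

def mdot_out(mdot_acc, eta):
    """Equation (3): with L_acc = G*Mdot_acc*M/R and
    eta*L_acc = 0.5*Mdot_out*v_esc^2, taking v_esc^2 = 2*G*M/R
    makes M and R cancel, leaving Mdot_out = eta*Mdot_acc."""
    return eta * mdot_acc

# Illustrative gainer: 20 Msun, 8 Rsun, accreting 1e-4 Msun/yr
mdot_acc = 1e-4 * MSUN / YEAR                   # accretion rate in g/s
l_acc = G * mdot_acc * 20 * MSUN / (8 * RSUN)   # accretion luminosity, erg/s
print(f"L_acc ~ {l_acc / LSUN:.1e} L_sun")
for eta in (0.01, 0.001):
    # mass driven out of the binary, in Msun/yr
    print(f"eta = {eta}: Mdot_out = {mdot_out(1e-4, eta):.1e} Msun/yr")
```

Even with the more optimistic efficiency of 0.01, the mass driven out of the binary remains two orders of magnitude below the accretion rate, which is the point made above.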
So, unless the efficiency is much higher, equation (3) illustrates that in general the accretion energy may cause mass loss out of a binary, but this loss is much smaller than the mass accretion rate. Of course, when the gainer rotates at the critical velocity, mass accretion on the equator will not happen. Possibly matter will pile up around the gainer, accretion may happen on the rest of the star, or matter may leave the binary through the second Lagrangian point L2 as illustrated in Figure 2. When this happens the variation of the orbital period can be calculated in a straightforward way (e.g., Vanbeveren et al., 1998b). Note that in most of the population studies performed by different groups the effect of a non-conservative RLOF is investigated using this L2 model. Let me finally remark that mass that leaves the binary through the decretion/accretion disk of the gainer when the gainer rotates at the break up velocity takes with it the specific orbital angular momentum of the gainer but also the specific rotational angular momentum of the equator of the gainer. Interestingly, the sum of both momenta is roughly equal to the specific angular momentum of the L2 point. ![Mass loss from the system through L2 and the formation of a ring around the binary, during the RLOF of the primary in a Case A/ Case Br binary.\[fig\_2\]](Figure2Vanbeveren.pdf){width="10cm"} Some interesting observed test beds of the Roche lobe overflow process ---------------------------------------------------------------------- As discussed in the previous subsection, it is unclear from theoretical considerations whether Case A / Case Br evolution in massive binaries is quasi-conservative. Are there observed binaries that can be considered as RLOF test beds and allow us to say something about the RLOF?
The best candidates are post-RLOF binaries or binaries at the end of RLOF, where one can try to fit evolutionary models in which the evolution of both components is followed simultaneously, adopting different efficiency values for the mass transfer process. We did this for a number of Galactic massive binaries (Vanbeveren et al., 1998b), and de Mink et al. (2007) did so for SMC binaries. Here we reconsider the two most interesting systems. ### $\phi$ Per $\phi$ Per is an sdO6 + B0.5Ve binary with a period = 126 days. It is a post-RLOF binary where the subdwarf O6 star has been the mass loser (thus, evolutionarily speaking, this is the primary, although it is by far the less luminous component in visual luminosity) and the Be star is the mass gainer (the secondary). The fact that the secondary is a rapidly rotating Be star is indicative that mass transfer has played an important role. There are two different studies where the masses of the components were determined, e.g., 1.7 M$_\odot$ + 17.3 M$_\odot$ (Bozic et al., 1995) and 1.14 M$_\odot$ + 9.3 M$_\odot$ (Gies et al., 1996). Accounting for the period of the binary, both sets of masses imply that the previous RLOF was quasi-conservative (Vanbeveren et al., 1998b). ### RY Scuti The massive binary RY Scuti may be a key system for the discussion whether or not the RLOF in massive binaries is conservative. The spectroscopic study of Grundstrom et al. (2007) reveals that it is an O9.7Ibpe + B0.5I binary with a period = 11.2 days and masses 7 M$_\odot$ + 30 M$_\odot$. The O-type supergiant is the most luminous component and comparison with evolutionary models of binaries reveals that it is most probably a core helium burning star near the end of RLOF with a significantly reduced surface hydrogen content (X = 0.3-0.4). Similarly to the supergiant in HD 163181 (section 2.2.1), we predict that the O-type supergiant will soon become a WR star.
If the masses are correct then this system is an illustration of a massive binary where the RLOF was quasi-conservative, for the following reasons. From evolution it follows that the 7 M$_\odot$ star comes from a star with initial mass $\le$ 20 M$_\odot$ that lost $\le$ 13 M$_\odot$ by RLOF. Since the initial mass of the secondary (mass gainer) must obviously have been $\le$ 20 M$_\odot$ as well, it must have accreted at least 10 M$_\odot$ of the $\le$ 13 M$_\odot$ lost by the loser in order to become a 30 M$_\odot$ star, i.e. the mass accretion efficiency must have been at least 80%, and we call this quasi-conservative. Note that the observations of Grundstrom et al. (2007) seem to indicate that there is some circum-binary material, but it is clear that the model discussed above does not contradict this fact. ### The stellar wind and binaries Spherically symmetric stellar wind mass loss of one or both components in a binary increases the binary period and decreases the amount of mass lost by RLOF when the stellar wind mass loss happens before the latter. Stars with an initial mass $\ge$ 30-40 M$_\odot$ lose their hydrogen rich layers via an O-type wind and LBV-type mass loss. When such a star is a binary component with a period large enough that the RLOF would start after these O- and LBV-type mass loss processes, the RLOF will not happen (the ’LBV scenario’ of massive close binaries as introduced in Vanbeveren, 1991). Our evolutionary computations of massive single stars with our preferred RSG wind rates (section 2.1.2) predict that stars in the mass range 15-20 M$_\odot$ up to 30-40 M$_\odot$ lose sufficient hydrogen rich layers so that in the HR-diagram they turn back and become WR or WR-like stars. It is clear that when such a star is a binary member with a period large enough that this RSG mass loss process starts before the Roche lobe overflow (RLOF), the RLOF will be avoided.
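The mass budget behind the RY Scuti argument above reduces to simple bookkeeping; the following minimal sketch (the masses are those quoted in the RY Scuti discussion, and the function name is ours) computes the implied lower bound on the accretion efficiency:

```python
def min_accretion_efficiency(m_loser_init, m_loser_now,
                             m_gainer_init, m_gainer_now):
    """Lower bound on the mass accretion efficiency: the mass the gainer
    must have gained, divided by the mass the loser has lost."""
    lost = m_loser_init - m_loser_now        # mass shed by the loser
    gained = m_gainer_now - m_gainer_init    # mass the gainer picked up
    return gained / lost

# RY Scuti: loser 20 -> 7 Msun (so it lost at most 13 Msun),
# gainer 20 -> 30 Msun (so it accreted at least 10 Msun)
beta = min_accretion_efficiency(20.0, 7.0, 20.0, 30.0)
print(f"minimum accretion efficiency: {beta:.0%}")
```

Since the 20 M$_\odot$ initial masses are upper limits, the resulting 10/13 is a lower bound on the efficiency, in line with the roughly 80% quoted above.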
This means that what is called Case C evolution of binaries does not happen when the primary star has an initial mass $\ge$ 15-20 M$_\odot$ (the RSG scenario of massive binaries, Vanbeveren, 1996). The evolutionary computations with our preferred RSG rates also predict that stars in the mass range between 10 M$_\odot$ and 15-20 M$_\odot$ lose a few M$_\odot$ during the RSG phase. When such a star is the primary of a binary with a period such that it will evolve through a common envelope (CE) phase, the total mass lost during the CE phase may be reduced by the preceding RSG stellar wind mass loss. The binary $\upsilon$ Sgr (Upsilon Sagittarii) may have evolved this way and may be an interesting test bed for the effect of RSG mass loss on binary evolution. It consists of an A-type supergiant + B4-7 main sequence star. The period = 138 days and the present masses are 2.5 M$_\odot$ + 4 M$_\odot$. The A supergiant is extremely hydrogen deficient, in the core helium/helium shell burning phase, with Log L/L$_{\odot}$ = 4.6. Evolutionary models indicate that the initial masses were 12 M$_\odot$ + 4 M$_\odot$. The binary is post-common envelope, but to understand its evolution (in particular the period evolution) one has to accept that RSG stellar wind mass loss has been important and has lowered the importance of common envelope mass loss (see also Vanbeveren et al., 1998b).

Close pairs - key to comprehension of the evolution of stellar populations in starburst regions
===============================================================================================

Mass transfer and mass accretion during a canonical RLOF in Case A/Br binaries is responsible for the formation of a blue straggler sequence in young clusters (Pols and Marinus, 1994; Van Bever and Vanbeveren, 1998, see Figure 3).
Since these blue stragglers are mass gainers or binary mergers, they can be expected to be rapid rotators, i.e., a cluster of slow rotators with a significant population of close binaries will become populated with rapid rotators.

![A typical starburst with primordial binaries after 8 Myr; the blue stars are rapidly-rotating mass gainers or binary mergers (from Van Bever and Vanbeveren, 1998).\[fig\_3\]](Figure3Vanbeveren.pdf){width="10cm"}

Starburst99 is an interesting spectral synthesis tool for estimating all kinds of properties of starburst regions where only integrated spectra are available. However, it should be noted that this tool only accounts for the properties of single stars. The effects of binaries on the spectral synthesis of starbursts have been studied in Brussels: the effect of binaries on the evolution of W(H$_\beta$) (Van Bever & Vanbeveren, 1999), the effect of binaries on the evolution of UV spectral features in massive starbursts (Belkus et al., 2003), the effect of binaries on WR spectral features in massive starbursts (Van Bever & Vanbeveren, 2003), and hard X-rays emitted by starbursts with binaries (Van Bever & Vanbeveren, 2000). Note that Brinchmann et al. (2008) investigated the spectral properties of WR galaxies in the Sloan Digital Sky Survey and concluded from a comparison with theoretical population synthesis that binaries are necessary.

Intermezzo 1
------------

Vanbeveren (1982) discussed a possible relation between the maximum stellar mass in a cluster and the total cluster mass. It was concluded that [*the integrated galaxial stellar IMF should be steeper than the stellar IMF*]{}. This was worked out in more detail some 20 years later by Kroupa and Weidner (2003) and Weidner and Kroupa (2006), who essentially arrived at the same conclusion.
A consequence of the fact that the mass of the most massive star in a cluster correlates with the cluster mass is that the mass ratio distribution of the most massive binary population in the cluster may also correlate with the cluster mass. To illustrate: suppose that the cluster mass indicates a maximum stellar mass/total binary mass of 50 M$_\odot$; then one may expect binaries like 40 M$_\odot$ + 10 M$_\odot$ or 30 M$_\odot$ + 20 M$_\odot$ etc., but not 40 M$_\odot$ + 30 M$_\odot$.

Intermezzo 2
------------

Could it be that stars form in isolation? The origin of massive O-type field stars has been studied by de Wit et al. (2004). The authors proposed the following procedure to find a candidate O-type star that may have formed in isolation: take a non-runaway O-type field star and look for young clusters within 65 pc of the O-star. The value 65 pc was obtained by assuming that the lifetime of an O-type star is $\le$ $10^7$ yr. This is true for single stars; however, the lifetime of an O-type star in a binary may be 2-3 times larger, and therefore it may be necessary to look for young clusters within 200 pc. The model goes as follows: a 12 M$_\odot$ + 9 M$_\odot$ binary is dynamically ejected from a dense cluster with a velocity of 6 km/s. After 30 million years, when the binary is 200 pc away from its parent cluster, the 12 M$_\odot$ primary starts its RLOF. A quasi-conservative RLOF turns the 9 M$_\odot$ secondary into a 19 M$_\odot$ rejuvenated O-type star. When the primary remnant finally explodes as a supernova, the 19 M$_\odot$ O-type star most likely becomes a single star, but the magnitude and direction of its space velocity may have changed completely, possibly so much that its trajectory no longer points back to the parent cluster.

Stellar dynamics in young dense star clusters
---------------------------------------------

Ultra Luminous X-ray sources (ULX) are point sources with X-ray luminosities up to 10$^{42}$ erg s$^{-1}$.
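The Eddington-limit mass estimate invoked below for the MGG-11 source can be sketched in a few lines. This is an illustrative sketch, not part of the original analysis: the source luminosity is an assumed value of order $10^{41}$ erg s$^{-1}$, and $L_{\rm Edd} \approx 1.26\times10^{38}\,(M/{\rm M}_\odot)$ erg s$^{-1}$ is the standard hydrogen Eddington luminosity.

```python
# Minimum black-hole mass if the X-ray source radiates at or below the
# Eddington luminosity L_Edd ~ 1.26e38 * (M / Msun) erg/s.
L_EDD_PER_MSUN = 1.26e38   # erg/s per solar mass (standard hydrogen value)
L_X = 1.26e41              # assumed source luminosity, erg/s (illustrative)

M_min = L_X / L_EDD_PER_MSUN   # minimum BH mass in solar masses
print(f"M_BH >= {M_min:.0f} Msun")   # -> M_BH >= 1000 Msun
```

For a source near $10^{41}$ erg s$^{-1}$ this reproduces the $\sim$1000 M$_\odot$ lower limit quoted for MGG-11.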
MGG-11 is a young dense star cluster with Solar-type metallicity $\sim$200 pc from the centre of the starburst galaxy M82, the parameters of which have been studied by McCrady et al. (2003). A ULX is associated with the cluster. When the X-rays are due to Eddington-limited mass accretion onto a black hole (BH), it is straightforward to show that the mass of the BH has to be at least 1000 M$_\odot$. However, how does one form a star with Solar metallicity and a mass larger than 1000 M$_\odot$? Mass segregation in a dense young cluster, associated with core collapse and the formation of a runaway stellar collision process, was proposed by Portegies Zwart et al. (2004). Note that the latter paper mainly addressed the dynamical evolution of a dense cluster, but the evolution of the very massive stellar collision product was poorly described.

![The mass evolution during core-hydrogen-burning and during core-helium-burning of very massive stars (Solar metallicity) with an initial mass 300 M$_\odot$, 500 M$_\odot$, 750 M$_\odot$ and 1000 M$_\odot$ (from Belkus et al., 2007).\[fig\_3b\]](Figure4Vanbeveren.pdf){width="10cm"}

![The mass evolution of the collision runaway object in a MGG-11 type cluster. The curve with the largest final mass (blue curve) = simulation with small mass loss, similar to the results of Portegies Zwart and McMillan (2002); the curve with the lowest final mass (black curve) = simulation with the stellar wind mass loss formalism as discussed in Belkus et al. (2007) for Solar-type metallicity; the curve with the second largest final mass (red curve) = same as the black curve but for a SMC-type metallicity (from Vanbeveren et al., 2009).\[fig\_4\]](Figure5Vanbeveren.pdf){width="10cm"}

The evolution of very massive stars has been studied in detail by Belkus et al. (2007) and Yungelson et al. (2008), and it was concluded that stellar wind mass losses during core hydrogen burning and core helium burning are very important (Figure 4). Belkus et al.
presented a convenient evolutionary recipe for such very massive stars, which can easily be implemented in an N-body dynamical code. Our N-body code, which includes this recipe, has been described in Belkus (2008) and in Vanbeveren et al. (2009) and was applied to simulate the evolution of MGG-11. In Figure 5 we show the evolution of the runaway stellar collision object for MGG-11 predicted by our code. The blue simulation is performed assuming a stellar wind mass loss formalism for very massive stars similar to the one used by Portegies Zwart et al. (2004). It can readily be checked that our simulation is very similar to that of the latter paper, which gives some confidence that our N-body routine is working properly. We then repeated the N-body run but with our preferred evolutionary scheme for very massive stars (the black run in Figure 5). Our main conclusion was that, with our preferred stellar wind mass loss rates for very massive stars at Solar metallicity, the runaway collision object does not grow into an intermediate-mass black hole. Similar conclusions were reached by Glebbeek et al. (2009) for MGG-11 and by Chatterjee et al. (2010) for the Arches cluster, although in both studies cluster dynamics and the evolution of very massive stars are not linked self-consistently. Our simulations then promote the model for the ULX in MGG-11 where the X-rays are due to super-Eddington accretion onto a stellar mass BH, a model that seems to be the most probable one for many of these systems (Gladstone et al., 2009). We also made a simulation for a MGG-11-like cluster but with a significantly smaller metallicity. As can be seen from Figure 5 (the red simulation), the formation of a BH with a mass of a few 100 M$_\odot$ is then possible, which is of course a direct consequence of our adopted dependence of the stellar wind mass loss rate on the metallicity.
From this simulation we are inclined to conclude that, if the progenitors of globular clusters started out as massive starbursts, it is not unlikely that an intermediate mass black hole formed as a consequence of mass segregation and core collapse in a dense massive cluster.

Belkus, H., 2008, PhD Thesis (Vrije Universiteit Brussel).
Belkus, H., Van Bever, J., Vanbeveren, D., 2007, ApJ 659, 1576.
Belkus, H., Van Bever, J., Vanbeveren, D., van Rensbergen, W., 2003, A&A 400, 429.
Bozic, H., Harmanec, P., Horn, J., Koubsky, P., et al., 1995, A&A 304, 235.
Brinchmann, J., Kunth, D., Durret, F., 2008, A&A 485, 657.
Cantiello, M., Yoon, S.-C., Langer, N., Livio, M., 2007, A&A 465, 29.
Chatterjee, S., Goswami, S., Umbreit, S., Glebbeek, E., et al., 2009, arXiv:0911.1483.
De Donder, E., Vanbeveren, D., 2003, New Astronomy 8, 817.
De Donder, E., Vanbeveren, D., 2004, New Astronomy Reviews 48, 861.
de Jager, C., Nieuwenhuijzen, H., van der Hucht, K.A., 1988, A&ASuppl 72, 259.
de Mink, S. E., Pols, O. R., Hilditch, R. W., 2007, A&A 467, 1181.
de Mink, S. E., PhD Thesis (Utrecht University).
de Wit, W. J., Testi, L., Palla, F., Vanzi, L., Zinnecker, H., 2004, A&A 425, 937.
Gies, D.R., Thaller, M.L., Bagnuolo, W.G. Jr., Kaye, A.B., et al., 1996, Bull. American Astron. Soc. 28, 1373.
Gladstone, J., Roberts, T.P., Done, C., 2009, MNRAS 397, 1836.
Glatzel, W., 1998, A&A 339, L5.
Glebbeek, E., Gaburov, E., de Mink, S. E., Pols, O. R., Portegies Zwart, S. F., 2009, A&A 497, 255.
Grundstrom, E. D., Gies, D. R., Hillwig, T. C., McSwain, M. V., et al., 2007, ApJ 667, 505.
Heger, A., Langer, N., Woosley, S. E., 2000, ApJ 528, 368.
Herrero, A., Kudritzki, R. P., Vilchez, J. M., Kunze, D., et al., 1992, A&A 261, 209.
Hunter, I., Lennon, D.J., Dufton, P.L., Trundle, C., et al., 2008, A&A 479, 541.
Hutchings, J.B., 1975, ApJ 200, 122.
Josephs, T. S., Gies, D. R., Bagnuolo, W. G., Jr., Shure, M. A., et al., 2001, PASP 113, 957.
Kroupa, P., Weidner, C., 2003, ApJ 598, 1076.
Langer, N., 1997, in 'Luminous Blue Variables: Massive Stars in Transition', eds. Nota, A., Lamers, H., ASP Conf. Series 120, p. 83.
Langer, N., Cantiello, M., Yoon, S.-C., Hunter, I., et al., 2008, IAUS 250, 167.
Maeder, A., 1987, A&A 178, 159.
Maeder, A., Meynet, G., 2000, A&A 361, 159.
McCrady, N., Gilbert, A.M., Graham, J.R., 2003, ApJ 596, 240.
Mokiem, M. R., de Koter, A., Evans, C. J., Puls, J., et al., 2006, A&A 456, 1131.
Nugis, T., Lamers, H., 2002, A&A 389, 162.
Packet, W., 1981, A&A 102, 17.
Penny, L.R., 1996, PhD Thesis (Georgia State University).
Petrovic, J., Langer, N., van der Hucht, K. A., 2005, A&A 435, 1013.
Pols, O.R., Marinus, M., 1994, A&A 288, 475.
Portegies Zwart, S. F., Baumgardt, H., Hut, P., Makino, J., McMillan, S. L. W., 2004, Nature 428, 724.
Puls, J., Sundqvist, J. O., Rivero González, J. G., 2010, arXiv:1009.0364.
Salasnich, B., Bressan, A., Chiosi, C., 1999, A&A 342, 131.
Van Bever, J., Belkus, H., Vanbeveren, D., Van Rensbergen, W., 1999, New Astron. 4, 173.
Van Bever, J., Vanbeveren, D., 1998, A&A 334, 21.
Van Bever, J., Vanbeveren, D., 2000, A&A 358, 462.
Van Bever, J., Vanbeveren, D., 2003, A&A 400, 63.
Van Rensbergen, W., De Greve, J.P., De Loore, C., Mennekens, N., 2008, A&A 487, 1129.
Vanbeveren, D., 1982, A&A 115, 65.
Vanbeveren, D., 1991, A&A 252, 159.
Vanbeveren, D., 1996, in 'Evolutionary processes in binary stars', eds. Wijers, R.A.M.J. and Davies, M.B., ASI Series, vol. 477, Dordrecht: Kluwer Academic Publishers, p. 155.
Vanbeveren, D., 2009, New Astron. Reviews 53, 27.
Vanbeveren, D., 2010, in 'Star clusters: basic galactic building blocks throughout time and space', Proceedings of the International Astronomical Union, IAU Symposium, Volume 266, p. 293.
Vanbeveren, D., Belkus, H., Van Bever, J., Mennekens, N., 2009, Astrophys. Space Sci. 324, 271.
Vanbeveren, D., De Donder, E., Van Bever, J., Van Rensbergen, W., De Loore, C., 1998, New Astronomy 3, 443.
Vanbeveren, D., Herrero, A., Kunze, D., van Kerkwijk, M., 1993, Space Sci. Reviews 66, 395.
Vanbeveren, D., Van Bever, J., Belkus, H., 2007, ApJ 662, 107.
Vanbeveren, D., Van Rensbergen, W., De Loore, C., 1998b, in 'The Brightest Binaries', Kluwer Academic Pub., Dordrecht.
von Zeipel, H., 1924, MNRAS 84, 665.
Weidner, C., Kroupa, P., 2006, MNRAS 365, 1333.
Yoon, S.C., Cantiello, M., 2010, ApJ 717, 62.
Yungelson, L.R., van den Heuvel, E.P.J., Vink, J.S., et al., 2008, A&A 477, 223.
---
abstract: 'In this paper we consider the classical capacities of quantum-classical channels corresponding to measurement of observables. Special attention is paid to the case of continuous observables. We give the formulas for the unassisted and entanglement-assisted classical capacities $C,C_{ea}$ and consider some explicitly solvable cases which give simple examples of entanglement-breaking channels with $C<C_{ea}.$ We also elaborate on the ensemble-observable duality to show that $C_{ea}$ for the measurement channel is related to the $\chi$-quantity for the dual ensemble in the same way as $C$ is related to the accessible information. This provides both the accessible information and the $\chi$-quantity for the quantum ensembles dual to our examples.'
author:
- 'A. S. Holevo'
title: Information capacity of quantum observable
---

Introduction
============

In quantum information theory one often has to deal with both quantum and classical information. A usual device is then to embed the classical system into a quantum one by representing classical states, i.e. probability distributions on the phase space $\Omega ,$ as diagonal density operators in the artificial Hilbert space $\mathcal{H}$ spanned by the orthonormal basis $\left\{ |\omega \rangle ;\omega \in \Omega \right\} :$ $$P=\left\{ p_{\omega }\right\} \longrightarrow \rho =\sum\limits_{\omega }p_{\omega }|\omega \rangle \langle \omega |.$$ This works for finite and countable $\Omega $, although in the latter case $\mathcal{H}$ becomes infinite dimensional. Any channel with discrete classical input alphabet $\mathcal{X}$ or output alphabet $\mathcal{Y}$ can then be regarded as a quantum channel.
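As a concrete illustration of this embedding and of the preparation channel defined next, here is a minimal numerical sketch; the two qubit states are illustrative choices, not taken from the text.

```python
import numpy as np

def embed(p):
    """Embed a classical distribution {p_w} as the diagonal density matrix
    rho = sum_w p_w |w><w|."""
    return np.diag(np.asarray(p, dtype=float))

def cq_channel(rho, states):
    """Preparation channel P(rho) = sum_x <x|rho|x> rho_x, with {|x>}
    the computational basis and states[x] = rho_x."""
    return sum(rho[x, x] * states[x] for x in range(len(states)))

# Example: a biased coin prepared as one of two (illustrative) qubit states.
p = [0.25, 0.75]
rho_in = embed(p)
ket0 = np.array([[1.0], [0.0]])
plus = np.array([[1.0], [1.0]]) / np.sqrt(2)
states = [ket0 @ ket0.T, plus @ plus.T]
rho_out = cq_channel(rho_in, states)
print(np.trace(rho_out))   # trace is preserved
```

The output is again a density operator, as the text asserts for the channel $\mathcal{P}$.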
In the case of a classical-quantum (c-q) channel corresponding to preparation of states $\left\{ \rho _{x};x\in \mathcal{X}\right\} $ the quantum channel is $$\mathcal{P}(\rho )=\sum\limits_{x\in \mathcal{X}}\langle x|\rho |x\rangle \rho _{x}, \label{prep}$$ where $\left\{ |x\rangle ;x\in \mathcal{X}\right\} $ is a fixed orthonormal basis. Similarly, in the case of a quantum-classical (q-c) channel corresponding to measurement of an observable given by a discrete probability operator-valued measure (POVM) $M=\{M_{y};y\in \mathcal{Y}\}$ [@asp] we have $$\mathcal{M}(\rho )=\sum\limits_{y\in \mathcal{Y}}\left( \mathrm{Tr}\rho M_{y}\right) |y\rangle \langle y|. \label{meas}$$ However, in the case of continuous classical variables the situation is different for c-q and q-c channels. In principle, there is no problem with embedding c-q channels. Let $\mathcal{X}$ be a domain in $\mathbb{R}^{k}$ and let $dx$ be the Lebesgue measure; then the continuous analog of (\[prep\]) is $$\mathcal{P}(\rho )=\int_{\mathcal{X}}\langle x|\rho |x\rangle \rho _{x}dx,$$ where $\left\{ |x\rangle ;x\in \mathcal{X}\right\} $ is the Dirac system satisfying $\langle x|x^{\prime }\rangle =\delta (x-x^{\prime }).$ Here $\mathcal{P}$ maps density operators into density operators. Now let $M=\{M(dy)\}$ be a quantum observable (POVM) with a continuous set of outcomes $\mathcal{Y}\subseteq \mathbb{R}^{k}.$ Then for a density operator $\rho $ the diagonal operator $\int_{\mathcal{Y}}|y\rangle \langle y|\mathrm{Tr}\rho M(dy)$ has infinite trace, so in general there is no continuous analog of (\[meas\]). This is related to the well-known repeatability issue for continuous observables and the nonexistence of a normal expectation onto the Abelian subalgebra of diagonal operators (see e.g. [@dav], Sec. 4.4, [@ozawa], [@struc], Sec. 4.1.4).
The only way is to consider the q-c channel as the transformation $\mathcal{M}:\rho \longrightarrow \mathrm{Tr}\rho M(dy)$ of density operators to probability distributions[^1] on $\mathcal{Y}$. The main interest in this paper will be the classical capacities of such a channel – both the unassisted $C\left( \mathcal{M}\right) $ and the entanglement-assisted $C_{ea}\left( \mathcal{M}\right)$. We give the general formulas and consider some explicitly solvable cases which provide simple examples of entanglement-breaking channels with $C_{ea}>C.$ Almost simultaneously with the first version of this work the papers [@da], [@or] appeared, where the quantity $C(\mathcal{M})$ was studied in detail for the finite case. In particular, the ensemble-observable duality [@hall] was used to relate $C(\mathcal{M})$ to the accessible information of the dual ensemble. In the last Section we elaborate further on the duality transformation to show that $C_{ea}\left( \mathcal{M}\right)$ is in a similar relation with the $\chi$-quantity for the dual ensemble. This allows us to compute both the accessible information and the $\chi$-quantity for the quantum ensembles dual to our examples.
The classical capacities of quantum observables
===============================================

Consider the channel (\[meas\]) in the case of discrete $\mathcal{Y}$ and finite-dimensional input Hilbert space $\mathcal{H}.$ Since a q-c channel is entanglement-breaking, the unassisted classical capacity is given by the one-letter expression $$C(\mathcal{M})=C_{\chi }(\mathcal{M})=\sup_{\pi }I\left( \pi ;M\right), \label{C}$$ where $\pi $ is a finite probability distribution on the state space $\mathfrak{S}(\mathcal{H})$ assigning probabilities $\pi _{x}$ to states $\rho _{x}$ (an ensemble), and $$I\left( \pi ;M\right) =H\left( P_{\bar{\rho}_{\pi }}\right) -\sum\limits_{x}\pi _{x}H\left( P_{\rho _{x}}\right)$$ is the Shannon information between the input $x$ and the output $y.$ Here $\bar{\rho}_{\pi }=\sum\limits_{x}\pi _{x}\rho _{x}$, $P_{\rho }=\left\{ \mathrm{Tr}\rho M_{y}\right\} $ is the probability distribution of the measurement outcomes, and $H\left( \cdot \right) $ is the Shannon entropy. This can be rewritten as $$C(\mathcal{M})=C_{\chi }(\mathcal{M})=\sup_{\rho }\chi _{\Phi }\left( \rho \right) , \label{Cchi}$$ where $$\chi _{\Phi }\left( \rho \right) =H\left( P_{\rho }\right) -\inf_{\pi :\bar{\rho}_{\pi }=\rho }\sum\limits_{x}\pi _{x}H\left( P_{\rho _{x}}\right) . \label{chi}$$ Then consider the entanglement-assisted capacity which, according to the result of Shor et al. [@bsst], is given by the formula $$C_{ea}(\mathcal{M})=\sup_{\rho }I(\rho ;\mathcal{M}), \label{Cea}$$ where $$I(\rho ;\mathcal{M})=S(\rho )+S(\mathcal{M}(\rho ))-S(\rho ,\mathcal{M})$$ is the quantum mutual information. Here $S(\cdot )$ is the von Neumann entropy and $S(\rho ,\mathcal{M})$ is the entropy exchange.
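The Shannon information appearing in the one-letter expression (\[C\]) is easy to evaluate directly for small examples. The following sketch computes $I(\pi;M)=H(P_{\bar\rho_\pi})-\sum_x\pi_xH(P_{\rho_x})$ for an illustrative qubit ensemble and a two-outcome projective observable (both chosen for illustration, not taken from the text):

```python
import numpy as np

def H(p):
    """Shannon entropy in bits, ignoring zero entries."""
    p = np.asarray(p, dtype=float)
    p = p[p > 1e-15]
    return float(-np.sum(p * np.log2(p)))

def outcome_dist(rho, povm):
    """P_rho = {Tr(rho M_y)} for a discrete POVM."""
    return np.real([np.trace(rho @ M) for M in povm])

def shannon_info(pi, states, povm):
    """I(pi; M) = H(P_rhobar) - sum_x pi_x H(P_rho_x)."""
    rho_bar = sum(p * r for p, r in zip(pi, states))
    return H(outcome_dist(rho_bar, povm)) - sum(
        p * H(outcome_dist(r, povm)) for p, r in zip(pi, states))

# Projective measurement in the computational basis ...
povm = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]
# ... applied to an equiprobable ensemble of the two basis states.
pi = [0.5, 0.5]
states = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]
print(shannon_info(pi, states, povm))   # perfect distinguishability: 1 bit
```

Maximizing this quantity over ensembles $\pi$ gives $C(\mathcal{M})$ as in (\[C\]).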
Let $p_{y}=\mathrm{Tr}\rho M_{y}$ and let $V_{y}$ be an operator satisfying $M_{y}=V_{y}^{\ast }V_{y},$ for example, $V_{y}=M_{y}^{1/2}.$ Then the density operator $\frac{V_{y}\rho V_{y}^{\ast }}{p_{y}}=\rho \left( y|M\right) $ can be interpreted as the posterior state of the measurement of the observable $M$ with the instrument $\rho \rightarrow \{V_{y}\rho V_{y}^{\ast }\}$ in the state $\rho $. The following formula was obtained by Shirokov [@shir]: $$I(\rho ;\mathcal{M})=S\left( \rho \right) -\sum\limits_{y}\left( \mathrm{Tr}\rho M_{y}\right) S\left( \rho \left( y|M\right) \right) . \label{I}$$ Indeed, let us use the relation $S\left( \rho ,\mathcal{M}\right) =S(\tilde{\mathcal{M}}(\rho )),$ where $\tilde{\mathcal{M}}$ is the complementary channel. According to [@comp], the complementary channel for (\[meas\]) is $$\tilde{\mathcal{M}}(\rho )=\sum\limits_{y}|y\rangle \langle y|\otimes V_{y}\rho V_{y}^{\ast }=\sum\limits_{y}|y\rangle \langle y|\otimes p_{y}\rho \left( y|M\right) .$$ It follows that $S(\tilde{\mathcal{M}}(\rho ))=H(P_{\rho })+\sum\limits_{y}p_{y}S\left( \rho \left( y|M\right) \right) ,$ while $S(\mathcal{M}(\rho ))=H(P_{\rho })$, hence (\[I\]). The entanglement-assisted classical capacity of the channel $\mathcal{M}$ follows by substituting this expression into (\[Cea\]). Now consider the channel $\mathcal{M}:\rho \longrightarrow \mathrm{Tr}\rho M(dy)$ in the case of an arbitrary measurable space $\mathcal{Y}$ and finite-dimensional input Hilbert space $\mathcal{H}.$ Then the relations (\[C\]) and (\[Cea\]) can be generalized to this case. Since the output is classical, the protocol of entanglement-assisted transmission of classical information should be explained in this case. First, a pure entangled state $$|\psi \rangle =\sum_{j}\lambda _{j}|j\rangle \otimes |j\rangle ,$$ where $\left\{ |j\rangle \right\} $ is an orthonormal basis in $\mathcal{H}$, is distributed between the input (Alice) and the output (Bob).
Thus the classical Bob gets an additional quantum space $\mathcal{H}$, becoming a classical-quantum system [@bl]. The states of such a system are positive operator-valued measures $\left\{ \sigma (dy)\right\} $ satisfying $\mathrm{Tr}\int_{\mathcal{Y}}\sigma (dy)=1.$ Alice uses different encoding maps $\mathcal{E}_{w}$ for different input signals $w$. The joint state of Alice and Bob is then $$\left( \mathcal{E}_{w}\otimes \mathrm{Id}\right) \left( |\psi \rangle \langle \psi |\right) =\sum_{j,k}\lambda _{j}\lambda _{k}\mathcal{E}_{w}\left( |j\rangle \langle k|\right) \otimes |j\rangle \langle k|,$$ and after the measurement $\mathcal{M}$ on Alice's side Bob gets the state $\left\{ \sigma _{w}(dy)\right\} $ with $$\sigma _{w}(dy)=\sum_{j,k}\lambda _{j}\lambda _{k}\left[ \mathrm{Tr}\mathcal{E}_{w}\left( |j\rangle \langle k|\right) M(dy)\right] |j\rangle \langle k|$$ $$=\sum_{j,k}\lambda _{j}\lambda _{k}\langle k|\mathcal{E}_{w}^{\ast }\left( M(dy)\right) |j\rangle |j\rangle \langle k|=\rho ^{1/2}\overline{\mathcal{E}_{w}^{\ast }\left( M(dy)\right) }\rho ^{1/2},$$ where $\rho =\sum_{j}\lambda _{j}|j\rangle \langle j|$ and the bar means complex conjugation in the basis $\left\{ |j\rangle \right\} .$ Then Bob applies his decoding given by a classical-quantum observable $\left\{ N_{yw^{\prime }}\right\} ,$ satisfying $\sum_{w^{\prime }}N_{yw^{\prime }}\equiv I,$ with the conditional probabilities of outcomes $\mathsf{P}\left( w^{\prime }|w\right) =\int_{\mathcal{Y}}\mathrm{Tr}\sigma _{w}(dy)N_{yw^{\prime }}.$ The continuous analog of formula (\[I\]) considered in [@shir] is $$I(\rho ;\mathcal{M})=S\left( \rho \right) -\int_{\mathcal{Y}}\left( \mathrm{Tr}\rho M(dy)\right) S\left( \rho \left( y|M\right) \right) . \label{I_mod}$$ In [@bsst] it was stressed that the entanglement-assisted classical capacity of entanglement-breaking channels can be greater than the unassisted capacity. The example given there was the depolarizing channel with high enough error probability (see also Appendix).
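Formula (\[I\]) is straightforward to evaluate numerically. The sketch below (an illustrative example, not from the text) takes $V_y=M_y^{1/2}$ and computes $I(\rho;\mathcal{M})$ for an unsharp two-outcome qubit observable, for which the posterior states are mixed and $I$ falls well below $S(\rho)$:

```python
import numpy as np

def S(rho):
    """von Neumann entropy in bits."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log2(w)))

def psd_sqrt(M):
    """Square root of a positive semidefinite matrix via eigendecomposition."""
    w, U = np.linalg.eigh(M)
    return U @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ U.conj().T

def mutual_info(rho, povm):
    """I(rho; M) = S(rho) - sum_y p_y S(rho(y|M)), with V_y = M_y^{1/2}."""
    total = S(rho)
    for M in povm:
        p = float(np.real(np.trace(rho @ M)))
        V = psd_sqrt(M)
        post = V @ rho @ V.conj().T / p   # posterior state rho(y|M)
        total -= p * S(post)
    return total

# Unsharp measurement of sigma_z: M_± = (I ± eta * sigma_z) / 2.
eta = 0.5
sz = np.diag([1.0, -1.0])
I2 = np.eye(2)
povm = [(I2 + eta * sz) / 2, (I2 - eta * sz) / 2]
rho = I2 / 2   # maximally mixed input, S(rho) = 1 bit
print(mutual_info(rho, povm))   # about 0.189 bits for eta = 0.5
```

For $\eta=1$ the observable is sharp, the posterior states are pure, and the expression returns the full $S(\rho)=1$ bit, in line with the overcomplete-system example of the next Section.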
In the next Sections we will see that the inequality $C_{ea}>C$ is rather common for measurement channels with unsharp observables.

Examples
========

**1.** Consider the case of a general overcomplete system, $M_{y}=|\psi _{y}\rangle \langle \psi _{y}|,$ $\sum_{y}|\psi _{y}\rangle \langle \psi _{y}|=I$ in an $m$-dimensional Hilbert space $\mathcal{H}.$ Then the posterior state $\rho \left( y|M\right) =\frac{|\psi _{y}\rangle \langle \psi _{y}|}{\langle \psi _{y}|\psi _{y}\rangle }$ is pure and hence $S\left( \rho \left( y|M\right) \right) =0.$ Thus $I(\rho ;\mathcal{M})=S\left( \rho \right) $ and $$C_{ea}(\mathcal{M})=\sup_{\rho }S\left( \rho \right) =\log m. \label{C1}$$ A special case is the covariant observable $$M_{g}=\frac{m}{\left\vert G\right\vert }V_{g}|\psi _{0}\rangle \langle \psi _{0}|V_{g}^{\ast }, \label{cov}$$ where $V_{g}$ is an irreducible representation of the group $G$ and $|\psi _{0}\rangle $ is a unit vector [@asp]. Then the channel $\mathcal{M}$ is covariant and by [@covar] we have $$C(\mathcal{M})=C_{\chi }(\mathcal{M})=H\left( \mathcal{M}\left( \frac{I}{m}\right) \right) -\min_{\psi }H\left( \mathcal{M}\left( |\psi \rangle \langle \psi |\right) \right) .
\label{C3}$$ But $\mathcal{M}\left( \frac{I}{m}\right) $ is the uniform distribution over $G,$ hence $H\left( \mathcal{M}\left( \frac{I}{m}\right) \right) =\log \left\vert G\right\vert ,$ while $$\begin{aligned} H\left( \mathcal{M}\left( |\psi \rangle \langle \psi |\right) \right) &=&-\sum_{g}\frac{m}{\left\vert G\right\vert }\left\vert \langle \psi |V_{g}|\psi _{0}\rangle \right\vert ^{2}\log \frac{m}{\left\vert G\right\vert }\left\vert \langle \psi |V_{g}|\psi _{0}\rangle \right\vert ^{2} \\ &=&-\log \left\vert G\right\vert -\frac{m}{\left\vert G\right\vert }\sum_{g}\left\vert \langle \psi |V_{g}|\psi _{0}\rangle \right\vert ^{2}\log \left\vert \langle \psi |V_{g}|\psi _{0}\rangle \right\vert ^{2}.\end{aligned}$$ Therefore $$C(\mathcal{M})=\log m+\frac{m}{\left\vert G\right\vert }\max_{\psi }\sum_{g}\left\vert \langle \psi |V_{g}|\psi _{0}\rangle \right\vert ^{2}\log \left\vert \langle \psi |V_{g}|\psi _{0}\rangle \right\vert ^{2},$$ which is typically less than $C_{ea}(\mathcal{M})=\log m$ (see Sec. 4). **2.** Let $\Theta $ be the unit sphere in $\mathcal{H}$ and let $\nu (d\theta )$ be the uniform distribution on $\Theta .$ Then by [@asp], Sec. IV.4 (see also Appendix) $$m\int_{\Theta }|\theta \rangle \langle \theta |\nu (d\theta )=I, \label{resid}$$ thus we have a continuous overcomplete system, i.e. an observable $M(d\theta)=m|\theta \rangle \langle \theta |\nu (d\theta )$ in $\mathcal{H}$ with values in $\Theta$.
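The resolution of the identity (\[resid\]) can be checked by Monte Carlo sampling: a normalized complex Gaussian vector is uniformly distributed on the unit sphere $\Theta$, so the empirical mean of $m|\theta\rangle\langle\theta|$ should converge to $I$ (a sketch, with illustrative parameters):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n_samples = 3, 200_000

# Normalized complex Gaussian vectors are uniform on the unit sphere Theta.
V = rng.standard_normal((n_samples, m)) + 1j * rng.standard_normal((n_samples, m))
V /= np.linalg.norm(V, axis=1, keepdims=True)

# Monte Carlo estimate of m * int |theta><theta| nu(d theta); rows of V are theta.
estimate = m * (V.T @ V.conj()) / n_samples

print(np.max(np.abs(estimate - np.eye(m))))   # small: the integral reproduces I
```

The trace of the estimate equals $m$ exactly (each sampled vector is normalized), while the off-diagonal entries vanish at the usual $1/\sqrt{n}$ Monte Carlo rate.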
According to the remark above, $C_{ea}(\mathcal{M})=\log m.$ The channel $\mathcal{M}$ maps a density operator $\rho $ to the probability distribution $m\langle \theta |\rho |\theta \rangle \nu (d\theta )$ on $\Theta .$ All these outcome probability distributions are absolutely continuous with respect to $\nu (d\theta ),$ hence we can use the differential entropy $h(p_{\rho })=-\int_{\Theta }p_{\rho }(\theta )\log p_{\rho }(\theta )\nu (d\theta ),$ where $p_{\rho }(\theta )=m\langle \theta |\rho |\theta \rangle ,$ to get the continuous analog of the formula (\[chi\]): $$\chi _{\Phi }\left( \rho \right) =h\left( p_{\rho }\right) -\inf_{\pi :\bar{\rho}_{\pi }=\rho }\sum\limits_{x}\pi _{x}h\left( p_{\rho _{x}}\right) . \label{chi_mod}$$ Let us use this formula together with (\[Cchi\]) to compute $C(\mathcal{M})=C_{\chi }(\mathcal{M}).$ The channel $\mathcal{M}$ is covariant with respect to the irreducible action of the unitary group $U(\mathcal{H})$ in the sense that $$\mathcal{M}\left( U\rho U^{\ast }\right) =m\langle U^{\ast }\theta |\rho |U^{\ast }\theta \rangle \nu (d\theta ).$$ Therefore, similarly to (\[C3\]), $$C(\mathcal{M})=C_{\chi }(\mathcal{M})=h\left( \mathcal{M}\left( \frac{I}{m}\right) \right) -\min_{\theta ^{\prime }}h\left( \mathcal{M}\left( |\theta ^{\prime }\rangle \langle \theta ^{\prime }|\right) \right) . \label{C2}$$ But $\mathcal{M}\left( \frac{I}{m}\right) $ is the uniform distribution over $\Theta $ with the density $p(\theta )\equiv 1,$ hence $h\left( \mathcal{M}\left( \frac{I}{m}\right) \right) =0.$ On the other hand, $$-h\left( \mathcal{M}\left( |\theta ^{\prime }\rangle \langle \theta ^{\prime }|\right) \right) =\int_{\Theta }m\left\vert \langle \theta |\theta ^{\prime }\rangle \right\vert ^{2}\log m\left\vert \langle \theta |\theta ^{\prime }\rangle \right\vert ^{2}\nu (d\theta ). \label{int}$$ By unitary invariance of $\nu $, this quantity is the same for all $\theta ^{\prime }$, so there is no need for minimization in (\[C2\]).
To compute it we use Lemma IV.4.1 from [@asp], according to which $$\int_{\Theta }F\left( \left\vert \langle \theta |\theta ^{\prime }\rangle \right\vert \right) \nu (d\theta )=-\int\limits_{0}^{1}F(r)d(1-r^{2})^{m-1}. \label{formula}$$ Then (\[int\]) becomes $$-\int\limits_{0}^{1}mr^{2}\log mr^{2}d(1-r^{2})^{m-1}=\int\limits_{0}^{1}m(1-u)\log m(1-u)du^{m-1},$$ where $u=1-r^{2},$ which after integration by parts gives (see Appendix) $$C(\mathcal{M})=\log m-\log e\sum_{k=2}^{m}\frac{1}{k}. \label{euler}$$ For $m\rightarrow \infty $ we have $C(\mathcal{M})\rightarrow \log e\,(1-\gamma ),$ where $\gamma \approx 0.577$ is Euler’s constant. At the same time, $C_{ea}(\mathcal{M})=\log m\rightarrow \infty .$ The value (\[euler\]) was obtained in the paper [@robb] as the “subentropy” of the chaotic state $\rho=I/m$. This is not a simple coincidence, see Sec. 4. **3.** If $\mathcal{H}$ is infinite-dimensional while $\mathcal{Y}$ is discrete, then the quantity (\[C\]) is usually infinite, but there is an additional input constraint $\left\{ \rho :\mathrm{Tr}\rho E\leq N\right\} ,$ where $E$ is a positive selfadjoint operator (usually the energy) and $N$ is a constant (energy constraint). Then instead of (\[Cchi\]) one has the constrained classical capacity $$C(\mathcal{M},N)=C_{\chi }(\mathcal{M},N)=\sup_{\rho :\mathrm{Tr}\rho E\leq N}\chi _{\Phi }\left( \rho \right) , \label{Ccon}$$ and instead of (\[Cea\]) – the constrained entanglement-assisted classical capacity $$C_{ea}(\mathcal{M},N)=\sup_{\rho :\mathrm{Tr}\rho E\leq N}I(\rho ;\mathcal{M}). \label{Ceacons}$$ Some additional conditions are required to ensure finiteness of the entropies in (\[chi\]) and (\[I\]), see [@const]. If $\mathcal{M}$ is a continuous observable, we expect formulas (\[Ccon\]), (\[Ceacons\]) to hold with the appropriate modifications (\[chi\_mod\]), resp. (\[I\_mod\]). Such is the case of the canonical observable with the energy constraint.
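The closed form (\[euler\]) can be verified by direct numerical integration of $\int_0^1 m(1-u)\log m(1-u)\,du^{m-1}$ (a sketch; entropies are taken in bits, so $\log=\log_2$):

```python
import numpy as np

def capacity_exact(m):
    """C(M) = log2(m) - log2(e) * sum_{k=2}^m 1/k   (the closed form)."""
    return np.log2(m) - np.log2(np.e) * sum(1.0 / k for k in range(2, m + 1))

def capacity_integral(m, n=1_000_000):
    """Midpoint rule for  int_0^1 m(1-u) log2(m(1-u)) (m-1) u^(m-2) du."""
    u = (np.arange(n) + 0.5) / n
    f = m * (1 - u) * np.log2(m * (1 - u)) * (m - 1) * u ** (m - 2)
    return float(np.sum(f) / n)

for m in (2, 3, 5):
    print(m, capacity_exact(m), capacity_integral(m))
# For large m the capacity tends to log2(e) * (1 - gamma) ~ 0.61 bits,
# while C_ea = log2(m) grows without bound.
```

The two columns agree, illustrating the gap between $C$ and $C_{ea}=\log m$ for this observable.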
Consider one Bosonic mode $Q,P$ and the canonical observable given by the POVM $$M(d^{2}z)=|z\rangle \langle z|\frac{d^{2}z}{\pi };\quad z\in \mathbb{C}. \label{canobs}$$ This observable describes the approximate joint measurement of $Q,P$ ([@asp], Sec. VI.8) and in quantum optics is realized by heterodyning. The corresponding q-c channel $\mathcal{M}$ takes a density operator into the probability distribution $$\rho \rightarrow p_{\rho }(z)=\langle z|\rho |z\rangle \frac{d^{2}z}{\pi },$$ which is absolutely continuous with respect to the Lebesgue measure $\frac{d^{2}z}{\pi }$ with the probability density $\langle z|\rho |z\rangle $ equal to the Husimi function. The posterior states are the coherent states $\rho \left( z|M\right) =|z\rangle \langle z|$, which are pure and have zero entropy. Thus $I(\rho ;\mathcal{M})=S\left( \rho \right) .$ Denote by $\rho _{N}$ the Gaussian density operator with zero mean and the number of quanta $\mathrm{Tr}\rho _{N}a^{\dagger }a=N.$ It maximizes the quantum entropy under the constraint $$\mathrm{Tr}\rho a^{\dagger }a\leq N, \label{number}$$ namely $$\max_{(\ref{number})}S(\rho )=S(\rho _{N})=(N+1)\log (N+1)-N\log N\equiv g(N).$$ The formula (\[Ceacons\]) then gives the following expression for the entanglement-assisted classical capacity of the channel $\mathcal{M}$ with the constraint (\[number\]): $$C_{ea}(\mathcal{M};N)=g(N)=\log (N+1)+\log \left( 1+\frac{1}{N}\right) ^{N}. \label{C4}$$ On the other hand, the channel is covariant with respect to the irreducible action of the Weyl (displacement) operators at the input and the shift group of the argument $z$ at the output.
The output entropy of the channel $ \mathcal{M}$ is just the classical differential entropy $h(p_{\rho })$ and the continuous analog of (\[C2\]) gives $$C(\mathcal{M};N)=C_{\chi }(\mathcal{M};N)=\max_{(\ref{number})}h(p_{\rho })- \check{h}(\mathcal{M}),$$ where $$\check{h}(\mathcal{M})=\min_{|\psi \rangle \langle \psi |}h(p_{|\psi \rangle \langle \psi |})$$ is the minimal output differential entropy. By the Wehrl conjecture proved by Lieb [@lieb], $\check{h}(\mathcal{M})=\log e$ and the minimum is attained on any coherent state. On the other hand, $\max_{(\ref{number} )}h(p_{\rho })$ is attained on $\rho _{N}$ and $$\max_{(\ref{number})}h(p_{\rho })=h(p_{\rho _{N}})=\log e(N+1).$$ Indeed, $$\int p_{\rho }(z)|z|^{2}\frac{d^{2}z}{\pi }=\int \langle z|a^{\dagger }\rho a|z\rangle \frac{d^{2}z}{\pi }=\mathrm{Tr}\rho aa^{\dagger }=\mathrm{Tr}\rho a^{\dagger }a+1$$ and the constraint (\[number\]) implies $\int p_{\rho }(z)|z|^{2}\frac{ d^{2}z}{\pi }\leq N+1.$ But $\max h(p)$ under the last constraint is achieved on the probability density $$(N+1)^{-1}\exp \left( -\frac{|z|^{2}}{N+1}\right) =p_{\rho _{N}}(z).$$ Thus we obtain the value[^2] $$C(\mathcal{M};N)=C_{\chi }(\mathcal{M};N)=\log (N+1). \label{C5}$$ Ensemble-measurement duality ============================ Let us first describe the duality between quantum observables and ensembles, see [@hall], [@da], [@or].
If $M=\{M_{y};y\in \mathcal{Y}\}$ is a quantum observable and $\pi =\left\{ p_{x},\rho _{x};x\in \mathcal{X}\right\} $ is an ensemble of quantum states, then $$p_{xy}=p_{x}\mathrm{Tr}\rho _{x}M_{y}$$ is a probability distribution on $\mathcal{X\times Y}.$ On the other hand, $$p_{xy}=p_{y}^{\prime }\mathrm{Tr}\rho _{y}^{\prime }M_{x}^{\prime },$$ where, denoting $\bar{\rho}_{\pi }=\sum\limits_{x}p_{x}\rho _{x},$ we have $ p_{y}^{\prime }\rho _{y}^{\prime }=\bar{\rho}_{\pi }{}^{1/2}M_{y}\bar{\rho} _{\pi }{}^{1/2}$ so that $p_{y}^{\prime }=\mathrm{Tr}\bar{\rho}_{\pi }M_{y}$ and $M_{x}^{\prime }=p_{x}\bar{\rho}_{\pi }{}^{-1/2}\rho _{x}\bar{\rho}_{\pi }{}^{-1/2}.$ Here $M^{\prime }=\left\{ M_{x}^{\prime };x\in \mathcal{X} \right\} $ is the new observable and $\pi ^{\prime }=\left\{ p_{y}^{\prime },\rho _{y}^{\prime };y\in \mathcal{Y}\right\} $ is the new ensemble. Therefore the Shannon information between $x,y$ is $$I(\pi ,M)\mathbf{=}I(\pi ^{\prime },M^{\prime }\mathbf{).}$$ From this it is deduced ([@da], Proposition 3) that $$C(\mathcal{M})\equiv \max_{\pi }I(\pi ,M)=\max_{\rho }A(\pi _{\rho }^{\prime }\mathbf{),} \label{A1}$$ where $A(\pi _{\rho }^{\prime }\mathbf{)=}\max_{M^{\prime \prime }}I(\pi _{\rho }^{\prime },M^{\prime \prime })$ is the accessible information of the ensemble $\pi _{\rho }^{\prime }=\left\{ \mathrm{Tr}\rho M_{y},\frac{\rho ^{1/2}M_{y}\rho ^{1/2}}{\mathrm{Tr}\rho M_{y}}\right\} .$ Let us recall the well-known bound [@h1] $$\label{ho} A(\pi )\leq S\left( \sum\limits_{y}p_{y}\rho _{y}\right) -\sum\limits_{y}p_{y}S\left( \rho _{y}\right) \equiv \chi \left( \pi \right)$$ with equality attained if and only if the operators $p_y\rho_y$ all commute.
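This duality is easy to exercise numerically. The sketch below (plain NumPy; the ensemble and the projective observable are arbitrary illustrative choices) checks that the dual pair $(\pi',M')$ reproduces the same joint distribution — hence $I(\pi,M)=I(\pi',M')$ — and that $M'$ is again a resolution of the identity:

```python
import numpy as np

rng = np.random.default_rng(0)
d, nx = 3, 4  # Hilbert space dimension, ensemble size

def rand_density(dim):
    a = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    rho = a @ a.conj().T
    return rho / np.trace(rho).real

# ensemble pi = {p_x, rho_x} and projective observable M = {|y><y|}
states = [rand_density(d) for _ in range(nx)]
p = np.array([0.1, 0.2, 0.3, 0.4])
povm = [np.outer(np.eye(d)[y], np.eye(d)[y]) for y in range(d)]

joint = np.array([[(p[x] * np.trace(states[x] @ povm[y])).real
                   for y in range(d)] for x in range(nx)])

# dual pair: p'_y rho'_y = rhobar^{1/2} M_y rhobar^{1/2},
#            M'_x = p_x rhobar^{-1/2} rho_x rhobar^{-1/2}
rhobar = sum(p[x] * states[x] for x in range(nx))
w, v = np.linalg.eigh(rhobar)
rt = v @ np.diag(np.sqrt(w)) @ v.conj().T       # rhobar^{1/2}
rti = v @ np.diag(1 / np.sqrt(w)) @ v.conj().T  # rhobar^{-1/2}

dual_states = [rt @ povm[y] @ rt for y in range(d)]            # p'_y rho'_y
dual_povm = [p[x] * rti @ states[x] @ rti for x in range(nx)]  # M'_x

joint_dual = np.array([[np.trace(dual_states[y] @ dual_povm[x]).real
                        for y in range(d)] for x in range(nx)])

assert np.allclose(joint, joint_dual)          # same joint distribution p_xy
assert np.allclose(sum(dual_povm), np.eye(d))  # M' is a POVM
```

The check requires $\bar\rho_\pi$ to be invertible, which holds here since the random states are full rank.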
Applying this to the dual situation we obtain $$\label{dualh} A(\pi _{\rho }^{\prime })\leq S\left( \sum\limits_{y}p_{y}^{\prime }\rho _{y}^{\prime }\right) -\sum\limits_{y}p_{y}^{\prime }S\left( \rho _{y}^{\prime }\right) = \chi \left( \pi _{\rho }^{\prime }\right) .$$ But $\sum\limits_{y}p_{y}^{\prime }\rho _{y}^{\prime }=\rho ,$ and $S\left( \rho _{y}^{\prime }\right) =S\left( \frac{V_{y}\rho V_{y}^{\ast }}{ p_{y}^{\prime }}\right) ,$ where $V_{y}$ is an arbitrary operator satisfying $ M_{y}=V_{y}^{\ast }V_{y}$, because the operators $V_{y}\rho V_{y}^{\ast }=V_{y}\rho ^{1/2}\rho ^{1/2}V_{y}^{\ast }$ and $\rho ^{1/2}V_{y}^{\ast }V_{y}\rho ^{1/2}=\rho ^{1/2}M_{y}\rho ^{1/2}=p_{y}^{\prime }\rho _{y}^{\prime }$ are unitarily equivalent via the polar decomposition and hence have the same spectrum. The density operator $\frac{V_{y}\rho V_{y}^{\ast } }{p_{y}^{\prime }}=\rho \left( y|M\right) $ is the posterior state of the measurement of the observable $M$ with the instrument $\{V_{y}\}$ in the state $ \rho .$ Thus $$\chi \left( \pi _{\rho }^{\prime }\right) =S\left( \rho \right) -\sum\limits_{y}p_{y}^{\prime }S\left( \rho \left( y|M\right) \right) =I(\rho ;\mathcal{M}) \label{hbound2}$$ i.e. the $\chi$-quantity on the right side of (\[ho\]) is dual to the quantum mutual information for the measurement channel, and hence in addition to (\[A1\]) we have via (\[I\]) $$\label{A2} C_{ea}(\mathcal{M})=\max_{\rho }\chi \left( \pi _{\rho }^{\prime }\right) .$$ The inequality (\[dualh\]) appears in [@hall], Eq. (19), as “measurement-dependent dual” to (\[ho\]). The necessary and sufficient condition for the equality in the case of (\[dualh\]) becomes $$\label{equ} \rho^{1/2}M_y\rho M_{y'}\rho^{1/2}=\rho^{1/2}M_{y'}\rho M_y\rho^{1/2}$$ for all $y,y'$. Therefore a necessary and sufficient condition for the equality $C_{ea}(\mathcal{M})=C(\mathcal{M})$ is that the condition (\[equ\]) is fulfilled for a density operator $\rho$ maximizing the quantity (\[hbound2\]).
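The unitary-equivalence step used above — that $\rho^{1/2}M_{y}\rho^{1/2}$ and $V_{y}\rho V_{y}^{\ast}$ have the same spectrum, so that $S(\rho'_{y})=S(\rho(y|M))$ — can be checked directly; an illustrative sketch with the minimal instrument $V_{y}=M_{y}^{1/2}$:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4

def rand_psd(dim, scale=1.0):
    a = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    x = a @ a.conj().T
    return scale * x / np.trace(x).real

def sqrtm_psd(x):
    """Square root of a positive semidefinite matrix via eigendecomposition."""
    w, v = np.linalg.eigh(x)
    return v @ np.diag(np.sqrt(np.clip(w, 0, None))) @ v.conj().T

rho = rand_psd(d)               # a density operator
M = rand_psd(d, 0.8)            # a POVM element: 0 <= M <= I
for E in (M, np.eye(d) - M):    # two-outcome observable {M, I - M}
    lhs = sqrtm_psd(rho) @ E @ sqrtm_psd(rho)  # p'_y rho'_y
    V = sqrtm_psd(E)                           # V_y with V_y^* V_y = E
    rhs = V @ rho @ V.conj().T                 # V_y rho V_y^*
    assert np.allclose(np.linalg.eigvalsh(lhs), np.linalg.eigvalsh(rhs))
```

Since the von Neumann entropy depends only on the spectrum, the equal spectra verify the entropy identity numerically.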
Consider the case of an overcomplete system $M_{y}=|\psi _{y}\rangle \langle \psi _{y}|$ in an $m$-dimensional Hilbert space $\mathcal{H}$, where $\rho =I/m.$ The corresponding ensemble is $\pi _{\rho }^{\prime}\equiv\bar{\pi}=\left\{ \frac{\langle \psi _{y}|\psi _{y}\rangle }{m}, \frac{|\psi _{y}\rangle \langle \psi _{y}|}{\langle \psi _{y}|\psi _{y}\rangle }\right\} $ and $\chi \left( \bar{\pi}\right) =\log m=C_{ea}( \mathcal{M})$ (notice that this is also equal to the classical capacity of the c-q channel $y\longrightarrow \frac{|\psi _{y}\rangle \langle \psi _{y}| }{\langle \psi _{y}|\psi _{y}\rangle },$ since this is the maximal possible value). The condition (\[equ\]) amounts to $$|\psi _{y}\rangle \langle \psi _{y}|\psi _{y'}\rangle \langle \psi _{y'}| =|\psi _{y'}\rangle \langle \psi _{y'}|\psi _{y}\rangle \langle \psi _{y}|.$$ We can always assume that the vectors $|\psi _{y}\rangle$ are all pairwise linearly independent; then the last condition is equivalent to their forming an orthonormal basis [@hall]. Thus this is the only case where $C_{ea}(\mathcal{M})=C(\mathcal{M})$. In order to pass to continuous observables we use the fact that there are unitary operators $U_{y}$ such that $\frac{\rho ^{1/2}M_{y}\rho ^{1/2}}{ \mathrm{Tr}\rho M_{y}}=U_{y}\rho \left( y|M\right) U_{y}^{\ast }$ in the ensemble $\pi _{\rho }^{\prime }$.
Since the posterior states $\rho \left( y|M\right) $ are well defined for an arbitrary observable $M=\left\{ M(dy)\right\} $ and a priori state $\rho $ [@ozawa], this opens the way to the general definition of the ensemble $\pi _{\rho }^{\prime }=\left\{ \mathrm{Tr} \rho M(dy),U_{y}\rho \left( y|M\right) U_{y}^{\ast }\right\} .$ Applying this to example 2, we find that the accessible information for the continuous ensemble $\bar{\pi}=\left\{ \nu (d\theta ),|\theta \rangle \langle \theta |;\theta \in \Theta \right\} $ is equal to (\[euler\]) while $ \chi \left( \bar{\pi}\right) =\log m,$ where $\chi \left( \pi _{\rho }^{\prime }\right) $ is defined as in (\[I\_mod\]). The continuous ensemble is the “Scrooge ensemble” for the density operator $I/m$ for which the value (\[euler\]) of the accessible information was obtained by a different method in [@robb][^3]. In [@asp], Sec. IV.4, the Bayes estimation problem for this ensemble was solved; it was shown, in particular, that with $m\to\infty$ there is no better strategy than simple guessing. In the information-theoretic scenario this would suggest zero capacity of the c-q channel $\theta\rightarrow |\theta \rangle \langle \theta |$, which however is not the case. In the case of an infinite-dimensional space with a constraint, the formulas (\[A1\]), (\[A2\]) should be modified as $$\begin{aligned} C(\mathcal{M},N) &=&\sup_{\rho :\mathrm{Tr}\rho E\leq N}A(\pi _{\rho }^{\prime }\mathbf{),} \label{B1} \\ C_{ea}(\mathcal{M},N) &=&\sup_{\rho :\mathrm{Tr}\rho E\leq N}\chi \left( \pi _{\rho }^{\prime }\right) .
\label{B2}\end{aligned}$$ Consider the canonical observable (\[canobs\]) with the state $\rho _{N}.$ The corresponding ensemble is $$\bar{\pi}=\left\{ \langle z|\rho _{N}|z\rangle \frac{d^{2}z}{\pi },\frac{ \rho _{N}^{1/2}|z\rangle \langle z|\rho _{N}^{1/2}}{\langle z|\rho _{N}|z\rangle };\,z\in\mathbb{C}\right\} .$$ By a computation in the Fock basis, we have $\langle z|\rho _{N}|z\rangle =(N+1)^{-1}\exp \left( -\frac{|z|^{2}}{ N+1}\right) $ and $\rho _{N}^{1/2}|z\rangle =c\left\vert \sqrt{\frac{N}{N+1}} z\right\rangle ,$ so that the ensemble states are $\left\vert \sqrt{\frac{N}{ N+1}}z\right\rangle \left\langle \sqrt{\frac{N}{N+1}}z\right\vert .$ But with the change of variable $z^{\prime }=\sqrt{\frac{N}{N+1}}z$ this ensemble is equivalent to the ensemble $$\bar{\pi}^{\prime }=\left\{ \exp \left( -\frac{|z|^{2}}{N}\right) \frac{ d^{2}z}{\pi N},|z\rangle \langle z|\right\} .$$ From (\[B1\]), (\[B2\]) it follows that $$\begin{aligned} A(\bar{\pi}^{\prime }\mathbf{)} &\mathbf{=}&C(\mathcal{M},N)=\log (N+1), \\ \chi (\bar{\pi}^{\prime }\mathbf{)} &\mathbf{=}&C_{ea}(\mathcal{M},N)=\log (N+1)+\log \left( 1+\frac{1}{N}\right) ^{N}.\end{aligned}$$ The last expression is also equal to the constrained classical capacity of the c-q channel $z\longrightarrow |z\rangle \langle z|$. Appendix ======== 1\. In the paper [@bsst] it was shown that the depolarizing channel $$\Phi \left( \rho \right) =(1-p)\rho +p\frac{I}{m} \label{dep}$$ in $m$ dimensions is entanglement-breaking for $p\geq \frac{m}{m+1}$: “The simulation is performed by having Alice measure in a pre-agreed random basis, send Bob the result through a $m$-ary symmetric noisy classical channel, after which he re-prepares an output state in the same basis”.
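The representation behind this simulation — $m\int_{\Theta }|\theta \rangle \langle \theta |\rho |\theta \rangle \langle \theta |\nu (d\theta )=\frac{1}{m+1}\rho +\frac{m}{m+1}\frac{I}{m}$, proved analytically below — can also be checked by Monte Carlo over Haar-random pure states; an illustrative sketch for $m=2$:

```python
import numpy as np

rng = np.random.default_rng(7)
m, n_samples = 2, 200_000

# Haar-random pure states = normalized complex Gaussian vectors
v = rng.normal(size=(n_samples, m)) + 1j * rng.normal(size=(n_samples, m))
v /= np.linalg.norm(v, axis=1, keepdims=True)

rho = np.diag([0.7, 0.3]).astype(complex)  # any test density operator

amp = np.einsum('ni,ij,nj->n', v.conj(), rho, v).real  # <theta|rho|theta>
# Monte Carlo estimate of  m * E[ <theta|rho|theta> |theta><theta| ]
sigma = m * np.einsum('n,ni,nj->ij', amp, v, v.conj()) / n_samples

target = rho / (m + 1) + np.eye(m) / (m + 1)  # (1/(m+1)) rho + (m/(m+1)) I/m
assert np.allclose(sigma, target, atol=0.02)
```

The statistical error scales as $1/\sqrt{n_{\rm samples}}$, so the tolerance here is generous.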
We will supply an analytical proof by showing that $$\frac{1}{m+1}\rho +\frac{m}{m+1}\frac{I}{m}=m\int_{\Theta }|\theta \rangle \langle \theta |\rho |\theta \rangle \langle \theta |\nu (d\theta ) \label{resid}$$ for all density operators $\rho ,$ which means that the depolarizing channel with $p=\frac{m}{m+1}$ is entanglement-breaking [@ebr]. Then the depolarizing channel for $p>\frac{m}{m+1}$ can be represented as a mixture of this channel and the completely depolarizing channel $\rho \rightarrow \frac{I}{m},$ which are both entanglement-breaking. It is sufficient to establish (\[resid\]) for all $\rho =|\theta ^{\prime }\rangle \langle \theta ^{\prime }|,$ $\theta ^{\prime }\in \Theta$; the case of an arbitrary $\rho$ then follows by polarization. Consider the operator $$\sigma =m\int_{\Theta }|\theta \rangle \langle \theta |\theta ^{\prime }\rangle \langle \theta ^{\prime }|\theta \rangle \langle \theta |\nu (d\theta ).$$ It has trace 1 since $$\mathrm{Tr}\sigma =m\int_{\Theta }|\langle \theta |\theta ^{\prime }\rangle |^{2}\nu (d\theta )=-m\int\limits_{0}^{1}r^{2}d(1-r^{2})^{m-1}=1$$ by (\[formula\]). Next, $\sigma $ commutes with all the unitaries leaving $|\theta ^{\prime }\rangle $ invariant, hence it has the form $$\left( 1-p\right) |\theta ^{\prime }\rangle \langle \theta ^{\prime }|+p \frac{I}{m}.$$ To find $p$, take $\langle \theta ^{\prime }|\sigma |\theta ^{\prime }\rangle $; then we obtain $$m\int_{\Theta }|\langle \theta |\theta ^{\prime }\rangle |^{4}\nu (d\theta )=-m\int\limits_{0}^{1}r^{4}d(1-r^{2})^{m-1}=\left( 1-p\right) +\frac{p}{m}.$$ Computing the integral with the formula (\[formula\]) we get the value $ \frac{2}{m+1},$ whence $p=\frac{m}{m+1}.$ 2\.
Proof of the formula $$\int\limits_{0}^{1}m(1-u)\ln m(1-u)du^{m-1}=\ln m-\sum_{k=2}^{m}\frac{1}{k} .$$ Splitting the integral and integrating by parts, we obtain $$\begin{aligned} &&m\ln m\int\limits_{0}^{1}(1-u)du^{m-1}+ \\ &+&m\int\limits_{0}^{1}(1-u)\ln (1-u)du^{m-1} \\ &=&\ln m+m\int\limits_{0}^{1}u^{m-1}\left[ 1+\ln (1-u)\right] du \\ &=&\ln m+1+\int\limits_{0}^{1}\ln (1-u)d\left( u^{m}-1\right) \\ &=&\ln m+1-\int\limits_{0}^{1}\frac{\left( u^{m}-1\right) }{u-1}du \\ &=&\ln m+1-\int\limits_{0}^{1}\sum_{k=0}^{m-1}u^{k}du \\ &=&\ln m+1-\sum_{k=1}^{m}\frac{1}{k}=\ln m-\sum_{k=2}^{m}\frac{1}{k}.\end{aligned}$$ **Acknowledgement**. The author thanks P.W. Shor, M.E. Shirokov and M. J. W. Hall for enlightening comments. [99]{} M. Dall’Arno, G. M. D’Ariano, M.F. Sacchi, Informational power of quantum measurements, arXiv:1103.1972. A. Barchielli, G. Lupieri, Instruments and mutual entropies in quantum information, Banach Center Publications **73** (2006), 65-80; arXiv:quant-ph/0412116, 2004. C. H. Bennett, P. W. Shor, J. A. Smolin, A. V. Thapliyal, Entanglement-assisted classical capacity of noisy quantum channel, Phys. Rev. Lett. **83**, 3081, 1999; arXiv:quant-ph/9904023. C. H. Bennett, P. W. Shor, J. A. Smolin, A. V. Thapliyal, Entanglement-assisted capacity and the reverse Shannon theorem, IEEE Trans. Inform. Theory; arXiv:quant-ph/0106052. E. B. Davies, Information and quantum measurement, IEEE Trans. Inf. Theory **24**, 596 (1978). E.B. Davies, Quantum theory of open systems, Academic Press, London 1976. M. J. W. Hall, Quantum information and correlation bounds, Phys. Rev. A **55**, N1, 1997. A. S. Holevo, Some estimates for information quantity transmitted by quantum communication channel, Probl. Inform. Transm. **9**, N3, 1973, 3-11. A. S. Holevo, Statistical structure of quantum theory. Lect. Notes Phys. **m67**, Springer, Berlin 2001. A. S. Holevo, Classical capacities of constrained quantum channel. Probab. theory and appl. **48**, N2, 2003, 359-374; arXiv:quant-ph/0211170. A. S.
Holevo, Additivity conjecture and covariant channels, Proc. Conference “Foundations of Quantum Information”, Camerino, 16-19.04.2004. Int. J. Quant. Inform. **3**, N1, 2005, 41-48; arXiv:quant-ph/0212025. A. S. Holevo, On complementary channels and the additivity problem, Probab. theory and appl. **51**, N1, 2006, 133-143; arXiv:quant-ph/0509101. A. S. Holevo, Entanglement-breaking channels in infinite dimensions, Problems of Information Transmission **44**:3 (2008), 3-18; arXiv:0802.0235. A. S. Holevo, Probabilistic and statistical aspects of quantum theory, 2nd English edition, Edizioni di Normale, Pisa 2011. R. Jozsa, D. Robb, W. K. Wootters, Lower bound for accessible information in quantum mechanics, Phys. Rev. A **49** (1994), 668-677. E. Lieb, Proof of an entropy conjecture of Wehrl, Commun. Math. Phys. **62**, 35-41, 1978. O. Oreshkov, J. Calsamiglia, R. Muñoz-Tapia, E. Bagan, Optimal signal states for quantum detectors, arXiv:1103.2365. M. Ozawa, Quantum measuring processes of continuous observables, J. Math. Phys. **25**, 79-87 (1984). M.E. Shirokov, Entropy reduction of quantum measurements, arXiv:1011.3127. [^1]: For a rigorous unified description of quantum and classical systems using the language of operator algebras see e.g. [@struc], [@bl]. [^2]: This formula as well as a similar result for the homodyne measurement ($Q$ or $P$) were obtained in [@hall] by an “information exclusion” argument. [^3]: M. J. W. Hall, private communication.
--- abstract: 'Menon’s identity is a classical identity involving gcd sums and the Euler totient function $\phi$. In a recent paper, Zhao and Cao derived the Menon-type identity $\sum\limits_{\substack{k=1}}^{n}(k-1,n)\chi(k) = \phi(n)\tau(\frac{n}{d})$, where $\chi$ is a Dirichlet character mod $n$ with conductor $d$. We derive an identity similar to this, replacing the gcd with a generalization of it. We also show that some of the arguments used in the derivation of the Zhao-Cao identity can be improved if one uses the method we employ here.' address: - 'Department of Mathematics, University College, Thiruvananthapuram, Kerala - 695034, India' - 'Department of Mathematics, SD College, Alappuzha, Kerala - 688003, India' - | Department of Mathematics, Government College, Ambalapuzha, Kerala - 688561, INDIA\ Department of Collegiate Education, Government of Kerala, India author: - Arya Chandran - Neha Elizabeth Thomas - K Vishnu Namboothiri title: 'A Menon-type Identity concerning Dirichlet characters and a generalization of the gcd function' --- Introduction ============ The classical Menon identity, which originally appeared in [@menon1965sum], is a gcd sum that evaluates to a product of the Euler totient function $\phi$ and the number-of-divisors function $\tau$. If $(m,n)$ denotes the gcd of $m$ and $n$, the identity is precisely the following: $$\begin{aligned} \label{menons-identity} \sum\limits_{\substack{m=1\\(m,n)=1}}^n (m-1,n)=\phi(n)\tau(n).\end{aligned}$$ This identity was generalized by several authors in various directions. B. Sury [@sury2009some] derived the following Menon-type identity $$\sum\limits_{\substack{1\leq m_1, m_2,\ldots,m_s\leq n\\(m_1,n)=1}}(m_1-1, m_2,\ldots,m_s,n)=\phi(n)\sigma_{s-1}(n)$$ where $\sigma_s(n)=\sum\limits_{d|n}d^s$, using properties of group actions. When $s=1$, this becomes Menon’s identity.
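Both (\[menons-identity\]) and Sury's identity are easy to confirm by exhaustive computation for small moduli; a quick brute-force Python check (illustrative only, nothing clever):

```python
from math import gcd

def phi(n):
    return sum(1 for m in range(1, n + 1) if gcd(m, n) == 1)

def tau(n):
    return sum(1 for d in range(1, n + 1) if n % d == 0)

def sigma1(n):
    return sum(d for d in range(1, n + 1) if n % d == 0)

# Menon's identity: note gcd(0, n) = n covers the m = 1 term
for n in range(1, 60):
    lhs = sum(gcd(m - 1, n) for m in range(1, n + 1) if gcd(m, n) == 1)
    assert lhs == phi(n) * tau(n)

# Sury's identity with s = 2, so the right side is phi(n) * sigma_1(n)
for n in range(1, 25):
    lhs = sum(gcd(gcd(m1 - 1, m2), n)
              for m1 in range(1, n + 1) if gcd(m1, n) == 1
              for m2 in range(1, n + 1))
    assert lhs == phi(n) * sigma1(n)
```

For instance, $n=12$ gives $\sum (m-1,12)=12+4+6+2=24=\phi(12)\tau(12)$.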
Zhao and Cao [@zhao2017another] recently derived the Menon-type identity $$\begin{aligned} \label{zhao} \sum\limits_{\substack{k=1}}^{n}(k-1,n)\chi(k) = \phi(n)\tau(\frac{n}{d}) \end{aligned}$$ where $\chi$ is a Dirichlet character mod $n$ with conductor $d$. When $\chi$ is the principal character mod $n$, this identity reduces to Menon’s identity. A generalization of the Zhao-Cao identity involving even functions mod $n$ was derived by L. T[ó]{}th in [@toth2018menon]. For positive integers $a, b$ and $s$, E. Cohen [@cohen1956some] suggested a generalization of the gcd function denoted by $(a,b)_s$ (see the next section for the definition of this function). In [@chandran2019generalization], the authors of this paper proposed a generalization of Menon’s identity by replacing the gcd function with $(a,b)_s$. Various other generalizations of Menon’s identity were provided by many authors; see for example [@haukkanen2005menon], [@haukkanen1996generalization], [@ramaiah1978arithmetical], [@toth2011menon] and the more recent papers [@haukkanen2019menon] and [@toth2019short]. Klee’s function $\Phi_s$, a generalization of the Euler totient function, and the generalized divisor function $\tau_s$ defined in [@chandran2019generalization] are natural extensions of the Euler totient function and the divisor function respectively (see the next section for the definitions). A natural question is how the identity (\[zhao\]) changes if the gcd function in it is replaced by the generalized gcd function. We propose here a new version of identity (\[zhao\]) involving the generalized gcd function instead of the usual gcd function. Our techniques closely follow the style of arguments appearing in [@zhao2017another]. The main results we propose are the following. \[theo1\] Let $s,n \in {\mathbb{N}}$ and $\chi$ be a primitive Dirichlet character mod $n$, where $n$ is the $s^{th}$ power of some natural number.
We have $\sum\limits_{\substack{k=1\\(k,n)_s = 1}}^{n} (k-1,n)_s \chi(k) = \Phi_s(n)$. \[theo2\] Let $\chi$ be a Dirichlet character mod $n$, where $n = m^{qs}$, $m,q,s \in \mathbb{N}$. If $d =m^{ts}$, $1 \leq t \leq q$, is the conductor of $\chi$, then $\sum\limits_{\substack{k=1\\(k,n)_s=1}}^{n}(k-1,n)_s \chi(k) = \Phi_s(n)\tau_s(n/d)$. Notations and basic results =========================== Most of the notations, functions and identities we use in this paper are standard and can be found in [@tom1976introduction]. We state below some of the less common terms and functions that we use in this paper. For positive integers $a,b, s$ the generalized gcd function $(a,b)_s$ gives the largest $l^s$ (where $l \in {\mathbb{N}}$) dividing both $a$ and $b$. $(a, b)_1$ is thus the usual gcd function. Like the gcd function, $(a,b)_s = (b,a)_s$. Though the next result seems elementary, we could not locate a proof of it anywhere, so we state and prove it here. $(a,b)_{s}$ is multiplicative in the first variable. For coprime positive integers $a$ and $c$, write $(ac,b)_s = l^s$. If the prime factorization of $l$ is $ p_1^{r_1}p_2^{r_2}\ldots p_n^{r_n}$, then $l^s = p_1^{sr_1}p_2^{sr_2}\ldots p_n^{sr_n}$. Now $l^s|ac$ and $l^s|b$. Since $(a,c)=1$, each of these prime powers must (exclusively) appear in the prime factorization of $a$ or in that of $c$. We may assume that $p_1^{sr_1}, p_2^{sr_2},\ldots, p_t^{sr_t}$ are in the prime factorization of $a$ and the remaining primes $p_{t+1}^{sr_{t+1}},\ldots, p_n^{sr_n}$ are in $c$. Then $(a,b)_s =p_1^{sr_1}p_2^{sr_2}\ldots p_t^{sr_t}$ and $(c,b)_s = p_{t+1}^{sr_{t+1}}\ldots p_n^{sr_n}$, since any larger $s$-power common divisor of $a$ (or of $c$) and $b$ would, multiplied by the other factor, give an $s$-power common divisor of $ac$ and $b$ exceeding $l^s$. We can easily see that $(a,b)_{s}$ is not completely multiplicative as a single variable function. Also, it is not multiplicative in $s$. If $(a,b)_s = 1$, then $a, b$ are said to be relatively $s-$prime to each other. Klee’s function $\Phi_s(n)$ is defined as the cardinality of the set $\{m\in {\mathbb{N}}: 1\leq m \leq n,(m,n)_s=1\}$.
Thus $\Phi_s(n)$ denotes the number of positive integers $\leq n$ that are relatively $s-$prime to $n$. Various properties satisfied by $\Phi_s(n)$ are listed in [@chandran2019generalization Section 2]. If $M$ is a complete residue system mod $n$, then the subset of elements from $M$ that are relatively $s$-prime to $n$ is called an $s$-reduced residue system. Further, if $M$ is a subset of $\{a: 0\leq a < n\}$ then the $s$-reduced system is called a minimal $s$-reduced residue system (mod $n$). For natural numbers $n,s$, by $\tau_s(n)$ we mean the number of $l^s$ dividing $n$, where $l\in {\mathbb{N}}$. It was observed in [@chandran2019generalization] that $\Phi_s(n)$ and $\tau_s(n)$ are multiplicative in $n$. The following lemma is essential to prove one of the main results that we propose in this paper. \[l2\][@cohen1956some Lemma 3] Let $A = \{m \mid 1 \leq m \leq n \text{ and } (m,n)_s = 1\}$ and $d>0$ be any $s$^th^ power divisor of $n$. Then $A$ is the union of $\frac{\Phi_s(n)}{\Phi_s(d)}$ disjoint sets, each of which is an $s$-reduced residue system mod $d$. Main Results and proofs ======================= We are going to provide proofs of the claims made in the first section. To prove theorem \[theo1\], we need the following lemma. \[l1\] Let $s,n \in \mathbb{N}$ and $\chi$ be a primitive Dirichlet character mod $p^n$, where $p$ is prime and $n$ is a multiple of $s$. If $m$ is a multiple of $s$ such that $s\leq m<n$, then $$\sum\limits_{\substack{k = 1\\(k,p^{n-m})_s=1}}^{p^{n-m}} \chi(kp^m+1) =\begin{cases} -1, \quad m = n-s\\ 0, \quad\text{ otherwise.} \end{cases}$$\ By the conditions imposed on $s, m$ and $n$, $n\neq s$. First we prove the case $m = n-s$. Since $p^{n-s}$ is not an induced modulus for $\chi$, there exists an integer $b$, $1\leq b <p^s$, with $(bp^{n-s}+1,p^n) = 1$ and $bp^{n-s}+1\equiv 1 (\text{mod } p^{n-s})$, but $\chi(bp^{n-s}+1) \neq 1$.
Now $$\begin{aligned} \chi(bp^{n-s}+1) \sum\limits_{\substack{{k = 0}}}^{p^{s}-1} \chi(kp^{n-s}+1) & = \sum\limits_{\substack{{k = 0}}}^{p^{s}-1} \chi(kbp^{2n-2s}+bp^{n-s}+kp^{n-s}+1)\\ & = \sum\limits_{\substack{{k = 0}}}^{p^{s}-1} \chi((k+b)p^{n-s}+1) \\ & = \sum\limits_{\substack{{k = 0}}}^{p^{s}-1} \chi(kp^{n-s}+1). \end{aligned}$$ Hence $\sum\limits_{\substack{{k = 0}}}^{p^{s}-1} \chi(kp^{n-s}+1) = 0$ and so $ \sum\limits_{\substack{{k = 1}}}^{p^{s}} \chi(kp^{n-s}+1) = 0$. It follows that $$\begin{aligned} \sum\limits_{\substack{k =1\\(k,p^{s})_s=1}}^{p^{s}} \chi(kp^{n-s}+1) & = \sum\limits_{\substack{k =1}}^{p^{s}} \chi(kp^{n-s}+1)- \sum\limits_{\substack{k =1\\(k,p^{s})_s\neq 1}}^{p^{s}} \chi(kp^{n-s}+1)\\ & = - \sum\limits_{\substack{k =1\\(k,p^{s})_s\neq 1}}^{p^{s}} \chi(kp^{n-s}+1)\\ &= -\chi(p^{n}+1)\\ &=-\chi(1) \\ &= -1 \end{aligned}$$ where the third equality holds because $k=p^s$ is the only index in the range with $(k,p^{s})_s\neq 1$, and then $kp^{n-s}=p^{n}$. Next we consider the case $m\neq n-s$. We start as in the previous case. $$\begin{aligned} \chi(bp^{n-s}+1) \sum\limits_{\substack{{k = 1}\\{(k,p^{n-m})_s=1}}}^{p^{n-m}} \chi(kp^{m}+1) & = \sum\limits_{\substack{{k = 1}\\{(k,p^{n-m})_s=1}}}^{p^{n-m}} \chi(bkp^{m}p^{n-s}+kp^{m}+bp^{n-s}+1)\\ & = \sum\limits_{\substack{{k = 1}\\{(k,p^{n-m})_s=1}}}^{p^{n-m}} \chi(kp^m+bp^{n-s}+1) \end{aligned}$$ We claim that $\{kp^m+bp^{n-s}+1: 1\leq k\leq p^{n-m}, (k,p^{n-m})_s=1\}$ is the same as the residue system $kp^m+1$ mod $p^n$. Suppose $1 \leq k_1 \leq p^{n-m}$ and $(k_1,p^{n-m})_s=1$. If $c\equiv k_1p^m+bp^{n-s}+1\text{ (mod $p^n$)}$ for some integer $c$, then let $k_2\equiv k_1+bp^{n-s-m}\text{ (mod $p^{n-m}$)}$. Note that if $(k_2,p^{n-m})_s=p^{rs}$ for some $r$ with $1 \leq r \leq \frac{n-m}{s}$, then we have $p^s\mid k_2$, which implies $p^s\mid k_1+bp^{n-s-m}$. But in this case $s\leq m \leq n-2s$ and $p^s\mid p^{n-s-m}$, implying that $p^s \mid k_1$, which is not possible. Therefore $(k_2,p^{n-m})_s=1$ and also $1\leq k_2 \leq p^{n-m}$. Now we have $k_2p^m+1 \equiv c\text{ (mod $p^n$)}$.
If $k_1p^m+bp^{n-s}+1\equiv{k_1}' p^m+bp^{n-s}+1 \text{ (mod $p^n$)}$ then $k_1\equiv {k_1}'\text{ (mod $p^{n-m}$)}$. Similarly if $k_2p^m+1 \equiv k_2' p^m+1\text{ (mod $p^n$)}$ then $k_2 \equiv k_2' \text{ (mod $p^{n-m}$)}$. Therefore both residue systems have $\Phi_s(p^{n-m})$ distinct elements, and it follows that $\chi(bp^{n-s}+1)\sum\limits_{\substack{k=1\\(k,p^{n-m})_s=1}}^{p^{n-m}}\chi(kp^m+1) = \sum\limits_{\substack{k=1\\(k,p^{n-m})_s=1}}^{p^{n-m}}\chi(kp^m+1)$, which implies $\sum\limits_{\substack{k=1\\(k,p^{n-m})_s=1}}^{p^{n-m}}\chi(kp^m+1) = 0$. This completes the proof. Let $f(n)=\sum\limits_{\substack{k=1\\(k,n)_s = 1}}^{n} (k-1,n)_s \chi_n(k)$, where $\chi_n$ is some Dirichlet character mod $n$. For $r,t \in {\mathbb{N}}$, we have $f(rt) = \sum\limits_{\substack{k=1\\(k,rt)_s = 1}}^{rt} (k-1,rt)_s \chi_{rt}(k)$. Now we use the fact that if $(r,t)=1$ the two sets $\{k \mid 1\leq k\leq rt, (k,rt)_s=1\}$ and $\{tk_1+rk_2 \mid 1\leq k_1\leq r, (k_1,r)_s=1, 1\leq k_2\leq t,(k_2,t)_s=1\}$ are the same. Note that $\chi$ mod $k$ can be factored uniquely as a product of the form $\chi_k = \chi_{k_1}\chi_{k_2}\cdots \chi_{k_r}$, where $k = k_1k_2\cdots k_r$ with $(k_i,k_j)=1$ if $i \neq j$. In particular, if $\chi_k$ is primitive then each $\chi_{k_i}$ is primitive mod $k_i$. Since the generalized gcd function is multiplicative in the second variable, we get $$\begin{aligned} f(rt) & = \sum\limits_{\substack{k_1=1\\(k_1,r)_s = 1}}^{r}\sum\limits_{\substack{k_2=1\\(k_2,t)_s = 1}}^{t} (tk_1+rk_2-1,r)_s (tk_1+rk_2-1,t)_s\\& \times \chi_r(tk_1+rk_2)\chi_t(tk_1+rk_2)\\ & = \sum\limits_{\substack{k_1=1\\(k_1,r)_s = 1}}^{r}\sum\limits_{\substack{k_2=1\\(k_2,t)_s = 1}}^{t} (tk_1+rk_2-1,r)_s (tk_1+rk_2-1,t)_s \chi_r(tk_1)\chi_t(rk_2), \end{aligned}$$ We observe that $(tk_1+rk_2-1,r)_s = (tk_1-1,r)_s$ and $(tk_1+rk_2-1,t)_s = (rk_2-1,t)_s$.
Then $$\begin{aligned} f(rt) & = \sum\limits_{\substack{k_1=1\\(k_1,r)_s = 1}}^{r}\sum\limits_{\substack{k_2=1\\(k_2,t)_s = 1}}^{t} (tk_1-1,r)_s (rk_2-1,t)_s \chi_r(tk_1)\chi_t(rk_2)\\ & = \sum\limits_{\substack{k_1=1\\(k_1,r)_s = 1}}^{r}(tk_1-1,r)_s\chi_r(tk_1)\sum\limits_{\substack{k_2=1\\(k_2,t)_s = 1}}^{t} (rk_2-1,t)_s \chi_t(rk_2). \end{aligned}$$ Since $(r,t) = 1$, $$\begin{aligned} f(rt) & = \sum\limits_{\substack{k_1=1\\(k_1,r)_s = 1}}^{r}(k_1-1,r)_s\chi_r(k_1)\sum\limits_{\substack{k_2=1\\(k_2,t)_s = 1}}^{t} (k_2-1,t)_s \chi_t(k_2)\\ & = f(r)f(t). \end{aligned}$$ Thus $f$ is multiplicative and so we need to verify our claim only for prime power $p^a$, where $a = qs$, $q \in {\mathbb{N}}$. We have $$\begin{aligned} f(p^a) &= \sum\limits_{\substack{k=1\\(k,p^a)_s = 1}}^{p^a}(k-1,p^a)_s\chi_{p^a}(k)\\ & =\sum\limits_{\substack{k=1}}^{p^a}(k-1,p^a)_s\chi_{p^a}(k)- \sum\limits_{\substack{k=1\\(k,p^a)_s\neq 1}}^{p^a}(k-1,p^a)_s\chi_{p^a}(k)\\ &= \sum\limits_{\substack{k=1}}^{p^a}(k-1,p^a)_s\chi_{p^a}(k)\\ &= \sum\limits_{\substack{k=1\\(k-1,p^a)_s \neq 1}}^{p^a}(k-1,p^a)_s\chi_{p^a}(k)+ \sum\limits_{\substack{k=1\\(k-1,p^a)_s = 1}}^{p^a}\chi_{p^a}(k)\\ & = \sum\limits_{\substack{k=1\\(k-1,p^a)_s \neq 1}}^{p^a}(k-1,p^a)_s\chi_{p^a}(k)+\sum\limits_{\substack{k=1\\}}^{p^a}\chi_{p^a}(k)- \sum\limits_{\substack{k=1\\(k-1,p^a)_s \neq 1}}^{p^a}\chi_{p^a}(k)\\ & = \sum\limits_{\substack{k=1\\(k-1,p^a)_s \neq 1}}^{p^a}(k-1,p^a)_s\chi_{p^a}(k)- \sum\limits_{\substack{k=1\\(k-1,p^a)_s \neq 1}}^{p^a}\chi_{p^a}(k)\\ & = \sum\limits_{\substack{k=1\\(k-1,p^a)_s \neq 1}}^{p^a}((k-1,p^a)_s-1)\chi_{p^a}(k)\\ & = \sum\limits_{\substack{t=1}}^{q} \sum\limits_{\substack{k=1\\(k-1,p^a)_s = p^{ts} }}^{p^a}(p^{ts}-1)\chi_{p^a}(k)\\ & = \sum\limits_{\substack{k=1\\(k-1,p^a)_s = p^{a} }}^{p^a}(p^{a}-1)\chi_{p^a}(k)+\sum\limits_{\substack{t=1}}^{q-1} \sum\limits_{\substack{k=1\\(k-1,p^a)_s = p^{ts} }}^{p^a}(p^{ts}-1)\chi_{p^a}(k)\\ & = 
(p^a-1)+\sum\limits_{\substack{t=1}}^{q-1}(p^{ts}-1)\sum\limits_{\substack{k=1\\(k-1,p^a)_s = p^{ts} }}^{p^a}\chi_{p^a}(k). \end{aligned}$$ Let us compute the sum $\sum\limits_{\substack{k=1\\(k-1,p^a)_s = p^{ts} }}^{p^a}\chi_{p^a}(k)$. We have $\sum\limits_{\substack{k=1\\(k-1,p^a)_s = p^{ts} }}^{p^a}\chi_{p^a}(k) =\sum\limits_{\substack{k=1\\(k,p^a)_s = p^{ts} }}^{p^a}\chi_{p^a}(k+1)$. For a fixed prime power $p^{ts}$ we must sum over all those $k$ in the range $1\leq k \leq p^a$ such that $(k,p^a)_s=p^{ts}$. If we write $k = jp^{ts}$ then $1\leq k \leq p^a$ and $(k,p^a)_s=p^{ts}$ if and only if $1 \leq j \leq p^{a-ts}$ and $(j,p^{a-ts})_s=1$. Then the last sum can be written as $\sum\limits_{\substack{k=1\\(k,p^a)_s = p^{ts} }}^{p^a}\chi_{p^a}(k+1) = \sum\limits_{\substack{j=1\\(j,p^{a-ts})_s = 1 }}^{p^{a-ts}}\chi_{p^{a}}(jp^{ts}+1)$ and $f(p^a)= (p^a-1)+\sum\limits_{\substack{t=1}}^{q-1}(p^{ts}-1) \sum\limits_{\substack{j=1\\(j,p^{a-ts})_s = 1 }}^{p^{a-ts}}\chi_{p^{a}}(jp^{ts}+1)$. By lemma \[l1\], we obtain $\sum\limits_{\substack{j=1\\(j,p^{a-ts})_s = 1 }}^{p^{a-ts}}\chi_{p^{a}}(jp^{ts}+1)=\begin{cases} -1 \text{ if } t = q-1\\ 0 \text{ otherwise} \end{cases}$. Then $$\begin{aligned} f(p^a) & = p^a-1+(p^{(q-1)s}-1)(-1)\\ & =p^a-p^{qs-s}\\ & =p^a-p^{a-s}\\ & = \Phi_s(p^a),\text{ which concludes the proof.} \end{aligned}$$ The above theorem reduces to theorem 1.1 in [@zhao2017another] when $s=1$. We would like to further remark that theorem 1.1 in [@zhao2017another] was proved using lemmas 2.1 and 2.2 in [@zhao2017another]. If we employ the technique we used above, only [@zhao2017another Lemma 2.1] is required to prove [@zhao2017another Theorem 1.1]. To prove theorem \[theo2\], we require the following two lemmas. The first lemma generalizes [@zhao2017another Lemma 2.4]. \[l3\] Let $s,n \in {\mathbb{N}}$ and $\chi $ be a Dirichlet character mod $p^n$, where $n = qs$ for some $q\in {\mathbb{N}}$.
Let $p^l$ be the conductor of $\chi$, where $l = rs$ for some $r \in {\mathbb{N}}$ with $1 \leq r \leq q$. If $m$ is a multiple of $s$ such that $s\leq m < n$, we have $\sum\limits_{\substack{k=1\\(k,p^{n-m})_s = 1}}^{p^{n-m}} \chi(kp^m+1) = \begin{cases}\Phi_s(p^{n-m}), \text{ if } l \leq m < n\\ -p^{n-l}, \text{ if } m = l-s\\ 0, \text{ if } s\leq m < l-s. \end{cases}$ First we consider the case $ l \leq m < n$. We have $$\begin{aligned} \sum\limits_{\substack{k=1\\(k,p^{n-m})_s = 1}}^{p^{n-m}} \chi(kp^m+1) & = \sum\limits_{\substack{k=1\\(k,p^{n-m})_s = 1}}^{p^{n-m}} \chi(1)\\ & = \sum\limits_{\substack{k=1\\(k,p^{n-m})_s = 1}}^{p^{n-m}} 1\\ & = \Phi_s(p^{n-m}). \end{aligned}$$ Next we move on to the case $s\leq m \leq l-s$. Note that every Dirichlet character $\chi$ mod $k$ can be expressed as the product $\chi(a) = \psi(a)\chi_1(a)$ for all $a$, where $\psi$ is a primitive character modulo the conductor of $\chi$ and $\chi_1$ is the principal character mod $k$. Then $\sum\limits_{\substack{k=1\\(k,p^{n-m})_s = 1}}^{p^{n-m}} \chi(kp^m+1) = \sum\limits_{\substack{k=1\\(k,p^{n-m})_s = 1}}^{p^{n-m}} \psi(kp^m+1) \chi_1(kp^m+1)$, where $\psi$ is the primitive character modulo the conductor of $\chi$ and $\chi_1$ is the principal character mod $p^n$. Since $s\leq m \leq l-s$, $(kp^m+1,p^n)=1$, and by lemma \[l2\] and lemma \[l1\] we get $$\begin{aligned} \sum\limits_{\substack{k=1\\(k,p^{n-m})_s = 1}}^{p^{n-m}} \chi(kp^m+1) & = \sum\limits_{\substack{k=1\\(k,p^{n-m})_s = 1}}^{p^{n-m}} \psi(kp^m+1)\\ & = \frac{\Phi_s(p^{n-m})}{\Phi_s(p^{l-m})} \sum\limits_{\substack{k=1\\(k,p^{l-m})_s = 1}}^{p^{l-m}}\psi(kp^m+1)\\ & = p^{n-l}\sum\limits_{\substack{k=1\\(k,p^{l-m})_s = 1}}^{p^{l-m}}\psi(kp^m+1)\\ & =\begin{cases} -p^{n-l} \text{ if } m = l-s\\ 0 \text{ if } s\leq m < l-s, \end{cases}\\ \text{which completes the proof}. \end{aligned}$$ Next we prove a lemma, which is key to the proof of theorem \[theo2\].
\[l4\] Let $s,a \in {\mathbb{N}}$ and $\chi$ be a Dirichlet character mod $p^a$, where $a = qs$ for some $q \in {\mathbb{N}}$. If $p^{rs}$ is the conductor of $\chi$, where $r\in {\mathbb{N}}$ and $1 \leq r \leq q$, we have $\sum\limits_{\substack{k=1\\(k,p^{a})_s = 1}}^{p^{a}} (k-1,p^a)_s \chi(k) = (q-r+1) \Phi_s(p^a).$ We prove the lemma case by case.\ Case $1$. $r=1$\ In this case $p^s$ is the conductor of $\chi$. From the proof of theorem \[theo1\], we have $f(p^a)= (p^a-1)+\sum\limits_{\substack{t=1}}^{q-1}(p^{ts}-1) \sum\limits_{\substack{j=1\\(j,p^{a-ts})_s = 1 }}^{p^{a-ts}}\chi_{p^{a}}(jp^{ts}+1)$. Using lemma \[l3\], $$\begin{aligned} \sum\limits_{\substack{k=1\\(k,p^{a})_s = 1}}^{p^{a}} (k-1,p^a)_s \chi(k) & = p^a-1+\sum\limits_{\substack{t=1}}^{q-1}(p^{ts}-1)\sum\limits_{\substack{j=1\\(j,p^{a-ts})_s = 1}}^{p^{a-ts}} \chi_{p^a}(jp^{ts}+1)\\ & = p^a-1+\sum\limits_{\substack{t=1}}^{q-1}(p^{ts}-1)\Phi_s(p^{a-ts})\\ & = p^a-1+\sum\limits_{\substack{t=1}}^{q-1}(p^{ts}-1)p^{a-ts}(1-\frac{1}{p^{s}})\\ & = p^a-1+\sum\limits_{\substack{t=1}}^{q-1}(p^a-p^{a-ts})(1-p^{-s})\\ & = p^a-1+\sum\limits_{\substack{t=1}}^{q-1}(p^a-p^{a-s}-p^{a-ts}+p^{a-(t+1)s})\\ & =p^a-1+\sum\limits_{\substack{t=1}}^{q-1}(p^a-p^{a-s})+\sum\limits_{\substack{t=1}}^{q-1}(p^{a-(t+1)s}-p^{a-ts})\\ & =p^a-1+(p^a-p^{a-s})(q-1)+(p^{a-qs}-p^{a-s})\\ & =p^a-1+(p^a-p^{a-s})(q-1)+1-p^{a-s}\\ & = q(p^a-p^{a-s})\\ & =q \Phi_s(p^a). \end{aligned}$$ Case $2$. $r=q$\ In this case $\chi$ is the primitive character mod $p^a$. The claim follows from theorem \[theo1\].\ Case $3$. $2 \leq r \leq q-1$\ As in the first case we have $$f(p^a)= (p^a-1)+\sum\limits_{\substack{t=1}}^{q-1}(p^{ts}-1) \sum\limits_{\substack{j=1\\(j,p^{a-ts})_s = 1 }}^{p^{a-ts}}\chi_{p^{a}}(jp^{ts}+1).$$ By lemma \[l3\], we get $\sum\limits_{\substack{j=1\\(j,p^{a-ts})_s = 1}}^{p^{a-ts}} \chi(jp^{ts}+1) = \begin{cases}\Phi_s(p^{a-ts}), \text{ if }r \leq t < q\\ -p^{a-rs}, \text{ if } t = r-1\\ 0, \text{ if } 1\leq t< r-1.
\end{cases}$ Now $$\begin{aligned} f(p^a) & = p^a-1+\sum\limits_{\substack{t=1}}^{q-1}(p^{ts}-1)\sum\limits_{\substack{j=1\\(j,p^{a-ts})_s = 1}}^{p^{a-ts}} \chi_{p^a}(jp^{ts}+1)\\ & = p^a-1+\sum\limits_{\substack{t=1}}^{r-2}(p^{ts}-1)\sum\limits_{\substack{j=1\\(j,p^{a-ts})_s = 1}}^{p^{a-ts}} \chi(jp^{ts}+1)+ (p^{(r-1)s}-1)(-p^{a-rs})\\ & \text{ }+\sum\limits_{\substack{t=r}}^{q-1}(p^{ts}-1)\sum\limits_{\substack{j=1\\(j,p^{a-ts})_s = 1}}^{p^{a-ts}} \chi(jp^{ts}+1) \\ & = p^a-1-(p^{rs-s}-1)p^{a-rs}+\sum\limits_{\substack{t=r}}^{q-1}(p^{ts}-1)\Phi_s(p^{a-ts})\\ & = p^a-1-(p^{rs-s}-1)p^{a-rs}+\sum\limits_{\substack{t=r}}^{q-1}(p^{ts}-1)p^{a-ts}(1-\frac{1}{p^s})\\ & =p^a-1-p^{a-s}+p^{a-rs}+\sum\limits_{\substack{t=r}}^{q-1}(p^{ts}-1)p^{a-ts}(1-p^{-s}) \\ & =p^a-1-p^{a-s}+p^{a-rs}+\sum\limits_{\substack{t=r}}^{q-1}(p^{a}-p^{a-s}-p^{a-ts}+p^{a-(t+1)s})\\ &=p^a-1-p^{a-s}+p^{a-rs}+\sum\limits_{\substack{t=r}}^{q-1}(p^{a}-p^{a-s})+\sum\limits_{\substack{t=r}}^{q-1}(p^{a-(t+1)s}-p^{a-ts})\\ & = p^a-1-p^{a-s}+p^{a-rs}+(p^{a}-p^{a-s})(q-r)+p^{a-qs}-p^{a-rs}\\ & =p^a-1-p^{a-s}+(p^a-p^{a-s})(q-r)+1 \\ & =(q-r+1)(p^a-p^{a-s})\\ & = (q-r+1)\Phi_s(p^a). \end{aligned}$$ Lemma \[l4\] is very similar to [@zhao2017another Lemma 3.1]. The identity in [@zhao2017another Lemma 3.1] reduces to Menon’s identity when $\chi$ is a principal character. But under the assumptions of the lemma above, $\chi$ cannot be a principal character, so the lemma cannot strictly be regarded as a generalization of [@zhao2017another Lemma 3.1]. Finally we will prove theorem \[theo2\], which is similar to theorem 1.2 in [@zhao2017another], although our conditions are more restrictive than those appearing in [@zhao2017another Theorem 1.2]. To prove it, we use the fact that if $n = p_1^{a_1}p_2^{a_2}\cdots p_r^{a_r}$ then $\chi_n = \chi_{p_1^{a_1}}\chi_{p_2^{a_2}}\cdots\chi_{p_r^{a_r}}$, where $\chi_n$ is a Dirichlet character mod $n$.
Also, if $g(\chi)$ denotes the conductor of $\chi$, then $g(\chi_n) = g(\chi_{p_1^{a_1}})g(\chi_{p_2^{a_2}})\cdots g(\chi_{p_r^{a_r}})$. Let $n =p_1^{a_1s}p_2^{a_2s}\cdots p_r^{a_rs}$ and $d = p_1^{b_1s}p_2^{b_2s}\cdots p_r^{b_rs}$, where $1\leq b_i\leq a_i$. Now $f(n) = \sum\limits_{\substack{k=1\\(k,n)_s=1}}^{n}(k-1,n)_s \chi_n(k)$ is multiplicative. Therefore $$\begin{aligned} \sum\limits_{\substack{k=1\\(k,n)_s=1}}^{n}(k-1,n)_s \chi(k)& = f(n)\\& = f(p_1^{a_1s})f(p_2^{a_2s})\cdots f(p_r^{a_rs})\\& = \prod\limits_{\substack{i=1}}^{r}f(p_i^{a_is})\\& = \prod\limits_{\substack{i=1}}^{r}\sum\limits_{\substack{k=1\\(k,p_i^{a_is})_s=1}}^{p_i^{a_is}}(k-1,p_i^{a_is})_s \chi_{p_i^{a_is}}(k). \end{aligned}$$ Note that $p_1^{b_1s}p_2^{b_2s}\cdots p_r^{b_rs} = g(\chi_{p_1^{a_1s}})g(\chi_{p_2^{a_2s}})\cdots g(\chi_{p_r^{a_rs}})$. It is clear that $g(\chi_{p_i^{a_is}})=p_i^{b_is}$. Hence by lemma \[l4\], $$\begin{aligned} \sum\limits_{\substack{k=1\\(k,n)_s=1}}^{n}(k-1,n)_s \chi(k)& = \prod\limits_{\substack{i=1}}^{r}(a_i-b_i+1)\Phi_s(p_i^{a_is})\\& =\prod\limits_{\substack{i=1}}^{r}\tau_s(p_i^{(a_i-b_i)s})\Phi_s(p_i^{a_is})\\& = \Phi_s(n)\tau_s(\frac{n}{d}),\\ \text{which concludes the proof.} \end{aligned}$$ A strict generalization of theorem 1.2 in [@zhao2017another] would have been $\sum\limits_{\substack{k=1\\(k,n)_s=1}}^{n}(k-1,n)_s \chi(k) = \Phi_s(n)\tau_s(n/d)$, where $\chi$ is a Dirichlet character mod $n$ with conductor $d$. But this identity is not true. For example, if we take $q = 1$, $s = 2$, $r=0$ and $p = 2$ (so that $n=4$ and $\chi$ is the principal character mod $4$, with conductor $d=1$), the LHS of the identity gives $(0,4)_2+(2,4)_2=5$, while the RHS evaluates to $\Phi_2(4)\tau_2(4)=3\cdot 2=6$. In [@toth2018menon], Tóth derived an identity similar to Menon’s identity involving even functions mod $n$, the Möbius function and the Euler totient function. Note that an arithmetical function $f$ is $n$-even if $f(k)=f((k,n))$ for all $k$.
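Before moving on, the counterexample above, together with a case that does satisfy the hypotheses of the theorem, can be checked numerically. The sketch below is not part of the original argument; it assumes $(a,b)_s$ denotes the largest $s$-th power dividing both $a$ and $b$ (with every $s$-th power dividing $0$), $\Phi_s(n)=\#\{1\le k\le n:(k,n)_s=1\}$, and $\tau_s(n)$ the number of divisors of $n$ that are $s$-th powers, as these symbols are used in the text.

```python
def gcd_s(a, b, s):
    # (a, b)_s: the largest s-th power dividing both a and b
    # (every s-th power divides 0, so e.g. (0, 4)_2 = 4)
    d, e = 1, 2
    while e ** s <= b:
        if a % (e ** s) == 0 and b % (e ** s) == 0:
            d = e ** s
        e += 1
    return d

def phi_s(n, s):
    # generalized Euler totient: counts 1 <= k <= n with (k, n)_s = 1
    return sum(1 for k in range(1, n + 1) if gcd_s(k, n, s) == 1)

def tau_s(n, s):
    # counts divisors of n that are s-th powers
    return sum(1 for e in range(1, n + 1) if e ** s <= n and n % (e ** s) == 0)

def menon_sum(n, s, chi):
    # left-hand side of the Menon-type identity
    return sum(gcd_s(k - 1, n, s) * chi(k)
               for k in range(1, n + 1) if gcd_s(k, n, s) == 1)

# Counterexample: n = 4, s = 2, chi the principal character mod 4 (conductor d = 1)
chi0 = lambda k: 1 if k % 2 else 0
print(menon_sum(4, 2, chi0), phi_s(4, 2) * tau_s(4, 2))       # 5 6: the naive extension fails

# A case with conductor exponent >= 1: n = 9, s = 1, real character mod 9 with conductor d = 3
chi3 = lambda k: [0, 1, -1][k % 3]
print(menon_sum(9, 1, chi3), phi_s(9, 1) * tau_s(9 // 3, 1))  # 12 12
```

The first pair of numbers reproduces the $5\neq 6$ discrepancy discussed above; the second confirms $\Phi_s(n)\tau_s(n/d)$ in a case where every $b_i$ satisfies $1\le b_i\le a_i$.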
A concept similar to $n$-even functions is that of $(n,s)$-even functions, defined by McCarthy (see [@mccarthy1960generation] for details). An arithmetical function $f$ is $(n,s)$-even if $f(k)=f((k, n^s)_s)$. Many of the properties of such functions were studied in [@namboothiri2019discrete]. We feel that Tóth’s results can be generalized to $(n,s)$-even functions, and similar identities can be derived if one uses the results appearing in [@namboothiri2019discrete]. Acknowledgements ================ The first author thanks the University Grants Commission of India for providing financial support for carrying out research work through their Junior Research Fellowship (JRF) scheme. The second author thanks the Kerala State Council for Science, Technology and Environment, Thiruvananthapuram, Kerala, India for providing financial support for carrying out research work. [10]{} Tom Apostol. . Springer, 1976. Arya Chandran, Neha Elizabeth Thomas, and K Vishnu Namboothiri. A generalization of the Euler totient function and two Menon-type identities. , 2019. Eckford Cohen. Some totient functions. , 23(4):515–522, 1956. Pentti Haukkanen. Menon’s identity with respect to a generalized divisibility relation. , 70(3):240–246, 2005. Pentti Haukkanen and László Tóth. Menon-type identities again: Note on a paper by Li, Kim and Qiao. , 2019. Pentti Haukkanen and Jun Wang. A generalization of Menon’s identity with respect to a set of polynomials. , 53(3):331–338, 1996. VL Klee. A generalization of Euler’s $\varphi$-function. , 55(6):358–359, 1948. Paul J McCarthy. The generation of arithmetical identities. , 203:55–63, 1960. P Kesava Menon. On the sum $\sum (a-1, n),[(a, n)= 1]$. , 29(3):155–163, 1965. K Vishnu Namboothiri. The discrete Fourier transform of (r, s)-even functions. , 50(1):253–268, 2019. V Sita Ramaiah. Arithmetical sums in regular convolutions. , 303(304):265–283, 1978. Balasubramanian Sury. Some number-theoretic identities from group actions. , 58(1):99–108, 2009.
László Tóth. Menon’s identity and arithmetical sums representing functions of several variables. , 2011. László Tóth. Menon-type identities concerning Dirichlet characters. , 14(04):1047–1054, 2018. László Tóth. Short proof and generalization of a Menon-type identity by Li, Hu and Kim. , 23(3):557–561, 2019. Xiao-Peng Zhao and Zhen-Fu Cao. Another generalization of Menon’s identity. , 13(9):2373–2379, 2017.
--- abstract: | This paper is an extension of [@K-L]. We study the blowup behavior of a sequence of surfaces $\Sigma_k$ immersed in $\R^n$ with bounded Willmore functional and fixed genus $g$. We will prove that we can decompose $\Sigma_k$ into finitely many parts: $$\Sigma_k=\bigcup_{i=1}^m\Sigma_k^i,$$ and find $p_k^i\in \Sigma_k^i$, $\lambda_k^i \in\R$, such that $\frac{\Sigma_k^i-p_k^i} {\lambda_k^i}$ converges locally in the sense of varifolds to a complete branched immersed surface $\Sigma_\infty^i$ with $$\sum_i\int_{\Sigma_\infty^i}K_{\Sigma_\infty^i}=2\pi(2-2g).$$ The basic tool we use in this paper is a generalized convergence theorem of F. Hélein. author: - | Yuxiang Li\ [*Department of Mathematical Sciences*]{},\ [*Tsinghua University,*]{}\ [*Beijing 100084, P.R.China.*]{}\ [*Email: yxli@math.tsinghua.edu.cn.*]{} date: - - title: '**Weak limit of an immersed surface sequence with bounded Willmore functional**' --- [[**Keywords**]{}: Willmore functional, Bubble tree.]{} [[**Mathematics Subject Classification**]{}: Primary 58E20, Secondary 35J35.]{} Introduction ============ For an immersed surface $\ f : \Sigma \rightarrow \R^n\ $ the Willmore functional is defined by $$W(f) = \frac{1}{4} \int_\Sigma |H_f|^2 d \mu_{f},$$ where $H_f=\Delta_{g_f}f$ denotes the mean curvature vector of $f$, $g_f = f^* g_{euc}$ the pull-back metric and $\mu_f$ the induced area measure on $\Sigma$. This functional first appeared in the papers of Blaschke [@Bl] and Thomsen [@T], and was reinvented and popularized by Willmore [@W]. We denote by $\beta_p^n$ the infimum of the Willmore functional over immersed surfaces of genus $p$. We have $\beta_p^n\geq 4\pi$ by the Gauss-Bonnet formula, and $\beta_p^n<8\pi$, as observed independently by Pinkall and Kusner [@K]. Willmore conjectured that $\beta_1^n$ is attained by the Clifford torus. This conjecture is still open.
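As a quick sanity check of the normalization (an aside, not from the original): with the convention $H_f=\Delta_{g_f}f$, the round sphere $S^2_r\subset\R^3$ of radius $r$ has $|H_f|=2/r$, so

```latex
W(S^2_r)=\frac14\int_{S^2_r}\Big|\frac{2}{r}\Big|^2\,d\mu
        =\frac14\cdot\frac{4}{r^2}\cdot 4\pi r^2=4\pi,
```

independently of $r$. This is the value at which the lower bound $\beta_p^n\geq 4\pi$ just mentioned is attained, and the cancellation of $r$ reflects the scale invariance of $W$.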
Given a sequence of surfaces with bounded Willmore functional and measure, we are particularly interested in what the limit looks like. In other words, we want to understand the blowup behavior of such a surface sequence. This is important, as blowup occurs almost everywhere in the study of the Willmore functional. For example, if $\Sigma_t$ is a Willmore flow defined on $[0,T)$, then by the $\epsilon$-regularity proved in [@K-S], $\int_{B_\rho\cap \Sigma_t}|A_t|^2<\epsilon$ implies $\|\nabla_{g_t}^mA_t\|_{L^\infty(B_\frac{\rho}{2} \cap \Sigma_t)} <C(m,\rho)$. Then $\Sigma_t$ converges smoothly in any compact subset of $\R^n$ minus the concentration points set, which is defined by $$\mathcal{S}=\{p\in\R^n:\lim_{r\rightarrow 0} \liminf_{t\rightarrow T} \int_{B_r(p)\cap\Sigma_t} |A_t|^2>0\}.$$ So, if we want to have a good understanding of the Willmore flow, we have to study the behavior, especially the structure of the bubble trees, of $\Sigma_t$ near the concentration points. Note that $W(f_k)<C$ implies $\int_{\Sigma}|A_{k}|^2 d\mu_k<C'$. One expects that $\|f_k\|_{W^{2,2}}$ is equivalent to $\int|A_k|^2d\mu_k=\int g_k^{ij}g_k^{km}A_{ik}A_{jm} \sqrt{|g_k|}dx$. However, this is not always true. One reason is that the diffeomorphism group of a surface is extremely big. Therefore, even when an immersion sequence $f_k$ converges smoothly, we can easily find a diffeomorphism sequence $\phi_k$ such that $f_k\circ \phi_k$ does not converge. Moreover, the Sobolev embedding $W^{2,2q}\hookrightarrow C^1$ fails when $q=1$, so it is impossible to estimate the $L^\infty$ norms of $g^{-1}_k$ and $g_{k}$ directly via the Sobolev inequalities. To overcome these difficulties, an approximate decomposition lemma was used by L. Simon when he proved the existence of the minimizer [@S].
He proved that $\beta_p^n$ can be attained if $p=1$ or $$\label{simon} p>1,\s and\s \beta_p^n< \omega_p^n=\min\Big\{4\pi+\sum\limits_{i}(\beta_{p_i}^n-4\pi): \sum\limits_{i} p_i =p,\,1 \leq p_i < p\Big\}.$$ Then Bauer and Kuwert proved that \eqref{simon} is always true, thus $\beta_p^n$ can be attained for any $p$ and $n$ [@B-K]. Later, such a technique was extended by W. Minicozzi to get the minimizer of $W$ on Lagrangian tori [@M], by Kuwert-Schätzle to get the minimizer of $W$ in a fixed conformal class [@K-S3], and by Schätzle to get the minimizer of $W$ with boundary condition [@Sh]. In a recent paper [@K-L], we presented a new approach. Given an immersion sequence $f_k$, we consider each $f_k$ as a conformal immersion of $(\Sigma,h_k)$ in $\R^n$, where $h_k$ is the smooth metric with Gaussian curvature $\pm1$ or 0. On the one hand, the conformal diffeomorphism group of $(\Sigma,h_k)$ is very small. On the other hand, if we set $g_{f_k}=e^{2u_k} g_{euc}$ on an isothermal coordinate system, then we can estimate $\|u_k\|_{L^\infty}$ from the compensated compactness property of $K_{f_k}e^{2u_k}$. Thus we may get an upper bound of $\|f_k\|_{W^{2,2}}$ via the equation $\Delta_{h_k}f_k=H_{f_k}$. However, the compensated compactness only holds when the $L^2$ norm of the second fundamental form is small locally, thus blowup analysis is needed here. Our basic tools are the following two results: [@H]\[Helein\] Let $f_k\in W^{2,2}(D,\R^n)$ be a sequence of conformal immersions with induced metrics $(g_k)_{ij} = e^{2u_k} \delta_{ij}$, and assume $$\int_D |A_{f_k}|^2\,d\mu_{g_k} \leq \gamma < \gamma_n = \begin{cases} 8\pi & \mbox{ for } n = 3,\\ 4\pi & \mbox{ for }n \geq 4. \end{cases}$$ Assume also that $\mu_{g_k}(D) \leq C$ and $f_k(0) = 0$.
Then $f_k$ is bounded in $W^{2,2}_{loc}(D,\R^n)$, and there is a subsequence such that one of the following two alternatives holds: - $u_k$ is bounded in $L^\infty_{loc}(D)$ and $f_k$ converges weakly in $W^{2,2}_{loc}(D,\R^n)$ to a conformal immersion $f \in W^{2,2}_{loc}(D,\R^n)$. - $u_k \to - \infty$ and $f_k \to 0$ locally uniformly on $D$. \[D.K.\][@D-K] Let $h_k,h_0$ be smooth Riemannian metrics on a surface $M$, such that $h_k \to h_0$ in $C^{s,\alpha}(M)$, where $s \in \N$, $\alpha \in (0,1)$. Then for each $p \in M$ there exist neighborhoods $U_k, U_0$ and smooth conformal diffeomorphisms $\vartheta_k:D \to U_k$, such that $\vartheta_k \to \vartheta_0$ in $C^{s+1,\alpha}(\overline{D},M)$. A $W^{2,2}$-conformal immersion is defined as follows: \[defconformalimmersion\] Let $(\Sigma,g)$ be a Riemann surface. A map $f\in W^{2,2}(\Sigma,g,\mathbb{R}^n)$ is called a conformal immersion, if the induced metric $g_{f} = df\otimes df$ is given by $$g_{f} = e^{2u} g \quad \mbox{ where } u \in L^\infty(\Sigma).$$ For a Riemann surface $\Sigma$ the set of all $W^{2,2}$-conformal immersions is denoted by $\CI(\Sigma,g,\R^n)$. When $f\in W^{2,2}_{loc} (\Sigma,g,\R^n)$ and $u\in L^\infty_{loc}(\Sigma)$, we say $f\in W^{2,2}_{conf,loc}(\Sigma,g,\R^n)$. F. Hélein first proved Theorem \[Helein\] for $\gamma< \frac{8\pi}{3}$ [@H Theorem 5.1.1]. In [@K-L], we showed that the constant $\gamma_n$ is optimal. Theorem \[Helein\] together with Theorem \[D.K.\] gives the convergence of a sequence of $W^{2,2}$-conformal immersions of $(D,h_k)$ in $\R^n$ with $h_k$ converging smoothly to $h_0$. Then, using the theory of the moduli space of Riemann surfaces, we proved in [@K-L] the following \[KL\][@K-L] Let $f_k\in W^{2,2}_{conf} (\Sigma,h_k,\R^n)$. If $$\label{omega} W(f_k)\leq \left\{\begin{array}{ll} 8\pi-\delta&p=1\\ \min\{8\pi,\omega_p\}-\delta&p>1 \end{array}\right.,\s \delta>0,$$ then the conformal class sequence represented by $h_k$ converges in $\mathcal{M}_p$.
In other words, $h_k$ converges to a metric $h_0$ smoothly. This was also proved by T. Rivière [@R]. Then, up to Möbius transformations, $f_k$ will converge weakly in $W^{2,2}_{loc}(\Sigma \setminus\{\mbox{finitely many points}\},h_0)$ to a $W^{2,2}(\Sigma,h_0)$-conformal immersion. In this way, we give a new proof of the existence of a minimizer of the Willmore functional with fixed genus. \eqref{omega} also gives us a hint that it is the degeneration of the complex structure that causes trouble for the convergence of an immersion sequence with $$\label{bmw} \mu(f_k)+W(f_k)<C.$$ In [@C-L], the Hausdorff limit of $\{f_k\}$ with \eqref{bmw} was studied, using conformal immersions as a tool. We proved that the limit $f_0$ is a conformal branched immersion from a stratified surface $\Sigma_\infty$ into $\R^n$. Briefly speaking, if $(\Sigma_0,h_0)$ is the limit of $(\Sigma,h_k)$ in $\overline{\mathcal{M}_p}$, then $f_k$ converges weakly in the $W^{2,2}$ sense in any component of $\Sigma_0$ away from the blowup points $$\mathcal{S}(f_k)=\{p\in \Sigma_0:\lim_{r\rightarrow 0} \liminf_{k\rightarrow+\infty}\int_{B_r(p,h_0)}|A(f_k)|^2d\mu_{f_k}\geq 4\pi\}.$$ Meanwhile, some bubble trees, which consist of $W^{2,2}$ branched conformal immersions of $S^2$ in $\R^n$, will appear. As a corollary, we get the following [@C-L] Let $f_k:\Sigma\rightarrow \R^n$ be a sequence of smooth immersions with \eqref{bmw}. Assume the Hausdorff limit of $f_k(\Sigma)$ is not a union of $W^{2,2}$ branched conformal immersed spheres. Then the complex structure $c_k$ induced by $f_k$ diverges in the moduli space if and only if there is a sequence of closed curves $\gamma_k$, nontrivial in $H^1(\Sigma)$, such that the length of $f_k(\gamma_k)$ converges to 0. Thus, when the conformal class induced by $f_k$ diverges in the moduli space, topology will be lost. There are two reasons why topology is lost. One reason is that Theorem \[Helein\] does not ensure the limit is an immersion on each component of $\Sigma_0$.
If $f_k$ converges to a point in some components, then some topology is carried away. The other reason is that on each collar, which is conformal to $Q(T_k)=S^1\times[-T_k,T_k]$ with $T_k\rightarrow+\infty$, there must exist a sequence $t_k\in[-T_k,T_k]$ such that $f_k(S^1\times\{t_k\})$ shrinks to a point. It is not easy to calculate how much topology is lost, but it is indeed possible to find where $\int_\Sigma K_{f_k}d\mu_{f_k}$ is lost. We have to study those bubbles which have nontrivial topology but shrink to points. For this purpose, we should check whether those conformal immersion sequences which converge to points will converge to immersions after being rescaled: \[convergence\] Let $\Sigma$ be a smooth connected Riemann surface without boundary, and $\Omega_k\subset\subset\Sigma$ be domains with $$\Omega_1\subset \Omega_2\subset\cdots\subset\Omega_k\subset\cdots,\s \bigcup_{i=1}^\infty\Omega_i=\Sigma.$$ Let $\{h_k\}$ be a smooth metric sequence over $\Sigma$ which converges to $h_0$ in $C^\infty_{loc}(\Sigma)$, and $\{f_k\}$ be a conformal immersion sequence of $(\Omega_k,h_k)$ in $\R^n$ satisfying - $\mathcal{S}(f_k):= \{p\in\Sigma: \lim\limits_{r\rightarrow 0}\liminf\limits_{k\rightarrow+\infty} \int_{B_r(p,h_0)}|A_{f_k}|^2d\mu_{f_k}\geq 4\pi \}=\emptyset$. - $f_k(\Omega_k)$ can be extended to a closed compact immersed surface $\Sigma_k$ with $$\int_{\Sigma_k}(1+|A_{f_k}|^2)d\mu_{f_k}<\Lambda.$$ Take a curve $\gamma:[0,1]\rightarrow \Sigma$, and set $\lambda_k=diam\, f_k(\gamma[0,1])$. Then we can find a subsequence of $\frac{f_k -f_k(\gamma(0))}{\lambda_k}$ which converges weakly in $W^{2,2}_{loc}(\Sigma)$ to an $f_0\in W_{conf,loc}^{2,2}(\Sigma,\R^n)$. Further, we can find an inversion $I(y)=\frac{y-y_0}{|y-y_0|^2}$ with $y_0\notin f_0(\Sigma)$ such that $$\int_\Sigma(1+|A_{I(f_0)}|^2)d\mu_{I(f_0)}<+\infty.$$ When $\Sigma$ is a compact closed surface minus finitely many points, $f_0$ may not be compact.
However, by removability of singularities (see Theorem \[removal\] in Section 2), $I(f_0)$ is a conformal branched immersion. Thus $f_0$ is complete. We call $f$ a generalized limit of $f_k$, if we can find a point $x_0\notin \mathcal{S}(f_k)$ and a positive sequence $\lambda_k$ which is equivalent to 1 or tends to 0, such that $\frac{f_k-f_k(x_0)}{\lambda_k}$ converges to $f$ weakly in $W^{2,2}_{loc}(\Sigma\setminus \mathcal{S}(f_k))$. Obviously, if $f$ and $f'$ are both generalized limits of $f_k$, then $f=\lambda f'+b$ for some $\lambda$ and $b$. We will not distinguish between $f$ and $f'$. Near the concentration points, we will get some bubbles. The divergence of the complex structure also gives us some bubbles. In [@C-L], we only considered the bubbles with $\lambda_k\equiv1$. In this paper, we will study the bubbles with $\lambda_k \rightarrow 0$ which do not appear in the Hausdorff limit. All the bubbles can be considered as conformal branched immersions from $\C$ (or $S^1\times \R$, $S^2$) into $\R^n$. However, the structures of the bubble trees here are much more complicated than those of harmonic maps. For example, there might exist infinitely many bubbles here; therefore, we should neglect the bubbles which do not carry total Gauss curvature. We say a conformal branched immersion of $S^1\times\R$ into $\R^n$ is trivial, if for any $t$, $$\int_{S^1\times\{t\}}\kappa\neq 2m\pi+\pi,\s for\s some\s m\in\mathbb{Z}.$$ The bubble trees constructed in this paper consist of finitely many branches. Small branches are attached to big branches level by level. Each branch consists of nontrivial bubbles, bubbles with concentration, and the first bubble (see definitions in Section 4). We can classify the bubbles into four types: $T_\infty$, $T_0$, $B_\infty$ and $B_0$ (see Definition \[typeofbubble\]). We will show that a $T_0$ type bubble must follow a $B_\infty$ type bubble, and a $T_\infty$ type bubble must follow a $B_0$ type bubble.
Moreover, we have a total Gauss curvature identity. To state it precisely, we have to divide into three cases. [**Hyperbolic case (genus$>1$):**]{} Let $\Sigma_0$ be the stable surface in $ \overline{\mathcal{M}}_g$ with nodal points $\mathcal{N}=\{ a_1,\cdots, a_{m}\}$. $\Sigma_0$ is obtained by pinching some curves in a surface to points, thus $\Sigma_0\setminus\mathcal{N}$ can be divided into finitely many components $\Sigma_0^1$, $\cdots$, $\Sigma_0^s$. For each $\Sigma_0^i$, we can extend $\Sigma_0^i$ to a smooth closed Riemann surface $\overline{\Sigma_0^i}$ by adding a point at each puncture. Moreover, the complex structure of $\Sigma_0^i$ can be extended smoothly to a complex structure of $\overline{\Sigma_0^i}$. We say $h_0$ is a hyperbolic structure on $\Sigma_0$ if $h_0$ is a smooth complete metric on $\Sigma_0\setminus\mathcal{N}$ with finite volume and Gauss curvature $-1$. We define $\Sigma_{0}(a_j,\delta)$ to be the domain in $\Sigma_0$ which satisfies $$a_j\in \Sigma_0(a_j,\delta),\s and \s injrad_{\Sigma_0\setminus\mathcal{N}}^{h_0}(p)<\delta\s \forall p\in\Sigma_0(a_j,\delta)\setminus\{a_j\}.$$ We set $h_0^i$ to be a smooth metric over $\overline{\Sigma_0^i}$ which is conformal to $h_0$ on $\Sigma_0^i$. We may assume $h_0^i$ has curvature $\pm1$, or curvature $0$ and measure 1. Now, we let $\Sigma_k$ be a sequence of compact Riemann surfaces of fixed genus $g$ whose metrics $h_k$ have curvature $-1$, such that $\Sigma_k \rightarrow \Sigma_0$ in $\overline{\mathcal{M}_g}$.
Then, there exists a maximal collection $\Gamma_k = \{\gamma_k^1,\ldots,\gamma_k^{m}\}$ of pairwise disjoint, simple closed geodesics in $\Sigma_k$ with $\ell^j_k = L(\gamma_k^j) \to 0$, such that after passing to a subsequence the following hold: - There are maps $\varphi_k \in C^0(\Sigma_k,\Sigma_0)$, such that $\varphi_k: \Sigma_k \backslash \Gamma_k \to \Sigma_0 \backslash \mathcal{N}$ is a diffeomorphism and $\varphi_k(\gamma_k^j) = a_j$ for $j = 1,\ldots,m$. - For the inverse diffeomorphisms $\psi_k:\Sigma_0 \backslash \mathcal{N} \to \Sigma_k \backslash \Gamma_k$, we have $\psi_k^\ast (h_k) \to h_0$ in $C^\infty_{loc}(\Sigma_0 \backslash \mathcal{N})$. - Let $c_k$ be the complex structure over $\Sigma_k$, and $c_0$ be the complex structure over $\Sigma_0\setminus\mathcal{N}$. Then $$\psi_{k}^*(c_k)\rightarrow c_0\s in\s C^\infty_{loc}(\Sigma_0\setminus\mathcal{N}).$$ - For each $\gamma_k^j$, there is a collar $U_k^j$ containing $\gamma_k^j$, which is isometric to the cylinder $$Q_k^j=S^1\times(-\frac{\pi^2}{l_k^j},\frac{\pi^2}{l_k^j}),\s with\s metric\s h_k^j=\left(\frac{l_k^j}{2\pi\sin(\frac{l_k^j}{2\pi}t+\theta_k)}\right)^2(dt^2+d\theta^2),$$ where $\theta_k=\arctan(\sinh( \frac{l_k^j}{2}))+\frac{\pi}{2}$. Moreover, for any $(\theta,t)\in S^1\times (-\frac{\pi^2}{l_k^j},\frac{\pi^2}{l_k^j})$, we have $$\label{injrad} \sinh(injrad_{\Sigma_k}(t,\theta))\sin( \frac{l_k^jt}{2\pi}+\theta_k) =\sinh\frac{l_k^j}{2}.$$ Let $\phi_k^j$ be the isometry between $Q_k^j$ and $U_k^j$. Then $\varphi_k\circ\phi_k^{j}(T_k^j+t,\theta)\cup \varphi_k\circ\phi_k^{j}(-T_k^j+t,\theta)$ converges in $C^\infty_{loc}((-\infty,0)\cup (0,\infty))$ to an isometry from $S^1\times(-\infty,0)\cup S^1\times(0,+\infty)$ to $\Sigma_0(a_j,1)\setminus \{a_j\}$. Items 1) and 2) in the above can be found in Proposition 5.1 in [@Hum]. The main part of 3) is just the collar lemma.
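One can verify the normalization of the collar metric directly (a routine check, not in the original): for a metric $e^{2\phi}(dt^2+d\theta^2)$ with $e^{\phi}=\beta/\sin(\alpha t+\theta_k)$, the Gauss curvature is

```latex
K=-e^{-2\phi}\Delta\phi
 =-\frac{\sin^2(\alpha t+\theta_k)}{\beta^2}\cdot\frac{\alpha^2}{\sin^2(\alpha t+\theta_k)}
 =-\frac{\alpha^2}{\beta^2},
```

so curvature $-1$ forces $\beta=\alpha$; with $\alpha=\frac{l_k^j}{2\pi}$ and $\theta$ of period $2\pi$, the central circle $\{\alpha t+\theta_k=\frac{\pi}{2}\}$ then has length $2\pi\beta=l_k^j$, matching the length of the geodesic $\gamma_k^j$.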
Now, we consider a sequence $f_k\in W^{2,2}_{conf} (\Sigma,h_k,\R^n)$, with $$\mu(f_k)+W(f_k)<\Lambda.$$ By Theorem \[convergence\], on each component $\Sigma_0^i$, $f_k\circ \psi_k$ has a generalized limit $f_0^i\in W^{2,2}_{conf}(\overline{\Sigma_0^i}\setminus A^i,h_0^i,\R^n)$, where $A^i$ is a finite set. We have the following \[main\] Let $f^1$, $f^2$, $\cdots$ be all of the non-trivial bubbles of $\{f_k\}$. Then $$\sum_i\int_{\overline{\Sigma_0^i}}K_{f_0^i}d\mu_{f_0^i}+ \sum_i\int_{S^2}K_{f^i}d\mu_{f^i}=2\pi\chi(\Sigma).$$ [**Torus case:**]{} Let $(\Sigma,h_k)=\C/(\pi,z)$, where $|z|\geq\pi$ and $|\Re{z}|\leq\frac{\pi}{2}$. We can write $$(\Sigma,h_k)=S^1\times\R/G_k,$$ where $S^1$ is the circle with perimeter 1 and $G_k\cong \mathbb{Z}$ is the transformation group generated by $$(t,\theta)\rightarrow (t+a_k,\theta+\theta_k),\s where\s a_k\geq \sqrt{\pi^2-\theta_k^2},\s and\s \theta_k\in [-\frac{\pi}{2},\frac{\pi}{2}].$$ $(\Sigma,h_k)$ diverges in $\mathcal{M}_1$ if and only if $a_k\rightarrow+\infty$. Then any $f_k\in W^{2,2}_{conf}(\Sigma,h_k,\R^n)$ can be lifted to a conformal immersion $f_k':S^1\times\R \rightarrow\R^n$ with $$f_k'(t,\theta)=f_k'(t+a_k,\theta+\theta_k).$$ After translating, we may assume that $f_k'(-t+\frac{a_k}{2},\theta)$ and $f_k'(t-\frac{a_k}{2},\theta)$ have no concentrations. We let $\lambda_k=diam\, f_k'(S^1\times\{\frac{a_k}{2}\})$; then $\frac{f_k'(-t+\frac{a_k}{2},\theta)-f_k'(\frac{a_k} {2},0)}{\lambda_k}$ and $\frac{f_k'(t-\frac{a_k}{2},\theta) -f_k'(\frac{a_k}{2},\theta_k)}{\lambda_k}$ will converge to $f_0^1$ and $f_0^2$ respectively in $W^{2,2}_{loc}(S^1\times[0,+\infty))$. They can then be glued together via $$f_0=\left\{\begin{array}{ll} f_0^1(-t,\theta)&t\leq 0\\ f_0^2(t,\theta+\theta_0)&t>0, \end{array}\right.$$ into a conformal immersion of $S^1\times\R$ in $\R^n$, where $\theta_0=\lim\limits_{k\rightarrow+\infty}\theta_k$.
Then we have \[main2\] $$\int_{S^1\times\R}K_{f_0}d\mu_{f_0} +\sum_{i=1}^m\int_{S^1\times\R}K_{f^i}d\mu_{f^i}=0,$$ where $f^1$, $\cdots$, $f^m$ are all of the non-trivial bubbles of $f_k'$. [**Sphere case:**]{} When $\Sigma$ is the sphere, we can let $h_k\equiv h_0$. There is no bubble from collars. We have \[sphere\] Let $f_0$ be the generalized limit of $f_k$. Then $$\int_{S^2}K_{f_0}d\mu_{f_0}+\sum_{i=1}^m\int_{S^1\times\R} K_{f^i}d\mu_{f^i}=4\pi,$$ where $f^1$, $\cdots$, $f^m$ are all of the non-trivial bubbles. Putting Theorems \[main\]–\[sphere\] together, we get the main theorem of this paper, which is a precise version of Theorem \[KL\]: Let $\Sigma_k$ be a sequence of surfaces immersed in $\R^n$ with bounded Willmore functional. Assume $g(\Sigma_k)=g$. Then we can decompose $\Sigma_k$ into finitely many parts: $$\Sigma_k=\bigcup_{i=1}^m\Sigma_k^i,\s \Sigma_k^i\cap\Sigma_k^j =\emptyset\s (i\neq j),$$ and find $p_k^i\in \Sigma_k^i$, $\lambda_k^i \in\R$, such that $\frac{\Sigma_k^i-p_k^i} {\lambda_k^i}$ converges locally in the sense of varifolds to a complete branched immersed surface $\Sigma_\infty^i$ with $$\sum_i\int_{\Sigma_\infty^i}K_{\Sigma_\infty^i}=2\pi(2-2g), \s and\s \sum_{i}W(\Sigma_\infty^i)\leq \lim_{k\rightarrow+\infty} W(\Sigma_k).$$ Parts of Theorem \[sphere\] have appeared in [@L-L], in which we assumed that $\{f_k\}\subset W^{2,2}_{conf}(D,\R^n)$ and that $f_k$ does not converge to a point. Preliminary =========== Hardy estimate -------------- Let $f\in W^{2,2}_{conf}(D,\R^n)$ with $g_f= e^{2u}(dx^1\otimes dx^1+dx^2\otimes dx^2)$ and $\int_D|A_f|^2<4\pi-\delta$. $f$ induces a Gauss map $$G(f)=e^{-2u}(f_1\wedge f_2):D\rightarrow G(2,n)\hookrightarrow \C\mathbb{P}^{n-1}.$$ Following [@M-S], we define the map $\Phi(f):\C \to \C P^{n-1}$ by $$\Phi(f)(z) = \left\{\begin{array}{ll} G(f)(z) & \mbox{ if }z \in D\\ G(f)(\frac{1}{\overline{z}}) & \mbox{ if }z \in \C \backslash \overline{D}.
\end{array}\right.$$ Then $\Phi(f)\in W_0^{1,2}(\mathbb{C},\C P^{n-1})$ and $\int_{\mathbb{C}}{\Phi}^*(f) (\omega) =0$, where $\omega$ is the Kähler form of $\C\mathbb{P}^{n-1}$. Thus by Corollary 3.5.7 in [@M-S], $\Psi(f)=*\Phi^*(f)(\omega)$ is in Hardy space, and $$\label{Psi} \|\Psi(f)\|_{ \mathcal{H}}<C(\delta)\|A_f\|_{L^2(D)}.$$ Note that $$\label{psi-k} \Psi(f)|_{D}=K_{f}e^{2u}.$$ If we let $v$ solve the equation $-\Delta v=\Psi(f)$, $v(\infty)=0$, then we have $$\|v\|_{L^\infty(\R^n)}+\|\nabla v\|_{L^2(\R^n)}+\|\nabla^2 v\|_{L^1(\R^n)}< C\|\Psi(f)\|_{\mathcal{H}}.$$ Noting that $u-v$ is harmonic on $D$, we get $$\label{hardy0} \|u\|_{L^\infty(D_\frac{1}{2})}+\|\nabla u\|_{L^2(D_\frac{1}{2})}+\|\nabla^2 u\|_{L^1(D_\frac{1}{2})}< C(\|\Psi(f)\|_{\mathcal{H}}+\|u\|_{L^1(D)}).$$ Gauss-Bonnet formula -------------------- Let $f\in W^{2,2}_{conf}(\Sigma,g,\R^n)$ with $g_f=e^{2u}g$. Let $\gamma$ be a smooth curve. On $\gamma$, we define $$\label{geodesic.curvature} \kappa_{f}=\frac{\partial u}{\partial n}+\kappa_g,$$ where $n$ is one of the unit normal fields along $\gamma$ which is compatible with $\kappa_g$. By \eqref{hardy0}, $\frac{\partial u}{\partial n}$ is well-defined. In [@K-L], we proved that $u$ satisfies the weak equation $$-\Delta_g u=K_{f}e^{2u}-K_g.$$ Then, for any domain $\Omega$ with smooth boundary, we have the Gauss-Bonnet formula: $$\int_{\partial\Omega}\kappa_f=2\pi\chi(\overline{\Omega})-\int_{\Omega} K_fd\mu_f,$$ which follows by integrating the weak equation over $\Omega$ and applying the classical Gauss-Bonnet theorem to $(\Omega,g)$. Convergence of $\int K_{f_k}d\mu_{f_k}$ --------------------------------------- By \eqref{hardy0} and Theorem \[Helein\], we have: \[measureconvergence\] Let $f_k$ be a conformal sequence from $D$ into $\R^n$ with $g_{f_k}=e^{2u_k}g_0$ and $\int_D|A_{f_k}|^2d\mu_{f_k}\leq \gamma<4\pi$, which converges to $f_0$ weakly. We assume $f_0$ is not a point map, and $g_{f_0}=e^{2u_0}g_0$.
Then we can find a subsequence, such that $$\label{hardy} K_{f_k}d\mu_{f_k} \rightharpoonup K_{f_0}d\mu_{f_0}\s over \s D_\frac{1}{2},\s \mbox{in distribution,}$$ and $$u_k\rightharpoonup u_0,\s in\s W^{1,2}(D_\frac{1}{2}).$$ We will use the following Let $f_k$ be a sequence of conformal immersions of $D\setminus D_\frac{1}{2}$ in $\R^n$, which converges to $f_0\in W^{2,2}_{{conf},loc}(D\setminus D_\frac{1}{2},\R^n)$. For any $t\in(\frac{1}{2},1)$ with $\partial D_t\cap \mathcal{S}(f_k) =\emptyset$, we have $$\lim_{k\rightarrow+\infty}\int_{\partial D_t}\kappa_{f_k} ds_k= \int_{\partial D_t}\kappa_{f_0}ds_0.$$ Take $s\in (t,1)$, such that $\mathcal{S}(f_k)\cap \overline{D_s\setminus D_t}=\emptyset$. Let $g_{f_k}=e^{2u_k}g_0$ and $\varphi\in C^\infty_0(D_s)$, which is 1 on $D_t$. Then we have $$-\int_{\partial D_t}\frac{\partial u_k}{\partial r}ds=- \int_{D_s\setminus D_t}\nabla u_k\nabla\varphi d\sigma +\int_{D_s\setminus D_t}\varphi K_ke^{2u_k}d\mu_{f_k},$$ and the right-hand side will converge to $$- \int_{D_s\setminus D_t}\nabla u_0\nabla\varphi d\sigma+\int_{D_s\setminus D_t}\varphi K_0e^{2u_0}d\mu_{f_0},\s as\s k\rightarrow+\infty.$$ Then we get $$-\int_{\partial D_t}\frac{\partial u_k}{\partial r}ds \rightarrow -\int_{\partial D_t}\frac{\partial u_0}{\partial r}ds.$$ By \eqref{geodesic.curvature} we get $$\int_{\partial D_t}\kappa_k\rightarrow \int_{\partial D_t}\kappa_0.$$ Removability of singularity --------------------------- We have the following \[removal\][@K-L] Suppose that $f\in W^{2,2}_{{conf},loc}(D\backslash \{0\},\R^n)$ satisfies $$\int_D |A_f|^2\,d\mu_g < \infty \quad \mbox{ and } \quad \mu_g(D) < \infty,$$ where $g_{ij} = e^{2u} \delta_{ij}$ is the induced metric.
Then $f \in W^{2,2}(D,\R^n)$ and we have $$\begin{aligned} u(z) & = & m\log |z|+ \omega(z) \quad \mbox{ where } m\geq 0,\, m\in \mathbb{Z},\,\omega \in C^0 \cap W^{1,2}(D),\\ -\Delta u & = & -2m\pi \delta_0+K_g e^{2u} \quad \mbox{ in }D.\end{aligned}$$ The multiplicity of the immersion at $f(0)$ is given by $$\theta^2\big(f(\mu_g \llcorner D_\sigma(0)),f(0)\big) = m+1 \quad \mbox{ for any small } \sigma > 0.$$ Moreover, we have $$\label{kappa2} \lim_{t\rightarrow 0}\int_{\partial D_t}\kappa_{f} ds_f=2\pi (m+1).$$ We only prove . For the proofs of the other parts of the theorem, one can refer to [@K-L]. Observe that $$\left|\int_{\partial D_t}\frac{\partial u} {\partial r}-\int_{\partial D_{t'}}\frac{\partial u} {\partial r}\right|=\left|\int_{D_t\setminus D_{t'}} Kd\mu\right|\rightarrow 0$$ as $t, t'\rightarrow 0$. Then $\lim\limits_{t\rightarrow 0}\int_{\partial D_t}\frac{\partial u} {\partial r}$ exists. Since $\omega\in W^{1,2}(D_r)$, we can find $t_k\in [2^{-k-1},2^{-k}]$, s.t. $$(2^{-k}-2^{-k-1})\int_{\partial D_{t_k}}\left| \frac{\partial \omega}{\partial r}\right| =\int_{2^{-k-1}}^{2^{-k}}\left(\int_{\partial D_t} \left|\frac{\partial \omega}{\partial r}\right|\right)dt\leq C\|\nabla \omega\|_{L^2 (D_{2^{-k}})}2^{-k},$$ which implies that $\int_{\partial D_{t_k}}\frac{\partial \omega}{\partial r} \rightarrow 0$. Then we get $\int_{\partial D_{t_k}}\frac{\partial u} {\partial r}\rightarrow 2\pi m$, which implies that $$\lim_{t\rightarrow 0} \int_{\partial D_{t}}\frac{\partial u} {\partial r}= 2\pi m.$$ In the proof of Theorem \[removal\] in [@K-L], we get that $$\label{isolate} \lim_{z\rightarrow 0}\frac{|f(z)-f(0)|}{|z|^{m+1}}=\frac{e^{\omega(0)}}{m+1}.$$ We give the following definition: \[defconformalimmersion\] A map $f\in W^{2,2}(\Sigma,\mathbb{R}^n)$ is called a $W^{2,2}$-branched conformal immersion, if we can find finitely many points $p_1$, $\cdots$, $p_m$, s.t.
$f\in W^{2,2}_{conf,loc}(\Sigma\setminus\{p_1,\cdots,p_m\})$, and $$\mu(f)<+\infty,\s \int_{\Sigma}|A_f|^2d\mu_f<+\infty.$$ For the behavior at infinity of complete conformally parameterized surfaces, we have the following \[removal2\] Suppose that $f\in W^{2,2}_{{conf},loc}(\C\setminus D_R,\R^n)$ with $$\int_{\C\setminus D_R} |A_f|^2\,d\mu_{g} < \infty,$$ where $g_{ij} = e^{2u} \delta_{ij}$ is the induced metric. We assume $f(\C\setminus D_{2R})$ is complete. Then we have $$u(z) = m\log |z|+ \omega(z) \quad \mbox{ where } m\geq 0,\, m\in \mathbb{Z},\,\omega \in W^{1,2}(\C\setminus D_{2R}).$$ Moreover, we have $$\label{kappa3} \lim_{t\rightarrow +\infty}\int_{\partial D_t}\kappa_{f} ds_f=2\pi (m+1).$$ The proof of is similar to that of . The other parts of the proof can be found in [@M-S]. Though Müller–Šverák’s result was stated for smooth surfaces, it is easy to check that their proof also holds for a $W^{2,2}$ conformal immersion. Proof of Theorem \[convergence\] ================================ We first prove the following \[convergence2\] Suppose that $(\Sigma,h_k)$ are smooth Riemann surfaces, where $h_k$ converges to $h_0$ in $C^\infty_{loc}(\Sigma)$. Let $\{f_k\}\subset W^{2,2}_{{conf},loc}(\Sigma,h_k,\R^n)$ with $$\mathcal{S}(f_k)=\{p\in\Sigma: \lim_{r\rightarrow 0} \liminf_{k\rightarrow+\infty}\int_{B_r(p,h_0)}|A_{f_k}|^2d\mu_{f_k}\geq 4\pi\} =\emptyset.$$ Then $f_k$ converges in $W^{2,2}_{loc}(\Sigma,h_0,\R^n)$ to a point or an $f_0\in W^{2,2}_{conf,loc}(\Sigma,h_0,\R^n)$. Let $g_{f_k}=e^{2u_k}h_k$. We only need to prove the following statement: for any $p\in\Sigma$, we can find a neighborhood $V$ which is independent of $\{f_k\}$, such that $f_k$ converges weakly to $f_0$ in $W^{2,2}(V,h_0)$. Moreover, $\|u_k\|_{L^\infty(V)}<C$ if and only if $f_0\in W^{2,2}_{conf} (V,\R^n)$; $u_k\rightarrow-\infty$ uniformly, if and only if $f_0$ is a point map.
Now we prove this statement: Given a point $p$, we choose $U_k$, $U_0$, $\vartheta_k$, $\vartheta_0$ as in Theorem \[D.K.\]. Set $\vartheta_k^*(h_k)=e^{2v_k}g_0$, where $g_0=(dx^1)^2+(dx^2)^2$. We may assume $v_k\rightarrow v_0$ in $C^\infty_{loc}(D)$. Let $\hat{f}_k=f_k(\vartheta_k)$ which is a map from $D$ into $\R^n$. It is easy to check that $\hat{f}_k\in W^{2,2}_{conf}(D,\R^n)$ and $g_{\hat{f}_k}=e^{2u_k+2v_k}g_0$. By Theorem \[Helein\], we can assume that $\hat{f}_k$ converges to $\hat{f}_0$ weakly in $W^{2,2}(D_\frac{3}{4})$. Moreover, $\hat{f}_0$ is a point when $u_k+v_k\rightarrow-\infty$ uniformly on $D_\frac{3}{4}$, and a conformal immersion when $\sup_{k} \|u_k+v_k\|_{L^\infty(D_\frac{3}{4})}<+\infty$. Let $V=\vartheta_0(D_\frac{1}{2})$. Since $\vartheta_k$ converges to $\vartheta_0$, $\vartheta_k^{-1}(V)\subset D_\frac{3}{4}$ for any sufficiently large $k$ and $f_k=\hat{f}_k(\vartheta_k^{-1})$ converges to $f_0=\hat{f}_0 (\vartheta_0^{-1})$ weakly in $W^{2,2}(V,h_0)$. Moreover, $f_0$ is a conformal immersion when $\|u_k\|_{L^\infty(V)}<C$, and a point when $u_k\rightarrow-\infty$ uniformly in $V$. [*The proof of Theorem \[convergence\]:*]{} When $f_k$ converges to a conformal immersion weakly, the result is obvious. Now we assume that $f_k$ converges to a point. For this case, $\lambda_k \rightarrow 0$. Put $f_k'=\frac{f_k-f_k(\gamma(0))}{\lambda_k}$, $\Sigma_k'=\frac{\Sigma_k-f_k(\gamma(0))}{\lambda_k}$. We have two cases: Case 1: $diam(f_k')<C$. Letting $\rho$ in inequality (1.3) in [@S] tend to infinity, we get $\frac{|\Sigma_k'\cap B_\sigma(\gamma(0))|}{\sigma^2}\leq C$ for any $\sigma>0$, hence we get $\mu(f_k')<C$ by taking $\sigma=diam(f_k')$. Then Lemma \[convergence2\] shows that $f_k'$ converges weakly in $W^{2,2}_{loc}(\Sigma,h_0)$. Since $diam\, f_k' (\gamma)=1$, the weak limit is not a point. Case 2: $diam(f_k')\rightarrow +\infty$. We take a point $y_0\in\R^n$ and a constant $\delta>0$, s.t.
$$B_\delta(y_0)\cap \Sigma_k'=\emptyset,\s \forall k.$$ Let $I(y)=\frac{y-y_0}{|y-y_0|^2}$, and $$f_k''=I(f_k'),\s \Sigma_k''=I(\Sigma_k').$$ By the conformal invariance of the Willmore functional [@C; @W], we have $$\int_{\Sigma_k''}|A_{\Sigma_k''}|^2d\mu_{\Sigma_k''} =\int_{\Sigma_k}|A_{\Sigma_k}|^2d\mu_{\Sigma_k}<\Lambda.$$ Since $\Sigma_k''\subset B_\frac{1}{\delta}(0)$, also by (1.3) in [@S], we get $\mu(f_k'')<C$. Thus $f_k''$ converges weakly in $W^{2,2}_{loc}(\Sigma\setminus \mathcal{S}(f_k''),h_0)$. Next, we prove that $f_k''$ does not converge to a point. If $f_k''$ converges to a point in $W^{2,2}_{loc}(\Sigma\setminus \mathcal{S}(f_k''))$, then the limit must be 0, since $diam\,(f_k')$ converges to $+\infty$. By the definition of $f_k''$, we can find a $\delta_0>0$, such that $f_k''(\gamma)\cap B_{\delta_0}(0)=\emptyset$. Thus for any $p\in \gamma([0,1]) \setminus \mathcal{S}(f_k'')$, $f_k''$ will not converge to $0$. A contradiction. Then we only need to prove that $f_k'$ converges weakly in $W^{2,2}_{loc}(\Sigma,h_0,\R^n)$. Let $f_0''$ be the limit of $f_k''$. By Theorem \[removal\], $f_0''$ is a branched immersion of $\Sigma$ in $\R^n$. Let $\mathcal{S}^*=f_0^{''-1}(\{0\})$. By , $\mathcal{S}^*$ is isolated. First, we prove that for any $\Omega\subset\subset\Sigma\setminus (\mathcal{S}^*\cup\mathcal{S}(\{f_k''\}))$, $f_k'$ converges weakly in $W^{2,2}(\Omega,h_0,\R^n)$: Since $f_0''$ is continuous on $\bar{\Omega}$, we may assume $dist(0,f_0''(\Omega))>\delta>0$. Then $dist(0,f_k''(\Omega))>\frac{\delta}{2}$ when $k$ is sufficiently large. Noting that $f_k' =\frac{f_k''}{|f_k''|^2}+y_0$, we get that $f_k'$ converges weakly in $W^{2,2}(\Omega,h_0,\R^n)$. Next, we prove that for each $p\in \mathcal{S}^*\cup\mathcal{S}(\{f_k''\})$, $f_k'$ also converges in a neighborhood of $p$. We again use the notation $U_k$, $U_0$, $\vartheta_k$ and $\vartheta_0$, with $\vartheta_k(0)=p$.
We only need to prove that $\hat{f}_k'=f_k'(\vartheta_k)$ converges weakly in $W^{2,2}(D_\frac{1}{2})$. Let $g_{\hat{f}_k'}=e^{2\hat{u}_k'}(dx^2+dy^2)$. Since $\hat{f}_k'\in W^{2,2}_{conf} (D_{4r})$ with $\int_{D_{4r}}|A_{\hat{f}_k'}|^2d\mu_{\hat{f}_k'}<4\pi$ when $r$ is sufficiently small and $k$ sufficiently large, by the arguments in subsection 2.1, we can find a $v_k$ solving the equation $$-\Delta v_k=K_{\hat{f}_k'}e^{2\hat{u}_k'},\s z\in D_r,\s with\s \|v_k\|_{L^\infty(D_r)}<C.$$ Since $\hat{f}_k'$ converges to a conformal immersion in $D_{4r}\setminus D_{\frac{1}{4}r}$, by Theorem \[Helein\], we may assume that $\|\hat{u}_k'\|_{L^\infty(D_{2r}\setminus D_r)}<C$. Then $\hat{u}_k'-v_k$ is a harmonic function on $D_r$ with $\|\hat{u}_k'-v_k\|_{L^\infty(\partial D_{r})}<C$, and we get $\|\hat{u}_k'-v_k\|_{L^\infty(D_{r})}<C$ by the Maximum Principle. Thus, $\|\hat{u}_k'\|_{L^\infty(D_{2r})}<C$, which implies $\|\nabla \hat{f}_k'\|_{L^\infty(D_{2r})}<C$. By the equation $\Delta \hat{f}_k'=e^{2\hat{u}_k'}H_{\hat{f}_k'}$, and the fact that $\|e^{2\hat{u}_k'}H_{\hat{f}_k'}\|_{L^2 (D_{2r})}^2< e^{2\|\hat{u}_k'\|_{L^\infty}}\int_{D_{2r}}|H_{\hat{f}_k'}|^2d\mu_{{\hat{f}_k'}}$, we get $\|\nabla{\hat{f}_k'}\|_{W^{1,2}(D_{r})}<C$. Recalling that $\hat{f}_k'$ converges in $C^0(D_r\setminus D_\frac{r}{2})$, we complete the proof. In fact, we proved that $\mathcal{S}^*=\emptyset$. Analysis of the neck ==================== For a sequence of conformal immersions from a surface into $\R^n$ whose conformal classes diverge, the blowup comes from concentrations and collars. Both cases can be reduced to the blowup analysis of a conformal immersion sequence of $S^1\times[0,T_k]$ in $\R^n$ with $T_k\rightarrow+\infty$. So we first analyze the blow-up procedure on long cylinders without concentrations.
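A model example to keep in mind for the cylinder analysis is the catenoid: it is conformally parametrized over the whole cylinder by $f(t,\theta)=(\cosh t\cos\theta,\cosh t\sin\theta,t)$, with $K_f\,d\mu_f=-\cosh^{-2}t\,dt\,d\theta$ and total Gauss curvature $-4\pi$. The following is a minimal numerical sanity check of this constant (a sketch in Python with numpy; the parametrization and the value $-4\pi$ are standard facts about the catenoid, not taken from the text above):

```python
import numpy as np

# Catenoid f(t, theta) = (cosh t cos theta, cosh t sin theta, t):
# a conformal immersion of S^1 x R with conformal factor cosh^2 t,
# Gauss curvature K = -1/cosh^4 t, area element cosh^2 t dt dtheta,
# hence K dmu = -sech^2 t dt dtheta and  int_{S^1 x R} K dmu = -4*pi.
t = np.linspace(-40.0, 40.0, 400001)
y = -1.0 / np.cosh(t) ** 2
dt = t[1] - t[0]
# trapezoidal rule in t, times 2*pi from the theta-integration
total = 2.0 * np.pi * (np.sum(y) - 0.5 * (y[0] + y[-1])) * dt
assert abs(total + 4.0 * np.pi) < 1e-5
```

This is the reason a catenoid-type bubble carries a nonzero amount of total curvature through a neck, which is what the flux integrals $\int_{S^1\times\{t\}}\kappa$ below detect.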
Classification of bubbles of a simple sequence over an infinite cylinder ------------------------------------------------------------------------ Let $f_k$ be an immersion sequence of $S^1\times [0,T_k]$ in $\R^n$ with $T_k\rightarrow+\infty$. We say $f_k$ has concentration, if we can find a sequence $\{(\theta_k,t_k)\}\subset S^1\times [0,T_k]$, such that $$\lim_{r\rightarrow 0}\liminf_{k\rightarrow+\infty}\int_{D_r(\theta_k,t_k)} |A_{f_k}|^2d\mu_{f_k}\geq 4\pi.$$ We say $\{f_k\}$ is simple if: - $f_k$ has no concentration; - $f_k(S^1\times[0,T_k])$ can be extended to a compact closed immersed surface $\Sigma_k$ with $$\int_{\Sigma_k}(1+|A_{f_k}|^2)d\mu_{f_k}<\Lambda.$$ When $\{f_k\}$ is simple, we say $f_0$ is a bubble of $f_k$, if we can find a sequence $\{t_k\}\subset [0,T_k]$ with $$t_k\rightarrow+\infty,\s and\s T_k-t_k\rightarrow+ \infty,$$ such that $f_0$ is a generalized limit of $f_k(\theta,t_k+t)$. If $f_0$ is nontrivial, we call it a nontrivial bubble. For convenience, we call the generalized limit of $f(\theta,t+T_k)$ and $f(\theta,t)$ the top and the bottom respectively. Note that the top and the bottom are in $W^{2,2}_{conf}(S^1\times(-\infty, 0])$ and $W^{2,2}_{conf}(S^1\times[0,+\infty))$ respectively. Let $f^1$ and $f^2$ be two bubbles which are limits of $f_k(\theta,t+t_k^1)$ and $f_k(\theta,t+t_k^2)$ respectively. We say these two bubbles are the same, if $$\sup_k|t_k^1-t_k^2|<+\infty.$$ When $f^1$ and $f^2$ are not the same, we say $f^1$ is in front of $f^2$ (or $f^2$ is behind $f^1$) if $t_k^1<t_k^2$. We say $f^2$ follows $f^1$, if $f^2$ is behind $f^1$ and there are no non-trivial bubbles between $f^1$ and $f^2$. Obviously, the bubbles in this section must be in $W^{2,2}_{conf}(S^1\times\R)$, and must be one of the following: - $S^2$-type, i.e. $I(f^0)(S^1\times\{\pm\infty\}) \neq 0$; - Catenoid-type, i.e. $I(f^0)(S^1\times\{\pm\infty\})=0$; - Plain-type, i.e. 
one and only one of $I(f^0)(S^1\times\{\infty\})$, $I(f^0)(S^1\times\{-\infty\})$ is 0, where $I(y)=\frac{y-y_0}{|y-y_0|^2}$, $y_0\notin f^0(S^1\times\R)$. We give another classification of bubbles: \[typeofbubble\] We call a bubble $f^0$ a bubble of - type $T_{\infty}$ if $diam f^0(S^1\times\{+\infty\})=+\infty$; type $T_0$ if $diam f^0(S^1\times\{+\infty\})=0$; - type $B_{\infty}$ if $diam f^0(S^1\times\{-\infty\})=+\infty$; type $B_0$ if $diam f^0(S^1\times\{-\infty\})=0$. We say $f_k$ has $m$ non-trivial bubbles, if we cannot find an ($m+1$)-th non-trivial bubble for any subsequence of $f_k$. Let $f^0$ be a bubble. By and , $$\lim_{t\rightarrow+\infty}\int_{S^1\times\{t\}}\kappa_{f^0} =2m^+\pi,\s and\s \lim_{t\rightarrow-\infty}\int_{S^1\times\{t\}}\kappa_{f^0} =2m^-\pi$$ for some $m^+$ and $m^-\in\mathbb{Z}$. Hence, if $f^0$ is trivial, then $\int_{S^1\times \R}K_{f^0}d\mu_{f^0}=0$. Thus both $S^2$-type and catenoid-type bubbles are non-trivial. It is easy to check that $\mu(f^0)<+\infty$ implies that $f^0$ is a sphere-type bubble and is of type $(B_0,T_0)$. If $f^{0'}$ is a catenoid-type bubble, then it is of type $(B_\infty,T_\infty)$; if $f^{0'}$ is a plain-type bubble, then it is of type $(B_\infty,T_0)$ or $(B_0,T_\infty)$. First, we study the case that $f_k$ has no bubbles. Basically, we want to show that after scaling, the image of $f_k$ will converge to a topological disk. If $f_k$ has no bubbles, then $$\frac{diam\, f_k(S^1\times \{1\})}{diam\, f_k(S^1\times\{T_k-1\})}\rightarrow 0\s or\s +\infty.$$ Assume this lemma is not true. Then we may assume $\frac{diam\, f_k(S^1\times \{1\})}{diam\, f_k(S^1\times\{T_k-1\})}\rightarrow\lambda\in (0,+\infty)$. Let $\lambda_k=diam f_k(S^1\times\{1\})$.
By Theorem \[convergence2\], $\frac{f_k(\theta,t)-f_k(0,1)}{\lambda_k}$ converges to $f^B$ weakly in $W^{2,2}_{loc} (S^1\times(0,+\infty))$, and $\frac{f_k(\theta,t+T_k)-f_k(0,T_k-1)}{\lambda_k}$ converges to $f^T$ weakly in $W^{2,2}_{loc} (S^1\times(-\infty,0))$ respectively. When $diam f^B(S^1\times \{+\infty\})=0$, we set $\delta_k$ and $t_k$ to be defined by $$\delta_k=diam f_k(S^1\times\{t_k\})= \inf_{t\in [1,T_k-1]}diam\, f_k(S^1\times\{t\}).$$ Obviously, $\delta_k\rightarrow 0$, and $t_k\rightarrow+\infty$, $T_k-t_k \rightarrow+\infty$. Then $\frac{f_k(\theta,t+t_k)-f_k(0,t_k)}{\delta_k}$ will converge to a non-trivial bubble. A contradiction. When $diam f^B(S^1\times \{+\infty\})=+\infty$, we set $\delta_k'$ and $t_k'$ to be defined by $$\delta_k'=diam f_k(S^1\times\{t_k'\})= \sup_{t\in [1,T_k-1]}diam\, f_k(S^1\times\{t\}),$$ then we can also get a bubble. Now we assume $f_k$ has no bubbles, and $\frac{diam\, f_k(S^1\times \{1\})}{diam\, f_k(S^1\times\{T_k-1\})}\rightarrow +\infty$. Let $\lambda_k=diam\, f_k(S^1\times\{T_k-1\})$. The bottom $f^B$ is the weak limit of $f_k'=\frac{f_k(\theta,t)-f_k(0,1)}{\lambda_k}$. Let $\phi$ be the conformal diffeomorphism from $D\setminus\{0\}$ to $S^1\times[0,+\infty)$. Then $f^B\circ\phi$ is an immersion of $D$ in $\R^n$ perhaps with a branch point at $0$. Moreover, by the arguments in [@C-L] or in [@C], we have $$f^B(\phi(0))=\lim_{t\rightarrow+\infty}\lim_{k\rightarrow+\infty} f_k'(\theta,T_k-t).$$ Since $diam f_k'(S^1\times\{T_k-1\})\rightarrow 0$, $f_k'(\theta,T_k-t)$ converges to a point, and the Hausdorff limit of $f_k'(S^1\times(0,T_k))$ is a branched conformal immersion of $D$. In fact, the above results and arguments hold for a sequence $\{f_k\}$ which has neither $S^2$-type nor catenoid-type bubbles.\ Next, we show how to find all the bubbles when $\{f_k\}$ has them.
We need the following simple lemma: \[interval\] After passing to a subsequence, we can find $0=d_k^0<d_k^1<\cdots<d_k^l=T_k$, where $l\leq\frac{\Lambda}{4\pi}$, such that $$d_k^i-d_k^{i-1}\rightarrow+\infty,\s i=1,\cdots,l\s and \int_{S^1\times\{d_k^i\}}\kappa_k=2m_i\pi+\pi,\s m_i\in\mathbb{Z},\s i=1,\cdots,l-1,$$ and $$\lim_{T\rightarrow+\infty}\sup_{t\in [d_k^{i-1}+T, d_k^i-T]}\left| \int_{S^1\times \{t\}}\kappa_k-\int_{S^1\times\{d_k^{i-1}+T\}}\kappa_k\right|< \pi.$$ Let $\Lambda<4m\pi$. We prove the lemma by induction on $m$. We first prove it is true for $m=1$. Let $$\lim_{t\rightarrow+\infty} \lim_{k\rightarrow+\infty}\int_{S^1\times\{t\}}\kappa_k=2m_1\pi,\s \lim_{t\rightarrow+\infty} \lim_{k\rightarrow+\infty}\int_{S^1\times\{T_k-t\}}\kappa_k=2m_2\pi,$$ where $m_1$ and $m_2$ are integers. Thus, we can find $T$, such that $$\left|\int_{S^1\times\{T\}}\kappa_k-2m_1\pi\right|<\epsilon,\s and \s \left|\int_{S^1\times\{T_k-T\}}\kappa_k-2m_2\pi\right|<\epsilon$$ when $k$ is sufficiently large. Take a $t_0\in (T,T_k-T)$, such that $$\int_{S^1\times[T,t_0]}|A_{f_k}|^2<2\pi,\s \int_{S^1\times[t_0,T_k-T]}|A_{f_k}|^2\leq 2\pi.$$ By Gauss-Bonnet, $$\left|\int_{S^1\times\{t\}}\kappa_k-\int_{S^1\times\{T\}}\kappa_k\right| \leq \int_{S^1\times[T,t]}|K_{f_k}|d\mu_{f_k} \leq \frac{1}{2}\int_{S^1\times[T,t_0]}|A_{f_k}|^2d\mu_{f_k}<\pi, \s\forall t\in(T,t_0),$$ $$\left|\int_{S^1\times\{t\}}\kappa_k-\int_{S^1\times\{T_k-T\}}\kappa_k\right| \leq \frac{1}{2}\int_{S^1\times[t_0,T_k-T]}|A_{f_k}|^2d\mu_{f_k}<\pi, \s\forall t\in(t_0,T_k-T).$$ Thus, we can take $\epsilon$ to be very small so that $\int_{S^1\times\{t\}}\kappa_k \neq 2i\pi$ for any $i\in\mathbb{Z}$ and $t\in (T,T_k-T)$. Now, we assume the result is true for $m$, and prove it is also true for $m+1$. We have two cases. Case 1: there is a sequence $\{t_k\}$, such that $t_k\rightarrow+\infty$, $T_k-t_k\rightarrow+\infty$, $\int_{S^1\times\{t_k\}}\kappa_k=2m_k\pi+\pi$ for some $m_k\in\mathbb{Z}$.
For this case, we let $f_k'=\frac{f_k(t+t_k,\theta)-f_k(t_k,0)}{\lambda_k}$ which converges weakly to $f_0'$, where $\lambda_k=diam f_k(S^1\times\{t_k\})$. Then by Gauss-Bonnet $$\int_{S^1\times\R}|K_{f_0'}|\geq \left|\int_{S^1\times(0,+\infty)}K_{f_0'}\right| +\left|\int_{S^1\times(-\infty,0)}K_{f_0'}\right|\geq 2\pi.$$ Thus, $\int_{S^1\times\R}|A_{f_0'}|^2\geq 4\pi$. We can find $T$, such that $$\int_{S^1\times[0,t_k-T]}|A_{f_k}|^2<4m\pi,\s and\s \int_{S^1\times[t_k+T,T_k]}|A_{f_k}|^2<4m\pi$$ when $k$ is sufficiently large. Thus, we can use induction on $[0,t_k-T]$ to get $0=\bar{d}_k^0< \bar{d}_k^1<\cdots<\bar{d}_k^{\bar{l}}=t_k-T$, and on $[t_k+T,T_k]$ to get $t_k+T=\tilde{d}_k^0<\cdots< \tilde{d}_k^{\tilde{l}}=T_k$. We can set $$d_k^i=\left\{\begin{array}{ll} \bar{d}_k^i&i<\bar{l}\\ t_k&i=\bar{l}\\ \tilde{d}_k^{i-\bar{l}}&i>\bar{l} \end{array}\right.$$ Then, we complete the proof. Set $f_k^i=\frac{f_k(t+d_k^i,\theta)-f_k(d_k^i,0)}{ diam\, f_k(S^1\times\{d_k^i\})}$, and assume $f_k^i\rightharpoonup f^i$. Since it is easy to check that $$\lim_{T\rightarrow+\infty}\lim_{k\rightarrow+\infty} \int_{S^1\times\{d_k^i+T\}}\kappa_k =\lim_{T\rightarrow+\infty}\lim_{k\rightarrow+\infty} \int_{S^1\times\{d_k^{i+1}-T\}}\kappa_k,$$ we get $$\lim_{T\rightarrow+\infty}\lim_{k\rightarrow+\infty}\int_{S^1\times[d_k^i+T, d_k^{i+1}-T]}K_{f_k}=0.$$ In fact, we can get that for any $t_k<t_k'$ with $$t_k-d_k^i\rightarrow+\infty, \s and\s d_k^{i+1}-t_k'\rightarrow+\infty,$$ we have $$\lim_{k\rightarrow+\infty}\int_{S^1\times[t_k, t_k']}K_{f_k}=0.$$ Hence, we get \[simple\] Let $f_k$ be a simple sequence on $S^1\times[0,T_k]$. Then after passing to a subsequence, $f_k$ has finitely many bubbles. Moreover, we have $$\lim_{T\rightarrow+\infty}\lim_{k\rightarrow+\infty} \int_{S^1\times [T,T_k-T]}K_{f_k}d\mu_{f_k}=\sum_{i=1}^m \int_{S^1\times\R}K_{f^i}d\mu_{f^i},$$ where $f^1$, $\cdots$, $f^m$ are all of the bubbles. Next, we prove a property of the order of the bubbles.
Let $f^1$, $f^2$ be two bubbles. Then 1). If $f^1$ and $f^2$ are of type $T_0$ and $B_0$ respectively, then there is at least one catenoid-type bubble between them. 2). If $f^1$ and $f^2$ are of type $T_\infty$ and $B_\infty$ respectively, then there is at least one $S^2$-type bubble between $f^1$ and $f^{2}$. 1). Suppose $\frac{f_k(\theta,t_k^1+t)-f_k(0,t_k^1)} {diam\, f_k(S^1\times\{t_k^1\})}\rightharpoonup f^1$, and $\frac{f_k(\theta,t_k^{2}+t)-f_k(0,t_k^{2})} {diam\, f_k(S^1\times\{t_k^2\})}\rightharpoonup f^{2}$. Let $t_k'$ be defined by $$\label{infdiam} diam\, f_k(S^1\times\{t_k'\})=\inf\{diam\, f_k(S^1\times\{t\}):t\in[t_k^1+T,t_k^{2}-T] \},$$ where $T$ is sufficiently large. Since $f^1$ is of type $T_0$ and $f^{2}$ of type $B_0$, we get $$\lim_{t\rightarrow+\infty}diam\, f^1(S^1\times\{t\})=0,\s and\s \lim_{t\rightarrow-\infty}diam\, f^{2}(S^1\times\{t\})=0.$$ Then, we have $$t_k'-t_k^1\rightarrow+\infty,\s t_k^{2}-t_k' \rightarrow+\infty.$$ If we set $f_k'(t)=\frac{f_k(\theta,t_k'+t)- f_k(0,t_k')}{diam\, f_k(S^1\times\{t_k'\})}$, then $f_k'$ will converge to a bubble $f'$ with $$diam\, f'(S^1\times \{0\}) =\inf \{diam\, f'(S^1\times\{t\}):t\in \R \}=1.$$ Thus, $f'$ is a catenoid type bubble. 2). If we replace with $$diam\, f_k(S^1\times\{t_k'\})=\sup\{diam\, f_k(S^1\times\{t\}):t\in[t_k^1+T,t_k^{2}-T] \},$$ we will get 2). The structure of the bubble tree of a simple sequence is clear now: [*The $S^2$ type bubbles stand in a line, with a unique catenoid type bubble between the two neighboring $S^2$-type bubbles. There might exist plain-type bubbles between the neighboring $S^2$ type and catenoid type bubbles. A $T_0$ type bubble must follow a $B_\infty$ type bubble, and a $T_\infty$ type bubble must follow a $B_0$ type bubble.*]{} Bubble trees for a sequence of immersed $D$ ------------------------------------------- In this subsection, we will consider a conformal immersion sequence $f_k: D\rightarrow \R^n$ with $\mathcal{S}(f_k)=\{0\}$. 
We assume that $f_k(D)$ can be extended to a closed embedded surface $\Sigma_k$ with $$\int_{\Sigma_k}(1+|A_{\Sigma_k}|^2)d\mu<\Lambda.$$ Take $z_k$ and $r_k$, s.t. $$\label{top} \int_{D_{r_k}(z_k)}|A_{f_k}|^2d\mu_{f_k}=4\pi-\epsilon,$$ and $\int_{D_r(z)}|A_{f_k}|^2d\mu_{f_k}<4\pi-\epsilon$ for any $r<r_k$ and $D_{r}(z)\subset D_\frac{1}{2}$, where $\epsilon$ is sufficiently small. We set $f_k'=f_k(z_k+r_kz)-f_k(z_k)$. Then $\mathcal{S}(f_k',D_L)=\emptyset$ for any $L$. Thus, we can find $\lambda_k$, s.t. $\frac{f_k'(z)}{\lambda_k}$ converges weakly to $f^F$ which is a conformal immersion of $\C$ in $\R^n$. We call $f^F$ the first bubble of $f_k$ at the concentration point $0$. It will be convenient to make a conformal change of the domain. Let $(r, \theta)$ be the polar coordinates centered at $z_k$. Let $\varphi_k:S^1\times\R^1\rightarrow\R^2$ be the mapping given by $$r=e^{-t},\theta=\theta.$$ Then $$\varphi_k^*(dx^1\otimes dx^1+dx^2\otimes dx^2)= e^{-2t}(dt^2+d\theta^2)=r^2(dt^2+d\theta^2).$$ Thus $f_k\circ\varphi_k$ can be considered as a conformal immersion of $S^1\times [0,+\infty)$ in $\R^n$. For simplicity, we will also denote $f_k\circ\varphi_k$ by $f_k$. Set $T_k=-\log r_k$. Similarly to Lemma \[interval\], we have \[interval2\] There are $s_k^0=0<s_k^1<s_k^2< \cdots< s_k^l=T_k$, such that $l\leq \frac{\Lambda}{4\pi}$ and 1). $\int_{S^1\times(s_k^i-1,s_k^i+1)}|A_{f_k}|^2\geq 4\pi$; 2). $\lim\limits_{T\rightarrow +\infty}\lim\limits_{ k\rightarrow+\infty}\sup\limits_{t\in [s_k^i+T,s_k^{i+1}-T]} \int_{S^1\times(t-1,t+1)}|A_{f_k}|^2<4\pi$. Let $f_k^i=f_k(\theta,s_k^i+t)$. A generalized limit of $f_k^i$ is called a bubble with concentration (which may be trivial). These are $W^{2,2}$-conformal immersions of $S^1\times\R$ with finitely many branch points and finite $L^2$ norm of the second fundamental form. However, if we neglect the concentration points, we can also define the types $T_\infty$, $T_0$, $B_{\infty}$, and $B_0$ for them.
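The conformality of the coordinate change $r=e^{-t}$ used above can be verified symbolically (a minimal sketch, assuming Python with sympy is available):

```python
import sympy as sp

# Pull the Euclidean metric dx^2 + dy^2 back through the map
# (t, theta) -> (x, y) = (e^{-t} cos(theta), e^{-t} sin(theta)), i.e. r = e^{-t}.
t, th = sp.symbols('t theta', real=True)
x = sp.exp(-t) * sp.cos(th)
y = sp.exp(-t) * sp.sin(th)
E = sp.simplify(sp.diff(x, t)**2 + sp.diff(y, t)**2)                    # coeff of dt^2
F = sp.simplify(sp.diff(x, t)*sp.diff(x, th) + sp.diff(y, t)*sp.diff(y, th))
G = sp.simplify(sp.diff(x, th)**2 + sp.diff(y, th)**2)                  # coeff of dtheta^2
# E = G = e^{-2t} = r^2 and F = 0: the pullback is a conformal multiple of dt^2 + dtheta^2
assert F == 0
assert sp.simplify(E - sp.exp(-2*t)) == 0 and sp.simplify(G - sp.exp(-2*t)) == 0
```

The vanishing cross term and the equal diagonal coefficients are exactly what is needed for precomposition with $\varphi_k$ to preserve conformality of the immersions.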
Obviously, we can find a $T'$, such that $f_k$ is simple on $S^1\times[s_k^{i}+T',s_k^{i+1}-T']$. Note that the top of $f_k$ on $S^1\times[s_k^i+T',s_k^{i+1}-T']$ is just a part of a generalized limit of $f_k^{i+1}$ and the bottom of $f_k$ on $S^1\times[s_k^i+T',s_k^{i+1}-T']$ is just a part of a generalized limit of $f_k^{i-1}$. We call the union of the nontrivial bubbles of $f_k$ on each $[s_k^i,s_k^{i+1}]$, the generalized limits of $f_k^i$, and $f^F$ the first level of the bubble tree. By Proposition \[simple\], we have $$\begin{array}{lll} \lim\limits_{r\rightarrow 0}\lim\limits_{ k\rightarrow+\infty}\dint_{D_r}K_{f_k}&=& \sum\limits_{i=1}^{l}\lim\limits_{T\rightarrow+\infty} \lim\limits_{k\rightarrow+\infty}\dint_{S^1\times [s_k^i-T'-T,s_k^i+T'+T]} K_{f_k^i}\\[\mv] &&+ \sum\limits_{i=0}^l\lim\limits_{T\rightarrow+\infty} \lim\limits_{k\rightarrow+\infty}\dint_{S^1\times [s_k^i+T'+T,s_k^{i+1}-T'-T]} K_{f_k^i}\\[\mv] &=&\sum\limits_{(t,\theta)\in \mathcal{S}(\{f_k^i\})}\lim\limits_{r\rightarrow 0}\lim\limits_{k\rightarrow+\infty} \dint_{B_r(t,\theta)}K_{f_k^i}+ \sum_j\int_{S^1\times\R}K_{f^j}, \end{array}$$ where $\{f^j\}$ are all the bubbles of the first level. Next, at each concentration point of $\{f_k^i\}$, we get the first level of $\{f_k^i\}$. We usually call them the second level of bubble trees. Such a construction will stop after finitely many steps. \[identity1\] After passing to a subsequence, $f_k$ has finitely many non-trivial bubbles. Moreover, for any $r<1$ $$\lim_{k\rightarrow+\infty} \int_{D_r}K_{f_k}d\mu_{f_k}=\int_{D_r}K_{f^0}d\mu_{f^0}+ \sum_{i=1}^m\int_{S^1\times\R}K_{f^i}d\mu_{f^i},$$ where $f^0$ is the generalized limit of $f_k$, and $f^1$, $f^2$, $\cdots$, $f^m$ are all of the non-trivial bubbles. Immersion sequences of the cylinder which are not simple -------------------------------------------------- Now we assume $f_k$ is not simple on $S^1\times[0,T_k]$.
We also assume $f_k(S^1\times[0,T_k])$ can be extended to a closed immersed surface $\Sigma_k$ with $$\int_{\Sigma_k}(1+|A_{\Sigma_k}|^2)d\mu<\Lambda.$$ Moreover, we assume $f_k(t,\theta)$ and $f_k(T_k+t,\theta)$ have no concentration. Then we still have Lemma \[interval2\]. The other properties are the same as those of the immersion of $D$. Moreover, we have $$\lim_{k\rightarrow+\infty} \int_{S^1\times[0,T_k]}K_{f_k}d\mu_{f_k}=\int_{S^1\times[0,+\infty)}K_{f^B} d\mu_{f^B}+ \int_{S^1\times(-\infty,0]}K_{f^T}d\mu_{f^T}+\sum_{i=1}^m\int_{\C}K_{f^i}d\mu_{f^i},$$ where $f^1$, $\cdots$, $f^m$ are all of the nontrivial bubbles. Proof of Theorem \[main\] ========================= Since Theorem \[main2\] can be deduced directly from subsection 4.3, and Theorem \[sphere\] can be deduced directly from subsection 4.2, we only prove Theorem \[main\]. [*Proof of Theorem \[main\]:*]{} Take a curve $\gamma_i\subset\Sigma_0^i\setminus\mathcal{S}(\{f_k\circ \psi_k\})$ with $\gamma_i(0)=p_i$. We set $\lambda_k^i=diam\,f_k(\gamma_i)$, and $\tilde{f}_k^i=\frac{f_k\circ\psi_k-f_k\circ\psi_k(p_i)}{\lambda_k^i}$ which is a mapping from $\Sigma_0^i$ into $\R^n$. It is easy to check that $\tilde{f}_k^i\in W^{2,2}_{{conf},loc} (\Sigma_0^i,\psi_k^{\ast}(h_k),\R^n)$. Given a point $p\in\Sigma_0^i$, we choose $U_k$, $U_0$, $\vartheta_k$, $\vartheta_0$ as in Theorem \[D.K.\]. Let $\hat{f}_k^i=\tilde{f}_k^i(\vartheta_k)$ which is a map from $D$ into $\R^n$. Let $V=\vartheta_0(D_\frac{1}{2})$. Since $\vartheta_k$ converges to $\vartheta_0$, $\vartheta_k^{-1}(V)\subset D_\frac{3}{4}$ for any sufficiently large $k$.
When $p$ is not a concentration point, by Lemma \[measureconvergence\], for any $\varphi$ with $supp\varphi\subset\subset V$, we have $$\int_{V}\varphi K_{\tilde{f}_k^i}d\mu_{\tilde{f}_k^i} =\int_{D_\frac{3}{4}}\varphi(\vartheta_k)K_{\hat{f}_k^i} d\mu_{\hat{f}_k^i}\rightarrow \int_{D_\frac{3}{4}}\varphi(\vartheta_0) K_{\hat{f}_0^i}d\mu_{\hat{f}_0^i}=\int_V\varphi K_{f_0^i}d\mu_{f_0^i}.$$ When $p$ is a concentration point, by Lemma \[identity1\], we get $$\int_{V}\varphi K_{\tilde{f}_k^i}d\mu_{\tilde{f}_k^i} \rightarrow \int_V\varphi K_{f_0^i}d\mu_{f_0^i}+ \varphi(p)\sum_j\int_{S^1\times \R}K_{f^i_j}d\mu_{f^i_j},$$ where $\{f^i_j\}$ is the set of nontrivial bubbles of $\hat{f}_k^i$ at $p$. Next, we consider the convergence of $f_k$ at the collars. Let $a^j$ be the intersection of $\overline{\Sigma_0^i}$ and $\overline{\Sigma_0^{i'}}$. We set $\check{f}_k^j=f_k(\phi_k^j)$, and $T_k^j =\frac{\pi^2}{l_k^j}-T$. We may choose $T$ to be sufficiently large such that $\check{f}_k^j(T_k^j-t,\theta)$ and $\check{f}_k^j (-T_k^j+t,\theta)$ have no blowup point. Then $\check{f}_k^j$ satisfies the conditions in subsection 2.4. So the convergence of $\check{f}_k^j$ is clear. Since $$\check{f}_k^j=f_k\circ\phi_k^j=f_k\circ\psi_k\circ(\varphi_k\circ\phi_k^j)= \tilde{f}_k(\varphi_k\circ\phi_k^j),$$ the images of the limits of $\check{f}_k^j(T_k^j-t,\theta)$ and $\check{f}_k^j (-T_k^j+t,\theta)$ are parts of the images of $\tilde{f}_0^i$ and $\tilde{f}_0^{i'}$. Then we have $$\lim_{\delta\rightarrow 0} \lim_{k\rightarrow+\infty}\int_{\Sigma_0(\delta,a^j)} K_{f_k}=\sum_{i'}\int_{S^1\times\R}K_{f^{i'}},$$ where all $f^{i'}$ are nontrivial bubbles of $\check{f}_k^j$. A remark about trivial bubbles ============================== The methods in section 4 can also be used to find all bubbles with $\|A\|_{L^2}\geq\epsilon_0$ for a fixed $\epsilon_0>0$. We only consider the simple sequence $f_k$ on $S^1\times[0,T_k]$ here.
Let $t_k$ be a sequence with $t_k, T_k-t_k\rightarrow \infty$, such that $\frac{f_k(t+t_k,\theta)-f_k(t_k,0)}{\lambda_k}$ converges to an $f_0\in W^{2,2}(S^1\times\R,\R^n)$ with $\int_{S^1\times\R}|A_{f_0}|^2\geq\epsilon_0^2$. Take $T$, such that $\int_{S^1\times[-T,T]} |A_{f_0}|^2\geq\frac{\epsilon_0^2}{2}$. We consider the convergence on $S^1\times [0,t_k-T]$ and $S^1\times[t_k+T,T_k]$ respectively. In this way, we can find out all the bubbles. [s2]{} M. Bauer and E. Kuwert: Existence of minimizing Willmore surfaces of prescribed genus. [*Int. Math. Res. Not.*]{}, [**10**]{} (2003), 553-576. W. Blaschke: Vorlesungen über Differentialgeometrie, III, Springer 1929. B. Y. Chen: Some conformal invariants of submanifolds and their applications, [*Boll. Un. Mat. Ital.*]{} [**10**]{} (1974), 380–385. J. Chen, Y. Li: Bubble tree of a class of conformal mapping & applications to Willmore functional. [*In preparation*]{}. L. Chen: Convergence behaviors of a conformal immersion sequence of cylinders. [*Preprint*]{}. D. DeTurck and J. Kazdan: Some regularity theorems in Riemannian geometry. [*Ann. Sci. École Norm. Sup. (4)*]{} [**14**]{} (1981), 249–260. F. Hélein: Harmonic maps, conservation laws and moving frames. Translated from the 1996 French original. With a foreword by James Eells. Second edition. Cambridge Tracts in Mathematics, 150. Cambridge University Press, Cambridge, 2002. C. Hummel: Gromov’s compactness theorem for pseudo-holomorphic curves. [*Progress in Mathematics*]{} [**151**]{}, Birkhäuser Verlag, Basel (1997). R. Kusner: Comparison surfaces for the Willmore problem. [*Pacific J. Math.*]{}, [**138**]{} (1989), 317–345. E. Kuwert and Y. Li: $W^{2,2}$-conformal immersions of a closed Riemann surface into $\R^n$. [*arXiv:1007.3967*]{}. E. Kuwert and R. Schätzle: The Willmore flow with small initial energy. [*J. Differential Geom.*]{}, [**57**]{} (2001), 409–441. E. Kuwert and R. Schätzle: Removability of point singularities of Willmore surfaces, [*Ann.
of Math.*]{} [**160**]{} (2004), 315–357. E. Kuwert and R. Schätzle: Closed surfaces with bounds on their Willmore energy, [*Preprint Centro di Ricerca Matematica Ennio De Giorgi, Pisa*]{} 2008. Y. Luo, Y. Li, H. Tang: On the convergence of a conformal map sequence from 2-disk to $\R^n$. [*Preprint*]{}. W.P. Minicozzi II: The Willmore functional on Lagrangian tori: its relation to area and existence of smooth minimizers, [*J. Amer. Math. Soc.*]{} [**8**]{} (1995), 761-791. S. Müller and V. Šverák: On surfaces of finite total curvature, [*J. Differential Geom.*]{} [**42**]{} (1995), 229–258. T. Rivière: Lipschitz conformal immersions from degenerating surfaces with $L^2$-bounded second fundamental form. [*Preprint*]{}. L. Simon: Existence of surfaces minimizing the Willmore functional, [*Comm. Anal. Geom.*]{} [**1**]{} (1993), 281–326. R. Schätzle: The Willmore boundary problem, [*Calc. Var. P.D.E.*]{} [**37**]{} (2010), 275-302. G. Thomsen: Über Konforme Geometrie, I: Grundlagen der konformen Flächentheorie. [*Abh. Math. Sem. Hamburg*]{} [**3**]{} (1923), 31-56. T. J. Willmore: Total Curvature in Riemannian Geometry, John Wiley & Sons, New York (1982).
--- abstract: 'The thermodynamic and superfluid properties of the dilute two-dimensional binary Bose mixture at low temperatures are discussed. We also consider the problem of the emergence of the long-range order in these systems. All calculations are performed by means of the celebrated Popov path-integral approach for the Bose gas with a short-range interparticle potential.' author: - Pavlo Konietin - Volodymyr Pastukhov date: 'Received: date / Accepted: date' title: 2D dilute Bose mixture at low temperatures --- Introduction {#intro} ============ The spatial dimensionality plays a crucial role in the behavior of interacting many-boson systems. Perhaps the most exciting phase diagram is obtained in the two-dimensional case (for review, see [@Posazhennikova; @Hadzibabic]), where the ground-state Bose condensate state [@Schick; @Lozovik; @Ovchinnikov; @Cherny; @Pilati; @Mazzanti] is altered by the low-temperature Berezinskii-Kosterlitz-Thouless (BKT) phase with the characteristic power-law [@Popov_72] decay of the one-particle density matrix. To describe these systems appropriately one needs to extend [@Mora] the standard approach with the separated condensate and to use the phase-density formulation [@Popov], renormalization-group [@Fisher; @Kolomeisky; @Dupuis_1; @Dupuis_2; @Rancon; @Krieg] or the effective field-theoretic [@Andersen; @Chien] treatments. In particular, Popov’s theory allows one to find out the low-energy structure of the one-particle Green’s functions [@Popov_Seredniakov], and an improved version of this approach [@Andersen_etal; @Khawaja], which takes into account phase fluctuations exactly, is capable of explaining [@Cockburn] experiments with two-dimensional Bose gases.
In contrast to the two-dimensional Bose systems in an optical lattice [@Kagan; @Altman; @Kuklov1; @Kuklov2; @Schmidt; @Guglielmino; @Rousseau], which are characterized by nontrivial phase diagrams, the equilibrium properties of homogeneous binary 2D bosonic gases with continuous translational symmetry are less studied. Reliable results for the stability condition of low-density mixtures were obtained in Refs. [@Kolezhuk; @Lee] by means of a renormalization-group technique, and recently in [@Petrov] the ground-state behavior of two-dimensional binary Bose gases in the Bogoliubov approximation was discussed in the context of the droplet-formation phenomenon. In the present paper we analyse the ground-state properties of a mixture of two sorts of Bose particles interacting through analytically tractable short-range two-body potentials in two spatial dimensions. Formulation {#sec: 2} =========== We adopt a path-integral formulation for the two-component Bose system with the Euclidean action $$\begin{aligned} \label{S} S=S_0+S_{int},\end{aligned}$$ where the ideal-gas term reads $$\begin{aligned} \label{S_0} S_0=\int dx \,\Psi^*_a(x)\left\{\partial_{\tau}+\frac{\hbar^2 }{2m_a}\Delta+\mu_a\right\}\Psi_a(x)\end{aligned}$$ and the second one describes the intra- and interspecies two-body interactions $$\begin{aligned} S_{int}=-\frac{1}{2}\int dx\int dx'\Phi_{ab}(x-x') |\Psi_a(x)|^2|\Psi_b(x')|^2.\end{aligned}$$ The notations are standard: the $(2+1)$-vector $x=(\tau, {\bf r})$, $\int dx=\int^{\beta}_0d\tau\int_{\mathcal{A}}d {\bf r}$, where $\mathcal{A}$ is a large two-dimensional periodicity “volume”, $\beta=1/T$ is the inverse temperature, the complex-valued $\beta$-periodic fields $\Psi_a(x)$ describe bosonic states, and summation over repeated indices $a,b=(A,B)$ is assumed. We also denote the chemical potentials $\mu_a$, the masses $m_a$ of particles of each sort, and the interaction potentials $\Phi_{ab}(x)=\delta(\tau)\Phi_{ab}({\bf r})$.
It is well known that the Popov prescription [@Popov] is very convenient for the description of two-dimensional one-component many-boson systems; therefore, in the remainder of the paper we apply this approach to the two-component Bose gas. The key idea is to introduce a momentum scale $\hbar \Lambda$ that separates the fields $\Psi_a(x)$ into “slowly” varying parts $\psi_a(x)$ and “rapidly” varying parts $\tilde{\psi}_a(x)$: $$\begin{aligned} \label{Psi} \Psi_a(x)=\psi_a(x)+\tilde{\psi}_a(x), \, \Psi^*_a(x)=\psi^*_a(x)+\tilde{\psi}^*_a(x),\end{aligned}$$ (note that $\int_{\mathcal{A}}d{\bf r}\,\psi_a(x)\tilde{\psi}_a(x)=0$), with subsequent functional integration over $\tilde{\psi}_a(x)$. In this way, after passing to the phase-density representation of the “slowly” varying fields $\psi_a(x)$ $$\begin{aligned} \label{psi} \psi_a(x)=\sqrt{n_a(x)}e^{i\varphi_a(x)}, \, \psi_a^*(x)=\sqrt{n_a(x)}e^{-i\varphi_a(x)},\end{aligned}$$ one obtains an effective hydrodynamic action which accurately captures the low-energy physics of two-dimensional Bose systems in the whole temperature region. In the following we discuss the zero-temperature limit only. If in addition the system is dilute, the role of the $\tilde{\psi}_a(x)$-fields reduces to the replacement of the original interparticle potentials $\Phi_{ab}({\bf r})$ in the hydrodynamic action by the elements of the $t$-matrix (see the Appendix for details). The final hydrodynamic action reads $$\begin{aligned} \label{S_h} S_h=\int dx\left\{n_a(x)i\partial_{\tau}\varphi_a(x)-\frac{\hbar^2}{2m_a}n_a(x)(\nabla \varphi_a(x))^2\right.\nonumber\\ \left.-\frac{\hbar^2}{8m_a}\frac{(\nabla n_a(x))^2}{n_a(x)} -\frac{1}{2}t_{ab}n_a(x)n_b(x)+\mu_a n_a(x)\right\},\end{aligned}$$ where $t_{ab}$ is responsible for the two-body collisions.
For spatially homogeneous systems one makes use of the following decomposition in terms of the Fourier harmonics $$\begin{aligned} \label{n_a} n_a(x)=n_a+\frac{1}{\sqrt{\mathcal{A}\beta}}\sum_{K}e^{iKx}n^a_{K}, \ \ \varphi_a(x)=\frac{1}{\sqrt{\mathcal{A}\beta}}\sum_{K}e^{iKx}\varphi^a_{K},\end{aligned}$$ where $K=(\omega_k, {\bf k})$ stands for the bosonic Matsubara frequency $\omega_k$ and the two-dimensional wave-vector ${\bf k}$ (recall that $|{\bf k}|\le\Lambda$ and ${\bf k}\neq 0$). From the identities $-\partial \Omega/\partial\mu_a=n_a\mathcal{A}$ and the non-participation of the “rapidly” varying fields in the zero-temperature thermodynamics it is easy to show [@Pastukhov_q2D] that $n_a$ are the equilibrium densities of the two components. In the extremely dilute limit the properties of the system are correctly described by the Gaussian part of the action (\[S\_h\]) $$\begin{aligned} \label{S_G} S_{G}=-\beta\mathcal{A}\frac{t_{ab}}{2}n_an_b-\frac{1}{2}\sum_{K} \left\{\vphantom{\left[\frac{\varepsilon_a(k)}{2 n_a}\delta_{ab} +t_{ab}\right]}\omega_k\varphi^a_K n^a_{-K}-\omega_k\varphi^a_{-K}n^a_{K} \right.\nonumber\\ \left.+2\varepsilon_a(k)n_a |\varphi^a_{K}|^2+\left[\frac{\varepsilon_a(k)}{2 n_a}\delta_{ab} +t_{ab}\right]n^a_{K}n^b_{-K}\right\},\end{aligned}$$ (where $\varepsilon_a(k)=\hbar^2k^2/2m_a$ are the free-particle dispersions and $\delta_{ab}$ is the Kronecker delta), and performing a simple Gaussian integration of the partition function in the low-temperature limit we obtain the ground-state energy of the binary Bose gas $$\begin{aligned} \label{E_0} \frac{E_0}{\mathcal{A}} = \frac{1}{2}n_an_bt_{ab} +\frac{1}{2\mathcal{A}}\sum_{|{\bf k}|\le\Lambda }\left[E_{+}(k) + E_{-}(k)-\varepsilon_A(k)-\varepsilon_B(k)-n_at_{aa}\right].\end{aligned}$$ Here the two branches [@Vakarchuk1; @Vakarchuk2; @Rovenchak] of the Bogoliubov spectrum read $$\begin{aligned} &&E^2_{\pm}(k) = \frac{1}{2}\left\{E^2_A(k) + E^2_B(k)\right.\nonumber\\ &&\left.
\pm \sqrt{[E^2_A(k) - E^2_B(k)]^2 +16\varepsilon_A(k)\varepsilon_B(k)n_An_Bt^2_{AB}}\right\},\end{aligned}$$ where $E^2_a(k)=\varepsilon^2_a(k)+2\varepsilon_a(k)n_at_{aa}$ represents the dispersion relation of an individual component. In the long-wavelength limit these two branches of the spectrum of collective modes exhibit phonon-like behavior $E_{\pm}(k\rightarrow 0)=\hbar kc_{\pm}$. The terms in the last row of Eq. (\[E\_0\]) do not appear during the functional integration and should be inserted by hand [@Chang]. This procedure, however, is in agreement with calculations performed in the operator formalism [@Pastukhov_InfraredStr] and with various regularization schemes [@Salasnich_Toigo]. In order to study the superfluid properties of the two-component system, let us suppose each constituent moves slowly with velocity ${\bf v}_a$. In practice, while describing the superfluid hydrodynamics, the smallness of ${\bf v}_a$ means that $|{\bf v}_a|\ll c_{-}$, providing the local thermodynamic equilibrium. The action of the moving Bose mixture is readily written by applying the gauge transformation $\Psi_a(x)\rightarrow \Psi_a(x)e^{-im_a{\bf r}{\bf v}_a/\hbar}$ to the initial one (\[S\]) $$\begin{aligned} \label{S_v} S_v=S-\beta\mathcal{A}\frac{\rho_av^2_a}{2}+i\hbar\int dx \Psi^*_a(x){\bf v}_a\nabla\Psi_a(x),\end{aligned}$$ where $\rho_a$ denote the mass densities. In the same manner as argued above, it is easy to show that the fields $\tilde{\psi}_a(x)$ do not contribute to the macroscopic properties of the two-component Bose system at low temperatures. Therefore the last term in (\[S\_v\]) only shifts the Matsubara frequencies $\omega_k\rightarrow \omega_k+i\hbar{\bf k}{\bf v}_a$ standing next to the $\varphi^a_K n^a_{-K}$ term in the effective hydrodynamic action (\[S\_h\]), (\[S\_G\]).
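As a quick numerical sanity check of the two-branch Bogoliubov spectrum quoted above, the following sketch (hypothetical parameter values, with $\hbar=1$) verifies that for $t_{AB}=0$ the branches $E_{\pm}(k)$ reduce to the single-component dispersions $E_A(k)$ and $E_B(k)$ of the decoupled gases.

```python
import math

def eps(k, m, hbar=1.0):
    # free-particle dispersion ε(k) = ħ²k²/(2m)
    return hbar**2 * k**2 / (2.0 * m)

def bogoliubov_branches(k, mA, mB, nA, nB, tAA, tBB, tAB):
    # E²_± = ½{E_A² + E_B² ± sqrt[(E_A² − E_B²)² + 16 ε_A ε_B n_A n_B t_AB²]}
    EA2 = eps(k, mA)**2 + 2.0 * eps(k, mA) * nA * tAA
    EB2 = eps(k, mB)**2 + 2.0 * eps(k, mB) * nB * tBB
    disc = math.sqrt((EA2 - EB2)**2
                     + 16.0 * eps(k, mA) * eps(k, mB) * nA * nB * tAB**2)
    return (math.sqrt(0.5 * (EA2 + EB2 + disc)),
            math.sqrt(0.5 * (EA2 + EB2 - disc)))

# decoupled limit t_AB = 0: the branches are just E_A and E_B
Ep, Em = bogoliubov_branches(k=0.3, mA=1.0, mB=1.5, nA=1.0, nB=0.7,
                             tAA=0.2, tBB=0.1, tAB=0.0)
```

In the decoupled limit the square root collapses to $|E_A^2-E_B^2|$, so $E_+$ is the larger and $E_-$ the smaller of the two single-component dispersions.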
Again, the explicit calculations of the energy of the moving Bose mixture can be carried out to the end only in the extremely dilute limit, where one can neglect the anharmonic terms in $S_h$. Up to quadratic order in the velocities ${\bf v}_a$ we have $$\begin{aligned} \label{E_v} E_v/\mathcal{A}=E_0/\mathcal{A}+\frac{1}{2}\rho_{ab}{\bf v}_a{\bf v}_b+\ldots,\end{aligned}$$ where the symmetric matrix of superfluid densities reads $$\begin{aligned} \label{rho_ab} \rho_{ab}=\rho_a\delta_{ab}-\Delta\rho_{ab},\end{aligned}$$ with $$\begin{aligned} \label{rho_AA} \Delta\rho_{AA}=\frac{1}{2\mathcal{A}\beta}\sum_K\frac{\hbar^2k^2[\omega^2_k+E^2_B(k)]} {[\omega^2_k+E^2_{+}(k)][\omega^2_k+E^2_{-}(k)]}\nonumber\\ \times\left\{1-\frac{2\omega^2_k[\omega^2_k+E^2_B(k)]} {[\omega^2_k+E^2_{+}(k)][\omega^2_k+E^2_{-}(k)]}\right\},\end{aligned}$$ $$\begin{aligned} \label{rho_AB} \Delta\rho_{AB}=-\frac{4}{\mathcal{A}\beta}\sum_K\frac{\hbar^2k^2\omega^2_k\varepsilon_A(k) \varepsilon_B(k)n_An_Bt^2_{AB}}{[\omega^2_k+E^2_{+}(k)]^2[\omega^2_k+E^2_{-}(k)]^2},\end{aligned}$$ calculated in two dimensions. At zero temperature the whole system is superfluid, and Galilean invariance requires $\Delta\rho_{AA}=\Delta\rho_{BB}=-\Delta\rho_{AB}=\Delta\rho$, which can easily be verified by direct integration over the Matsubara frequencies in Eqs. (\[rho\_AA\]), (\[rho\_AB\]). In two-dimensional systems of bosons residing exactly in the ground state the lowest one-particle state is macroscopically occupied. Although the developed thermal fluctuations at any finite temperature totally deplete this Bose condensate, it is interesting to calculate its value for the weakly-interacting gas from a methodological point of view.
In the Popov approach the condensate density is obtained as follows: $$\begin{aligned} \sqrt{n_{0a}}=\langle\Psi_a(x)\rangle|_{T=0}\rightarrow \langle\psi_a(x)\rangle,\end{aligned}$$ where the last average should be calculated very carefully [@Pastukhov_twocomp] within the hydrodynamic action (\[S\_h\]) $$\begin{aligned} \sqrt{n_{0a}}=\lim_{\tau'\rightarrow \tau-0}\langle \sqrt{n_a(x)}e^{i\varphi_a(x')}\rangle{\big|}_{{\bf r}'={\bf r}}.\end{aligned}$$ Taking into account the Gaussian fluctuations only, one gets the result $$\begin{aligned} n_{0A}=n_{A}-\frac{1}{2 \mathcal{A}}\sum_{|{\bf k}| \le \Lambda}\left\{ \frac{\varepsilon_A(k)+n_At_{AA}}{E_{+}(k)+E_{-}(k)}\left[1+\frac{E^2_B(k)}{E_{+}(k)E_{-}(k)}\right]-1 \right\}.\end{aligned}$$ In addition to the superfluid properties of the binary Bose mixture, the quantity $\rho_{ab}$ together with the inverse compressibility matrix $\partial \mu_a/\partial n_b$ determines the two velocities of sound propagation [@Andreev] in the two-component bosonic medium. Finally, it also determines the exponents of the one-body density matrices $$\begin{aligned} F_{ab}(|{\bf r}-{\bf r}'|)=\langle\Psi^*_a(x)\Psi_b(x')\rangle|_{\tau\rightarrow \tau'},\end{aligned}$$ at large particle separations when $T\neq0$. In particular, by using the results of Ref. [@Pastukhov_twocomp] for the various two-legged vertices it is easy to argue that the [*exact*]{} asymptotic behavior of $F_{ab}(r)$ reads $$\begin{aligned} F_{ab}(r\rightarrow \infty)\propto \frac{\delta_{ab}}{r^{\eta_a}}, \ \ \eta_a=\frac{m^2_aT}{2\pi \hbar^2}\rho^{-1}_{aa},\end{aligned}$$ which indicates the Berezinskii-Kosterlitz-Thouless phase (here $\rho^{-1}_{ab}$ are the elements of the matrix inverse of $\rho_{ab}$). At very low temperatures, when the thermal depletion of the superfluid density (which is of order $T^3$ for two-dimensional systems) can be neglected, the exponents $\eta_a$ are fully determined by $\Delta\rho$ given by Eq. (\[rho\_AA\]) for dilute mixtures.
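The relation between the superfluid-density matrix and the BKT exponents $\eta_a=m_a^2T\rho^{-1}_{aa}/(2\pi\hbar^2)$ can be illustrated numerically. The sketch below (hypothetical parameter values, equal masses, $\hbar=1$) inverts $\rho_{ab}=\rho_a\delta_{ab}-\Delta\rho_{ab}$ with $\Delta\rho_{AA}=\Delta\rho_{BB}=-\Delta\rho_{AB}=\Delta\rho$ and checks the first-order expansion $\eta_A\approx \frac{mT}{2\pi\hbar^2 n_A}[1+\Delta\rho/(mn_A)]$ quoted later in the text for small $\Delta\rho$.

```python
import numpy as np

def bkt_exponents(m, T, nA, nB, drho, hbar=1.0):
    # superfluid-density matrix ρ_ab = ρ_a δ_ab − Δρ_ab with
    # Δρ_AA = Δρ_BB = −Δρ_AB = Δρ (Galilean invariance at T = 0)
    rho = np.array([[m * nA - drho, drho],
                    [drho, m * nB - drho]])
    rho_inv = np.linalg.inv(rho)
    pref = m**2 * T / (2.0 * np.pi * hbar**2)
    return pref * rho_inv[0, 0], pref * rho_inv[1, 1]

m, T, nA, nB, drho = 1.0, 0.01, 1.0, 0.5, 1e-4  # hypothetical values
etaA, etaB = bkt_exponents(m, T, nA, nB, drho)
etaA_approx = m * T / (2.0 * np.pi * nA) * (1.0 + drho / (m * nA))
```

For $\Delta\rho$ this small the exact matrix inversion and the first-order expansion agree to better than one part in $10^6$.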
Model with the short-range interaction {#sec:3} ====================================== For specific calculations we choose the two-body potentials in the following form $$\begin{aligned} \label{Phi_r} \Phi_{ab}({\bf r})=\frac{g_{ab}}{\pi R^2_{ab}}e^{-r^2/R^2_{ab}},\end{aligned}$$ where $g_{ab}$ are the coupling constants responsible for the interaction strength and the parameters $R_{ab}$ characterize the effective range of the potentials. In the limit $R_{ab}\rightarrow 0$ the function in Eq. (\[Phi\_r\]) tends to a $\delta$-function, and the coupling constants $g_{ab}$ to leading order can be rewritten via the experimentally measured $s$-wave scattering lengths $l_{ab}$ [@Volosniev] $$\begin{aligned} \label{g_ab} \frac{1}{g_{ab}}=-\frac{m_{ab}}{\pi\hbar^2}\ln[e^{\gamma/2}l_{ab}/R_{ab}],\end{aligned}$$ where we denote the reduced masses $1/m_{ab}=1/m_a+1/m_b$ and $\gamma=0.57721\ldots$ is the Euler-Mascheroni constant. On the other hand, by using the equations obtained in the Appendix we are in a position to express $g_{ab}$ via the elements of the $t$-matrix $$\begin{aligned} \frac{1}{g_{ab}}=\frac{1}{t_{ab}}-\frac{m_{ab}}{\pi\hbar^2}\ln[2e^{-\gamma/2}/(R_{ab}\Lambda)],\end{aligned}$$ found in the limit $R_{ab}\Lambda\ll 1$.
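The two relations above can be combined to eliminate the potential range $R_{ab}$. A minimal numerical sketch (hypothetical parameter values, $\hbar=1$) checks that the resulting $t_{ab}$ is independent of $R_{ab}$ and matches the closed form $t_{ab}=\pi\hbar^2/\{m_{ab}\ln[2e^{-\gamma}/(l_{ab}\Lambda)]\}$ quoted in the next paragraph.

```python
import math

GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def inv_g(l, R, m_ab, hbar=1.0):
    # 1/g_ab = −(m_ab/πħ²) ln(e^{γ/2} l / R)
    return -m_ab / (math.pi * hbar**2) * math.log(math.exp(GAMMA / 2.0) * l / R)

def t_matrix(l, R, Lam, m_ab, hbar=1.0):
    # 1/t_ab = 1/g_ab + (m_ab/πħ²) ln(2 e^{−γ/2} / (R Λ))
    inv_t = inv_g(l, R, m_ab, hbar) + m_ab / (math.pi * hbar**2) * \
        math.log(2.0 * math.exp(-GAMMA / 2.0) / (R * Lam))
    return 1.0 / inv_t

def t_closed(l, Lam, m_ab, hbar=1.0):
    # closed form: t_ab = πħ² / (m_ab ln[2 e^{−γ} / (l Λ)])
    return math.pi * hbar**2 / (m_ab * math.log(2.0 * math.exp(-GAMMA) / (l * Lam)))

# two different (small) potential ranges R give the same t_ab
t1 = t_matrix(l=1e-3, R=0.05, Lam=1.0, m_ab=0.5)
t2 = t_matrix(l=1e-3, R=0.02, Lam=1.0, m_ab=0.5)
```

The $\ln R_{ab}$ terms from the two relations cancel analytically, which is exactly the nonuniversal-parameter cancellation discussed below.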
These two equations allow one to eliminate the dependence on the nonuniversal parameters $g_{ab}$ and $R_{ab}$ in the formula for the matrix $t_{ab}$ and, as a consequence, in the hydrodynamic action (\[S\_h\]) $$\begin{aligned} \label{t_ab} t_{ab}=\frac{\pi \hbar^2}{m_{ab}}\frac{1}{\ln\left[\frac{2e^{-\gamma}}{l_{ab}\Lambda}\right]}.\end{aligned}$$ To simplify further consideration we assume the equal-mass limit $m_a=m_b=m$, in which the excitation spectrum of the system is Bogoliubov-like, $E^2_{\pm}(k)=\hbar^4k^4/4m^2+\hbar^2k^2c^2_{\pm}$, with the sound velocities given by $$\begin{aligned} \label{c_pm} c^2_{\pm}=(c^2_A+c^2_B)/2\pm \sqrt{(c^2_A-c^2_B)^2/4+n_An_Bt^2_{AB}/m^2},\end{aligned}$$ where $c_a=\sqrt{n_at_{aa}/m}$ denote the sound velocities of the individual components. A great advantage of two spatial dimensions in the equal-mass limit is that all integrals can be performed analytically to the very end. In particular, for the ground-state energy (\[E\_0\]) one obtains $$\begin{aligned} \label{E_Bog} E_{0}/\mathcal{A}=\frac{1}{2}t_{ab}n_an_b+\frac{m^3}{4\pi\hbar^2}\sum_{j=\pm}c^4_j\ln\left[e^{1/4}\frac{mc_j}{\hbar\Lambda}\right],\end{aligned}$$ (recall that this expression is valid only in the limit $\hbar^2\Lambda^2/m\gg \mu_{a}\sim t_{ab}n_b$), which is consistent with the general equation for two-dimensional binary systems [@Werner]. With the same accuracy we have calculated the quantity $\Delta\rho$ determining the matrix of superfluid densities $$\begin{aligned} \Delta\rho/m=\frac{n_An_Bt^2_{AB}}{8\pi\hbar^2}\frac{c^4_{+}+2c^2_{+}c^2_{-} \ln[c^2_{-}/c^2_{+}]-c^4_{-}}{(c^2_{+}-c^2_{-})^3},\end{aligned}$$ which in our approximation shifts the BKT exponents $\eta_a=\frac{mT}{2\pi\hbar^2 n_a}\left[1+\Delta\rho/(mn_a)+\ldots\right]$ at low temperatures. For the dilute Bose mixture with the symmetric interaction $t_{AA}=t_{BB}$, at any density ratio this effect becomes more tangible when the system is extremely close to the phase-separation region.
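Since $c^2_{\pm}$ are the eigenvalues of a symmetric $2\times2$ matrix, they obey $c^2_{+}+c^2_{-}=c^2_A+c^2_B$ and $c^2_{+}c^2_{-}=c^2_Ac^2_B-n_An_Bt^2_{AB}/m^2$, so the lower branch softens, $c_{-}\rightarrow 0$, exactly at the mean-field phase-separation boundary $\det t_{ab}=0$. A short numerical check with hypothetical parameter values:

```python
import math

def sound_velocities(nA, nB, tAA, tBB, tAB, m):
    # c²_± = (c_A² + c_B²)/2 ± sqrt[(c_A² − c_B²)²/4 + n_A n_B t_AB²/m²]
    cA2 = nA * tAA / m
    cB2 = nB * tBB / m
    s = math.sqrt((cA2 - cB2)**2 / 4.0 + nA * nB * tAB**2 / m**2)
    return (cA2 + cB2) / 2.0 + s, (cA2 + cB2) / 2.0 - s  # c_+², c_−²

# miscible (stable) mixture: t_AA t_BB > t_AB²
cp2, cm2 = sound_velocities(nA=1.0, nB=0.5, tAA=0.3, tBB=0.2, tAB=0.1, m=1.0)
```

The sum and product identities make it easy to verify a computed pair $(c_+, c_-)$ without rediagonalizing.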
The interaction-induced condensate depletion of component $A$ at absolute zero reads $$\begin{aligned} \label{n_Bog} n_A-n_{0A}=\frac{m^2}{4\pi\hbar^4}\left\{\vphantom{\ln\left[\frac{c^2_{-}}{c^2_{+}}\right]}c^2_A+\frac{(c^2_A-c^2_{+})(c^2_A-c^2_{-})}{c^2_{+}-c^2_{-}}\ln\left[\frac{c^2_{-}}{c^2_{+}}\right]\right\}.\end{aligned}$$ The final stage of the above calculations is the determination of the cut-off parameter $\Lambda$. The most natural way to find it was proposed originally by Popov and consists in the minimization of the thermodynamic potential (in our zero-temperature case, just the ground-state energy). It is easy to confirm by direct calculation that, with logarithmic accuracy, $E_{0}$ does not depend on $\Lambda$ [@Petrov], i.e., $\partial E_{0}/\partial \Lambda=0$. This observation allows one to choose the cut-off parameter, up to an irrelevant factor, from dimensional arguments, for instance, $\Lambda^2\sim \max\{n_A,n_B\}$ or $\Lambda^2\sim n=n_A+n_B$, which correctly reproduces the one-component limit [@Schick] and, together with the smallness of the coupling constants $g_{ab}$, ensures that the system is not in the crystalline phase [@Kroiss]. It is instructive to apply the above-presented Popov approach to the one-component model. The properties of the single-component two-dimensional Bose gas can be obtained from Eqs. (\[E\_Bog\]), (\[n\_Bog\]) by letting the density of one sort of particles (say, $n_B$) tend to zero. Then, identifying $n_A$ with the total density $n$ of the system and setting $m_A=m$, $t_{AA}=t$ ($l_{AA}=l$), we obtain in the leading order that $\partial E_{0}/\partial \Lambda=0$, i.e., we can again choose the parameter $\Lambda^2\sim n$ by using dimensional arguments.
But the condition of cancellation of the subleading terms gives for the ground-state energy $E_0/\mathcal{A}=\frac{2\pi\hbar^2n^2}{m}\lambda^2(1-\lambda^2/2)$, where $\Lambda^2=4\pi e n\lambda^2$ and $\lambda$ is determined by the transcendental equation $1/\lambda^2=\ln\frac{1}{nl^2}-1-2\gamma-\ln\pi+\ln\frac{1}{\lambda^2}$. The iterative solution in the dilute regime $nl^2\ll 1$ yields for the coefficient in the formula for $E_0$: $$\begin{aligned} \lambda^2(1-\lambda^2/2)\to \frac{1}{1/2+1/\lambda^2}= \frac{1}{\ln\frac{1}{nl^2}+\ln\ln\frac{1}{nl^2}-1/2-2\gamma-\ln\pi}+\ldots,\end{aligned}$$ which should be compared with that of Refs. [@Mora09; @Astrakharchik_etal]. It is easily seen that Popov's treatment correctly reproduces, to the leading order, the results of more sophisticated approaches, and we may therefore use this formulation to obtain the beyond-mean-field stability condition of two-component systems. First, of course, one should calculate the cut-off parameter, which in this case is determined as follows: $$\begin{aligned} \label{cut_off} \sum_{j=\pm}\ln\left[e^{1/2}\frac{mc_j}{\hbar\Lambda}\right]\frac{\partial c^4_j}{\partial \Lambda}=0.\end{aligned}$$ The general consideration of the thermodynamic stability leads to a complicated transcendental equation that has to be solved for an arbitrary set of $s$-wave scattering lengths and concentrations of the ingredients. For very dilute systems, however, only the region close to the mean-field phase-separation condition $\det t_{ab}=0$ (here $t_{ab}=4\pi\hbar^2/[m|\ln nl^2_{ab}|]$) is of most interest. Thus, assuming that all $l_{ab}$ are of the same order of magnitude and again introducing the dimensionless cut-off parameter $\lambda^2=\Lambda^2/(4\pi e n)$, we found the asymptotic solution of Eq.
(\[cut\_off\]) $$\begin{aligned} 1/\lambda^2= \ln\frac{1}{nl^2}+\ln\ln\frac{1}{nl^2}-1-2\gamma-\ln\pi +\frac{n_an_b}{n^2}\ln\frac{l^2}{l^2_{ab}}+\ldots,\end{aligned}$$ where $l\sim l_{ab}$ is an arbitrary length scale, and the corresponding ground-state energy $$\begin{aligned} E_0/\mathcal{A}=\frac{2\pi \hbar^2}{m}\frac{n_an_b}{\ln\frac{1}{nl^2_{ab}}+\ln\ln\frac{1}{nl^2_{ab}}-1/2-2\gamma-\ln\pi } +\ldots,\end{aligned}$$ is calculated with the same accuracy. This result states, in particular, that the mean-field stability condition $l_{AA}l_{BB}>l^2_{AB}$ of the binary two-dimensional Bose mixtures is unaffected by the Bogoliubov approximation (recall that the obtained formulae are correct only for very dilute systems, i.e., when $1/|\ln nl^2_{ab}|\ll 1$). Conclusions {#sec:4} =========== In summary, by means of the Popov prescription we have analyzed the equilibrium thermodynamic and superfluid properties of two-dimensional dilute binary Bose mixtures. It is shown that the presence of the interspecies interaction shifts the exponents of the one-particle density matrices in the Berezinskii-Kosterlitz-Thouless phase at low temperatures. Our findings could serve as a starting platform for further theoretical study of the impact of beyond-mean-field effects on the macroscopic properties of two-component two-dimensional Bose systems. However, in order to compare the obtained results with experiments one should extend the proposed approach to more realistic binary Bose mixtures, namely to two-component systems loaded in a pancake trap. The second question to be answered is the inclusion of the nonuniversal parts of the interaction potentials, which were recently found [@Salasnich] to significantly affect the thermodynamic properties of two-dimensional bosons in the one-component case. All these problems will be considered in future studies. We thank Dr. A. Rovenchak for invaluable suggestions. This work was partly supported by Project FF-30F (No.
0116U001539) from the Ministry of Education and Science of Ukraine. Appendix {#sec:5} ======== After transformation (\[Psi\]) the action of the two-component Bose mixture is $$\begin{aligned} \label{S_trans} S\rightarrow\int dx \,\psi^*_a(x)\left\{\partial_{\tau}+\frac{\hbar^2 }{2m_a}\Delta+\mu_a\right\}\psi_a(x)\nonumber\\ +\int dx \,\tilde{\psi}^*_a(x)\left\{\partial_{\tau}+\frac{\hbar^2 }{2m_a}\Delta+\mu_a\right\}\tilde{\psi}_a(x)\nonumber\\ -\frac{1}{2}\int dx\int dx'\Phi_{ab}(x-x')|\psi_a(x)|^2|\psi_b(x')|^2\nonumber\\ -\int dx\int dx'\Phi_{ab}(x-x')\left\{\psi^*_a(x) \psi^*_b(x')\psi_b(x')\tilde{\psi}_a(x)+{\rm c.c}\right\}\nonumber\\ -\int dx\int dx'\Phi_{ab}(x-x')\left\{|\psi_a(x)|^2|\tilde{\psi}_b(x')|^2+\psi^*_a(x) \tilde{\psi}^*_b(x')\psi_b(x')\tilde{\psi}_a(x)\right\}\nonumber\\ -\frac{1}{2}\int dx\int dx'\Phi_{ab}(x-x')\left\{\psi^*_a(x)\psi^*_b(x')\tilde{\psi}_b(x')\tilde{\psi}_a(x)+{\rm c.c}\right\}\nonumber\\-\int dx\int dx'\Phi_{ab}(x-x')\left\{\tilde{\psi}^*_a(x) \tilde{\psi}^*_b(x')\tilde{\psi}_b(x')\psi_a(x)+{\rm c.c}\right\}\nonumber\\ -\frac{1}{2}\int dx\int dx'\Phi_{ab}(x-x')|\tilde{\psi}_a(x)|^2|\tilde{\psi}_b(x')|^2, \end{aligned}$$ where the nine interaction vertices are presented diagrammatically in Fig. 1. (Fig. 1: diagrammatic representation of the nine interaction vertices; vertices.eps.) For a very dilute system, while integrating out the $\tilde{\psi}$-fields one can use simple perturbation theory, considering the ideal-gas action (the second term in Eq. (\[S\_trans\])) as the zero-order approximation. Furthermore, in the low-temperature limit the only nonzero contribution to the effective action governing the “slowly” varying fields is given by the graphs depicted in Fig. 2.
(Fig. 2: diagrammatic series for the renormalized vertices; t\_matrix.eps.) This infinite series of diagrams is summed up to give a linear integral equation for the renormalized symmetrical vertices $$\begin{aligned} \label{t_eq} t_{ab}(P,Q|Q+K,P-K)=[g_{ab}(k)+\delta_{ab}g_{aa}(p-k-q)]/2^{\delta_{ab}}\nonumber\\ -\frac{1}{2^{\delta_{ab}}\mathcal{A}\beta}\sum_{S}[g_{ab}(s)+\delta_{ab}g_{aa}(p-s-q)]\nonumber\\ \times G_a(P-S)G_b(Q+S)t_{ab}(P-S,Q+S|Q+K,P-K),\end{aligned}$$ where $g_{ab}(k)$ is the Fourier transform of $\Phi_{ab}({\bf r})$ and we used the notation $G_a(P)=1/[i\omega_p-\varepsilon_a(p)+\mu_a]$ for the Green's functions of the ideal gases. In the dilute limit, where $\mu_{a}\ll \hbar^2\Lambda^2/m_a$ (the most natural choice in the two-component case is $\Lambda^2\sim n_A+n_B$) and the physically relevant region of the wave-vector integration is $p,q,k\sim \sqrt{m\mu_{a}}/\hbar\ll\Lambda$, we can neglect the dependence on $P$ and $Q$ under the integral in Eq. (\[t\_eq\]). If it is also assumed that the interaction potentials are short-ranged, i.e., $g_{ab}(s)$ depends weakly on $s$, then $t_{ab}(-S,S|0,0)$ is also independent of the transferred momentum. The latter observation allows one to find the asymptotic solution of (\[t\_eq\]) (denoting $t_{ab}(0,0|0,0)\equiv t_{ab}$) $$\begin{aligned} \frac{1}{t_{ab}}=\frac{1}{g_{ab}(0)} +\int^{\infty}_{\Lambda}\frac{ds\,s}{2\pi}\frac{g_{ab}(s)}{g_{ab}(0)} \frac{1}{\varepsilon_{a}(s)+\varepsilon_{b}(s)}.\end{aligned}$$ Actually $t_{ab}$ enters the hydrodynamic action as a matrix of effective coupling constants. [99]{} A. Posazhennikova, Rev. Mod. Phys. [**78**]{}, 1111 (2006). Z. Hadzibabic and J. Dalibard, Riv. Nuovo Cim. [**34**]{}, 389 (2011). M. Schick, Phys. Rev. A [**3**]{}, 1067 (1971). Yu. E. Lozovik and V. I. Yudson, Physica A [**93**]{}, 493 (1978). A. A. Ovchinnikov, J. Phys.: Condens. Matter [**5**]{}, 8665 (1993). A. Yu. Cherny and A. A. Shanenko, Phys. Rev. E [**64**]{}, 027105 (2001). S. Pilati, J. Boronat, J. Casulleras, and S.
Giorgini, Phys. Rev. A [**71**]{}, 023605 (2005). F. Mazzanti, A. Polls, and A. Fabrocini, Phys. Rev. A [**71**]{}, 033615 (2005). V. N. Popov, Theor. Math. Phys. [**11**]{}, 565 (1972). C. Mora and Y. Castin, Phys. Rev. A [**67**]{}, 053615 (2003). V. N. Popov, [*Functional Integrals and Collective Excitations*]{} (Cambridge University Press, Cambridge, 1987). D. S. Fisher and P. C. Hohenberg, Phys. Rev. B [**37**]{}, 4936 (1988). E. B. Kolomeisky and J. P. Straley, Phys. Rev. B [**46**]{}, 11749 (1992). N. Dupuis, Phys. Rev. Lett. [**102**]{}, 190401 (2009). N. Dupuis, Phys. Rev. A [**80**]{}, 043627 (2009). A. Rancon and N. Dupuis, Phys. Rev. A [**85**]{}, 063607 (2012). J. Krieg, D. Strassel, S. Streib, S. Eggert, and P. Kopietz, Phys. Rev. B [**95**]{}, 024414 (2017). J. O. Andersen, Eur. Phys. J. B [**28**]{}, 389 (2002). C. C. Chien, J. H. She, and F. Cooper, Ann. Phys. [**347**]{}, 192 (2014). V. N. Popov and A. V. Seredniakov, Sov. Phys. JETP [**50**]{}, 193 (1979). J. O. Andersen, U. Al Khawaja, and H. T. C. Stoof, Phys. Rev. Lett. [**88**]{}, 070407 (2002). U. Al Khawaja, J. O. Andersen, N. P. Proukakis, and H. T. C. Stoof, Phys. Rev. A [**66**]{}, 013615 (2002). S. P. Cockburn and N. P. Proukakis, Phys. Rev. A [**86**]{}, 033610 (2012). M. Yu. Kagan and D. V. Efremov, Phys. Rev. B [**65**]{}, 195103 (2002). E. Altman, W. Hofstetter, E. Demler, and M. D. Lukin, New J. Phys. [**5**]{}, 113 (2003). A. Kuklov, N. Prokof’ev, and B. Svistunov, Phys. Rev. Lett. [**92**]{}, 050402 (2004). A. Kuklov, N. Prokof’ev, and B. Svistunov, Phys. Rev. Lett. [**92**]{}, 030403 (2004). K. P. Schmidt, J. Dorier, A. Lauchli, and F. Mila, Phys. Rev. B [**74**]{}, 174508 (2006). M. Guglielmino, V. Penna, and B. Capogrosso-Sansone, Phys. Rev. A [**82**]{}, 021601(R) (2010). L. de Forges de Parny and V. G. Rousseau, Phys. Rev. A [**95**]{}, 013606 (2017). A. K. Kolezhuk, Phys. Rev. A [**81**]{}, 013601 (2010). Yu-Li Lee and Yu-Wen Lee, J. Phys. Soc. Jpn.
[**80**]{}, 044003 (2011). D. S. Petrov and G. E. Astrakharchik, Phys. Rev. Lett. [**117**]{}, 100401 (2016). V. Pastukhov, Ann. Phys., [**372**]{}, 149 (2016). I. O. Vakarchuk, V. S. Pastukhov, J. Phys. Stud. [**12**]{}, 1001 (2008). I. O. Vakarchuk, V. S. Pastukhov, J. Phys. Stud. [**12**]{}, 3002 (2008). A. Rovenchak, Low Temp. Phys. [**42**]{}, 36 (2016). Chih-chun Chang and R. Friedberg, Phys. Rev. B [**51**]{}, 1117 (1995). V. Pastukhov, J. Low Temp. Phys. [**186**]{}, 148 (2017). L. Salasnich and F. Toigo, Phys. Rep. [**640**]{}, 1 (2016). V. Pastukhov, Phys. Rev. A [**95**]{}, 023614 (2017). A. F. Andreev, E. P. Bashkin, Zh. Eksp. Teor. Fiz. [**69**]{}, 319 (1975) \[Sov. Phys. JETP [**42**]{}, 164 (1975)\]. A. G. Volosniev, H.-W. Hammer, and N. T. Zinner, Phys. Rev. A [**92**]{}, 023623 (2015). F.  Werner and Y. Castin, Phys. Rev. A [**86**]{}, 053633 (2012). P. Kroiss, M. Boninsegni, and L. Pollet, Phys. Rev. B [**93**]{}, 174520 (2016). C. Mora and Y. Castin, Phys. Rev. Lett. [**102**]{}, 180404 (2009). G. E. Astrakharchik, J. Boronat, I. L. Kurbakov, Yu. E. Lozovik, and F. Mazzanti, Phys. Rev. A [**81**]{}, 013612 (2010). L. Salasnich, Phys. Rev. Lett. [**118**]{}, 130402 (2017).
--- author: - | Zixuan Huang, Junming Fan, Shenggan Cheng, Shuai Yi, Xiaogang Wang, \ Hongsheng Li bibliography: - 'egbib.bib' title: 'HMS-Net: Hierarchical Multi-scale Sparsity-invariant Network for Sparse Depth Completion' --- Introduction {#sec:introduction} ============ Depth completion, aiming at generating a dense depth map from an input sparse depth map, is an important task for computer vision and robotics. In Fig. \[fig:intro\] (a), (b), (f), we show one example input sparse depth map, its corresponding RGB image, and the depth completion result by our proposed method. Because of the limitation of current LIDAR sensors, the inputs of depth completion are generally sparse. For instance, the \$100,000 Velodyne HDL-64E has only a vertical resolution of $\sim 0.4^{\circ}$ and an azimuth angular resolution of $0.08^{\circ}$. It generates sparse depth maps, which might be insufficient for many real-world applications. Depth completion algorithms could estimate dense depth maps from sparse inputs and have great potential in practice. With an accurate depth completion algorithm, many high-level vision tasks, such as semantic segmentation, 3D object detection, visual odometry and SLAM with 3D point clouds, could be solved more effectively. Therefore, it has become a hot research topic for self-driving cars and UAVs, and is listed as one of the ranked tasks in the KITTI benchmark [@Uhrig2017THREEDV]. Many different methods have been proposed for depth completion, which could be generally categorized into learning-based [@Uhrig2017THREEDV; @liao2017parse; @ma2017sparse; @chodosh2018deep] and non-learning-based methods [@ku2018defense; @nadaraya1964estimating; @barron2016fast]. Non-learning-based approaches generate dense depth maps from sparse inputs based on hand-crafted rules. Therefore, the outputs of these algorithms are generated based on priors assumed by humans.
As a result, they are not robust enough to sensor noises and are usually specifically designed for certain datasets. In addition, most non-learning-based methods ignore the correlations among sparse input depth points and might result in inaccurate object boundaries. An example of errors by a non-learning-based method [@ku2018defense] is shown in Fig. \[fig:intro\](e). The noise points in the white box are not removed at all, and the boundaries of the cars and trees in the yellow box are inaccurate.

![(a) CNN with only sparsity-invariant convolution could only gradually downsample feature maps, which loses much resolution at later stages. (b) Our proposed encoder-decoder network with novel sparsity-invariant operations could effectively fuse multi-scale features from different layers for depth completion.[]{data-label="fig:convnet_encoder_decoder"}](figures/networks_encoder_decoder.pdf)

![(a) Input sparse depth map; (b) RGB image; (c) result by ADNN [@chodosh2018deep]; (d) result by SPConv [@Uhrig2017THREEDV]; (e) result by IP-Basic [@ku2018defense]; (f) result by the proposed HMS-Net.[]{data-label="fig:intro"}](figures/1234_f.pdf)

For learning-based approaches, state-of-the-art methods are mainly based on deep neural networks. Previous methods mainly utilized deep convolutional neural networks (CNNs) for generating dense depth maps from sparse inputs. Ma and Karaman [@ma2017sparse] simply filled $0$s into locations without depth inputs to create dense input maps, which might introduce ambiguity with very small depth values. Chodosh et al. [@chodosh2018deep] proposed to extract multi-level sparse codes from the inputs and used a 3-layer CNN for depth completion.
However, those two methods used conventional convolution operations designed for dense inputs (see Fig. \[fig:intro\](c) for an example). Uhrig et al. [@Uhrig2017THREEDV] proposed sparsity-invariant convolution, which is specifically designed to process sparse maps and enables CNNs to process sparse inputs more effectively. However, the sparsity-invariant convolution in [@Uhrig2017THREEDV] only mimics the behavior of convolution operations in conventional dense CNNs. Its feature maps at later stages lose much spatial information and therefore cannot effectively integrate both low-level and high-level features for accurate depth completion (see Fig. \[fig:convnet\_encoder\_decoder\](a) for illustration). On the other hand, there exist effective multi-scale encoder-decoder network structures for dense pixel-wise classification tasks (see Fig. \[fig:convnet\_encoder\_decoder\](b)), such as U-Net [@ronneberger2015u], the Feature Pyramid Network [@Lin2017CVPR], and the Full Resolution Residual Network [@pohlen2017full]. Direct integration of the sparsity-invariant convolution in [@Uhrig2017THREEDV] into these multi-scale structures is infeasible, as they also require other operations for multi-scale feature fusion, such as sparsity-invariant feature upsampling, average, and concatenation. To overcome this limitation, we propose three novel sparsity-invariant operations that enable the use of encoder-decoder networks for depth completion. The three novel operations are sparsity-invariant upsampling, sparsity-invariant average, and joint sparsity-invariant concatenation and convolution. To effectively and efficiently handle sparse feature maps, sparsity masks are maintained at all locations of the feature maps. They record the locations of the sparse features at the output of each processing stage and guide the calculation of the forward and backward propagation.
Each sparsity-invariant operation is designed to properly maintain and modify the sparsity masks across the network. The design of these operations is non-trivial and is the key to using encoder-decoder structures with sparse features. Based on such operations, we propose a multi-scale encoder-decoder network, HMS-Net, which adopts a series of sparsity-invariant convolutions with downsampling and upsampling to generate multi-scale feature maps, together with shortcut paths for effectively fusing multi-scale features. Extensive experiments on the KITTI [@Uhrig2017THREEDV] and NYU-depth-v2 [@silberman2012indoor] datasets show that our algorithm achieves state-of-the-art depth completion accuracy. The main contributions of our work are threefold. 1) We design three sparsity-invariant operations for handling sparse inputs and feature maps, which are essential for processing sparse feature maps across a deep network. 2) Based on the proposed sparsity-invariant operations, a novel hierarchical multi-scale network structure fusing information from different scales is designed to solve the depth completion task. 3) Our method outperforms state-of-the-art methods in depth completion. On the KITTI depth completion benchmark, our method without RGB information ranks *1st* among all peer-reviewed methods without RGB inputs, while our method with RGB guidance ranks *2nd* among all RGB-guided methods.

Related work
============

Depth completion
----------------

Depth completion is an active research area with a large number of applications. According to the sparsity of the inputs, current methods can be divided into two categories: sparse depth completion and depth enhancement. The former methods aim at recovering dense depth maps from spatially sparse inputs, while the latter methods work on conventional RGB-D depth data and focus on filling irregular and relatively small holes in input dense depth maps.
Besides, if the input depth maps are regularly sampled, the depth completion task can be regarded as a depth upsampling (also known as depth super-resolution) task. In other words, depth upsampling algorithms handle a special subset of the depth completion task. The inputs of depth upsampling are depth maps of lower resolution. According to whether RGB information is utilized, depth upsampling methods can be divided into two categories: guided depth upsampling and depth upsampling without guidance.

### Sparse depth completion

Methods of this type take sparse depth maps as inputs. To handle sparse inputs and sparse intermediate feature maps, Uhrig et al. [@Uhrig2017THREEDV] proposed sparsity-invariant convolution to replace the conventional convolution in convolutional neural networks (CNNs). The converted sparsity-invariant CNN keeps track of sparsity masks at each layer and is able to estimate dense depth maps from sparse inputs. Ku et al. [@ku2018defense] proposed to use a series of hand-crafted image processing algorithms to transform the sparse depth inputs into dense depth maps. The proposed framework first utilized conventional morphological operations, such as dilation and closure, to make the input depth maps denser. It then filled holes in the intermediate denser depth maps to obtain the final outputs. Eldesokey et al. [@eldesokey2018propagating] proposed an algebraically-constrained normalized convolution operation for handling sparse data and propagating depth confidence across layers. The regression loss and depth confidence are jointly optimized. Ren et al. [@ren2018sbnet] focused on another problem, the efficiency of convolution, but also designed sparsity masks. However, their algorithm cannot be used in scenarios where the sparsity of the input varies over a large range. The above-mentioned methods did not consider image information captured by the calibrated RGB cameras.
There also exist works utilizing RGB images as additional information to achieve better depth completion. Schneider et al. [@schneider2016semantically] combined both intensity cues and object boundary cues to complete sparse depth maps. Liao et al. [@liao2017parse] utilized a residual neural network and combined the classification and regression losses for depth estimation with both RGB and sparse depth maps as inputs. Ma and Karaman [@ma2017sparse] proposed to use a single deep regression network to learn directly from the RGB-D raw data, where the depth channel only has sparse depth values. However, the proposed algorithm mainly focused on indoor scenes and was only tested on the indoor NYUv2 dataset. Instead, Van Gansbeke et al. [@van2019sparse] combined RGB and depth information through summation according to two predicted confidence maps. Chodosh et al. [@chodosh2018deep] utilized compressed sensing techniques and Alternating Direction Neural Networks to create a deep recurrent auto-encoder for depth completion. Sparse codes were extracted at the outputs of multiple levels of the CNN and were used to generate the dense depth prediction. Zhang and Funkhouser [@zhang2018deep] adopted a neural network to predict dense surface normals and occlusion boundaries from RGB images. These predictions were then used as auxiliary information to accomplish the depth completion task from sparse depth data. Qiu et al. [@qiu2019deeplidar] further extended this idea to outdoor datasets by generating surface normals as an intermediate representation. Jaritz et al. [@Jaritz20183DV] argued that networks with large receptive fields do not require special treatment of sparse data; they instead trained networks with depth maps of different sparsities. Cheng et al. [@Cheng2018ECCV] proposed to learn robust affinities between pixels to spatially propagate depth values via convolution operations to fill in the entire depth map, while Eldesokey et al.
[@eldesokey2019confidence] used continuous confidence maps instead of binary validity masks, together with algebraically constrained filters, to tackle the sparsity-invariance problem, and further exploited guidance from both RGB images and the output confidence produced by their unguided network. Yang et al. [@yang2019dense] proposed to yield the full posterior over depth maps with a Conditional Prior Network from their previous work [@yang2018conditional]. A self-supervised training framework was designed by Ma et al. [@Ma2018arxiv], which explores temporal relations in video sequences to provide additional photometric supervision for depth completion networks.

### Depth enhancement

The inputs of depth enhancement or depth hole-filling methods are usually dense depth maps with irregular and relatively small holes. The input depth maps are usually captured together with RGB images. Matyunin et al. [@matyunin2011temporal] used the depth from the neighborhoods of the hole regions to fill the holes, according to the similarity of RGB pixels. In [@camplani2012efficient], the missing depth values are obtained by iteratively applying a joint-bilateral filter to the hole regions’ neighboring pixels. Yang et al. [@Yang:253660] proposed an efficient depth image recovery algorithm based on auto-regressive correlations and recovered high-quality multi-view depth frames with it. They also utilized color images, depth maps from neighboring views, and temporal adjacency to help the recovery. Chen et al. [@chen2012depth] first created smooth regions around the pixels without depth, and then adopted a bilateral filter without smooth region constraints to fill the depth values. Yang et al. [@yang2014color] proposed an adaptive color-guided autoregressive model for depth enhancement, which utilized local correlation in the initial depth map and non-local similarity in the corresponding RGB image.

### Guided depth upsampling

Depth upsampling methods take low-resolution depth maps as inputs and output high-resolution ones.
The provided RGB images serve as guidance signals and bring valuable information (e.g., edges) for upsampling. Li et al. [@li2016deep] proposed a CNN to extract features from the low-resolution depth map and the guidance image and to merge their information for estimating the upsampled depth map. In [@ferstl2013image], an anisotropic diffusion tensor is calculated from a high-resolution intensity image to serve as the guidance. An energy function is then designed and minimized to solve the depth upsampling problem. Hui et al. [@hui2016depth] proposed a convolutional neural network which fused the RGB guidance signals at different stages. They trained the neural network in the high-frequency domain. Xie et al. [@xie2016edge] upsampled the low-resolution depth maps with the guidance of image edge maps and a Markov Random Field model. Jiang et al. [@guo2018hierarchical] proposed a depth super-resolution algorithm utilizing transform-domain regularization with an auto-regressive model, as well as spatial-domain regularization by injecting a multi-directional total variation prior.

### Depth upsampling without guidance

Depth upsampling can also be achieved without the assistance of corresponding RGB images. As an MRF-based method, the approach in [@mac2012patch] matched the height field of each input low-resolution depth patch against a database and retrieved a list of the most appropriate high-resolution candidate patches. Selecting the correct candidates was then posed as a Markov Random Field labeling problem. Ferstl et al. [@ferstl2015variational] learned a dictionary of edge priors from an external database of high- and low-resolution depth samples, and utilized a novel variational sparse coding approach for upsampling. Xie et al. [@xie2015joint] used a coupled dictionary learning method with locality coordinate constraints to transform the original depth maps into high-resolution depth maps. Riegler et al.
[@riegler2016atgv] integrated a variational method that models the piecewise affine structures in depth maps on top of a deep network for depth upsampling.

Multi-scale networks for pixelwise prediction
---------------------------------------------

Neural networks that utilize multi-scale feature maps for pixelwise prediction (e.g., semantic segmentation) have been widely investigated. Combining both low-level and high-level features has proven crucial for making accurate pixelwise predictions. Ronneberger et al. [@ronneberger2015u] proposed a U-shaped network (U-Net). It consists of an iteratively downsampling image encoder that gradually summarizes image information into smaller but deeper feature maps, and an iteratively upsampling decoder that gradually combines low-level and high-level features to output the pixelwise prediction maps. The low-level information from the encoder is passed to the high-level stages of the decoder by direct concatenation of feature maps of the same spatial sizes. Such a network has proven highly useful in many 2D and 3D segmentation tasks. Similar network structures, including the Hourglass network [@Newell2016ECCV] and the Feature Pyramid Network [@Lin2017CVPR], have also been investigated for pixelwise prediction tasks. Recently, Pohlen et al. [@pohlen2017full] proposed the full-resolution residual network (FRRN), which treats full-resolution information as a residual flow to pass valuable information across different scales for semantic segmentation. He et al. [@he2018learning] designed a fully fused network that utilizes both images and focal lengths to learn the depth map. However, even with the sparsity-invariant convolution operation proposed in [@Uhrig2017THREEDV], multi-scale encoder-decoder networks cannot be directly converted to handle sparse inputs, because these structures rely on many operations that do not support sparse feature maps.
Our proposed sparsity-invariant operations solve this problem and allow encoder-decoder networks to be used for sparse data.

Method {#sec:method}
======

We introduce our proposed framework for depth completion in this section. In Section \[ssec:si\_convolution\], we first review the sparsity-invariant convolution proposed in [@Uhrig2017THREEDV]. In Section \[ssec:si\_operations\], we then introduce three novel sparsity-invariant operations, which are crucial for adapting multi-scale encoder-decoder networks to process sparse inputs. In Section \[ssec:hms\_net\], based on these sparsity-invariant operators, the hierarchical multi-scale encoder-decoder network (HMS-Net) for effectively combining multi-scale features is proposed to tackle the depth completion task.

Sparsity-invariant Convolution {#ssec:si_convolution}
------------------------------

In this subsection, we first review the sparsity-invariant convolution in [@Uhrig2017THREEDV], which modifies conventional convolution to handle sparse input feature maps. The sparsity-invariant convolution is formulated as $$\begin{aligned} \boldsymbol{z}(u,v)=\frac{\sum_{i,j=-k}^k \boldsymbol{m_x}(u+i,v+j) \boldsymbol{w}(i,j) \boldsymbol{x}(u+i,v+j)}{\sum_{i,j=-k}^k \boldsymbol{m_x}(u+i,v+j)+\epsilon}+b. \label{eqn:si_conv}\end{aligned}$$ The sparsity-invariant convolution takes a sparse feature map $\boldsymbol{x}$ and a binary single-channel sparsity mask $\boldsymbol{m_x}$ as inputs, both of which have the same spatial size $H \times W$. The convolution generates output features $\boldsymbol{z}(u,v)$ for each location $(u,v)$. At each spatial location $(u,v)$, the binary sparsity mask $\boldsymbol{m_x}(u,v)$ records whether there are input features at this location, i.e., $1$ for feature existence and $0$ otherwise. The convolution kernel $\boldsymbol{w}$ is of size $(2k+1) \times (2k+1)$, and $b$ represents a learnable bias vector.
Note that the kernel weights $\boldsymbol{w}$ and the bias vector $b$ are learned via back-propagation, while the sparsity mask $\boldsymbol{m_x}$ is specified by the previous layer and is not trained. The key difference from conventional convolution is the use of the binary sparsity mask $\boldsymbol{m_x}$ in the convolution calculation. The mask values in the numerator of Eq. \[eqn:si\_conv\] indicate that, when conducting the convolution, only input features at the valid or visible locations specified by the sparsity mask $\boldsymbol{m_x}$ are considered. The mask values in the denominator indicate that, since only a subset of the input features is involved, the output features should be normalized according to the number of valid input locations. $\epsilon$ represents a very small number and is used to avoid division by 0. Note that the sparsity mask should always indicate the validity or sparsity of each location of the feature maps. Since the convolution layers in a neural network are generally stacked multiple times, the output sparsity mask $\boldsymbol{m_z}$ at each stage should be modified to match the output $\boldsymbol{z}$ of that stage. For each output feature location $(u,v)$, if there is at least one valid input location in its receptive field over the previous input, its sparsity mask $\boldsymbol{m_z}(u,v)$ should be updated to $1$. In practice, the output sparsity mask is obtained by conducting max pooling on the input sparsity mask with the same kernel size as the convolution, $(2k+1) \times (2k+1)$.

![Illustration of the proposed sparsity-invariant upsampling operation. $\boldsymbol{F}$ stands for bilinear upsampling.[]{data-label="fig:si_upsampling"}](figures/bu.pdf){height="3.5cm"}

Sparsity-invariant operations {#ssec:si_operations}
-----------------------------

The sparsity-invariant convolution successfully converts conventional convolution to handle sparse input features and can be stacked over multiple stages to learn highly non-linear functions.
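To make the convolution and its mask update concrete, the following is a minimal NumPy sketch (our own illustration, not the paper's implementation: single-channel maps, stride 1, and explicit loops for clarity; a real network would express the same computation with standard convolution and max-pooling primitives):

```python
import numpy as np

def sparsity_invariant_conv(x, m, w, b, eps=1e-8):
    """Sparsity-invariant convolution with mask update (toy sketch).

    x : (H, W) sparse input feature map
    m : (H, W) binary sparsity mask (1 = valid location)
    w : (K, K) kernel weights, K = 2k + 1
    b : scalar bias
    Returns the output map z and the updated mask m_z.
    """
    K = w.shape[0]
    k = K // 2
    H, W = x.shape
    # Zero-pad so the output keeps the input's spatial size;
    # invalid inputs are masked out before convolving.
    xp = np.pad(x * m, k)
    mp = np.pad(m.astype(float), k)
    z = np.zeros((H, W))
    m_z = np.zeros((H, W))
    for u in range(H):
        for v in range(W):
            xw = xp[u:u + K, v:v + K]
            mw = mp[u:u + K, v:v + K]
            # Weighted sum over valid locations, normalized by their count.
            z[u, v] = (w * xw).sum() / (mw.sum() + eps) + b
            # Mask update = max pooling over the same window.
            m_z[u, v] = mw.max()
    return z, m_z
```

On a toy input with only two valid entries, each output value is the normalized weighted mean of the valid inputs inside its window, and the mask update marks every location whose window contains at least one valid input.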
However, only modifying the convolution operation is not enough if one tries to utilize state-of-the-art multi-scale encoder-decoder structures for pixelwise prediction. As shown in Fig. \[fig:convnet\_encoder\_decoder\](b), average, upsampling, and concatenation are also common operations in multi-scale encoder-decoder networks. Therefore, we propose sparsity-invariant versions of these operations: sparsity-invariant average, sparsity-invariant upsampling, and joint sparsity-invariant concatenation and convolution. The three sparsity-invariant operations allow effectively handling sparse feature maps across the entire encoder-decoder network. They are the foundations of the complex building blocks in our overall framework. Designing the sparsity-invariant operations is non-trivial. Our proposed sparsity-invariant operations share the same spirit as sparsity-invariant convolution: using single-channel sparsity masks to track the validity of feature map locations. The sparsity masks can then be used to guide and regularize the calculation of the operations.

### Sparsity-invariant bilinear upsampling {#sssec:si_upsample}

One of the most important operations in encoder-decoder networks is the upsampling operation in the decoder part. We first propose the sparsity-invariant bilinear upsampling operation. Let $\boldsymbol{x}$ and $\boldsymbol{m_x}$ denote the input sparse feature map and the corresponding input sparsity mask of size $H \times W$. The operation generates the output feature map $\boldsymbol{z}$ and its corresponding sparsity mask $\boldsymbol{m_z}$ of size $2H \times 2W$. Let $F$ represent the conventional bilinear upsampling operator, which bilinearly upsamples the input feature map or mask by a factor of two.
The proposed sparsity-invariant bilinear upsampling can be formulated as $$\begin{aligned} \boldsymbol{z} &= \frac{F(\boldsymbol{m_x} \odot \boldsymbol{x})}{F (\boldsymbol{m_x})+\epsilon}, \label{eq:si_upsample_output} \\ \boldsymbol{m_z}&= \boldsymbol{1} \left[ F(\boldsymbol{m_x}) \neq 0 \right], \label{eq:si_upsample_mask}\end{aligned}$$ where $\odot$ denotes the spatial elementwise multiplication, $\epsilon$ is a very small number to avoid division by zero, and $\boldsymbol{1} [\cdot]$ denotes the indicator function, i.e., $\boldsymbol{1} [\text{true}] = 1$ and $\boldsymbol{1} [\text{false}] = 0$. The proposed sparsity-invariant bilinear upsampling operation is illustrated in Fig. \[fig:si\_upsampling\]. As shown by Eq. \[eq:si\_upsample\_output\], the proposed operation first uses the input sparsity mask $\boldsymbol{m_x}$ to mask out the invalid features from the input feature map $\boldsymbol{x}$ as $\boldsymbol{m_x} \odot \boldsymbol{x}$. The conventional bilinear upsampling operator $F$ is then applied to upsample both the masked feature map $\boldsymbol{m_x} \odot \boldsymbol{x}$ and the input sparsity mask $\boldsymbol{m_x}$. The upsampled sparse features $F(\boldsymbol{m_x} \odot \boldsymbol{x})$ are then normalized at each location according to the upsampled sparsity mask values $F(\boldsymbol{m_x})$. The final sparsity mask $\boldsymbol{m_z}$ is obtained by identifying the non-zero locations of the upsampled sparsity mask $F(\boldsymbol{m_x})$. Note that sparsity-invariant max-pooling or downsampling can be calculated in the same way as Eqs. \[eq:si\_upsample\_output\] and \[eq:si\_upsample\_mask\] by replacing the upsampling function $F$ with max-pooling or downsampling operators.

![Illustration of the proposed sparsity-invariant average.[]{data-label="fig:si_addition"}](figures/sum.pdf){height="4.5cm"}

### Sparsity-invariant average

Pixelwise averaging of two feature maps of the same spatial size is needed for fusing features from different levels without increasing the number of output channels.
For averaging sparse input feature maps, however, a specifically designed operation is needed. We propose the sparsity-invariant average, which takes two input sparse feature maps, $\boldsymbol{x}$ and $\boldsymbol{y}$, with their corresponding sparsity masks, $\boldsymbol{m_x}$ and $\boldsymbol{m_y}$, as inputs. It generates the fused sparse feature map $\boldsymbol{z}$ with its corresponding sparsity mask $\boldsymbol{m_z}$. Unlike sparsity-invariant upsampling or downsampling, the average operation takes two sparse feature maps as inputs. We formulate the sparsity-invariant average as $$\begin{aligned} &\boldsymbol{z}= \frac{\boldsymbol{m_x} \odot {\boldsymbol{x}} + \boldsymbol{m_y} \odot \boldsymbol{y}}{\boldsymbol{m_x} + \boldsymbol{m_y} + \epsilon}, \label{eq:si_addition_output}\\ &\boldsymbol{m_z}=\boldsymbol{m_x}\vee\boldsymbol{m_y}, \label{eq:si_addition_mask}\end{aligned}$$ where $\vee$ denotes the elementwise disjunction (i.e., logical “or”) function, $\odot$ represents elementwise multiplication, and $\epsilon$ is a very small number to avoid division by zero. The sparsity-invariant average is illustrated in Fig. \[fig:si\_addition\]. The two input sparse feature maps are first masked by their corresponding masks to obtain $\boldsymbol{m_x} \odot {\boldsymbol{x}}$ and $\boldsymbol{m_y} \odot \boldsymbol{y}$. Both the masked features and the masks are then added pixelwise. The added features are then normalized by the added sparsity masks to calculate the output sparse features $\boldsymbol{z}$. For the output sparsity mask, at each location, if the location is valid in either of the input feature maps, the mask is set to 1 at this location. At each location of the output feature map, the output feature vector is the mean of the two input feature vectors if both input maps are valid at this location.
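Both the upsampling and the average follow the same mask-and-normalize pattern, which a short NumPy sketch makes explicit (our own illustration; for brevity the upsampler `F` here is nearest-neighbor repetition rather than the paper's bilinear operator, which follows the identical pattern with fractional mask values):

```python
import numpy as np

def si_upsample(x, m, eps=1e-8):
    """Sparsity-invariant 2x upsampling: upsample the masked features
    and the mask, then normalize by the upsampled mask."""
    F = lambda a: np.repeat(np.repeat(a, 2, axis=0), 2, axis=1)
    num = F(m * x)
    den = F(m.astype(float))
    z = num / (den + eps)
    m_z = (den != 0).astype(float)
    return z, m_z

def si_average(x, mx, y, my, eps=1e-8):
    """Sparsity-invariant average of two sparse maps: sum the masked
    features and normalize by the summed masks; the output mask is
    the elementwise logical 'or' of the input masks."""
    z = (mx * x + my * y) / (mx + my + eps)
    m_z = np.logical_or(mx, my).astype(float)
    return z, m_z
```

With these definitions, a location valid in both inputs yields their mean, a location valid in one input copies that input's value, and a location valid in neither stays (approximately) zero with an invalid output mask.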
If only one input feature is valid at a location, the single valid input feature vector is directly copied to the same location of the output feature map. The two types of output feature vectors have similar magnitudes because the added features are properly normalized by the added sparsity mask. Note that feature addition was also explored in [@Uhrig2017THREEDV]; however, no output sparsity mask was generated there. Without such a mask, subsequent convolution operations cannot handle the sparse output features. Therefore, their operation can only be used as the last layer of a neural network.

### Joint sparsity-invariant concatenation and convolution {#sssec:si_concat}

![Sparsity patterns vary across regions; thus we need several different sets of kernels to deal with the feature maps after concatenation.[]{data-label="fig:si_concat_problem"}](figures/cat_problem.pdf){height="4.0cm"}

![Illustration of the proposed joint sparsity-invariant concatenation and convolution.[]{data-label="fig:si_concat"}](figures/cat.pdf){height="4.8cm"}

Another commonly used approach for fusing two feature maps with the same spatial size is feature concatenation, i.e., concatenation along the channel dimension. However, unlike the aforementioned operations, concatenation introduces sparsity in both the spatial dimensions (H $\times$ W) and the feature dimension (C), and the latter prevents us from simply extending the idea of the sparsity-invariant average in the previous subsection. Consider the scenario where two feature maps with shapes $C_1 \times H \times W$ and $C_2 \times H \times W$ are concatenated into one with shape $(C_1+C_2) \times H \times W$, as illustrated in Fig. \[fig:si\_concat\_problem\]. The concatenation is further followed by a convolution layer ($1\times 1$ for simplicity), which is a common operation in CNNs.
Recall that convolution filters a location by extracting a local feature vector of length $C$ and summing its entries with learnable weights; the convolution kernels then iterate over the whole feature map, treating every location equally. In the sparsity-invariant convolution above, the feature vector at a given location has only two possible sparsity patterns: either the whole vector of length $C$ is valid, or all entries of the vector are zero. The latter case does not influence training, as its contribution to the output as well as to the gradient of the kernels is zero. Therefore, a single set of convolution kernels, applied equally at every valid location, is sufficient. However, when convolving on the feature maps after concatenation, there are four different types of vectors or sparsity patterns at each location: the first $C_1$ feature channels of the vector are valid while the last $C_2$ channels are not; the last $C_2$ channels are valid while the first $C_1$ are not; both parts are valid; or both are invalid. Therefore, we need three different sets of kernels to tackle these sparsity patterns (the all-invalid case contributes nothing and thus needs no kernel). In other words, to effectively handle the different scenarios at different locations of the concatenated feature maps, we propose an adaptive-kernel convolution to solve this difficulty and combine it with the concatenation. Another advantage of combining them is that all convolution kernels generate outputs with the same spatial sparsity pattern. Therefore, the output mask is still single-channel, which is computationally efficient and reduces the model complexity significantly.
Specifically, our joint sparsity-invariant concatenation and convolution is formally described as follows:

-- ------------------------------------------------------ ---------------------------------------------------
![image](figures/structure.pdf)                          ![image](figures/structureRGB.pdf)

\(a) Sparsity-invariant network without RGB guidance     \(b) Sparsity-invariant network with RGB guidance
-- ------------------------------------------------------ ---------------------------------------------------

Given the two input sparse feature maps $\boldsymbol{x}$ and $\boldsymbol{y}$ with their sparsity masks $\boldsymbol{m_x}$ and $\boldsymbol{m_y}$, the proposed joint concatenation and convolution operation is formulated as $$\begin{aligned} &\boldsymbol{z} = \left[ \boldsymbol{x}; \boldsymbol{y} \right] * \boldsymbol{k_a},\\ &\boldsymbol{m_z}= \boldsymbol{m_x}\vee\boldsymbol{m_y}, \label{eq:si_concat_mask}\end{aligned}$$ where $[;]$ denotes the concatenation of two feature maps along the channel dimension, and $*$ denotes the conventional convolution operation. Note that the output sparsity mask is calculated exactly as in the sparsity-invariant average. The key to the proposed operation is a $1\times 1$ convolution with an adaptive convolution kernel $\boldsymbol{k_a}$ that handles the three different scenarios of concatenating sparse feature maps, which is formulated as $$\begin{aligned} \boldsymbol{k_a} (u,v) = \begin{cases} \boldsymbol{k}_{\boldsymbol{a}}^{(1)} & \boldsymbol{m_x}(u,v)=1, \, \boldsymbol{m_y}(u,v)=0;\\ \boldsymbol{k}_{\boldsymbol{a}}^{(2)} & \boldsymbol{m_x}(u,v)=0, \, \boldsymbol{m_y}(u,v)=1;\\ \boldsymbol{k}_{\boldsymbol{a}}^{(3)} & \boldsymbol{m_x}(u,v)=1, \, \boldsymbol{m_y}(u,v)=1, \end{cases}\end{aligned}$$ where $\boldsymbol{k_a} (u,v)$ is the $1\times 1$ adaptive convolution kernel at location $(u,v)$ of the concatenated feature maps $[\boldsymbol{x}; \boldsymbol{y}]$.
$\boldsymbol{k}_{\boldsymbol{a}}^{(1)}$, $\boldsymbol{k}_{\boldsymbol{a}}^{(2)}$, and $\boldsymbol{k}_{\boldsymbol{a}}^{(3)}$ are the three sets of learnable convolution weights for the three different feature concatenation scenarios: at each location $(u,v)$, either both input feature vectors are valid (i.e., $\boldsymbol{m_x}(u,v)=1$ and $\boldsymbol{m_y}(u,v)=1$), or only one of the input feature vectors is valid (i.e., either $\boldsymbol{m_x}(u,v)=1$ or $\boldsymbol{m_y}(u,v)=1$). The key reason for using different sets of kernel weights instead of a single set, as briefly introduced before, is to avoid involving invalid input features of the concatenated feature maps in the feature learning process. In our notation above, if the $1 \times 1$ convolution kernel is at location $(u, v)$ and finds that the first mask $\boldsymbol{m_x}(u,v)$ is one and the second mask $\boldsymbol{m_y}(u,v)$ is zero, it chooses the first set of kernel weights $\boldsymbol{k}_{\boldsymbol{a}}^{(1)}$ during the forward pass, and likewise during back-propagation. In this case, the second chunk of the feature vector, whose length is fixed, is always zero; because it is consistently processed by the first kernel, this kernel naturally learns to adapt to this pattern. In other words, by adopting the proposed adaptive convolution kernel $\boldsymbol{k_a}$, the three sets of kernel weights $\boldsymbol{k}_{\boldsymbol{a}}^{(1)}$, $\boldsymbol{k}_{\boldsymbol{a}}^{(2)}$, and $\boldsymbol{k}_{\boldsymbol{a}}^{(3)}$ are able to handle the different sparse feature concatenation scenarios. With joint training, the different kernels learn to adapt to each other and generate appropriate feature representations for further processing. In this way, sparse feature maps can be effectively fused by the proposed concatenation.
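A minimal NumPy sketch may clarify the kernel selection (our own illustration with hypothetical toy shapes; a real implementation would batch the three cases with mask-indexed gathers rather than per-pixel loops):

```python
import numpy as np

def joint_si_concat_conv(x, mx, y, my, k1, k2, k3):
    """Joint sparsity-invariant concatenation and 1x1 convolution (toy sketch).

    x : (C1, H, W), y : (C2, H, W) sparse feature maps
    mx, my : (H, W) binary sparsity masks
    k1, k2, k3 : (C_out, C1 + C2) kernel weights for the three valid
                 sparsity patterns (x only / y only / both valid)
    Returns z of shape (C_out, H, W) and the merged single-channel mask.
    """
    _, H, W = x.shape
    cat = np.concatenate([x, y], axis=0)      # (C1 + C2, H, W)
    C_out = k1.shape[0]
    z = np.zeros((C_out, H, W))
    for u in range(H):
        for v in range(W):
            if mx[u, v] and not my[u, v]:
                k = k1                        # only x is valid here
            elif my[u, v] and not mx[u, v]:
                k = k2                        # only y is valid here
            elif mx[u, v] and my[u, v]:
                k = k3                        # both inputs are valid
            else:
                continue                      # both invalid: output stays 0
            z[:, u, v] = k @ cat[:, u, v]     # 1x1 convolution at (u, v)
    m_z = np.logical_or(mx, my).astype(float)
    return z, m_z
```

The selected kernel set is the only one that receives gradients at that location, so each of the three kernels specializes to its own sparsity pattern during training, as described above.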
Hierarchical Multi-scale Network (HMS-Net) for depth completion {#ssec:hms_net}
---------------------------------------------------------------

Multi-scale encoder-decoder neural networks for dense inputs have been widely investigated for pixelwise prediction tasks. Such networks have the advantage of fusing both low-level and high-level features for accurate per-pixel prediction. However, with only the sparsity-invariant convolution in [@Uhrig2017THREEDV], encoder-decoder networks cannot be converted to handle sparse inputs. On the other hand, in frequently studied pixelwise prediction tasks such as semantic segmentation, global high-level information is usually more important to the final performance, while the full-resolution low-level features are less informative and generally go through fewer non-linearity layers compared with high-level features. We argue, however, that depth completion is a low-level vision task: the low-level features in this task should be non-linearly transformed and fused with mid-level and high-level features more times to achieve satisfactory depth completion accuracy. Based on this motivation, we propose the Hierarchical Multi-scale encoder-decoder Network (HMS-Net) with our proposed sparsity-invariant operations for sparse depth completion. The network structure without RGB information is illustrated in Fig. \[fig:overall\](a). We propose two basic building blocks, a two-scale block and a three-scale block, consisting of the proposed sparsity-invariant operations. The two-scale block has an upper path that non-linearly transforms the full-resolution low-level features by a $k\times k$ sparsity-invariant convolution. The lower path takes downsampled low-level features as inputs for learning higher-level features with another $k\times k$ convolution. We empirically set $k=5$ in all our experiments according to our hyperparameter study.
The resulting higher-level features are then upsampled and added back to the full-resolution low-level features. Compared with the two-scale block, the three-scale block fuses features from two higher levels into the upper low-level feature path to utilize more auxiliary global information. In this way, the full-resolution low-level features are effectively fused with higher-level information and are non-linearly transformed multiple times to learn more complex prediction functions. All feature maps in our network have 16 channels regardless of scale. Our final network applies a $5 \times 5$ sparsity-invariant convolution at the first layer. The resulting features then go through three of the proposed multi-scale blocks followed by sparsity-invariant max-pooling, and are then upsampled three times to generate the full-resolution feature maps. The final feature maps are transformed by one $1\times 1$ convolution layer to generate the final per-pixel prediction. The output depth predictions are supervised by the Mean-Square Error (MSE) loss function with ground-truth annotations. Another way to understand the proposed network structure is to view it as the backbone encoder-decoder CNN shown in Fig. \[fig:convnet\_encoder\_decoder\](b), but with a special design for the depth completion task. Comparing the details in Fig. \[fig:overall\](a) with commonly used encoder-decoder networks [@ronneberger2015u; @Lin2017CVPR; @Newell2016ECCV], one can see several key shortcuts between different flows that are unique to our network and enable better fusion, as described before (their importance is demonstrated later in the ablation study). Apart from this, the low-level features in the mentioned encoder-decoder networks go through very few non-linear transformations, whereas our proposed network places much more emphasis on the low-level features. Furthermore, our network is also much shallower overall.
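To make the two-scale block concrete, here is a minimal single-channel NumPy sketch. It is our own simplification: the sparsity-invariant convolution is written as a valid-pixel-normalized weighted sum in the spirit of [@Uhrig2017THREEDV], strided slicing stands in for sparsity-invariant downsampling, and biases and non-linearities are omitted.

```python
import numpy as np

def sparse_conv(feat, mask, weight):
    """Single-channel sparsity-invariant convolution: a weighted sum over
    valid pixels only, normalized by the number of valid pixels."""
    k = weight.shape[0]; r = k // 2
    h, w = feat.shape
    fp = np.pad(feat * mask, r); mp = np.pad(mask, r)
    out = np.zeros_like(feat); mout = np.zeros_like(mask)
    for i in range(h):
        for j in range(w):
            n = mp[i:i + k, j:j + k].sum()
            if n > 0:
                out[i, j] = (weight * fp[i:i + k, j:j + k]).sum() / n
                mout[i, j] = 1.0
    return out, mout

def two_scale_block(feat, mask, w_up, w_low):
    """Upper path: full-resolution sparsity-invariant conv.  Lower path:
    2x downsample -> conv -> nearest-neighbour upsample, added back."""
    up, m_up = sparse_conv(feat, mask, w_up)
    low, _ = sparse_conv(feat[::2, ::2], mask[::2, ::2], w_low)
    low_up = low.repeat(2, axis=0).repeat(2, axis=1)
    return up + low_up[:feat.shape[0], :feat.shape[1]], m_up
```

The three-scale block follows the same pattern with one additional, further-downsampled path fused into the upper flow.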
Compared with the Full-Resolution Residual Network [@pohlen2017full], which also has multiple shortcuts, the full-resolution low-level features in that network only serve as residual signals. In addition, it does not fuse features from multiple scales at the same time, as our three-scale block does. We further compare the proposed network with commonly used encoder-decoder network structures in our experimental studies.

RGB-guided multi-scale depth completion
---------------------------------------

LIDAR sensors are usually paired with RGB cameras to obtain aligned sparse depth maps and RGB images. RGB images can therefore act as auxiliary guidance for depth completion. To integrate RGB features into our multi-scale encoder-decoder network, we add an RGB feature path to the proposed network. The network structure is illustrated in Fig. \[fig:overall\](b). The input image is first processed by an RGB sub-network to obtain mid-level RGB features. The structure of the sub-network follows the first six blocks of the ERFNet [@Romera2017IV]. It consists of two downsampling blocks and four residual blocks. The downsampling block has a $2\times 2$ convolution layer with stride 2 and a $2\times 2$ max-pooling layer. The input features are fed into the two layers simultaneously, and their outputs are concatenated along the channel dimension to obtain the $1/2$-size feature maps. The main path of the residual block has two sets of $1\times 3$ conv $\rightarrow$ BN $\rightarrow$ ReLU $\rightarrow$ $3\times 1$ conv $\rightarrow$ BN $\rightarrow$ ReLU. Because the obtained mid-level RGB features are downsampled to $1/4$ of the original size, they are upsampled back to the input image's original size. The upsampled RGB features are then transformed by a series of convolutions. They act as additional guidance signals and are concatenated to the low-level sparse depth feature maps of the different multi-scale blocks.
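The downsampling block just described (a strided $2\times 2$ convolution and a $2\times 2$ max-pooling applied in parallel to the same input, their outputs concatenated along the channel axis) can be sketched as follows. This dense NumPy version and its shapes are illustrative only.

```python
import numpy as np

def downsample_block(feat, weight):
    """ERFNet-style downsampler: a strided 2x2 convolution and a 2x2
    max-pool run in parallel on the same input; their outputs are
    concatenated along the channel axis into the half-resolution map.

    feat: (C, H, W) with even H, W; weight: (Cout, C, 2, 2).
    """
    c, h, w = feat.shape
    # 2x2 convolution with stride 2: each output pixel sees one 2x2 patch.
    patches = feat.reshape(c, h // 2, 2, w // 2, 2).transpose(1, 3, 0, 2, 4)
    conv = np.einsum('ocij,hwcij->ohw', weight, patches)
    # 2x2 max-pooling with stride 2 over the same patches.
    pool = patches.max(axis=(3, 4)).transpose(2, 0, 1)
    return np.concatenate([conv, pool], axis=0)   # (Cout + C, H/2, W/2)
```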
Our experimental results show that including the additional mid-level RGB features as guidance further improves the depth completion accuracy.

Training scheme
---------------

We adopt the mean squared error (MSE) loss function to train our proposed encoder-decoder networks. Since some datasets only provide sparse ground-truth depth maps, the loss function is evaluated only at locations with ground-truth annotations, which can be formulated as $$\begin{aligned} L(\boldsymbol{x}, \boldsymbol{y}) = \frac{1}{|\boldsymbol{V}|} \sum_{u,v\in \boldsymbol{V}} \left| \boldsymbol{o}(u,v) - \boldsymbol{t}(u,v) \right|^2 \label{eq:loss}\end{aligned}$$ where $\boldsymbol{V}$ is the set of coordinates with ground-truth depth values, $|\boldsymbol{V}|$ is the total number of valid points in $\boldsymbol{V}$, and $\boldsymbol{o}$ and $\boldsymbol{t}$ are the predicted and ground-truth depth maps. For network training, all network parameters except those of the RGB sub-network are randomly initialized. We adopt the ADAM optimizer [@kinga2015method] with an initial learning rate of 0.01. The network is trained for 50 epochs. To gradually decrease the learning rate, it is decayed according to the following schedule, $$\begin{aligned} \text{learning rate} = 0.01 \times \left( 1 - \frac{\text{iter\_epoch}}{50} \right)^{0.9},\end{aligned}$$ where iter\_epoch denotes the current epoch. Also note that the sparsity masks are generated for every input directly by the network structure, without any learnable parameters. The values of the input masks are set to 1 at all valid spatial locations and 0 at invalid locations. The mask in one layer is propagated to the following layer. In other words, the masks depend purely on the network structure and the current input. During training, they filter out both invalid feature points and the gradients at invalid spatial locations. For our RGB sub-network, we use the first six blocks of the ERFNet [@Romera2017IV].
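Both ingredients of the training scheme are straightforward to state in code; the masked loss and the learning-rate schedule above can be sketched as follows (function names are ours):

```python
import numpy as np

def masked_mse(pred, target, valid):
    """MSE evaluated only at pixels that carry ground-truth depth."""
    v = valid.astype(bool)
    return float(((pred[v] - target[v]) ** 2).mean())

def poly_lr(epoch, base_lr=0.01, total_epochs=50, power=0.9):
    """Polynomial decay of the learning rate over the training epochs."""
    return base_lr * (1.0 - epoch / total_epochs) ** power
```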
The sub-network's initial parameters are copied from the network pretrained on the CityScapes dataset [@Cordts2017CVPR]. Both paths are then trained end-to-end until convergence.

Experiments
===========

We conduct experiments on the KITTI depth completion dataset [@Uhrig2017THREEDV] and the NYU-depth-v2 dataset [@silberman2012indoor] to evaluate the performance of our proposed approach.

  Methods                                 RGB info?   RMSE             MAE      iRMSE   iMAE
  --------------------------------------- ----------- ---------------- -------- ------- ------
  SparseConvs [@Uhrig2017THREEDV]         $\times$    1601.33          481.27   4.94    1.78
  IP-Basic [@ku2018defense]               $\times$    1288.46          302.60   3.78    1.29
  NConv-CNN [@eldesokey2018propagating]   $\times$    1268.22          360.28   4.67    1.52
  Spade-sD [@Jaritz20183DV]               $\times$    1035.29
  Sparse-to-Dense(d) [@Ma2018arxiv]       $\times$    954.36           288.64   3.21    1.35
  Ours w/o RGB                            $\times$    [**937.48**]{}   258.48   2.93    1.14
  Bilateral NN [@barron2016fast]          $\surd$     1750.00          520.00   -       -
  ADNN [@chodosh2018deep]                 $\surd$     1325.37          439.48   59.39   3.19
  CSPN [@Cheng2018ECCV]                   $\surd$     1019.64          279.46   2.93    1.15
  Spade-RGBsD [@Jaritz20183DV]            $\surd$     917.64
  Sparse-to-Dense(gd) [@Ma2018arxiv]      $\surd$                      249.95   2.80    1.21
  Ours w/ RGB                             $\surd$     841.78           253.47   2.73    1.13

  : Depth completion errors by different methods on the test set of the KITTI depth completion benchmark.[]{data-label="tab:kitti_overall"}

KITTI depth completion benchmark {#ssec:kitti}
--------------------------------

### Data and evaluation metrics {#sssec:kitti_data_metric}

We first evaluate our proposed approach on the KITTI depth completion benchmark [@Uhrig2017THREEDV]. Following the experimental setup in [@Uhrig2017THREEDV], 85,898 depth maps are used for training, 1,000 for validation and 1,000 for testing. The LIDAR depth maps are aligned with the RGB images by projecting each depth map into the image coordinates according to the extrinsic calibration between the two sensors.
The input depth maps generally contain $<10\%$ sparse points with depth values, and the top 1/3 of the input maps contains no depth measurements at all. One example is shown in Fig. \[fig:intro\](a). According to the benchmark, all algorithms are evaluated with the following metrics: root mean square error (RMSE in mm), mean absolute error (MAE in mm), root mean squared error of the inverse depth (iRMSE in $1/$km), and mean absolute error of the inverse depth (iMAE in $1/$km), i.e., $$\begin{aligned} \text{RMSE} &= \left( \frac{1}{\boldsymbol{|V|}} \sum_{u,v \in \boldsymbol{V}} |\boldsymbol{o}(u,v) - \boldsymbol{t}(u,v) |^2 \right)^{0.5}, \label{eq:rmse}\\ \text{MAE} &= \frac{1}{\boldsymbol{|V|}} \sum_{u,v \in \boldsymbol{V}} |\boldsymbol{o}(u,v) - \boldsymbol{t}(u,v) |, \label{eq:mae}\\ \text{iRMSE} &= \left( \frac{1}{\boldsymbol{|V|}} \sum_{u,v \in \boldsymbol{V}} \left| \frac{1}{\boldsymbol{o}(u,v)} - \frac{1}{\boldsymbol{t}(u,v)} \right|^2 \right)^{0.5}, \label{eq:irmse}\\ \text{iMAE} &= \frac{1}{\boldsymbol{|V|}} \sum_{u,v \in \boldsymbol{V}} \left| \frac{1}{\boldsymbol{o}(u,v)} - \frac{1}{\boldsymbol{t}(u,v)} \right|, \label{eq:imae}\end{aligned}$$ where $\boldsymbol{o}$ and $\boldsymbol{t}$ represent the output of our approach and the ground-truth depth values. Compared with MAE, RMSE is more sensitive to large errors: even a small number of large errors is magnified by the square operation and dominates the overall value. RMSE is therefore chosen as the main metric for ranking different algorithms on the KITTI leaderboard. Since large depth values usually have greater errors and might dominate the calculation of RMSE and MAE, iRMSE and iMAE are also evaluated; they average the errors of the inverse depth. In this way, errors at large depth values receive much lower weights in the two metrics, which therefore focus more on depth points near the LIDAR sensor.
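For reference, the four metrics follow directly from the definitions above; a NumPy sketch (unit conversions to mm and $1/$km are omitted):

```python
import numpy as np

def depth_metrics(pred, gt, valid):
    """RMSE, MAE, iRMSE and iMAE over pixels with ground-truth depth."""
    v = valid.astype(bool)
    err = pred[v] - gt[v]
    inv_err = 1.0 / pred[v] - 1.0 / gt[v]
    return {
        "RMSE": float(np.sqrt((err ** 2).mean())),
        "MAE": float(np.abs(err).mean()),
        "iRMSE": float(np.sqrt((inv_err ** 2).mean())),
        "iMAE": float(np.abs(inv_err).mean()),
    }
```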
### Comparison with state-of-the-art methods

The performance of our proposed approaches and of state-of-the-art depth completion methods is reported in Table \[tab:kitti\_overall\]. SparseConvs denotes the 6-layer convolutional neural network with only sparsity-invariant convolutions proposed in [@Uhrig2017THREEDV]. It supports only convolution and max-pooling operations and therefore loses much high-resolution information. IP-Basic denotes the method in [@ku2018defense], a well-designed algorithm with hand-crafted rules based on several traditional image processing algorithms. NConv-CNN [@eldesokey2018propagating] proposed a constrained convolution layer and propagates confidence across layers for depth completion. Bilateral NN [@barron2016fast] uses RGB images as guidance and integrates bilateral filters into deep neural networks; it was modified to handle sparse depth completion following [@Uhrig2017THREEDV]. Spade-sD and Spade-RGBsD [@Jaritz20183DV] have no special treatment for sparse data: they utilize a conventional dense CNN but adopt a different loss function and training strategy. CSPN [@Cheng2018ECCV] iteratively learns inter-pixel affinities with RGB guidance via recurrent convolution operations; the affinities are then used to spatially propagate depth values between different locations. Sparse-to-dense(d) and Sparse-to-dense(gd) [@Ma2018arxiv] exploit additional temporal information from sequential data to apply extra supervision based on the photometric loss between neighboring frames. Among methods without RGB guidance, our proposed network outperforms all other peer-reviewed methods in terms of RMSE (the main ranking metric on the KITTI leaderboard). Spade-sD has better MAE, iRMSE and iMAE, which means that it performs better on nearby objects but is more likely to generate large errors than our proposed method. Note that we utilize the $L2$ loss function to deliberately minimize RMSE.
If other metrics are considered more important, different loss functions can be adopted for training. Among methods with RGB guidance, our method ranks 2nd, behind Sparse-to-dense(gd) [@Ma2018arxiv], in terms of RMSE. However, Sparse-to-dense(gd) utilizes additional supervision from temporal information, while our proposed method only uses supervision from individual frames.

  Method                                       RMSE          MAE
  -------------------------------------------- ------------- ------------- -- --
  Baseline w/o sparseconv                      1819.81       426.84
  [Baseline w/ sparseconv]{}                   [1683.22]{}   [447.93]{}
  [Baseline + MS (Up only)]{}                  [1185.02]{}   [323.41]{}
  [Baseline + MS (Down only)]{}                [1192.43]{}   [322.95]{}
  [Baseline + MS (Mid-level flow removed)]{}   [1166.87]{}   [317.74]{}
  Baseline + MS (Full)                         1137.42       315.32
  Baseline + MS + SO                           994.14        262.41
  Baseline + MS + SO + RGB                     883.74        257.11

  : Component analysis of our proposed method on the validation set of the KITTI depth completion benchmark.[]{data-label="tab:components"}

### Ablation study

[We investigate the individual components of our framework to see how they contribute to the final performance. The investigated components are the multi-scale structure, the sparsity-invariant operations, and the RGB guidance. We take the full-resolution low-level feature path without the mid-level or high-level flows (i.e., the upper path in Fig. \[fig:overall\](a)) as the baseline model for this section. The baseline model does not include RGB guidance. The analysis results on the KITTI validation set are shown in Table \[tab:components\].
The baseline model without multi-scale feature fusion generates large depth estimation errors.]{}

  -------------------------------------------------- ------------------------------------------------- --
  ![image](figures/input_1sp.pdf){height="1.75cm"}   ![image](figures/sc_1sp.pdf){height="1.75cm"}
  Input Sparse Depth Map                             Baseline
  ![image](figures/image_1sp.pdf){height="1.75cm"}   ![image](figures/frrn_1sp.pdf){height="1.75cm"}
  Corresponding Image                                Baseline + MS
  -------------------------------------------------- ------------------------------------------------- --

**Multi-scale structure.** [Using our multi-scale encoder-decoder structure on top of the baseline model (denoted *Baseline+MS (Full)*) enables fusing low-level and high-level information from different scales. The multi-scale structure provides much larger receptive fields for neurons in the last convolution layer. Therefore, even if some regions of the input depth map are very sparse, the model can still predict depth values at every grid point. Using multi-scale features generally results in clearer boundaries and higher robustness to noise. *Baseline+MS (Full)* significantly decreases the RMSE from 1819.81 to 1137.42. Moreover, the fusing skip connections between different scales enable better fusion of the information in our network: removing either the upward fusing skip connections (shown by the up arrows within the blocks in Fig. \[fig:overall\](a)) or the downward fusing skip connections makes our network perform worse, as shown in Table \[tab:components\] (the *Up only* and *Down only* entries). The mid-level flow also improves performance (the *Mid-level flow removed* entry). An example showing the differences between *Baseline+MS (Full)* and *Baseline* is in Fig.
\[fig:example\_ms\].]{}

  ------------------------------------------------- ------------------------------------------------ -------------------------------------------------
  ![image](figures/input_2sp.pdf){height="1.6cm"}   ![image](figures/frrn_2sp.pdf){height="1.6cm"}   ![image](figures/frrnd_2sp.pdf){height="1.6cm"}
  Input Sparse Depth Map                            Baseline + MS                                    Baseline + MS (magnified)
  ![image](figures/image_2sp.pdf){height="1.6cm"}   ![image](figures/sp_2sp.pdf){height="1.6cm"}     ![image](figures/spd_2sp.pdf){height="1.6cm"}
  Corresponding Image                               Baseline + MS + SO                               Baseline + MS + SO (magnified)
  ------------------------------------------------- ------------------------------------------------ -------------------------------------------------

**Sparsity-invariant operations.** *Baseline+MS* utilizes dense convolutions and therefore has difficulty handling sparse input data and sparse feature maps, especially in regions with very few valid points. *Baseline+MS+SO* uses our proposed sparsity-invariant operations to maintain a correct mask flow and converts the conventional operations in *Baseline+MS* into sparsity-invariant ones that can handle very sparse inputs. The RMSE thereby improves from 1137.42 to 994.14. An example is shown in Fig. \[fig:example\_so\], where *Baseline+MS+SO* better handles regions with very sparse inputs.
  ------------------------------------------------ ----------------------------------------------- --
  ![image](figures/input_5.pdf){height="1.75cm"}   ![image](figures/sp_k5.pdf){height="1.75cm"}
  Input Sparse Depth Map                           Baseline + MS + SO
  ![image](figures/image_5.pdf){height="1.75cm"}   ![image](figures/rgb_k5.pdf){height="1.75cm"}
  Corresponding Image                              Baseline + MS + SO + RGB
  ------------------------------------------------ ----------------------------------------------- --

\[tab:structures\]

  Method                            [**RMSE**]{}     MAE
  --------------------------------- ---------------- ----------------
  U-Net [@ronneberger2015u]         1387.35          445.73
  FRRN [@pohlen2017full]            1148.27          338.56
  PSPNet [@zhao2017pyramid]         1185.39          354.21
  FPN [@lin2017feature]             1441.82          473.65
  [He et al. [@he2018learning]]{}   [1056.39]{}      [293.86]{}
  Ours w/o RGB                      [**994.14**]{}   [**262.41**]{}

  : Comparison with commonly used encoder-decoder networks without RGB guidance on the KITTI depth completion validation set.

**Deep fusion of RGB features.** By incorporating RGB features (*Baseline+MS+SO+RGB*), the network utilizes useful additional guidance from the RGB images, further improving the depth completion accuracy. An example comparing *Baseline+MS+SO+RGB* and *Baseline+MS+SO* is shown in Fig. \[fig:examples\_rgb\], where the resulting depth maps with RGB guidance are much sharper at image boundaries.

### Comparison with other encoder-decoder structures

[To evaluate the effectiveness of our proposed HMS-Net structure, we conduct experiments testing other commonly used encoder-decoder network structures. We modify those structures with the sparsity-invariant convolution and our proposed sparsity-invariant operations so that they can handle sparse inputs. The experimental results on the KITTI validation set are reported in Table \[tab:structures\]. We compare our network structure without RGB guidance with modified versions of U-Net [@ronneberger2015u], FRRN [@pohlen2017full], PSPNet [@zhao2017pyramid], FPN [@lin2017feature] and He et al. [@he2018learning] without focal length.
Our proposed network structure achieves the lowest errors in terms of both RMSE and MAE.]{}

  -- ----------------------------------------- ----------------------------------------------------- -------------------------------------------
     ![image](figures/noise){height="4.3cm"}   ![image](figures/gauss_patch_noise){height="4.3cm"}   ![image](figures/3_1.pdf){height="4.3cm"}
     \(a) Scene-level Gaussian noises          \(b) Region-level Gaussian noise                      \(c) Randomly abandon input depth points
  -- ----------------------------------------- ----------------------------------------------------- -------------------------------------------

  -------------------------------------------- ----------- ----------- ----------- ----------- ----------- -----------
                                               RMSE        REL         RMSE        REL         RMSE        REL
  Ma et al. [@ma2017sparse] w/o RGB            0.461       0.347       0.076       0.259       0.054
  [Jaritz et al. [@Jaritz20183DV] w/o RGB]{}   [0.476]{}   [0.114]{}   [0.358]{}   [0.074]{}   [0.246]{}   [0.051]{}
  [Ma et al. [@Ma2018arxiv] w/o RGB]{}         [0.481]{}   [0.113]{}   [0.352]{}   [0.245]{}   [0.049]{}
  [He et al. [@he2018learning] w/o RGB]{}      [0.478]{}   [0.113]{}   [0.355]{}   [0.072]{}   [0.238]{}   [0.045]{}
  Ours w/o RGB
  [Ma et al. [@ma2017sparse] w/ RGB]{}         [0.351]{}   [0.281]{}   [0.230]{}   [0.044]{}
  [Ours w/ RGB]{}                              [0.079]{}
  -------------------------------------------- ----------- ----------- ----------- ----------- ----------- -----------

\[tab:nyuv2\]

Robustness testing on KITTI benchmark {#ssec:robustness}
-------------------------------------

### Robustness to depth noises

Since the sparse depth maps are obtained from LIDAR scans, noise in the acquired depth values is inevitable. The robustness of depth completion algorithms to different noise levels is therefore important in practice. We conduct experiments to test the robustness of our proposed model without RGB guidance and compare it with SparseConvs [@Uhrig2017THREEDV] and IP-Basic [@ku2018defense].
Note that all models in this section are trained on the original data and tested directly on the noisy data. **Scene-level Gaussian noises.** For this experiment, we add Gaussian noise to 10% of the depth points, selected at random. Once the 10% of points are selected, for a given noise standard deviation between 5 and 50 meters, we sample additive noise values from a zero-mean Gaussian distribution. Negative additive noise can simulate occlusion by raindrops, snowflakes or fog, while positive additive noise mimics the laser passing through glass and mistakenly measuring objects behind it. Noisy points whose depth values fall below 1 meter are set to 1 meter to simulate the minimum range of the LIDAR sensor. The RMSEs of the three methods on the KITTI validation set with Gaussian noise are shown in Fig. \[fig:robustness\](a). Our method outperforms both SparseConvs [@Uhrig2017THREEDV] and IP-Basic [@ku2018defense] at all noise levels. **Region-level Gaussian noises.** We randomly select eight regions of size 25 $\times$ 25 pixels in every input depth map. In each region, 50% of the depth points are randomly selected and corrupted with Gaussian noise of zero mean and varying standard deviation. Noisy points whose depth values fall below 1 meter are again set to 1 meter to simulate the minimum range of the LIDAR sensor. The region-level noise simulates cases where large glass panes or mirrors exist; such regions reflect most of the laser and leave large holes in the obtained depth map. The RMSEs of the different methods on the KITTI validation set are shown in Fig. \[fig:robustness\](b). Our method again outperforms the two compared methods, owing to its capability of fusing low-level and high-level information with the proposed multi-scale encoder-decoder structure.

### Robustness to sparsity

Robustness to sparsity is also essential for depth completion algorithms.
We conduct experiments testing different levels of sparsity of the input depth maps. For each input depth map, we randomly abandon 10%-90% of the valid depth points. Note again that all methods are trained on the original training data and are not finetuned to adapt to the sparser inputs. The results on the KITTI validation set by our model without RGB guidance and the two compared methods are shown in Fig. \[fig:robustness\](c). Our proposed method shows the highest tolerance to different sparsity levels.

[l@m[1cm]{}&gt;m[3cm]{}&gt;m[3cm]{}&gt;m[3cm]{}&gt;m[3cm]{}&gt;m[3cm]{}]{}
& Inputs & ![image](figures/nyu_test/20/0/input.png){height="2.35cm"} & ![image](figures/nyu_test/50/100/input.png){height="2.35cm"} & ![image](figures/nyu_test/200/150/input.png){height="2.35cm"} & ![image](figures/nyu_test/1000/450/input.png){height="2.35cm"} & ![image](figures/nyu_test/5000/650/input.png){height="2.35cm"}\
& RGB Images (not used) & ![image](figures/nyu_test/20/0/rgb.png){height="2.35cm"} & ![image](figures/nyu_test/50/100/rgb.png){height="2.35cm"} & ![image](figures/nyu_test/200/150/rgb.png){height="2.35cm"} & ![image](figures/nyu_test/1000/450/rgb.png){height="2.35cm"} & ![image](figures/nyu_test/5000/650/rgb.png){height="2.35cm"}\
& Prediction Results & ![image](figures/nyu_test/20/0/pred.png){height="2.35cm"} & ![image](figures/nyu_test/50/100/pred.png){height="2.35cm"} & ![image](figures/nyu_test/200/150/pred.png){height="2.35cm"} & ![image](figures/nyu_test/1000/450/pred.png){height="2.35cm"} & ![image](figures/nyu_test/5000/650/pred.png){height="2.35cm"}\
& Ground Truth & ![image](figures/nyu_test/20/0/target.png){height="2.35cm"} & ![image](figures/nyu_test/50/100/target.png){height="2.35cm"} & ![image](figures/nyu_test/200/150/target.png){height="2.35cm"} & ![image](figures/nyu_test/1000/450/target.png){height="2.35cm"} & ![image](figures/nyu_test/5000/650/target.png){height="2.35cm"}\
& & $N=20$ & $N=50$ & $N=200$ & $N=1000$ & $N=5000$

NYU-depth-v2 dataset
--------------------

### Data, experimental setup, and evaluation metrics

We also evaluate our proposed method on the NYU-depth-v2 dataset [@silberman2012indoor] with its official train/test split. Each RGB image in the dataset is paired with a spatially aligned dense depth map. The original depth maps are dense, and the dataset was not originally proposed for sparse depth completion. Following the experimental setup in [@ma2017sparse], synthetic sparse depth maps are created to test the performance of depth completion algorithms: only $N$ randomly kept depth points in each depth map are used as the input for depth completion. The training set consists of depth and image pairs from 249 scenes, and 654 images are selected for evaluating the final performance according to the setup in [@ma2017sparse; @laina2016deeper; @eigen2014depth]. RMSE (Eq. \[eq:rmse\]) and the mean absolute relative error (REL in meters) are adopted as the evaluation metrics. REL is calculated as $$\begin{aligned} \text{REL} = \frac{1}{\boldsymbol{|V|}} \sum_{u,v \in \boldsymbol{V}} \left|\frac{\boldsymbol{o}(u,v) - \boldsymbol{t}(u,v)}{\boldsymbol{t}(u,v) }\right|, \label{eq:rel}\end{aligned}$$ where $\boldsymbol{o}$ and $\boldsymbol{t}$ are the output of our network and the ground-truth dense depth map.

### Comparison with state-of-the-art

[We compare our method to the methods proposed by Ma et al. [@Ma2018arxiv; @ma2017sparse][^1], Jaritz et al. [@Jaritz20183DV] and He et al. [@he2018learning].]{} Since the input depth maps are much sparser than those in the KITTI dataset [@Uhrig2017THREEDV], we add a $2 \times 2$ max-pooling layer after the first $5 \times 5$ convolution layer and a Batch Normalization [@ioffe2015batch] layer after each convolution layer. Each input depth map has $N=20$, $50$ or $200$ randomly kept depth points.
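The synthetic sparsification step (keeping $N$ random valid points per depth map, following the setup of [@ma2017sparse]) amounts to the following sketch:

```python
import numpy as np

def sparsify(depth, n, rng=None):
    """Randomly keep `n` valid (non-zero) depth points; zero out the rest."""
    rng = np.random.default_rng(rng)
    idx = np.flatnonzero(depth.ravel() > 0)
    keep = rng.choice(idx, size=min(n, idx.size), replace=False)
    sparse = np.zeros_like(depth)
    sparse.ravel()[keep] = depth.ravel()[keep]
    return sparse
```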
RMSE and REL for different $N$ values are reported in Table \[tab:nyuv2\], which demonstrates that our proposed method outperforms all other methods that do not use RGB information. [We also show the results with RGB guidance in Table \[tab:nyuv2\], as well as examples with different $N$ values and the corresponding depth completion results in Fig. \[fig:nyu\].]{}

Conclusions
===========

In this paper, we proposed several novel sparsity-invariant operations for handling sparse feature maps. These operations enable us to design a novel sparsity-invariant encoder-decoder network that effectively fuses multi-scale features from different CNN layers for accurate depth completion. RGB features are also integrated into the proposed framework to better guide depth completion. Extensive experimental results and component analyses show the advantages of the proposed sparsity-invariant operations and of the encoder-decoder network structure. The proposed method outperforms state-of-the-art methods and demonstrates strong robustness against different levels of data corruption.

[^1]: The code released by the authors was utilized.
---
abstract: 'This paper gives a precise characterization of the fundamental limits of adaptive sensing for diverse estimation and testing problems concerning sparse signals. We consider in particular the setting introduced in [@haupt:10] and show necessary conditions on the minimum signal magnitude for both detection and estimation: if ${{\boldsymbol{x}}}\in\R^n$ is a sparse vector with $s$ non-zero components then it can be reliably detected in noise provided the magnitude of the non-zero components exceeds $\sqrt{2/s}$. Furthermore, the signal support can be exactly identified provided the minimum magnitude exceeds $\sqrt{2\log s}$. Notably there is no dependence on $n$, the extrinsic signal dimension. These results show that the adaptive sensing methodologies proposed previously in the literature are essentially optimal, and cannot be substantially improved. In addition these results provide further insights into the limits of adaptive compressive sensing.'
address: 'Eindhoven University of Technology, The Netherlands'
author:
-
bibliography:
- 'references.bib'
title: Adaptive Sensing Performance Lower Bounds for Sparse Signal Detection and Support Estimation
---

Introduction {#sec:introduction}
============

This paper addresses the characterization of the fundamental limits of adaptive sensing in sparse settings, when a potentially infinite number of observations is available but the total sensing precision budget is restricted. One of the key aspects of adaptive sensing is that the data collection process is sequential and adaptive.
In different fields these sensing/experimenting paradigms are known by different names, such as *sequential experimental design* in statistics and economics (see [@wald:47; @bessler:60; @fedorov:72; @elgamal:91; @hall:03; @lai:85; @blanchard:05]), or *active learning* and *adaptive sensing/sampling* in computer science, engineering and machine learning (see [@cohn:96; @freund:97; @novak:96; @korostelev:00; @dasgupta:04; @castro:05; @dasgupta:05a; @dasgupta:05b; @hanneke:10; @koltchiinskii:10; @balcan:06; @castro:08]). The extra flexibility of adaptive sensing can sometimes (but not always) yield significant performance gains. In this paper we are particularly concerned with the setting in [@haupt:10], where the authors propose an adaptive sparse signal recovery method that provably improves on the best possible non-adaptive sensing methods. However, that work gives no indication of the fundamental performance limitations in such sensing scenarios. This paper addresses those gaps in our understanding, and shows that the proposed procedures are essentially asymptotically optimal for estimation problems. Furthermore, with some modifications, the procedure of [@haupt:10] is also nearly optimal when testing for the presence of a sparse signal. In addition, we also present results characterizing the fundamental limitations in several other settings, such as exact support recovery, as in [@malloy:11; @malloy:11b] or in [@arias-castro:11].

Problem Setting
===============

Let ${{\boldsymbol{x}}}\in\R^n$ be an unknown vector. We assume this vector is sparse in the sense that only a reduced number of its entries are non-zero. In particular, let $S$ be a subset of $\{1,\ldots,n\}$ and assume that for all $i\in\{1,\ldots,n\}$ such that $i\notin S$ we have $x_i=0$. We refer to $S$ as the signal support set, and this is our main object of interest.
In this paper we consider two distinct classes of problems: (i) signal support estimation, where we desire to estimate $S$; (ii) signal detection, where we simply want to test if $S$ belongs to some particular class. In our model the signal ${{\boldsymbol{x}}}$ is unknown, but we can collect partial information through noisy observations. In particular we observe $$Y_k=x_{A_k}+\Gamma^{-1}_k W_k \quad \forall k\in\{1,2,\ldots\}\ ,\label{eqn:observations}$$ where $A_k,\Gamma_k$ are taken to be measurable functions of $\{Y_i,A_i,\Gamma_i\}_{i=1}^{k-1}$, and $W_k$ are standard normal random variables, independent of $\{Y_i\}_{i=1}^{k-1}$ and also independent of $\{A_i,\Gamma_i\}_{i=1}^{k}$. In this model $A_k\in\{1,\ldots,n\}$ corresponds to the entry of ${{\boldsymbol{x}}}$ that is measured at time $k$, therefore $A_k$ can be viewed as the *sensing action* taken at time $k$. Similarly $\Gamma^2_k$ is the *precision* of the measurement taken at time $k$. Finally, there is a total sensing budget constraint that must be satisfied, namely $$\sum_{k=1}^\infty \Gamma^2_k \leq m\ ,\label{eqn:budget}$$ where $m>0$. It is important to note that we can consider both deterministic sequential designs or random sequential designs. In the latter we allow the choices $A_k$ and $\Gamma_k$ to incorporate extraneous randomness, which is not explicitly described in the model. Besides being more general this extra flexibility often facilitates the analysis. The collection of conditional distributions of $A_k,\Gamma_k$ given $\{Y_i,A_i,\Gamma_i\}_{i=1}^{k-1}$ for all $k$ is referred to as the *sensing strategy*, and denoted by $\cal A$. Note that, within the sensing model above, we can also consider non-adaptive sensing frameworks, meaning the choice of sensing actions and precision allocation must be made before collecting any data. Formally this means that $\{A_k,\Gamma_k\}_{k\in\N}$ is statistically independent from $\{Y_k\}_{k\in\N}$. 
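As a concrete illustration, the observation model (\[eqn:observations\]) under the budget constraint (\[eqn:budget\]) can be simulated for any fixed (hence non-adaptive) sequence of sensing actions; an adaptive strategy would instead choose each action and precision after seeing the previous observations. The following is a minimal NumPy sketch; the function name and interface are ours, with `precisions` holding the values $\Gamma_k^2$.

```python
import numpy as np

def sense(x, actions, precisions, budget, rng=None):
    """Simulate Y_k = x_{A_k} + Gamma_k^{-1} W_k for a fixed sequence of
    sensing actions A_k and precisions Gamma_k^2, enforcing the total
    budget sum_k Gamma_k^2 <= m."""
    g2 = np.asarray(precisions, dtype=float)
    if g2.sum() > budget:
        raise ValueError("total precision exceeds the sensing budget")
    rng = np.random.default_rng(rng)
    w = rng.standard_normal(g2.size)   # standard normal noise W_k
    return x[np.asarray(actions)] + w / np.sqrt(g2)
```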
Note that a non-adaptive design can still be random. The case $m=n$ is of particular interest and is often considered in the literature, as it allows a direct comparison between adaptive and non-adaptive sensing methodologies. If $m=n$ we allow, on average, one unit of precision for each one of the $n$ signal entries. Therefore if we assume the signal ${{\boldsymbol{x}}}$ belongs to a class for which there is no reason to give a priori preference to any particular signal entry, the optimal non-adaptive sensing strategy amounts to measuring each vector entry exactly once, with precision one[^1]. This is obviously the classical normal means model. In the following sections we consider two different scenarios: signal detection/testing and signal estimation. In both cases the extra flexibility of adaptive sensing is shown to be extremely rewarding. We characterize the fundamental performance limits of adaptive sensing in those scenarios and show that these limits can be achieved by practical inference methodologies. Signal Detection ================ In this setting we are interested in a binary hypothesis testing problem, where we test a simple null hypothesis against a composite alternative. In particular, the null hypothesis $H_0$ is simply $S=\emptyset$, and the alternative hypothesis $H_1$ is $S\in\C$, where $\C$ is some class of non-empty subsets of $\{1,\ldots,n\}$. We are particularly interested in the case when under the alternative $H_1$ all the sets in $\C$ have cardinality $s$, meaning that for all $S\in\C$ we have $|S|=s$. We will consider only such classes, as this greatly simplifies the presentation and is not, for the most part, a restrictive condition. Define $$x_{\min}=\min\left\{|x_i|:x_i\neq 0\ ,\ i\in\{1,\ldots,n\}\right\}\ .$$ In the following we characterize the fundamental signal detection limits, in particular identifying conditions on $x_{\min}$, as a function of $\C$ and $n$, such that no procedure is able to reliably distinguish the two hypotheses.
Furthermore these bounds are essentially tight, in the sense that there exist practical procedures matching them. For simplicity we consider only non-negative signals, meaning that $x_i\geq 0$ for all $i\in\{1,\ldots,n\}$. This greatly simplifies the analysis, without hindering the generality of the results. More comments about this are made in Remark \[rmk:tightness\]. Furthermore the hardest signals to detect or estimate are of the form $$\label{eqn:simple_model} x_i=\left\{\begin{array}{ll} \mu & \text{ if } i\in S\\ 0 & \text{ otherwise} \end{array}\right.\ .$$ This means that we can restrict our analysis to signals of the form above, which are entirely described by the signal support set $S$ and signal amplitude $\mu$. This is also the class of signals considered in [@addario-berry:10] or in [@donoho:04] in a non-adaptive sensing context. Let $$D=\{Y_i,A_i,\Gamma_i\}_{i\in\N}\ ,$$ and let $d=\{y_i,a_i,\gamma_i\}_{i=1}^{\infty}$ be a particular realization of the experimental procedure. Let ${\cal A}$ denote a particular sensing strategy, and $\hat \Phi(D)\in\{0,1\}$ be an arbitrary testing function, taking the value $1$ if the null hypothesis is to be rejected, and zero otherwise. For notational convenience we write simply $\hat \Phi$, where the hat indicates the dependency on the data $D$. The *risk* of this procedure is given by $$R(\hat\Phi)=\P_\emptyset(\hat\Phi\neq 0)+\max_{S\in\C} \P_S(\hat\Phi\neq 1)\ ,$$ where $\P_S$ denotes the joint probability distribution of $\{Y_i,A_i,\Gamma_i\}_{i=1}^{\infty}$ for a given value of $S$. Likewise we use $\E_S$ to denote expectation under $\P_S$. Now define $$c(\mu,\C) = \inf_{\hat\Phi,{\cal A}} R(\hat\Phi)=\inf_{\hat\Phi,{\cal A}} \left\{\P_\emptyset(\hat\Phi\neq 0)+\max_{S\in\C} \P_S(\hat\Phi\neq 1)\right\}\ .\label{eqn:minimax}$$ Our formal goal is to identify the values of the signal magnitude $\mu$ for which we necessarily have $c(\mu,\C)\geq\epsilon$ for some $\epsilon>0$.
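To make the risk $R(\hat\Phi)$ concrete, the following sketch estimates both error terms by Monte Carlo for a simple non-adaptive max test under the $m=n$ budget (one unit-precision measurement per entry). The test, threshold, and parameter values are our own illustrative choices, not a procedure from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, s, mu = 500, 10, 4.0
S = set(range(s))                       # a fixed alternative support
reps = 2000

def max_test(y, tau):
    # reject H0 (declare a signal present) when the largest
    # observation exceeds the threshold tau
    return 1 if y.max() > tau else 0

tau = np.sqrt(2 * np.log(n))            # classical threshold for n Gaussians

type1 = 0
type2 = 0
for _ in range(reps):
    y0 = rng.standard_normal(n)         # under H0: x = 0
    y1 = rng.standard_normal(n)
    y1[list(S)] += mu                   # under H1: mu added on the support
    type1 += max_test(y0, tau)
    type2 += 1 - max_test(y1, tau)

# estimated risk  P_0(reject) + P_S(accept)
risk = type1 / reps + type2 / reps
print(round(risk, 3))
```

For these (favorable) parameter values the estimated risk is small; the lower bounds that follow quantify when no choice of $\hat\Phi$ and $\cal A$ can keep it small.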
The choice of risk above is obviously not the only one possible, and in the literature other choices of risk have been considered, such as $$\tilde R(\hat\Phi)=\max\left\{\P_\emptyset(\hat\Phi\neq 0),\max_{S\in\C} \P_S(\hat\Phi\neq 1)\right\}\ ,\label{eqn:max_risk}$$ or $$\bar R(\hat\Phi)=\P_\emptyset(\hat\Phi\neq 0)+\frac{1}{|\C|}\sum_{S\in\C} \P_S(\hat\Phi\neq 1)\ .\label{eqn:bayes_risk}$$ As discussed in [@addario-berry:10], the latter measure of risk corresponds to the view that, under the alternative hypothesis, a set $S\in\C$ is selected uniformly at random from $\C$. Clearly $$\bar R(\hat\Phi) \leq R(\hat\Phi) \leq 2\tilde R(\hat\Phi) \leq 2 R(\hat\Phi)\ .$$ If there is sufficient symmetry in $\C$ and $\hat\Phi$ these three risk measures are essentially identical. Whenever possible we characterize the fundamental limits of adaptive sensing for each one of the risk measures, but focus primarily on $R(\hat\Phi)$. Main Results - Detection ------------------------ The class $\C$ of all subsets of $\{1,\ldots,n\}$ with cardinality $s$ is of particular interest. This is the class of maximal size, and obviously the one for which we expect the worst performance for detection. Perhaps surprisingly, under the adaptive sensing paradigm, the exact same performance lower bound is obtained for *any* class $\C$ exhibiting some very mild symmetry. This means that, in many situations, the structure of the class $\C$ does not really help under the adaptive sensing scenario. This is in stark contrast with non-adaptive sensing scenarios, where the structure of the set $\C$ can play a very prominent role, as well documented in [@addario-berry:10; @arias-castro:08; @butucea:11]. To state the main result of this section we need the following definitions: Let $\Xi=\bigcup_{S\in\C} S$ and $S$ be drawn uniformly at random from $\C$. If for all $i\in\Xi$ we have $\P(i\in S)=s/|\Xi|$ the class $\C$ is said to be symmetric. Furthermore if $|\Xi|=n$ the class is said to be full range.
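The symmetry condition just defined is easy to check computationally. The small sketch below (with toy classes of our own choosing) verifies that the maximal class and a disjoint-block class are symmetric, while contiguous intervals without wraparound are not, since edge indices are covered by fewer sets.

```python
from itertools import combinations

def is_symmetric(C):
    """Check: with S uniform on C, P(i in S) = s/|Xi| for all i in Xi."""
    Xi = set().union(*C)
    counts = {i: sum(i in S for S in C) for i in Xi}
    # P(i in S) = counts[i]/|C|; since sum_i counts[i] = s*|C|,
    # symmetry holds exactly when all counts over Xi are equal.
    return len(set(counts.values())) == 1

n, s = 6, 2
all_pairs = [set(c) for c in combinations(range(n), s)]      # maximal class
blocks = [{0, 1}, {2, 3}, {4, 5}]                            # disjoint blocks
intervals = [set(range(i, i + s)) for i in range(n - s + 1)] # no wraparound

assert is_symmetric(all_pairs)       # symmetric and full range
assert is_symmetric(blocks)          # symmetric and full range
assert not is_symmetric(intervals)   # indices 0 and n-1 are covered less
```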
It is remarkable that many classes $\C$ of interest satisfy this mild symmetry, as for instance, all the classes in [@addario-berry:10]. \[thm:detection\] Consider the setting above and let $\C$ be a symmetric class. Let $\hat\Phi(D)$ be an arbitrary testing procedure, where $D=\{Y_i,A_i,\Gamma_i\}_{i\in\N}$. Finally let $0<\epsilon<1$ be arbitrary. If $R(\hat\Phi)\leq\epsilon$ then necessarily $$x_{\min} \geq \sqrt{\frac{2|\Xi|}{sm} \log\frac{1}{2\epsilon}}\ .\label{eqn:detection}$$ This result gives a condition on the minimal signal magnitude necessary to ensure the detection risk is not too large. Perhaps surprisingly the lower bound does not include any factor involving specific structural properties of $\C$, but only the range and cardinality of the corresponding sets. A possible way to understand this comes from the following observation: for detection, it suffices to identify a *single* element of $S$, and there is no need to identify all the elements. Therefore cues provided by the structure are not very informative. Before proving this result it is interesting to present a simple corollary for the case of full range classes, emphasizing the asymptotic behavior. \[coro:detection\] Let $\C$ be a symmetric and full range class of sets with cardinality $s$, where $s$ can be a function of $n$ (this dependence is not explicitly stated). Let $\hat\Phi_n$ be an arbitrary adaptive sensing testing procedure. If $$\lim_{n\rightarrow \infty} R(\hat\Phi_n)=0$$ then necessarily $$x_{\min} \geq \omega_n \sqrt{\frac{n}{sm}}\ ,$$ where $\omega_n$ is a sequence for which $\lim_{n\rightarrow\infty} \omega_n=\infty$. This corollary gives a necessary condition for detection consistency. As shown in Proposition \[prop:MDS\] this bound is actually tight, meaning there are adaptive sensing procedures that can detect signals satisfying the above condition. The case $m=n$ is particularly interesting, as it allows the comparison between adaptive and non-adaptive sensing performance. 
For that case, adaptive sensing detection is possible if $x_{\min} = \omega_n \sqrt{\frac{1}{s}}$. Since $\omega_n$ can diverge arbitrarily slowly we see that the extrinsic signal dimension $n$ plays no significant role in this bound, and only the intrinsic dimension $s$ is relevant. Keep in mind, however, that $\omega_n$ is related to the rate of convergence of the risk $R(\hat\Phi_n)$ to zero. Corollary \[coro:detection\] is in stark contrast to what is known for the same problem if one restricts to the classical setting of non-adaptive sensing, as in [@ingster:97; @ingster:03; @donoho:04; @donoho:06]. For instance, for the class of all subsets with cardinality $s$ it is necessary to have $x_{\min} \geq c\sqrt{\log n}$ if $s=o(\sqrt{n})$, where the factor $c>0$ depends on the specific relation between $s$ and $n$. In [@meinshausen:06] the authors considered estimation of the proportion of significant components $|S|/n$. Their setting is more general, as the distributions corresponding to significant and insignificant signal component observations can be non-normal. Their approach can be used to test the hypothesis $|S|=0$. For the Gaussian case, they recover essentially the $\sqrt{\log n}$ scaling. Finally, in [@cai:07] the authors consider again the estimation of the fraction of significant signal components in the normal means case, and show results beyond consistency, including minimax rates of convergence of the risk. We now proceed with the proof of the theorem and a discussion about tightness of the bounds. Proof: The proof of this lower bound hinges, as usual, on the analysis of likelihood ratios. Begin by defining the joint probability density function of $\{Y_k,A_k,\Gamma_k\}_{k=1}^\infty$ under $S$, which we denote by $$f(d;S)=f(y_1,a_1,\gamma_1,y_2,a_2,\gamma_2,\ldots;S)\ .$$ Note that this is properly defined for a certain dominating measure (mixed continuous and discrete).
Taking into account the conditional dependences in our observation model we can factorize this probability density function as follows $$\begin{aligned} f(d;S) &=& f_{A_1,\Gamma_1}(a_1,\gamma_1) \times f_{Y_1|A_1,\Gamma_1}(y_1|a_1,\gamma_1;S)\\ && \times f_{A_2,\Gamma_2|Y_1,A_1,\Gamma_1}(a_2,\gamma_2|y_1,a_1,\gamma_1)\times f_{Y_2|A_2,\Gamma_2}(y_2|a_2,\gamma_2;S) \times\cdots\end{aligned}$$ Note that in this factorization only some terms involve the underlying true set $S$, while all the other terms depend solely on the sensing strategy used. This greatly simplifies the computation of likelihood ratios, as all the terms not involving $S$ cancel out. In particular the likelihood ratio between two hypotheses is given simply by $$\begin{aligned} \LR_{S,S'}(d) &=& \frac{f(d;S)}{f(d;S')}\\ &=& \prod_{k=1}^\infty \frac{f_{Y_k|A_k,\Gamma_k}(y_k|a_k,\gamma_k;S)}{f_{Y_k|A_k,\Gamma_k}(y_k|a_k,\gamma_k;S')}\label{eqn:LR}\ .\end{aligned}$$ As usual, in order to effectively distinguish if the underlying true distribution is parameterized by $S$ or $S'$ the corresponding likelihood ratio needs to be significantly different from 1. We proceed by formally stating this. Our analysis is heavily inspired by the approach in [@chernoff:59]. The first step is to relate the probabilities of type I and type II errors to the likelihood ratio, namely giving a relation between $\P_S(\hat\Phi\neq 1)$ and $\P_\emptyset(\hat\Phi\neq 0)$ where $S$ is an arbitrary element of $\C$. Begin by defining the total variation and the Kullback-Leibler divergence between two probability measures. Let $\P_0$ and $\P_1$ be two probability measures defined on a common measurable space $(\Omega,{\cal B})$.
The total variation distance is defined as $$\text{TV}(\P_0,\P_1) = \sup_{B\in{\cal B}} |\P_0(B)-\P_1(B)|\ .$$ The Kullback-Leibler divergence is defined as $$\text{KL}(\P_0\|\P_1)=\left\{\begin{array}{ll} \int_\Omega \log \frac{d\P_0}{d\P_1} d\P_0 & \text{ if } \P_0 \ll \P_1\ ,\\ +\infty & \text{ otherwise} \end{array}\right. \ .$$ The total variation is a proper distance, unlike the Kullback-Leibler divergence. Both are always non-negative but the latter is not symmetric. Note that, whenever $\P_0$ and $\P_1$ have a common dominating measure one can define the corresponding densities $f_0$ and $f_1$, and the Kullback-Leibler divergence is simply given by $$\text{KL}(\P_0\|\P_1)=\E_0\left[\log \frac{f_0(X)}{f_1(X)}\right]\ ,$$ where $X$ is a random variable with distribution given by $\P_0$. Therefore we get simply the expected value of a log-likelihood ratio. Consider now the setting in this paper. As shown in [@tsybakov:09], the total variation is closely related to the infimum of the sum of type I and type II error probabilities; namely, for any binary (test) function $\hat\Phi$ we have $$\P_\emptyset(\hat\Phi\neq 0)+\P_S(\hat\Phi\neq 1)\geq 1-\text{TV}(\P_\emptyset,\P_S)\ .$$ Evaluating the total variation distance is generally difficult, but using Lemma 2.6 of [@tsybakov:09] we can relate it to the Kullback-Leibler divergence, which is generally much easier to evaluate. Namely $$1-\text{TV}(\P_\emptyset,\P_S)\geq \frac{1}{2}\exp(-\text{KL}(\P_\emptyset\|\P_S))\ .$$ Putting these two results together we obtain a simple relation between the Kullback-Leibler divergence and the probabilities of error, $$\text{KL}(\P_\emptyset\|\P_S)\geq -\log \left(2\P_\emptyset(\hat\Phi\neq 0)+2\P_S(\hat\Phi\neq 1)\right)\ .\label{eqn:keylowerbound}$$ To simplify the notation let $\LR_{S,S'}\equiv \LR_{S,S'}(D)$.
From Equation \[eqn:keylowerbound\] we conclude that $$\E_\emptyset[\log \LR_{\emptyset,S}]=\text{KL}(\P_\emptyset\|\P_S) \geq -\log\left(2\P_\emptyset(\hat\Phi\neq 0)+2\P_S(\hat\Phi\neq1)\right)\ .$$ Since the choice of set $S$ was completely arbitrary, we have the bound $$\begin{aligned} \min_{S\in\C} \E_\emptyset[\log \LR_{\emptyset,S}] \geq \min_{S\in\C} \left\{-\log\left(2\P_\emptyset(\hat\Phi\neq 0)+2\P_S(\hat\Phi\neq1)\right)\right\}\ .\label{eqn:logliklowerbound}\end{aligned}$$ At this point it is important to note that, if we desire to have $R(\hat\Phi)\leq\epsilon$ for some $0<\epsilon<1$ then $\P_\emptyset(\hat\Phi\neq 0)+\P_S(\hat\Phi\neq 1)\leq \epsilon$ (for any $S\in\C$), and therefore $$\min_{S\in\C} \E_\emptyset[\log \LR_{\emptyset,S}] \geq \log\left(\frac{1}{2\epsilon}\right)\ . \label{eqn:lower_LR_bound}$$ The next step of the proof entails deriving a good upper bound on $\min_{S\in\C} \E_\emptyset[\log \LR_{\emptyset,S}]$ and comparing it to the lower bound just shown. As noted before, the expected likelihood ratio is actually the Kullback-Leibler divergence between $\P_\emptyset$ and $\P_S$. This obviously depends on the sensing strategy $\cal A$ that is used. Therefore we need to get an upper bound on $$\sup_{\cal A} \min_{S\in\C} \E_\emptyset[\log \LR_{\emptyset,S}]\ .\label{eqn:maxmin}$$ It is instructive to compare the above expression with the minimax risk \[eqn:minimax\]. Note that the roles of the max/sup and min/inf are reversed. This should not come as a surprise, as larger values of $\E_\emptyset[\log \LR_{\emptyset,S}]$ correspond to lower probabilities of error. Note also that $\E_\emptyset[\log \LR_{\emptyset,S}]$ can be interpreted as the payoff matrix of a game where the sensing strategy makes the first move, and nature is the opponent that chooses a sparsity pattern in an adversarial way.
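The two bounds used above (the total variation bound on the sum of error probabilities and the bound $1-\text{TV}\geq\frac{1}{2}e^{-\text{KL}}$) can be sanity-checked numerically. For unit-variance Gaussians $N(0,1)$ and $N(\mu,1)$ both quantities have closed forms, $\text{TV}=2\Phi(\mu/2)-1$ and $\text{KL}=\mu^2/2$, which the sketch below uses; the grid of $\mu$ values is arbitrary.

```python
from math import erf, exp, sqrt

def Phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# For P0 = N(0,1) and P1 = N(mu,1):
#   TV(P0,P1) = 2*Phi(mu/2) - 1   and   KL(P0||P1) = mu^2 / 2.
for mu in [0.1, 0.5, 1.0, 2.0, 4.0]:
    tv = 2.0 * Phi(mu / 2.0) - 1.0
    kl = mu * mu / 2.0
    # Lemma 2.6 of Tsybakov:  1 - TV >= (1/2) exp(-KL)
    assert 1.0 - tv >= 0.5 * exp(-kl)
    print(mu, round(1.0 - tv, 4), round(0.5 * exp(-kl), 4))
```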
Now note that $$\begin{aligned} \E_\emptyset[\log \LR_{\emptyset,S}] &=& \sum_{k=1}^\infty \E_\emptyset\left[\log \frac{f_{Y_k|A_k,\Gamma_k}(Y_k|A_k,\Gamma_k;\emptyset)}{f_{Y_k|A_k,\Gamma_k}(Y_k|A_k,\Gamma_k;S)}\right]\\ &=& \sum_{k=1}^\infty \E_\emptyset\left[\E_\emptyset\left.\left[\log \frac{f_{Y_k|A_k,\Gamma_k}(Y_k|A_k,\Gamma_k;\emptyset)}{f_{Y_k|A_k,\Gamma_k}(Y_k|A_k,\Gamma_k;S)}\right|A_k,\Gamma_k\right]\right]\\ &=& \sum_{k=1}^\infty \E_\emptyset\left[\frac{\mu^2 \1\{A_k\in S\}}{2} \Gamma^2_k\right]\\ &=& \frac{\mu^2}{2} \sum_{k=1}^\infty \E_\emptyset\left[\1\{A_k\in S\} \Gamma^2_k\right] \ ,\end{aligned}$$ where the final steps rely simply on the Kullback-Leibler divergence between normal random variables with the same variance and different means. At this point we need to evaluate $$\sup_{\cal A} \min_{S\in\C} \left\{ \frac{\mu^2}{2} \sum_{k=1}^\infty \E_\emptyset\left[\1\{A_k\in S\} \Gamma^2_k\right]\right\} \ .$$ We need to solve the above optimization problem over the space of all possible sensing strategies. Although this might seem rather involved, this optimization can be reduced to a much simpler deterministic optimization problem. Begin by defining $$b_i=\sum_{k=1}^\infty \E_\emptyset[\1\{A_k=i\} \Gamma^2_k]\ .\label{eqn:b}$$ Note that this definition does not depend on $S$, as the expectation is taken under the null hypothesis. Furthermore $b_i\geq 0$, and the sensing budget in the observation model implies that $\sum_{i=1}^n b_i\leq m$. 
Therefore $$\begin{aligned} \lefteqn{\sup_{\cal A} \min_{S\in\C} \left\{\frac{\mu^2}{2} \sum_{k=1}^\infty \E_\emptyset\left[\1\{A_k\in S\} \Gamma^2_k\right]\right\}}\\ &=& \frac{\mu^2}{2} \sup_{\cal A} \min_{S\in\C} \left\{\sum_{k=1}^\infty \sum_{i\in S} \E_\emptyset\left[\1\{A_k=i\} \Gamma^2_k\right]\right\}\\ &=& \frac{\mu^2}{2} \sup_{\cal A} \min_{S\in\C} \left\{\sum_{i\in S} \sum_{k=1}^\infty \E_\emptyset\left[\1\{A_k=i\} \Gamma^2_k\right]\right\}\\ &=& \frac{\mu^2}{2} \sup_{{{\boldsymbol{b}}}\in\R_0^+: \sum_{i=1}^n b_i\leq m} \min_{S\in\C} \sum_{i\in S} b_i \ .\end{aligned}$$ We now have a relatively simple finite dimensional problem, where we seek to identify the vector ${{\boldsymbol{b}}}=(b_1,\ldots,b_n)$ maximizing a concave function. The solution of this problem obviously depends on the exact structure of $\C$. Remarkably, for symmetric classes, the solution is extremely simple and characterized in the first part of the following lemma, proved in the Appendix. \[lemma:optimization\] Let $\C$ be a symmetric class. Let $\Xi=\bigcup_{S\in\C} S$. Then 1. $$\sup_{{{\boldsymbol{b}}}\in\R_0^+: \sum_{i=1}^n b_i=m} \min_{S\in \C} \sum_{i\in S} b_i =\frac{ms}{|\Xi|}\ ,$$ 2. $$\sup_{{{\boldsymbol{b}}}\in\R_0^+: \sum_{i=1}^n b_i=m} \frac{1}{|\C|} \sum_{S\in \C} \sum_{i\in S} b_i =\frac{ms}{|\Xi|}\ ,$$ and in both cases the solution is attained taking $b_i=m/|\Xi|$ for $i\in\Xi$ and zero otherwise. We are now in a position to prove the theorem: by putting together the likelihood ratio lower bound and the above upper bound we get $$\frac{\mu^2 m s}{2|\Xi|}\geq \log\frac{1}{2\epsilon}\ ,$$ which is equivalent to $$\mu \geq \sqrt{\frac{2 |\Xi|}{s m} \log\frac{1}{2\epsilon}}$$ concluding the proof. Lower bounds for adaptive sensing in settings other than the one in this paper have been derived previously. For instance in [@castro:08] a minimax characterization of the fundamental performance limits of active learning for a binary classification problem was provided.
Such results were made possible by bringing together approximation results for smooth functional spaces and classical minimax bounding techniques (as in [@tsybakov:09]), modified to incorporate the sequential experimental design aspect of the problem. In that approach the functional approximation results played the prominent role, and the stochastic part of the error had a much smaller contribution. Unfortunately this is not the case for the setting considered in the current paper and previously existing approaches were not adequate, prompting the novel approach in this paper. The proof of this theorem can be adapted for the other two risk definitions \[eqn:max_risk\] and \[eqn:bayes_risk\], and we can show that the risk behavior is qualitatively the same. These results are stated in the following proposition, proved in the Appendix. \[prop:detection\_other\_risks\] Consider the setting of Theorem \[thm:detection\] and let $0<\epsilon<1$. If $\tilde R(\hat\Phi)\leq\epsilon/2$ or $\bar R(\hat\Phi)\leq\epsilon$ then the conclusion of Theorem \[thm:detection\] is still valid and the lower bound holds. Tightness of the Detection Lower Bounds {#sec:tightness_detection} --------------------------------------- We now proceed to show that the lower bounds derived above are indeed tight, in the sense that there are adaptive sensing testing procedures which are able to nearly attain them. As we saw, for symmetric classes $\C$, extra class structure does not help. Therefore we focus exclusively on the largest class of all the subsets of $\{1,\ldots,n\}$ with cardinality $s$. In [@haupt:10] a procedure called Distilled Sensing (DS) was introduced, and the authors proved that for the detection problem described above this procedure is able to asymptotically drive the risk to zero when $\mu>4\sqrt{n/m}$ and $\log\log\log n<s<n^{1-\beta}$ for some $\beta\in(0,1)$.
When comparing this result to the above lower bound we see that there is a huge gap, as we would expect the signal magnitude $\mu$ to scale essentially like $\sqrt{2n/(sm)}$. However, it is important to note that DS is entirely agnostic about the sparsity level and possible signal magnitude. An alternative non-agnostic methodology can be derived using DS as a black-box, which nearly achieves the lower bounds of the previous section. We begin by formally stating the performance results for the DS procedure. The following proposition is essentially the second part of Theorem III.1 in [@haupt:10]. \[prop:DS\] Assume $\log\log\log n< s\leq n^{1-\beta}$, for some $\beta\in(0,1)$. Furthermore let $\mu>4\sqrt{n/m}$. There is a sensing strategy ${\cal A}_\text{DS}$ and a test function $\hat\Phi_\text{DS}$ such that $$R(\hat\Phi_\text{DS})\rightarrow 0\ ,$$ as $n\rightarrow\infty$. Note that this result is valid even if $s\approx\log \log \log n$, meaning $s$ can grow extremely slowly with $n$. This suggests the following modification: first randomly select $\tilde n$ elements of $\{1,\ldots,n\}$ without replacement. Denote these by ${\cal E}=\{E_1,\ldots,E_{\tilde n}\}$. Our sensing strategy will focus exclusively on the entries $\cal E$ and ignore all the remaining ones. In other words, our observation model is now $$Y_k=x_{E_{A_k}}+\Gamma^{-1}_k W_k \quad \forall k\in\{1,2,\ldots\}\ ,$$ where $A_k\in\{1,\ldots,\tilde n\}$. The sensing budget is, however, the same as in the original formulation $$\sum_{k=1}^\infty \Gamma^2_k \leq m\ .$$ In summary, we have exactly the same setting as before, but the extrinsic dimension $n$ is now replaced by the smaller $\tilde n$. Now, provided we choose $\tilde n$ large enough so that the conditions of Proposition \[prop:DS\] are met for this new setting, an improvement in performance is possible, yielding the following result. \[prop:MDS\] Assume $s>\log\log\log n$. Furthermore let $\mu>\sqrt{\frac{32n\log\log\log n}{sm}}$.
There is an adaptive sensing testing strategy such that $$R(\hat\Phi)\rightarrow 0\ ,$$ as $n\rightarrow\infty$. This result means that the statement of Corollary \[coro:detection\] is essentially tight, at least provided there are more than $\log\log\log n$ signal components under the alternative hypothesis. The constant in the bound is certainly not optimal, and the factor $\log\log\log n$ is (possibly) an artifact of the procedure. Closing the small gap between the upper and lower bounds is, however, still a direction for future research. \[rmk:tightness\] The results above were derived assuming the non-zero signal components are positive. Qualitatively these results remain the same even if one allows both positive and negative components. A simple way to address this setting is to write ${{\boldsymbol{x}}}$ as ${{\boldsymbol{x}}}={{\boldsymbol{x}}}^{+}-{{\boldsymbol{x}}}^{-}$, where ${{\boldsymbol{x}}}^{+}$ and ${{\boldsymbol{x}}}^{-}$ are sparse signal vectors with positive components (and the joint number of non-zero components is simply $s$). Now we can split the sensing budget into two equal parts, and make use of each one to test for the presence/absence of either signal. This approach yields the same asymptotic behavior, and will at most result in larger constants in the bounds. Also note that, in principle, a procedure in the spirit of the one introduced in [@chernoff:59] could be used to construct an adaptive sensing and testing methodology. However, the method of analysis in that paper is not adequate to deal with our setting. Nevertheless such a procedure seems to work extremely well based on a short simulation study we conducted, and its analytical characterization presents an interesting direction for future work. Proof of Proposition \[prop:MDS\]: The idea is simply to use the construction above, with $\tilde n=\frac{2 n\log\log\log n}{s}$. Because of the random entry selection step (the choice of $\cal E$) the conditions of Proposition \[prop:DS\] might not always be satisfied.
However this happens with very low probability. Define $\tilde x\in\R^{\tilde n}$ where $\tilde x_i=x_{E_i}$ for $i=1,\ldots,\tilde n$. Suppose ${{\boldsymbol{x}}}$ has $s$ non-zero components, and let $\tilde s$ be the number of non-zero components of $\tilde x$. Because of the sampling without replacement process, $\tilde s$ is a hypergeometric random variable with mean $$\E[\tilde s]=\tilde n \frac{s}{n} = 2\log\log\log n\ ,$$ and variance $$\V(\tilde s)=\tilde n \frac{s}{n} \left(1-\frac{s}{n}\right)\frac{n-\tilde n}{n-1}\leq \tilde n\frac{s}{n}=2\log\log\log n\ .$$ This means that $$\begin{aligned} \P(\tilde s < \log\log\log n) &=& \P(\tilde s -\E[\tilde s]< \log\log\log n-\E[\tilde s])\\ &=& \P(\tilde s -\E[\tilde s]< -\log\log\log n)\\ &\leq& \P(|\tilde s -\E[\tilde s]|\geq \log\log\log n)\\ &\leq& \frac{\V(\tilde s)}{(\log\log\log n)^2}\\ &\leq& \frac{2}{\log\log\log n}\end{aligned}$$ where we used Chebyshev’s inequality on the second-to-last step. This means that, with probability at least $1-2/\log\log\log n$ the conditions of Proposition \[prop:DS\] are fulfilled. For convenience define the event $\Omega=\{\tilde s \geq \log\log\log n\}$. Since the detection risk is always bounded by 2 we have $$R(\hat\Phi)\leq 2 \frac{2}{\log\log\log n} + R(\hat\Phi|\Omega)\ ,$$ therefore it suffices to show that, conditionally on $\Omega$, the risk of our procedure vanishes asymptotically. From Proposition \[prop:DS\] we know that if $\mu>4\sqrt{\tilde n/m}$ the detection risk converges to zero, which with our choice of $\tilde n$ amounts to $$\mu>4\sqrt{\frac{2n\log\log\log n}{sm}}=\sqrt{\frac{32n\log\log\log n}{sm}}\ ,$$ concluding the proof. Signal Estimation ================= In this section we consider the signal estimation problem, where the goal is to identify the support of the underlying signal ${{\boldsymbol{x}}}$ as accurately as possible. As in the detection case, we are interested in characterizing the minimum signal amplitude $x_{\min}$ for which estimation is still possible.
Clearly estimation is statistically more “difficult” than signal detection, and therefore the requirements on $x_{\min}$ are more stringent in this case. Nevertheless we show that the dependence on the extrinsic dimension $n$ does not play a role in the asymptotic performance bounds. For the same reasons as in the previous section we focus our attention on the signal model in \[eqn:simple_model\]. Our main goal is the estimation of the signal support set $S=\{i:x_i\neq 0\}$. In other words, our goal is to use adaptive sensing observations to construct an estimate $\hat S$ which is “close” to $S$. The metric of interest is the cardinality of the symmetric set difference $$d(\hat S,S)=|\hat S\Delta S|=|(\hat S\cap S^c)\cup(\hat S^c\cap S)| \ ,$$ where $S^c$ denotes the complement of $S$ in $\{1,\ldots,n\}$. Clearly $d(\hat S,S)$ is just the number of errors in the estimate $\hat S$. In a similar spirit to that of the previous section, we want to determine how small the signal magnitude $\mu$ can be so that $$\max_{S\in\C} \E_S[d(\hat S,S)]\leq \epsilon\ ,\label{eqn:risk_numerrors}$$ where $\C$ is a class of sets, and $\epsilon>0$ is small. A different error metric which is also popular in the literature is $\P_S(\hat S\neq S)$, that is, the probability that one does not achieve exact support estimation. Clearly $$\P_S(\hat S\neq S)\leq \E_S[d(\hat S,S)]\ ,$$ and therefore this is a less stringent metric. The tools developed in this paper pertain to $\E_S[d(\hat S,S)]$ and it is not clear if adaptive sensing lower bounds for $\P_S(\hat S\neq S)$ can be derived easily using a similar approach. In addition, we will also consider a different support estimation risk function. Define the *False Discovery Rate* (FDR) and the *Non-Discovery Rate* (NDR) as $$\text{FDR}(\hat S,S)=\E_S\left[\frac{|\hat S \setminus S|}{|\hat S|}\right]$$ and $$\text{NDR}(\hat S,S)=\E_S\left[\frac{|S \setminus \hat S|}{|S|}\right]\ .$$ In the above definitions we adopt the convention $0/0=0$.
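For a single realization of $\hat S$ the quantities inside these definitions are simple set computations. The sketch below (with made-up sets) computes $d(\hat S,S)$ and the per-realization false/non-discovery proportions whose expectations give the FDR and NDR above.

```python
def sym_diff_errors(S_hat, S):
    """d(S_hat, S) = |S_hat symmetric-difference S|: number of support errors."""
    return len(S_hat ^ S)

def fdr(S_hat, S):
    # per-realization false discovery proportion, with the convention 0/0 = 0
    return len(S_hat - S) / len(S_hat) if S_hat else 0.0

def ndr(S_hat, S):
    # per-realization non-discovery proportion, with the convention 0/0 = 0
    return len(S - S_hat) / len(S) if S else 0.0

S = {1, 4, 7}
S_hat = {1, 4, 9}       # one miss (entry 7) and one false discovery (entry 9)

assert sym_diff_errors(S_hat, S) == 2
assert abs(fdr(S_hat, S) - 1/3) < 1e-12
assert abs(ndr(S_hat, S) - 1/3) < 1e-12
```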
Ideally we want both these quantities to be as small as possible, and so we can naturally define the risk $$R_\text{FDR+NDR}(\hat S,S)=\max_{S\in\C} \left\{\text{FDR}(\hat S,S)+\text{NDR}(\hat S,S)\right\}\ .$$ Obviously $\E_S[d(\hat S,S)]\geq \text{FDR}(\hat S,S)+\text{NDR}(\hat S,S)$, and these two measures of error can be dramatically different; controlling the risk $R_\text{FDR+NDR}(\hat S,S)$ is therefore significantly easier than controlling the absolute number of errors. Our original goal is to study lower bounds for the class $\C$ of all subsets of $\{1,\ldots,n\}$ with cardinality $s$. For technical reasons this is a bit challenging, and to greatly simplify the analysis we consider a different setting that nonetheless captures the essence of the problem. Let $\C'$ denote the class consisting of sets of cardinality $s$, $s+1$ and $s-1$. This class is only “slightly” bigger than $\C$. We instead consider procedures that exhibit good performance when $S\in\C'$, that is, estimation procedures that are “very mildly” adaptive to unknown sparsity. Generalization of the results to other classes of sets shall be considered in future work and is beyond the scope of this paper. To aid in the presentation we introduce some new notation. Namely let $S_i=\1\{i\in S\}$. Similarly, for any estimator $\hat S$ let $\hat S_i=\1\{i\in \hat S\}$. Note that the joint description of $\hat S_i$ for all $i$ is equivalent to $\hat S$. For analysis purposes it is convenient to consider only *symmetric* procedures, meaning that for any $S\in\C'$ $$\forall i,j\in S \quad \P_S(\hat S_i \neq 1)=\P_S(\hat S_j \neq 1)\ ,\label{eqn:symmetry1}$$ and $$\forall i,j\notin S \quad \P_S(\hat S_i \neq 0)=\P_S(\hat S_j \neq 0)\ .\label{eqn:symmetry2}$$ Although this might seem overly restrictive, it is indeed not the case. Any inference procedure can be “symmetrized” without increasing its maximal risk.
In other words, given an estimator $\hat S$ we can construct another estimator $\hat S^{(\perm)}$ satisfying \[eqn:symmetry1\] and \[eqn:symmetry2\] and such that $$\E_S[d(\hat S^{(\perm)},S)] \leq \max_{S'\in\C'} \E_{S'}[d(\hat S,S')]\ ,$$ for all sets $S\in \C'$. The symmetrization is achieved by randomization. Let ${\perm:\{1,\ldots,n\}\rightarrow\{1,\ldots,n\}}$ be a permutation of $\{1,\ldots,n\}$ chosen uniformly at random among the set of $n!$ possible permutations. Let $\hat S$ be a particular estimator we are going to symmetrize. Proceed by exchanging the identity of the entries of ${{\boldsymbol{x}}}$ using this permutation, or equivalently by taking $A^{(\perm)}_k=A_{\perm^{-1}(k)}$ for all $k$, and use the estimator $\hat S$ on the collected data. Finally reverse the permutation, namely defining $\hat S^{(\text{perm})}_i=\hat S_{\perm(i)}$, for all $i\in\{1,\ldots,n\}$. Using this construction we get the following lemma, proved in the Appendix. \[lemma:symmetrization\] Let $\hat S$ be any adaptive sensing procedure. The random symmetrization approach described in the paragraph above yields another adaptive sensing procedure $\hat S^{(\text{perm})}$ such that, for any $S\in\C'$ $$\forall i\in S \quad \P(\hat S^{(\perm)}_i\neq1)=\frac{1}{|S|{n \choose |S|}}\sum_{S'\in\C':|S'|=|S|} \sum_{j\in S'} \P_{S'}(\hat S_j \neq 1)\ ,$$ and $$\forall i\notin S \quad \P(\hat S^{(\perm)}_i\neq 0)=\frac{1}{(n-|S|){n \choose |S|}} \sum_{S'\in\C':|S'|=|S|} \sum_{j\notin S'} \P_{S'}(\hat S_j \neq 0)\ .$$ In addition, the following is also true: $$\E_S[d(\hat S^{(\perm)},S)]\leq\frac{1}{{n \choose |S|}}\ \sum_{S'\in\C':|S'|=|S|} \E_{S'}[d(\hat S,S')] \ \leq \ \max_{S'\in\C':|S'|=|S|} \E_{S'}[d(\hat S,S')]\ .$$ This ensures that without loss of generality we can consider only symmetric procedures. It is important to note that this approach is valid only if the class $\C'$ is invariant under permutations.
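The relabel-and-undo mechanics of this symmetrization can be sketched as follows. The `oracle` stand-in for the base estimator and all signal values are hypothetical, and the sketch only checks that the relabeling round-trips correctly; it does not reproduce the paper's sensing-level construction.

```python
import random

random.seed(3)
n = 8
perm = list(range(n))
random.shuffle(perm)                    # a uniformly random permutation

def relabel(x):
    # present the base estimator with the relabeled signal x'_j = x[perm[j]]
    return [x[perm[j]] for j in range(n)]

def unpermute(S_hat_prime):
    # undo the relabeling on the returned support estimate:
    # index j in the relabeled problem corresponds to entry perm[j]
    return {perm[j] for j in S_hat_prime}

# hypothetical base estimator: here, an oracle that reads the support directly
oracle = lambda x: {j for j, v in enumerate(x) if v != 0}

x = [0, 0, 5, 0, 5, 0, 0, 5]
S = {2, 4, 7}
assert unpermute(oracle(relabel(x))) == S   # relabel/undo round-trips
```

Averaging over the random permutation is what equalizes the per-entry error probabilities across indices, which is exactly the symmetry demanded by the two displayed conditions.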
Finally, for symmetric procedures the lower bounds we derive are also applicable to measures of risk other than the worst-case expected distance, such as the *average estimation risk* $\frac{1}{|\C'|}\sum_{S'\in\C'} \E_{S'}[d(\hat S,S')]$. Main Results - Estimation ------------------------- \[thm:estimation\] Let $\C'$ denote the class of all subsets of $\{1,\ldots,n\}$ with cardinality $s$, $s+1$ and $s-1$. Let $\hat S\equiv \hat S(D)$ be an arbitrary adaptive sensing estimator, where $D=\{Y_i,A_i,\Gamma_i\}_{i=1}^{\infty}$. If $$\max_{S\in \C'}\ \E_S[d(\hat S,S)]\leq \epsilon\ ,$$ where $0<\epsilon<1$, then necessarily $$x_{\min} \geq \sqrt{\frac{2n}{m}\left(\log s + \log\frac{n-s}{n+1} + \log\frac{1}{2\epsilon}\right)}\ .$$ The proof of the theorem is presented at the end of this section. As before it is useful to look at the asymptotic behavior, and the case $s\ll n$ is particularly interesting. \[coro:estimation\] Consider the setting of Theorem \[thm:estimation\] and assume $s=o(n)$ as $n\rightarrow\infty$. Let $\hat S_n$ be an arbitrary estimation procedure for which $$\lim_{n\rightarrow \infty} \max_{S\in \C'} \E_S[d(\hat S_n,S)]=0\ .$$ Necessarily $$x_{\min} \geq \sqrt{2\frac{n}{m}\left(\log s+\omega_n\right)}\ ,$$ where $\omega_n$ is a sequence for which $\lim_{n\rightarrow\infty} \omega_n=\infty$. For the FDR+NDR risk we can use the same proof approach to obtain a much less restrictive bound on the signal magnitude. \[coro:estimation\_FDR\] Consider the setting of Theorem \[thm:estimation\] and assume $s=o(n)$. Let $\hat S_n$ be an arbitrary estimation procedure such that $$\lim_{n\rightarrow \infty} R_\text{FDR+NDR}(\hat S_n,S)=0\ .$$ Necessarily $$x_{\min} \geq \omega_n\sqrt{\frac{n}{m}}\ ,$$ where $\omega_n$ is a sequence for which $\lim_{n\rightarrow\infty} \omega_n=\infty$. A sketch of the proof of this corollary can be found in the Appendix. 
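To get a feel for the magnitudes involved in Theorem \[thm:estimation\], the lower bound can be evaluated numerically; the parameter values below are chosen purely for illustration.

```python
from math import log, sqrt

# Numerical evaluation of the estimation lower bound of
# Theorem [thm:estimation]; parameter values are illustrative.
def estimation_lower_bound(n, m, s, eps):
    return sqrt((2 * n / m) * (log(s) + log((n - s) / (n + 1))
                               + log(1 / (2 * eps))))

b = estimation_lower_bound(n=10**4, m=10**4, s=10, eps=0.1)
print(b)  # ≈ 2.80: even with m = n the signal must grow like sqrt(log s)
```

Note that for $s=o(n)$ the middle term $\log\frac{n-s}{n+1}$ is negligible, so the bound is driven by the $\log s$ term, in line with Corollary \[coro:estimation\].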
The proof follows a similar approach to that of Theorem \[thm:detection\], and capitalizes heavily on the symmetry of the estimation procedure. In light of Lemma \[lemma:symmetrization\] it suffices to consider symmetric procedures, that is, procedures that satisfy the two symmetry conditions above. Let $S\in\C'$ be arbitrary and assume that $$\E_S[d(\hat S,S)]\leq \epsilon\ ,$$ where $0<\epsilon<1$. Clearly $$\begin{aligned} \E_S[d(\hat S,S)] &=& \E_S\left[ \sum_{i=1}^n \1\{\hat S_i \neq S_i\}\right]\\ &=& \sum_{i\in S} \E_S\left[\1\{\hat S_i \neq 1\}\right] \ + \ \sum_{j\notin S} \E_S\left[\1\{\hat S_j \neq 0\} \right]\\ &=& \sum_{i\in S} \P_S\left(\hat S_i \neq 1\right) \ + \ \sum_{j\notin S} \P_S\left(\hat S_j \neq 0\right)\ .\end{aligned}$$ As we consider symmetric procedures we conclude that $$\forall i\in S \ \P_S(\hat S_i \neq 1)\leq \frac{\epsilon}{|S|}\ ,$$ and $$\forall i\notin S \ \P_S(\hat S_i \neq 0)\leq \frac{\epsilon}{n-|S|}\ .$$ For our purposes it is convenient to re-write the likelihood ratio as $$\begin{aligned} \LR_{S,S'}(d) &=& \frac{f(d;S)}{f(d;S')}\\ &=& \prod_{i=1}^n \prod_{k:a_k=i} \frac{f_{Y_k|A_k,\Gamma_k}(y_k|a_k,\gamma_k;S)}{f_{Y_k|A_k,\Gamma_k}(y_k|a_k,\gamma_k;S')}\ .\end{aligned}$$ Now let $S\in\C$ be an arbitrary set of cardinality $s$, and define $S^{(i)}\in\C'$ to be $$S^{(i)}=\left\{\begin{array}{ll} S\setminus\{i\} & \text{ if } i\in S\\ S\cup\{i\} & \text{ if } i\notin S \end{array} \right. \ ,$$ in words, we either remove element $i$ if $i\in S$, or add it otherwise, meaning that $S\Delta S^{(i)}=\{i\}$. We proceed in a similar way to the signal detection scenario. Let ${i\in\{1,\ldots,n\}}$ be arbitrary. 
We conclude that $$\forall i\in S\quad \E_S \left[ \log \LR_{S,S^{(i)}}\right] \geq -\log\left(2\P_S\left(\hat S_i \neq 1\right)+2\P_{S^{(i)}} \left(\hat S_i \neq 0 \right)\right)\ ,$$ and $$\forall i\notin S\quad \E_S \left[ \log \LR_{S,S^{(i)}}\right] \geq -\log\left(2\P_S\left(\hat S_i \neq 0 \right)+2\P_{S^{(i)}} \left(\hat S_i \neq 1\right)\right)\ .$$ We now take advantage of the symmetry of the estimator, to conclude that $$\forall i\in S\quad \E_S \left[ \log \LR_{S,S^{(i)}}\right] \geq -\log\left(\frac{2\epsilon}{s}+\frac{2\epsilon}{n-s+1}\right)\ ,\label{eqn:LR_lowerbound1}$$ and $$\forall i\notin S\quad \E_S \left[ \log \LR_{S,S^{(i)}}\right] \geq -\log\left(\frac{2\epsilon}{n-s}+\frac{2\epsilon}{s+1}\right)\ .\label{eqn:LR_lowerbound2}$$ Now that we have lower bounds for $\E_S \left[ \log \LR_{S,S^{(i)}}\right]$ we need to evaluate this quantity in terms of $\mu$. This is easily done by noting that for $i\in \{1,\ldots,n\}$ $$\begin{aligned} \E_S \left[ \log \LR_{S,S^{(i)}}\right] &=& \E_S \left[\sum_{k:A_k=i} \log \frac{f_{Y_k|A_k,\Gamma_k}(Y_k|A_k,\Gamma_k;S)}{f_{Y_k|A_k,\Gamma_k}(Y_k|A_k,\Gamma_k;S^{(i)})}\right]\\ &=& \E_S \left[\sum_{k:A_k=i} \frac{\mu^2}{2}\Gamma^2_k\right]\ .\end{aligned}$$ Note that we cannot yet evaluate the above expression, as one cannot directly invoke the sensing budget constraint. This can be addressed by summing each of the above terms over $i\in\{1,\ldots,n\}$. 
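The identity above is just the statement that each measurement of entry $i$ contributes the Kullback-Leibler divergence between the two Gaussian observation models, which for precision $\gamma^2$ equals $\mu^2\gamma^2/2$. This can be verified by direct numerical integration (an illustrative sketch; the parametrization $Y\sim{\cal N}(\gamma\mu,1)$ versus $Y\sim{\cal N}(0,1)$ is an assumed, equivalent form of a precision-$\gamma^2$ measurement):

```python
from math import exp, log, pi, sqrt

# Numerical check (illustration only): the KL divergence between
# N(gamma*mu, 1) and N(0, 1) should equal mu^2 * gamma^2 / 2, which
# is the per-measurement expected log-likelihood ratio used above.
def gaussian_pdf(y, mean):
    return exp(-0.5 * (y - mean) ** 2) / sqrt(2 * pi)

def kl_by_quadrature(mu, gamma, lo=-12.0, hi=14.0, steps=50000):
    h = (hi - lo) / steps
    total = 0.0
    for k in range(steps):
        y = lo + (k + 0.5) * h  # midpoint rule
        p = gaussian_pdf(y, gamma * mu)
        total += p * log(p / gaussian_pdf(y, 0.0)) * h
    return total

mu, gamma = 1.3, 0.7
kl = kl_by_quadrature(mu, gamma)
print(kl, mu**2 * gamma**2 / 2)  # both ≈ 0.414
```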
On one hand $$\sum_{i=1}^n \E_S \left[ \log \LR_{S,S^{(i)}}\right] = \E_S \left[\sum_{i=1}^n \sum_{k:A_k=i} \frac{\mu^2}{2}\Gamma^2_k\right] = \E_S \left[\sum_{k=1}^\infty \frac{\mu^2}{2}\Gamma^2_k\right] \leq \frac{m \mu^2}{2}\ .\label{eqn:upper_bound_estimation}$$ On the other hand $$\begin{aligned} \sum_{i=1}^n \E_S \left[ \log \LR_{S,S^{(i)}}\right] &=& \sum_{i\in S} \E_S \left[ \log \LR_{S,S^{(i)}}\right]+\sum_{i\notin S} \E_S \left[ \log \LR_{S,S^{(i)}}\right]\\ &\geq& -s \log\left(\frac{2\epsilon}{s}+\frac{2\epsilon}{n-s+1}\right)-(n-s)\log\left(\frac{2\epsilon}{n-s}+\frac{2\epsilon}{s+1}\right)\ .\end{aligned}$$ We can get a more insightful bound by reorganizing the various terms $$\begin{aligned} \lefteqn{\sum_{i=1}^n \E_S \left[ \log \LR_{S,S^{(i)}}\right] \geq n\log\frac{1}{2\epsilon}+s\log\frac{s(n-s+1)}{n+1}+(n-s)\log\frac{(n-s)(s+1)}{n+1}}\nonumber\\ &=& n\log\frac{1}{2\epsilon}+s\log s +(n-s)\log(s+1)+s\log\frac{n-s+1}{n+1}+(n-s)\log\frac{n-s}{n+1}\label{eqn:above}\\ &\geq& n\left(\log s+\log\frac{n-s}{n+1}+\log\frac{1}{2\epsilon}\right)\nonumber\ ,\end{aligned}$$ where the last inequality follows by noting that $\log(s+1)>\log s$ and $\log(n-s+1)>\log(n-s)$. Combining this lower bound with the upper bound $\frac{m\mu^2}{2}$ above concludes the proof. Tightness of the Estimation Lower Bounds {#sec:tightness_estimation} ---------------------------------------- As in the detection setting, the lower bounds derived for estimation are also tight, in the sense that there are inference procedures able to achieve them. In [@malloy:11; @malloy:11b] a slightly different problem was considered, where each measurement had the same accuracy/precision and one desired to control the total number of errors in $\hat S$. Their results were stated in terms of conditions on the signal magnitude $\mu$ that were necessary to ensure the risk converged to zero. In their setting there is no strict sensing budget, but instead only control over the expected precision budget used. 
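The chain of inequalities above can be double-checked numerically; the short sketch below verifies, for a few illustrative $(n,s,\epsilon)$ triples, that the reorganized expression indeed dominates the final bound:

```python
from math import log

# Numerical check of the chain of inequalities above: the sum of the
# two logarithmic terms should dominate the final simplified bound.
def lhs(n, s, e):
    return (-s * log(2 * e / s + 2 * e / (n - s + 1))
            - (n - s) * log(2 * e / (n - s) + 2 * e / (s + 1)))

def rhs(n, s, e):
    return n * (log(s) + log((n - s) / (n + 1)) + log(1 / (2 * e)))

for n, s, e in [(100, 5, 0.1), (1000, 30, 0.01), (10**5, 12, 0.25)]:
    assert lhs(n, s, e) >= rhs(n, s, e)
print("inequality holds on all test triples")
```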
In other words, the procedures in [@malloy:11; @malloy:11b] do not always satisfy the strict sensing budget constraint, but instead satisfy an *expected* sensing budget constraint $$\E\left[\sum_{k=1}^\infty \Gamma^2_k\right] \leq m\ .$$ Such methods can be modified to ensure that the sensing budget is fulfilled with increasingly high probability (as $n$ grows) without altering their asymptotic performance behavior, and we can state the following result, proved in the Appendix. \[prop:estimation\_upperbound\] Assume $s+1 \leq \frac{n}{(\log_2^2 n) - 3}$. Let $$\mu\geq\sqrt{\frac{4n}{m}(2\log (s+1)+5\log\log_2 n)}\ .$$ There is a sensing and estimation strategy yielding an estimator $\hat S$ such that $$\max_{S\in\C'} \E_S[d(\hat S,S)]\rightarrow 0\ ,$$ as $n\rightarrow\infty$. This means that provided $x_{\min}$ is of the order $\sqrt{(n/m)(\log(s) +\log\log n)}$ we can ensure exact recovery of a sufficiently sparse signal support with probability approaching 1. The constants in the above result are rather loose, and can be made much tighter (see [@malloy:11]). The $\log\log n$ term is an artifact of this method (which is parameter adaptive and agnostic about $s$). This term can be entirely avoided by considering another procedure, namely by executing in parallel $n$ properly calibrated sequential likelihood ratio tests, which requires the knowledge of the sparsity level $s$. Such a procedure achieves precisely the bound in Corollary \[coro:estimation\]. Lower bounds for estimation have been derived under a different set of assumptions for the class of entry-wise sequential tests in [@malloy:11b]. In contrast the results in the current paper pertain to any adaptive sensing procedure (and not only entry-wise testing procedures). 
Control of the FDR+NDR risk was considered in [@haupt:10] in the exact setting described in this paper, and the distilled sensing procedure proposed there is able to achieve the bound in Corollary \[coro:estimation\_FDR\] provided $\log\log\log n < s \leq n^{1-\beta}$ for some $0<\beta<1$. Therefore the lower bounds on the FDR+NDR risk are also tight for a wide range of sparsity levels. Relation to Compressed Sensing ------------------------------ The proof technique used in Theorem \[thm:estimation\] also provides some important insights for the problem of adaptive compressive sensing. This setting is different from the one considered so far and the observation model is now of the form $${{\boldsymbol{Y}}}={{\boldsymbol{A}}}{{\boldsymbol{x}}}+{{\boldsymbol{W}}}\ ,$$ where ${{\boldsymbol{Y}}}\in\R^l$ denotes the observations, ${{\boldsymbol{A}}}\in\R^{l\times n}$ is the design/sensing matrix, ${{\boldsymbol{x}}}\in\R^n$ is the unknown signal, and ${{\boldsymbol{W}}}\in\R^l$ is Gaussian with zero mean and identity covariance matrix. The rows of ${{\boldsymbol{A}}}$ can be designed sequentially, and the $k^{th}$ row (denoted by ${{\boldsymbol{A}}}_{k\cdot}$) can depend explicitly on $\{Y_j,{{\boldsymbol{A}}}_{j\cdot}\}_{j=1}^{k-1}$. Note that $W_k$ is a normal random variable independent of $\{Y_j,{{\boldsymbol{A}}}_{j\cdot},W_j\}_{j=1}^{k-1}$ and also independent of ${{\boldsymbol{A}}}_{k\cdot}$. This setting is particularly interesting when we impose some constraints on ${{\boldsymbol{A}}}$, namely $$\E\left[\|{{\boldsymbol{A}}}\|_F^2\right] \leq m\ ,$$ where $\|\cdot\|_F$ is the Frobenius matrix norm. As before, this sensing budget condition is very natural and the issue of noise is irrelevant without it. Each row ${{\boldsymbol{A}}}_{k\cdot}$ plays the role of the sensing action $A_k$ in our original scenario, and $\|{{\boldsymbol{A}}}_{k\cdot}\|_2^2$ plays the role of the precision parameter $\Gamma^2_k$ in the original model. 
As before, we do not impose any restrictions on the total number of measurements $l$, which can be potentially infinite. We can show the following result using an approach similar to that of Theorem \[thm:estimation\]. \[prop:CS\_lowerbound\] Consider the adaptive compressed sensing setting as described above, with observations ${{\boldsymbol{Y}}}={{\boldsymbol{A}}}{{\boldsymbol{x}}}+{{\boldsymbol{W}}}$, where ${{\boldsymbol{W}}}$ is Gaussian zero mean with identity covariance matrix and $\E\left[\|{{\boldsymbol{A}}}\|_F^2\right] \leq m$. Let ${\cal H}(\mu)\subset\R^n$ be the class of all vectors ${{\boldsymbol{x}}}$ with support in $\C'$ (i.e. the support[^2] has cardinality $s$, $s+1$ or $s-1$) and the magnitude of the minimum non-zero entries greater than or equal to $\mu$. That is $${\cal H}(\mu)=\{{{\boldsymbol{x}}}\in\R^n: {\operatorname{supp}}({{\boldsymbol{x}}})\in\C' \text{ and } \min_i\{|x_i|:x_i\neq 0\}\geq \mu\}\ .$$ Let $D=\{{{\boldsymbol{Y}}},{{\boldsymbol{A}}}\}$ and $\hat S(D)$ be an arbitrary estimator. If $$\max_{{{\boldsymbol{x}}}\in{\cal H}(\mu)}\ \E_{{{\boldsymbol{x}}}}[d(\hat S,S)]\leq \epsilon\label{eqn:CS_condition}$$ where $0<\epsilon<1$, then necessarily $$\mu \geq \sqrt{\frac{2n}{m}\left(\log s + \log\frac{n-s}{n+1} + \log\frac{1}{2\epsilon}\right)}\ .$$ The proof of the proposition can be found in the Appendix. In [@arias-castro:11] the authors derived lower bounds for both support recovery and mean square error risk for adaptive compressive sensing. In their setting $l=m$, and each row of the matrix ${{\boldsymbol{A}}}$ has expected norm at most 1. These two constraints imply the Frobenius norm constraint in Proposition \[prop:CS\_lowerbound\]. Theorem 2 in that paper states that the minimum signal amplitude $x_{\min}$ must be greater than $\sqrt{n/m}$ to ensure that support recovery is possible within the class of all possible $s$-sparse signals. In contrast, our result shows that this lower bound is not entirely tight. 
Formally, if $s=o(n)$ and $$\lim_{n\rightarrow \infty} \max_{S\in \C'} \E_S[d(\hat S_n,S)]=0$$ we have necessarily $$x_{\min} \geq \sqrt{2\frac{n}{m}\left(\log s +\omega_n\right)}\ ,$$ as $n\rightarrow\infty$. So, the above result improves the bound in [@arias-castro:11] by a $\log s$ factor. In light of the recent results in [@haupt:12] it seems plausible that this is a necessary and sufficient term. However, a precise characterization of these limits remains an open problem. Conclusion ========== In this paper we presented several lower bounds for detection and estimation of sparse signals using adaptive sensing. These results bridge a gap in our understanding of adaptive sensing and show that methodologies recently proposed in the literature are nearly optimal. A very interesting insight is that, for signal detection, the sparsity structure is essentially irrelevant. The intuition is that for detection it suffices to identify one non-zero component, and cues provided by the structure are not too useful under adaptive sensing scenarios. However, for signal estimation it is not clear if structure helps, which raises many interesting directions for future research. Acknowledgements {#acknowledgements .unnumbered} ================ The author wants to thank Nikhil Bansal for suggesting the elegant proof of Lemma \[lemma:optimization\]. Also, the modification of DS proposed in Section \[sec:tightness\_detection\] came into being after lengthy discussions with Jarvis Haupt. Finally the author wants to thank the two anonymous referees and the associate editor for their valuable comments and suggestions towards an improvement of the content and presentation. Appendix {#appendix .unnumbered} ======== Proof of Lemma \[lemma:optimization\]: We begin by proving the first result. Let $$b'_i=\left\{\begin{array}{ll} m/|\Xi| & \text{ if } i\in\Xi\\ 0 & \text{ otherwise } \end{array}\right. 
\quad i=1,\ldots,n \ .$$ Begin by noticing that $$\sup_{{{\boldsymbol{b}}}\in\R_0^+: \sum_{i=1}^n b_i=m} \min_{S\in \C} \sum_{i\in S} b_i \geq \min_{S\in \C} \sum_{i\in S} b'_i=\frac{ms}{|\Xi|}\ .$$ The proof proceeds by contradiction, and makes use of a probabilistic argument. Suppose there is a vector ${{\boldsymbol{b}}}^*\in\R_0^+$ such that $\sum_{i=1}^n b^*_i\leq m$ and $$\min_{S\in \C} \sum_{i\in S} b^*_i>\frac{ms}{|\Xi|}\ .\label{eqn:contradiction}$$ We show next that this is in contradiction with the symmetry assumption. Let $J$ be a uniform random variable with range $\Xi$. Then $$\E[b^*_J]=\frac{1}{|\Xi|}\sum_{j\in\Xi} b^*_j\leq\frac{1}{|\Xi|}\sum_{j=1}^n b^*_j \leq \frac{m}{|\Xi|}\ .\label{eqn:E_J}$$ Now construct another random variable $K$ in a hierarchical fashion: first take $S$ drawn uniformly over $\C$, and given $S$ take $K$ drawn uniformly over $S$. Then clearly $$\begin{aligned} \E[b^*_K] &=& \E\left[\E\left.\left[ b^*_K \right| S \right]\right]\nonumber\\ &=& \E\left[\frac{1}{s}\sum_{k\in S} b^*_k \right]\nonumber\\ &\geq& \E\left[\min_{S\in\C} \frac{1}{s}\sum_{k\in S} b^*_k \right]\nonumber\\ &=& \frac{1}{s} \E\left[\min_{S\in\C} \sum_{k\in S} b^*_k \right]\nonumber\\ &>& \frac{m}{|\Xi|}\ ,\label{eqn:E_K}\end{aligned}$$ where the strict inequality follows from the contradiction hypothesis. To conclude the proof we just need to notice that $J$ and $K$ have exactly the same distribution if the class $\C$ is symmetric. Let $k\in\Xi$ be arbitrary. Then $$\begin{aligned} \P(K=k) &=& \E[\1\{K=k\}]\\ &=& \E[\E[\1\{K=k\} | S ]]\\ &=& \E\left[\frac{1}{s} \1\{k\in S\} \right]\\ &=& \frac{1}{s} \P(k\in S)\\ &=& \frac{1}{s} \frac{s}{|\Xi|}=\frac{1}{|\Xi|}\ .\end{aligned}$$ Therefore both $J$ and $K$ are uniformly distributed over $\Xi$ and so $\E[b^*_J]=\E[b^*_K]$. This creates a contradiction between the two bounds above, invalidating the existence of the vector ${{\boldsymbol{b}}}^*$ and concluding the proof. 
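The key distributional fact used above — that the hierarchical draw $K$ is uniform over $\Xi$ whenever the class is symmetric — can be confirmed by brute-force enumeration on a small symmetric class (here, all $2$-subsets of a $4$-element $\Xi$, an assumed toy example):

```python
from fractions import Fraction
from itertools import combinations

# Enumerate the hierarchical draw: S uniform over the symmetric class
# C of all s-subsets of Xi, then K uniform over S. The marginal of K
# should come out uniform over Xi, as used in the proof.
Xi = range(4)
s = 2
C = list(combinations(Xi, s))  # symmetric class: all 2-subsets

pK = {k: Fraction(0) for k in Xi}
for S in C:
    for k in S:
        pK[k] += Fraction(1, len(C)) * Fraction(1, s)

print(pK)  # every element has probability 1/4 = 1/|Xi|
```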
For the second result note simply that $$\begin{aligned} \frac{1}{|\C|} \sum_{S\in\C} \sum_{i\in S} b_i &=& \frac{1}{|\C|} \sum_{S\in\C} \sum_{i=1}^n b_i \1\{i\in S\}\\ &=& \sum_{i=1}^n b_i \frac{1}{|\C|} \sum_{S\in\C} \1\{i\in S\}\\ &=& \sum_{i=1}^n b_i \frac{s}{|\Xi|}\ ,\end{aligned}$$ where the last step follows from the symmetry assumption. The result of the lemma is now immediate. : The first statement follows immediately from the simple fact that $R(\hat\Phi) \leq 2\tilde R(\hat\Phi)$: if $\tilde R(\hat\Phi)<\epsilon/2$ then $R(\hat\Phi)<\epsilon$, and we can just apply the result of the theorem. For the second statement it is useful to look at $S$ as a uniform random variable with range $\C$. In the proof of Theorem \[thm:detection\] we showed that, for any $S\in\C$ $$\E_\emptyset[\log \LR_{\emptyset,S}|S] \geq -\log\left(2\P_\emptyset(\hat\Phi\neq 0)+2\P_1(\hat\Phi\neq 1|S)\right) \ ,$$ where $\P_1$ denotes the probability measure under the alternative hypothesis. By taking the expectation on both sides we have $$\E_\emptyset[\log \LR_{\emptyset,S}] \geq -\frac{1}{|\C|} \sum_{S\in\C} \log\left(2\P_\emptyset(\hat\Phi\neq 0)+2\P_1(\hat\Phi\neq 1|S)\right) \ .$$ To simplify the notation let $p_0\equiv \P_\emptyset(\hat\Phi\neq 0)$ and $p_S\equiv\P_1(\hat\Phi\neq1|S)$. The statement $\bar R(\hat\Phi)\leq\epsilon$ is equivalent to $p_0+\frac{1}{|\C|}\sum_{S\in\C} p_S\leq \epsilon$. Accordingly define the constraint set ${\cal P}\subseteq \R^{1+|\C|}$ as $${\cal P}=\left\{p_0,\{p_S\}_{S\in\C}: p_0+\frac{1}{|\C|}\sum_{S\in\C} p_S\leq \epsilon\right\}\ .$$ We have that $$\begin{aligned} \E_\emptyset[\log \LR_{\emptyset,S}] &\geq& \min_{\cal P} \left\{-\frac{1}{|\C|} \sum_{S\in\C} \log\left(2p_0+2p_S\right)\right\}\\ &=& \log \frac{1}{2\epsilon}\ ,\end{aligned}$$ where the last step follows from a straightforward Lagrange multiplier argument, which shows that the minimum is attained by taking $p_0+p_S=\epsilon$ for all $S\in\C$. 
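The Lagrange-multiplier step can be sanity-checked by a crude grid search over the constraint set; the sketch below does this for a toy class with $|\C|=2$ (values are illustrative):

```python
from math import log

# Crude grid search over {p0, p1, p2 >= 0 : p0 + (p1 + p2)/2 <= eps}
# for the objective -(1/2)(log(2(p0+p1)) + log(2(p0+p2))); its minimum
# should be approximately log(1/(2*eps)), attained when p0 + pS = eps.
eps = 0.1
target = log(1 / (2 * eps))
grid = [i * eps / 60 for i in range(1, 61)]  # avoid log(0)
best = float("inf")
for p0 in grid:
    for p1 in grid:
        for p2 in grid:
            if p0 + (p1 + p2) / 2 <= eps:
                val = -0.5 * (log(2 * (p0 + p1)) + log(2 * (p0 + p2)))
                best = min(best, val)
print(best, target)  # both ≈ 1.61
```

The grid minimum sits slightly above the analytic value only because the discretization cannot land exactly on the constraint boundary.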
The next step, similar to the proof of Theorem \[thm:detection\], is to solve $\sup_{\cal A} \E_\emptyset[\log \LR_{\emptyset,S}]$, where it is important to recall that $S$ is random. Following the same approach as in the proof of the theorem yields $$\begin{aligned} \sup_{\cal A} \E_\emptyset[\log \LR_{\emptyset,S}] &=& \frac{\mu^2}{2} \sup_{{{\boldsymbol{b}}}\in\R_0^+: \sum_{i=1}^n b_i=m} \frac{1}{|\C|}\sum_{S\in\C} \sum_{i\in S} b_i \ ,\end{aligned}$$ where $b_i$ is defined as in that proof. The second result of Lemma \[lemma:optimization\] characterizes the solution of this optimization problem, and therefore $$\frac{\mu^2 m s}{2|\Xi|}\geq \log\frac{1}{2\epsilon}\ .$$ Simple algebraic manipulation concludes the proof. Proof of Lemma \[lemma:symmetrization\]: To ease the notation let $\C_s$ denote the class of all subsets of $\{1,\ldots,n\}$ with cardinality $s$. Let $S\in\C_s$ and $i\in S$ be fixed, but arbitrary. Note that the permutation $\perm$ maps this set to another set $S^{(\perm)}=\perm(S)\in\C_s$ with the same cardinality. Furthermore, since the permutation is chosen uniformly over the set of all permutations this set is uniformly distributed over $\C_s$, that is $$S^{(\perm)}\sim\text{Unif}(\C_s)\ .$$ In addition define the random variable $J=\perm(i)$. This is obviously uniformly distributed over $\{1,\ldots,n\}$. More importantly, conditionally on $S^{(\perm)}$, $J$ is uniformly distributed over the set $S^{(\perm)}$. In other words, for arbitrary $k\in\{1,\ldots,n\}$ $$\begin{aligned} \P(J=k|S^{(\perm)}) &=& \P(\perm(i)=k|S^{(\perm)})\\ &=& \P(\perm^{-1}(k)=i|S^{(\perm)})\\ &=& \left\{\begin{array}{ll} 1/s & \text{ if } k\in S^{(\perm)}\\ 0 & \text{ otherwise } \end{array}\right. \ .\end{aligned}$$ Therefore $$\begin{aligned} \P(\hat S^{(\perm)}_i\neq1) &=& \E\left[\1\{\hat S_{\perm(i)}\neq 1\}\right]\\ &=& \E\left[\E\left[\left. 
\1\{ \hat S_{\perm(i)}\neq 1\}\right|S^{(\perm)}\right]\right]\\ &=& \E\left[\frac{1}{s}\sum_{j\in S^{(\perm)}} \P_{S^{(\perm)}} (\hat S_j\neq 1)\right]\\ &=& \frac{1}{|\C_s|}\sum_{S'\in\C_s} \frac{1}{s}\sum_{j\in S'} \P_{S'} (\hat S_j\neq 1) \ ,\end{aligned}$$ where the last two steps follow from the distribution of $S^{(\perm)}$ and $\perm(i)$. The proof of the lemma statement for $i\notin S$ is entirely analogous. Finally, the last result in the lemma follows trivially from the other two statements. Proof of Corollary \[coro:estimation\_FDR\]: The result in the corollary follows in the same manner as the result in Theorem \[thm:estimation\], but noticing that for symmetric estimation procedures the requirements on the estimator $\hat S_i$ for each $i\in\{1,\ldots,n\}$ are much less stringent. In particular let $S\in\C'$ be arbitrary and assume that $$R_\text{FDR+NDR}(\hat S,S) \leq \epsilon\ ,$$ where $\epsilon>0$, which implies that both FDR and NDR are at most $\epsilon$. Now consider symmetric procedures and let $\alpha=P(\hat S_i\neq 0)$ for $i\notin S$ and $\beta=P(\hat S_i\neq 1)$ for $i\in S$. Clearly, the constraint on NDR implies that $$\epsilon\geq \text{NDR}(\hat S,S)=\E\left[\frac{|S \setminus \hat S|}{|S|}\right]=\frac{|S|\beta}{|S|}=\beta\ .$$ The constraint on FDR is a bit more difficult to analyze, due to the random denominator in its definition. However, a very sloppy bound suffices, namely $$\epsilon\geq \text{FDR}(\hat S,S) = \E\left[\frac{|\hat S\setminus S|}{|\hat S|}\right] \geq \E\left[\frac{|\hat S\cap S^c|}{n}\right] = \frac{(n-|S|)\alpha}{n}\ .$$ Therefore we conclude that $\alpha\leq \frac{n}{n-|S|}\epsilon$. Note that this is a very loose but nevertheless sufficient bound. The rest of the proof proceeds now in the same fashion as Theorem \[thm:estimation\] and Corollary \[coro:estimation\]. Proof of Proposition \[prop:CS\_lowerbound\]: The proof of this result mimics closely the proof of Theorem \[thm:estimation\], with the necessary changes to account for the different sensing model. 
The first step is to reduce the class of signals under consideration. Clearly signals whose non-zero entries are all equal to $\mu$ are also in the class ${\cal H}(\mu)$. Therefore $$\max_{{{\boldsymbol{x}}}\in{\cal H}(\mu)}\ \E_{{{\boldsymbol{x}}}}[d(\hat S,S)]\geq \max_{S\in\C'}\ \E_S[d(\hat S,S)]\ ,$$ where the expectation on the right-hand-side is taken assuming ${{\boldsymbol{x}}}$ is of this form with support $S$. The condition in the proposition therefore implies that $$\max_{S\in\C'}\ \E_S[d(\hat S,S)]\leq \epsilon\ ,$$ so, for the purpose of computing a lower bound it suffices to consider only the signals where all the non-zero components are valued $\mu$. It is important to note that this subclass of signals might not correspond to the “hardest” signals to estimate, and no claim is made about this. However, this subclass seems to capture the essential aspects of the problem in light of the bounds derived. As the class of signals under consideration is the same as in Theorem \[thm:estimation\] the only change in that proof stems from the different observation model, which in turn results in a different log-likelihood ratio. Notice that, as before, we can consider only symmetric procedures in the sense of Lemma \[lemma:symmetrization\]. To aid in the presentation let $A_{ij}$ denote the entry in the $i$th row and $j$th column of the matrix ${{\boldsymbol{A}}}$, and let ${{\boldsymbol{A}}}_{i\cdot}$ and ${{\boldsymbol{A}}}_{\cdot j}$ denote respectively the $i$th row and the $j$th column of ${{\boldsymbol{A}}}$. 
The log-likelihood ratio is therefore given by $$\begin{aligned} \lefteqn{\log \LR_{S,S'}({{\boldsymbol{Y}}},{{\boldsymbol{A}}}) = \log \frac{f({{\boldsymbol{Y}}},{{\boldsymbol{A}}};S)}{f({{\boldsymbol{Y}}},{{\boldsymbol{A}}};S')}}\\ &=& \sum_{k=1}^\ell \log \frac{f_{Y_k|{{\boldsymbol{A}}}_{k\cdot}}(Y_k|{{\boldsymbol{A}}}_{k\cdot};S)}{f_{Y_k|{{\boldsymbol{A}}}_{k\cdot}}(Y_k|{{\boldsymbol{A}}}_{k\cdot};S')}\\ &=& \frac{1}{2}\sum_{k=1}^\ell \left[ \left(Y_k-\mu\sum_{j\in S'} A_{kj}\right)^2-\left(Y_k-\mu\sum_{j\in S} A_{kj}\right)^2 \right]\ .\end{aligned}$$ Given this, the expected log-likelihood ratio can be computed quite easily as before, and we get $$\E_S\left[\log \LR_{S,S'}({{\boldsymbol{Y}}},{{\boldsymbol{A}}})\right] = \frac{\mu^2}{2} \sum_{k=1}^\ell \E_S\left[\left(\left(\sum_{j\in S} A_{kj}\right)-\left(\sum_{j\in S'} A_{kj}\right)\right)^2\right] \ .\label{eqn:CS_LR}$$ Now consider the sets $S^{(i)}$ as in the proof of Theorem \[thm:estimation\]. Since we have $S\Delta S^{(i)}=\{i\}$ we get from the expression above $$\begin{aligned} \E_S\left[\log \LR_{S,S^{(i)}}({{\boldsymbol{Y}}},{{\boldsymbol{A}}})\right] &=& \frac{\mu^2}{2} \E\left[\sum_{k=1}^\ell A_{ki}^2\right]\nonumber\\ &=& \frac{\mu^2}{2} \E\left[\|{{\boldsymbol{A}}}_{\cdot i}\|_2^2\right]\ .\label{eqn:CS_LR_i}\end{aligned}$$ From this point on the proof proceeds in exactly the same fashion as that of Theorem \[thm:estimation\]. Begin by summing the terms over $i\in\{1,\ldots,n\}$ to get an upper bound on the expected likelihood ratio $$\sum_{i=1}^n \E_S \left[ \log \LR_{S,S^{(i)}}\right] = \frac{\mu^2}{2} \E\left[\sum_{i=1}^n \|{{\boldsymbol{A}}}_{\cdot i}\|_2^2 \right] = \frac{\mu^2}{2} \E\left[\|{{\boldsymbol{A}}}\|_F^2 \right] \leq \frac{m \mu^2}{2}\ .\label{eqn:CS_upper_bound}$$ Finally, the lower bounds on the log-likelihood ratio derived in the proof of Theorem \[thm:estimation\] are not dependent on the nature of the likelihood ratio itself, but rather on the desired risk performance. 
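The only probabilistic ingredient in the step leading to Equation \[eqn:CS\_LR\_i\] is that the noise is zero mean, so the identity can be checked exactly by averaging over any zero-mean noise; the sketch below uses noise signs $\pm 1$ and a small fixed matrix (all values are illustrative):

```python
from itertools import product

# Exact check of E_S[log LR_{S,S^(i)}] = (mu^2/2) * sum_k A_ki^2 for
# a fixed sensing matrix; only E[W_k] = 0 is used, so averaging over
# W_k in {-1, +1} gives the same identity as Gaussian noise.
mu = 0.8
A = [[1.0, 0.5, -0.3],
     [0.2, -1.0, 0.7]]          # l = 2 rows, n = 3 columns
S, i = {0, 2}, 2                 # S^(i) = S \ {i} = {0}
Si = S - {i}

def row_sum(row, idx):
    return sum(row[j] for j in idx)

avg = 0.0
patterns = list(product([-1.0, 1.0], repeat=len(A)))
for w in patterns:
    llr = 0.0
    for k, row in enumerate(A):
        y = mu * row_sum(row, S) + w[k]       # data generated under S
        llr += 0.5 * ((y - mu * row_sum(row, Si)) ** 2
                      - (y - mu * row_sum(row, S)) ** 2)
    avg += llr / len(patterns)

col_energy = sum(row[i] ** 2 for row in A)
print(avg, (mu ** 2 / 2) * col_energy)  # both ≈ 0.1856
```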
So these bounds are valid in the compressed sensing setting as well. As in the proof of Theorem \[thm:estimation\], using these lower bounds together with the upper bound above concludes the proof. Proof of Proposition \[prop:estimation\_upperbound\]: We begin by introducing an algorithm that achieves the desired performance bound. Algorithm \[alg:SDS\] is described here for convenience of presentation and explained in detail in the next paragraphs. It is essentially the algorithm presented in [@malloy:11b] for the case of Gaussian observation noise. (Algorithm \[alg:SDS\], initialization: $k\leftarrow 0$, $i \leftarrow 1$, $\hat S\leftarrow \emptyset$; $c_i\leftarrow 0$ for $i=1,\ldots,n$; $\Gamma^2_j\leftarrow p$ for $j=1,2,\ldots$. The main loop is described in the text below.) Sensing is performed coordinate-wise in a sequential way, until all the signal entries have been explored or the total sensing budget is exhausted. Note that all the measurements are made with the same precision $p$. For each signal entry $i$ the algorithm performs at most $l$ measurements. If any of these measurements is negative then entry $i$ is deemed not to belong to the support estimate $\hat S$. If all the $l$ measurements are non-negative then entry $i$ is deemed to belong to the support estimate. For convenience we identify the measurements of entry $i$ by $Y_i^{(j)}$, where $j\in\{1,\ldots,l\}$. In a sense the algorithm is a very crude version of a sequential likelihood ratio test. Given that we are interested in the general rates of error decay we do not optimize the algorithm parameters for performance and instead make crude choices that are sufficient to prove the result. In particular we take $p=m/(4n)$ and $l=\log_2^2 n$. The proof goes by showing first that, with high probability, the algorithm terminates before reaching the total sensing budget. Therefore, for the analysis we consider a modification of the algorithm where termination upon the event $p(k+1)>m$ is removed. Note that the number of measurements collected for entry $i$ is simply $c_i$. These are independent random variables. 
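For concreteness, the entry-wise procedure just described can be sketched in code; the parameter choices below ($p$, $l$, the signal values) are illustrative and not those used in the analysis, and the measurement model $Y=\sqrt{p}\,x_i+W$ is an assumed parametrization of a precision-$p$ observation:

```python
import random

# Illustrative sketch of the entry-wise sequential procedure: each
# entry is measured with fixed precision p, at most l times; a single
# negative measurement discards the entry, while surviving all l
# measurements places it in the support estimate.
def sequential_support_estimate(x, p, l, m, rng):
    S_hat, spent = set(), 0.0
    for i, xi in enumerate(x):
        keep = True
        for _ in range(l):
            spent += p
            if spent > m:                 # sensing budget exhausted
                return S_hat
            y = (p ** 0.5) * xi + rng.gauss(0.0, 1.0)
            if y < 0:
                keep = False
                break
        if keep:
            S_hat.add(i)
    return S_hat

rng = random.Random(7)
n, S = 200, {3, 50, 120, 199}
x = [10.0 if i in S else 0.0 for i in range(n)]
S_hat = sequential_support_estimate(x, p=1.0, l=30, m=10**6, rng=rng)
print(sorted(S_hat))  # with overwhelming probability S_hat == S
```

With this strong signal the miss probability per true entry is at most $l\,\Phi(-\mu\sqrt{p})$ and the false-positive probability per null entry is $2^{-l}$, matching the error terms in the analysis below.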
The total number of measurements collected is $\sum_{i=1}^n c_i$. Note that for all $i$ we have $0\leq c_i\leq l$. Furthermore, note that for $i\notin S$ the corresponding measurements $Y_i^{(j)}$ are zero mean normal random variables, which means that $\P_S(Y_i^{(j)}<0)=1/2$. Therefore $c_i$ corresponds to a truncated geometric random variable: $$i\notin S,\qquad \P_S(c_i=x)=\left\{\begin{array}{ll} (1/2)^x & \text{ if } x=1,\ldots,l-1\\ (1/2)^{l-1} & \text{ if } x=l\\ 0 & \text{ otherwise} \end{array}\right.\ .$$ Since these are truncated geometric random variables it is clear that $\E_S(c_i)\leq 2$ and $\V_S(c_i)\leq 2$. Now, Bernstein’s inequality (as stated in [@wasserman:06], page 9) tells us immediately that $$\P_S\left(\sum_{i\notin S} c_i -2(n-s) \geq t\right)\leq \exp\left(-\frac{1}{2}\frac{t^2}{2(n-s)+lt/3}\right)\ .$$ Taking $t=n-s$, and noting that $\sum_{i=1}^n c_i \leq sl+\sum_{i\notin S} c_i$ we conclude that $$\P_S\left(\sum_{i=1}^n c_i < 3(n-s) +sl\right)\geq 1-\exp\left(-\frac{1}{2}\frac{n-s}{2+l/3}\right)\ .$$ Now, provided $s\leq n/(l-3)$, we conclude that the total number of measurements of the algorithm is smaller than $4n$ with probability approaching 1 as $n$ grows, that is $$\P_S\left(\sum_{i=1}^n c_i < 4n\right)\geq 1-\exp\left(-\frac{1}{2}\frac{n-s}{2+l/3}\right)\ .$$ Therefore the total amount of precision used is under $4np$ with high probability, and for the choice $p=m/(4n)$ it is less than $m$ with high probability. In other words, $$\P_S(p(k+1)> m)\leq \exp\left(-\frac{1}{2}\frac{n-s}{2+l/3}\right)\ .\label{eqn:HP}$$ This result ensures the modified algorithm is essentially the same as the original one, as in the latter we will rarely encounter the event $p(k+1)> m$ (this statement will be made precise later). Therefore we can proceed by analyzing the performance of the modified algorithm. This can be done in an entry-wise fashion and we must consider the cases $i\in S$ and $i \notin S$. 
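The moment bounds $\E_S(c_i)\leq 2$ and $\V_S(c_i)\leq 2$ invoked before Bernstein's inequality can be verified exactly from the truncated-geometric pmf (a quick check, using exact rational arithmetic):

```python
from fractions import Fraction

# Exact moments of the truncated geometric law of c_i for i not in S:
# P(c = x) = (1/2)^x for x = 1,...,l-1 and (1/2)^(l-1) for x = l.
# The pmf should sum to one, with mean and variance both at most 2.
def truncated_geometric_moments(l):
    pmf = {x: Fraction(1, 2 ** x) for x in range(1, l)}
    pmf[l] = Fraction(1, 2 ** (l - 1))
    total = sum(pmf.values())
    mean = sum(x * p for x, p in pmf.items())
    var = sum(x * x * p for x, p in pmf.items()) - mean ** 2
    return total, mean, var

for l in (2, 5, 10, 25):
    total, mean, var = truncated_geometric_moments(l)
    assert total == 1 and mean <= 2 and var <= 2
print("pmf sums to one; mean and variance at most 2 for all tested l")
```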
For $i\notin S$ note that $$\P_S(i\in\hat S)=\P_S\left(\bigcap_{j=1}^l \{Y_i^{(j)}\geq 0\}\right)=\frac{1}{2^l}\ .$$ For $i\in S$ we have $$\P_S(i\notin\hat S)\leq\P_S\left(\bigcup_{j=1}^l \{Y_i^{(j)}<0\}\right)\leq\frac{l}{2}\exp\left(-\frac{p\mu^2}{2}\right)\ ,$$ where the result follows from a Gaussian tail bound and the union (of events) bound. These two results together give $$\begin{aligned} \E_S[d(\hat S,S)] &=& \sum_{i\notin S} \P_S(i\in\hat S) \ + \ \sum_{i\in S} \P_S(i\notin\hat S)\\ &\leq& \frac{n-s}{2^l} \ + \ \frac{sl}{2}\exp\left(-\frac{p\mu^2}{2}\right)\\ &\leq& \frac{n-s}{2^l} \ + \ \frac{1}{2}\exp\left(-\frac{p\mu^2-2\log s-2\log l}{2}\right)\ .\end{aligned}$$ Now, given the choice $l=\log^2_2 n$ we conclude that the first term in the above summation converges to 0 as $n\rightarrow\infty$, and the second term also converges to zero provided $$p\mu^2-2\log s-2\log l\rightarrow \infty$$ as $n\rightarrow\infty$. Clearly if $\mu\geq\sqrt{\frac{4n}{m}(2\log s+5\log\log_2 n)}$ this condition is satisfied. To conclude the proof all that remains to be done is to take Equation \[eqn:HP\] into account to conclude that, for the original algorithm $$\begin{aligned} \E_S[d(\hat S,S)] &\leq& \E_S[d(\hat S,S)|p(k+1)\leq m]+\E_S[d(\hat S,S)|p(k+1)> m]\P_S(p(k+1)> m)\\ &\leq& \E_S[d(\hat S,S)|p(k+1)\leq m]+n\P_S(p(k+1)> m)\\ &\leq& \frac{n-s}{2^l} \ + \ \frac{1}{2}\exp\left(-\frac{p\mu^2-2\log s-2\log l}{2}\right)+n\exp\left(-\frac{1}{2}\frac{n-s}{2+\log^2_2 n/3}\right)\ .\end{aligned}$$ Clearly, under the condition $s\leq n/(l-3)$ all the terms above converge to zero as $n\rightarrow\infty$, concluding the proof. [^1]: Due to statistical sufficiency there is no gain in measuring each signal entry more than once. [^2]: Define ${\operatorname{supp}}({{\boldsymbol{x}}})=\{i:x_i\neq 0\}$.
--- abstract: 'Core concepts in singular optics, especially the polarization singularity, have rapidly penetrated the surging fields of topological and nonhermitian photonics. For open photonic structures with degeneracies in particular, the polarization singularity would inevitably encounter another sweeping concept of Berry phase. Several investigations have discussed, in an inexplicit way, the connections between both concepts, hinting that nonzero topological charges for far-field polarizations on a loop are inextricably linked to its nontrivial Berry phase when degeneracies are enclosed. In this work, we reexamine the seminal photonic crystal slab that supports the fundamental two-level nonhermitian degeneracies. Despite the invariance of the nontrivial Berry phase for different loops enclosing both exceptional points, we demonstrate that the associated polarization fields exhibit topologically inequivalent patterns that are characterized by varying topological charges, including even the trivial scenario of zero charge. It is further revealed that for both bands, the seemingly complex evolutions of polarizations are bounded by the global charge conservation, with extra points of circular polarizations playing indispensable roles. This indicates that though not directly associated with any local charges, the invariant Berry phase is directly linked to the globally conserved charge, the physical principles underlying which have been further clarified by a modified Berry-Dennis model. Our work can potentially trigger an avalanche of studies to explore subtle interplays between Berry phase and all sorts of optical singularities, shedding new light on subjects beyond photonics that are related to both Berry phase and singularities.' 
author: - Weijin Chen - Qingdong Yang - Yuntian Chen - Wei Liu title: Evolution and global charge conservation for polarization singularities emerging from nonhermitian degeneracies --- Pioneered by Pancharatnam, Berry, Nye and others [@PANCHARATNAM_1955_ProcIndianAcadSci_propagation; @PANCHARATNAM_1956_ProcIndianAcadSci_Generalized; @BERRY_1984_Proc.R.Soc.A_Quantal; @berry_adiabatic_1987; @BERRY_2010_Nat.Phys._Geometric; @BERRY_1976_Adv.Phys._Waves; @NYE_1974_Proc.R.Soc.Lond.Math.Phys.Sci._Dislocations; @NYE_1983_ProcRSocA_Polarization; @NYE_1983_Proc.R.Soc.A_Lines; @BERRY_2001_SecondInt.Conf.Singul.Opt.Opt.VorticesFundam.Appl._Geometry], Berry phase and singularities have become embedded languages all across different branches of photonics. Optical Berry phase is largely manifested through either polarization evolving Pancharatnam-Berry phase or the spin-redirection Bortolotti-Rytov-Vladimirskii-Berry phase [@PANCHARATNAM_1956_ProcIndianAcadSci_Generalized; @berry_adiabatic_1987; @BORTOLOTTI_1926_RendRAccNazLinc_Memories; @RYTOV_1938_DoklAkadNaukSSSR_Transition; @VLADIMIRSKII_1941_DoklAkadNaukSSSR_rotation; @BERRY_2010_Nat.Phys._Geometric; @BLIOKH_2019_Rep.Prog.Phys._Geometric; @COHEN_2019_NatRevPhys_Geometric]; while optical singularities are widely observed as singularities of intensity (caustics) [@BERRY_1976_Adv.Phys._Waves], phase (vortices) [@NYE_1974_Proc.R.Soc.Lond.Math.Phys.Sci._Dislocations] or polarization [@NYE_1983_ProcRSocA_Polarization; @NYE_1983_Proc.R.Soc.A_Lines; @BERRY_2001_SecondInt.Conf.Singul.Opt.Opt.VorticesFundam.Appl._Geometry]. As singularities for complex vectorial waves, polarization singularities are skeletons of electromagnetic waves and are vitally important for understanding various interference effects underlying many applications [@DENNIS_2009_ProgressinOptics_Chapter; @GBUR_2016__Singular]. 
There is a superficial similarity between the aforementioned two concepts: both the topological charge of a polarization field (the Hopf index of a line field [@HOPF_2003__Differential]) and the Berry phase are defined on a closed circuit. In spite of this, it is quite unfortunate that almost no definite connections have been established between them in optics. This is fully understandable: the Berry phase is defined on the Pancharatnam connection (parallel transport) that decides the phase contrast between neighbouring states on the loop [@BERRY_1984_Proc.R.Soc.A_Quantal; @berry_adiabatic_1987]; while the polarization charge reflects the accumulated orientation rotations of polarization ellipses, which has no direct relevance to the overall phase of each state. This explains why in pioneering works where both concepts were present [@BERRY_CURRENTSCIENCE-BANGALORE-_pancharatnam_1994; @BERRY_2000_Nature_Making; @BERRY_2003_Proc.R.Soc.Lond.A_optical; @BERRY_2004_CzechoslovakJournalofPhysics_Physicsa; @BERRY_2007_ProgressinOptics_Chapter], their interplay was rarely elaborated. 
Spurred by studies into bound states in the continuum, polarization singularities have gained enormous renewed interest in open periodic photonic structures, manifested in different morphologies with both generic and higher-order half-integer charges [@HSU_Nat.Rev.Mater._bound_2016; @HSU_Nature_observation_2013-1; @ZHEN_2014_Phys.Rev.Lett._Topological; @YANG_2014_Phys.Rev.Lett._Analytical; @GUO_Phys.Rev.Lett._topologically_2017; @KODIGALA_Nature_lasing_2017; @BULGAKOV_2017_Phys.Rev.Lett._Topological; @DOELEMAN_2018_Nat.Photonics_Experimentala; @ZHOU_2018_Science_Observationa; @ZHANG_2018_Phys.Rev.Lett._Observation; @KOSHELEV_2018_Phys.Rev.Lett._Asymmetrica; @CHEN_2019__Singularities; @CHEN_2019_Phys.Rev.B_Observing; @LIU_2019_Phys.Rev.Lett._Circularly; @JIN_2019_Nature_Topologicallya; @GUO_2020_Phys.Rev.Lett._Meron; @YIN_2020_Nature_Observationa; @YE_2020_Phys.Rev.Lett._Singular; @CHEN_2019_ArXiv190409910Math-PhPhysicsphysics_Linea; @LIU_2019_Phys.Rev.Lett._High; @HUANG_2020_Science_Ultrafasta; @WANG_2019_ArXiv190912618Phys._Generating]. Simultaneously, the significance of the Berry phase has been further reinforced in the surging fields of topological and nonhermitian photonics [@Lu2014_topological; @OZAWA_2018_ArXiv180204173; @PANCHARATNAM_1955_ProcIndianAcadSci_propagation; @BERRY_CURRENTSCIENCE-BANGALORE-_pancharatnam_1994; @BERRY_2004_CzechoslovakJournalofPhysics_Physicsa; @FENG_2017_Nat.Photonics_NonHermitiana; @EL-GANAINY_2018_Nat.Phys._NonHermitian; @MIRI_2019_Science_Exceptionala]. In periodic structures involving band degeneracies, Berry phase and polarization singularity would inevitably meet, which has sparked the influential work on nonhermitian degeneracies [@ZHOU_2018_Science_Observationa] and several subsequent studies [@CHEN_2019_Phys.Rev.B_Observing; @GUO_2020_Phys.Rev.Lett._Meron; @YE_2020_Phys.Rev.Lett._Singular] that discuss both concepts simultaneously. 
Though not claimed explicitly, those works hint that nontrivial Berry phase produces nonzero polarization charge. Aiming to bridge Berry phase and polarization singularity, we reexamine the seminal photonic crystal slab (PCS) that supports elementary two-level nonhermitian degeneracies. Despite the invariance of nontrivial Berry phase, the corresponding polarization fields on different isofrequency contours enclosing both exceptional points (EPs) exhibit diverse patterns characterized by different polarization charges, including the trivial zero charge. It is further revealed that such complexity of field evolutions is regulated by global charge conservation for both bands, with extra points of circular polarizations (**C**-points) playing pivotal roles. This reveals the explicit connection between globally conserved charge and the invariant Berry phase, the physical mechanisms underlying which have been further clarified by a modified Berry-Dennis model [@BERRY_2003_Proc.R.Soc.Lond.A_optical]. Our study can spur further investigations in other subjects beyond photonics to explore conceptual interconnectedness, where both the concepts of Berry phase and singularities are present. For better comparisons, we revisit the rhombic-lattice PCS in Ref. [@ZHOU_2018_Science_Observationa]: refractive index $n$, side length $p$, height $h$ and tilting angle $\theta$; semi-major (minor) diameters are $l_1$ ($l_2$); the whole structure is placed in an air background of $n=1$ \[Fig. \[fig1\](a); parameter values shown in the figure caption\]. We have further defined $\vartheta={\vartriangle}l/l_2$ to characterize the mirror ($k_y$-$k_z$ plane)-symmetry breaking when air holes are partially filled. When $\vartheta=0$, dispersion bands (in terms of real parts of complex eigenfrequencies $\breve{\omega}=\breve{\omega}_1+i\breve{\omega}_2$ for the Bloch eigenmodes calculated with COMSOL Multiphysics) are presented in Fig. \[fig1\](b). 
Throughout this work, both frequency and wave vector are normalized: $\omega\rightarrow{\omega}p/2{\pi}c$ ($c$ is light speed); $\mathbf{k}\rightarrow\mathbf{k}p/2\pi$. Both the branch cut (Fermi arc) and the branch points (EPs) on the isofrequency plane (position information shown in figure captions, as is the case throughout this work) are observed \[also marked respectively in Fig. \[fig1\](c) by the black curve and dots\], confirming the existence of nonhermitian degeneracies. On the lower band, we have identified two **C**-points (marked by stars; the corresponding eigenmodes are circularly polarized in the far field) on the isofrequency plane. Polarization fields (line fields in terms of the semi-major axis of the polarization ellipses) are projected on the Bloch vector $k_x$-$k_y$ plane \[Fig. \[fig1\](c)\], with blue and red lines corresponding respectively to the eigenmodes on the lower and upper bands (fields exhibiting mirror symmetry as required by the structure symmetry). The representative eigenvalue-swapping feature is further confirmed in Fig. \[fig1\](b), where the polarization fields are continuous across the Fermi arc for opposite bands only [@BERRY_2003_Proc.R.Soc.Lond.A_optical]. ![(a) Unit cell of the rhombic-lattice PCS: index $n=1.384$, $p=525$ nm, $h=220$ nm, $l_1=348$ nm, $l_2=257$ nm, $\theta=114.5^{\circ}$ and $\vartheta={\vartriangle}l/l_2$. (b) Dispersion bands ($\vartheta=0$) with two EPs ($\breve{\omega}_1=0.961361$, $k_x=0.029525$, $k_y=\pm6.8\times10^{-4}$) and two **C**-points on the lower band ($\breve{\omega}_1=0.961347$, $k_x=0.029517$, $k_y=\pm6.8\times10^{-4}$). The polarization fields on a loop enclosing two **C**-points are shown in (d) with $q=-1$. 
(c) Polarization fields for both lower (blue) and upper (red) bands, and three isofrequency contours are selected ($\breve{\omega}_1=0.961368,0.961353,0.96133$), on which the polarization fields are summarized in (e)-(g), with $q=-1/2,~+1/2,~-1/2$, respectively. []{data-label="fig1"}](figure1){width="9cm"} The coexistence of two **C**-points on the same band with equal charge $q=-1/2$ (generic polarization singularities) is protected by the mirror symmetry, decorated by typical star-like field patterns [@BERRY_1977_J.Phys.A:Math.Gen._Umbilic]. On a contour that encloses the two **C**-points (without enclosing the EPs), the polarization fields are shown in Fig. \[fig1\](d) with the expected charge $q=(-1/2)\times2=-1$. Such a contour is not on an isofrequency plane and thus not quite feasible for direct experimental verifications. We then proceed to isofrequency contours that are characterized by an invariant $\pi$ Berry phase [@MAILYBAEV_2005_Phys.Rev.A_Geometric; @LEYKAM_2017_Phys.Rev.Lett._Edge; @SHEN_2018_Phys.Rev.Lett._Topological]. Since both **C**-points are located on the lower band and on the isofrequency plane: for the upper band, there is no **C**-point enclosed by the contour; for the lower band, the contour could enclose either zero or both **C**-points simultaneously. Polarization fields on three such contours \[one on the upper band (red dashed line) and two on the lower band (blue dashed lines)\] are summarized in Figs. \[fig1\](e)-(g), with $q=-1/2, +1/2, -1/2$, respectively. The charge contrast of $-1$ between the two contours on the lower band is induced by the **C**-points of total charge $q=-1$. Though we have studied the same structure ($\vartheta=0$) as that in Ref. [@ZHOU_2018_Science_Observationa], our results presented in Fig. \[fig1\] are by no means mere reproductions, since the scenario of $q=+1/2$ we demonstrate is missing in Ref. [@ZHOU_2018_Science_Observationa], where the key roles of **C**-points are also overlooked. 
We emphasize that though not explicitly demonstrated, the case of $q=+1/2$ was actually not forbidden by the arguments presented in Ref. [@ZHOU_2018_Science_Observationa]. Based on mode swapping and mirror symmetry properties, it was proved there that the charge associated with the isofrequency contour has to be a half-integer, accommodating both $q=\pm1/2$. We then take a further step to investigate asymmetric structures ($\vartheta\neq0$). The polarization fields on the $k_x$-$k_y$ plane for two scenarios ($\vartheta=0.01,~0.007$) are summarized in Figs. \[fig2\](a) and (b), neither exhibiting mirror symmetry anymore. With symmetry broken, though one **C**-point on the lower band is relatively stable, the other can move to the Fermi arc \[Fig. \[fig2\](b)\] or across to the upper band \[Fig. \[fig2\](a)\], with invariant $q=-1/2$ (see Table \[table\]). When the two **C**-points are located on opposite bands \[Fig. \[fig2\](a)\], we choose two contours on the upper band (the charge distribution on the lower band is similar): one encloses the two EPs only and the other encloses also the **C**-point. The polarization fields on the contours are shown in Figs. \[fig2\](c) and (d), with $q=0$ and $-1/2$, respectively. Despite this charge variance, we emphasize that for any isofrequency contour, the Berry phase is an invariant $\pi$, regardless of whether the symmetry is broken or not [@MAILYBAEV_2005_Phys.Rev.A_Geometric; @LEYKAM_2017_Phys.Rev.Lett._Edge; @SHEN_2018_Phys.Rev.Lett._Topological]. Basically, Fig. \[fig2\](c) tells convincingly that a nontrivial Berry phase does not necessarily produce a nonzero polarization charge. Except for the EPs, points on the Fermi arc actually correspond to two sets of eigenmodes with equal $\breve{\omega}_1$ but different $\breve{\omega}_2$. As a result, the **C**-point on the Fermi arc \[Fig. 
\[fig2\](b)\] is not really shared by both bands (only the EPs are shared), but still locates on the lower band, which can be confirmed by inspecting $\breve{\omega}_2$. In the absence of **C**-points, the charge distribution on the upper band would be identical to that in Fig. \[fig1\](b): any isofrequency contour encloses the two EPs only, with $q=-1/2$. On the lower band, in contrast, an isofrequency contour can enclose either two EPs and inevitably a **C**-point on the Fermi arc, or two EPs and two **C**-points. Both scenarios are illustrated in Figs. \[fig2\](e) and (f), with $q=0$ and $-1/2$, respectively. Figure \[fig2\](e) reconfirms that Berry phase and partial polarization charge are not strictly interlocked. ![(a) and (b) Polarization fields for two asymmetric PCSs with $\vartheta=0.01$ and $0.007$, respectively. The positions of the EPs are $\breve{\omega}_1=0.9612505$, $k_x=0.029632$, $k_y=\pm6.4\times10^{-4}$ in (a) and $\breve{\omega}_1=0.961297$, $k_x=0.029592$, $k_y=\pm6.6\times10^{-4}$ in (b). The positions of the two **C**-points are: $\breve{\omega}_1=(0.961261,0.9612108)$, $k_x=(0.029638,0.029623)$, $k_y=(6.3\times10^{-4},-6.8\times10^{-4})$ in (a) and $\breve{\omega}_1=(0.961297,0.961270)$, $k_x=(0.029588,0.02958)$, $k_y=(6.48\times10^{-4},-6.8\times10^{-4})$ in (b). In both (a) and (b), two isofrequency contours are chosen, on which the polarization fields are shown in (c)-(f), with $q=0,-1/2,0,-1/2$ and $\breve{\omega}_1=0.9612$+ $(5.5,8.3,8.3,5)\times10^{-5}$, respectively.[]{data-label="fig2"}](figure2){width="9cm"} Charge distributions for all three structures are summarized in Table \[table\], with blank spaces corresponding to nonexistent scenarios. 
Table \[table\] clearly indicates that for both the upper and lower bands, the global charge (when the contour is large enough to enclose both EPs and all **C**-points on the band) is invariant ($q=-1/2$), irrespective of how the **C**-points are distributed or whether the mirror symmetry is broken. In a word, there is a hidden order underlying the seemingly complex evolutions of polarization fields and their charges: the evolution is bounded by charge conservation. Considering the invariant $\pi$ Berry phase for any isofrequency contour, it becomes clear that the global polarization charge (rather than partial ones obtained when the contour covers only part of the singularities, be they degeneracies or **C**-points) is inextricably linked to this invariant Berry phase. Such a subtle connection is manifest not only for hermitian degeneracies [@BERRY_2003_Proc.R.Soc.Lond.A_optical; @BERRY_2007_ProgressinOptics_Chapter; @YE_2020_Phys.Rev.Lett._Singular], but also for scenarios with the degeneracies removed by further perturbations [@BERRY_2003_Proc.R.Soc.Lond.A_optical; @GUO_2020_Phys.Rev.Lett._Meron].

  Structure            Band   1C       Two EPs   Two EPs + 1C   Two EPs + 2Cs   Global
  -------------------- ------ -------- --------- -------------- --------------- --------
  $\vartheta=0$        L      $-1/2$   $-1/2$                   $+1/2$          $-1/2$
                       U               $-1/2$                                   $-1/2$
  $\vartheta=0.01$     L      $-1/2$   $0$       $-1/2$                         $-1/2$
                       U      $-1/2$   $0$       $-1/2$                         $-1/2$
  $\vartheta=0.007$    L      $-1/2$             $0$            $-1/2$          $-1/2$
                       U               $-1/2$                                   $-1/2$

  : Charges for **C**-points and different isofrequency contours (**L**: lower band; **U**: upper band). Blank spaces correspond to nonexistent scenarios.[]{data-label="table"}

\ As the final step, we employ the local Berry-Dennis model proposed in Ref. [@BERRY_2003_Proc.R.Soc.Lond.A_optical] to clarify the underlying mechanisms. The corresponding Hamiltonian of this model in linear basis is: $$\label{berry-dennis} \mathcal{H}\left(k_{x}, k_{y}\right)=\left(k_{x}+i \gamma\right) \sigma_{z}+k_{y} \sigma_{x}+\kappa \sigma_{y},$$ where $k_{x,y}$ are real; $\sigma_{x,y,z}$ are Pauli matrices; $\kappa$ and $\gamma$ are the planar chirality and radiation loss terms, respectively [@Supplemental_Material_3]. 
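The EP positions implied by Eq. (\[berry-dennis\]) can be checked numerically: its eigenvalues are $\pm\sqrt{(k_x+i\gamma)^2+k_y^2+\kappa^2}$, so the two EPs sit at $k_x=0$, $k_y=\pm\sqrt{\gamma^2-\kappa^2}$, which requires $\gamma>\kappa$. A minimal sketch with the illustrative values $\gamma=1$, $\kappa=0.8$:

```python
import numpy as np

# Eq. (1): H = (kx + i*gamma)*sigma_z + ky*sigma_x + kappa*sigma_y.
# Its eigenvalues are +/- sqrt((kx + i*gamma)^2 + ky^2 + kappa^2), so the
# two EPs (coalescing eigenvalues AND eigenvectors) sit at kx = 0,
# ky = +/- sqrt(gamma^2 - kappa^2), provided gamma > kappa.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def H(kx, ky, gamma=1.0, kappa=0.8):
    return (kx + 1j * gamma) * sz + ky * sx + kappa * sy

gamma, kappa = 1.0, 0.8
ky_ep = np.sqrt(gamma ** 2 - kappa ** 2)       # = 0.6 for these values
ev_at_ep = np.linalg.eigvals(H(0.0, ky_ep))
ev_away = np.linalg.eigvals(H(0.0, 0.9))
print(ky_ep, ev_at_ep)   # both eigenvalues collapse to ~0 at the EP
print(ev_away)           # away from the EP the two sheets split again
```

At the EP the matrix is traceless with zero determinant but nonzero entries, i.e. defective, which is the hallmark of a nonhermitian degeneracy rather than an ordinary crossing.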
This Hamiltonian matrix is indeed a rather ordinary $2\times2$ nonhermitian matrix, except that Berry and Dennis view its eigenvectors as Jones vectors [@YARIV_2006__Photonics] for generally elliptically polarized light in linear basis, thus establishing an effective connection between the Hamiltonian matrix and the electromagnetic polarization fields (see Supplemental Material (SM) [@Supplemental_Material_3] for justifications of this connection and the incorporation of $\kappa$). With this connection and the complex eigenvector denoted as $\textbf{x}=(x_1;x_2)$: when $\kappa=0$, EPs are chiral points with degenerate eigenvectors satisfying $x_1{\pm}ix_2=0$, overlapping with **C**-points; when $\kappa\neq0$, EPs are nonchiral and thus separated from **C**-points  [@BERRY_2003_Proc.R.Soc.Lond.A_optical; @HEISS_2001_Eur.Phys.J.D_chirality; @HARNEY_2004_TheEuropeanPhysicalJournalD-AtomicMolecularandOpticalPhysics_Time; @BERRY_2006_J.Phys.A:Math.Gen._Proximity]. Since for all the scenarios discussed above (see Figs. \[fig1\] and \[fig2\]) the EPs do not overlap with **C**-points, the introduction of the chirality term $\kappa$ is inevitable, which is missing in Ref. [@ZHOU_2018_Science_Observationa]. For convenience of analysis, to directly locate **C**-points in particular, the Hamiltonian can be converted into a circular-basis form as [@BERRY_2003_Proc.R.Soc.Lond.A_optical]: $$\label{berry-dennis-circular} \begin{aligned} \mathcal{H}_c\left(k_{x}, k_{y}\right) &=\left(k_{x}+i \gamma\right) \sigma_{x}+k_{y} \sigma_{y}+\kappa \sigma_{z} \\ &=\left(\begin{array}{cc} \kappa & k_{x}-\mathrm{i} k_{y}+\mathrm{i} \gamma \\ k_{x}+\mathrm{i} k_{y}+\mathrm{i} \gamma & -\kappa \end{array}\right), \end{aligned}$$ since such conversion would transform $\sigma_{x,y,z}$ in linear basis to $\sigma_{y,z,x}$ in circular basis [@Supplemental_Material_3]. 
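The stated cyclic relabelling $\sigma_{x,y,z}\to\sigma_{y,z,x}$ can be verified with any unitary that rotates the Bloch sphere by $120^{\circ}$ about the $(1,1,1)$ axis; the explicit matrix below is an illustrative choice (the SM's actual linear-to-circular transformation may differ by overall phases):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# A 120-degree rotation about the (1,1,1) Bloch-sphere axis cyclically
# permutes the Pauli matrices: sigma_x -> sigma_y -> sigma_z -> sigma_x.
# U = exp(-i*(2*pi/3)*(n.sigma)/2) with n = (1,1,1)/sqrt(3) reduces to:
U = 0.5 * (np.eye(2) - 1j * (sx + sy + sz))

checks = [np.allclose(U @ a @ U.conj().T, b)
          for a, b in [(sx, sy), (sy, sz), (sz, sx)]]
print(np.allclose(U @ U.conj().T, np.eye(2)), checks)  # True [True, True, True]
```

Conjugating Eq. (\[berry-dennis\]) by such a unitary indeed sends the $\sigma_z$, $\sigma_x$, $\sigma_y$ terms to $\sigma_x$, $\sigma_y$, $\sigma_z$, reproducing the structure of Eq. (\[berry-dennis-circular\]).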
After this conversion, the chiral points now correspond to points of linear polarizations, while circular-basis eigenvectors with $x_1^cx_2^c=0$ correspond to **C**-points. Identical to the linear-basis case, the EPs correspond to circular (noncircular) polarizations with the chirality term $\kappa=0$ ($\kappa\neq0$). The advantage of this circular-basis Hamiltonian is that the positions of **C**-points can then be directly identified by setting the off-diagonal terms of the matrix equal to zero: $k_{x}-\mathrm{i} k_{y}+\mathrm{i} \gamma=0$ and $k_{x}+\mathrm{i} k_{y}+\mathrm{i} \gamma=0$. Their roots $k_x=0,~ k_y=\gamma$ and $k_x=0,~ k_y=-\gamma$ are the positions of the **C**-points on the lower and upper bands, respectively [@BERRY_2003_Proc.R.Soc.Lond.A_optical]. This model can explain the charge distributions shown in Fig. \[fig2\](a) (also summarized in Table  \[table\] with $\vartheta=0.01$) with the two **C**-points located on opposite bands, except that in this model the topological charge of the **C**-point and the global charge for either band are $+1/2$ rather than $-1/2$ (see SM [@Supplemental_Material_3]). To account for these discrepancies, we modify the Hamiltonian as: $$\label{berry-dennis-circular2} \mathcal{H}_c\left(k_{x}, k_{y}\right)=\left(k_{x}+i \gamma\right) \sigma_{x}-k_{y} \sigma_{y}+\kappa \sigma_{z},$$ by adding a minus sign before the $\sigma_{y}$ term in Eq. (\[berry-dennis-circular\]). This is similar to substituting the Hamiltonian of the K-valley for that of the $\rm{K}^{\prime}$-valley in graphene, which would induce a $2\pi$ jump of Berry phase from $\pi$ to $-\pi$ [@KANE_2005_Phys.Rev.Lett._Quantum; @XIAO_2007_Phys.Rev.Lett._ValleyContrasting] (see SM [@Supplemental_Material_3]). 
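The **C**-point locations read off from the off-diagonal zeros of Eq. (\[berry-dennis-circular\]) can be confirmed numerically: at $(k_x,k_y)=(0,\gamma)$ one eigenvector of $\mathcal{H}_c$ has a vanishing first circular component, i.e. it is purely circularly polarized. A sketch with the illustrative values $\gamma=1$, $\kappa=0.8$:

```python
import numpy as np

# Circular-basis Hamiltonian of Eq. (2).
def Hc(kx, ky, gamma=1.0, kappa=0.8):
    return np.array([[kappa, kx - 1j * ky + 1j * gamma],
                     [kx + 1j * ky + 1j * gamma, -kappa]])

gamma = 1.0
w, v = np.linalg.eig(Hc(0.0, gamma))   # C-point candidate (kx, ky) = (0, gamma)
# The upper-right entry vanishes there, so the matrix is lower triangular:
# the eigenvalues are +/- kappa and the eigenvector for -kappa is (0, 1),
# a state with x1^c = 0, i.e. circularly polarized far field.
print(np.round(w, 6))
print(np.round(np.abs(v.T), 6))        # one eigenvector is (0, 1) up to phase
```

The same check at $(0,-\gamma)$ yields a vanishing second component, the **C**-point of the other band.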
Though Berry phases of $\pi$ and $-\pi$ are effectively the same (phase is only definable modulo $2\pi$), the corresponding polarization fields and charge distributions are contrastingly different (see SM [@Supplemental_Material_3] for the connections between Berry phase and polarization charges), with opposite signs for both the **C**-point charge and the global charge of both bands ($q=+1/2$ versus $q=-1/2$). The polarization fields extracted from this modified model ($\gamma=1$ and $\kappa=0.8$) are shown in Fig. \[fig3\](a), which are topologically equivalent to those in Fig. \[fig2\](a): for each **C**-point $q=-1/2$; for iso-eigenvalue ($\lambda_c$) contours that enclose both EPs, $q=0$ and $q=-1/2$ with and without the extra **C**-point surrounded, respectively \[see Figs. \[fig3\](c) and (d)\]; the global charge is constant ($q=-1/2$) for both bands. ![(a) and (b) Polarization fields extracted respectively from the models in Eq. (\[berry-dennis-circular2\]) and Eq. (\[berry-dennis-circular3\]), with $\gamma=1$ and $\kappa=0.8$. Two iso-eigenvalue contours are selected in (a) and (b), on which the polarization fields are shown in (c)-(f), with $q=-1/2,0,0,-1/2$, respectively.[]{data-label="fig3"}](figure3){width="9cm"} Since the model presented above is linear, there is only one solution when either of the off-diagonal terms is set to zero. This means that there is one and only one **C**-point on each band. As a result, this linear model would fail to account for what is observed in Fig. \[fig1\](b), where there are two **C**-points on the same band. Actually the linear model in Eq. (\[berry-dennis-circular2\]) has broken the $k_x$-$k_z$ mirror symmetry of the polarization fields (symmetries of the Hamiltonian and constructed fields are different, due to involvements of construction basis [@Supplemental_Material_3]), as confirmed by the field patterns in Fig. \[fig3\](a). 
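The half-integer charges quoted above can be reproduced by integrating the rotation of the polarization orientation around a loop. A sketch for the model of Eq. (\[berry-dennis-circular2\]), tracking one band by eigenvector overlap on a large circle enclosing both EPs and both **C**-points ($\gamma=1$, $\kappa=0.8$; the circular-basis convention fixes only the sign of $q$):

```python
import numpy as np

# Modified circular-basis model of Eq. (3):
# H_c = (kx + i*gamma)*sigma_x - ky*sigma_y + kappa*sigma_z.
def Hc(kx, ky, gamma=1.0, kappa=0.8):
    return np.array([[kappa, kx + 1j * gamma + 1j * ky],
                     [kx + 1j * gamma - 1j * ky, -kappa]])

# Ellipse orientation psi = (1/2)*arg(x1*conj(x2)) for a circular-basis
# Jones vector (x1, x2); the charge q is the winding of psi over 2*pi.
R, npts = 5.0, 4000
thetas = np.linspace(0.0, 2 * np.pi, npts, endpoint=False)
v_prev, psi_prev, total = None, None, 0.0
for th in np.append(thetas, thetas[0]):        # close the loop
    w, V = np.linalg.eig(Hc(R * np.cos(th), R * np.sin(th)))
    if v_prev is None:
        v = V[:, 0]                            # pick one band to start with
    else:                                      # follow it by maximal overlap
        v = V[:, np.argmax(np.abs(v_prev.conj() @ V))]
    psi = 0.5 * np.angle(v[0] * np.conj(v[1]))
    if psi_prev is not None:                   # wrap increments into (-pi/2, pi/2]
        total += (psi - psi_prev + np.pi / 2) % np.pi - np.pi / 2
    v_prev, psi_prev = v, psi

q = total / (2 * np.pi)
print(q)   # a half-integer; |q| = 1/2 on this global contour
```

Because the orientation is only defined modulo $\pi$, each increment is wrapped into $(-\pi/2,\pi/2]$ before accumulation; this is what allows half-integer windings for line fields.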
To reflect the mirror symmetry of the structure and thus also of the polarization fields, the linear model can be further modified as: $$\label{berry-dennis-circular3} \mathcal{H}_{c}\left(k_{x}, k_{y}\right)=\left(k_{x}+i \gamma\right) \sigma_{x}-k_{y} \sigma_{y}-k_{y}\sigma_{z},$$ where the constant chirality term $\kappa$ in Eq. (\[berry-dennis-circular2\]) is replaced by the variable $-k_{y}$, which guarantees that the constructed polarization fields are symmetric with respect to the $k_y$-$k_z$ plane (see SM [@Supplemental_Material_3] for detailed arguments concerning field symmetry). The symmetric fields \[in contrast to the asymmetric ones in Fig. \[fig3\](a)\] based on this model ($\gamma=1$ and $\kappa=0.8$) are shown in Fig. \[fig3\](b), which is topologically equivalent to Fig. \[fig1\](b): for each **C**-point $q=-1/2$; for iso-eigenvalue contours on the upper band $q=-1/2$; iso-eigenvalue contours on the lower band that enclose both EPs have $q=+1/2$ and $q=-1/2$, with and without the two **C**-points surrounded, respectively \[see Figs. \[fig3\](e) and (f)\]; the global charge is an invariant $q=-1/2$ for both bands. This reconfirms the claim in Ref. [@ZHOU_2018_Science_Observationa]: combined mirror symmetry and mode swapping produces half-integer charges. Here for simplicity we have confined ourselves to linear models, aiming to explain topologically what has been observed in Figs. \[fig1\] and \[fig2\]. To obtain more than one **C**-point on the same band, besides introducing the variable chirality term as shown in Eq. (\[berry-dennis-circular3\]), we can also incorporate nonlinear terms into the Berry-Dennis model, which nevertheless would change the charge distribution both locally and globally (see SM [@Supplemental_Material_3]). In conclusion, we revisit PCSs supporting nonhermitian degeneracies and establish a subtle connection between invariant Berry phase and the conserved global charge. 
It is revealed that for any isofrequency contour enclosing both EPs, despite the nontrivial $\pi$ Berry phase invariance, the topological charge is contrastingly variable, which could even be the trivial zero charge. Such seemingly complex evolutions of charge distributions are mediated by extra **C**-points, ensuring global charge conservation for both bands that is synonymous with the Berry phase invariance. Our discussions are confined to fundamental two-level systems, which can be extended to more sophisticated systems with more complex EP distributions [@HEISS_2008_J.Phys.A:Math.Theor._Chirality; @RYU_2012_Phys.Rev.A_Analysis; @LEE_2012_Phys.Rev.A_Geometrica; @HEISS_2015_J.Phys.A:Math.Theor._Resonance; @LIN_2016_Phys.Rev.Lett._Enhanceda; @HEISS_2016_J.Phys.A:Math.Theor._model; @CHEN_2017_Nature_Exceptional; @HODAEI_2017_Nature_Enhanced; @ZHANG_2018_Phys.Rev.X_Dynamically; @PAP_2018_Phys.Rev.A_NonAbelian; @DING_2018_Phys.Rev.Lett._Experimentala; @ZHANG_2018_Phys.Rev.A_Hybrid; @ZHONG_2018_Nat.Commun._Winding]. We emphasize that in this work, the Berry phase and polarization charge actually characterize different entities, the eigenvectors of Bloch modes and their projected far fields: Bloch modes are defined on the momentum-torus and can be folded into the irreducible Brillouin zone; while far fields are defined on the momentum-sphere, due to the involvement of out-of-plane wave vectors along which there is no periodicity. It has recently been shown that the Berry phase for electromagnetic fields themselves on a contour can be well defined [@BLIOKH_2019_Rep.Prog.Phys._Geometric; @BERRY_2019_J.Opt._Geometry]. 
We expect that blending all those concepts (nonhermitian degeneracies, Berry phase of their matrix eigenvectors, Berry phase and polarization singularities of the corresponding electromagnetic waves) would provide far more fertile platforms for incubating new fundamental investigations and practical applications, including the rare scenario of Berry phase (for electromagnetic fields) with the slaving parameters (eigenvectors from which the electromagnetic fields are constructed) themselves also having Berry phase. *Acknowledgments*: We acknowledge financial support from the National Natural Science Foundation of China (Grants No. 11874026 and No. 11874426), and several other researcher schemes of the National University of Defense Technology. W. L. is indebted to Sir Michael Berry and Prof. Tristan Needham for invaluable correspondences. S. Pancharatnam, “The propagation of light in absorbing biaxial crystals,” Proc. Indian Acad. Sci. **42**, 86–109 (1955). S. Pancharatnam, “Generalized theory of interference, and its applications,” Proc. Indian Acad. Sci. **44**, 247–262 (1956). M. V. Berry, “Quantal Phase Factors Accompanying Adiabatic Changes,” Proc. R. Soc. A **392**, 45–57 (1984). M. V. Berry, “The Adiabatic Phase and Pancharatnam Phase for Polarized Light,” J. Mod. Opt. **34**, 1401 (1987). M. Berry, “Geometric phase memories,” Nat. Phys. **6**, 148–150 (2010). M. V. Berry, “Waves and Thom’s theorem,” Adv. Phys. **25**, 1–26 (1976). J. F. Nye and M. V. Berry, “Dislocations in wave trains,” Proc. R. Soc. Lond. A **336**, 165–190 (1974). J. F. Nye, “Polarization Effects in the Diffraction of Electromagnetic Waves: The Role of Disclinations,” Proc. R. Soc. A **387**, 105–132 (1983). J. F. Nye, “Lines of circular polarization in electromagnetic wave fields,” Proc. R. Soc. A **389**, 279–290 (1983). M. V. 
Berry, “Geometry of phase and polarization singularities illustrated by edge diffraction and the tides,” in “Second International Conference on Singular Optics (Optical Vortices): Fundamentals and Applications,” vol. 4403 (International Society for Optics and Photonics, 2001), pp. 1–12. E. Bortolotti, “Memories and notes presented by fellows,” Rend. R. Acc. Naz. Linc. **4**, 552 (1926). S. M. Rytov, “Transition from wave to geometrical optics,” Dokl. Akad. Nauk SSSR **18**, 263 (1938). V. V. Vladimirskii, “The rotation of polarization plane for curved light ray,” Dokl. Akad. Nauk SSSR **21**, 222 (1941). K. Y. Bliokh, M. A. Alonso, and M. R. Dennis, “Geometric phases in 2D and 3D polarized fields: Geometrical, dynamical, and topological aspects,” Rep. Prog. Phys. **82**, 122401 (2019). E. Cohen, H. Larocque, F. Bouchard, F. Nejadsattari, Y. Gefen, and E. Karimi, “Geometric phase from Aharonov to Pancharatnam and beyond,” Nat. Rev. Phys. **1**, 437–449 (2019). M. R. Dennis, K. O’Holleran, and M. J. Padgett, “Chapter 5 Singular Optics: Optical Vortices and Polarization Singularities,” in “Progress in Optics,” vol. 53, E. Wolf, ed. (Elsevier, 2009), pp. 293–363. G. J. Gbur, *Singular Optics* (CRC Press Inc, Boca Raton, 2016). H. Hopf, *Differential Geometry in the Large: Seminar Lectures New York University 1946 and Stanford University 1956* (Springer, 2003). M. Berry, “Pancharatnam, virtuoso of the Poincaré sphere: An appreciation,” Curr. Sci. **67**, 220–220 (1994). M. Berry, “Making waves in physics,” Nature **403**, 21–21 (2000). M. V. Berry and M. R. Dennis, “The optical singularities of birefringent dichroic chiral crystals,” Proc. R. Soc. Lond. A **459**, 1261–1292 (2003). M. 
Berry, “Physics of Nonhermitian Degeneracies,” Czechoslovak Journal of Physics **54**, 1039–1047 (2004). M. V. Berry and M. R. Jeffrey, “Chapter 2 Conical diffraction: Hamilton’s diabolical point at the heart of crystal optics,” in “Progress in Optics,” vol. 50, E. Wolf, ed. (Elsevier, 2007), pp. 13–50. C. W. Hsu, B. Zhen, A. D. Stone, J. D. Joannopoulos, and M. Soljačić, “Bound states in the continuum,” Nat. Rev. Mater. **1**, 16048 (2016). C. W. Hsu, B. Zhen, J. Lee, S.-L. Chua, S. G. Johnson, J. D. Joannopoulos, and M. Soljačić, “Observation of trapped light within the radiation continuum,” Nature **499**, 188–191 (2013). B. Zhen, C. W. Hsu, L. Lu, A. D. Stone, and M. Soljačić, “Topological nature of optical bound states in the continuum,” Phys. Rev. Lett. **113**, 257401 (2014). Y. Yang, C. Peng, Y. Liang, Z. Li, and S. Noda, “Analytical Perspective for Bound States in the Continuum in Photonic Crystal Slabs,” Phys. Rev. Lett. **113**, 037401 (2014). Y. Guo, M. Xiao, and S. Fan, “Topologically Protected Complete Polarization Conversion,” Phys. Rev. Lett. **119**, 167401 (2017). A. Kodigala, T. Lepetit, Q. Gu, B. Bahari, Y. Fainman, and B. Kanté, “Lasing action from photonic bound states in continuum,” Nature **541**, 196–199 (2017). E. N. Bulgakov and D. N. Maksimov, “Topological Bound States in the Continuum in Arrays of Dielectric Spheres,” Phys. Rev. Lett. **118**, 267401 (2017). H. M. Doeleman, F. Monticone, W. den Hollander, A. Alù, and A. F. Koenderink, “Experimental observation of a polarization vortex at an optical bound state in the continuum,” Nat. Photonics **12**, 397 (2018). H. Zhou, C. Peng, Y. Yoon, C. W. Hsu, K. A. Nelson, L. Fu, J. D. Joannopoulos, M. Soljačić, and B. Zhen, “Observation of bulk Fermi arc and polarization half charge from paired exceptional points,” Science **359**, 1009–1012 (2018). Y. 
Zhang, A. Chen, W. Liu, C. W. Hsu, B. Wang, F. Guan, X. Liu, L. Shi, L. Lu, and J. Zi, “Observation of Polarization Vortices in Momentum Space,” Phys. Rev. Lett. **120**, 186103 (2018). K. Koshelev, S. Lepeshov, M. Liu, A. Bogdanov, and Y. Kivshar, “Asymmetric Metasurfaces with High-Q Resonances Governed by Bound States in the Continuum,” Phys. Rev. Lett. **121**, 193903 (2018). W. Chen, Y. Chen, and W. Liu, “Singularities and Poincaré indices of electromagnetic multipoles,” Phys. Rev. Lett. **122**, 153907 (2019). A. Chen, W. Liu, Y. Zhang, B. Wang, X. Liu, L. Shi, L. Lu, and J. Zi, “Observing vortex polarization singularities at optical band degeneracies,” Phys. Rev. B **99**, 180101 (2019). W. Liu, B. Wang, Y. Zhang, J. Wang, M. Zhao, F. Guan, X. Liu, L. Shi, and J. Zi, “Circularly Polarized States Spawning from Bound States in the Continuum,” Phys. Rev. Lett. **123**, 116104 (2019). J. Jin, X. Yin, L. Ni, M. Soljačić, B. Zhen, and C. Peng, “Topologically enabled ultrahigh-Q guided resonances robust to out-of-plane scattering,” Nature **574**, 501–504 (2019). C. Guo, M. Xiao, Y. Guo, L. Yuan, and S. Fan, “Meron Spin Textures in Momentum Space,” Phys. Rev. Lett. **124**, 106103 (2020). X. Yin, J. Jin, M. Soljačić, C. Peng, and B. Zhen, “Observation of topologically enabled unidirectional guided resonances,” Nature **580**, 467–471 (2020). W. Ye, Y. Gao, and J. Liu, “Singular Points of Polarizations in the Momentum Space of Photonic Crystal Slabs,” Phys. Rev. Lett. **124**, 153904 (2020). W. Chen, Y. Chen, and W. Liu, “Line Singularities and Hopf Indices of Electromagnetic Multipoles,” Laser Photonics Rev., DOI: 10.1002/lpor.202000049 (2020). Z. Liu, Y. Xu, Y. Lin, J. Xiang, T. Feng, Q. Cao, J. Li, S. Lan, and J. 
Liu, “High-[[Q]{}]{} [[Quasibound States]{}]{} in the [[Continuum]{}]{} for [[Nonlinear Metasurfaces]{}]{},” Phys. Rev. Lett. **123**, 253901 (2019). C. Huang, C. Zhang, S. Xiao, Y. Wang, Y. Fan, Y. Liu, N. Zhang, G. Qu, H. Ji, J. Han, L. Ge, Y. Kivshar, and Q. Song, “Ultrafast control of vortex microlasers,” Science **367**, 1018–1021 (2020). B. Wang, W. Liu, M. Zhao, J. Wang, Y. Zhang, A. Chen, F. Guan, X. Liu, L. Shi, and J. Zi, “Generating optical vortex beams by momentum-space polarization vortices centered at bound states in the continuum,” arXiv190912618 (2019). L. Lu, J. D. Joannopoulos, and M. Soljacic, “Topological photonics,” Nat. Photonics **8**, 821 (2014). T. Ozawa, H. M. Price, A. Amo, N. Goldman, M. Hafezi, L. Lu, M. C. Rechtsman, D. Schuster, J. Simon, O. Zilberberg, and I. Carusotto, “Topological photonics,” Rev. Mod. Phys. **91**, 015006 (2019). L. Feng, R. [El-Ganainy]{}, and L. Ge, “Non-[[Hermitian]{}]{} photonics based on paritytime symmetry,” Nat. Photonics **11**, 752–762 (2017). R. [El-Ganainy]{}, K. G. Makris, M. Khajavikhan, Z. H. Musslimani, S. Rotter, and D. N. Christodoulides, “Non-[[Hermitian]{}]{} physics and [[PT]{}]{} symmetry,” Nat. Phys. **14**, 11–19 (2018). M.-A. Miri and A. Al[ù]{}, “Exceptional points in optics and photonics,” Science **363**, eaar7709 (2019). M. V. Berry and J. H. Hannay, “Umbilic points on [[Gaussian]{}]{} random surfaces,” J. Phys. A: Math. Gen. **10**, 1809–1821 (1977). A. A. Mailybaev, O. N. Kirillov, and A. P. Seyranian, “Geometric phase around exceptional points,” Phys. Rev. A **72**, 014104 (2005). D. Leykam, K. Y. Bliokh, C. Huang, Y. D. Chong, and F. Nori, “Edge [[Modes]{}]{}, [[Degeneracies]{}]{}, and [[Topological Numbers]{}]{} in [[Non]{}]{}-[[Hermitian Systems]{}]{},” Phys. Rev. Lett. **118**, 040401 (2017). H. Shen, B. Zhen, and L. Fu, “Topological [[Band Theory]{}]{} for [[Non]{}]{}-[[Hermitian Hamiltonians]{}]{},” Phys. Rev. Lett. **120**, 146402 (2018). 
Supplemental Material includes the following six sections: (****). The transformations of Pauli matrices with conversions from linear to circular basis; (****). Employment of Berry-Dennis model for the photonic crystal slab; (****). Berry phase around two EPs of the nonhermitian Hamiltonians in Eq.(2) and Eq.(3); (****). Global charge of the polarization fields constructed from nonhermitian Hamiltonians in Eq.(2) and Eq.(3); (****). The mirror asymmetry and symmetry for polarization fields constructed from nonhermitian Hamiltonians in Eqs.(2-3) and Eq.(4); (****). Introducing nonlinear terms into the linear model. Supplemental Material include the following Refs. [@YARIV_2006__Photonics; @BERRY_2003_Proc.R.Soc.Lond.A_optical; @ZHOU_2018_Science_Observationa; @PAPAKOSTAS_Phys.Rev.Lett._optical_2003; @CHEN_2020_Phys.Rev.Research_Scatteringa; @HARNEY_2004_TheEuropeanPhysicalJournalD-AtomicMolecularandOpticalPhysics_Time; @BERRY_2006_J.Phys.A:Math.Gen._Proximity; @MAILYBAEV_2005_Phys.Rev.A_Geometric; @ZHOU_2016__Tailoring; @LEYKAM_2017_Phys.Rev.Lett._Edge; @SHEN_2018_Phys.Rev.Lett._Topological; @BERRY_2007_ProgressinOptics_Chapter; @CHEN_2019_ArXiv190409910Math-PhPhysicsphysics_Linea; @BERRY_1977_J.Phys.A:Math.Gen._Umbilic]. A. Yariv and P. Yeh, *Photonics: [[Optical Electronics]{}]{} in [[Modern Communications]{}]{}* ([Oxford University Press]{}, [New York]{}, 2006), 6th ed. W. Heiss and H. Harney, “The chirality of exceptional points,” Eur. Phys. J. D **17**, 149–151 (2001). H. L. Harney and W. D. Heiss, “Time [[Reversal]{}]{} and [[Exceptional Points]{}]{},” **29**, 429–432 (2004). M. V. Berry, “Proximity of degeneracies and chiral points,” J. Phys. A: Math. Gen. **39**, 10013–10018 (2006). C. L. Kane and E. J. Mele, “Quantum [[Spin Hall Effect]{}]{} in [[Graphene]{}]{},” Phys. Rev. Lett. **95**, 226801 (2005). D. Xiao, W. Yao, and Q. Niu, “Valley-[[Contrasting Physics]{}]{} in [[Graphene]{}]{}: [[Magnetic Moment]{}]{} and [[Topological Transport]{}]{},” Phys. Rev. 
Lett. **99**, 236809 (2007). W. D. Heiss, “Chirality of wavefunctions for three coalescing levels,” J. Phys. A: Math. Theor. **41**, 244010 (2008). J.-W. Ryu, S.-Y. Lee, and S. W. Kim, “Analysis of multiple exceptional points related to three interacting eigenmodes in a non-[[Hermitian Hamiltonian]{}]{},” Phys. Rev. A **85**, 042101 (2012). S.-Y. Lee, J.-W. Ryu, S. W. Kim, and Y. Chung, “Geometric phase around multiple exceptional points,” Phys. Rev. A **85**, 064103 (2012). W. D. Heiss and G. Wunner, “Resonance scattering at third-order exceptional points,” J. Phys. A: Math. Theor. **48**, 345203 (2015). Z. Lin, A. Pick, M. Lon[č]{}ar, and A. W. Rodriguez, “Enhanced [[Spontaneous Emission]{}]{} at [[Third]{}]{}-[[Order Dirac Exceptional Points]{}]{} in [[Inverse]{}]{}-[[Designed Photonic Crystals]{}]{},” Phys. Rev. Lett. **117**, 107402 (2016). W. D. Heiss and G. Wunner, “A model of three coupled wave guides and third order exceptional points,” J. Phys. A: Math. Theor. **49**, 495303 (2016). W. Chen, [Ş]{}. Kaya [Ö]{}zdemir, G. Zhao, J. Wiersig, and L. Yang, “Exceptional points enhance sensing in an optical microcavity,” Nature **548**, 192–196 (2017). H. Hodaei, A. U. Hassan, S. Wittek, H. [Garcia-Gracia]{}, R. [El-Ganainy]{}, D. N. Christodoulides, and M. Khajavikhan, “Enhanced sensitivity at higher-order exceptional points,” Nature **548**, 187–191 (2017). X.-L. Zhang, S. Wang, B. Hou, and C. T. Chan, “Dynamically [[Encircling Exceptional Points]{}]{}: [[In]{}]{} situ [[Control]{}]{} of [[Encircling Loops]{}]{} and the [[Role]{}]{} of the [[Starting Point]{}]{},” Phys. Rev. X **8**, 021066 (2018). E. J. Pap, D. Boer, and H. Waalkens, “Non-[[Abelian]{}]{} nature of systems with multiple exceptional points,” Phys. Rev. A **98**, 023818 (2018). K. Ding, G. Ma, Z. Q. Zhang, and C. T. Chan, “Experimental [[Demonstration]{}]{} of an [[Anisotropic Exceptional Point]{}]{},” Phys. Rev. Lett. **121**, 085702 (2018). X.-L. Zhang and C. T. 
Chan, “Hybrid exceptional point and its dynamical encircling in a two-state system,” Phys. Rev. A **98**, 033810 (2018). Q. Zhong, M. Khajavikhan, D. N. Christodoulides, and R. [El-Ganainy]{}, “Winding around non-[[Hermitian]{}]{} singularities,” Nat. Commun. **9**, 4808 (2018). M. V. Berry and P. Shukla, “Geometry of [[3D]{}]{} monochromatic light: Local wavevectors, phases, curl forces, and superoscillations,” J. Opt. **21**, 064002 (2019). A. Papakostas, A. Potts, D. M. Bagnall, S. L. Prosvirnin, H. J. Coles, and N. I. Zheludev, “Optical manifestations of planar chirality,” Phys. Rev. Lett. **90**, 107404 (2003). W. Chen, Q. Yang, Y. Chen, and W. Liu, “Scattering activities bounded by reciprocity and parity conservation,” Phys. Rev. Research **2**, 013277 (2020). H. Zhou, “Tailoring light with photonic crystal slabs : From directional emission to topological half charges,” Thesis, Massachusetts Institute of Technology (2016).
--- abstract: 'For some exact monoidal categories, we describe explicitly a connection between topological and algebraic definitions of the Lie bracket on the extension algebra of the unit object. The topological definition, due to Schwede and to Hermann, involves loops in extension categories, and the algebraic definition involves homotopy liftings as introduced by the first author. As a consequence of our description, we prove that the topological definition indeed yields a Gerstenhaber algebra structure in the monoidal category setting, answering a question of Hermann. For use in proofs, we generalize $A_{\infty}$-coderivation and homotopy lifting techniques from bimodule categories to some exact monoidal categories.' address: - 'Department of Mathematics and Mechanics, Saint Petersburg State University, Saint Petersburg, Russia' - 'Department of Mathematics, Texas A&M University, College Station, Texas 77843, USA' author: - 'Y. Volkov' - 'S. Witherspoon' date: 13 April 2020 title: Graded Lie structure on cohomology of some exact monoidal categories --- Introduction ============ The Lie structure on Hochschild cohomology of an algebra is more difficult to understand than is the associative algebra structure. There are fewer techniques available for handling it in relation to arbitrary resolutions or to arbitrary extensions of modules. A topological approach introduced by Schwede [@Schwede] and expanded to some types of monoidal categories by Hermann [@Hermann2] expresses the bracket as a loop in an extension category. Shoikhet [@Shoikhet1; @Shoikhet2] and Lowen and Van den Bergh [@LowVan] offered related advances in the direction of Deligne’s Conjecture. An algebraic approach introduced by Negron and the authors [@NW1; @Volkov] describes the bracket on an arbitrary projective resolution via homotopy lifting functions [@Volkov] which were expanded to $A_{\infty}$-coderivations [@NVW], providing further insight and theoretical tools. 
In this paper, we generalize the algebraic approach of homotopy liftings and $A_{\infty}$-coderivations from the Hochschild cohomology of algebras to the cohomology of some types of exact monoidal categories. We use these techniques to make a direct connection to the work of Schwede and Hermann. Specifically, the topological definition of the bracket is a loop traversing four incarnations of cup product: tensor product in each of two orders and Yoneda splice in each of two orders. This definition calls on an isomorphism from homotopy classes of loops on a category of $n$-extensions to a category of $(n-1)$-extensions, given by Retakh and by Neeman [@NeRe; @Retakh]. The algebraic definition of the bracket via homotopy liftings then essentially provides a homotopy between two resulting paths from the Yoneda splice in one order to that in the other. As a consequence of this explicit description and connection with topology, we prove that Hermann’s bracket in a monoidal category setting indeed induces a Gerstenhaber algebra structure on cohomology, answering a question of his [@Hermann2]. We begin in Section \[sec:exact\] by recalling some standard definitions and notation for exact categories and $n$-extensions. We then summarize some of Retakh’s work on loops in extension categories in Section \[sec:SH\], in particular Schwede’s and Hermann’s formulation of his work in view of its application to Lie structures. In Section \[sec:Gb\] we generalize the $A_{\infty}$-coalgebra techniques of [@NVW] and the homotopy lifting techniques of [@Volkov] to some types of monoidal categories, defining a bracket on the extension algebra of the unit object that makes it a Gerstenhaber algebra. Finally, we make a direct connection to Schwede’s and Hermann’s topological approach in Section \[sec:equiv\]. 
Exact categories and extensions {#sec:exact} =============================== In this section we recall definitions and basic facts, and we introduce some notation concerning exact categories and $n$-extensions. Let ${{\mathcal{C}}}$ be an additive category and $\mathcal{E}$ a class of distinguished sequences $X\rightarrow Y\rightarrow Z$ of ${{\mathcal{C}}}$. We call ${{\mathcal{E}}}$ a class of [*conflations*]{} if for every sequence $X\xrightarrow{\iota} Y\xrightarrow{\pi} Z$ in ${{\mathcal{E}}}$, the morphism $\iota$ is a kernel of $\pi$ and the morphism $\pi$ is a cokernel of $\iota$. A morphism $\iota: X\rightarrow Y$ in ${{\mathcal{C}}}$ is an [*inflation*]{} if there exists a conflation of the form $X\xrightarrow{\iota} Y\xrightarrow{\pi} Z$. A morphism $\pi:Y\rightarrow Z$ in ${{\mathcal{C}}}$ is a [*deflation*]{} if there exists a conflation of the form $X\xrightarrow{\iota} Y\xrightarrow{\pi} Z$. The pair $(\mathcal{C},\mathcal{E})$ is called an [*exact category*]{} if the following axioms hold: 1. $0\rightarrow 0\rightarrow 0$ is a conflation; 2. the composition of any two deflations is also a deflation; 3. if $\pi: Y\rightarrow Z$ is a deflation and $f:Y'\rightarrow Z$ is any morphism, then there exists a pullback square $$\begin{array}{ccc} K & \xrightarrow{\ \pi'\ } & Y' \\ {\scriptstyle f'}\big\downarrow & & \big\downarrow{\scriptstyle f} \\ Y & \xrightarrow{\ \pi\ } & Z \end{array}$$ in which $\pi'$ is a deflation; 4. if $\iota: X\rightarrow Y$ is an inflation and $g:X\rightarrow Y'$ is any morphism, then there exists a pushout square $$\begin{array}{ccc} X & \xrightarrow{\ \iota\ } & Y \\ {\scriptstyle g}\big\downarrow & & \big\downarrow{\scriptstyle g'} \\ Y' & \xrightarrow{\ \iota'\ } & R \end{array}$$ in which $\iota'$ is an inflation.
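As a standard first illustration of these axioms (our example, not taken from this paper), one may take $\mathcal{C}$ to be the category of finitely generated projective modules over a ring $R$, with conflations the short exact sequences whose three terms all lie in $\mathcal{C}$:

```latex
% Every conflation X --i--> Y --p--> Z splits here, since Z is projective:
% there is s : Z -> Y with ps = 1_Z, so Y \cong X \oplus Z and the
% deflations are exactly the split epimorphisms.
% Axiom 2: if p, q are split epimorphisms with sections s, t, then
%   (pq)(ts) = p(qt)s = ps = 1,
% so pq is again a deflation.
% Axiom 3: the pullback of p : Y -> Z along f : Y' -> Z is
%   K = \{(y, y') \in Y \oplus Y' : p(y) = f(y')\},
% and the projection K -> Y' is split by y' \mapsto (s f(y'), y'),
% hence a deflation; K \cong X \oplus Y' is again f.g. projective.
% Axiom 4 is checked dually, using that inflations here are the
% split monomorphisms.
```

The extension-closed subcategories of abelian categories discussed next generalize this example.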
One can show (see [@exKeller]) that if $(\mathcal{C},\mathcal{E})$ is an exact category, then any split exact sequence is a conflation and the composition of any two inflations is an inflation. Moreover, if $\iota$ has a cokernel and $f\iota$ is an inflation for some $f$, then $\iota$ is an inflation itself; dually, if $\pi$ has a kernel and $\pi g$ is a deflation for some $g$, then $\pi$ is a deflation. One can also show (see [@exact]) that any extension-closed full subcategory of an abelian category is exact and that any small exact category can be realized as an extension-closed full subcategory of some abelian category. We will usually omit the notation $\mathcal{E}$ and call $\mathcal{C}$ an exact category, meaning that some class of conflations for $\mathcal{C}$ is fixed. We are going to follow the approach of [@Hermann2] to the study of homological properties of exact categories. Namely, we will study the categories of $n$-extensions in $\mathcal{C}$. A sequence $\cdots\xrightarrow{d_1}E_1\xrightarrow{d_0}E_0$ together with a morphism $\mu_E:E_0\rightarrow X$ is called a [*resolution*]{} of $X\in{{\mathcal{C}}}$ if there are conflations $K_0\xrightarrow{\iota_0}E_0\xrightarrow{\mu_E}X$; $K_1\xrightarrow{\iota_1}E_1\xrightarrow{\pi_0}K_0$; $\ldots$ such that $d_i=\iota_i\pi_i$ for all $i\ge 0$. In this case we will denote the corresponding resolution by $(E,d,\mu_E)$, where $E=(E_i)_{i\ge 0}$ and $d=(d_i)_{i\ge 0}$. Of course, resolutions are particular cases of complexes, i.e. of sequences $\cdots\xrightarrow{d_{i+1}}E_{i+1}\xrightarrow{d_i}E_i\xrightarrow{d_{i-1}}E_{i-1}\xrightarrow{d_{i-2}}\cdots$ such that $d_id_{i+1}=0$ for all $i\in\mathbb{Z}$. Such a complex we denote by $(E,d)$. If $(E',d')$ is another complex, then a [*degree $n$ morphism*]{} from $(E, d)$ to $(E', d')$ is a sequence of maps $f=(f_i)_{i\in\mathbb{Z}}$ with $f_i\in\operatorname{Hom}_{{\mathcal{C}}}(E_i,E'_{i-n})$. In particular, $d$ is a degree one morphism from $(E, d)$ to itself.
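That a resolution in the above sense is automatically a complex follows in one line from the defining conflations; the verification below is ours, in the notation just introduced:

```latex
% The conflation K_{i+1} \xrightarrow{\iota_{i+1}} E_{i+1}
% \xrightarrow{\pi_i} K_i gives \pi_i\,\iota_{i+1} = 0, hence
\begin{aligned}
d_i d_{i+1} \;=\; \iota_i\pi_i\,\iota_{i+1}\pi_{i+1}
            \;=\; \iota_i\,(\pi_i\iota_{i+1})\,\pi_{i+1} \;=\; 0 .
\end{aligned}
```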
Degree $n$ morphisms between two fixed complexes form an abelian group in an obvious way. Moreover, if $g$ is a degree $m$ morphism from $(E',d')$ to $(E'', d'')$, then we define the composition $gf$ as the degree $(n+m)$ morphism from $(E, d)$ to $(E'', d'')$ defined by the equality $(gf)_i = g_{i-n}f_i$ for all $i$. For a degree $n$ morphism $f$ as above, we denote by ${\partial}(f)$ the degree $(n + 1)$ morphism defined by the equality ${\partial}(f) =d'f - (-1)^nf d$. We will call $f$ a [*chain map*]{} if ${\partial}(f) = 0$, and we will say that a degree $n$ morphism $f'$ from $(E, d)$ to $(E', d')$ is [*homotopic*]{} to $f$, writing $f'\sim f$, if $f-f' = {\partial}(s)$ for some degree $(n-1)$ morphism $s$. We will call $f$ [*null homotopic*]{} if $f\sim 0$. We will also consider any object $X$ of ${{\mathcal{C}}}$ as a complex $(\tilde X,0)$ with $\tilde X_0=X$ and $\tilde X_i=0$ for $i\not=0$. For two resolutions $(E, d, \mu_E)$ and $(E', d', \mu_{E'})$ of $X$, we will call a degree zero chain map $f$ from $(E, d)$ to $(E', d')$ a [*morphism of resolutions*]{} if it lifts the identity morphism on $X$, i.e. if $\mu_{E'}f=\mu_E$. If the other data are clear from the context, we will sometimes denote the complex $(E, d)$ or even the resolution $(E, d, \mu_E)$ simply by $E$. A resolution $(E, d, \mu_E)$ of $X$ is called an [*$n$-extension*]{} of $X$ by $Y$ if $E_n = Y$ and $E_i = 0$ for $i > n$. The set of all $n$-extensions of $X$ by $Y$ is denoted by ${{\mathcal{E}} \! {\it{xt}}}^n_{{\mathcal{C}}}(X, Y )$. As usual, we set ${{\mathcal{E}} \! {\it{xt}}}^0_{{\mathcal{C}}}(X, Y )=\operatorname{Hom}_{{\mathcal{C}}}(X, Y )$, but this paper mostly concerns the case $n \ge 1$, and one may assume throughout that $n\geq 1$ whenever some argument or construction does not work for $n = 0$. For an $n$-extension $(E, d, \mu_E)$ of $X$ by $Y$, we introduce some special morphisms.
We set $\iota_E = d_{n-1}:Y\rightarrow E_{n-1}$ and introduce morphisms $$\kappa_E:Y\rightarrow E \ \mbox{ and } \ \pi_E : E\rightarrow Y$$ of degrees $-n$ and $n$ respectively, that are identity maps in their unique nonzero degrees. Note that $\pi_E$ is a chain map, ${\partial}(\kappa_E) = \iota_E\kappa_E$ and $\pi_E\kappa_E = 1_Y$. Let us pick $(E, \phi, \mu_E),(F, \psi, \mu_F )\in {{\mathcal{E}} \! {\it{xt}}}^n_{{\mathcal{C}}}(X, Y )$. A [*morphism of $n$-extensions*]{} from $E$ to $F$ is a morphism of resolutions $f : E\rightarrow F$ that is the identity map in degree $n$, i.e. such that $\pi_F f = \pi_E$. There are only identity morphisms between elements of ${{\mathcal{E}} \! {\it{xt}}}^0_{{\mathcal{C}}}(X, Y )$. Morphisms generate an equivalence relation on ${{\mathcal{E}} \! {\it{xt}}}^n_{{{\mathcal{C}}}}(X,Y)$ and the set of equivalence classes is denoted $\operatorname{Ext}^n_{{{\mathcal{C}}}}(X,Y)$. Also, with these morphisms, ${{\mathcal{E}} \! {\it{xt}}}^n_{{\mathcal{C}}}(X, Y )$ is turned into a category for any $n \ge 0$. Then one can define homotopy groups $\pi_i{{\mathcal{E}} \! {\it{xt}}}^n_{{\mathcal{C}}}(X, Y )$ of ${{\mathcal{E}} \! {\it{xt}}}^n_{{\mathcal{C}}}(X, Y )$ as homotopy groups of the classifying space $\mathcal{B}\big({{\mathcal{E}} \! {\it{xt}}}^n_{{\mathcal{C}}}(X, Y )\big)$. For a more direct interpretation of the groups $\pi_0{{\mathcal{E}} \! {\it{xt}}}^n_{{\mathcal{C}}}(X, Y )$ and $\pi_1{{\mathcal{E}} \! {\it{xt}}}^n_{{\mathcal{C}}}(X, Y )$ one can look, for example, at [@Hermann2 §2.2]. In particular, $\operatorname{Ext}^n_{{\mathcal{C}}}(X, Y ) = \pi_0{{\mathcal{E}} \! {\it{xt}}}^n_{{\mathcal{C}}}(X, Y )$ consists of the classes of elements of ${{\mathcal{E}} \! {\it{xt}}}^n_{{\mathcal{C}}}(X, Y )$ modulo the minimal equivalence relation such that $E$ is equivalent to $F$ whenever $\operatorname{Hom}_{{{\mathcal{E}} \! {\it{xt}}}^n_{{\mathcal{C}}}(X,Y )}(E, F)\not=\varnothing$. 
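Returning to the special morphisms $\kappa_E$ and $\pi_E$ introduced at the start of this paragraph, the identities ${\partial}(\kappa_E)=\iota_E\kappa_E$ and $\pi_E\kappa_E=1_Y$ asserted there follow directly from the definitions; the short computation below is our own, in the paper's notation:

```latex
% The only nonzero component of \kappa_E is 1_Y : Y \to E_n = Y, and Y,
% viewed as a complex concentrated in degree 0, carries the zero
% differential.  Hence
\begin{aligned}
\partial(\kappa_E) \;=\; d\,\kappa_E - (-1)^{-n}\,\kappa_E\cdot 0
                   \;=\; d\,\kappa_E ,
\end{aligned}
% whose only nonzero component is the composite
%   Y \xrightarrow{1_Y} E_n \xrightarrow{d_{n-1}} E_{n-1},
% i.e. \iota_E; this is exactly the single component of \iota_E\kappa_E.
% Likewise the only candidate component of \pi_E\,d factors through
% E_{n+1} = 0, so \partial(\pi_E) = 0 and \pi_E is a chain map, while
% \pi_E\kappa_E = 1_Y \circ 1_Y = 1_Y.
```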
Let us now recall some constructions involving the sets ${{\mathcal{E}} \! {\it{xt}}}^n_{{\mathcal{C}}}(X, Y )$. First of all, let us pick $(E, \phi, \mu_E) \in {{\mathcal{E}} \! {\it{xt}}}^n_{{\mathcal{C}}}(X, Y )$ and two morphisms $\alpha : X'\rightarrow X$ and $\beta : Y\rightarrow Y'$. Then we define $E\alpha = (E\alpha, \phi^{\alpha}, \mu_{E\alpha}) \in {{\mathcal{E}} \! {\it{xt}}}^n_{{\mathcal{C}}}(X', Y )$ and $\beta E = (\beta E, {}^\beta\phi, \mu_{\beta E}) \in {{\mathcal{E}} \! {\it{xt}}}^n_{{\mathcal{C}}}(X, Y')$ in the following way. Let us construct the pullback $$\begin{array}{ccc} (E\alpha)_0 & \xrightarrow{\ \mu_{E\alpha}\ } & X' \\ {\scriptstyle \bar\alpha}\big\downarrow & & \big\downarrow{\scriptstyle \alpha} \\ E_0 & \xrightarrow{\ \mu_E\ } & X \end{array}$$ of $\mu_E$ along $\alpha$ and the pushout $$\begin{array}{ccc} Y & \xrightarrow{\ \iota_E\ } & E_{n-1} \\ {\scriptstyle \beta}\big\downarrow & & \big\downarrow{\scriptstyle \bar\beta} \\ Y' & \xrightarrow{\ \iota_{\beta E}\ } & (\beta E)_{n-1} \end{array}$$ of $\iota_E$ along $\beta$. Now we set $(E \alpha)_i = E_i$, $\phi^{\alpha}_i = \phi_i$ for $i > 0$ and define $\phi^{\alpha}_0$ as the unique morphism such that $\mu_{E\alpha}\phi^{\alpha}_0=0$ and $\bar\alpha \phi^{\alpha}_0 = \phi_0$. We set also $(\beta E)_i = E_i$, ${}^\beta \phi_{i-1} = \phi_{i-1}$ for $i < n - 1$, $\mu_{\beta E} = \mu_E$ and define ${}^\beta \phi_{n-2}$ as the unique morphism such that ${}^\beta \phi_{n-2}\iota_{\beta E} = 0$ and ${}^\beta \phi_{n-2}\bar\beta= \phi_{n-2}$. In the case $n = 1$ the last construction must be slightly corrected, because in this case the pushout construction must be applied to $\mu_{\beta E}\not=\mu_E$.
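That $\phi^{\alpha}_0$ exists at all is an instance of the pullback universal property; the one-line verification below is ours, in the notation above:

```latex
% (E\alpha)_0 is the pullback of \mu_E along \alpha.  The pair of morphisms
% \phi_0 : E_1 \to E_0 and 0 : E_1 \to X' satisfies
\begin{aligned}
\mu_E\,\phi_0 \;=\; 0 \;=\; \alpha\cdot 0 ,
\end{aligned}
% so the universal property yields a unique \phi_0^\alpha : E_1 \to (E\alpha)_0
% with \bar\alpha\,\phi_0^\alpha = \phi_0 and \mu_{E\alpha}\,\phi_0^\alpha = 0.
% Moreover \phi_0^\alpha\,\phi_1 is the unique morphism lifting the pair
% (\phi_0\phi_1,\,0) = (0,\,0), whence \phi_0^\alpha\,\phi_1 = 0 and
% E\alpha is again a complex (indeed a resolution of X').
```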
One can see that $(\beta E)\alpha = \beta(E\alpha)$ and so the notation $\beta E\alpha$ makes sense. Let us now pick two extensions $E, F\in {{\mathcal{E}} \! {\it{xt}}}^n_{{\mathcal{C}}}(X, Y )$. We define their sum (called the [*Baer sum*]{}) in the following way. First we form the $n$-extension $E\oplus F$ of $X^2$ by $Y^2$ in the obvious way and then define $E + F = \begin{pmatrix}1&1\end{pmatrix}(E \oplus F)\begin{pmatrix}1\\ 1\end{pmatrix}$. This sum operation determines a commutative monoid structure on the set of (isomorphism classes of) $n$-extensions of $X$ by $Y$. The zero element for this operation is $$\label{sigma} \sigma_n(X, Y ) =\left(0 \rightarrow Y\xrightarrow{1_Y} Y \rightarrow 0 \rightarrow \cdots \rightarrow 0 \rightarrow X\right)\in {{\mathcal{E}} \! {\it{xt}}}^n_{{\mathcal{C}}}(X, Y)$$ with $\mu_{\sigma_n(X,Y )} = 1_X$, where for $n = 1$ the middle terms $Y$ and $X$ glue together and form the direct sum $X \oplus Y$. Moreover, the sum operation passes to $\operatorname{Ext}^n_{{\mathcal{C}}}(X, Y )$ and determines the structure of an abelian group on it. If the underlying category ${{\mathcal{C}}}$ is ${\mathbf{k}}$-linear for some commutative ring ${\mathbf{k}}$, then $\operatorname{Ext}^n_{{\mathcal{C}}}(X, Y )$ is a ${\mathbf{k}}$-module, where the $n$-extension $aE = Ea$ is defined via the identification of $a\in{\mathbf{k}}$ with the morphism $a1_X : X \rightarrow X$. In particular, if $a$ is invertible and $E$ stands for $(E, \phi, \mu_E)$, then $aE$ denotes the $n$-extension $(E, \phi, a^{-1}\mu_E)$. We set $\operatorname{Ext}^{\bullet}_{{\mathcal{C}}}(X, Y ) = \oplus_{n\ge 0}\operatorname{Ext}^n_{{\mathcal{C}}}(X, Y )$. Note that this definition already makes sense at this point. Now let us pick $(E, \phi, \mu_E) \in {{\mathcal{E}} \! {\it{xt}}}^n_{{\mathcal{C}}}(X, Y )$ and $(F, \psi, \mu_F ) \in {{\mathcal{E}} \! {\it{xt}}}^m_{{\mathcal{C}}}(Y, Z)$.
We define the $(m + n)$-extension $F\#E$ as the Yoneda splice $$0 \rightarrow Z \xrightarrow{\iota_F}F_{m-1} \xrightarrow{\psi_{m-2}}\cdots \xrightarrow{\psi_0}F_0 \xrightarrow{\iota_E\mu_F}E_{n-1} \xrightarrow{\phi_{n-2}}\cdots \xrightarrow{\phi_0}E_0$$ with $\mu_{F\#E} = \mu_E$. This construction passes to the sets $\operatorname{Ext}_{{\mathcal{C}}}$, i.e. it induces a product $\#: \operatorname{Ext}^m_{{\mathcal{C}}}(Y, Z) \times \operatorname{Ext}^n_{{\mathcal{C}}}(X, Y )\rightarrow \operatorname{Ext}^{m+n}_{{\mathcal{C}}}(X, Z)$ which is called the [*Yoneda product*]{}. In particular, for any object $X$ of ${{\mathcal{C}}}$ the set $\operatorname{Ext}^{\bullet}_{{\mathcal{C}}}(X, X)$ is a ring with respect to the operations $+$ and $\#$. If ${{\mathcal{C}}}$ is ${\mathbf{k}}$-linear, then $\operatorname{Ext}^{\bullet}_{{\mathcal{C}}}(X, X)$ is a ${\mathbf{k}}$-algebra. Note that one can define the derived category ${\rm D}\mathcal{C}$ of the exact category $\mathcal{C}$ (see, for example, [@exact]). Given an $n$-extension $(E, \phi, \mu_E)$ of $X$ by $Y$, one can define a morphism from $X$ to $Y [n]$ in ${\rm D}{{\mathcal{C}}}$ as the composition $\pi_E\mu_E^{-1}$, which makes sense because $\mu_E$ is a quasi-isomorphism. This correspondence induces an isomorphism between $\operatorname{Ext}_{\mathcal{C}}^n(X,Y)$ and $\operatorname{Hom}_{{\rm D}\mathcal{C}}(X,Y[n])$ that respects the additive (${\mathbf{k}}$-linear) structure and sends the Yoneda product of two sequences to the composition of the corresponding morphisms in the derived category, in the sense that $\pi_{F\#E}\mu_{F\#E}^{-1}$ coincides with $(\pi_F\mu_F^{-1})[n]\pi_E\mu_E^{-1}$ up to a sign. This gives a strong motivation to study the groups $\operatorname{Ext}_{\mathcal{C}}^n(X,Y)$.
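That the Yoneda splice is again a complex reduces to two vanishing composites at the seam; the verification below is our own, using only the defining conflations:

```latex
% At the junction F_0 \to E_{n-1} the two relevant composites are
\begin{aligned}
(\iota_E\mu_F)\,\psi_0 &= \iota_E\,(\mu_F\psi_0) = \iota_E\cdot 0 = 0,\\
\phi_{n-2}\,(\iota_E\mu_F) &= (\phi_{n-2}\,\iota_E)\,\mu_F = 0\cdot\mu_F = 0,
\end{aligned}
% where \mu_F\psi_0 = 0 because F is a resolution of Y, and
% \phi_{n-2}\iota_E = \phi_{n-2}\phi_{n-1} = 0 in E, since
% \iota_E = \phi_{n-1} is the inclusion of Y = E_n.
```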
If the category ${{\mathcal{C}}}$ satisfies an additional property, namely, if it has [*enough projective objects*]{}, then the groups $\operatorname{Ext}^n_{{\mathcal{C}}}(X, Y )$ have another, more usable, description. The object $P$ of an exact category ${{\mathcal{C}}}$ is called [*projective*]{} if any deflation $X\rightarrow P$ is a split epimorphism. The resolution $(P, d, \mu_P )$ of $X \in {{\mathcal{C}}}$ is called projective if $P_i$ is projective for each $i \ge 0$. If $X$ has a projective resolution $(P, d, \mu_P )$, standard arguments show that there exists a canonical isomorphism of abelian groups (${\mathbf{k}}$-spaces if ${{\mathcal{C}}}$ is ${\mathbf{k}}$-linear) $\operatorname{Ext}^n_{{\mathcal{C}}}(X, Y )\cong\operatorname{Ker}\operatorname{Hom}_{{\mathcal{C}}}(d_{n}, Y )/ \operatorname{Im}\operatorname{Hom}_{{\mathcal{C}}}(d_{n-1}, Y )$. Moreover, the Yoneda product on the left side of this isomorphism can be calculated on the right side via the so-called lifting technique. In this paper we will restrict ourselves to the case of $n$-extensions of objects $X\in{{\mathcal{C}}}$ having projective resolutions. Projective resolutions are a standard tool for studying homological algebra and all interesting examples that we know admit this tool, so our setting does not seem to be very restrictive. Let us recall the construction of the isomorphism of abelian groups $\operatorname{Ext}^n_{{\mathcal{C}}}(X, Y )\cong\operatorname{Ker}\operatorname{Hom}_{{\mathcal{C}}}(d_{n}, Y )/ \operatorname{Im}\operatorname{Hom}_{{\mathcal{C}}}(d_{n-1}, Y )$. Let us first pick some [*$n$-cocycle*]{}, i.e. a degree $n$ chain map $f : P\rightarrow Y$. We denote by $K(f)$ the element $$\label{Kf} 0 \rightarrow Y \xrightarrow{\iota_f}K(f)_{n-1}\xrightarrow{d_f} P_{n-2}\xrightarrow{d_{n-3}}\cdots \xrightarrow{d_0}P_0$$ of ${{\mathcal{E}} \! 
{\it{xt}}}^n_{{\mathcal{C}}}(X, Y )$ with $\mu_{K(f)} = \mu_P$, where $K(f)_{n-1}$ is the pushout of the morphisms $d_{n-1}:P_n\rightarrow P_{n-1}$ and $f:P_n\rightarrow Y$. To construct this pushout, one first factors $d_{n-1}$ as $P_n\xrightarrow{\pi_{n-1}}K_{n-1}\xrightarrow{\iota_{n-1}}P_{n-1}$, where $\pi_{n-1}$ is the cokernel of $d_{n}$, and then constructs the pushout of the inflation $\iota_{n-1}$ along the unique morphism $\bar f$ such that $f=\bar f\pi_{n-1}$. We denote the remaining arrow of this pushout by $$\theta_f:P_{n-1}\rightarrow K(f)_{n-1} .$$ The morphism $d_f$ arises as the unique morphism such that $d_f\iota_f = 0$ and $d_f \theta_f = d_{n-2}$. In the case $n = 1$ this construction has to be slightly corrected: one applies the pushout construction so as to obtain $\mu_{K(f)}\not= \mu_P$. The map from $\operatorname{Ker}\operatorname{Hom}_{{\mathcal{C}}}(d_{n}, Y )$ to ${{\mathcal{E}} \! {\it{xt}}}^n_{{\mathcal{C}}}(X, Y)$ sending $f$ to $K(f)$ induces the required isomorphism. The inverse to this isomorphism can be constructed in the following way. For any $n$-extension $(E, \phi, \mu_E)$ of $X$ by $Y$, there exists a morphism of resolutions $\hat f : P\rightarrow E$. Then the map from ${{\mathcal{E}} \! {\it{xt}}}^n_{{\mathcal{C}}}(X, Y )$ to $\operatorname{Ker}\operatorname{Hom}_{{\mathcal{C}}}(d_{n}, Y )$ sending $E$ to $\hat f_n = \pi_E\hat f$ for some morphism of resolutions $\hat f$ induces the required inverse isomorphism, which does not depend on the choice of $\hat f$. Schwede’s and Hermann’s formulas for Retakh’s isomorphism {#sec:SH} ========================================================= An important feature of homotopy groups of extensions is the isomorphism $\operatorname{Ext}_{\mathcal{C}}^{n-i}(X,Y)\cong \pi_i{{\mathcal{E}} \! {\it{xt}}}_{\mathcal{C}}^n(X,Y)$ proved in [@Retakh] for an abelian category and in [@NeRe] for a Waldhausen category.
In [@Hermann2] the isomorphism $\operatorname{Ext}_{\mathcal{C}}^{n-1}(X,Y)\cong \pi_1{{\mathcal{E}} \! {\it{xt}}}_{\mathcal{C}}^n(X,Y)$ was established explicitly when $\mathcal{C}$ is a factorizing exact category. Let us recall the definition of a factorizing exact category given in [@Hermann2 §2.1]. Suppose that $(E, \phi, \mu_E),(F, \psi, \mu_F )\in {{\mathcal{E}} \! {\it{xt}}}_{\mathcal{C}}^n(X,Y)$ and $\beta:E\rightarrow F$ is a morphism of $n$-extensions. Let $\hat F$ be the $n$-extension of $X$ by $Y$ defined by the sequence $$\begin{gathered} Y\xrightarrow{\tiny\begin{pmatrix}\iota_F\\0\end{pmatrix}}F_{n-1}\oplus E_{n-2}\xrightarrow{\tiny\begin{pmatrix}\psi_{n-2}&0\\0&1\\0&0\end{pmatrix}}F_{n-2}\oplus E_{n-2}\oplus E_{n-3}\\ \xrightarrow{\tiny\begin{pmatrix}\psi_{n-3}&0&0\\0&0&1\\0&0&0\end{pmatrix}}F_{n-3}\oplus E_{n-3}\oplus E_{n-4}\rightarrow\cdots \rightarrow F_1\oplus E_1\oplus E_0\xrightarrow{\tiny\begin{pmatrix}\psi_{0}&0&0\\0&0&1\end{pmatrix}}F_0\oplus E_0\end{gathered}$$ and the morphism $\mu_{\hat F}=\begin{pmatrix} \mu_F&0 \end{pmatrix}: F_0 \oplus E_0\rightarrow X$. Let us define $\hat\beta\in \operatorname{Hom}_{{{\mathcal{E}} \! {\it{xt}}}_{\mathcal{C}}^n(X,Y)}(E,\hat F)$ degreewise. We set $$\hat\beta_0=\begin{pmatrix}\beta_0\\ 1_{E_0}\end{pmatrix},\, \ \hat\beta_i=\begin{pmatrix}\beta_i\\ 1_{E_i}\\\phi_{i-1}\end{pmatrix}\,(1\le i\le n-2),\, \ \hat\beta_{n-1}=\begin{pmatrix}\beta_{n-1}\\ \phi_{n-2}\end{pmatrix}.$$ Following [@Hermann2], the exact category $\mathcal{C}$ is called [*factorizing*]{} if all components of $\hat\beta$ are inflations for any $n\ge 1$, $X,Y\in\mathcal{C}$, any $E, F \in {{\mathcal{E}} \! {\it{xt}}}^n_{{\mathcal{C}}}(X, Y )$, and any $\beta\in\operatorname{Hom}_{{{\mathcal{E}} \! {\it{xt}}}^n_{{\mathcal{C}}}(X,Y)}(E, F)$. The next lemma shows that the term “factorizing” is superfluous and allows us to forget it forever. Any exact category is factorizing.
It is easy to see that $\hat\beta_0$ is a split monomorphism with cokernel $F_0$ and $\hat\beta_i$ ($1\le i\le n-2$) is a split monomorphism with cokernel $F_i\oplus E_{i-1}$. Thus, it remains to prove that $\hat\beta_{n-1}$ is an inflation. To do this, let us first present $\phi_{n-2}$ and $\psi_{n-2}$ in the form $\phi_{n-2}=\iota_\phi\pi_\phi$ and $\psi_{n-2}=\iota_\psi\pi_\psi$, where $Y\xrightarrow{\iota_E}E_{n-1}\xrightarrow{\pi_\phi}K_\phi$ and $Y\xrightarrow{\iota_F}F_{n-1}\xrightarrow{\pi_\psi}K_\psi$ are conflations. We have $\hat\beta_{n-1}=\begin{pmatrix}\beta_{n-1}\\ \phi_{n-2}\end{pmatrix}=\begin{pmatrix}1_{F_{n-1}}&0\\0& \iota_\phi\end{pmatrix}\begin{pmatrix}\beta_{n-1}\\ \pi_\phi\end{pmatrix}$. Note that $\begin{pmatrix}1_{F_{n-1}}&0\\0& \iota_\phi\end{pmatrix}$ is an inflation, for example, as a pushout of the inflation $\iota_\phi$ along the direct inclusion of $K_\phi$ into $F_{n-1}\oplus K_\phi$, and hence it remains to prove that $\begin{pmatrix}\beta_{n-1}\\ \pi_\phi\end{pmatrix}$ is an inflation. Note that by the cokernel universal property there exists $\gamma:K_\phi\rightarrow K_\psi$ such that $\gamma \pi_\phi=\pi_\psi\beta_{n-1}$. Then $(1_Y,\beta_{n-1},\gamma)$ is a morphism of short exact sequences, and hence the square $$\begin{array}{ccc} E_{n-1} & \xrightarrow{\ \pi_\phi\ } & K_\phi \\ {\scriptstyle \beta_{n-1}}\big\downarrow & & \big\downarrow{\scriptstyle \gamma} \\ F_{n-1} & \xrightarrow{\ \pi_\psi\ } & K_\psi \end{array}$$ is a pullback of the deflation $\pi_\psi$. Now it follows from [@exKeller] that $\begin{pmatrix}\beta_{n-1}\\ \pi_\phi\end{pmatrix}$ is an inflation and we are done. \[monoRek\] For any exact category $\mathcal{C}$ there exists an isomorphism $\gamma:\operatorname{Ext}_{\mathcal{C}}^{n-1}(X,Y)\cong \pi_1{{\mathcal{E}} \! {\it{xt}}}_{\mathcal{C}}^n(X,Y)$, explicitly constructed in [@Hermann2].
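The split-monomorphism claims at the start of the proof above can be made completely explicit: each stated component admits an evident retraction. The computation below is ours, in the notation of the proof:

```latex
\begin{aligned}
\begin{pmatrix}0&1_{E_0}\end{pmatrix}\hat\beta_0
  &= \begin{pmatrix}0&1_{E_0}\end{pmatrix}
     \begin{pmatrix}\beta_0\\ 1_{E_0}\end{pmatrix} = 1_{E_0},\\
\begin{pmatrix}0&1_{E_i}&0\end{pmatrix}\hat\beta_i
  &= \begin{pmatrix}0&1_{E_i}&0\end{pmatrix}
     \begin{pmatrix}\beta_i\\ 1_{E_i}\\ \phi_{i-1}\end{pmatrix} = 1_{E_i}
  \qquad (1\le i\le n-2),
\end{aligned}
% so \hat\beta_0 and \hat\beta_i are split monomorphisms with cokernels
% F_0 and F_i \oplus E_{i-1} respectively; split monomorphisms are
% inflations in any exact category, since split exact sequences are
% conflations.
```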
The isomorphism of Corollary \[monoRek\] was used by Hermann [@Hermann2] to define the Gerstenhaber bracket on the extension algebra of the unit of an exact monoidal category. Such an isomorphism was first constructed explicitly by Schwede [@Schwede] for any category of modules. Hermann showed that for a module category his construction coincides up to a sign with that of Schwede, and hence the bracket on Hochschild cohomology constructed by Hermann coincides with the usual Gerstenhaber bracket. The construction of the required isomorphism was done in [@Schwede] using projective resolutions and for this reason is more appropriate for us. Now we will show that if $X$ has a projective resolution, then Schwede’s isomorphism coincides up to a sign with Hermann’s isomorphism, generalizing the results of [@Hermann2] to monoidal categories with enough projectives. Let us first adapt Schwede’s construction to the setting of an arbitrary exact category to construct the isomorphism $\mu : \operatorname{Ext}^{n-1}_{{\mathcal{C}}}(X, Y )\rightarrow \pi_1{{\mathcal{E}} \! {\it{xt}}}^n_{{\mathcal{C}}}(X, Y )$ in the case where $X$ has a projective resolution $(P, d, \mu_P )$. Let us fix some $n$-cocycle $f : P\rightarrow Y$ and define $K(f)\in {{\mathcal{E}} \! {\it{xt}}}^n_{{\mathcal{C}}}(X, Y )$ as in \eqref{Kf}. Note that any element of $\operatorname{Ext}^n_{{\mathcal{C}}}(X, Y )$ can be represented by $K(f)$ for some $n$-cocycle $f$. Now let $g : P\rightarrow Y$ be an $(n - 1)$-cocycle. The pushout universal property ensures the existence of a unique morphism $h : K(f)_{n-1}\rightarrow K(f)_{n-1}$ such that $h\theta_f = \theta_f - \iota_fg$ and $h\iota_f = \iota_f$. This gives the morphism of $n$-extensions $$\mu_f (g) : K(f)\rightarrow K(f)$$ that is the identity in all degrees except $(n - 1)$, where it equals $h$. The morphism $\mu_f (g)$ determines an element of $\pi_1{{\mathcal{E}} \! {\it{xt}}}^n_{{\mathcal{C}}}(X, Y )$. The homotopy class of $\mu_f (g)$ is determined by the cohomology class of $g$.
This follows from [@Hermann2 Lemma 3.2.4] because, for a degree $(n-2)$ morphism $p : P\rightarrow Y$, the degree $-1$ morphism from $K(f)$ to $K(f)$ that equals zero in all degrees except $(n-2)$, where it equals $\iota_f p$, is a homotopy between $\mu_f (g)$ and $\mu_f (g+pd)$. Moreover, it is not difficult to see that $\mu_f(g_1 + g_2) = \mu_f (g_2) \circ \mu_f (g_1)$, and hence the image of $\mu_f (-)$ is an abelian subgroup of $\pi_1\big({{\mathcal{E}} \! {\it{xt}}}^n_{{\mathcal{C}}}(X, Y ),K(f)\big)$. A little later we will show that $\mu_f(-)$ is an isomorphism, which will ensure that $\pi_1{{\mathcal{E}} \! {\it{xt}}}^n_{{\mathcal{C}}}(X, Y )$ does not depend (up to unique isomorphism) on the choice of a point in a connected component. Moreover, our arguments will imply that this unique isomorphism sends $\mu_{f_1}(g)$ to $\mu_{f_2}(g)$ if $f_1$ and $f_2$ are cohomologous. For now we choose for each point $E \in {{\mathcal{E}} \! {\it{xt}}}^n_{{\mathcal{C}}}(X, Y )$ a morphism of resolutions $\hat f:P\rightarrow E$ and define $$\mu_E(g):E\rightarrow E$$ to be the conjugate of $\mu_f(g)$, where $f = \pi_E\hat f$, by the path corresponding to the morphism from $K(f)$ to $E$ induced by $\hat f$ (ignoring for the moment the dependence of $\mu_E(g)$ on the choice of $\hat f$). Suppose now that we have two morphisms $\alpha,\beta:K(f)\rightarrow E$ for some $(E,\phi,\mu_E)\in{{\mathcal{E}} \! {\it{xt}}}_{\mathcal{C}}^{n}(X,Y)$. We will show how one can recover an $(n-1)$-cocycle $g$ such that $\mu_f(g)$ is homotopic to the loop $\alpha^{-1}\beta$. Note that in fact any loop with the base point $K(f)$ can be put into this form due to the results of [@Hermann2; @Schwede], and so we will be able to recover a preimage of any loop. Our construction imitates, of course, the construction of Schwede, but we give it for convenience, because our setting is more general. 
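For orientation, here is a sketch of the pushout construction of $K(f)$; this is our reading of the notation $\iota_f$ and $\theta_f$ used in this section, following the standard construction of [@Schwede]:

```latex
% Sketch (assumed standard construction): K(f) is obtained from the
% truncated resolution of X by pushing out along the n-cocycle f.
% The square
%
%    P_n ---d_{n-1}---> P_{n-1}
%     |                    |
%     f                 \theta_f
%     v                    v
%     Y ----\iota_f---> K(f)_{n-1}
%
% is a pushout, so \theta_f d_{n-1} = \iota_f f, and K(f) is the n-extension
K(f)\colon\quad
0\rightarrow Y\xrightarrow{\ \iota_f\ }K(f)_{n-1}
\xrightarrow{\ d_f\ }P_{n-2}\xrightarrow{\ d_{n-3}\ }\cdots
\xrightarrow{\ d_1\ }P_0\xrightarrow{\ \mu_P\ }X .
% The morphism h above exists by the universal property of this pushout
% because (\theta_f-\iota_f g)d_{n-1}=\iota_f f-\iota_f(g d_{n-1})=\iota_f f,
% the relation g d_{n-1}=0 being exactly the cocycle condition on g.
```

In particular the uniqueness of $h$ is also immediate from the universal property, since $h$ is prescribed on both structure maps of the pushout.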
Note that $K(f)$ comes with a canonical morphism of resolutions $\Phi_f:P\rightarrow K(f)$ defined by the equalities $(\Phi_f)_i=1_{P_i}$ for $0\le i\le n-2$, $(\Phi_f)_{n-1}=\theta_f$ and $(\Phi_f)_{n}=f$. Then $(\alpha-\beta)\Phi_f$ is a chain map that is annihilated by the quasi-isomorphism $\mu_E$. Thus, this map is null-homotopic, i.e. there is a degree $-1$ morphism $$s:P\rightarrow E$$ such that $(\alpha-\beta)\Phi_f=\phi s+sd$. Note that $\pi_Esd=0$, and hence $s_{n-1}=\pi_Es:P\rightarrow Y$ is an $(n-1)$-cocycle. \[preim\] The loops $\mu_f(s_{n-1})$ and $\alpha^{-1}\beta$ are homotopic. Let us first replace $\beta$ by $\beta'$, where $\beta'_i=\beta_i+\phi_i s_i+s_{i-1}d_{i-1}$ for $0\le i\le n-2$, $\beta'_{n-1}=\beta_{n-1}+s_{n-2}d_f$ and $\beta'_n=\beta_n$. Then the paths corresponding to $\beta'$ and $\beta$ are homotopic by [@Hermann2 Lemma 3.2.4]. It remains to note that $\alpha\mu_f(s_{n-1})=\beta'$. Let us now recall Hermann’s construction of the isomorphism $$\gamma:\operatorname{Ext}_{\mathcal{C}}^{n-1}(X,Y)\rightarrow \pi_1{{\mathcal{E}} \! {\it{xt}}}_{\mathcal{C}}^n(X,Y) .$$ For $(F,\psi,\mu_F)\in{{\mathcal{E}} \! {\it{xt}}}_{\mathcal{C}}^{n-1}(X,Y)$, let us first construct a loop with a base point in the $n$-extension $\sigma_n(X,Y)$ defined by . We denote by $\bar F$ the $n$-extension $$0\rightarrow Y \xrightarrow{\iota_F} F_{n-2}\xrightarrow{\psi_{n-3}}\cdots\xrightarrow{\psi_0} F_0\xrightarrow{\scriptsize\begin{pmatrix}\mu_F\\-\mu_F\end{pmatrix}} X^2$$ with $\mu_{\bar F}=\begin{pmatrix}1_X&1_X\end{pmatrix}$. There are morphisms of $n$-extensions $\alpha^F,\beta^F:\sigma_n(X,Y)\rightarrow \bar F$, both of which are equal to $\iota_F$ in degree $(n-1)$ and zero in degrees from $1$ to $(n-2)$. In degree zero, $\alpha^F$ equals $\begin{pmatrix}1_X\\0\end{pmatrix}$ while $\beta^F$ equals $\begin{pmatrix}0\\1_X\end{pmatrix}$. These morphisms determine the loop $(\alpha^F)^{-1}\beta^F$, which we denote by $\gamma_{\sigma_n(X,Y)}(F)$. 
Now, for an arbitrary $E\in{{\mathcal{E}} \! {\it{xt}}}_{\mathcal{C}}^n(X,Y)$, the loop $$\gamma_{E}(F)\in \pi_1{{\mathcal{E}} \! {\it{xt}}}_{{{\mathcal{C}}}}^{n}(X,Y)$$ is obtained from the loop $\gamma_{\sigma_n(X,Y)}(F)$ by applying the functor $(-)+E$, where the plus sign denotes the Baer sum of extensions. Since $\sigma_n(X,Y)+E=E$, we get a loop with the base point $E$. See [@Hermann2] for details. Hermann has shown that this construction indeed determines an isomorphism $\gamma:\operatorname{Ext}_{\mathcal{C}}^{n-1}(X,Y)\rightarrow \pi_1{{\mathcal{E}} \! {\it{xt}}}_{\mathcal{C}}^n(X,Y)$. We will show that, up to a sign, the constructions of Schwede and of Hermann give the same result. This, in particular, will ensure that Schwede’s construction gives a well-defined isomorphism between $\operatorname{Ext}_{\mathcal{C}}^{n-1}(X,Y)$ and $\pi_1{{\mathcal{E}} \! {\it{xt}}}_{\mathcal{C}}^n(X,Y)$ in our context and will allow us to use this isomorphism for studying the bracket as introduced in [@Hermann2]. Our aim is to prove that $\mu_E(F)\sim \gamma_E\big((-1)^{n+1}F\big)$ for any $F\in{{\mathcal{E}} \! {\it{xt}}}_{\mathcal{C}}^{n-1}(X,Y)$ and $E\in{{\mathcal{E}} \! {\it{xt}}}_{\mathcal{C}}^n(X,Y)$. Let us pick a morphism of resolutions $\hat f:P\rightarrow E$ and denote by $\bar f:K(f)\rightarrow E$ the morphism induced by it, where $f=\pi_E\hat f$. Note that $$(u+1_E)\bar f=(u+\bar f)=(1_{\bar F}+\bar f)(u+1_{K(f)})$$ for each $u\in\{\alpha^F,\beta^F\}$, and hence, by Hermann’s definition, we have $$\begin{gathered} \gamma_E(F)=(\alpha^F+1_E)^{-1}(\beta^F+1_E)\sim\bar f(\alpha^F+1_{K(f)})^{-1}(1_{\bar F}+\bar f)^{-1}(1_{\bar F}+\bar f)(\beta^F+1_{K(f)})\bar f^{-1}\\ \sim \bar f(\alpha^F+1_{K(f)})^{-1}(\beta^F+1_{K(f)})\bar f^{-1}=\bar f \gamma_{K(f)}\big(F\big)\bar f^{-1}.\end{gathered}$$ Thus, the required equality follows from the definition of $\mu$, our arguments above, and the next lemma. 
\[SchHer\] $\mu_f\big((-1)^{n+1}g\big)\sim \gamma_{K(f)}\big(K(g)\big)$ for all $n$-cocycles $f$ and $(n-1)$-cocycles $g$. We first describe the loop $\gamma_{K(f)}\big(K(g)\big)=(\alpha^{K(g)}+1_{K(f)})^{-1}(\beta^{K(g)}+1_{K(f)})$. To do this we need to compute the extension $\overline{K(g)}+K(f)$ and morphisms $$\alpha^{K(g)}+1_{K(f)}, \ \ \beta^{K(g)}+1_{K(f)}:K(f)\rightarrow \overline{K(g)}+K(f).$$ These can be obtained via a pullback-pushout construction from the morphisms of long exact sequences $$\tiny \begin{xy}*!C\xybox{ \xymatrix{ Y\oplus Y \ar[r]^{\hspace{-0.5cm}\begin{pmatrix}1_Y&0\\0&\iota_f\end{pmatrix}} \ar[d]^{=} & Y\oplus K(f)_{n-1} \ar[r]^{\begin{pmatrix}0&d_f\end{pmatrix}} \ar[d]^{\begin{pmatrix}\iota_g&0\\0&1_{K(f)_{n-1}}\end{pmatrix}} & P_{n-2}\ar[r]^{d_{n-3}} \ar[d]^{\begin{pmatrix}0\\1_{P_{n-2}}\end{pmatrix}} &\cdots \ar[r]^{d_1} &P_1 \ar[d]^{\begin{pmatrix}0\\1_{P_{1}}\end{pmatrix}} \ar[r]^{\begin{pmatrix}0\\d_0\end{pmatrix}} & X\oplus P_0 \ar[r]^{\begin{pmatrix}1_X&0\\0&\mu_P\end{pmatrix}} \ar[d]^{\delta} & X\oplus X\ar[d]^{=} \\ Y\oplus Y \ar[r]_{\hspace{-0.8cm}\vspace{.1cm}\begin{pmatrix}\iota_g&0\\0&\iota_f\end{pmatrix}} & K(g)_{n-2}\oplus K(f)_{n-1} \ar[r]_{\hspace{0.1cm}\vspace{.1cm}\begin{pmatrix}d_g&0\\0&d_f\end{pmatrix}} &P_{n-3}\oplus P_{n-2} \ar[r]_{\hspace{0.1cm}\vspace{.1cm}\begin{pmatrix}d_{n-4}&0\\0&d_{n-3}\end{pmatrix}} &\cdots \ar[r]_{\vspace{.1cm}\begin{pmatrix}d_{0}&0\\0&d_{1}\end{pmatrix}} &P_0\oplus P_1 \ar[r]_{\vspace{.1cm}\begin{pmatrix}\mu_P&0\\-\mu_P&0\\0&d_0\end{pmatrix}} &X\oplus X\oplus P_0 \ar[r]_{\hspace{0.1cm}\vspace{.1cm}\begin{pmatrix}1_X&1_X&0\\0&0&\mu_P\end{pmatrix}} & X\oplus X }} \end{xy}$$ where $\delta=\begin{pmatrix}1_X&0\\0&0\\0&1_{P_0}\end{pmatrix}$ for $\alpha^{K(g)}+1_{K(f)}$ and $\delta=\begin{pmatrix}0&0\\1_X&0\\0&1_{P_0}\end{pmatrix}$ for $\beta^{K(g)}+1_{K(f)}$. 
Let $K(g)_{n-2}\oplus K(f)_{n-1}\xrightarrow{\pi} L$ be the deflation completing the inflation $\begin{pmatrix}\iota_g\\-\iota_f\end{pmatrix}$ to a conflation. It is easy to see that the diagrams $$\tiny \begin{xy}*!C\xybox{ \xymatrix{ Y\oplus Y\ar[d]^{\begin{pmatrix}1_Y&1_Y\end{pmatrix}}\ar[rr]^{\hspace{-0.7cm}\begin{pmatrix}\iota_g&0\\0&\iota_f\end{pmatrix}} && K(g)_{n-2}\oplus K(f)_{n-1}\ar[d]^{\pi}\\ Y\ar[rr]_{\pi\begin{pmatrix}\iota_g\\0\end{pmatrix}} && L }} \end{xy} \hspace{0.6cm} \mbox{ \normalsize{and} } \hspace{0.6cm} \begin{xy}*!C\xybox{ \xymatrix{ X\oplus P_0\ar[rr]^{\begin{pmatrix}0&\mu_P\end{pmatrix}}\ar[dd]^{\begin{pmatrix}1_X&0\\-1_X&\mu_P\\0&1_{P_0}\end{pmatrix}} && X\ar[dd]^{\begin{pmatrix}1_X\\1_X\end{pmatrix}} \\ &\\ X\oplus X\oplus P_0\ar[rr]_{\hspace{0.2cm}\vspace{.1cm}\begin{pmatrix}1_X&1_X&0\\0&0&\mu_P\end{pmatrix}} && X\oplus X }} \end{xy}$$ are a pushout and a pullback respectively. Then $\overline{K(g)}+K(f)$ is the $n$-extension $$\tiny \begin{xy}*!C\xybox{ \xymatrix{ Y\ar[r]^{\pi\begin{pmatrix}\iota_g\\0\end{pmatrix}} & L\ar[r]^{\hspace{-0.5cm}\overline{\begin{pmatrix}d_g&0\\0&d_f\end{pmatrix}}} &P_{n-3}\oplus P_{n-2}\ar[r]&\cdots\ar[r]&P_0\oplus P_1\ar[r]^{\begin{pmatrix}\mu_P&0\\0&d_0\end{pmatrix}}&X\oplus P_0\ar[r]^{\hspace{0.1cm}\begin{pmatrix}0&\mu_P\end{pmatrix}} & X . }} \end{xy}$$ Moreover, the morphism $\Phi=\big((\alpha^{K(g)}+1_{K(f)})-(\beta^{K(g)}+1_{K(f)})\big)\Phi_f:P\rightarrow \overline{K(g)}+K(f)$ is zero in all degrees except degree zero where it equals $\begin{pmatrix}\mu_P\\0\end{pmatrix}$. Let us define a morphism $s:P\rightarrow \overline{K(g)}+K(f)$ of degree $-1$ by the equalities $s_i=(-1)^i\begin{pmatrix}1_{P_i}\\0\end{pmatrix}$ for $0\le i\le n-3$, $s_{n-2}=(-1)^n\pi\begin{pmatrix}\theta_g\\0\end{pmatrix}$ and $s_{n-1}=(-1)^{n+1}g$. It remains to note that $s$ is a homotopy for $\Phi$ and to apply Lemma \[preim\]. 
The proof of Lemma \[SchHer\] can be obtained via an adaptation of the proof of [@Hermann2 Lemma 5.3.3], but we include our proof for the convenience of the reader and because it seems to us to be more self-contained. Two isomorphisms between $\operatorname{Ext}_{\mathcal{C}}^{n-1}(X,Y)$ and $\pi_1{{\mathcal{E}} \! {\it{xt}}}_{\mathcal{C}}^n(X,Y)$ were defined in [@Hermann2]. The second one, $\gamma'$, satisfies the equality $\gamma'_E(F)=\gamma_E\big((-1)^{n+1}F\big)$ and so allows one to remove the sign from the isomorphism stated in the lemma. In fact, the proof of [@Hermann2 Lemma 5.3.3] starts with passing from $\gamma$ to $\gamma'$, and the sign appears exactly at this moment. We do not know why $\gamma$ is used more often in [@Hermann2], but actually $\gamma$ works better with injective resolutions while $\gamma'$ is more appropriate for projective resolutions.

The Gerstenhaber bracket on the extension algebra of the unit {#sec:Gb}
=============================================================

In this section we introduce our definition of the bracket on the extension algebra of the unit of an exact monoidal category satisfying a natural condition. We then prove that, together with the Yoneda product, it gives a Gerstenhaber algebra structure. Our construction will be based on the $A_{\infty}$-coalgebra techniques of [@NVW]. This will allow us to obtain automatically all the desired properties, while formally the conditions required for the constructions of [@NVW] are redundant. Alternatively, one can avoid $A_{\infty}$-coalgebras by directly using the techniques of [@Volkov] (see also [@W Section 6.3]) to define the bracket and then prove its properties by direct calculations using some weaker additional assumptions. In the next section we will show that under our assumptions the bracket defined in this paper coincides with the bracket introduced in [@Hermann2]. 
This allows us to prove in our setting some properties of the bracket that were left as open questions in [@Hermann2]. We first recall the definition and some basic facts about monoidal categories and discuss some relations between exact and monoidal structures on a category that allow construction of the Gerstenhaber bracket on the extension algebra of the unit. \[defn:moncat\] Suppose that the additive category $\mathcal{C}$ is equipped with a functor $\otimes:\mathcal{C}\times \mathcal{C}\rightarrow \mathcal{C}$, a distinguished object ${\mathbf{1}}$ and natural isomorphisms of functors $$-\otimes (=\otimes \equiv)\xrightarrow{\alpha} (-\otimes =)\otimes \equiv;\hspace{1.5cm}{\mathbf{1}}\otimes -\xrightarrow{\lambda^l}{{\rm Id}}_{\mathcal{C}};\hspace{1.5cm}-\otimes {\mathbf{1}}\xrightarrow{\lambda^r}{{\rm Id}}_{\mathcal{C}}.$$ The 6-tuple $(\mathcal{C},\otimes,{\mathbf{1}},\alpha,\lambda^l,\lambda^r)$ is called a [*monoidal category*]{} if it satisfies the conditions $$\begin{gathered} 1_X\otimes \lambda^l_Y=(\lambda^r_X\otimes 1_Y)\circ \alpha_{X,{\mathbf{1}},Y}:X\otimes ({\mathbf{1}}\otimes Y)\rightarrow X\otimes Y;\\ (\alpha_{W,X,Y}\otimes 1_Z)\circ\alpha_{W,X\otimes Y,Z}\circ(1_W\otimes \alpha_{X,Y,Z}) =\alpha_{W,X, Y\otimes Z}\circ \alpha_{W\otimes X,Y,Z}:\\W\otimes\big(X\otimes(Y\otimes Z)\big)\rightarrow \big((W\otimes X)\otimes Y\big)\otimes Z\end{gathered}$$ for any $X,Y,Z,W\in\mathcal{C}$ (see [@Hermann2] for the definition illustrated with commutative diagrams). In this case $\otimes$ is called a [*monoidal product*]{} for $\mathcal{C}$ and ${\mathbf{1}}$ is the [*unit*]{} of $\otimes$. Mac Lane’s Coherence Theorem (see [@CWM]) states that any “formal” diagram involving identity morphisms and isomorphisms $\alpha$, $\lambda^l$ and $\lambda^r$ commutes. 
Roughly speaking, this means that if we have a sequence $X_1,\dots, X_n$, where each $X_i$ is either the object ${\mathbf{1}}$ or a formal variable, and a sequence $Y_1,\dots,Y_m$ which is obtained from the first one via exclusion of objects $X_i$ that are equal to ${\mathbf{1}}$, then any two isomorphisms from $\big(X_1\otimes\cdots\otimes(X_{n-1}\otimes X_n)\cdots\big)$ to $\big(\cdots(Y_1\otimes Y_2)\otimes\cdots\otimes Y_m\big)$ formed by formally defined compositions of morphisms of one of the forms $1^{\otimes a}\otimes\alpha_{A,B,C}\otimes 1^{\otimes b}$, $1^{\otimes a}\otimes\lambda_{A}^l\otimes 1^{\otimes b}$ and $1^{\otimes a}\otimes\lambda_{A}^r\otimes 1^{\otimes b}$ are equal. In particular, ${\mathbf{1}}^{{\otimes}r}$ is canonically isomorphic to ${\mathbf{1}}$ for any $r\ge 0$. Similarly to the exact category case, we will usually omit the notation $\otimes,{\mathbf{1}},\alpha,\lambda^l,\lambda^r$ and call $\mathcal{C}$ a monoidal category meaning that there is some fixed monoidal category structure for it. Suppose now that ${{\mathcal{C}}}$ is monoidal and exact at the same time. Let $(E,\phi)$ and $(F,\psi)$ be two complexes over ${{\mathcal{C}}}$. Suppose that either ${{\mathcal{C}}}$ admits arbitrary countable direct sums or $E$ and $F$ are [*bounded below*]{}, i.e. there exists $N\in\mathbb{Z}$ such that $E_i=F_i=0$ for $i<N$. We define their tensor product complex $(E{\otimes}F,\phi{\otimes}\psi)$ in the following way. We set $(E{\otimes}F)_i=\oplus_{j+k=i}E_j{\otimes}F_k$ and $(\phi{\otimes}\psi)_{i-1}|_{E_j{\otimes}F_k}=\phi_{j-1}{\otimes}1_{F_k}+(-1)^{j}(1_{E_j}{\otimes}\psi_{k-1})$. Unfortunately, one cannot guarantee that $(E{\otimes}F,\phi{\otimes}\psi)$ is really a complex, because the definition of a monoidal category does not require bilinearity of the tensor product. 
This problem does not arise in the case where ${{\mathcal{C}}}$ is a [*tensor category*]{}, but in fact here it is enough to add the condition $0{\otimes}X\cong 0$ for any object $X$ of ${{\mathcal{C}}}$, where $0$ is the zero object. If $f:E\rightarrow E'$ is a degree $n$ morphism and $g:F\rightarrow F'$ is a degree $m$ morphism, then we define the degree $(n+m)$ morphism $f{\otimes}g:E{\otimes}F\rightarrow E'{\otimes}F'$ by the equality $(f{\otimes}g)_i|_{E_j{\otimes}F_k}=(-1)^{mj}(f_j{\otimes}g_k)$. If we forget for the moment that the tensor product complex does not have to be a complex, then our definitions turn the category of (bounded below) complexes over ${{\mathcal{C}}}$ into a monoidal category with the unit object ${\mathbf{1}}$. Mac Lane’s Coherence Theorem can be applied in this context, and so we will always identify tensor products with different bracket arrangements, and identify the complexes $E$, ${\mathbf{1}}{\otimes}E$ and $E{\otimes}{\mathbf{1}}$, without special mention. In particular, the notation $E^{{\otimes}r}$ makes sense for $r\ge 0$. Note also that all of our notation is arranged in such a way that the Koszul sign convention can be applied, for example, ${\partial}(f{\otimes}g)={\partial}(f){\otimes}g+(-1)^nf{\otimes}{\partial}(g)$, etc. Suppose now that $(E,\phi,\mu_E)$ is a resolution of $X$ and $(F,\psi,\mu_F)$ is a resolution of $Y$. Then the tensor complex $E{\otimes}F$ is equipped with the morphism $\mu_E{\otimes}\mu_F$, and one can ask whether $(E{\otimes}F,\phi{\otimes}\psi,\mu_E{\otimes}\mu_F)$ is a resolution of $X{\otimes}Y$. Of course, in general, there is no reason that this should be true. The resolution $(P,d,\mu_P)$ of ${\mathbf{1}}$ is called [*$n$-power flat*]{} if $(P^{{\otimes}r},d^{{\otimes}r},\mu_P^{{\otimes}r})$ is a resolution of ${\mathbf{1}}$ for each $1\le r\le n$. If $P$ is $n$-power flat for each $n\ge 2$, then we say that $P$ is [*power flat*]{}. 
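As a quick consistency check on the sign conventions introduced above, one can verify directly that $\phi{\otimes}\psi$ squares to zero on each summand $E_j{\otimes}F_k$; we record the routine computation since these signs are used repeatedly below:

```latex
% Composing (\phi\otimes\psi)_{i-1}|_{E_j\otimes F_k}
%   = \phi_{j-1}\otimes 1 + (-1)^j (1\otimes\psi_{k-1})
% with itself gives, on the summand E_j \otimes F_k,
(\phi{\otimes}\psi)^2|_{E_j\otimes F_k}
 =\underbrace{\phi_{j-2}\phi_{j-1}\otimes 1_{F_k}}_{=0}
 +\bigl((-1)^{j-1}+(-1)^{j}\bigr)\,\phi_{j-1}\otimes\psi_{k-1}
 +\underbrace{1_{E_j}\otimes\psi_{k-2}\psi_{k-1}}_{=0}
 =0 ,
```

the middle term vanishing because the two signs cancel; this cancellation is exactly what the factor $(-1)^{j}$ in the definition of $\phi{\otimes}\psi$ is for.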
The main object of our study is the $\operatorname{Ext}$-algebra $\operatorname{Ext}^{{{\begin{picture}(2.5,2) (1,1)\put(2,3){\circle*{2}}\end{picture}}}}_{{{\mathcal{C}}}} ({\mathbf{1}},{\mathbf{1}})$ of the unit of a category $\mathcal{C}$ that is exact and monoidal at the same time. The assumption that we will need to obtain our results is that ${\mathbf{1}}$ has a projective power flat resolution $P$. Note that in [@Hermann2] the bracket was defined under the condition that, for any $(E,\phi,\mu_E)\in{{\mathcal{E}} \! {\it{xt}}}_{\mathcal{C}}^n({\mathbf{1}},{\mathbf{1}})$ and $(F,\psi,\mu_F)\in{{\mathcal{E}} \! {\it{xt}}}_{\mathcal{C}}^m({\mathbf{1}},{\mathbf{1}})$, $(E{\otimes}F,\phi{\otimes}\psi,\mu_E{\otimes}\mu_F)$ is an element of ${{\mathcal{E}} \! {\it{xt}}}_{\mathcal{C}}^{n+m}({\mathbf{1}},{\mathbf{1}})$. Applying this property to the powers of the $(N+3)$-extension $$P(N)=\left(0\rightarrow{\mathbf{1}}\xrightarrow{1_{{\mathbf{1}}}}{\mathbf{1}}\xrightarrow{0}\operatorname{Ker}(d_{N-1})\hookrightarrow P_N\xrightarrow{d_{N-1}}\cdots\xrightarrow{d_1}P_1\xrightarrow{d_0}P_0\right)$$ with $\mu_{P(N)}=\mu_P$ for big enough $N$, one can see that the property assumed in [@Hermann2] implies power flatness of [*any*]{} resolution of ${\mathbf{1}}$ and our proofs can be applied if ${\mathbf{1}}$ has a projective resolution. Let us now recall some definitions and facts of [@NVW] and adapt them to our context. The nice feature of this approach is that, due to Mac Lane’s Coherence Theorem, the proofs from [@NVW] work without changes and we automatically have a Gerstenhaber algebra structure on $\operatorname{Ext}^{{{\begin{picture}(2.5,2) (1,1)\put(2,3){\circle*{2}}\end{picture}}}}_{{{\mathcal{C}}}} ({\mathbf{1}},{\mathbf{1}})$. This was difficult to do using the approach of [@Hermann2]. 
In the next sections, we will show that our approach and the approach of [@Hermann2], in the cases where both of them can be applied, give the same operation on $\operatorname{Ext}^{{{\begin{picture}(2.5,2) (1,1)\put(2,3){\circle*{2}}\end{picture}}}}_{{{\mathcal{C}}}} ({\mathbf{1}},{\mathbf{1}})$ up to a sign. Since the proofs of the theorems stated in the remaining part of this section do not differ from the proofs given in [@NVW], we leave all of them to the reader. An [*$A_\infty$-coalgebra*]{} over the exact monoidal category ${{\mathcal{C}}}$ is a (bounded below) complex $(C,0)$ with a collection of degree one morphisms $\delta_n:C\rightarrow C^{\otimes n}$, for all $n\ge 1$, such that, for any $N\ge 1$, $$\label{Ainf} 0=\sum\limits_{r+s+t=N}(1_C^{\otimes r}\otimes\delta_s\otimes 1_C^{\otimes t})\delta_{r+t+1}.$$ A degree one map $\mu:C\rightarrow {\mathbf{1}}$ is called a [*weak counit*]{} of the $A_\infty$-coalgebra $C$ if $(\mu\otimes\mu)\delta_2=\mu$ and $\mu^{\otimes n}\delta_n=0$ for all $n>2$. Note that formally the targets of the morphisms $(1_C^{\otimes r}\otimes\delta_s\otimes 1_C^{\otimes t})\delta_{r+t+1}$ can be different, but there exist isomorphisms $\phi_{r,s,t}$ that can be expressed as compositions of isomorphisms of the form $1_C^{\otimes a}\otimes \alpha_{X,Y,Z}\otimes 1_C^{\otimes b}$ such that all morphisms $\phi_{r,s,t}(1_C^{\otimes r}\otimes\delta_s\otimes 1_C^{\otimes t})\delta_{r+t+1}$ make sense and have the same target. Moreover, Mac Lane’s Coherence Theorem guarantees that the isomorphism $\phi_{r,s,t}$ does not depend on the concrete choice of the composed isomorphisms or their order. This is the reason why (\[Ainf\]) makes sense. In fact, this is an example of an identification of tensor products with different bracket arrangements. Suppose that $C$ is an $A_\infty$-coalgebra as in the definition. 
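Before proceeding, it may help to unwind the relation (\[Ainf\]) in low degrees; with the Koszul signs absorbed into the tensor product of morphisms as explained above, one obtains:

```latex
% N = 1: \delta_1 is a differential
\delta_1\delta_1=0;
% N = 2: \delta_2 commutes with the differential, i.e. it is a chain map
(\delta_1\otimes 1_C)\delta_2+(1_C\otimes\delta_1)\delta_2+\delta_2\delta_1=0;
% N = 3: \delta_2 is coassociative up to the homotopy \delta_3
(\delta_2\otimes 1_C)\delta_2+(1_C\otimes\delta_2)\delta_2
 +\delta_3\delta_1
 +(\delta_1\otimes 1_C\otimes 1_C)\delta_3
 +(1_C\otimes\delta_1\otimes 1_C)\delta_3
 +(1_C\otimes 1_C\otimes\delta_1)\delta_3=0.
```

So $\delta_1$ is a differential, $\delta_2$ is a chain map with respect to it, and $\delta_3$ is a coassociativity homotopy for $\delta_2$; the higher $\delta_n$ continue this pattern.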
Let $f=(f_n)_{n\ge 0}$ and $g=(g_n)_{n\ge 0}$ be two sequences of morphisms, where, for each $n\ge 0$, the morphism $f_n:C\to C^{\otimes n}$ has degree $l$ and the morphism $g_n:C\to C^{\otimes n}$ has degree $k$. Then we define $f\circ g=\big((f\circ g)_n\big)_{n\ge 0}$ by the equality $$(f\circ g)_n= \sum_{r+s+t=n} (1_C^{\otimes r}\otimes f_{s}\otimes 1_C^{\otimes t})g_{r+t+1}$$ and set $[f,g]=f\circ g-(-1)^{kl}g\circ f$. Note that $\delta=(\delta_n)_{n\ge 0}$ with $\delta_0=0$ is a sequence of degree one morphisms satisfying the equality $\delta\circ\delta=0$. For $f$ and $g$ as above, we define also $f\smile g=\big((f\smile g)_n\big)_{n\ge 0}$ by the equality $$(f\smile g)_n= (-1)^k\sum\limits_{r+s+t+u+v=n} (1^{{\otimes}r}{\otimes}f_s{\otimes}1^{{\otimes}t}{\otimes}g_u{\otimes}1^{{\otimes}v})\delta_{r+t+v+2} .$$ \[CDinf\] Let $C$ be an $A_\infty$-coalgebra over ${{\mathcal{C}}}$. A [*degree $l$ $A_\infty$-coderivation*]{} $f:C\to C$ is defined as a sequence of degree $l$ maps $f_n:C\to C^{\otimes n}$, for $n\geq 0$, that satisfy the equality $[f,\delta]=0$. The degree $l$ $A_\infty$-coderivation $f$ is called [*inner*]{} if there exists a sequence of degree $(l-1)$ maps $g_n:C\to C^{\otimes n}$, for all $n\geq 0$, such that $f=[g,\delta]$. We will denote by $\operatorname{Coder}^{\infty}_{{\mathcal{C}}}(C)$ and ${{\rm Inn}}^{\infty}_{{\mathcal{C}}}(C)$ the set of $A_\infty$-coderivations and the set of inner $A_\infty$-coderivations on the object $C$ respectively. Now we can reformulate [@NVW Theorem 2.4.7] in our setting. If $(C,\delta)$ is an $A_{\infty}$-coalgebra over the monoidal category ${{\mathcal{C}}}$, then ${{\rm Inn}}^{\infty}_{{\mathcal{C}}}(C)$ is an ideal in $\operatorname{Coder}^{\infty}_{{\mathcal{C}}}(C)$ with respect to the operations $\smile$ and $[ \ , \ ]$. 
Moreover, $\Big(\big(\operatorname{Coder}^{\infty}_{{\mathcal{C}}}(C)/{{\rm Inn}}^{\infty}_{{\mathcal{C}}}(C)\big)[1],\ \smile, \ [ \ , \ ]\Big)$ is a Gerstenhaber algebra (in general, nonunital). Suppose now that ${{\mathcal{C}}}$ is an exact monoidal category and $(P,d,\mu_P)$ is a projective power flat resolution of ${\mathbf{1}}$. Then there exists a morphism of resolutions $\Delta_P:P\rightarrow P{\otimes}P$. \[Dchoice\] In our calculations it will be convenient to adjust the choice of $\Delta_P$. Namely, let us introduce $\alpha_P=\lambda^l_P(\mu_P\otimes 1_P)\Delta_P$ and $\beta_P=\lambda^r_P(1_P\otimes \mu_P)\Delta_P$. Then the map $\Delta_P':P\rightarrow P{\otimes}P$ defined by the equality $\Delta_P'=(\alpha_P\otimes 1_P-\beta_P\otimes 1_P)\Delta_P+\Delta_P\beta_P$ is also a morphism of resolutions that additionally satisfies the equality $\lambda^l_P(\mu_P\otimes 1_P)\Delta_P'=\lambda^r_P(1_P\otimes \mu_P)\Delta_P'$. Now [@NVW Theorem 3.1.1] (see also [@LowVan Proposition 5.3]) can be transferred to our setting. \[Pinfc\] The complex $(P[-1],0)$ admits an $A_{\infty}$-coalgebra structure $\delta$ with $\delta_1=d$ and $\delta_2=\Delta_P$ such that $\mu_P$ is a weak counit for $(P[-1],\delta)$. Note that $\big(\operatorname{Coder}^{\infty}_{{\mathcal{C}}}(P[-1])/{{\rm Inn}}^{\infty}_{{\mathcal{C}}}(P[-1])\big)[1]$ is a graded associative algebra with respect to the product $\smile$ and $\operatorname{Ext}^{{{\begin{picture}(2.5,2) (1,1)\put(2,3){\circle*{2}}\end{picture}}}}_{{{\mathcal{C}}}}({\mathbf{1}},{\mathbf{1}})$ is a graded associative algebra with respect to the Yoneda product. We state our version of [@NVW Theorem 4.1.1]. 
\[GerIso\] There exists an isomorphism of graded algebras $$\big(\operatorname{Coder}^{\infty}_{{\mathcal{C}}}(P[-1])/{{\rm Inn}}^{\infty}_{{\mathcal{C}}}(P[-1])\big)[1]\cong \operatorname{Ext}^{{{\begin{picture}(2.5,2) (1,1)\put(2,3){\circle*{2}}\end{picture}}}}_{{{\mathcal{C}}}}({\mathbf{1}},{\mathbf{1}})$$ that sends the class of the sequence $f=(f_n)_{n\ge 0}$ in $\operatorname{Coder}^{\infty}_{{\mathcal{C}}}(P[-1])/{{\rm Inn}}^{\infty}_{{\mathcal{C}}}(P[-1])$ to the class of $f_0$ in $\operatorname{Ker}\operatorname{Hom}_{{\mathcal{C}}}(d,{\mathbf{1}})/\operatorname{Im}\operatorname{Hom}_{{\mathcal{C}}}(d,{\mathbf{1}})$. As a consequence of the isomorphism of Theorem \[GerIso\], there is a Gerstenhaber algebra structure on $\operatorname{Ext}^{{{\begin{picture}(2.5,2) (1,1)\put(2,3){\circle*{2}}\end{picture}}}}_{{{\mathcal{C}}}}({\mathbf{1}},{\mathbf{1}})$. This structure can be described independently of $A_\infty$-coalgebra techniques via the next definition introduced in [@Volkov] (see also [@W Section 6.3]). \[def:hl\] Let $f:P\rightarrow{\mathbf{1}}$ be an $n$-cocycle. A degree $(n-1)$ morphism $\psi_f :P\rightarrow P$ is a [*homotopy lifting of $(f,\Delta_P)$*]{} if $$\label{eqn:hl1} {\partial}(\psi_f) = (f{\otimes}1_P - 1_P {\otimes}f)\Delta_P$$ and $\mu_P \psi_f\sim (-1)^{n+1}f\psi$ for some degree $-1$ map $\psi:P\rightarrow P$ such that $${\partial}(\psi) = (\mu_P{\otimes}1_P - 1_P {\otimes}\mu_P)\Delta_P.$$ \[Lchoice\] It is easy to see from the definition that $\psi_f$ is determined by $f$ and $\Delta_P$ uniquely up to homotopy. Moreover, if $\psi_f$ is a homotopy lifting of $(f,\Delta_P)$ and $(\mu_P{\otimes}1_P)\Delta_P=(1_P{\otimes}\mu_P)\Delta_P$, then one can choose $\psi=0$ in the definition. In this case $\mu_P \psi_f$ is a coboundary, and hence there exists a degree $(n-1)$ null-homotopic chain map $\Phi:P\rightarrow P$ such that $\mu_P\Phi=\mu_P\psi_f$. 
Then $\psi'_f=\psi_f-\Phi$ is a homotopy lifting of $(f,\Delta_P)$ such that $\mu_P\psi'_f=0$. Note that the analog of [@NVW Theorem 4.4.6] states that the isomorphism of Theorem \[GerIso\] is induced by a surjective map from $\operatorname{Coder}^{\infty}_{{\mathcal{C}}}(P[-1])$ to $\operatorname{Ker}\operatorname{Hom}_{{\mathcal{C}}}(d,{\mathbf{1}})$, and the analog of [@NVW Lemma 4.5.4] states that if $f=(f_n)_{n\ge 0}$ is a degree $m$ $A_\infty$-coderivation, then $(-1)^mf_1$ is a homotopy lifting of $(f_0,\Delta_P)$. This argument ensures that the Gerstenhaber bracket coming from Theorem \[GerIso\] can be calculated in the following way. Let $f,g:P\rightarrow {\mathbf{1}}$ be an $m$-cocycle and a $k$-cocycle respectively. Let $\psi_f$ and $\psi_g$ be homotopy liftings of $(f,\Delta_P)$ and $(g,\Delta_P)$ respectively. We set $$\label{eqn:bracketdefn} [f,g] = f \psi_g - (-1)^{(m-1)(k-1)} g \psi_f .$$ It is clear from our discussion that this operation induces an operation on $\operatorname{Ext}^{{{\begin{picture}(2.5,2) (1,1)\put(2,3){\circle*{2}}\end{picture}}}}_{{{\mathcal{C}}}}({\mathbf{1}},{\mathbf{1}})$ that does not depend on the choice of homotopy liftings. We next state an analog of [@Volkov Theorem 4] that ensures this operation also does not depend on the choice of $P$ and $\Delta_P$; the proof is essentially the same. By Theorem \[GerIso\], $\operatorname{Ext}^{{{\begin{picture}(2.5,2) (1,1)\put(2,3){\circle*{2}}\end{picture}}}}_{{{\mathcal{C}}}}({\mathbf{1}},{\mathbf{1}})$ with the Yoneda product and the bracket $[ \ , \ ]$ is a Gerstenhaber algebra. Of course, the unit of this algebra is represented by $\mu_P:P\rightarrow{\mathbf{1}}$. \[thm:bracket\] Let $f,g:P\rightarrow {\mathbf{1}}$ be cocycles. 
The element of $\operatorname{Ext}^{{{\begin{picture}(2.5,2) (1,1)\put(2,3){\circle*{2}}\end{picture}}}}_{{{\mathcal{C}}}}({\mathbf{1}}, {\mathbf{1}})$ given by $[f,g]$ at the cochain level is independent of the choice of a projective resolution $P$ and of a morphism of resolutions $\Delta_P$. In particular, this means that to calculate the bracket $[f,g]$ one can choose $\Delta_P$, $\psi_f$ and $\psi_g$ in such a way that $(\mu_P{\otimes}1_P)\Delta_P=(1_P{\otimes}\mu_P)\Delta_P$ and $\mu_P\psi_f=\mu_P\psi_g=0$ (see Remarks \[Dchoice\] and \[Lchoice\]). Let us recall that, starting from Theorem \[Pinfc\], the resolution $P$ of ${\mathbf{1}}$ is assumed to be projective and power flat. In fact, we need only $2$-power flatness of $P$ to define the bracket, and we only need $n$-power flatness for some small values of $n$ for the Gerstenhaber algebra structure, but a proof would require a generalization of the $A_{\infty}$-coderivation tools of [@NVW] from bimodules to monoidal categories. This is not the aim of this paper, and we do not see a big difference between assuming power flatness and assuming $n$-power flatness for small $n$. For example, $P$ is $2$-power flat if $P$ is formed by flat (with respect to ${\otimes}$) objects, but in this case $P$ is power flat as well. One can also get a strict Gerstenhaber algebra structure on $\operatorname{Ext}^{{{\begin{picture}(2.5,2) (1,1)\put(2,3){\circle*{2}}\end{picture}}}}_{{{\mathcal{C}}}}({\mathbf{1}},{\mathbf{1}})$ using the operation $\circ$ on the set of $A_\infty$-coderivations (see the definition of a strict Gerstenhaber algebra in [@Hermann2]), but we will not do this since, as mentioned above, this would require $A_\infty$-coderivation tools for studying the strict Gerstenhaber algebra structure, and this structure is not discussed in [@NVW]. 
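As a sanity check on the signs in (\[eqn:bracketdefn\]), graded antisymmetry already holds at the cochain level: for an $m$-cocycle $f$ and a $k$-cocycle $g$ one computes directly

```latex
% Interchanging f and g in the definition and using
% (-1)^{(m-1)(k-1)}=(-1)^{(k-1)(m-1)} gives
[g,f] \;=\; g\psi_f-(-1)^{(k-1)(m-1)}f\psi_g
      \;=\; -(-1)^{(m-1)(k-1)}\bigl(f\psi_g-(-1)^{(m-1)(k-1)}g\psi_f\bigr)
      \;=\; -(-1)^{(m-1)(k-1)}[f,g],
```

which is the symmetry expected of a Lie bracket of degree $-1$ on $\operatorname{Ext}^{{{\begin{picture}(2.5,2) (1,1)\put(2,3){\circle*{2}}\end{picture}}}}_{{{\mathcal{C}}}}({\mathbf{1}},{\mathbf{1}})$.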
Equivalence of different definitions of the bracket {#sec:equiv}
===================================================

We summarize some of Schwede’s and Hermann’s exact sequence interpretations of the Lie structure on $\operatorname{Ext}^{{{\begin{picture}(2.5,2) (1,1)\put(2,3){\circle*{2}}\end{picture}}}}_{{{\mathcal{C}}}}({\mathbf{1}},{\mathbf{1}})$ (see [@Schwede] and [@Hermann2]), and prove that up to signs these give the same operation as our homotopy lifting approach. In particular, our results imply that if ${\mathbf{1}}$ has a projective resolution in ${{\mathcal{C}}}$ and the conditions required in [@Hermann2] are satisfied, then the operations on $\operatorname{Ext}^{{{\begin{picture}(2.5,2) (1,1)\put(2,3){\circle*{2}}\end{picture}}}}_{{{\mathcal{C}}}}({\mathbf{1}},{\mathbf{1}})$ defined in [@Hermann2] give a structure of a Gerstenhaber algebra, answering a question left open there. We will assume in this section that $m,n\ge 1$, because the construction of [@Hermann2] works only in this case; to study the case where $m$ or $n$ is zero, one has to inspect the construction of [@Hermann1], which we are not going to do here. Consider an $m$-extension $(E,\phi,\mu_E)$ and an $n$-extension $(F,\psi,\mu_F)$ of ${\mathbf{1}}$ by ${\mathbf{1}}$. Assume that $(E\otimes F,\phi{\otimes}\psi,\mu_E\otimes\mu_F)$ and $(F\otimes E,\psi{\otimes}\phi,\mu_F\otimes\mu_E)$ are $(m+n)$-extensions of ${\mathbf{1}}$ by ${\mathbf{1}}$. This assumption is necessary for the constructions of Schwede and of Hermann. Suppose also that ${\mathbf{1}}$ has a projective $2$-power flat resolution $(P,d,\mu_P)$. There exist morphisms of resolutions $\hat{f}:P\rightarrow E$ and $\hat{g}:P\rightarrow F$. Let us set $f=\hat{f}_m=\pi_E\hat{f}$ and $g=\hat{g}_n=\pi_F\hat{g}$. Then $f$ is an $m$-cocycle corresponding to $E$ and $g$ is an $n$-cocycle corresponding to $F$. 
So for example $f$ is defined via the following commuting diagram: $$\begin{xy}*!C\xybox{ \xymatrix{ \cdots \ar[r]& P_m\ar[r]^{d_m}\ar[d]^{f} & P_{m-1}\ar[r]^{d_{m-1}}\ar[d]^{\hat{f}_{m-1}} & \cdots\ar[r]^{d_1} & P_0\ar[r]^{\mu_P}\ar[d]^{\hat{f}_0} & {\mathbf{1}}\ar[r]\ar[d]^{=} & 0 \\ 0\ar[r]& {\mathbf{1}}\ar[r]^{\iota_E} & E_{m-1}\ar[r]^{\phi_{m-2}} &\cdots\ar[r]^{\phi_0} & E_0\ar[r]^{\mu_E} & {\mathbf{1}}\ar[r]& 0 }} \end{xy}$$ We fix a morphism of resolutions $\Delta_P: P\rightarrow P{\otimes}P$ and set $f\smile g=(-1)^{mn}(f\otimes g)\Delta_P$. Then each of the $(m+n)$-extensions $E\# F$, $E{\otimes}F$, $(-1)^{mn} F\# E$ and $(-1)^{mn} F{\otimes}E$ is represented by $f\smile g$. Let us recall that if $(L,\chi,\mu_L)$ is an extension, then $-L$ denotes the extension $(L,\chi,-\mu_L)$. The fact that these four extensions are all equivalent is depicted in the following diagram involving four specific morphisms defined below: $$\label{eqn:diamond} \begin{xy}*!C\xybox{ \xymatrix{ & E{\otimes}F\ar[dr]^{\rho_{E,F}}\ar[dl]_{\lambda_{E,F}} & \\ E\# F && (-1)^{mn} F\# E \\ & (-1)^{mn} F{\otimes}E\ar[ul]^{\rho_{F,E}}\ar[ur]_{\lambda_{F,E}} & }} \end{xy}$$ To define the morphisms $\lambda_{E,F}$, $\lambda_{F,E}$, $\rho_{E,F}$ and $\rho_{F,E}$, consider the augmented double complex: $$\begin{xy}*!C\xybox{ \xymatrix{ & E_0\ar[dl]\ar[d] & E_1\ar[l]\ar[d] & \cdots\ar[l] & E_{m-1}\ar[l]\ar[d] & {\mathbf{1}}\ar[l]\ar[d]\\ F_{n-1}\ar[d] & E_0{\otimes}F_{n-1}\ar[l]^{\mu_E\otimes 1_{F_{n-1}}}\ar[d] & E_1{\otimes}F_{n-1}\ar[l]\ar[d] & \cdots\ar[l] & E_{m-1}{\otimes}F_{n-1}\ar[l]\ar[d] & F_{n-1}\ar[l]\ar[d] \\ \vdots\ar[d] & \vdots\ar[d] & \vdots\ar[d] & & \vdots\ar[d] & \vdots\ar[d] \\ F_1\ar[d] & E_0{\otimes}F_1\ar[l]^{\mu_E\otimes 1_{F_{1}}}\ar[d] & E_1{\otimes}F_1\ar[l]\ar[d] & \cdots\ar[l] & E_{m-1}{\otimes}F_1\ar[l]\ar[d] & F_1\ar[l]\ar[d] \\ F_0\ar[d] & E_0{\otimes}F_0\ar[l]^{\mu_E\otimes 1_{F_{0}}}\ar[d]_{1_{E_0}{\otimes}\mu_F} & 
E_1{\otimes}F_0\ar[l]\ar[d]^{1_{E_1}{\otimes}\mu_F} & \cdots\ar[l] & E_{m-1}{\otimes}F_0\ar[l]\ar[d]_{1_{E_{m-1}}{\otimes}\mu_F} & F_0\ar[l]\ar[dl] \\ {\mathbf{1}}& E_0\ar[l] & E_1\ar[l] & \cdots\ar[l] & E_{m-1}\ar[l] & }} \end{xy}$$ All but the leftmost column and bottom row constitute the double complex with totalization $E{\otimes}F$, and the outermost rows and columns are $E\# F$ (left column and top row) and $(-1)^{mn} F\# E$ (right column and bottom row). Now let $L$ be an arbitrary complex. To define a chain map $\rho:L\rightarrow E\#F$ one has to define a degree $n$ chain map $\rho^1:L\rightarrow E$ and a degree zero chain map $\rho^0:L\rightarrow F$ such that $\mu_E\rho^1=\pi_F\rho^0$. Given the pair $(\rho^1,\rho^0)$, the morphism $\rho$ can be recovered by the equalities $\rho_i=\rho^0_i$ for $0\le i\le n-1$ and $\rho_i=(-1)^{n(i-n)}\rho^1_i$ for $n\le i\le m+n$. In these terms, the maps $\lambda_{E,F}$ and $\rho_{E,F}$ are defined by the pairs $(1_E\otimes\pi_F,\mu_E\otimes 1_F)$ and $\big((-1)^{mn}\pi_E{\otimes}1_F,(-1)^{mn}1_E{\otimes}\mu_F\big)$. Then $\lambda_{E,F}$ and $\rho_{E,F}$ are morphisms of extensions. Similarly there are morphisms $\rho_{F,E}: (-1)^{mn} F{\otimes}E\rightarrow E\# F$ and $\lambda_{F,E}: (-1)^{mn} F{\otimes}E\rightarrow (-1)^{mn} F\# E$. The additional signs in these definitions arise from the somewhat unnatural construction of the Yoneda product: during the construction of $E\# F$ we use $E[n]$ instead of $E$, so it would be natural to replace $\phi$ by $(-1)^n\phi$; we have not done this in order to stay consistent with the classical definition of the Yoneda product. Diagram (\[eqn:diamond\]) represents a loop in the extension category ${{\mathcal{E}} \! {\it{xt}}}^{m+n}_{{\mathcal{C}}}({\mathbf{1}},{\mathbf{1}})$.
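To make the sign rule for recovering $\rho$ from the pair $(\rho^1,\rho^0)$ concrete, here is the smallest case written out (our own illustration, computed directly from the formula above).

```latex
% For m = n = 1, a chain map \rho : L \to E\#F lives in degrees 0, 1, 2 and is
% recovered from the pair (\rho^1,\rho^0) with \mu_E\rho^1 = \pi_F\rho^0 via
% \rho_i = \rho^0_i for 0 <= i <= n-1 and \rho_i = (-1)^{n(i-n)}\rho^1_i for
% n <= i <= m+n, which here reads
\rho_0=\rho^0_0,\qquad
\rho_1=(-1)^{1\cdot(1-1)}\rho^1_1=\rho^1_1,\qquad
\rho_2=(-1)^{1\cdot(2-1)}\rho^1_2=-\rho^1_2,
% so the only sign appears in the top degree.
```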
By results of Retakh and Neeman [@NeRe; @Retakh], the homotopy classes of such loops are in one-to-one correspondence with $\operatorname{Ext}^{m+n-1}_{{\mathcal{C}}}({\mathbf{1}},{\mathbf{1}})$. By a result of Schwede [@Schwede], in the case where ${{\mathcal{C}}}$ is the category of $A$-bimodules with $\otimes=\otimes_A$, the loop corresponds up to some sign to the Gerstenhaber bracket $[f,g]$. We refer to Retakh [@Retakh] and Schwede [@Schwede] for details. On the other hand, in an arbitrary monoidal category, Hermann defined the bracket operation using the loop and the isomorphism $\gamma$ from Corollary \[monoRek\] (see the text after Lemma \[preim\] for the description of $\gamma$). Here we adapt Schwede’s proof to show that this loop indeed corresponds up to a sign to the cocycle $[f,g]$ defined by the equality . Note that by Lemma \[SchHer\], we may replace $\gamma$ by $\mu$. \[thm:coincide\] Suppose that $E$ and $F$ are an $m$-extension and an $n$-extension of ${\mathbf{1}}$ by ${\mathbf{1}}$ such that $E{\otimes}F,F{\otimes}E\in {{\mathcal{E}} \! {\it{xt}}}^{m+n}_{{{\mathcal{C}}}}({\mathbf{1}},{\mathbf{1}})$. Assume that $(P,d,\mu_P)$ is a projective $2$-power flat resolution of ${\mathbf{1}}$ and $\Delta_P:P\rightarrow P{\otimes}P$ is a morphism of resolutions. Let $f$ and $g$ be cocycles representing the classes of $E$ and $F$ in $\operatorname{Ext}^{{{\begin{picture}(2.5,2) (1,1)\put(2,3){\circle*{2}}\end{picture}}}}_{{{\mathcal{C}}}}({\mathbf{1}},{\mathbf{1}})$ respectively and let $\psi_f$ and $\psi_g$ be homotopy liftings of $(f,\Delta_P)$ and $(g,\Delta_P)$. Then the cocycle $(-1)^m[g,f]$ defined by represents $\mu^{-1}(\rho_{F,E}^{-1}\lambda_{E,F}\rho_{E,F}^{-1}\lambda_{F,E})\in \operatorname{Ext}^{m+n-1}_{{{\mathcal{C}}}}({\mathbf{1}},{\mathbf{1}})$.
Let us choose $\Delta_P$, $\psi_f$ and $\psi_g$ in such a way that $(\mu_P{\otimes}1_P)\Delta_P=(1_P{\otimes}\mu_P)\Delta_P$ and $\mu_P\psi_f=\mu_P\psi_g=0$ (see Theorem \[thm:bracket\] and the sentence after it). We may assume that we have morphisms of resolutions $\hat{f}:P\rightarrow E$ and $\hat{g}:P\rightarrow F$ such that $f=\pi_E\hat{f}$ and $g=\pi_F\hat{g}$. Then $(\hat{f}\otimes \hat{g})\Delta_P:P\rightarrow E{\otimes}F$ and $(-1)^{mn}(\hat{g}\otimes \hat{f})\Delta_P:P\rightarrow (-1)^{mn}F{\otimes}E$ are also morphisms of resolutions. Our first aim is to construct a chain map $\varepsilon:P\rightarrow (-1)^{mn}F{\otimes}E$ satisfying $(\mu_F{\otimes}\mu_E)\varepsilon=0$ that makes the rightmost quadrilateral in the following diagram commute. $$\label{eqn:projchange} \begin{xy}*!C\xybox{ \xymatrix{ & E{\otimes}F\ar[dl]_{\lambda_{E,F}}\ar[dr]_{\rho_{E,F}} & & \\ E\# F && (-1)^{mn}F\# E & P\ar[ull]_{(\hat{f}\otimes \hat{g})\Delta_P} \ar[dll]^{\hspace{0.5cm}(-1)^{mn} (\hat{g}\otimes \hat{f})\Delta_P + \varepsilon} \\ & (-1)^{mn} F{\otimes}E \ar[ul]^{\rho_{F,E}}\ar[ur]^{\lambda_{F,E}} && }} \end{xy}$$ Note that the universal property of the pushout implies the existence of unique morphisms $\bar\alpha:K(f\smile g)_{m+n-1}\rightarrow (E{\otimes}F)_{m+n-1}$ and $\bar\beta:K(f\smile g)_{m+n-1}\rightarrow (F{\otimes}E)_{m+n-1}$ such that $$\begin{gathered} \bar\alpha\theta_{f\smile g}=\big((\hat{f}\otimes \hat{g})\Delta_P\big)_{m+n-1}, \bar\alpha\iota_{f\smile g}=\iota_{E{\otimes}F}, \\ \bar\beta\theta_{f\smile g}=\big((-1)^{mn}(\hat{g}\otimes \hat{f})\Delta_P\big)_{m+n-1}+\varepsilon_{m+n-1}, \bar\beta\iota_{f\smile g}=\iota_{F{\otimes}E}.\end{gathered}$$ Hence, there are unique morphisms $\alpha:K(f\smile g)\rightarrow E{\otimes}F$ and $\beta:K(f\smile g)\rightarrow (-1)^{mn}F{\otimes}E$ of $(m+n)$-extensions that satisfy the equalities $\alpha\Phi_{f\smile g}=(\hat{f}\otimes \hat{g})\Delta_P$ and $\beta\Phi_{f\smile g}=(-1)^{mn}(\hat{g}\otimes
\hat{f})\Delta_P+\varepsilon$, where $\Phi_{f\smile g}$ is the chain map defined just before Lemma \[preim\]. Another application of the pushout universal property implies that $\rho_{E,F}\alpha=\lambda_{F,E}\beta$, and hence the loop $\rho_{F,E}^{-1}\lambda_{E,F}\rho_{E,F}^{-1}\lambda_{F,E}$ is homotopic to the loop $\beta^{-1}\rho_{F,E}^{-1}\lambda_{E,F}\alpha$ up to conjugation. We will show that there is a homotopy between $\lambda_{E,F}(\hat{f}\otimes \hat{g})\Delta_P$ and $\rho_{F,E}\big((-1)^{mn}(\hat{g}\otimes \hat{f})\Delta_P+\varepsilon\big)$, and obtain $\mu^{-1}(\rho_{F,E}^{-1}\lambda_{E,F}\rho_{E,F}^{-1}\lambda_{F,E})$ using Lemma \[preim\]. We want to find a chain map $\varepsilon$ such that the morphism $\lambda_{F,E}\varepsilon$ defined by the pair of morphisms $\big((1_F{\otimes}\pi_E)\varepsilon, (\mu_F{\otimes}1_E)\varepsilon\big)$ is equal to $\Psi=\rho_{E,F}(\hat{f}\otimes \hat{g})\Delta_P-(-1)^{mn}\lambda_{F,E}(\hat{g}\otimes \hat{f})\Delta_P$. The morphism $\Psi$ is defined by the pair of morphisms $(\Psi^1,\Psi^0)$, where $$\begin{gathered} \Psi^0=(-1)^{mn}\Big((1_E{\otimes}\mu_F)(\hat{f}\otimes \hat{g})\Delta_P-(\mu_F{\otimes}1_E)(\hat{g}\otimes \hat{f})\Delta_P \Big)\\ =(-1)^{mn}(\hat{f}{\otimes}\mu_P-\mu_P{\otimes}\hat{f})\Delta_P=(-1)^{mn}\hat{f}(1_P{\otimes}\mu_P-\mu_P{\otimes}1_P)\Delta_P=0\end{gathered}$$ and $$\begin{gathered} \Psi^1=(-1)^{mn}\Big((\pi_E{\otimes}1_F)(\hat{f}\otimes \hat{g})\Delta_P-(1_F{\otimes}\pi_E)(\hat{g}\otimes \hat{f})\Delta_P \Big)\\ =(-1)^{mn}(f\otimes \hat{g}-\hat{g}\otimes f)\Delta_P=(-1)^{mn}\hat{g}(f\otimes 1_P-1_P\otimes f)\Delta_P.\end{gathered}$$ Let us set $$\varepsilon=(-1)^{mn}\big((\hat{g}{\otimes}\kappa_E)(f\otimes 1_P-1_P\otimes f)\Delta_P+(-1)^m(\hat{g}{\otimes}\iota_E\kappa_E)\psi_f\big).$$ Note that $\hat{g}{\otimes}\kappa_E$ means here $(\hat{g}{\otimes}\kappa_E)(\lambda^r)^{-1}$ while $f\otimes 1_P$ and $1_P\otimes f$ as usual mean $\lambda^r(f\otimes 1_P)$ and $\lambda^l(1_P\otimes f)$
respectively, where $\lambda^r$, $\lambda^l$ are the natural isomorphisms of Definition \[defn:moncat\]. Now we aim for a homotopy between $\lambda_{E,F}(\hat{f}\otimes \hat{g})\Delta_P$ and $\rho_{F,E}\big((-1)^{mn} (\hat{g}\otimes \hat{f})\Delta_P+\varepsilon\big)$, i.e. we want to find a degree $-1$ map $s:P\rightarrow E\#F$ such that $${\partial}(s)=\Gamma=\rho_{F,E}\big((-1)^{mn} (\hat{g}\otimes \hat{f})\Delta_P+\varepsilon\big)-\lambda_{E,F}(\hat{f}\otimes \hat{g})\Delta_P.$$ The morphism $\Gamma$ is defined by the pair of morphisms $(\Gamma^1,\Gamma^0)$, where $$\begin{gathered} \Gamma^0=(1_F{\otimes}\mu_E)\big((\hat{g}\otimes \hat{f})\Delta_P+(\hat{g}{\otimes}\kappa_E)(f\otimes 1_P-1_P\otimes f)\Delta_P+(-1)^m(\hat{g}{\otimes}\iota_E\kappa_E)\psi_f\big)\\ -(\mu_E{\otimes}1_F)(\hat{f}\otimes \hat{g})\Delta_P=\hat{g}(1_P{\otimes}\mu_P-\mu_P{\otimes}1_P)\Delta_P=0\end{gathered}$$ since $\mu_E\kappa_E=\mu_E\iota_E=0$, and $$\begin{gathered} \Gamma^1=(\pi_F{\otimes}1_E)\big((\hat{g}\otimes \hat{f})\Delta_P+(\hat{g}{\otimes}\kappa_E)(f\otimes 1_P-1_P\otimes f)\Delta_P+(-1)^m(\hat{g}{\otimes}\iota_E\kappa_E)\psi_f\big)\\ -(1_E{\otimes}\pi_F)(\hat{f}\otimes \hat{g})\Delta_P=\hat{f}(g{\otimes}1_P-1_P{\otimes}g)\Delta_P\\ +(-1)^{mn}\kappa_Eg(f\otimes 1_P-1_P\otimes f)\Delta_P+(-1)^{m(n-1)}\iota_E\kappa_Eg\psi_f.\end{gathered}$$ Note that $\bar s=\hat{f}\psi_g+(-1)^{mn+m+n}\kappa_Eg\psi_f:P\rightarrow E$ is a degree $(n-1)$ map such that ${\partial}(\bar s)=\Gamma^1$ and $\mu_E\bar s=0$. Then $\bar s$ determines the required homotopy $s$ by the equalities $s_i=0$ for $0\le i\le n-1$ and $s_i=(-1)^{n(i-n)}\bar s_i$ for $n\le i\le m+n-1$. Thus, $$s_{m+n-1}=(-1)^{n(m-1)}\bar s_{m+n-1}=(-1)^{m}g\psi_f+(-1)^{n(m-1)}f\psi_g=(-1)^m[g,f].$$ [99]{} T. Bühler, [*Exact categories*]{}, Expo. Math. 28 (2010), no. 1, 1–69. R. Hermann, [*Exact sequences, Hochschild cohomology, and the Lie module structure over the $M$-relative center*]{}, J. Algebra 454 (2016), 29–69. R.
Hermann, [*Monoidal categories and the Gerstenhaber bracket in Hochschild cohomology*]{}, Mem. Amer. Math. Soc. 243 (2016), no. 1151. B. Keller, [*Chain complexes and stable categories*]{}, Manuscripta Math. 67 (1990), 379–417. W. Lowen and M. Van den Bergh, [*The $B_\infty$-structure on the derived endomorphism algebra of the unit in a monoidal category*]{}, arXiv:1907.06026. S. Mac Lane, [*Categories for the Working Mathematician*]{}, 2nd ed., Springer, 1998. A. Neeman and V. Retakh, [*Extension categories and their homotopy*]{}, Compos. Math. 102 (1996), no. 2, 203–242. C. Negron, Y. Volkov, and S. Witherspoon, [*$A_{\infty}$-coderivations and the Gerstenhaber bracket on Hochschild cohomology*]{}, to appear in J. Noncommut. Geom. C. Negron and S. Witherspoon, [*An alternate approach to the Lie bracket on Hochschild cohomology*]{}, Homology Homotopy Appl. 18 (2016), no. 1, 265–285. V. S. Retakh, [*Homotopy properties of categories of extensions*]{}, Uspekhi Mat. Nauk 41 (1986), no. 6 (252), 179–180; English transl.: Russian Math. Surveys 41 (1986), no. 6, 179–180. S. Schwede, [*An exact sequence interpretation of the Lie bracket in Hochschild cohomology*]{}, J. Reine Angew. Math. 498 (1998), 153–172. B. Shoikhet, [*Differential graded categories and Deligne conjecture*]{}, Adv. Math. 289 (2016), 797–843. B. Shoikhet, [*Graded Leinster monoids and generalized Deligne conjecture for 1-monoidal abelian categories*]{}, Int. Math. Res. Not. (2018), no. 19, 5857–5937. Y. Volkov, [*Gerstenhaber bracket on the Hochschild cohomology via an arbitrary resolution*]{}, Proc. Edinburgh Math. Soc. (2) 62 (2019), no. 3, 817–836. S. Witherspoon, [*Hochschild Cohomology for Algebras*]{}, Graduate Studies in Mathematics 204, Amer. Math. Soc., 2019.
--- abstract: | This work consists of two parts. In the first part, we consider a compact connected strongly pseudoconvex CR manifold $X$ with a transversal CR $S^{1}$ action. We establish an equidistribution theorem on zeros of CR functions. The main techniques involve a uniform estimate of the Szegő kernel on $X$. In the second part, we consider a general complex manifold $M$ with a strongly pseudoconvex boundary $X$. By using a classical result of Boutet de Monvel and Sjöstrand on Bergman kernel asymptotics, we establish an equidistribution theorem on zeros of holomorphic functions on ${\overline}M$. address: - 'Institute of Mathematics, Academia Sinica and National Center for Theoretical Sciences, Astronomy-Mathematics Building, No. 1, Sec. 4, Roosevelt Road, Taipei 10617, Taiwan' - 'Institute of Mathematics, Academia Sinica, Astronomy-Mathematics Building, No. 1, Sec. 4, Roosevelt Road, Taipei 10617, Taiwan' author: - 'Chin-Yu Hsiao' - Guokuan Shao title: Equidistribution theorems on strongly pseudoconvex domains --- [^1] Introduction and statement of the main results {#s-gue170727} =============================================== The study of equidistribution of zeros of holomorphic sections has been intensively active in recent years. Shiffman-Zelditch [@sz] established an equidistribution property for high powers of a positive line bundle. Dinh-Sibony [@ds] extended the equidistribution result with an estimate of the convergence speed and applied it to general measures. More results about equidistribution of zeros of holomorphic sections in different settings, such as line bundles with singular metrics, general base spaces and general measures, were obtained in [@cm; @cmm; @cmn; @dmm; @sh1; @sh2]. Important methods to study equidistribution include uniform estimates for Bergman kernel functions [@mm] and techniques from complex dynamics in higher dimensions [@fs]. Our article is the first to study equidistribution on CR manifolds and on complex manifolds with boundary.
In the first part, we establish an equidistribution theorem on zeros of CR functions. The proof involves uniform estimates for Szegő kernel functions [@hhl]. In the second part, we consider a general complex manifold $M$ with a strongly pseudoconvex boundary $X$ and we establish an equidistribution theorem on zeros of holomorphic functions on ${\overline}M$ by using a classical result of Boutet de Monvel and Sjöstrand [@BouSj76]. We now state our main results. We refer to Section \[s:prelim\] for some notations and terminology used here. Let $(X, T^{1,0}X)$ be a compact connected strongly pseudoconvex CR manifold with a transversal CR $S^{1}$ action $e^{i\theta}$ (cf. Section 2), where $T^{1,0}X$ is a CR structure of $X$. The dimension of $X$ is $2n+1$, $n\geq 1$. Denote by $T\in C^{\infty}(X, TX)$ the real vector field induced by the $S^{1}$ action. Take an $S^1$-invariant Hermitian metric $\langle\,\cdot\,|\,\cdot\,\rangle$ on $\mathbb{C}TX$ such that there is an orthogonal decomposition $\mathbb{C}TX=T^{1,0}X\oplus T^{0,1}X\oplus\mathbb{C}T$. Then there exists a natural global $L^{2}$ inner product $(\,\cdot\,|\,\cdot\,)$ on $C^{\infty}(X)$ induced by $\langle\cdot|\cdot\rangle$. For every $q\in\mathbb N$, put $$X_q:=\{x\in X: e^{i\theta}\circ x\neq x, \forall\theta\in(0,\frac{2\pi}{q}),\ e^{i\frac{2\pi}{q}}\circ x=x\}.$$ Set $p:=\min\{q\in\mathbb{N}: X_{q}\neq\emptyset \}$. Put $X_{{\rm reg\,}}=X_p$. For simplicity, we assume that $p=1$. Since $X$ is connected, $X_{1}$ is open and dense in $X$. Assume that $X=\cup_{j=0}^{t-1}X_{p_{j}}$, $1=p_0<p_1<\cdots<p_{t-1}$ and put $X_{\rm sing}:=\cup_{j=1}^{t-1}X_{p_{j}}$. Let $\bar\partial_{b}: C^\infty(X){\rightarrow}\Omega^{0,1}(X)$ be the tangential Cauchy-Riemann operator. For each $m\in\mathbb{Z}$, put $$H_{b,m}^{0}(X):=\{u\in C^{\infty}(X):Tu=imu, \bar\partial_{b}u=0 \}.$$ It is well-known that $\dim H_{b,m}^{0}(X)<\infty$ (see [@hl]).
Let $f_1\in H^0_{b,m}(X),\ldots,f_{d_m}\in H^0_{b,m}(X)$ be an orthonormal basis for $H^0_{b,m}(X)$. The Szegő kernel function associated to $H^0_{b,m}(X)$ is given by $$S_m(x):=\sum^{d_m}_{j=1}{\left\vert f_j(x)\right\vert}^2.$$ When the $S^1$ action is globally free, it is well-known that $S_m(x)\approx m^n$ uniformly on $X$. When the $S^1$ action is only locally free, we only have $S_m(x)\approx m^n$ locally uniformly on $X_{{\rm reg\,}}$ in general (see Theorem \[t-gue170704\]). Moreover, $S_m(x)$ can be zero at some point of $X_{\rm sing}$ even for $m$ large (see [@hl] and [@hhla]). Let $$\label{e-gue170704rya} \alpha=[p_1,\ldots,p_r],$$ that is, $\alpha$ is the least common multiple of $p_1,\ldots,p_r$. In Theorem \[t-gue170704r\], we will show that there exist positive integers $1=k_0<k_{1}<\cdots<k_{t-1}$ independent of $m$ such that $$cm^n\leq S_{\alpha m}(x)+S_{k_1\alpha m}(x)+\cdots+S_{k_{t-1}\alpha m}(x)\leq\frac{1}{c}m^n\ \ \mbox{on $X$}$$ for all $m\gg1$, where $0<c<1$ is a constant independent of $m$. For each $m\in\mathbb N$, put $$\label{e-gue170794lr} A_m(X):=\bigcup^{t-1}_{j=0}H^0_{b,k_j\alpha m}(X).$$ We write $d\mu_m$ to denote the equidistribution probability measure on the unit sphere $$SA_m(X):={\left\{g\in A_m(X);\, (\,g\,|\,g\,)=1\right\}}.$$ Let $a_m={\rm dim\,}A_m(X)$. We fix an orthonormal basis ${\left\{g^{(m)}_j\right\}}^{a_m}_{j=1}$ of $A_m(X)$ with respect to $(\,\cdot\,|\,\cdot\,)$; then we can identify the sphere $S^{2a_m-1}$ with $SA_m(X)$ by $$(z_1,\ldots,z_{a_m})\in S^{2a_m-1}{\rightarrow}\sum^{a_m}_{j=1}z_jg^{(m)}_j\in SA_m(X),$$ and we have $$\label{e-gue170703a} d\mu_m=\frac{dS^{2a_m-1}}{{\rm vol\,}(S^{2a_m-1})},$$ where $dS^{2a_m-1}$ denotes the standard Haar measure on $S^{2a_m-1}$. We consider the probability space $\Omega(X):=\prod^\infty_{m=1}SA_m(X)$ with the probability measure $d\mu:=\prod^\infty_{m=1}d\mu_m$. We denote $u={\left\{u_m\right\}}\in\Omega(X)$.
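For orientation, the globally free case can be made completely explicit on the sphere; the following standard example is our own illustration and is not used in the sequel.

```latex
% X = S^3 = \{(z_1,z_2) \in \mathbb{C}^2 : |z_1|^2 + |z_2|^2 = 1\} with the CR
% structure induced from \mathbb{C}^2 and the globally free, transversal CR
% S^1 action e^{i\theta} \circ (z_1,z_2) = (e^{i\theta} z_1, e^{i\theta} z_2).
% Smooth CR functions on S^3 extend holomorphically to the ball, and the
% condition Tu = imu selects restrictions of homogeneous polynomials of degree m:
H^0_{b,m}(S^3)=\operatorname{span}\bigl\{\,z_1^{a}z_2^{b}\big|_{S^3}\ :\ a+b=m,\ a,b\in\mathbb{N}_0\,\bigr\},
\qquad d_m=m+1.
% Here n = 1, so d_m \approx m^n, in line with the uniform estimate S_m(x) \approx m^n.
```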
Since the $S^1$ action is transversal and CR, $X\times \mathbb{R}$ is a complex manifold with the following holomorphic tangent bundle and complex structure $J$, $$\begin{split} &T^{1,0}X\oplus\{\mathbb{C}(T-i\frac{\partial}{\partial\eta}) \},\\ &JT=\frac{\partial}{\partial\eta}, \ Ju=iu \ \text{for}\ u\in T^{1,0}X.\\ \end{split}$$ For $u\in A_m(X)$, it is easy to see that there exists a unique function $v(x,\eta)\in C^\infty(X\times{\mathbb R})$, which is holomorphic in $X\times \mathbb{R}$, such that $v\big|_{\eta=0}=u$ (see Lemma \[l-gue170704w\]). We write $[v=0]$ to denote the standard zero current for holomorphic functions on $X\times{\mathbb R}$. The main result of the first part is the following \[t-gue170704ryz\] With the above notations and assumptions, fix $\chi(\eta)\in C^\infty_0({\mathbb R})$ with $\int\chi(\eta)d\eta=1$ and let $\varepsilon_m$ be a sequence with $\lim_{m{\rightarrow}\infty}m\varepsilon_m=0$. Then for $d\mu$-almost every $u={\left\{u_m\right\}}\in\Omega(X)$, we have $$\lim_{m{\rightarrow}\infty}\frac{1}{m}\langle\,[v_{m}=0], f\wedge\omega_0\wedge \frac{1}{\varepsilon_m}\chi(\frac{\eta}{\varepsilon_m})d\eta \,\rangle =\alpha\frac{1+k_{1}^{n+1}+\cdots+k_{t-1}^{n+1}}{1+k_{1}^{n}+\cdots+k_{t-1}^{n}}\frac{i}{\pi}\int_{X}\mathcal{L}_{X}\wedge f\wedge\omega_{0},$$ for all $f\in\Omega^{n-1,n-1}(X)$, where $\alpha=[p_1,\ldots,p_r]$, $f\wedge\omega_0\wedge \frac{1}{\varepsilon_m}\chi(\frac{\eta}{\varepsilon_m})d\eta$ is a smooth $(n,n)$ form on $X\times{\mathbb R}$, $\eta$ denotes the coordinate on ${\mathbb R}$, $\omega_0$ is the Reeb one form on $X$ (see the discussion in the beginning of Section \[s-gue170627\]), $\mathcal{L}_{X}$ denotes the Levi form of $X$ with respect to the Reeb one form $\omega_0$ (see Definition \[d-1.2\]) and $v_m(x,\eta)\in C^\infty(X\times{\mathbb R})$ is the unique holomorphic function on $X\times{\mathbb R}$ with $v_m(x,\eta)|_{\eta=0}=u_m(x)$. Now we formulate the main result of the second part.
Let $M$ be a relatively compact open subset with $C^\infty$ boundary $X$ of a complex manifold $M'$ of dimension $n+1$ with a smooth Hermitian metric $\langle\,\cdot\,|\,\cdot\,\rangle$ on its holomorphic tangent bundle $T^{1,0}M'$. The Hermitian metric on the holomorphic tangent bundle induces a Hermitian metric $\langle\,\cdot\,|\,\cdot\,\rangle$ on $\oplus^{2n+2}_{k=1}\Lambda^k({\mathbb C}T^*M')$. Let $r\in C^\infty(M',{\mathbb R})$ be a defining function of $X$, that is, $X={\left\{z\in M';\, r(z)=0\right\}}$, $M={\left\{z\in M';\, r(z)<0\right\}}$. We take $r$ so that ${\left\Vert dr\right\Vert}^2=\langle\,dr\,|\,dr\,\rangle=1$ on $X$. In this work, we assume that $X$ is strongly pseudoconvex, that is, ${\partial}{\overline\partial}r|_{T^{1,0}X}$ is positive definite at each point of $X$, where $T^{1,0}X:=T^{1,0}M'\bigcap{\mathbb C}TX$ is the standard CR structure on $X$. Let $dv_M$ be the volume form on $M$ induced by $\langle\,\cdot\,|\,\cdot\,\rangle$, let $(\,\cdot\,|\,\cdot\,)_M$ be the $L^2$ inner product on $C^\infty_0(M)$ induced by $dv_M$ and let $L^2(M)$ be the completion of $C^\infty_0(M)$ with respect to $(\,\cdot\,|\,\cdot\,)_M$. Let $H^0(M)={\left\{u\in L^2(M);\, {\overline\partial}u=0\right\}}$. By using a classical result of Boutet de Monvel and Sjöstrand [@BouSj76], we see that $C^\infty({\overline}M)\bigcap H^0(M)$ is dense in $H^0(M)$ with respect to the $L^2(M)$ norm and we can find $g_j\in C^\infty({\overline}M)\bigcap H^0(M)$ with $(\, g_j\,|\,g_k\,)_M=\delta_{j,k}$, $j, k=1,2,\ldots$, such that the set $$\label{e-gue170709} A(M):=\rm{span\,}{\left\{g_1, g_2,\ldots\right\}}$$ is dense in $H^0(M)$. That is, for every $h\in H^0(M)$, we can find $h_{\ell}\in A(M)$, $\ell=1,2,\ldots$, such that $\lim_{\ell{\rightarrow}\infty}h_{\ell}=h$ in $L^2(M)$. To state our equidistribution theorem, we need to introduce some notations.
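A concrete instance of this setup (our own illustration, with the Euclidean metric and Lebesgue measure, not used in the sequel): for the unit ball, $H^0(M)$ is the classical Bergman space and an orthonormal family of functions smooth up to the boundary is explicit.

```latex
% M = \mathbb{B}^{n+1} = \{z \in \mathbb{C}^{n+1} : |z| < 1\} \subset M' = \mathbb{C}^{n+1},
% with boundary X = S^{2n+1}, which is strongly pseudoconvex. Then H^0(M) is the
% Bergman space, and the normalized monomials
g_\alpha(z)=\Bigl(\frac{(n+1+|\alpha|)!}{\pi^{\,n+1}\,\alpha!}\Bigr)^{1/2} z^\alpha,
\qquad \alpha\in\mathbb{N}_0^{n+1},\quad \alpha!=\alpha_1!\cdots\alpha_{n+1}!,
% form a complete orthonormal system in H^0(M) consisting of functions in
% C^\infty(\overline{M}) \cap H^0(M), as required of the g_j above.
```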
For every $m\in\mathbb N$, let $A_m(M)={\rm span\,}{\left\{g_1,\ldots,g_m\right\}}$, where $g_j\in H^0(M)\bigcap C^\infty({\overline}M)$, $j=1,\ldots,m$, are as . Let $d\mu_m$ be the equidistribution probability measure on the unit sphere $$SA_m(M):={\left\{g\in A_m(M);\, (\,g\,|\,g\,)_M=1\right\}}.$$ Let $\beta:={\left\{b_j\right\}}^\infty_{j=1}$ with $b_1<b_2<\cdots$ and $b_j\in\mathbb N$, for every $j=1,2,\ldots$. We consider the probability space $$\label{e-gue170709I} \Omega(M,\beta):=\prod^\infty_{j=1}SA_{b_j}(M)$$ with the probability measure $$\label{e-gue170709II} d\mu(\beta):=\prod^\infty_{j=1}d\mu_{b_j}.$$ We denote $u={\left\{u_k\right\}}\in\Omega(M,\beta)$. For $g\in H^0(M)\bigcap C^\infty({\overline}M)$, we let $[g=0]$ denote the zero current in $M$. Let $B^{*0,1}M'={\left\{u\in T^{*0,1}M';\, \langle\,u\,|\,{\overline\partial}r\,\rangle=0\right\}}$, where $T^{*0,1}M'$ denotes the bundle of $(0,1)$ forms on $M'$. Let $B^{*1,0}M':={\overline}{B^{*0,1}M'}$ and let $B^{*p,q}M':=\Lambda^p(B^{*1,0}M')\wedge\Lambda^q(B^{*0,1}M')$, $p,q=1,\ldots,n$. Let $\omega_0=J(dr)$, where $J$ is the standard complex structure map on $T^*M'$ and let $\mathcal{L}_X\in C^\infty(X,T^{*1,1}X)$ be the Levi form induced by $\omega_0$ (see Definition \[d-1.2\]). Our second main result is the following \[t-gue170703c\] With the notations and assumptions above, fix $\psi\in C^\infty_0([-1,-\frac{1}{2}])$. 
There exists a sequence $\beta={\left\{b_j\right\}}^\infty_{j=1}$ independent of $\psi$ with $b_1<b_2<\cdots$, $b_j\in\mathbb N$, $j=1,2,\ldots$, such that for $d\mu(\beta)$-almost every $u={\left\{u_k\right\}}\in\Omega(M,\beta)$, we have $$\label{e-gue170703c} \lim_{k{\rightarrow}\infty}\langle\,[u_k=0], (2i)kr\psi(kr)\phi\wedge{\partial}r\wedge{\overline\partial}r\,\rangle=-(n+2)\frac{i}{2\pi}c_0\int_X\mathcal{L}_X\wedge\omega_0\wedge\phi$$ for all $\phi\in C^\infty({\overline}M,B^{*n-1,n-1}M')$, where $c_0=\int_{\mathbb R}\psi(x)dx$, and $\Omega(M,\beta)$ and $d\mu(\beta)$ are as in and respectively. The paper is organized as follows. In Section \[s:prelim\] we collect the notations used throughout this paper and recall basic facts about CR manifolds. In Section 3 we recall a theorem about Szegő kernel asymptotics and give a uniform estimate of Szegő kernel functions. Section 4 is devoted to proving Theorem \[t-gue170704ryz\]. In Section \[s-gue170709\], we first construct holomorphic functions with a specific rate near the boundary and then prove Theorem \[t-gue170703c\]. Preliminaries {#s:prelim} ============= Standard notations {#s-ssna} ------------------ We shall use the following notations: $\mathbb N={\left\{1,2,\ldots\right\}}$, $\mathbb N_0=\mathbb N\cup{\left\{0\right\}}$, ${\mathbb R}$ is the set of real numbers, ${\overline}{\mathbb R}_+:={\left\{x\in{\mathbb R};\, x\geq0\right\}}$. For a multi-index $\alpha=(\alpha_1,\ldots,\alpha_n)\in\mathbb N_0^n$, we denote by ${\left\vert\alpha\right\vert}=\alpha_1+\ldots+\alpha_n$ its norm and by $l(\alpha)=n$ its length. For $m\in\mathbb N$, we write $\alpha\in{\left\{1,\ldots,m\right\}}^n$ if $\alpha_j\in{\left\{1,\ldots,m\right\}}$, $j=1,\ldots,n$. We say that $\alpha$ is strictly increasing if $\alpha_1<\alpha_2<\ldots<\alpha_n$.
For $x=(x_1,\ldots,x_n)$, we write $$\begin{split} &x^\alpha=x_1^{\alpha_1}\ldots x^{\alpha_n}_n,\\ & {\partial}_{x_j}=\frac{{\partial}}{{\partial}x_j}\,,\quad {\partial}^\alpha_x={\partial}^{\alpha_1}_{x_1}\ldots{\partial}^{\alpha_n}_{x_n}=\frac{{\partial}^{{\left\vert\alpha\right\vert}}}{{\partial}x^\alpha}\,,\\ &D_{x_j}=\frac{1}{i}{\partial}_{x_j}\,,\quad D^\alpha_x=D^{\alpha_1}_{x_1}\ldots D^{\alpha_n}_{x_n}\,, \quad D_x=\frac{1}{i}{\partial}_x\,. \end{split}$$ Let $z=(z_1,\ldots,z_n)$, $z_j=x_{2j-1}+ix_{2j}$, $j=1,\ldots,n$, be coordinates of ${\mathbb C}^n$. We write $$\begin{split} &z^\alpha=z_1^{\alpha_1}\ldots z^{\alpha_n}_n\,,\quad{\overline}z^\alpha={\overline}z_1^{\alpha_1}\ldots{\overline}z^{\alpha_n}_n\,,\\ &{\partial}_{z_j}=\frac{{\partial}}{{\partial}z_j}= \frac{1}{2}\Big(\frac{{\partial}}{{\partial}x_{2j-1}}-i\frac{{\partial}}{{\partial}x_{2j}}\Big)\,,\quad{\partial}_{{\overline}z_j}= \frac{{\partial}}{{\partial}{\overline}z_j}=\frac{1}{2}\Big(\frac{{\partial}}{{\partial}x_{2j-1}}+i\frac{{\partial}}{{\partial}x_{2j}}\Big),\\ &{\partial}^\alpha_z={\partial}^{\alpha_1}_{z_1}\ldots{\partial}^{\alpha_n}_{z_n}=\frac{{\partial}^{{\left\vert\alpha\right\vert}}}{{\partial}z^\alpha}\,,\quad {\partial}^\alpha_{{\overline}z}={\partial}^{\alpha_1}_{{\overline}z_1}\ldots{\partial}^{\alpha_n}_{{\overline}z_n}= \frac{{\partial}^{{\left\vert\alpha\right\vert}}}{{\partial}{\overline}z^\alpha}\,. \end{split}$$ For $j, s\in\mathbb Z$, set $\delta_{j,s}=1$ if $j=s$, $\delta_{j,s}=0$ if $j\neq s$. Let $W$ be a $C^\infty$ paracompact manifold. We let $TW$ and $T^*W$ denote the tangent bundle of $W$ and the cotangent bundle of $W$, respectively. The complexified tangent bundle of $W$ and the complexified cotangent bundle of $W$ will be denoted by ${\mathbb C}TW$ and ${\mathbb C}T^*W$, respectively. Write $\langle\,\cdot\,,\cdot\,\rangle$ to denote the pointwise duality between $TW$ and $T^*W$. 
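A quick illustration of the conventions just introduced (the specific values are ours):

```latex
% For n = 2 and the multi-index \alpha = (2,1) \in \mathbb{N}_0^2:
|\alpha|=3,\qquad l(\alpha)=2,\qquad z^\alpha=z_1^{2}z_2,\qquad
{\partial}^\alpha_z={\partial}^{2}_{z_1}{\partial}_{z_2},
% and the Wirtinger derivatives above satisfy
{\partial}_{z_j}z_k=\delta_{j,k},\qquad {\partial}_{z_j}{\overline}z_k=0,
% so a C^1 function u is holomorphic exactly when \partial_{\bar z_j}u = 0 for all j.
```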
We extend $\langle\,\cdot\,,\cdot\,\rangle$ bilinearly to ${\mathbb C}TW\times{\mathbb C}T^*W$. Let $G$ be a $C^\infty$ vector bundle over $W$. The fiber of $G$ at $x\in W$ will be denoted by $G_x$. Let $E$ be a vector bundle over a $C^\infty$ paracompact manifold $W_1$. We write $G\boxtimes E^*$ to denote the vector bundle over $W\times W_1$ with fiber over $(x, y)\in W\times W_1$ consisting of the linear maps from $E_y$ to $G_x$. Let $Y\subset W$ be an open set. From now on, the spaces of distribution sections of $G$ over $Y$ and smooth sections of $G$ over $Y$ will be denoted by $D'(Y, G)$ and $C^\infty(Y, G)$, respectively. Let $E'(Y, G)$ be the subspace of $D'(Y, G)$ whose elements have compact support in $Y$. Put $C^\infty_0(Y,G):=C^\infty(Y,G)\bigcap E'(Y,G)$. Let $G$ and $E$ be $C^\infty$ vector bundles over paracompact orientable $C^\infty$ manifolds $W$ and $W_1$, respectively, equipped with smooth densities of integration. If $A: C^\infty_0(W_1,E){\rightarrow}D'(W,G)$ is continuous, we write $K_A(x, y)$ or $A(x, y)$ to denote the distribution kernel of $A$. Let $H(x,y)\in D'(W\times W_1,G\boxtimes E^*)$. We write $H$ to denote the unique continuous operator $C^\infty_0(W_1,E){\rightarrow}D'(W,G)$ with distribution kernel $H(x,y)$. In this work, we identify $H$ with $H(x,y)$. Let $M$ be a relatively compact open subset with $C^\infty$ boundary $X$ of a complex manifold $M'$. Let $F$ be a $C^\infty$ vector bundle over $M'$. Let $C^\infty({\overline}M, F)$, $D'({\overline}M,F)$ denote the spaces of restrictions to $M$ of elements in the spaces $C^\infty(M',F)$, $D'(M',F)$ respectively. 
CR manifolds {#s-gue170627} ------------ Let $(X, T^{1,0}X)$ be a compact, orientable CR manifold of dimension $2n+1$, $n\geq 1$, where $T^{1,0}X$ is a CR structure of $X$, that is, $T^{1,0}X$ is a subbundle of rank $n$ of the complexified tangent bundle $\mathbb{C}TX$, satisfying $T^{1,0}X\cap T^{0,1}X=\{0\}$, where $T^{0,1}X=\overline{T^{1,0}X}$, and $[\mathcal V,\mathcal V]\subset\mathcal V$, where $\mathcal V=C^\infty(X, T^{1,0}X)$. We fix a real non-vanishing $1$-form $\omega_0\in C^\infty(X,T^*X)$ so that $\langle\,\omega_0(x)\,,\,u\,\rangle=0$, for every $u\in T^{1,0}_xX\oplus T^{0,1}_xX$, for every $x\in X$. We call $\omega_0$ the Reeb one form on $X$. \[d-1.2\] For $p\in X$, the Levi form $\mathcal L_{X,p}$ of $X$ at $p$ is the Hermitian quadratic form on $T^{1,0}_pX$ given by $\mathcal{L}_{X,p}(U,V)=-\frac{1}{2i}\langle\,d\omega_0(p)\,,\,U\wedge{\overline}V\,\rangle$, $U, V\in T^{1,0}_pX$. Denote by $\mathcal{L}_{X}$ the Levi form on $X$. Fix a global non-vanishing vector field $T\in C^\infty(X,TX)$ such that $\omega_0(T)=-1$ and $T$ is transversal to $T^{1,0}X\oplus T^{0,1}X$. We call $T$ the Reeb vector field on $X$. Take a smooth Hermitian metric $\langle \cdot \mid \cdot \rangle$ on $\mathbb{C}TX$ so that $T^{1,0}X$ is orthogonal to $T^{0,1}X$, $\langle u \mid v \rangle$ is real if $u, v$ are real tangent vectors, $\langle\,T\,|\,T\,\rangle=1$ and $T$ is orthogonal to $T^{1,0}X\oplus T^{0,1}X$. For $u \in \mathbb{C}TX$, we write $|u|^2 := \langle u | u \rangle$. Denote by $T^{*1,0}X$ and $T^{*0,1}X$ the dual bundles of $T^{1,0}X$ and $T^{0,1}X$, respectively. They can be identified with subbundles of the complexified cotangent bundle $\mathbb{C}T^*X$. Define the vector bundle of $(p,q)$-forms by $T^{*p,q}X := (\wedge^pT^{*1,0}X)\wedge(\wedge^qT^{*0,1}X)$. The Hermitian metric $\langle \cdot | \cdot \rangle$ on $\mathbb{C}TX$ induces, by duality, a Hermitian metric on $\mathbb{C}T^*X$ and also on the bundles of $(p,q)$ forms $T^{*p,q}X, p, q=0, 1, \cdots, n$.
We shall also denote all these induced metrics by $\langle \cdot | \cdot \rangle$. Note that we have the pointwise orthogonal decompositions: $$\begin{array}{c} \mathbb{C}T^*X = T^{*1,0}X \oplus T^{*0,1}X \oplus \left\{ \lambda \omega_0: \lambda \in \mathbb{C} \right\}, \\ \mathbb{C}TX = T^{1,0}X \oplus T^{0,1}X \oplus \left\{ \lambda T: \lambda \in \mathbb{C} \right\}. \end{array}$$ Let $D$ be an open set of $X$. Let $\Omega^{p,q}(D)$ denote the space of smooth sections of $T^{*p,q}X$ over $D$ and let $\Omega^{p,q}_0(D)$ be the subspace of $\Omega^{p,q}(D)$ whose elements have compact support in $D$. For each point $x\in X$, in this paper, we will identify $\mathcal L_{X,x}$ as a $(1,1)$ form at $x$. Hence, $\mathcal L_X\in\Omega^{1,1}(X)$. Now, we assume that $X$ admits an $S^1$-action: $S^1\times X\rightarrow X, (e^{i\theta}, x)\rightarrow e^{i\theta}\circ x$. Here we use $e^{i\theta}$ to denote the $S^1$-action. Let ${\widetilde}T\in C^\infty(X, TX)$ be the global real vector field induced by the $S^1$-action given as follows $$({\widetilde}Tu)(x)=\frac{\partial}{\partial\theta}\left(u(e^{i\theta}\circ x)\right)\Big|_{\theta=0},~u\in C^\infty(X).$$ We say that the $S^1$-action $e^{i\theta}$ $(0\leq\theta<2\pi)$ is CR if $$[{\widetilde}T, C^\infty(X, T^{1,0}X)]\subset C^\infty(X, T^{1,0}X),$$ where $[~,~ ]$ is the Lie bracket between smooth vector fields on $X$. Furthermore, the $S^1$-action is called transversal if for each $x\in X$ one has $$\mathbb C{\widetilde}T(x)\oplus T_x^{1,0}(X)\oplus T_x^{0,1}X=\mathbb CT_xX.$$ If the $S^1$ action is transversal and CR, we will always take the Reeb one form on $X$ to be the global real one form determined by $\langle\,\omega_0\,,\, u\,\rangle=0$, for every $u\in T^{1,0}X\oplus T^{0,1}X$ and $\langle\,\omega_0\,,\,{\widetilde}T\,\rangle=-1$ and we will always take the Reeb vector field on $X$ to be ${\widetilde}T$. Hence, we will also write $T$ to denote the global real vector field induced by the $S^1$-action.
Until further notice, we assume that $(X, T^{1,0}X)$ is a compact connected strongly pseudoconvex CR manifold with a transversal CR $S^1$-action $e^{i\theta}$. For every $q\in\mathbb N$, put $$X_q:=\{x\in X: e^{i\theta}\circ x\neq x, \forall\theta\in(0,\frac{2\pi}{q}),\ e^{i\frac{2\pi}{q}}\circ x=x\}.$$ Set $p:=\min\{q\in\mathbb{N}: X_{q}\neq\emptyset \}$. Thus, $X_{{\rm reg\,}}=X_p$. Note that one can re-normalize the $S^1$-action by lifting it so that the new $S^1$-action satisfies $X_1\neq\emptyset$, see [@dh]. For simplicity, we assume that $p=1$. If $X$ is connected, then $X_{1}$ is open and dense in $X$. Assume that $$X=\cup_{j=0}^{t-1}X_{p_{j}},\ \ 1=:p_0<p_1<\cdots<p_{t-1}.$$ Put $X_{sing}:=X_{sing}^{1}=\cup_{j=1}^{t-1}X_{p_{j}}$, and $X_{sing}^{r}:=\cup_{j=r}^{t-1}X_{p_{j}}$ for $2\leq r\leq t-1$. Take the convention that $X_{sing}^{t}=\emptyset$. It follows from [@dh] that $X_{sing}^{r}$ is a closed subset of $X$, for $1\leq r\leq t$. Fix $\theta_0\in [0, 2\pi)$. Let $$d e^{i\theta_0}: \mathbb CT_x X\rightarrow \mathbb CT_{e^{i\theta_0}x}X$$ denote the differential map of $e^{i\theta_0}: X\rightarrow X$. By the properties of transversal CR $S^1$-actions, we can check that $$\label{a} \begin{split} de^{i\theta_0}:T_x^{1,0}X\rightarrow T^{1,0}_{e^{i\theta_0}x}X,\\ de^{i\theta_0}:T_x^{0,1}X\rightarrow T^{0,1}_{e^{i\theta_0}x}X,\\ de^{i\theta_0}(T(x))=T(e^{i\theta_0}x). \end{split}$$ Let $(e^{i\theta_0})^\ast: \Lambda^q(\mathbb CT^\ast X)\rightarrow\Lambda^q(\mathbb CT^\ast X)$ be the pull back of $e^{i\theta_0}$, $q=0,1,\cdots, 2n+1$. From (\[a\]), we can check that for every $q=0, 1,\cdots, n$ $$(e^{i\theta_0})^\ast: T^{\ast0,q}_{e^{i\theta_0}x}X\rightarrow T_x^{\ast0,q}X.$$ Let $u\in\Omega^{0,q}(X)$. The Lie derivative of $u$ along the direction $T$ is denoted by $Tu$. We have $Tu\in\Omega^{0, q}(X)$ for all $u\in\Omega^{0, q}(X)$. Let $\overline\partial_b:\Omega^{0,q}(X)\rightarrow\Omega^{0,q+1}(X)$ be the tangential Cauchy-Riemann operator.
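A simple example of the stratification $X=\cup_{j=0}^{t-1}X_{p_j}$ (our own illustration, not needed later): consider $S^3\subset\mathbb{C}^2$ with the weighted action $e^{i\theta}\circ(z_1,z_2)=(e^{i\theta}z_1,e^{2i\theta}z_2)$, which one can check is transversal and CR.

```latex
% A point with z_1 \neq 0 has trivial stabilizer, while (0, z_2) is fixed exactly
% by \theta \in \{0, \pi\}. Hence
X_1=\{(z_1,z_2)\in S^3:\ z_1\neq 0\},\qquad X_2=\{(0,z_2):\ |z_2|=1\},
% so t = 2, p_0 = 1, p_1 = 2, X_{\mathrm{sing}} = X_2 is a closed circle, and X_1
% is open and dense in X, as asserted in general.
```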
From (\[a\]), it is straightforward to check that $$\label{c} T\overline\partial_b=\overline\partial_bT~\text{on} ~\Omega^{0,q}(X).$$ For every $m\in\mathbb Z$, put $\Omega^{0,q}_m(X):=\{u\in\Omega^{0,q}(X): Tu=imu\}$. For $q=0$, we write $C^\infty_m(X):=\Omega^{0,0}_m(X)$. We denote by $\overline\partial_{b, m}$ the restriction of $\overline\partial_b$ to $\Omega^{0, q}_m(X)$. From (\[c\]) we have the ${\overline\partial}_{b, m}$-complex for every $m\in\mathbb Z$: $$\label{e-gue140903VI} {\overline\partial}_{b, m}:\cdots{\rightarrow}\Omega^{0,q-1}_m(X){\rightarrow}\Omega^{0,q}_m(X){\rightarrow}\Omega^{0,q+1}_m(X){\rightarrow}\cdots.$$ For $m\in\mathbb Z$, the $q$-th ${\overline\partial}_{b, m}$-cohomology is given by $$\label{a8} H^{q}_{b,m}(X):=\frac{{\rm Ker\,}{\overline\partial}_{b}:\Omega^{0,q}_m(X){\rightarrow}\Omega^{0,q+1}_m(X)}{\operatorname{Im}{\overline\partial}_{b}:\Omega^{0,q-1}_m(X){\rightarrow}\Omega^{0,q}_m(X)}.$$ Moreover, we have [@hl] $$\label{a1} {\rm dim} H^q_{b, m}(X)<\infty, ~\text{for all}~ q=0, \ldots, n.$$ \[def-16-09-01\] A function $u\in C^\infty(X)$ is a Cauchy-Riemann function (CR function for short) if $\overline\partial_bu=0$, that is, $\overline Zu=0$ for all $Z\in C^{\infty}(X, T^{1, 0}X)$. For $m\in \mathbb N$, $H^0_{b, m}(X)$ is called the $m$-th positive Fourier component of the space of CR functions. We recall the canonical local coordinates (BRT coordinates) due to Baouendi-Rothschild-Treves (see [@brt]). \[t-gue170720y\] With the notations and assumptions above, fix $x_0\in X$.
There exist local coordinates $(x_1,\cdots,x_{2n+1})=(z,\theta)=(z_1,\cdots,z_{n},\theta), z_j=x_{2j-1}+ix_{2j}, 1\leq j\leq n, x_{2n+1}=\theta$, centered at $x_0$, defined on $D=\{(z, \theta)\in\mathbb C^{n}\times\mathbb R: |z|<\varepsilon, |\theta|<\delta\}$, such that $$\label{e-can} \begin{split} &T=\frac{\partial}{\partial\theta}\\ &Z_j=\frac{\partial}{\partial z_j}+i\frac{\partial\varphi(z)}{\partial z_j}\frac{\partial}{\partial\theta},j=1,\cdots,n, \end{split}$$ where $\{Z_j(x)\}_{j=1}^{n}$ form a basis of $T_x^{1,0}X$ for each $x\in D$, and $\varphi(z)\in C^\infty(D,\mathbb R)$ is independent of $\theta$. We call $D$ a canonical local patch and $(z, \theta, \varphi)$ canonical coordinates centered at $x_0$. Note that Theorem \[t-gue170720y\] holds even if $X$ is not strongly pseudoconvex. On the BRT coordinate patch $D$, the tangential Cauchy-Riemann operator acts as follows: $$\bar \partial_{b}u=\sum_{j=1}^{n}(\frac{\partial u}{\partial\bar z_{j}} - i\frac{\partial \varphi}{\partial\bar z_{j}}\frac{\partial u}{\partial\theta} )d\bar z_{j}.$$ We can check that $$\omega_{0}=-d\theta+i\sum_{j=1}^{n}\frac{\partial\varphi}{\partial z_{j}}dz_{j}-i\sum_{j=1}^{n}\frac{\partial\varphi}{\partial \bar z_{j}}d\bar z_{j}.$$ Hence the Levi form is $$\mathcal{L}_{X}=-\frac{1}{2i}d\omega_{0}\big|_{T^{1,0}X}=\partial \bar \partial \varphi.$$ If $u\in H^{0}_{b,m}(X)$, then $\bar \partial_{b}u=0$.
Equivalently, $$\frac{\partial u}{\partial\bar z_{j}} - i\frac{\partial \varphi}{\partial\bar z_{j}}\frac{\partial u}{\partial\theta}=0,\ \forall j.$$ Moreover, since $Tu=imu$, $u$ can be written locally as $$u\big|_{D}=e^{im\theta}\tilde u(z).$$ Then $$\label{e-gue170723} \begin{split} &\frac{\partial \tilde u}{\partial\bar z_{j}} +m\frac{\partial \varphi}{\partial\bar z_{j}}\tilde u \\ &=e^{-m\varphi}\frac{\partial}{\partial\bar z_{j}}(\tilde ue^{m\varphi})= 0,\ \forall j.\\ \end{split}$$ That is to say, $\tilde ue^{m\varphi}$ is holomorphic with respect to the $(z_{1},...,z_{n})$-coordinates. Let $X\times \mathbb{R}$ be the complex manifold with the following holomorphic tangent bundle and complex structure $J$, $$\label{e-gue170723I} \begin{split} &T^{1,0}X\oplus\{\mathbb{C}(T-i\frac{\partial}{\partial\eta}) \},\\ &JT=\frac{\partial}{\partial\eta}, \ Ju=iu \ \text{for}\ u\in T^{1,0}X.\\ \end{split}$$ \[l-gue170704w\] Let $u\in\oplus_{m\in\mathbb Z, |m|\leq N}C^{\infty}_{m}(X)$ with $\bar \partial_{b}u=0$, where $N\in\mathbb N$. Then there exists a unique function $v$, holomorphic on $X\times \mathbb{R}$, such that $v\big|_{\eta=0}=u$. Let $D$ be a canonical local coordinate patch with canonical local coordinates $x=(z,\theta)$. On $D$, we write $u=\sum_{m\in\mathbb Z, {\left\vertm\right\vert}\leq N}u_{m}(z)e^{im\theta}$. Note that in canonical local coordinates $x=(z,\theta)$, we have $T=\frac{\partial}{\partial\theta}$. Set $$v:=\sum_{m\in\mathbb Z, {\left\vertm\right\vert}\leq N}u_{m}(z)e^{im(\theta+i\eta)}.$$ From ${\overline\partial}_bu=0$, it is easy to check that $v$ is holomorphic on $D\times{\mathbb R}$ with respect to the complex structure and $v|_{\eta=0}=u$. If $\tilde v$ is another function satisfying the same properties, then $\tilde v-v$ is holomorphic and $(\tilde v-v)\big|_{\eta=0}=0$, so $\tilde v=v$. Thus, $v$ is a well-defined global holomorphic function on $X\times{\mathbb R}$ with $v|_{\eta=0}=u$. The proof is completed.
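As an independent numerical sanity check of the BRT computation above (ours, not from the paper; the choices $\varphi(z)=|z|^2$ and $g(z)=z^2$ are toy assumptions), one can verify that $u=e^{im\theta}g(z)e^{-m\varphi(z)}$, i.e. $\tilde u e^{m\varphi}=g$ holomorphic, solves the tangential Cauchy-Riemann equation $\frac{\partial u}{\partial\bar z}-i\frac{\partial\varphi}{\partial\bar z}\frac{\partial u}{\partial\theta}=0$ in a single BRT chart with $n=1$:

```python
# Numerical check (ours): in a BRT chart with phi(z) = |z|^2, the function
# u = exp(i m theta) * g(z) * exp(-m phi) with g holomorphic satisfies
#   du/dzbar - i * (dphi/dzbar) * du/dtheta = 0,
# i.e. utilde * exp(m phi) = g holomorphic forces the CR equation, as in the text.
import cmath

m = 3

def u(x, y, theta):
    z = complex(x, y)
    g = z**2                      # a holomorphic g (toy choice)
    return cmath.exp(1j*m*theta) * g * cmath.exp(-m*abs(z)**2)

x, y, theta, h = 0.4, -0.2, 0.7, 1e-5
z = complex(x, y)
# Wirtinger derivative d/dzbar = (d/dx + i d/dy)/2, via central differences
du_dzbar = ((u(x+h, y, theta) - u(x-h, y, theta)) / (2*h)
            + 1j*(u(x, y+h, theta) - u(x, y-h, theta)) / (2*h)) / 2
du_dtheta = (u(x, y, theta+h) - u(x, y, theta-h)) / (2*h)
phi_zbar = z                      # d/dzbar of z*zbar is z
residual = du_dzbar - 1j*phi_zbar*du_dtheta
assert abs(residual) < 1e-6
```

The two contributions $-mz\,u$ (from differentiating $e^{-m|z|^2}$) and $+mz\,u$ (from the $\theta$-derivative) cancel exactly, which is the content of the displayed identity.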
Uniform estimate of Szegő kernel functions {#s-gue170704} ========================================== In this section, we will give a uniform estimate of the Szegő kernel functions on $X$. We keep the notations and assumptions of the previous sections. We first recall a recent result about Szegő kernel asymptotic expansions on CR manifolds with $S^1$-action due to Herrmann-Hsiao-Li [@hhl]. For $x, y\in X$, let $d(x,y)$ denote the Riemannian distance between $x$ and $y$ induced by $\langle\,\cdot\,|\,\cdot\,\rangle$. Let $A$ be a closed subset of $X$. Put $d(x,A):=\inf{\left\{d(x,y);\, y\in A\right\}}$. \[t-gue170704\] Recall that we work with the assumptions that $X$ is a compact connected strongly pseudoconvex CR manifold of dimension $2n+1$, $n\geq 1$, with a transversal CR $S^1$ action. With the above notations for $X_{p_{r}}, 0\leq r\leq t-1$, there are $b_{j}(x)\in C^{\infty}(X)$, $j=0,1,2,\ldots$, such that for any $r=0,1,\ldots,t-1$, any differential operator $P_{\ell}:C^{\infty}(X){\rightarrow}C^{\infty}(X)$ of order $\ell\in\mathbb{N}_{0}$ and every $N\in \mathbb{N}$, there are $\varepsilon_{0}>0$ and $C_{N}>0$ independent of $m$ such that $$\begin{split} &\Bigl|P_{\ell}\Bigl(S_{m} (x) -\sum_{s=1} ^{p_{r}}e^{\frac{2\pi(s-1)}{p_{r}}mi} \sum_{j=0}^{N-1}m^{n-j}b_{j}(x) \Bigr)\Bigr|\\ &\leq C_{N} \Bigl(m^{n-N}+m^{n+\frac{\ell}{2}} e^{-m\varepsilon_{0}d(x,X_{sing}^{r+1})^{2} } \Bigr),\ \forall m\geq 1,\ \forall x\in X_{p_{r}}, \\ \end{split}$$ where $b_{0}(x)\geq \epsilon>0$ on $X$ for some universal constant $\epsilon$. Note that when $m$ is a multiple of $p_{r}$, $\sum_{s=1} ^{p_{r}}e^{\frac{2\pi(s-1)}{p_{r}}mi}$ is equal to $p_{r}$; when $m$ is not a multiple of $p_{r}$, it is equal to $0$. \[c-gue170722a\] With the above notations and assumptions, we have $$S_{m}(x)\leq Cm^{n}, \ \forall m\geq 1, \ x\in X,$$ where $C>0$ is a constant independent of $m$. Fix $r=0,1,\ldots,t-1$.
There is an $m_0>0$ such that for every $m\geq m_0$ with $p_r|m$, we have $$S_{m}(x)\geq m^{n}(p_rb_{0}(x)-c_{1}e^{-m\varepsilon_{0}d(x,X_{{\rm sing\,}}^{r+1})^{2}}-c_1\frac{1}{m})$$ for any $x\in X_{p_{r}}$, where $c_1>0$ is a constant independent of $m$. \[c-gue170722Hyc\] With the above notations and assumptions, taking $r=0$, we have $$\lim_{m{\rightarrow}\infty}\frac{S_{m}(x)}{m^{n}}=b_{0}(x), \ \forall x\in X_{{\rm reg\,}}.$$ Let $x, x_1\in X$. We have $$\label{e-gue170721cw} \begin{split} &S_m(x)=S_m(x_1)+R_m(x,x_1),\\ &R_m(x,x_1)=\int^1_0\frac{{\partial}}{{\partial}t}\Bigr(S_m(tx+(1-t)x_1)\Bigr)dt. \end{split}$$ By Theorem \[t-gue170704\] with $\ell=1$, we have the following \[c-gue170721cw\] We have $${\left\vertR_m(x,x_1)\right\vert}\leq c_2m^{n+\frac{1}{2}}d(x,x_1),\ \ \forall (x,x_1)\in X\times X,$$ where $c_{2}>0$ is a constant independent of $m$. The main result in this section is the following \[t-gue170704r\] There exist positive integers $k_{1}<\cdots<k_{t-1}$ independent of $m$ and $m_0>0$, such that for all $m\geq m_0$ with $p_{j}|m$, $j=0,1,\ldots,t-1$, we have $$\frac{1}{C}m^n\leq S_{m}(x)+S_{k_{1}m}(x)+\cdots+S_{k_{t-1}m}(x)\leq Cm^{n}, \\ \forall x\in X,$$ where $S_{k_{j}m}(x)$ is the Szegő kernel function associated to $H^0_{b,k_{j}m}(X)$ and $C>1$ is a constant independent of $m$. Put $X^0_{{\rm sing\,}}:=X_{{\rm reg\,}}$. We claim that for every $j\in{\left\{0,1,\ldots,t-1\right\}}$, we can find $k_0:=1<k_1<\cdots<k_{t-1-j}$ and $m_0>0$ such that for all $m\geq m_0$ with $p_s|m$, $s=j, j+1,\ldots,t-1$, we have $$\label{e-gue170722y} \frac{1}{C}m^n\leq S_{m}(x)+S_{k_{1}m}(x)+\cdots+S_{k_{t-1-j}m}(x)\leq Cm^{n},\ \ \forall x\in X^j_{{\rm sing\,}},$$ where $C>1$ is a constant independent of $m$. We prove the claim by downward induction on $j$. Let $j=t-1$.
Since $X^t_{{\rm sing\,}}=\emptyset$, by Theorem \[t-gue170704\], we see that for all $m\gg1$ with $p_{t-1}|m$, we have $$S_m(x)\approx m^n\ \ \mbox{on $X^{t-1}_{{\rm sing\,}}$}.$$ The claim holds for $j=t-1$. Assume that the claim holds for some $0<j_0\leq t-1$. We are going to prove that the claim holds for $j_0-1$. By the induction hypothesis, there exist positive integers $k_0:=1<k_{1}<\cdots<k_{t-1-j_0}$ independent of $m$ and $m_0>0$ such that for all $m\geq m_0$ with $p_s|m$, $s=j_0,j_0+1,\ldots,t-1$, we have $$\label{e-gue170722yI} \frac{1}{C}m^n\leq A_m(x):=S_{m}(x)+S_{k_{1}m}(x)+\cdots+S_{k_{t-1-j_0}m}(x)\leq Cm^{n}, \ \ \forall x\in X^{j_0}_{{\rm sing\,}},$$ where $C>1$ is a constant independent of $m$. In view of Corollary \[c-gue170722a\], we see that there is a large constant $C_0>1$ and $m_1>0$ such that for all $m\geq m_1$ with $p_{j_0-1}|m$ and all $x\in X_{p_{j_0-1}}$ with $d(x, X^{j_0}_{{\rm sing\,}})\geq\frac{C_0}{\sqrt{m}}$, we have $$\label{e-gue170722yc} S_m(x)\geq cm^n,$$ where $c>0$ is a constant independent of $m$. Fix $C_0>0$ as above, and let $k\in\mathbb N$ and $m\gg1$ with $p_s|m$, $s=j_0,j_0+1,\ldots,t-1$. Consider the set $$S_{k,m}:={\left\{x\in X_{p_{j_0-1}};\, d(x, X^{j_0}_{{\rm sing\,}})\leq\frac{C_0}{\sqrt{km}}\right\}}.$$ Let $x\in S_{k,m}$. Since $X^{j_0}_{{\rm sing\,}}$ is a closed subset of $X$ by Proposition 2.2, there is a point $x_2\in X^{j_0}_{{\rm sing\,}}$ such that $d(x,x_{2})=d(x,X^{j_0}_{{\rm sing\,}})$.
By (\[e-gue170721cw\]), we write $$\label{e-gue170722pI} \begin{split} &A_m(x)=S_{m}(x)+S_{k_{1}m}(x)+\cdots+S_{k_{t-1-j_0}m}(x)\\ &=\Bigr(S_m(x_2)+S_{k_{1}m}(x_2)+\cdots+S_{k_{t-1-j_0}m}(x_2)\Bigr)\\ &\quad\quad+\Bigr(R_m(x,x_2)+R_{k_{1}m}(x,x_2)+\cdots+R_{k_{t-1-j_0}m}(x,x_2)\Bigr)\\ &=A_{m}(x_{2})(1+v_{m}(x,x_2)), \end{split}$$ where $$v_m(x,x_2):=(A_m(x_2))^{-1}\Bigr(R_m(x,x_2)+R_{k_{1}m}(x,x_2)+\cdots+R_{k_{t-1-j_0}m}(x,x_2)\Bigr).$$ Then, by Corollary \[c-gue170721cw\], $$\label{e-gue170722p} |v_{m}|\lesssim \frac{C_0}{\sqrt{km}}m^{-n}m^{n+\frac{1}{2}}\lesssim \frac{C_0}{\sqrt{k}}.$$ From (\[e-gue170722pI\]) and (\[e-gue170722p\]), we see that there is a large constant $k_{t-j_0}$ and $m_2>0$ such that for all $m\geq m_2$ with $p_s|m$, $s=j_0,j_0+1,\ldots,t-1$, we have $$\label{e-gue170722pII} A_m(x)\geq\hat c m^n,\ \ \forall x\in S_{k_{t-j_0},m}:={\left\{x\in X_{p_{j_0-1}};\, d(x, X^{j_0}_{{\rm sing\,}})\leq\frac{C_0}{\sqrt{k_{t-j_0}m}}\right\}},$$ where $\hat c>0$ is a constant independent of $m$. In view of (\[e-gue170722yc\]), we see that for all $m\geq\max{\left\{m_1, m_2\right\}}$ with $p_{j_0-1}|m$, we have $$\label{e-gue170722yca} S_{k_{t-j_0}m}(x)\geq {\widetilde}cm^n,\ \ \mbox{$\forall x\in X_{p_{j_0-1}}$ with $d(x, X^{j_0}_{{\rm sing\,}})\geq\frac{C_0}{\sqrt{k_{t-j_0}m}}$},$$ where ${\widetilde}c>0$ is a constant independent of $m$. From (\[e-gue170722pII\]) and (\[e-gue170722yca\]), we get the claim for $j=j_0-1$. This completes the induction; the claim, and hence the theorem, follows. Equidistribution on CR manifolds ================================ This section is devoted to proving Theorem \[t-gue170704ryz\]. For simplicity, we assume that $X=X_{p_0}\bigcup X_{p_1}$, $p_0=1$. The proof of the general case is similar. Let $k_1$ be as in Theorem \[t-gue170704r\]. Let $\alpha=[1,p_1]=p_1$. We recall some notations used in Section \[s-gue170727\].
For each $m\in\mathbb N$, put $A_m(X):=H^0_{b,\alpha m}(X)\oplus H^0_{b,\alpha k_1m}(X)$, $SA_m(X):={\left\{g\in A_m(X);\, (\,g\,|\,g\,)=1\right\}}$ and let $d\mu_m$ denote the uniform probability measure on the unit sphere $SA_m(X)$. We consider the probability space $\Omega(X):=\prod^\infty_{m=1}SA_m(X)$ with the probability measure $d\mu:=\prod^\infty_{m=1}d\mu_m$. Fix $\chi(\eta)\in C^\infty_0({\mathbb R})$ with $\int\chi(\eta)d\eta=1$ and let $\varepsilon_m$ be a sequence with $\lim_{m{\rightarrow}\infty}m\varepsilon_m=0$. Let $D$ be a local BRT canonical coordinate patch with canonical local coordinates $(z,\theta,\varphi)$. Let $x=(x_1,\ldots,x_{2n+1})=(z,\theta)$, $z_j=x_{2j-1}+ix_{2j}$, $j=1,\ldots,n$. Fix $f\in\Omega^{n-1,n-1}_0(D)$. We only need to show that for $d\mu$-almost every ${\left\{u_m\right\}}\in\Omega(X)$, we have $$\label{e-gue170729} \lim_{m{\rightarrow}\infty}\frac{1}{m}\langle\,[v_{m}=0], f\wedge\omega_0\wedge \frac{1}{\varepsilon_m}\chi(\frac{\eta}{\varepsilon_m})d\eta\,\rangle =\alpha\frac{1+k_{1}^{n+1}}{1+k_{1}^{n}}\frac{i}{\pi}\int_{X}\mathcal{L}_{X}\wedge f\wedge \omega_{0},$$ where $v_m(x,\eta)\in C^\infty(X\times{\mathbb R})$ is the unique holomorphic function on $X\times{\mathbb R}$ with $v_m(x,\eta)|_{\eta=0}=u_m(x)$. Let $u\in S A_m(X)$ and let $v(z,\theta,\eta)$ be the holomorphic function on $X\times{\mathbb R}$ with $v|_{\eta=0}=u$. For simplicity, let $m_1:=\alpha m$, $m_2:=\alpha k_1m$.
On $D$, we write $$u=u_{1}+u_{2}=\tilde u_{1}(z)e^{im_{1}\theta}+\tilde u_{2}(z)e^{im_{2}\theta}\in H^{0}_{b,m_{1}}(X)\oplus H^{0}_{b,m_{2}}(X).$$ Then, $$v(z,\theta,\eta-\varphi(z))=\tilde u_{1}(z)e^{im_{1}\theta+m_{1}(\varphi-\eta)}+\tilde u_{2}(z)e^{im_{2}\theta+m_{2}(\varphi-\eta)}.$$ We have $$\begin{split} &\langle[v=0], f\wedge\omega_0\wedge \frac{1}{\varepsilon_m}\chi(\frac{\eta}{\varepsilon_m})d\eta\rangle\\ &=\langle[v(z,\theta,\eta-\varphi(z))=0], f(z,\theta)\wedge\omega_0(z,\theta)\wedge\chi(\frac{\eta-\varphi(z)}{\varepsilon_m})\frac{1}{\varepsilon_m}(d\eta-d\varphi)\rangle. \end{split}$$ Note that $\frac{{\partial}}{{\partial}{\overline}z_j}v(z,\theta,\eta-\varphi(z))=0$, $j=1,\ldots,n$, $(\frac{{\partial}}{{\partial}\theta}-i\frac{{\partial}}{{\partial}\eta})v(z,\theta,\eta-\varphi(z))=0$. From this observation and the Lelong-Poincaré formula, we deduce that $$\label{e-gue170729y} \begin{split} &\langle[v=0], f\wedge\omega_0\wedge \frac{1}{\varepsilon_m}\chi(\frac{\eta}{\varepsilon_m})d\eta\rangle\\ &=\frac{i}{2\pi}\int\partial \bar \partial \log|\tilde u_{1}(z)e^{im_{1}\theta+m_{1}(\varphi-\eta)}+\tilde u_{2}(z)e^{im_{2}\theta+m_{2}(\varphi-\eta)}|^2\\ &\quad\wedge f(z,\theta)\wedge\chi(\frac{\eta-\varphi}{\varepsilon_m})\frac{1}{\varepsilon_m}(d\eta-d\varphi), \end{split}$$ where $\partial$ and ${\overline\partial}$ denote the standard ${\partial}$- and ${\overline\partial}$-operators in the $z$-coordinates. Let $S_{m_1}$ (resp. $S_{m_2}$) be the Szegő kernel function of $H^{0}_{b,m_1}(X)$ (resp. $H^{0}_{b,m_2}(X)$).
By using the same arguments as in Shiffman-Zelditch’s paper [@sz Section 3] and (\[e-gue170729y\]), we deduce that for $d\mu$-almost every $\{u_{m}\}\in\Omega(X)$, we have $$\label{e-gue170730} \begin{split} \lim_{m{\rightarrow}\infty}&\Bigl(\frac{1}{m}\langle\,[v_{m}=0], f\wedge\omega_0\wedge \frac{1}{\varepsilon_m}\chi(\frac{\eta}{\varepsilon_m})d\eta\,\rangle -\frac{i}{2m\pi}\\ &\int\partial\bar \partial\log(e^{2m_{1}(\varphi-\eta)}S_{m_1}+e^{2m_{2}(\varphi-\eta)}S_{m_2})\wedge f\wedge\omega_0\wedge\chi(\frac{\eta-\varphi}{\varepsilon_m})\frac{1}{\varepsilon_m}(d\eta-d\varphi)\Bigr)=0. \end{split}$$ Let $F_{m}=e^{2m_{1}(\varphi-\eta)}S_{m_1}+e^{2m_{2}(\varphi-\eta)}S_{m_2}$. In view of (\[e-gue170730\]), to prove Theorem \[t-gue170704ryz\], it suffices to compute $$\label{e-gue170730yc} \lim_{m{\rightarrow}\infty}\frac{i}{2m\pi}\int\partial\bar \partial \log F_{m}\wedge f\wedge\omega_0\wedge\chi(\frac{\eta-\varphi}{\varepsilon_m})\frac{1}{\varepsilon_m}(d\eta-d\varphi).$$ Recall that $S_{m_1}+S_{m_2}\approx m^{n}$ on $X$ (see Theorem \[t-gue170704r\]). We write $F=F_{m}, a_{1}=S_{m_1}, a_{2}=S_{m_2}$ for short.
We have $$\label{e-gue170730Iq} \partial\bar \partial \log F=\frac{\partial\bar \partial F}{F}-\frac{\partial F\wedge \bar \partial F}{F^{2}}.$$ We can check that $$\label{e-gue170730I} \begin{split} \partial F&=\partial(e^{2m_{1}(\varphi-\eta)}a_{1}+e^{2m_{2}(\varphi-\eta)}a_{2})\\ &=e^{2m_{1}(\varphi-\eta)}\partial a_{1}+e^{2m_{2}(\varphi-\eta)}\partial a_{2}\\ &\quad +2m_{1}a_{1}e^{2m_{1}(\varphi-\eta)}\partial (\varphi-\eta)+ 2m_{2}a_{2}e^{2m_{2}(\varphi-\eta)}\partial (\varphi-\eta), \end{split}$$ $$\label{e-gue170730II} \begin{split} \bar\partial F&=\bar\partial(e^{2m_{1}(\varphi-\eta)}a_{1}+e^{2m_{2}(\varphi-\eta)}a_{2})\\ &=e^{2m_{1}(\varphi-\eta)}\bar\partial a_{1}+e^{2m_{2}(\varphi-\eta)}\bar\partial a_{2}\\ &\quad +2m_{1}a_{1}e^{2m_{1}(\varphi-\eta)}\bar\partial (\varphi-\eta)+ 2m_{2}a_{2}e^{2m_{2}(\varphi-\eta)}\bar\partial (\varphi-\eta).\\ \end{split}$$ and $$\label{e-gue170730b} \begin{split} \partial F\wedge\bar\partial F&=(e^{2m_{1}(\varphi-\eta)}\partial a_{1}+e^{2m_{2}(\varphi-\eta)}\partial a_{2}\\ &\quad +2m_{1}a_{1}e^{2m_{1}(\varphi-\eta)}\partial (\varphi-\eta)+ 2m_{2}a_{2}e^{2m_{2}(\varphi-\eta)}\partial (\varphi-\eta))\\ &\quad\wedge (e^{2m_{1}(\varphi-\eta)}\bar\partial a_{1}+e^{2m_{2}(\varphi-\eta)}\bar\partial a_{2}\\ &\quad +2m_{1}a_{1}e^{2m_{1}(\varphi-\eta)}\bar\partial (\varphi-\eta)+ 2m_{2}a_{2}e^{2m_{2}(\varphi-\eta)}\bar\partial (\varphi-\eta)). 
\end{split}$$ Moreover, we have $$\label{e-gue170730III} \begin{split} \partial\bar\partial F&=\partial(e^{2m_{1}(\varphi-\eta)}\bar\partial a_{1}+e^{2m_{2}(\varphi-\eta)}\bar\partial a_{2}\\ &\quad +2m_{1}a_{1}e^{2m_{1}(\varphi-\eta)}\bar\partial (\varphi-\eta)+ 2m_{2}a_{2}e^{2m_{2}(\varphi-\eta)}\bar\partial (\varphi-\eta))\\ &=e^{2m_{1}(\varphi-\eta)}\partial\bar\partial a_{1}+2m_{1}e^{2m_{1}(\varphi-\eta)}\partial(\varphi-\eta)\wedge\bar\partial a_{1}\\ &\quad +e^{2m_{2}(\varphi-\eta)}\partial\bar\partial a_{2}+2m_{2}e^{2m_{2}(\varphi-\eta)}\partial(\varphi-\eta)\wedge\bar\partial a_{2}\\ &\quad +2m_{1}a_{1}e^{2m_{1}(\varphi-\eta)}\partial\bar\partial(\varphi-\eta) +2m_{1}\partial(a_{1}e^{2m_{1}(\varphi-\eta)})\wedge\bar\partial(\varphi-\eta)\\ &\quad +2m_{2}a_{2}e^{2m_{2}(\varphi-\eta)}\partial\bar\partial(\varphi-\eta) +2m_{2}\partial(a_{2}e^{2m_{2}(\varphi-\eta)})\wedge\bar\partial(\varphi-\eta), \end{split}$$ and furthermore, we have $$\label{e-gue170730a} \begin{split} &2m_{1}\partial(a_{1}e^{2m_{1}(\varphi-\eta)})\wedge\bar\partial(\varphi-\eta)\\ &=2m_{1}(e^{2m_{1}(\varphi-\eta)}\partial a_{1}\wedge\bar\partial(\varphi-\eta)+2m_{1}a_{1} e^{2m_{1}(\varphi-\eta)}\partial(\varphi-\eta)\wedge\bar\partial(\varphi-\eta) ) \end{split}$$ and $$\label{e-gue170730aI} \begin{split} &2m_{2}\partial(a_{2}e^{2m_{2}(\varphi-\eta)})\wedge\bar\partial(\varphi-\eta)\\ &=2m_{2}(e^{2m_{2}(\varphi-\eta)}\partial a_{2}\wedge\bar\partial(\varphi-\eta)+2m_{2}a_{2} e^{2m_{2}(\varphi-\eta)}\partial(\varphi-\eta)\wedge\bar\partial(\varphi-\eta) ).
\end{split}$$ We first compute the following kinds of terms in (\[e-gue170730yc\]): $$\label{e-gue170730ycI} \int e^{2m_{j}(\varphi-\eta)}\partial (\varphi-\eta)\wedge\bar\partial a_{j}/F\wedge f\wedge\omega_0\wedge\chi(\frac{\eta-\varphi}{\varepsilon_m})\frac{1}{\varepsilon_m}(d\eta-d\varphi),\ \ j\in{\left\{1,2\right\}}.$$ $$\label{e-gue170730ycIb} \int e^{2m_{j}(\varphi-\eta)}{\overline\partial}(\varphi-\eta)\wedge{\partial}a_{j}/F\wedge f\wedge\omega_0\wedge\chi(\frac{\eta-\varphi}{\varepsilon_m})\frac{1}{\varepsilon_m}(d\eta-d\varphi),\ \ j\in{\left\{1,2\right\}}.$$ $$\label{e-gue170730ycII} \int \frac{1}{m}e^{2m_{j}(\varphi-\eta)}\partial \bar\partial a_{j}/F\wedge f\wedge\omega_0\wedge\chi(\frac{\eta-\varphi}{\varepsilon_m})\frac{1}{\varepsilon_m}(d\eta-d\varphi),\ \ j\in{\left\{1,2\right\}}.$$ $$\label{e-gue170730ycIII} \int a_{j}e^{2m_{j}(\varphi-\eta)}e^{2m_{k}(\varphi-\eta)}\partial a_{k}\wedge\bar\partial (\varphi-\eta)/F^{2}\wedge f\wedge\omega_0\wedge\chi(\frac{\eta-\varphi}{\varepsilon_m})\frac{1}{\varepsilon_m}(d\eta-d\varphi),\ \ j,k\in{\left\{1,2\right\}}.$$ $$\label{e-gue170730ycIIIb} \int a_{j}e^{2m_{j}(\varphi-\eta)}e^{2m_{k}(\varphi-\eta)}{\overline\partial}a_{k}\wedge{\partial}(\varphi-\eta)/F^{2}\wedge f\wedge\omega_0\wedge\chi(\frac{\eta-\varphi}{\varepsilon_m})\frac{1}{\varepsilon_m}(d\eta-d\varphi),\ \ j,k\in{\left\{1,2\right\}}.$$ $$\label{e-gue170730ych} \int \frac{1}{m}e^{2m_{j}(\varphi-\eta)}e^{2m_{k}(\varphi-\eta)}\partial a_{j}\wedge\bar\partial a_{k}/F^{2}\wedge f\wedge\omega_0\wedge\chi(\frac{\eta-\varphi}{\varepsilon_m})\frac{1}{\varepsilon_m}(d\eta-d\varphi),\ \ j,k\in{\left\{1,2\right\}}.$$ It is straightforward to check that $$\partial (\varphi-\eta)\wedge\omega_0\wedge (d\eta-d\varphi)=0,\ \ {\overline\partial}(\varphi-\eta)\wedge\omega_0\wedge(d\eta-d\varphi)=0.$$ From this observation, we see that the terms (\[e-gue170730ycI\]), (\[e-gue170730ycIb\]), (\[e-gue170730ycIII\]) and (\[e-gue170730ycIIIb\]) are zero.
For (\[e-gue170730ycII\]) and (\[e-gue170730ych\]), from Theorem \[t-gue170704\] and the Lebesgue dominated convergence theorem, we have $$\begin{split} &{\left\vert\int \frac{1}{m}e^{2m_{j}(\varphi-\eta)}\partial \bar\partial a_{j}/F\wedge f\wedge\omega_0\wedge\chi(\frac{\eta-\varphi}{\varepsilon_m})\frac{1}{\varepsilon_m}(d\eta-d\varphi)\right\vert}\\ &\lesssim \frac{1}{m}\int_X\frac{m_{1}^{n}+m_{1}^{n+1}e^{-m_{1}\varepsilon_{0}d^{2}(x,X_{{\rm sing\,}})}}{m^n}{\rightarrow}0\ \ \mbox{as $m{\rightarrow}\infty$},\ \ \forall j\in{\left\{1,2\right\}}, \end{split}$$ and $$\begin{split} &{\left\vert\int \frac{1}{m}e^{2m_{j}(\varphi-\eta)}e^{2m_{k}(\varphi-\eta)}\partial a_{j}\wedge\bar\partial a_{k}/F^{2}\wedge f\wedge\omega_0\wedge\chi(\frac{\eta-\varphi}{\varepsilon_m})\frac{1}{\varepsilon_m}(d\eta-d\varphi)\right\vert}\\ &\lesssim \frac{1}{m}\int_X\frac{m_{1}^{n}+m_{1}^{n+1}e^{-m_{1}\varepsilon_{0}d^{2}(x,X_{{\rm sing\,}})}}{m^n}{\rightarrow}0\ \ \mbox{as $m{\rightarrow}\infty$},\ \ \forall j, k\in{\left\{1,2\right\}}. \end{split}$$ From (\[e-gue170730III\]), (\[e-gue170730a\]), (\[e-gue170730aI\]) and the discussion above, we conclude that the only contributing terms in (\[e-gue170730yc\]) are those involving $\partial\bar \partial\varphi$, which is exactly the Levi form $\mathcal{L}_{X}$ of $X$.
Then for $d\mu$-almost every $\{u_{m}\}\in\Omega(X)$, we have $$\label{e-gue170801} \begin{split} &\lim_{m{\rightarrow}\infty}\frac{1}{m}\langle\,[v_{m}=0], f\wedge\omega_0\wedge \frac{1}{\varepsilon_m}\chi(\frac{\eta}{\varepsilon_m})d\eta\,\rangle\\ &=\lim_{m{\rightarrow}\infty}\frac{i}{2m\pi}\int\partial\bar \partial \log F_{m}\wedge f\wedge\omega_0\wedge\chi(\frac{\eta-\varphi}{\varepsilon_m})\frac{1}{\varepsilon_m}(d\eta-d\varphi)\\ &=\lim_{m{\rightarrow}\infty}\frac{i}{2m\pi}\int (2m_{1}a_{1}e^{2m_{1}(\varphi-\eta)}+ 2m_{2}a_{2}e^{2m_{2}(\varphi-\eta)})/F_{m}\cdot\partial\bar \partial\varphi\wedge f\wedge\omega_0\wedge\chi(\frac{\eta-\varphi}{\varepsilon_m})\frac{1}{\varepsilon_m}(d\eta-d\varphi)\\ &=\lim_{m{\rightarrow}\infty}\frac{i}{\pi}\int \frac{\alpha S_{\alpha m}(x)+k_1\alpha S_{\alpha k_1m}(x)}{S_{\alpha m}(x)+S_{\alpha k_1m}(x)} \partial\bar \partial\varphi\wedge f\wedge\omega_0. \end{split}$$ From Corollary \[c-gue170722Hyc\], Theorem \[t-gue170704\], the Lebesgue dominated convergence theorem and (\[e-gue170801\]), we deduce (\[e-gue170729\]). Theorem \[t-gue170704ryz\] follows. Equidistribution on complex manifolds with strongly pseudoconvex boundary {#s-gue170709} ========================================================================= In this section, we will prove Theorem \[t-gue170703c\]. Let $M$ be a relatively compact open subset, with $C^\infty$ boundary $X$, of a complex manifold $M'$ of dimension $n+1$ carrying a smooth Hermitian metric $\langle\,\cdot\,|\,\cdot\,\rangle$ on its holomorphic tangent bundle $T^{1,0}M'$. From now on, we will use the same notations and assumptions as in the discussion before Theorem \[t-gue170703c\]. We will first recall the classical results of Boutet de Monvel-Sjöstrand [@BouSj76] (see also the second part of [@Hsiao08]). We then construct holomorphic functions with a specific growth rate near the boundary. We first recall the Hörmander symbol spaces. \[Bd:0712101500\] Let $m\in{\mathbb R}$.
$S^{m}_{1, 0}(M'\times M'\times]0, \infty[)$ is the space of all $a(x, y, t)\in C^\infty(M'\times M'\times]0, \infty[)$ such that for every local coordinate patch $U$ with local coordinates $x=(x_1,\ldots,x_{2n+2})$, all compact sets $K\subset U$ and all $\alpha\in\mathbb N^{2n+2}_0$, $\beta\in\mathbb N^{2n+2}_0$, $\gamma\in\mathbb N_0$, there is a constant $c>0$ such that ${\left\vert{\partial}^\alpha_x{\partial}^\beta_y{\partial}^\gamma_t a(x, y, t)\right\vert}\leq c(1+{\left\vertt\right\vert})^{m-{\left\vert\gamma\right\vert}}$, $(x, y, t)\in K\times K\times]0, \infty[$. $S^m_{1, 0}$ is called the space of symbols of order $m$ and type $(1, 0)$. We write $S^{-\infty}_{1, 0}=\bigcap_{m\in\mathbb R} S^m_{1, 0}$. Let $S^{m}_{1, 0}({\overline}M\times{\overline}M\times]0, \infty[)$ denote the space of restrictions to $M\times M\times]0, \infty[$ of elements in $S^{m}_{1, 0}(M'\times M'\times]0, \infty[)$. Let $a_j\in S^{m_j}_{1, 0}({\overline}M\times{\overline}M\times]0, \infty[)$, $j=0,1,2,\dots$, with $m_j\searrow -\infty$ as $j{\rightarrow}\infty$. Then there exists $a\in S^{m_0}_{1, 0}({\overline}M\times{\overline}M\times]0, \infty[)$ such that $a-\sum_{0\leq j<k}a_j\in S^{m_k}_{1, 0}({\overline}M\times{\overline}M\times]0, \infty[)$, for every $k\in\mathbb N$. If $a$ and $a_j$ have the properties above, we write $a\sim\sum^\infty_{j=0}a_j \text{ in } S^{m_0}_{1, 0}({\overline}M\times{\overline}M\times]0, \infty[)$. Let $dv_M$ be the volume form on $M$ induced by $\langle\,\cdot\,|\,\cdot\,\rangle$, let $(\,\cdot\,|\,\cdot\,)_M$ be the $L^2$ inner product on $C^\infty_0(M)$ induced by $dv_M$ and let $L^2(M)$ be the completion of $C^\infty_0(M)$ with respect to $(\,\cdot\,|\,\cdot\,)_M$. Let $H^0(M)={\left\{u\in L^2(M);\, {\overline\partial}u=0\right\}}$. Let $B: L^2(M){\rightarrow}H^0(M)$ be the orthogonal projection with respect to $(\,\cdot\,|\,\cdot\,)_M$ and let $B(z,w)\in D'(M\times M)$ be the distribution kernel of $B$.
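Before stating the Boutet de Monvel-Sjöstrand expansion, a model case may help fix ideas (our own sanity check, not used in the proofs): for the unit disc $M\subset\mathbb C$ (so $n=0$) with the Lebesgue inner product, the monomials $g_j=\sqrt{j/\pi}\,z^{j-1}$, $j=1,2,\ldots$, form an orthonormal basis of $H^0(M)$, and $\sum_j|g_j(z)|^2$ reproduces the Bergman kernel on the diagonal, $B(z,z)=\frac{1}{\pi(1-|z|^2)^2}$, exhibiting the boundary blow-up rate $r^{-n-2}$ with the defining function $r(z)=|z|^2-1$.

```python
# Model-case check (ours): on the unit disc with Lebesgue measure, the
# monomials g_j = sqrt(j/pi) z^(j-1), j = 1, 2, ..., are an orthonormal basis
# of H^0(M), and on the diagonal
#   sum_j |g_j(z)|^2 = B(z, z) = 1 / (pi * (1 - |z|^2)^2),
# with m(x) = 1 in these flat coordinates.
import math

def kernel_diag(z, terms=4000):
    # partial sum of sum_{k>=0} (k+1) |z|^(2k) / pi
    return sum((k + 1) / math.pi * abs(z) ** (2 * k) for k in range(terms))

z = 0.5 + 0.3j
exact = 1.0 / (math.pi * (1.0 - abs(z) ** 2) ** 2)
assert abs(kernel_diag(z) - exact) < 1e-8
```

This is the disc instance of the identity $\sum_j|g_j(x_0)|^2=B(x_0,x_0)m(x_0)$ proved below for an arbitrary orthonormal basis.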
We recall the classical result of Boutet de Monvel-Sjöstrand [@BouSj76]. \[t-gue170715\] With the notations and assumptions above, we have $$\label{e-gue170716} B(z, w)=\int^\infty_0\!\!e^{i\phi(z, w)t}b(z, w, t)dt+H(z,w),$$ (for the precise meaning of the oscillatory integral $\int^\infty_0\!\!e^{i\phi(z, w)t}b(z, w, t)dt$, see Remark \[Br:0712111922\] below) where $H(z,w)\in C^\infty({\overline}M\times{\overline}M)$, $$\label{e-gue170717a} \begin{split} &b(z, w, t)\in S^{n+1}_{1, 0}({\overline}M\times{\overline}M\times]0, \infty[),\\ &\mbox{$b(z, w, t)\sim\sum^\infty_{j=0}b_j(z, w)t^{n+1-j}$ in the space $S^{n+1}_{1, 0}({\overline}M\times{\overline}M\times]0, \infty[)$},\\ &b_j(z, w)\in C^\infty({\overline}M\times{\overline}M),\ \ j=0,1,\ldots,\\ &b_0(z, z)\neq0,\ z\in X, \end{split}$$ and $$\label{e-gue170717aI} \begin{split} &\phi(z, w)\in C^\infty({\overline}M\times{\overline}M),\\ &\phi(z, z)=0,\ \ z\in X,\ \ \phi(z, w)\neq0\ \ \mbox{if}\ \ (z, w)\notin{\rm diag\,}(X\times X), \\ &{\rm Im\,}\phi(z, w)>0\ \ \mbox{if}\ \ (z, w)\notin X\times X, \\ &\mbox{$\phi(z,z)=r(z)g(z)$ on ${\overline}M$, $g(z)\in C^\infty({\overline}M)$ with ${\left\vertg(z)\right\vert}>c$ on ${\overline}M$, $c>0$ is a constant}. \end{split}$$ Moreover, there is a constant $C>1$ such that $$\label{e-gue170717aII} \frac{1}{C}({\rm dist\,}(x,y))^2\leq {\left\vertd_y\phi(x,y)\right\vert}^2+{\left\vert{\rm Im\,}\phi(x,y)\right\vert}\leq C({\rm dist\,}(x,y))^2,\ \ \forall (x,y)\in X\times X,$$ where $d_y$ denotes the exterior derivative on $X$ and ${\rm dist\,}(x,y)$ denotes the distance between $x$ and $y$ with respect to the given Hermitian metric $\langle\,\cdot\,|\,\cdot\,\rangle$ on $X$. \[Br:0712111922\] Let $\phi$ and $b(z, w, t)$ be as in Theorem \[t-gue170715\]. Let $y=(y_1,\ldots,y_{2n+1})$ be local coordinates on $X$ and extend $y_1,\ldots,y_{2n+1}$ to real smooth functions in some neighborhood of $X$.
We work with local coordinates $w=(y_1,\ldots,y_{2n+1},r)$ defined on some neighborhood $U$ of $p\in X$. Let $u\in C^\infty_0(U)$. Choose a cut-off function $\chi(t)\in C^\infty({\mathbb R})$ so that $\chi(t)=1$ when ${\left\vertt\right\vert}<1$ and $\chi(t)=0$ when ${\left\vertt\right\vert}>2$. Set $$(B_{\epsilon}u)(z)=\int^\infty_0\int_{{\overline}M}e^{i\phi(z, w)t}b(z, w, t)\chi(\epsilon t)u(w)dv_M(w)dt.$$ Since $d_y\phi\neq0$ where ${\rm Im\,}\phi=0$ (see (\[e-gue170717aII\])), we can integrate by parts in $y$ and $t$ and obtain $\lim_{\epsilon{\rightarrow}0}(B_\epsilon u)(z)\in C^\infty({\overline}M)$. This means that $B=\lim_{\epsilon{\rightarrow}0}B_\epsilon: C^\infty({\overline}M){\rightarrow}C^\infty({\overline}M)$ is continuous. We have the following corollary of Theorem \[t-gue170715\] \[Bi-c:co1\] Under the notations and assumptions above, we have $$\label{e-gue170717f} B(z,z)=F(z)(r(z))^{-n-2}+G(z)\log(-ir(z))\ \ \mbox{on ${\overline}M$},$$ where $F, G\in C^\infty({\overline}M)$ and ${\left\vertF(z)\right\vert}>c$ on $X$, $c>0$ is a constant. Since $C^\infty({\overline}M)\bigcap H^0(M)$ is dense in $H^0(M)$ with respect to the $L^2(M)$-norm, we can find $g_j\in C^\infty({\overline}M)\bigcap H^0(M)$ with $(\, g_j\,|\,g_k\,)_M=\delta_{j,k}$, $j, k=1,2,\ldots$, such that the set $$\label{e-gue170709qm} A(M):=\rm{span\,}{\left\{g_1, g_2,\ldots\right\}}$$ is dense in $H^0(M)$. Moreover, for every $u\in L^2(M)$, we have $$\label{e-gue170715} \mbox{$\sum^N_{j=1}g_j(\,u\,|\,g_j\,)_M{\rightarrow}Bu$ in $L^2(M)$}\ \ \mbox{as $N{\rightarrow}\infty$}.$$ Fix $k\in\mathbb N$, $k$ large. Fix $x_0\in M$ with $\frac{1}{2k}\leq{\left\vertr(x_0)\right\vert}\leq\frac{1}{k}$. Let $x=(x_1,\ldots,x_{2n+2})$ be local coordinates of $M$ defined in a small neighborhood of $x_0$ with $x(x_0)=0$. Let $\chi\in C^\infty_0({\mathbb R}^{2n+2})$ with $\chi\equiv1$ near $0\in{\mathbb R}^{2n+2}$. For $\varepsilon>0$, put $\chi_\varepsilon(x)=\varepsilon^{-(2n+2)}\chi(\frac{x}{\varepsilon})$.
From (\[e-gue170715\]), for every $\varepsilon>0$, $\varepsilon$ small, we have $$\label{e-gue170715I} \sum^\infty_{j=1}{\left\vert(\,g_j\,|\,\chi_\varepsilon\,)_M\right\vert}^2=(\,B\chi_\varepsilon\,|\,\chi_\varepsilon\,)_M.$$ Since $B(z,w)\in C^\infty(M\times M)$, we have $$\label{e-gue170715III} \lim_{\varepsilon{\rightarrow}0}\Bigr(\sum^\infty_{j=1}{\left\vert(\,g_j\,|\,\chi_\varepsilon\,)_M\right\vert}^2\Bigr)=B(x_0,x_0)m(x_0),$$ where $m(x)dx_1\cdots dx_{2n+2}=dv_M$. From (\[e-gue170715\]), for every $\varepsilon_1, \varepsilon_2>0$, $\varepsilon_1, \varepsilon_2$ small, we have $$\label{e-gue170715a} \begin{split} &\sum^\infty_{j=1}{\left\vert(\,g_j\,|\,\chi_{\varepsilon_1}\,)_M-(\,g_j\,|\,\chi_{\varepsilon_2}\,)_M\right\vert}^2\\ &=(\,B\chi_{\varepsilon_1}\,|\,\chi_{\varepsilon_1}\,)_M- (\,B\chi_{\varepsilon_1}\,|\,\chi_{\varepsilon_2}\,)_M-(\,B\chi_{\varepsilon_2}\,|\,\chi_{\varepsilon_1}\,)_M+(\,B\chi_{\varepsilon_2}\,|\,\chi_{\varepsilon_2}\,)_M. \end{split}$$ Since $B(z,w)\in C^\infty(M\times M)$, we deduce that for every $\delta>0$, there is a $C_\delta>0$ such that for all $0<\varepsilon_1, \varepsilon_2<C_\delta$, we have $$\label{e-gue170715aI} \sum^\infty_{j=1}{\left\vert(\,g_j\,|\,\chi_{\varepsilon_1}\,)_M-(\,g_j\,|\,\chi_{\varepsilon_2}\,)_M\right\vert}^2<\delta.$$ Now, we can prove \[t-gue170715b\] We have $\sum^\infty_{j=1}{\left\vertg_j(x_0)\right\vert}^2=B(x_0,x_0)m(x_0)$. From (\[e-gue170715III\]), it is easy to see that $$\label{e-gue170715b} \sum^\infty_{j=1}{\left\vertg_j(x_0)\right\vert}^2\leq B(x_0,x_0)m(x_0).$$ Let $\delta>0$ and fix $0<\varepsilon_0<C_\delta$, where $C_\delta$ is as in (\[e-gue170715aI\]).
Since $\sum^\infty_{j=1}{\left\vert(\,g_j\,|\,\chi_{\varepsilon_0}\,)_M\right\vert}^2<\infty$, there is an $N\in\mathbb N$ such that $$\label{e-gue170715bI} \sum^{\infty}_{j=N+1}{\left\vert(\,g_j\,|\,\chi_{\varepsilon_0}\,)_M\right\vert}^2<\delta.$$ Now, for every $0<\varepsilon<\varepsilon_0$, from (\[e-gue170715aI\]) and (\[e-gue170715bI\]), we have $$\label{e-gue170715bII} \begin{split} \sum^{\infty}_{j=N+1}{\left\vert(\,g_j\,|\,\chi_{\varepsilon}\,)_M\right\vert}^2 &\leq 2\sum^\infty_{j=N+1}{\left\vert(\,g_j\,|\,\chi_{\varepsilon}\,)_M-(\,g_j\,|\,\chi_{\varepsilon_0}\,)_M\right\vert}^2+2\sum^\infty_{j=N+1}{\left\vert(\,g_j\,|\,\chi_{\varepsilon_0}\,)_M\right\vert}^2\\ &\leq 2\sum^\infty_{j=1}{\left\vert(\,g_j\,|\,\chi_{\varepsilon}\,)_M-(\,g_j\,|\,\chi_{\varepsilon_0}\,)_M\right\vert}^2+2\sum^\infty_{j=N+1}{\left\vert(\,g_j\,|\,\chi_{\varepsilon_0}\,)_M\right\vert}^2\\ &\leq 4\delta. \end{split}$$ From (\[e-gue170715bII\]), we deduce that $$\label{e-gue170717y} \limsup_{\varepsilon{\rightarrow}0}\sum^{\infty}_{j=N+1}{\left\vert(\,g_j\,|\,\chi_{\varepsilon}\,)_M\right\vert}^2\leq 4\delta.$$ Now, $$\label{e-gue170717yI} \begin{split} &\sum^\infty_{j=1}{\left\vertg_j(x_0)\right\vert}^2\geq\sum^N_{j=1}{\left\vertg_j(x_0)\right\vert}^2=\lim_{\varepsilon{\rightarrow}0}\sum^N_{j=1}{\left\vert(\,g_j\,|\,\chi_\varepsilon\,)_M\right\vert}^2\\ &\geq\liminf_{\varepsilon{\rightarrow}0}\Bigr(\sum^\infty_{j=1}{\left\vert(\,g_j\,|\,\chi_\varepsilon\,)_M\right\vert}^2-\sum^\infty_{j=N+1}{\left\vert(\,g_j\,|\,\chi_\varepsilon\,)_M\right\vert}^2\Bigr)\\ &\geq \liminf_{\varepsilon{\rightarrow}0}\sum^\infty_{j=1}{\left\vert(\,g_j\,|\,\chi_\varepsilon\,)_M\right\vert}^2-\limsup_{\varepsilon{\rightarrow}0}\sum^\infty_{j=N+1}{\left\vert(\,g_j\,|\,\chi_\varepsilon\,)_M\right\vert}^2.
\end{split}$$ From , and , we deduce that $$\sum^\infty_{j=1}{\left\vert g_j(x_0)\right\vert}^2\geq B(x_0,x_0)m(x_0)-4\delta.$$ Since $\delta$ is arbitrary, we conclude that $$\label{e-gue170717yII} \sum^\infty_{j=1}{\left\vert g_j(x_0)\right\vert}^2\geq B(x_0,x_0)m(x_0).$$ From and , the theorem follows. From Theorem \[t-gue170715b\] and , we deduce that there is an $N_{x_0}\in\mathbb N$ such that $$\label{e-gue170717ycq} {\left\vert r^{n+2}(x_0)\sum^{N_{x_0}}_{j=1}{\left\vert g_j(x_0)\right\vert}^2\right\vert}\geq\frac{1}{2}{\left\vert F(x_0)\right\vert},$$ where $F$ is as in . Let $$\label{e-gue17717h} h_{x_0}:=\frac{1}{\sum^{N_{x_0}}_{j=1}{\left\vert g_j(x_0)\right\vert}^2}\sum^{N_{x_0}}_{j=1}g_j(x)\overline{g_j(x_0)}.$$ Then, $h_{x_0}\in H^0(M)\cap C^\infty({\overline}M)$ with $(\,h_{x_0}\,|\,h_{x_0}\,)_M=1$ and there is a small neighborhood $U_{x_0}$ of $x_0$ in $M$ such that $$\label{e-gue170717yuI} {\left\vert h_{x_0}(x)\right\vert}\geq\frac{1}{4}{\left\vert F(x_0)\right\vert}\ \ \mbox{for every $x\in U_{x_0}$}.$$ Assume that ${\left\{x\in M, \frac{1}{2k}\leq{\left\vert r(x)\right\vert}\leq\frac{1}{k}\right\}}\subset U_{x_0}\bigcup U_{x_1}\bigcup\cdots\bigcup U_{x_{a_k}}$ and let $h_{x_j}$ be as in , $j=0,1,\ldots,a_k$. Take $\beta_k\in\mathbb N$ to be a large number so that $${\left\{h_{x_0}, h_{x_1},\ldots, h_{x_{a_k}}\right\}}\subset{\rm span\,}{\left\{g_1, g_2,\ldots, g_{\beta_k}\right\}}.$$ From , it is easy to see that $$\label{e-gue170717fhI} {\left\vert r^{n+2}(x)\sum^{\beta_k}_{j=1}{\left\vert g_j(x)\right\vert}^2\right\vert}\geq\frac{1}{4}{\left\vert F(x)\right\vert}\ \ \mbox{on ${\left\{x\in M, \frac{1}{2k}\leq{\left\vert r(x)\right\vert}\leq\frac{1}{k}\right\}}$}.$$ Note that ${\left\vert F(x)\right\vert}>c$ on $X$, where $c>0$ is a constant.
From this observation and , we get \[t-gue170717\] There is a $k_0\in\mathbb N$ such that for every $k\in\mathbb N$, $k\geq k_0$, we can find $\beta_k\in\mathbb N$ such that $$\label{e-gue170717hy} {\left\vert r^{n+2}(x)\sum^{\beta_k}_{j=1}{\left\vert g_j(x)\right\vert}^2\right\vert}\geq c_0\ \ \mbox{on ${\left\{x\in M, \frac{1}{2k}\leq{\left\vert r(x)\right\vert}\leq\frac{1}{k}\right\}}$},$$ where $c_0>0$ is a constant independent of $k$. Let $b_j=\beta_{k_0+j}\in\mathbb N$, $j=1,2,\ldots$, where $\beta_j$ and $k_0$ are as in Theorem \[t-gue170717\]. For every $m\in\mathbb N$, let $A_m(M)$, $SA_m(M)$ and $d\mu_m$ be as in the discussion before . Let $\beta:={\left\{b_j\right\}}^\infty_{j=1}$ and let $\Omega(M,\beta)$ and $d\mu(\beta)$ be as in and respectively. For each $k=1,2,3,\ldots$, let $$P_k(x):=\sum^{b_k}_{j=1}{\left\vert g_j(x)\right\vert}^2.$$ Let $u_k\in SA_{b_k}(M)$. Then, $u_k$ can be written as $u_k=\sum^{b_k}_{j=1}\lambda_jg_j$ with $\sum^{b_k}_{j=1}{\left\vert\lambda_j\right\vert}^2=1$. We have \[t-ue170717j\] With the notations and assumptions above, fix $\psi\in C^\infty_0([-1,-\frac{1}{2}])$. Then, for $d\mu(\beta)$-almost every $u={\left\{u_k\right\}}\in\Omega(M,\beta)$, we have $$\label{e-gue170717jI} \lim_{k{\rightarrow}\infty}\Bigl(\langle\,[u_k=0], (2i)kr\psi(kr)\phi\wedge{\partial}r\wedge{\overline\partial}r\,\rangle+\frac{1}{\pi}\int_{{\overline}M}\Bigl(\log P_k(x)\Bigr)kr\psi(kr){\partial}{\overline\partial}\phi\wedge{\partial}r\wedge{\overline\partial}r\Bigr)=0,$$ for all $\phi\in C^\infty({\overline}M,B^{*n-1,n-1}M')$. The proof essentially follows Shiffman-Zelditch [@sz]; we only sketch it.
By a density argument, we only need to prove that for any $\phi\in C^\infty({\overline}M, B^{*n-1,n-1}T^*M')$ and for $d\mu(\beta)$-almost every $u={\left\{u_k\right\}}\in\Omega(M,\beta)$, we have $$\label{e-gue170717jIy} \lim_{k{\rightarrow}\infty}\Bigl(\langle\,[u_k=0], (2i)kr\psi(kr)\phi\wedge{\partial}r\wedge{\overline\partial}r\,\rangle+\frac{1}{\pi}\int_{{\overline}M}\Bigl(\log P_k(x)\Bigr)kr\psi(kr){\partial}{\overline\partial}\phi\wedge{\partial}r\wedge{\overline\partial}r\Bigr)=0.$$ We claim that $$\label{e-gue170717jyI} \begin{split} &R_k:=\int_{S^{2b_k+1}}\Bigl|\langle\,[\sum^{b_k}_{j=1}\lambda_jg_j=0], (2i)kr\psi(kr)\phi\wedge{\partial}r\wedge{\overline\partial}r\,\rangle\\ &\quad+\frac{1}{\pi}\int_{{\overline}M}(\log P_k(x))kr\psi(kr){\partial}{\overline\partial}\phi\wedge{\partial}r\wedge{\overline\partial}r\Bigr|^2d\mu_{b_k}(\lambda)=O(\frac{1}{k^2}). \end{split}$$ From , we see that $\sum^\infty_{k=1}R_k<+\infty$ and, by standard measure theory, we get . Hence, we only need to prove . For $(x, y)\in{\overline}M\times{\overline}M$, put $$\begin{split} &Q_k(x,y):=\int_{S^{2b_k+1}}\log\Bigl(\frac{{\left\vert\sum^{b_k}_{j=1}\lambda_jg_j(x)\right\vert}^2}{P_k(x)}\Bigr) \log\Bigl(\frac{{\left\vert\sum^{b_k}_{j=1}\lambda_jg_j(y)\right\vert}^2}{P_k(y)}\Bigr)d\mu_{b_k}(\lambda),\\ &f_k:=-\frac{1}{\pi}kr\psi(kr){\partial}{\overline\partial}\phi(y)\wedge{\partial}r\wedge{\overline\partial}r\in C^\infty_0(M,T^{*n+1,n+1}M'). \end{split}$$ By using the same argument as in [@sz] (see also Theorem 5.3.3 in [@mm]), we can check that $$\label{e-gue170717yc} R_k=\int_{{\overline}M\times{\overline}M} Q_k(x,y)f_k(x)\wedge f_k(y).$$ Moreover, from Lemma 5.3.2 in [@mm], there is a constant $C_k>0$ independent of $(x,y)\in{\overline}M\times {\overline}M$ such that $$\label{e-gue170717yca} {\left\vert Q_k(x,y)-C_k\right\vert}\leq C,\ \ \forall (x, y)\in M\times M,$$ where $C>0$ is a constant independent of $k$.
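The measure-theoretic step above, from $\sum_k R_k<+\infty$ to the almost-everywhere statement, can be spelled out as follows; here $Y_k(u_k)$ denotes the bracketed quantity inside the $R_k$ integral (the notation $Y_k$ is ours, introduced only for this remark):

```latex
% Writing R_k = \int_{S^{2b_k+1}} |Y_k(u_k)|^2 \, d\mu_{b_k}(\lambda), we have
\begin{aligned}
&\int_{\Omega(M,\beta)}\sum_{k=1}^{\infty}\bigl|Y_k(u_k)\bigr|^2\,d\mu(\beta)
  =\sum_{k=1}^{\infty}R_k<+\infty \quad\text{(Tonelli's theorem),}\\
&\text{hence }\sum_{k=1}^{\infty}\bigl|Y_k(u_k)\bigr|^2<+\infty
  \ \text{ for $d\mu(\beta)$-a.e.\ }u,
  \ \text{ and so }\ Y_k(u_k)\to 0\ \text{ a.e.}
\end{aligned}
```

The last implication is simply that the terms of a convergent series tend to zero.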
From , it is easy to check that $$\label{e-gue170717ycI} {\left\vert\int_{{\overline}M\times{\overline}M} \Bigl(Q_k(x,y)-C_k\Bigr)f_k(x)\wedge f_k(y)\right\vert}=O(\frac{1}{k^2}).$$ By using integration by parts, we see that $\int_{{\overline}M\times{\overline}M} \Bigl(Q_k(x,y)-C_k\Bigr)f_k(x)\wedge f_k(y)=R_k$. From this observation and , the claim follows. In view of Theorem \[t-ue170717j\], we only need to show that $$\lim_{k{\rightarrow}\infty}-\frac{1}{\pi}\int_{{\overline}M}(\log P_k(x))kr\psi(kr){\partial}{\overline\partial}\phi\wedge{\partial}r\wedge{\overline\partial}r=-(n+2)\frac{i}{2\pi}c_0\int_X\mathcal{L}_X\wedge\omega_0\wedge\phi,$$ where $c_0=\int_{\mathbb R}\psi(x)dx$. Now, $$\label{e-gue170717yhc} \begin{split} &-\frac{1}{\pi}\int_{{\overline}M}\Bigl(\log P_k(x)\Bigr)kr\psi(kr){\partial}{\overline\partial}\phi\wedge{\partial}r\wedge{\overline\partial}r\\ &=-\frac{1}{\pi}\int_{{\overline}M}\Bigl(\log (P_k(x)r^{n+2}(x))\Bigr)kr\psi(kr){\partial}{\overline\partial}\phi\wedge{\partial}r\wedge{\overline\partial}r\\ &\quad+\frac{n+2}{\pi}\int_{{\overline}M}\Bigl(\log r(x)\Bigr)kr\psi(kr){\partial}{\overline\partial}\phi\wedge{\partial}r\wedge{\overline\partial}r\\ &=-\frac{1}{\pi}\int_{{\overline}M}\Bigl(\log (P_k(x)r^{n+2}(x))\Bigr)kr\psi(kr){\partial}{\overline\partial}\phi\wedge{\partial}r\wedge{\overline\partial}r\\ &\quad-i\frac{n+2}{2\pi}\int_{{\overline}M}\Bigl(\log r(x)\Bigr)kr\psi(kr){\partial}{\overline\partial}\phi\wedge\omega_0\wedge dr, \end{split}$$ where $\omega_0=J(dr)$ and $J$ is the standard complex structure map on $T^*M'$.
From Theorem \[t-gue170717\], it is easy to see that $$\label{e-gue170717e} \lim_{k{\rightarrow}+\infty}-\frac{1}{\pi}\int_{{\overline}M}\Bigl(\log (P_k(x)r^{n+2}(x))\Bigr)kr\psi(kr){\partial}{\overline\partial}\phi\wedge{\partial}r\wedge{\overline\partial}r=0.$$ By using integration by parts, we have $$\label{e-gue170717eI} \begin{split} &-i\frac{n+2}{2\pi}\int_{{\overline}M}\Bigl(\log r(x)\Bigr)kr\psi(kr){\partial}{\overline\partial}\phi\wedge\omega_0\wedge dr\\ &=-i\frac{n+2}{2\pi}\int_{{\overline}M}\Bigl(({\partial}{\overline\partial}\log r)(x)\Bigr)kr\psi(kr)\phi\wedge\omega_0\wedge dr\\ &=-i\frac{n+2}{2\pi}\int_{{\overline}M}{\partial}{\overline\partial}r(x) k\psi(kr)\phi\wedge\omega_0\wedge dr\\ &{\rightarrow}-(n+2)\frac{i}{2\pi}c_0\int_X\mathcal{L}_X\wedge\omega_0\wedge\phi \ \ \mbox{as $k{\rightarrow}\infty$}, \end{split}$$ where $c_0=\int_{\mathbb R}\psi(x)dx$. From , and , the theorem then follows.

[99]{}

M.-S. Baouendi, L.-P. Rothschild, and F. Treves, *CR structures with group action and extendability of CR functions*, Invent. Math., **83** (1985), 359–396.

L. Boutet de Monvel and J. Sjöstrand, *Sur la singularité des noyaux de Bergman et de Szegő*, Astérisque, **34–35** (1976), 123–164.

D. Coman and G. Marinescu, *Equidistribution results for singular metrics on line bundles*, Ann. Sci. École Norm. Supér., **48** (2015), no. 3, 497–536.

D. Coman, X. Ma and G. Marinescu, *Equidistribution for sequences of line bundles on normal Kähler spaces*, Geom. Topol., **21** (2017), 923–962.

D. Coman, G. Marinescu and V.-A. Nguyên, *Hölder singular metrics on big line bundles and equidistribution*, Int. Math. Res. Not., (2016), no. 16, 5048–5075.

J.-J. Duistermaat and G.-J. Heckman, *On the variation in the cohomology of the symplectic form of the reduced phase space*, Invent. Math., **69** (1982), 259–268.

J.-P. Demailly, *Complex analytic and differential geometry*, available at www.fourier.ujf-grenoble.fr/~demailly.

T.-C. Dinh, X. Ma and G. Marinescu, *Equidistribution and convergence speed for zeros of holomorphic sections of singular Hermitian line bundles*, J. Funct. Anal., **271** (2016), 3082–3110.

T.-C. Dinh and N. Sibony, *Distribution des valeurs de transformations méromorphes et applications*, Comment. Math. Helv., **81** (2006), no. 5, 221–258.

J.-E. Fornæss and N. Sibony, *Complex dynamics in higher dimensions II*, Modern methods in complex analysis, Ann. of Math. Stud., **137** (1995), 135–182.

H. Herrmann, C.-Y. Hsiao and X. Li, *Szegő kernel expansion and equivariant embedding of CR manifolds with circle action*, to appear in Ann. Global Anal. Geom., preprint arXiv:1512.03952.

H. Herrmann, C.-Y. Hsiao and X. Li, *Szegő kernel asymptotic expansion on CR manifolds with $S^1$-action*, preprint arXiv:1610.04669v1.

C.-Y. Hsiao, *Projections in several complex variables*, Mém. Soc. Math. France, Nouv. Sér., **123** (2010), 131 p.

C.-Y. Hsiao and X. Li, *Morse inequalities for Fourier components of Kohn-Rossi cohomology of CR manifolds with $S^1$-action*, Math. Z., **284** (2016), no. 1-2, 441–468.

X. Ma and G. Marinescu, *Holomorphic Morse inequalities and Bergman kernels*, Progress in Math., vol. 254, Birkhäuser, Basel, 2007.

G. Shao, *Equidistribution of zeros of random holomorphic sections for moderate measures*, Math. Z., **283** (2016), no. 3-4, 791–806.

G. Shao, *Equidistribution on big line bundles with singular metrics for moderate measures*, J. Geom. Anal., **27** (2016), no. 2, 1295–1322.

B. Shiffman and S. Zelditch, *Distribution of zeros of random and quantum chaotic sections of positive line bundles*, Commun. Math. Phys., **200** (1999), 661–683.
[^1]: The first author was partially supported by Taiwan Ministry of Science and Technology project 104-2628-M-001-003-MY2, the Golden-Jade fellowship of the Kenda Foundation and an Academia Sinica Career Development Award. This work was initiated when the second author was visiting the Institute of Mathematics at Academia Sinica in the summer of 2016. The second author would like to thank the Institute of Mathematics at Academia Sinica for its hospitality and financial support during his stay. The second author was also supported by Taiwan Ministry of Science and Technology project 105-2115-M-008-008-MY2.
--- abstract: | Scale-invariant morphology parameters applied to atomic hydrogen maps () of galaxies can be used to quantify the effects of tidal interaction or star-formation on the ISM. Here we apply these parameters, Concentration, Asymmetry, Smoothness, Gini, 20, and the  parameter, to two public surveys of nearby dwarf galaxies, the VLA-ANGST and LITTLE-THINGS surveys, to explore whether tidal interaction or the ongoing or past star-formation is the dominant force shaping the  disks of these dwarfs. Previously,  morphological criteria were identified for ongoing spiral-spiral interactions. When we apply these to the Irregular dwarf population, they select either almost all or none of the population. We find that only the Asymmetry-based criteria can be used to identify very isolated dwarfs (i.e., those with a low tidal index). Otherwise, there is little or no relation between the level of tidal interaction and the  morphology. We compare the  morphology to three star-formation rates based on H$\alpha$,  or the resolved stellar population, probing different star-formation time-scales. The  morphology parameters that trace the inequality of the distribution, the Gini, , and 20 parameters, correlate weakly with all these star-formation rates. This is in line with the picture that local physics, and not tidal effects, dominates the appearance of the ISM. Finally, we compare the SDSS measures of star-formation and stellar mass to the  morphological parameters for all four  surveys. In the two lower-resolution  surveys (12"), there is no relation between the star-formation measures and  morphology. The morphology parameters of the two high-resolution  surveys (6"), the Asymmetry, Smoothness, Gini, 20, and , do show a link to the total star-formation, but a weak one. author: - | B. W. Holwerda$^{1}$[^1], N. Pirzkal$^{2}$, W.J.G. de Blok$^{3}$, and S–L.
Blyth$^{4}$\ $^{1}$ European Space Agency, ESTEC, Keplerlaan 1, 2200 AG, Noordwijk, the Netherlands\ $^{2}$ Space Telescope Science Institute, Baltimore, MD 21218, USA\ $^{3}$ Netherlands Institute for Radio Astronomy, Schattenberg 1, 9433 TA Zwiggelte, The Netherlands\ $^{4}$ Astrophysics, Cosmology and Gravity Centre, Department of Astronomy,\ University of Cape Town, Private Bag X3, Rondebosch 7701, Republic of South Africa date: 'Accepted 1988 December 15. Received 1988 December 14; in original form 1988 October 11' title: | Quantified  Morphology VII:\ star-formation and tidal influence on local dwarf  morphology --- \[firstpage\] (galaxies:) Local Group galaxies: dwarf galaxies: ISM galaxies: structure ISM: structure radio lines: ISM \[s:intro\]Introduction ======================= It has recently become clear that the ongoing star formation in smaller galactic systems strongly influences the structure of the dwarf system’s interstellar matter (ISM), and vice versa [e.g., @Weisz09a]. The low-density and -metallicity environment as well as strong effects of feedback make local dwarfs an outstanding laboratory to understand the physics of star-formation. In addition, $\Lambda$CDM predicts the dynamics to be dominated by their dark matter content but this is observationally still debated [e.g., @Oh11; @Swaters11]. Dwarf galaxy morphology is related to their environment, as evident from the relation of stellar morphology with tidal index [@Weisz11b], as is their gas content [@Grcevich09]. Both strongly point to the gas content as the main driver of dwarf morphology. Similarly, [@Geha12] find that all field and central dwarf galaxies have a low fraction of quenched star-formation, i.e. they all have a substantial gas reservoir. This gas is at significantly sub-solar metallicities [@Berg12] and there is strong evidence for metal loss from supernova [@Tremonti04; @Dalcanton07; @Bouche07; @Kirby11]. 
For these reasons, the local sample of low-mass galaxies has been studied extensively using ultra-violet, optical and near-infrared tracers of star-formation [e.g., @Hunter06; @McQuinn12a], the ISM [@littlethings; @Ott12], and resolved stellar populations [@angst; @Weisz11b; @Weisz11a]. These studies have been made possible by space observatories, which allow for observations of stars and dusty ISM at faint surface brightnesses, and by large programs on the Karl G. Jansky Very Large Array (VLA), which observe the 21cm line of neutral atomic hydrogen (). A key HST program, the ANGST survey [@angst], has observed a large sample of nearby galaxies, mostly dwarfs, uniformly and in unprecedented depth with the Advanced Camera for Surveys (ACS) on the Hubble Space Telescope (HST). To accompany the HST observations, a large VLA program, the VLA-ANGST survey, has observed the neutral ISM in great detail [@Ott12]. The star-formation history from the resolved stellar populations’ colours and luminosities has already revealed that star-bursts in these galaxies occur stochastically in both time and location [@McQuinn12b] over the last several hundred Myr [see also @McQuinn09; @McQuinn10a; @McQuinn10b]. A second program, LITTLE-THINGS, has observed a different set of nearby dwarfs with Herschel, Spitzer, GALEX and a large program on the VLA. The present consensus from these programs is that the processes related to star-formation are all inefficient in dwarf galaxies: the star-formation efficiency, the quenching of star-formation, and the interactions between the star-formation and the ISM dynamics [@Skillman12]. In this series of papers, we have explored the quantified morphology of available  maps with the common parameters for observed optical or ultra-violet morphology: concentration-asymmetry-smoothness [@CAS], Gini and 20 [@Lotz04] and $G_M$ [@Holwerdapra09; @Holwerda11c].
Recent interest in these morphology parameters has shifted from high-mass spirals and major interactions to more unequal-mass interactions [@Lotz10a] and more gas-rich interactions [@Lotz10b], both of which typically involve dwarf galaxies, and to the visibility times of mergers in this parameter space [@Lotz11b]. In [@Holwerda11a], we compare the   morphology to those at other wavelengths for the THINGS sample, noting that the  and ultraviolet morphologies are closely related, which would make quantified  morphology a reasonable tracer for interactions. In the next papers of the series, we use the  morphology to identify mergers [@Holwerda11b], their visibility time [@Holwerda11c], and subsequently infer a merger rate from the  survey [@Holwerda11d], as well as identify phenomena unique to cluster members [@Holwerda11e] and those  disks hosting an extended ultraviolet disk [@Holwerda12c]. In this paper, we explore the  morphology of low-mass local dwarf galaxies. These have recently been observed in 21cm radio emission () by the VLA-ANGST and LITTLE-THINGS surveys. A third survey, the "Survey of H I in Extremely Low-mass Dwarfs" [SHIELD, @shield], is underway to observe the lowest-mass galaxies in the local volume, but we do not include it due to its low spatial resolution ($\sim 20"$ beam). The combined VLA-ANGST and LITTLE-THINGS surveys span a representative selection of the smallest members of the local volume (60  maps). The general picture that emerges from these surveys of low-mass galaxies in the local Universe is that the appearance of the  becomes amorphous at lower masses: there is a progression from disks with spiral structure to mostly featureless rotating disks to a collection of clouds supported by both rotation and dispersion. Our motivation for this study was to explore how much information there still is in the morphology of the  in these systems.
These morphological parameterizations are used out to high redshift with HST imaging of distant galaxies, which equally appear less structured beyond $z\sim2$, i.e., more as a collection of star-forming regions than organized in disks with spiral patterns and bulges. We shall compare the  morphological parameters to indicators of tidal disturbance and star-formation. While there are many outstanding questions as to the nature of dwarf galaxies and their ISM and star-formation [see @Skillman10; @Skillman12], we focus here on two: What is the impact of the star formation on the structure of their ISM? Does star formation induce or quench further star formation, i.e., does star formation propagate through the host galaxy or is it stochastic? The paper is organized as follows: Section \[s:data\] describes the data products and sample from the two surveys used for this paper, Section \[s:qm\] briefly describes the six morphological parameters, Section \[s:results\] presents the results, Section \[s:sdss\] compares the  morphology of all our catalogs to SDSS estimates of star-formation and mass, Section \[s:disc\] briefly discusses them, and Section \[s:concl\] lists our conclusions. Data {#s:data} ==== The "Local Irregulars That Trace Luminosity Extremes" [LITTLE-THINGS, @littlethings] and the VLA-ANGST [@Ott12] surveys are close in observational setup to "The H I Nearby Galaxy Survey" [THINGS, @Walter08], the sample for our first paper [@Holwerda11a]. Both surveys were conducted while the VLA transitioned to the Karl G. Jansky Very Large Array. For this paper, we use the robustly-weighted  surface density maps (RO). These maps have the highest resolution and contain the most small-scale detail, essential for quantified morphology measurements, at the expense of some large-scale faint structure.
This trade-off is essential for quantified morphological measurements, which are the most sensitive when sampling at sub-kiloparsec physical scales [@Lotz04], at which point the diffuse large-scale  emission barely contributes signal in most parameters [see the comparison in @Holwerda11a]. LITTLE-THINGS {#s:little-things} ------------- The LITTLE-THINGS sample [@littlethings] is made up of 42 dwarf irregular (dIm) and Blue Compact Dwarf (BCD) galaxies. The  observations are a mix of new (21 galaxies) and archival observations. Some galaxies were dropped from the sample due to issues with individual observations. The LITTLE-THINGS sample was drawn from a larger multi-wavelength effort [@Hunter04; @Hunter06a] and there are extensive ancillary data available for the full sample. The observational setup was kept identical to the THINGS survey [@Walter08] and the data is public at <https://science.nrao.edu/science/surveys/littlethings/>. We converted the LITTLE-THINGS moment 0 maps into column density maps using the expression in [@Walter08], their equation 5, and the major and minor axes from [@littlethings] to conform to the VLA-ANGST data products. The typical resolution is slightly lower than VLA-ANGST ($\sim$6-10"), depending on the observational configuration. VLA-ANGST {#s:vla-angst} --------- The VLA-ANGST sample is based on the volume-limited ANGST survey [@angst]. The galaxies in these surveys are drawn from the local volume compilation of [@Karachentsev04]. This catalog lists relevant parameters and, of specific interest, the tidal index $\Theta$ (see §\[s:theta\]). The ANGST survey targets the local volume, mostly less than 3.5 Mpc away with a limiting distance of 4 Mpc, in order to resolve low-level star-formation from resolved stellar populations. From the 89 ANGST galaxies, VLA-ANGST is a subset of 29 detected galaxies, excluding southern objects and those with no single-dish  detections or low star-formation.
All VLA-ANGST galaxies were observed in both high spatial ($\sim$6", corresponding to $\sim$100 pc) and spectral (corresponding to 0.65-2.6 km/s in velocity) resolution in the VLA B, C, and D array configurations. For this study, the high velocity resolution is not pertinent, but the spatial resolution and depth comparable to the THINGS survey are. Data are available at <https://science.nrao.edu/science/surveys/vla-angst/> and are described in detail in [@Ott12]; [@Warren12] presents the  profiles for these galaxies. Final Sample ------------ Some of the  observations for LITTLE-THINGS are not yet archived, and there is some overlap between the VLA-ANGST and LITTLE-THINGS surveys, with galaxies included under a different name. Omitted galaxies are NGC1156, NGC6822, DDO6, KDG63, HS117, NGC4190, DDO113, DDO125, DDO181 and DDO183. The galaxies CVnIdwA (UGCA292), GR 8 (DDO155) and UGC 8508 are in both the LITTLE-THINGS and VLA-ANGST surveys. The final tally of  maps is 60 galaxies. As noted, the sampling of the two surveys is slightly different: the mean LITTLE-THINGS beam is $7\farcs2 \times 8\farcs8$ and the VLA-ANGST one $6\farcs1 \times 7\farcs4$, but these resolutions are comparable in the sampling of the  disk (Figure \[f:B\]) and the two surveys can be treated as a single data-set.
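The moment-0 to column-density conversion described in the LITTLE-THINGS subsection can be sketched as follows. This is our own illustration, not the survey pipeline: it assumes the standard optically-thin 21 cm relation, $N_{\rm HI}=1.823\times10^{18}\int T_B\,dv$, with the brightness temperature computed from a Gaussian beam at 1.420 GHz; the function name and the combined constant are ours, so treat the numbers as indicative rather than as equation 5 of [@Walter08] verbatim.

```python
def hi_column_density(mom0_mjy_beam_kms, bmaj_arcsec, bmin_arcsec):
    """Convert a 21 cm moment-0 value (mJy/beam km/s) into an HI column
    density (atoms/cm^2), assuming optically thin emission.

    N_HI = 1.823e18 * int T_B dv; with T_B derived from the Gaussian
    beam solid angle at 1.420 GHz, the constants combine to ~1.104e21
    for beam axes given in arcsec.
    """
    return 1.104e21 * mom0_mjy_beam_kms / (bmaj_arcsec * bmin_arcsec)


# e.g., a 1 mJy/beam km/s signal in a typical 6.1" x 7.4" VLA-ANGST beam:
nhi = hi_column_density(1.0, 7.4, 6.1)
above_contour = nhi >= 5e19  # the outer contour adopted in this paper
```

Smaller beams concentrate the same flux into fewer square arcseconds, so the same moment-0 value corresponds to a higher column density.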
----------- ------- ------- ---------------
Galaxy      Minor   Major   Survey
----------- ------- ------- ---------------
CVnIdwA     10.5    10.9    LITTLE-THINGS
DDO 43      6.0     8.1     LITTLE-THINGS
DDO 46      5.2     6.3     LITTLE-THINGS
DDO 47      9.0     10.4    LITTLE-THINGS
DDO 50      6.1     7.0     LITTLE-THINGS
DDO 52      5.2     6.8     LITTLE-THINGS
DDO 53      5.7     6.3     LITTLE-THINGS
DDO 63      6.0     7.8     LITTLE-THINGS
DDO 69      5.4     5.8     LITTLE-THINGS
DDO 70      13.2    13.8    LITTLE-THINGS
DDO 75      6.5     7.5     LITTLE-THINGS
DDO 87      6.2     7.6     LITTLE-THINGS
DDO 101     7.0     8.3     LITTLE-THINGS
DDO 126     5.6     6.9     LITTLE-THINGS
DDO 133     10.8    12.4    LITTLE-THINGS
DDO 154     6.3     7.9     LITTLE-THINGS
DDO 155     10.1    11.3    LITTLE-THINGS
DDO 165     7.6     10.0    LITTLE-THINGS
DDO 167     5.3     7.3     LITTLE-THINGS
DDO 168     5.8     7.7     LITTLE-THINGS
DDO 187     5.5     6.2     LITTLE-THINGS
DDO 210     8.6     11.7    LITTLE-THINGS
DDO 216     15.4    16.2    LITTLE-THINGS
IC 10       5.5     5.9     LITTLE-THINGS
IC 1613     6.5     7.7     LITTLE-THINGS
LGC 3       9.3     11.8    LITTLE-THINGS
M81dwA      6.3     7.8     LITTLE-THINGS
NGC 1569    5.2     5.9     LITTLE-THINGS
NGC 2366    5.9     6.9     LITTLE-THINGS
NGC 3738    5.5     6.3     LITTLE-THINGS
NGC 4163    5.9     9.7     LITTLE-THINGS
NGC 4214    6.4     7.6     LITTLE-THINGS
SagDIG      16.9    28.2    LITTLE-THINGS
UGC 8508    4.9     5.9     LITTLE-THINGS
WLM         5.1     7.6     LITTLE-THINGS
Haro 29     5.6     6.8     LITTLE-THINGS
Haro 36     5.8     7.0     LITTLE-THINGS
Mrk 178     5.5     6.2     LITTLE-THINGS
VIIZw 403   7.7     9.4     LITTLE-THINGS
----------- ------- ------- ---------------

: \[t:beam\] The spatial resolution (minor and major beam axes, in arcsec) for the Robustly Weighted column density maps from the LITTLE-THINGS and VLA-ANGST surveys. Information from [@littlethings] and [@Ott12].
---------------- ------- ------- -----------
Galaxy           Minor   Major   Survey
---------------- ------- ------- -----------
NGC 247          6.2     9.0     VLA-ANGST
DDO 6            6.3     7.2     VLA-ANGST
NGC 404          6.1     7.1     VLA-ANGST
KKH37            5.8     6.5     VLA-ANGST
UGC 4483         5.7     7.6     VLA-ANGST
KK 77            5.8     6.1     VLA-ANGST
BK3N             5.8     6.3     VLA-ANGST
AO0952+69        5.9     6.4     VLA-ANGST
Sextans B        7.5     9.5     VLA-ANGST
NGC 3109         5.0     7.6     VLA-ANGST
Antlia           9.6     10.5    VLA-ANGST
KDG 63           6.0     6.2     VLA-ANGST
Sextans A        6.0     7.3     VLA-ANGST
HS 117           6.1     8.6     VLA-ANGST
DDO 82           5.7     5.8     VLA-ANGST
KDG 73           5.6     6.9     VLA-ANGST
NGC 3741         4.8     5.5     VLA-ANGST
DDO 99           5.2     7.7     VLA-ANGST
NGC 4163         5.4     7.6     VLA-ANGST
NGC 4190         5.3     6.1     VLA-ANGST
DDO 113          7.7     9.9     VLA-ANGST
MCG +09-20-131   5.3     6.1     VLA-ANGST
DDO 125          5.4     6.3     VLA-ANGST
UGCA 292         5.0     7.0     VLA-ANGST
GR 8             5.4     5.8     VLA-ANGST
UGC 8508         6.4     8.2     VLA-ANGST
DDO 181          5.5     7.6     VLA-ANGST
DDO 183          6.2     7.6     VLA-ANGST
KKH 86           5.8     7.5     VLA-ANGST
UGC 8833         11.2    12.4    VLA-ANGST
KKH 230          5.2     5.9     VLA-ANGST
DDO187           5.7     7.1     VLA-ANGST
DDO 190          9.9     10.8    VLA-ANGST
KKR 25           4.4     5.5     VLA-ANGST
KKH 98           5.2     6.2     VLA-ANGST
---------------- ------- ------- -----------

![image](./holwerda_f1a.pdf){width="49.00000%"} ![image](./holwerda_f1b.pdf){width="49.00000%"} Quantifying Morphology {#s:qm} ====================== We use the Concentration-Asymmetry-Smoothness parameters [CAS, @CAS], combined with the Gini-$M_{20}$ parameters from [@Lotz04], and our own $G_M$. We have discussed the definitions of these parameters in the previous papers, as well as how we estimate uncertainties for each. Here, we will give a brief overview, but for details we refer the reader to [@Holwerda11a; @Holwerda11b] or Holwerda et al. [*submitted*]{}. We select pixels in an image as belonging to the galaxy based on the outer  contour ($5 \times 10^{19}$ atoms/cm$^2$) and adopt the position from the respective survey catalogs as the central position of the galaxy [as reported in @littlethings; @Ott12 respectively].
Given a set of $n$ pixels in each object, iterating over pixel $i$ with value $I_i$, pixel position $x_i,y_i$ and with the centre of the object at $x_c,y_c$, these parameters are defined as: $$C = 5 ~ \log (r_{80} / r_{20}), \label{eq:c}$$ with $r_{f}$ as the radial aperture, centered on $x_c,y_c$, containing percentage $f$ of the light of the galaxy [see definitions of $r_f$ in @se; @seman][^2]. This concentration index can be used to quickly discern between light profiles; a de Vaucouleurs profile ($I \propto R^{-4}$) has a Concentration value of $C=5.2$, and a purely exponential one has a value of $C=2.7$. It can also be used to identify unique phenomena, for example HI disk stripping [@Holwerda11e]. $$A = {\Sigma_{i} | I_i - I_{180} | \over \Sigma_{i} | I(i) | }, \label{eq:a}$$ where $I_{180}$ is the pixel at position $i$ in the galaxy’s image after it was rotated $180^\circ$ around the centre of the galaxy. Fully symmetric galaxies have very low values of Asymmetry. A regular spiral need not show a high value of Asymmetry; e.g., a grand-design spiral galaxy’s spiral arms map onto each other with a 180$^\circ$ rotation [the rotational symmetry of galaxies can be used to infer dust extinction in pairs of galaxies, see @kw92; @kw00a; @kw00b; @kw01a; @kw01b; @Holwerda07c; @Keel13; @Holwerda13a; @Holwerda13b]. Flocculent spirals can be expected to be slightly more Asymmetric still. The highest values of Asymmetry can be found in galaxies with strong tidal disruptions, provided the tidal structures are included in the calculation, which they are in . $$S = {\Sigma_{i,j} | I(i,j) - I_{S}(i,j) | \over \Sigma_{i,j} | I(i,j) | }, \label{eq:s}$$ where $I_{S}$ is pixel $i$ in a smoothed image. The type of smoothing (e.g., boxcar or Gaussian) has changed over the years. We chose a fixed 5" Gaussian smoothing kernel for simplicity.
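A minimal sketch of how the three CAS parameters above can be computed from a 2-D map (our own illustration, not the authors' pipeline): the rotation is taken about the image centre, and a boxcar smoothing kernel stands in for the 5" Gaussian used in the paper, to keep the sketch numpy-only.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def concentration(img, xc, yc):
    """C = 5 log10(r80/r20), with r_f the radius containing f% of the flux."""
    yy, xx = np.indices(img.shape)
    r = np.hypot(xx - xc, yy - yc).ravel()
    order = np.argsort(r)
    cum = np.cumsum(img.ravel()[order])
    cum /= cum[-1]
    r20 = r[order][np.searchsorted(cum, 0.2)]
    r80 = r[order][np.searchsorted(cum, 0.8)]
    return 5.0 * np.log10(r80 / r20)

def asymmetry(img):
    """A: normalised absolute difference with the 180-degree rotated map."""
    return np.abs(img - np.rot90(img, 2)).sum() / np.abs(img).sum()

def smoothness(img, box=5):
    """S: normalised absolute residual after smoothing (boxcar here;
    the paper adopts a fixed 5-arcsec Gaussian kernel instead)."""
    pad = box // 2
    padded = np.pad(img, pad, mode="edge")
    sm = sliding_window_view(padded, (box, box)).mean(axis=(2, 3))
    return np.abs(img - sm).sum() / np.abs(img).sum()
```

For a centrally concentrated, symmetric disc these return a moderate C, an A near zero and a small S; tidal features raise A, while clumpy star-forming structure raises S.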
We note that we use the term "Smoothness" for historical reasons, as this has become the de facto designation of this parameter (the CAS scheme), even though an increase in its value means a more clumpy appearance of the image (hence its original designation "clumpiness"). Very smooth galaxies have very low values of Smoothness, but in other galaxies the value of the Smoothness parameter depends on the size of the smoothing kernel used. If the kernel’s size corresponds to, for example, the width of spiral arms at the distance of the galaxy, then grand design spirals will have relatively high Smoothness values. The Gini coefficient is defined as: $$G = {1\over \bar{I} n (n-1)} \Sigma_i (2i - n - 1) I_i , \label{eq:g}$$ where the list of $n$ pixels was first ordered according to value and $\bar{I}$ is the mean pixel value in the image. [@Lotz04] introduce the relative second-order moment (20) of an object. The second-order moment of a pixel is: $M_i = I_i \times R_i = I_i \times [(x_i - x_c)^2 + (y_i - y_c)^2]$. The total second-order moment of an image is defined as: $$M_{tot} = \Sigma M_i = \Sigma I_i [(x_i - x_c)^2 + (y_i - y_c)^2]$$ The relative second-order moment of the brightest 20% of the flux is: $$M_{20} = \log \left( {\Sigma_i^k M_i \over M_{tot}}\right), ~ {\rm for ~ which} ~ \Sigma_i^k I_i < 0.2 ~ I_{tot} {\rm ~ is ~ true},\\ \label{eq:m20}$$ where pixel $k$ marks the top 20% point in the flux-ordered pixel-list. The 20 parameter is sensitive to bright structure away from the center of the galaxy: flux is weighted in favor of the outer parts. It is therefore relatively sensitive to tidal structures.
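Continuing the sketch, the Gini and 20 parameters can be computed from the same pixel list as follows (again our own illustration; tied pixel values and background subtraction are ignored).

```python
import numpy as np

def gini(pixels):
    """Gini coefficient of the pixel-value distribution (the G equation above)."""
    I = np.sort(np.asarray(pixels, float).ravel())  # ordered by value
    n = I.size
    i = np.arange(1, n + 1)
    return ((2 * i - n - 1) * I).sum() / (abs(I.mean()) * n * (n - 1))

def m20(img, xc, yc):
    """M20: log of the second-order moment of the brightest 20% of the
    flux, relative to the total second-order moment."""
    yy, xx = np.indices(img.shape)
    M = img * ((xx - xc) ** 2 + (yy - yc) ** 2)   # per-pixel moment M_i
    order = np.argsort(img.ravel())[::-1]          # brightest pixels first
    cum = np.cumsum(img.ravel()[order])
    k = np.searchsorted(cum, 0.2 * cum[-1])        # top-20% flux point
    return np.log10(M.ravel()[order][: k + 1].sum() / M.sum())
```

A uniform map has G = 0 and a map with all flux in one pixel has G = 1; a centrally concentrated source has its brightest pixels near $x_c,y_c$ and hence a strongly negative 20.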
Instead of using the intensity of pixel $i$, the Gini parameter can be defined using the second-order moment: $$G_M = {1\over \bar{M} n (n-1)} \Sigma_i (2i - n - 1) M_i , \label{eq:gm}$$ These parameters trace different structural characteristics of a galaxy’s image, but they do not span an orthogonal parameter space [see also the discussion in @Scarlata07; Holwerda et al. [*in preparation*]{}]. Two crucial input parameters for the computation of the morphology are the central position ($x_c$ and $y_c$) and the threshold for including pixels in the calculations. We use the positions reported by [@Ott12] for the VLA-ANGST galaxies and those in the NED database for the LITTLE-THINGS galaxies as the central pixel position. To determine which pixels to include, we adopt a threshold of $5 \times 10^{19}$ atoms/cm$^2$, the practical limiting depth of both of these surveys. [@Holwerda11a] discuss the uncertainties in these parameters in detail. To estimate their errors, we first vary the input central position and compute the rms from the resulting spread in values. Secondly, we scramble the pixels (but keep the central position identical) to assess the effect of random noise. Thirdly, in the case of the Gini parameter, there is no dependence on the central position; in this case we compute the variance by sub-sampling the pixel collection. Spatial Sampling {#s:samp} ---------------- Interferometric radio observations filter out large-scale faint emission, a complication for the characterization of morphology. To remedy this, the total-power information from short-baseline observations is needed, i.e., a large single-dish telescope or a radio array more compact than the VLA-A configuration. Fortunately, one of these galaxies, NGC 3109, was observed with the Karoo Array Telescope (KAT-7), a seven-dish precursor array to the MeerKAT telescope [@MeerKAT; @meerkat1; @meerkat2]. These observations and results are described in detail in [@Carignan13].
The resulting HI map is sensitive to larger scale HI features such as wide tidal tails or warps. Figure \[f:n3109\] shows both HI maps to illustrate the lack of large-scale, diffuse emission in VLA observations. For example, [@Carignan13] note that the [*total*]{} HI mass estimated from the KAT-7 observations agrees well with single-dish observations, which do not resolve out any structure. Figure \[f:n3109\] shows how the KAT-7 observations reveal a pronounced warp in the edge-on HI disk, while this is only visible as a slight dip in the VLA data. The question remains whether the addition of such a diffuse component changes the global morphology parameters, or whether their value is mostly determined by the morphological detail in the VLA data. We ran our morphological code on the KAT-7 image twice, delineated by different contours: one similar to the area covered by the VLA-ANGST outer contour and one defining the limit of the diffuse emission. Table \[t:kat\] lists the resulting parameters. There are notable differences between the VLA and KAT-7 observations, for both the outer contour and the area corresponding to the VLA-ANGST outer contour. The differences between the two KAT-7 contours are noticeable in S, G and $G_M$. The inclusion of a large number of low-intensity pixels results in a completely different distribution and hence different Gini and $G_M$ parameters. The higher range in contrast results in a higher Smoothness (meaning a clumpier image) compared to just the inner contour. Comparing the inner contour in the KAT-7 observations and the VLA-ANGST observations (second and fourth column in Table \[t:kat\]), we note differences in C, S, G, and to a lesser extent $M_{20}$ and $G_M$. The majority of morphological parameters are modified if we change spatial resolution, especially sampling over areas greater than a kpc. The addition of large-scale structure only changes the measures of (in)equality in the distribution: Gini and $G_M$.
Thus, while large-scale structure is missed by VLA interferometric surveys such as LITTLE-THINGS and VLA-ANGST, most of the morphological information is contained in the small-scale structures that are resolved by such observations. ![image](./holwerda_f2a.pdf){width="49.00000%"} ![image](./holwerda_f2b.pdf){width="49.00000%"}

  Parameter   VLA-ANGST            KAT-7 inner contour   KAT-7 (32Jy/Beam)
  ----------- -------------------- --------------------- -------------------
  C           0.0 $\pm$ 0.010      $0.20 \pm 0.02$       $0.20 \pm 0.08$
  A           1.0 $\pm$ 0.000      $1.0 \pm 0.0$         $1.0 \pm 0.0$
  S           0.047 $\pm$ 0.031    $0.22 \pm 0.09$       $0.38 \pm 0.16$
  G           0.445 $\pm$ 0.010    $0.68 \pm 0.01$       $0.23 \pm 0.11$
  $M_{20}$    -0.741 $\pm$ 0.002   $-0.71 \pm 0.02$      $-0.70 \pm 0.019$
  $G_M$       0.429 $\pm$ 0.011    $0.67 \pm 0.01$       $0.22 \pm 0.11$

  : The different morphological parameters for NGC 3109. \[t:kat\]

Results {#s:results} ======= To explore the relationships between the HI morphological parameters and the tidal and star-formation tracers, we proceed in three steps. First, we compare the HI parameters against each other, colour-coded with a comparison parameter, if available; this is to identify possible sections of HI morphology parameter space where special cases reside. Secondly, we plot the comparison parameter (e.g., a star-formation measure) against the six HI morphological parameters directly. Thirdly, we calculate the Spearman ranking (-1 perfectly anti-correlated, 0 uncorrelated, and 1 fully correlated) between the comparison parameter and each HI morphological parameter. The LITTLE-THINGS sample was drawn from [@Hunter04] and the VLA-ANGST sample from [@Karachentsev04], meaning that the parameters on tidal effects or star-formation from the literature are not available for our full sample.
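The Spearman ranking used throughout can be made explicit with a hand-rolled, pure-Python equivalent of `scipy.stats.spearmanr` (shown only to fix the convention; ties receive average ranks):

```python
def spearman(x, y):
    """Spearman rank correlation: -1 anti-correlated, 0 uncorrelated, +1 correlated.
    Pearson correlation of the (average-tie) ranks of x and y."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        i = 0
        while i < len(order):
            j = i
            # Extend j over a run of tied values.
            while j + 1 < len(order) and v[order[j + 1]] == v[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1  # average 1-based rank for the tied run
            for k in range(i, j + 1):
                r[order[k]] = avg
            i = j + 1
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)
```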
We compare the HI morphology to the tidal disturbance and to several parameterizations of the ongoing and past star-formation, to explore which of these is the dominant factor in the overall shape of the HI in these dwarf galaxies. ![image](./holwerda_f3a.pdf){width="\textwidth"} ![image](./holwerda_f3b.pdf){width="110.00000%"} Tidal Index {#s:theta} ----------- Figure \[f:Theta\] shows the distribution of HI column density map morphologies, coded by the tidal parameter ($\Theta$) from [@Karachentsev04]. Figure \[f:Theta:6par\] shows the direct relation between the six HI morphological parameters and the tidal parameter. Of all the parameters, only HI Asymmetry is weakly related to $\Theta$ (see also Table \[t:spear\]). The six morphological criteria for interaction, developed for more massive galaxies, are denoted with dashed lines in Figure \[f:Theta\] and subsequent figures. These criteria are: $$A > 0.38 ~ {\rm and} ~S>A \label{eq:AS}$$ from [@CAS]. This is the straight dashed line in sub-panels (d), (e) and (f) in Figure \[f:Theta\] etc. [@Lotz04] added two different criteria, one using Gini and $M_{20}$: $$G > -0.115 \times M_{20} + 0.384 \label{eq:GM20}$$ shown by the dashed line in sub-panel (b) in Figure \[f:Theta\]. [@Lotz04] also defined an interaction criterion based on Gini and Asymmetry: $$G > -0.4 \times A + 0.66 ~ \rm or ~ A > 0.4. \label{eq:GA}$$ which is shown as an inclined dashed line in sub-panel (d) in Figure \[f:Theta\]. This latter criterion is a refinement of the Conselice et al. A-S criterion in equation \[eq:AS\]. [@Holwerda11b] defined three interaction criteria specifically for HI data (typically lower spatial resolution, affected by spatial filtering (i.e., sensitivity to a specific angular scale), and smaller dynamic range than optical data). Ongoing spiral-spiral tidal interactions can be identified by: $$G_M > 0.6, \label{eq:GM}$$ which is not shown in Figure \[f:Theta\] as the range of values in the dwarf galaxy HI surveys does not extend this high.
However, it is shown as the vertical dashed line in sub-panels (a), (c), (f) and (j) in Figures \[f:sdss:sm\], \[f:sdss:sf\] and \[f:sdss:ssf\]. Alternatively, their interaction can be identified based on Asymmetry and $M_{20}$: $$A > -0.2 \times M_{20} + 0.25, \label{eq:AM20}$$ or Concentration and $M_{20}$, similar to the criteria from [@Lotz04] (equations \[eq:GM20\] and \[eq:GA\]), as: $$C_{82} > -5 \times M_{20} + 3. \label{eq:CM20}$$ The first thing we note is that the vast majority of dwarf galaxies lie on one side of these criteria. The G-A and G-$M_{20}$ criteria include almost all; the C-$M_{20}$ and $G_M$ criteria both completely exclude the dwarfs from the tidally interacting category. Only the Gini-$M_{20}$ criterion bisects the dwarf sample. If we compare these criteria to the values of the tidal index $\Theta$, there is little correlation with the position in HI morphology parameter space. The exceptions are three galaxies with low values of $\Theta$, i.e. very isolated, and a low asymmetry value ($A<0.4$). Figures \[f:Theta:6par\] and \[f:Theta\] show that the HI morphology is not primarily affected by the gravitational interaction. One can identify very isolated galaxies from the HI morphology ($A<0.4$), but the majority of criteria that apply to spiral galaxies cannot be applied to dwarf HI morphology to identify or even rank the level of interaction. We identify DDO47, DDO87, and UGC8833 as the most isolated dwarfs in our sample, based on their Asymmetry. Ongoing and Past Star-Formation {#s:sfr} ------------------------------- The star-formation can be measured by a variety of techniques corresponding to different typical timescales: (a) H$\alpha$ emission, which traces the currently forming massive stars still in their ionized birth clouds (tens of Myr), (b) far-ultraviolet (FUV) emission, which traces the population of massive young stars after the surrounding gas has dissipated (hundreds of Myr), and (c) resolved stellar populations, which trace the star-formation history to Gyr timescales.
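For reference, the six interaction criteria discussed above (equations \[eq:AS\] through \[eq:CM20\]) can be collected into a single helper. This is a sketch with our own function and key names; the thresholds are exactly those quoted in the equations.

```python
def interaction_flags(C, A, S, G, M20, GM):
    """Evaluate the six morphological interaction criteria for one galaxy.
    Returns a dict mapping each criterion's name to True (flagged as
    interacting by that criterion) or False."""
    return {
        "A-S":   A > 0.38 and S > A,              # eq. AS, Conselice et al.
        "G-M20": G > -0.115 * M20 + 0.384,        # eq. GM20, Lotz et al.
        "G-A":   G > -0.4 * A + 0.66 or A > 0.4,  # eq. GA, Lotz et al.
        "G_M":   GM > 0.6,                        # eq. GM, HI-specific
        "A-M20": A > -0.2 * M20 + 0.25,           # eq. AM20, HI-specific
        "C-M20": C > -5.0 * M20 + 3.0,            # eq. CM20, HI-specific
    }
```

Feeding in the NGC 3109 VLA-ANGST values from Table \[t:kat\], for instance, only the Asymmetry-based criteria fire, consistent with the observation that most dwarfs sit on one side of the massive-galaxy criteria.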
Here we compare the HI morphologies to these three star-formation tracers to explore which time-scale of star-formation informs the morphology of the atomic gas: current from H$\alpha$ emission, reported in [@Hunter06], recent from FUV fluxes, reported in [@Hunter10] and McQuinn et al. [*in preparation*]{}, or long-term star-formation history from HST resolved stellar populations, reported in [@Weisz11b] and [@McQuinn12a]. ![image](./holwerda_f4a.pdf){width="\textwidth"} ![image](./holwerda_f4b.pdf){width="110.00000%"} ![image](./holwerda_f5a.pdf){width="\textwidth"} ![image](./holwerda_f5b.pdf){width="110.00000%"} ![The $M_{20}$ and Gini parameters of the HI maps colour coded by the star-formation surface density over the optical disk ($log_{10}(SFR_{M25})$) from [@Hunter04]. The dashed line is a criterion for major interaction from [@Holwerda11c; @Lotz04]. []{data-label="f:GM20:sfrd"}](./holwerda_f6.pdf){width="50.00000%"} ### Current Star-Formation: $SFR_{H\alpha}$ {#s:sfrHa} Figure \[f:sfr\] shows the distribution of HI column density map morphologies, coded by their [*total*]{} star-formation, inferred from the H$\alpha$ flux from [@Hunter04], their Table 3. Typical low star-formation rates from H$\alpha$ flux values ($log_{10}(SFR_{H\alpha}) \sim -2.8$) are found predominantly in low HI Asymmetry galaxies. They also suggestively cluster elsewhere in HI morphology space, e.g., Figure \[f:sfr\], sub-panels (a), (g) or (j). However, a direct comparison reveals little correlation between the HI morphology parameters and the current star-formation (Figure \[f:sfr:6par\] and Table \[t:spear\]). Figure \[f:sfr\] shows the distribution of HI column density map morphologies, coded by the star-formation surface density ($log_{10}(SFR_{M25})$) from [@Hunter04], their Table 3, based on the H$\alpha$ luminosity over the 25 mag/arcsec$^2$ radius ($R_{25}$). A similar value can be obtained over the optical radius ($R_D$).
The star-formation surface density stands out in HI morphology space in the Gini parameter (when computed over $R_D$, see Table \[t:spear\]). Because the dwarfs straddle the Gini-$M_{20}$ criterion for interaction (equation \[eq:GM20\]), we explore these parameters together with the current star-formation surface density in detail in Figure \[f:GM20:sfrd\]. Dwarfs with lower star-formation surface densities ($log_{10}(SFR_{M25}) < -3$) tend to lie below this interaction criterion, while those above the criterion tend to have higher star-formation surface densities. That is not to say that those dwarfs are indeed interacting, but their HI appearance, as parameterised by Gini and $M_{20}$ combined, and their current star-formation do appear to be linked. ![image](./holwerda_f7a.pdf){width="\textwidth"} ![image](./holwerda_f7b.pdf){width="110.00000%"} ### Gas Exhaustion Time $\tau_c$ {#s:tau} [@Hunter04] supply an estimate of the time it will take each galaxy to exhaust its gas supply (estimated from single-dish observations) at its current star-formation rate. Figure \[f:tau\] shows the morphology distribution colour-coded by the gas exhaustion time ($\tau$). The quickly exhausted galaxies are (unsurprisingly) those with a high star-formation surface density. The exhaustion time estimate is a simple one; otherwise one could perhaps expect a relation with the concentration of fuel or the Gini parameter (an indication of inequality): ISM in lumps close to the star-formation would be consumed much faster than a smooth gas disk that would first need to coalesce into star-forming clouds. However, the gas exhaustion time is not strongly related to any of the HI morphology parameters (Figure \[f:tau:6par\] and Table \[t:spear\]). ![image](./holwerda_f8a.pdf){width="\textwidth"} ![image](./holwerda_f8b.pdf){width="110.00000%"} ![The $M_{20}$ and Gini parameters of the HI maps colour coded by the star-formation rate based on FUV flux ($log_{10} (SFR_{FUV})$) reported in [@Hunter10].
The dashed line is a criterion for major interaction from [@Holwerda11c; @Lotz04]. []{data-label="f:GM20:sfr:fuv"}](./holwerda_f9.pdf){width="50.00000%"} ### Recent Star-formation: $SFR_{FUV}$ {#s:sfr:fuv} [@Hunter10] and McQuinn et al. [*in prep*]{} report FUV fluxes and derived star-formation rates. The McQuinn results are corrected for dust extinction using an estimate of the total far-infrared flux based on Spitzer/MIPS 24, 70 and 160 $\mu$m maps. The Hunter et al. values are not corrected for dust extinction. The general agreement between the Hunter et al. and McQuinn et al. values in a couple of overlapping cases is good enough for us to combine both sets of FUV star-formation values to mark the points in Figure \[f:sfr:fuv\]. In Figure \[f:sfr:fuv\], there are some weak trends already evident between recent star-formation ($SFR_{FUV}$) and the Gini and $M_{20}$ values in HI, but no clear delineations in parameter-space. Figure \[f:sfr:fuv:6par\] confirms these trends: Gini increases with $SFR_{FUV}$ and $M_{20}$ decreases, i.e., HI disks become less smooth (higher G) but show relatively fewer bright spots at larger radii (lower $M_{20}$). The Spearman indices in Table \[t:spear\] corroborate this and also reveal (weak) relations of Concentration and $G_M$ with recent star-formation. Figure \[f:GM20:sfr:fuv\] highlights the Gini-$M_{20}$ relation. The majority of galaxies with an $SFR_{FUV}$ measurement straddle the G-$M_{20}$ interaction line. To better constrain the relation between HI Gini and $M_{20}$ (and other HI morphological parameters), the sample of $SFR_{FUV}$ measurements will need to be expanded to include all VLA-ANGST and LITTLE-THINGS galaxies. [@Hunter10] also provide comparisons to the star-formation rates inferred from H$\alpha$ fluxes from [@Hunter04] and a V-band photometry-based star-formation rate. These values could provide a useful direct comparison to determine which time-scale of ongoing star-formation dominates the morphology for the same sample.
The figures in Appendix \[s:h10\] show the dependence of HI morphology on these ratios, $SFR_{FUV}/SFR_{H\alpha}$ and $SFR_{FUV}/SFR_{V}$, in Figures \[f:ratHa\] and \[f:ratV\] respectively. Apart from some suggestive clustering of a few points, there is no real relation between the $SFR_{FUV}/SFR_{H\alpha}$ ratio and HI morphology. There may be some relation between the $SFR_{FUV}/SFR_{V}$ ratio and Gini (see Table \[t:spear\]). ![image](./holwerda_f10a.pdf){width="\textwidth"} ![image](./holwerda_f10b.pdf){width="110.00000%"} Resolved stellar populations: Star-formation History ---------------------------------------------------- One of the main science drivers behind the ANGST survey was to obtain an accurate star-formation history from the resolved stellar population as observed with the Hubble Space Telescope [see @angst; @Weisz11a; @Weisz11b; @Weisz11c; @Weisz12a; @Weisz12b; @McQuinn10a; @McQuinn10b; @McQuinn12a; @McQuinn12b]. The main values to compare to the HI morphology are the mean star-formation time, the mass-to-light ratio and the mean age of the stars in each galaxy from [@Weisz11b], their Tables 2 and 3, supplemented with a few average star-formation rate ($<SFR>$) values from [@McQuinn12b], their Table 1. Based on the (weak) relations with ongoing star-formation tracers in the previous sections, one could expect some correlation between the shape of the atomic hydrogen distribution and the star-formation history, depending on the typical time-scale of the relation. ![The $M_{20}$ and Gini parameters of the HI maps colour coded by the mean past star-formation rate from [@Weisz11b] or [@McQuinn12b]. The dashed line is a criterion for major interaction from [@Holwerda11c; @Lotz04].
[]{data-label="f:GM20:tau"}](./holwerda_f11.pdf){width="50.00000%"} Figure \[f:SFtime\] shows the HI morphology, colour-coded for the ANGST sample by the average star-formation rate ($<SFR>$) over the entire history of the galaxy, calculated over the past 10 Gyr [@Weisz11b] or 6 Gyr [@McQuinn12b]. Figure \[f:SFtime:6par\] shows the direct relation between the typical star-formation over these longer time-scales, $<SFR>$, and the HI morphology parameters. Those galaxies with low star-formation rates are typically low in Gini, $G_M$, and Asymmetry, and high in $M_{20}$ values. The $G_M$ and $M_{20}$ values appear to be related for those galaxies with a low lifetime star-formation rate ($\sim10^{-3} M_\odot/yr$). The Spearman indices are high for $M_{20}$ and Gini: $<SFR>$ is anti-correlated with $M_{20}$ and correlated with Gini. Thus, there is some relation between the mean star-formation rate in a dwarf galaxy and how the HI is distributed. Galaxies that are not forming stars at a high rate right now and have not in the past ($<SFR> ~ \sim 1-2 \times 10^{-3} M_\odot/yr$) show lower values of Gini and $G_M$ and higher values of $M_{20}$. In comparison, Appendix \[s:w11\] shows that there is little or no relation with the mass-to-light ratio or mean stellar age from [@Weisz11b]. ![The distribution of the HI morphological parameters colour coded by the relative extent of the H$\alpha$ disk ($R_{H\alpha}/R_{25}$) from [@Hunter04]. Dashed lines are the criteria for major interaction from [@Holwerda11c], based on the WHISP sample or established morphological selections of mergers in optical data. The ratio is related to the $M_{20}$ and anti-correlated with the Gini parameter (Table \[t:spear\]), already identified in [@Holwerda12c] as the right combination to identify extended star-formation disks.[]{data-label="f:xdisk"}](./holwerda_f12.pdf){width="50.00000%"} ![The Asymmetry and $M_{20}$ of the HI maps colour coded by the relative extent of the H$\alpha$ disk ($R_{H\alpha}/R_D$) from [@Hunter04].
The dashed lines are the  disk criteria identified in [@Holwerda12c] for  images. []{data-label="f:xuv"}](./holwerda_f13.pdf){width="50.00000%"} Extended H$\alpha$ disks {#s:RHII} ------------------------ In [@Holwerda12c], we used these parameters to identify extended  disks () in the   survey. [@Hunter04] note the relative scale of the HII regions to that of the optical disk ($R_{25}$), the Holmberg radius ($R_H$), and the disk scale length ($R_D$). Figure \[f:xdisk\] shows the distribution of morphological parameters, colour coded by the relative extent of the H$\alpha$ emission. The strongest relation of these radius ratios with the HI parameters is a correlation with $M_{20}$ and an anti-correlation with Gini (Table \[t:spear\]). We find that galaxies that have a high star-formation surface density are also relatively compact. Figure \[f:xuv\] shows the Asymmetry-$M_{20}$ plane for the HI maps and the  disk criterion for 6"   data from [@Holwerda12c]. Eleven galaxies are in this selection. Five have extended HII radii ($R_{HII}/R_{25} > 1$), and five of these do not have an HII radius from [@Hunter04] (Figure \[f:xdisk\]). The one extreme $R_{HII}/R_{25}$ value in our sample is selected, but the remainder are not exceptionally extended or concentrated. We conclude that extended HII disks are not related to extended  disks.
  Name                                                        C               A               S               $M_{20}$        G               $G_M$           note
  ----------------------------------------------------------- --------------- --------------- --------------- --------------- --------------- --------------- ------
  $v_{rot}$                                                   0.23            -0.06           -0.04           -0.02           0.05            0.33
  from [@Karachentsev04]:
  $\Theta$                                                    -0.03           [**0.42**]{}    -0.02           0.14            -0.03           0.21            (1)
  from [@Hunter06]:
  $M_{HI}/L$                                                  -0.13           -0.13           [**-0.46**]{}   -0.11           0.00            -0.25           (2)
  $log_{10}(SFR)$ ($M_\odot/yr$)                              0.17            0.15            0.16            -0.06           0.13            0.32            (3)
  $log_{10}(SFR_{M25})$                                       0.12            -0.01           0.06            0.00            0.16            0.12            (4)
  $log_{10}(SFR_D)$                                           0.32            0.17            0.23            -0.24           [**0.44**]{}    0.28            (5)
  $log_{10}(\tau)$                                            -0.20           -0.14           -0.23           0.13            -0.21           -0.22           (6)
  $R_{HII}/R_{25}$                                            -0.28           -0.17           -0.33           [**0.40**]{}    -0.36           0.07            (7)
  $R_{HII}/R_{H}$                                             -0.29           -0.11           -0.21           [**0.61**]{}    [**-0.40**]{}   0.23            (8)
  $R_{HII}/R_{D}$                                             -0.09           -0.17           -0.15           0.34            -0.10           0.23            (9)
  from [@Weisz11b] and [@McQuinn12a]:
  $f_{10}$                                                    -0.14           -0.26           -0.19           0.05            -0.16           0.25
  $f_{06}$                                                    0.03            -0.21           -0.14           -0.16           0.16            -0.02
  $f_{03}$                                                    -0.26           0.15            [**0.41**]{}    0.14            -0.30           0.01
  $f_{02}$                                                    -0.28           0.28            [**0.48**]{}    0.27            -0.42           -0.13
  $f_{01}$                                                    -0.07           -0.01           0.38            0.07            -0.12           -0.29
  Mean Stellar Age (Gyr)                                      -0.07           -0.26           -0.09           -0.02           -0.10           0.20            (10)
  $<SFR>$ ($10^{-3} M_\odot/yr$)                              0.35            -0.08           -0.30           [**-0.50**]{}   [**0.44**]{}    0.15            (11)
  Mass-to-light ratio                                         -0.11           0.08            -0.11           0.11            -0.07           0.05            (12)
  from [@Hunter10] and McQuinn et al. [*in preparation*]{}:
  log$_{10}$(SFR$_{FUV}$) ($M_\odot/yr$)                      [**0.44**]{}    -0.14           0.03            -0.32           [**0.44**]{}    0.36            (13)
  log$_{10}$(SFR$_{H\alpha}$) ($M_\odot/yr$)                  0.24            0.20            0.19            -0.20           [**0.48**]{}    0.21            (14)
  log$_{10}$(SFR$_{V}$) ($M_\odot/yr$)                        0.38            -0.24           0.05            -0.38           0.32            0.06            (15)
  log$_{10}$(SFR$_{FUV}$/SFR$_{H\alpha}$)                     0.15            -0.38           0.18            -0.15           -0.21           -0.04           (16)
  log$_{10}$(SFR$_{FUV}$/SFR$_{V}$)                           [**-0.43**]{}   0.06            0.06            [**0.40**]{}    [**-0.53**]{}   -0.14           (17)
  Mass-to-light M/L$_V$ ($M_\odot/L_\odot$)                   -0.01           -0.25           -0.04           0.04            -0.22           -0.18           (18)

(1) Tidal parameter from [@Karachentsev04]. (2) The mass-to-light ratio inferred in [@Hunter06] from H$\alpha$ emission.
(3) The [*total*]{} star-formation rate inferred in [@Hunter06] from H$\alpha$ emission. (4) The star-formation rate over the $R_{25}$ radius inferred in [@Hunter06] from H$\alpha$ emission. (5) The star-formation rate over the $R_D$ radius inferred in [@Hunter06] from H$\alpha$ emission. (6) The gas-depletion time inferred by [@Hunter06] from H$\alpha$ emission. (7) The ratio of the radius containing the HII regions to the de Vaucouleurs radius ($R_{25}$). (8) The ratio of the radius containing the HII regions to the Holmberg radius ($R_H$). (9) The ratio of the radius containing the HII regions to the optical radius ($R_D$). (10) The mean age of the stellar population computed from the fractions ($f_{1}-f_{10}$) reported in [@Weisz11b]. (11) The mean star-formation rate over the last 10 Gyr reported in [@Weisz11b]. (12) The mean stellar mass-to-light ratio inferred from the stellar population in [@Weisz11b]. (13) The star-formation rate inferred from FUV flux reported in [@Hunter10]. (14) The star-formation rate inferred from H$\alpha$ flux reported in [@Hunter10]. (15) The star-formation rate inferred from V-band flux reported in [@Hunter10]. (16) The ratio of the star-formation rates inferred from FUV and H$\alpha$ flux reported in [@Hunter10]. (17) The ratio of the star-formation rates inferred from FUV and V-band flux reported in [@Hunter10]. (18) The inferred stellar mass-to-light ratio reported in [@Hunter10]. The Star-formation and HI Morphology Relations in Spirals and Irregulars {#s:sdss} =================================================================== In the past few sections, we have tried to establish whether there is a relation between the quantified HI morphological parameters of dwarf galaxies and their star-formation measured over different time-scales by different tracers.
To place this in a larger context, we will now compare the quantified HI morphology parameters of [*all*]{} our samples, THINGS [@Holwerda11a], WHISP [@Holwerda11b], LITTLE-THINGS and VLA-ANGST (this paper), to common star-formation and stellar mass estimates from the Sloan Digital Sky Survey from [@Brinchmann04]. Total stellar mass, total star-formation and specific star-formation are available for the SDSS DR7 sample of galaxies as described by [@Kauffmann03] and [@Brinchmann04] here: <http://www.mpa-garching.mpg.de/SDSS/DR7/>. Cross-correlating with position, we obtain common measurements for our full HI sample. We note that these measures are based on a 3" aperture typically centered on the galaxy nucleus. The HI sample divides into two sub-samples: a high spatial resolution one ($\sim$6"), the VLA-ANGST and THINGS surveys, and a lower-resolution one ($\sim$10-12"), the WHISP and LITTLE-THINGS surveys. We will do the comparison with the SDSS parameters for the combined sample and for the high- and low-resolution subsamples. To distinguish these plots from the previous comparisons, the markers in the following plots are triangles for the low-resolution sample and squares for the high-resolution sample. We note two caveats in the comparison: first, the interferometric observations miss low-intensity, large-scale emission (see section \[s:samp\]). This introduces a different sensitivity to, for example, diffuse, large-scale tidal features in the low- and high-resolution maps. Secondly, the SDSS survey skipped very nearby galaxies, which were resolved into individual HII regions, for spectroscopic follow-up (e.g., M101 is not in the SDSS DR7 spectroscopic sample). Stellar Mass {#s:sdss:m} ------------ The range in stellar masses is lower than one would expect (Figure \[f:sdss:sm\]), given that two of the surveys target dwarf systems (VLA-ANGST and LITTLE-THINGS), but many of these galaxies do not feature in the SDSS DR7 spectroscopic catalog.
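The positional cross-correlation with the SDSS catalog described above can be sketched as a simple nearest-neighbour match on the sky. This is a minimal pure-Python illustration under our own assumptions (a default matching radius of 1 arcmin; the paper does not state its tolerance), not the actual matching procedure.

```python
import math

def ang_sep_deg(ra1, dec1, ra2, dec2):
    """Angular separation in degrees via the spherical law of cosines."""
    r = math.radians
    cos_sep = (math.sin(r(dec1)) * math.sin(r(dec2))
               + math.cos(r(dec1)) * math.cos(r(dec2)) * math.cos(r(ra1 - ra2)))
    # Clamp against floating-point overshoot before acos.
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_sep))))

def match_by_position(targets, catalog, max_sep_deg=1 / 60):
    """For each (ra, dec) target, return the index of the nearest catalog
    entry within max_sep_deg, or None if no entry is close enough."""
    matches = []
    for ra, dec in targets:
        best = min(range(len(catalog)), key=lambda i: ang_sep_deg(ra, dec, *catalog[i]))
        matches.append(best if ang_sep_deg(ra, dec, *catalog[best]) <= max_sep_deg else None)
    return matches
```

In practice a library routine (e.g., astropy's catalog matching) would be preferable; the brute-force loop above is only meant to make the operation concrete.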
Figure \[f:sdss:sm:6par\] shows the relation (if any) between the six HI morphological parameters and the stellar mass ($log_{10}(M_*/M_\odot)$). There appears to be little or no relation between a galaxy’s stellar mass and the HI morphology, in the full sample or in either the high- or lower-resolution samples (6" and 12" respectively, see Table \[t:sdss\]). ![image](./holwerda_f14a.pdf){width="\textwidth"} ![image](./holwerda_f14b.pdf){width="110.00000%"} Star-Formation {#s:sdss:sfr} -------------- Figure \[f:sdss:sf\] shows the distribution of HI morphology parameters, now colour-coded by total star-formation ($log(SFR)$). There are a few suggestive outliers, especially with respect to the Gini parameter (panel (b)), but a clean trend is impossible to distinguish. In the plot of star-formation rate against the six HI morphology parameters in Figure \[f:sdss:sf:6par\], clear trends are also absent, as reflected in the Spearman rankings for the full sample (Table \[t:sdss\]). However, there is an interesting difference between the low- and high-resolution samples: there are much better correlations between the overall HI morphology and the total star-formation in the high-resolution sample (6"). In the high-resolution sample, star-formation is weakly related to Asymmetry, Smoothness, $M_{20}$, Gini, and $G_M$. This is similar to what we found for the smaller systems (Table \[t:spear\]), but some of the relations are inverted (e.g., SFR and Gini). The inversion of the relation between star-formation and those HI parameters that measure the clumpiness of the ISM is intriguing: if one extends the mass range of the sample to high-mass galaxies (in fact, the high-resolution sample is dominated by them), the Gini parameter [*lowers*]{} slightly with higher star-formation (Figure \[f:sdss:sf:6par\], and more clearly in Table \[t:sdss\]).
One could speculate that in high-mass, high-star-formation cases, the clumping is taken to extremes and the majority of hydrogen is in dense molecular clumps, leaving a relatively smooth HI disk. Because many of the dwarf systems do not have a reliable H$\alpha$ star-formation tracer in SDSS, primarily because their stellar light is too diffuse, we plot the combination of the [@Hunter06] values and the [@Brinchmann04] values in Figure \[f:sf:Ha\] for Gini and $M_{20}$. We note that there is some relation, but most interestingly the range of HI morphology values increases towards higher star-formation (and higher-mass systems). ![image](./holwerda_f15a.pdf){width="\textwidth"} ![image](./holwerda_f15b.pdf){width="110.00000%"} Specific Star-Formation ----------------------- Normalizing the total star-formation by the stellar mass, there is again little or no relation between the HI morphology and the specific star-formation (Figures \[f:sdss:ssf\] and \[f:sdss:ssf:6par\] and Table \[t:sdss\]), either for the total sample or for the high- or low-resolution samples.
![image](./holwerda_f16a.pdf){width="\textwidth"} ![image](./holwerda_f16b.pdf){width="110.00000%"}

  Sample & parameter                      C       A       S       $M_{20}$   G       $G_M$
  --------------------------------------- ------- ------- ------- ---------- ------- -------
  6"
  Star Formation Rate (SFR)               -0.05   0.44    0.34    -0.35      -0.36   0.36
  Specific Star Formation Rate (SSFR)     -0.08   -0.07   -0.13   -0.17      -0.05   0.23
  Stellar Mass ($M_*$)                    -0.03   0.28    0.13    0.13       -0.08   -0.03
  12"
  Star Formation Rate (SFR)               -0.04   0.01    0.15    -0.05      -0.07   -0.02
  Specific Star Formation Rate (SSFR)     -0.05   -0.02   0.04    -0.03      -0.01   -0.04
  Stellar Mass ($M_*$)                    -0.01   -0.01   0.00    -0.03      -0.04   -0.01
  All
  Star Formation Rate (SFR)               0.00    0.08    0.17    -0.10      -0.13   0.02
  Specific Star Formation Rate (SSFR)     -0.05   -0.00   0.02    -0.04      -0.02   -0.03
  Stellar Mass ($M_*$)                    -0.01   -0.00   0.01    -0.01      -0.05   -0.01

\[t:sdss\] ![image](./holwerda_f17a.pdf){width="32.00000%"} ![image](./holwerda_f17b.pdf){width="32.00000%"} ![image](./holwerda_f17c.pdf){width="32.00000%"} Discussion {#s:disc} ========== As part of the local volume of galaxies, many dwarf galaxies can be expected to be tidally disrupted by either a close dwarf companion or a nearby massive galaxy (the Milky Way, Andromeda or M81). However, the HI morphology does not seem to be affected as much by tidal interaction as by star formation. Since these galaxies have relatively shallow gravitational potentials, one would expect the tidal forces to play a significant role in the general shape of the ISM.
However, the short kinematic time scales of the ISM may result in a quick relaxation of any tidal disturbance, or the tidal effects on the HI in these galaxies are too subtle to detect in this parameter space. We compared the HI morphology to a series of star-formation indicators, sensitive to different time-scales of star-formation. The measures of inequality in the HI morphology, the Gini, $G_M$ and $M_{20}$ parameters, are weakly related to various tracers of star-formation (H$\alpha$, FUV or based on resolved stellar populations). This result is largely in line with what is becoming the general picture of these galaxies: the local physics dominates over environment [@McQuinn12a]. That is not to say that the star-formation is the shaping agent of the HI morphology through, for example, supernova feedback and stellar winds. The reverse could well be true; the unequally distributed ISM more often generates the conditions for local star-formation. [@McQuinn12a] show that the star-formation is stochastic both in time and location within star-bursting galaxies, corroborating such a scenario. Interestingly, the [*mean*]{} star-formation rate of the galaxies appears to be related to the HI morphological parameters as well as the current star-formation indicator (Figure \[f:sfr\]). We propose a scenario where, as soon as the ISM is transformed to an unequal or clumpy state, most likely by an external trigger (e.g., tidal effects or gravitational turbulence from the inflow of gas), the star-formation rate is elevated. The combined effect of elevated star-formation and the external factor keeps the ISM clumpy over longer time-scales than the current (ionizing, $H\alpha$) star-formation timescale. We note a few things in the comparison of [*all*]{} the available HI morphological parameters with common stellar mass and star-formation estimates from SDSS (section \[s:sdss\]). First, any relation manifests itself only in the [*high-resolution*]{} ($\sim$6") sample and not in the lower-resolution one.
Because the sampling is typically sub-kpc in the high-resolution sample, this scale is one where the effects of feedback from star-formation (or the effect of HI overdensities on star-formation) can be seen. Coarser observations simply wash any ISM-star-formation relation out. The importance of sampling in our study of the effects of star-formation on the ISM is noteworthy for the future all-sky HI surveys (the WNSHS survey with the WSRT/APERTIF and the Wallaby survey with ASKAP, Koribalski et al. [*in preparation*]{}). The spatial resolution of these surveys is expected to be $\sim$10", which means that the effects of star-formation on the HI morphology will largely be smoothed out. While these surveys will be indispensable to identify exceptional systems and characterize global gas characteristics in disks, follow-up observations at higher resolution will remain essential to characterize the interplay of star-formation and the ISM. Secondly, the effects of star-formation reversed for some parameters when we compared the low-mass sample (LITTLE-THINGS and VLA-ANGST) to the high-resolution sample (VLA-ANGST and THINGS). For example, the relation between total star-formation and the Gini parameter (or $M_{20}$ or $G_M$) reverses sign (Tables \[t:sdss\] and \[t:spear\]). We put forward that in low-mass systems it is predominantly the HI that is related to and regulates the star-formation [as argued by @Bigiel08], but in a higher-mass sample, the molecular component is the dominant ISM component in the relation with star-formation. That is to say, the clumping seen in the HI of low-mass systems with increased star-formation happens in the molecular phase in more massive systems, leaving a relatively smooth HI disk. Therefore the Gini parameter lowers with star-formation if one includes more massive galaxies. The third point to make is that Asymmetry is strongly related to star-formation for the high-resolution subsample.
This could be indirect evidence of gas accretion, especially for massive systems, as a strong relation between any of the star-formation tracers and HI Asymmetry is absent in the low-mass sample (Table \[t:spear\]). The relation between Asymmetry and M20 for HI disks reported in [@Holwerda12c] certainly appears to hint in that direction. However, this could also be taken as evidence of star-formation feedback on larger scales. The fact remains that the strongest and most consistent relationship in the high-resolution sample is between star-formation and HI Asymmetry. Conclusion {#s:concl} ========== We applied the quantified morphology parameterization to two HI surveys of nearby dwarf galaxies, LITTLE-THINGS and VLA-ANGST, and compared these to indicators of interaction and star-formation. We find that: 1. The HI morphological criteria for interaction, developed for use on massive galaxies, do not apply to these smaller dwarf irregular galaxies (Figure \[f:Theta\]). 2. A low value of the Asymmetry of the HI map may point to an isolated dwarf (Figure \[f:Theta\]). 3. Current star-formation surface density (H$\alpha$) is related to the HI morphology, specifically the Gini, $G_M$, and M20 parameters. These indicate a stronger inequality in the neutral ISM distribution (Figures \[f:sfd\] & \[f:sfr\]). 4. Consequently, clumpy HI is also the quickest depleted (Figure \[f:tau\]). 5. Based on previous resolved stellar population results, the star-formation (current and history) is linked to the Gini, $G_M$, and M20 parameters of the HI maps (Figure \[f:SFtime\]): high star-formation and unequally distributed HI are closely linked but not necessarily causal. 6. Over a large sample of galaxies, spanning a wider mass range, there is no relation between the HI morphology and stellar mass, total or specific star-formation (Table \[t:sdss\] and Figures \[f:sdss:sm\], \[f:sdss:sm:6par\], \[f:sdss:sf\], \[f:sdss:sf:6par\], \[f:sdss:ssf\], and \[f:sdss:ssf:6par\]). 7. 
To detect any relation between star-formation tracers and HI morphology, high-resolution ($\sim$6") HI maps are critical (Table \[t:sdss\]). 8. There is a relation between HI Asymmetry and the ongoing [*total*]{} star-formation in massive galaxies (Figure \[f:sdss:sf:6par\], Table \[t:sdss\]). Future applications of the quantified morphology parameters on HI maps will be on the large catalogs of moderately resolved galaxies in the WNSHS (Jozsa et al. [*in preparation*]{}) and WALLABY (Koribalski et al. [*in preparation*]{}) surveys. These will produce significantly improved statistics on HI morphology, which can then be combined with all-sky surveys of star-formation tracers (e.g., GALEX and WISE catalogs). Acknowledgements {#acknowledgements .unnumbered} ================ The authors would like to thank the anonymous referee, whose comments and suggestions improved the manuscript significantly. The lead author thanks Dr McQuinn for the useful discussions on the topic of dwarfs and their star-formation and for making her star-formation estimates available to us, and A. Leroy for the suggestion of the comparison against SDSS measures. The authors thank C. Carignan for making the HI moment-0 map of NGC 3109 from the Karoo Array Telescope available for a direct comparison with VLA data. The lead author thanks the European Space Agency for the support of the Research Fellowship program. We thank the National Radio Astronomy Observatory for their generous time allocation, observing, and data reduction support for these two Large Projects, LITTLE-THINGS and VLA-ANGST. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. We would like to thank the LITTLE-THINGS and VLA-ANGST teams for their effort on the calibration and imaging and for making their products available to the astronomical community. 
This research has made use of the NASA/IPAC Extragalactic Database (NED) which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. This research has made use of NASA’s Astrophysics Data System. D. A., [Skillman]{} E. D., [Marble]{} A. R., [van Zee]{} L., [Engelbracht]{} C. W., [Lee]{} J. C., [Kennicutt]{} Jr. R. C., [Calzetti]{} D., [Dale]{} D. A., [Johnson]{} B. D., 2012, , 754, 98 E., [Arnouts]{} S., 1996, , 117, 393 F., [Leroy]{} A., [Walter]{} F., [Brinks]{} E., [de Blok]{} W. J. G., [Madore]{} B., [Thornley]{} M. D., 2008, , 136, 2846 R. S., [de Blok]{} W. J. G., [Jonas]{} J. L., [Fanaroff]{} B., 2009, ArXiv e-prints/0910.2935 N., [Lehnert]{} M. D., [Aguirre]{} A., [P[é]{}roux]{} C., [Bergeron]{} J., 2007, , 378, 525 J., [Charlot]{} S., [White]{} S. D. M., [Tremonti]{} C., [Kauffmann]{} G., [Heckman]{} T., [Brinkmann]{} J., 2004, , 351, 1151 J. M., [Giovanelli]{} R., [Haynes]{} M. P., [Janowiecki]{} S., [Parker]{} A., [Salzer]{} J. J., [Adams]{} E. A. K., [Engstrom]{} E., [Huang]{} S., [McQuinn]{} K. B. W., [Ott]{} J., [Saintonge]{} A., [Skillman]{} E. D., [Allan]{} J., [Erny]{} G., [Fliss]{} P., [Smith]{} A., 2011, ArXiv e-prints C., [Frank]{} B. S., [Hess]{} K. M., [Lucero]{} D. M., [Randriamampandry]{} T. H., [Goedhart]{} S., [Passmoor]{} S. S., 2013, ArXiv e-prints C. J., 2003, , 147, 1 J., [Williams]{} B., [Gogarten]{} S., [Weisz]{} D., [Skillman]{} E., [Seth]{} A., [ANGST Team]{}, 2007, in American Astronomical Society Meeting Abstracts, Vol. 211, American Astronomical Society Meeting Abstracts, pp. 79.05–+ J. J., [Williams]{} B. F., [Seth]{} A. C., [Dolphin]{} A., [Holtzman]{} J., [Rosema]{} K., [Skillman]{} E. D., [Cole]{} A., [Girardi]{} L., [Gogarten]{} S. M., [Karachentsev]{} I. 
D., [Olsen]{} K., [Weisz]{} D., [Christensen]{} C., [Freeman]{} K., [Gilbert]{} K., [Gallart]{} C., [Harris]{} J., [Hodge]{} P., [de Jong]{} R. S., [Karachentseva]{} V., [Mateo]{} M., [Stetson]{} P. B., [Tavarez]{} M., [Zaritsky]{} D., [Governato]{} F., [Quinn]{} T., 2009, , 183, 67 W. J. G., [Jonas]{} J., [Fanaroff]{} B., [Holwerda]{} B. W., [Bouchard]{} A., [Blyth]{} S., [van der Heyden]{} K., [Pirzkal]{} N., 2009, in Panoramic Radio Astronomy: Wide-field 1-2 GHz Research on Galaxy Evolution D. L., [Keel]{} W. C., [White]{} III R. E., 2000, , 545, 171 M., [Blanton]{} M., [Yan]{} R., [Tinker]{} J., 2012, ArXiv e-prints J., [Putman]{} M. E., 2009, , 696, 385 B. W., 2005, astro-ph/0512139 B. W., [B[ö]{}ker]{} T., [Dalcanton]{} J. J., [Keel]{} W. C., [de Jong]{} R. S., 2013, B. W., [de Blok]{} W. J. G., [Bouchard]{} A., [Blyth]{} S., [van der Heyden]{} K., [Pirzkal]{} N., 2009, in Conference Proceedings of the “Panoramic Radio Astronomy: Wide-field 1-2 GHz research on galaxy evolution”, June 02 - 05, 2009 Groningen, the Netherlands B. W., [Keel]{} W. C., 2013, ArXiv e-prints B. W., [Keel]{} W. C., [Bolton]{} A., 2007, , 134, 2385 B. W., [Pirzkal]{} N., [Cox]{} T. J., [de Blok]{} W. J. G., [Weniger]{} J., [Bouchard]{} A., [Blyth]{} S.-L., [van der Heyden]{} K. J., 2011, , 416, 2426 B. W., [Pirzkal]{} N., [de Blok]{} W. J. G., [Bouchard]{} A., [Blyth]{} S.-L., [van der Heyden]{} K. J., 2011, , 416, 2437 B. W., [Pirzkal]{} N., [de Blok]{} W. J. G., [Bouchard]{} A., [Blyth]{} S.-L., [van der Heyden]{} K. J., [Elson]{} E. C., 2011, , 416, 2401 —, 2011, , 416, 2415 B. W., [Pirzkal]{} N., [de Blok]{} W. J. G., [van Driel]{} W., 2011, , 416, 2447 B. W., [Pirzkal]{} N., [Heiner]{} J. S., 2012, , 427, 3159 D. A., [Elmegreen]{} B. G., 2004, , 128, 2170 —, 2006, , 162, 49 D. A., [Elmegreen]{} B. G., [Ludka]{} B. C., 2010, , 139, 447 D. A., [Elmegreen]{} B. G., [Martin]{} E., 2006, ArXiv Astrophysics e-prints D. 
A., [Ficut-Vicas]{} D., [Ashley]{} T., [Brinks]{} E., [Cigan]{} P., [Elmegreen]{} B. G., [Heesen]{} V., [Herrmann]{} K. A., [Johnson]{} M., [Se-Heon]{}, [Rupen]{} M. P., [Schruba]{} A., [Simpson]{} C. E., [Walter]{} F., [Westpfahl]{} D. J., [Young]{} L. M., [Zhang]{} H.-X., 2012, ArXiv e-prints J., 2007, in From Planets to Dark Energy: the Modern Radio Universe. October 1-5 2007, The University of Manchester, UK. Published online at SISSA, Proceedings of Science, p.7 I. D., [Karachentseva]{} V. E., [Huchtmeier]{} W. K., [Makarov]{} D. I., 2004, , 127, 2031 G., [Heckman]{} T. M., [White]{} S. D. M., [Charlot]{} S., [Tremonti]{} C., [Brinchmann]{} J., [Bruzual]{} G., [Peng]{} E. W., [Seibert]{} M., [Bernardi]{} M., [Blanton]{} M., [Brinkmann]{} J., [Castander]{} F., [Cs[á]{}bai]{} I., [Fukugita]{} M., [Ivezic]{} Z., [Munn]{} J. A., [Nichol]{} R. C., [Padmanabhan]{} N., [Thakar]{} A. R., [Weinberg]{} D. H., [York]{} D., 2003, , 341, 33 W. C., [Manning]{} A. M., [Holwerda]{} B. W., [Mezzoprete]{} M., [Lintott]{} C. J., [Schawinski]{} K., [Gay]{} P., [Masters]{} K. L., 2013, , 125, 2 W. C., [White]{} III R. E., 2001, , 121, 1442 —, 2001, , 122, 1369 E. N., [Martin]{} C. L., [Finlator]{} K., 2011, , 742, L25 J. M., [Jonsson]{} P., [Cox]{} T. J., [Croton]{} D., [Primack]{} J. R., [Somerville]{} R. S., [Stewart]{} K., 2011, , 742, 103 J. M., [Jonsson]{} P., [Cox]{} T. J., [Primack]{} J. R., 2010, , 404, 590 —, 2010, , 404, 575 J. M., [Primack]{} J., [Madau]{} P., 2004, , 128, 163 K. B. W., [Skillman]{} E. D., [Cannon]{} J. M., [Dalcanton]{} J., [Dolphin]{} A., [Hidalgo-Rodr[í]{}guez]{} S., [Holtzman]{} J., [Stark]{} D., [Weisz]{} D., [Williams]{} B., 2010, , 721, 297 —, 2010, , 724, 49 K. B. W., [Skillman]{} E. D., [Cannon]{} J. M., [Dalcanton]{} J. J., [Dolphin]{} A., [Stark]{} D., [Weisz]{} D., 2009, , 695, 561 K. B. W., [Skillman]{} E. D., [Dalcanton]{} J. J., [Cannon]{} J. M., [Dolphin]{} A. E., [Holtzman]{} J., [Weisz]{} D. R., [Williams]{} B. 
F., 2012, ArXiv e-prints K. B. W., [Skillman]{} E. D., [Dalcanton]{} J. J., [Dolphin]{} A. E., [Cannon]{} J. M., [Holtzman]{} J., [Weisz]{} D. R., [Williams]{} B. F., 2012, , 751, 127 S.-H., [de Blok]{} W. J. G., [Brinks]{} E., [Walter]{} F., [Kennicutt]{} Jr. R. C., 2011, , 141, 193 J., [Stilp]{} A. M., [Warren]{} S. R., [Skillman]{} E. D., [Dalcanton]{} J. J., [Walter]{} F., [de Blok]{} W. J. G., [Koribalski]{} B., [West]{} A. A., 2012, ArXiv e-prints C., [Carollo]{} C. M., [Lilly]{} S., [Sargent]{} M. T., [Feldmann]{} R., [Kampczyk]{} P., [Porciani]{} C., [Koekemoer]{} A., [Scoville]{} N., [Kneib]{} J.-P., [Leauthaud]{} A., [Massey]{} R., [Rhodes]{} J., [Tasca]{} L., [Capak]{} P., [Maier]{} C., [McCracken]{} H. J., [Mobasher]{} B., [Renzini]{} A., [Taniguchi]{} Y., [Thompson]{} D., [Sheth]{} K., [Ajiki]{} M., [Aussel]{} H., [Murayama]{} T., [Sanders]{} D. B., [Sasaki]{} S., [Shioya]{} Y., [Takahashi]{} M., 2007, , 172, 406 E., 2010, in JENAM 2010, Joint European and National Astronomy Meeting E. D., [Simones]{} J., [Weisz]{} D. R., [Dalcanton]{} J. J., [Williams]{} B. F., [PHAT team]{}, 2012, in American Astronomical Society Meeting Abstracts, Vol. 219, American Astronomical Society Meeting Abstracts, p. \#151.07 R. A., [Sancisi]{} R., [van Albada]{} T. S., [van der Hulst]{} J. M., 2011, ArXiv e-prints C. A., [Heckman]{} T. M., [Kauffmann]{} G., [Brinchmann]{} J., [Charlot]{} S., [White]{} S. D. M., [Seibert]{} M., [Peng]{} E. W., [Schlegel]{} D. J., [Uomoto]{} A., [Fukugita]{} M., [Brinkmann]{} J., 2004, , 613, 898 F., [Brinks]{} E., [de Blok]{} W. J. G., [Bigiel]{} F., [Kennicutt]{} R. C., [Thornley]{} M. D., [Leroy]{} A., 2008, , 136, 2563 S. R., [Skillman]{} E. D., [Stilp]{} A. M., [Dalcanton]{} J. J., [Ott]{} J., [Walter]{} F., [Petersen]{} E. A., [Koribalski]{} B., [West]{} A. A., 2012, ArXiv e-prints D. R., [Dalcanton]{} J. J., [Williams]{} B. F., [Gilbert]{} K. M., [Skillman]{} E. D., [Seth]{} A. C., [Dolphin]{} A. E., [McQuinn]{} K. B. 
W., [Gogarten]{} S. M., [Holtzman]{} J., [Rosema]{} K., [Cole]{} A., [Karachentsev]{} I. D., [Zaritsky]{} D., 2011, , 739, 5 D. R., [Dolphin]{} A. E., [Dalcanton]{} J. J., [Skillman]{} E. D., [Holtzman]{} J., [Williams]{} B. F., [Gilbert]{} K. M., [Seth]{} A. C., [Cole]{} A., [Gogarten]{} S. M., [Rosema]{} K., [Karachentsev]{} I. D., [McQuinn]{} K. B. W., [Zaritsky]{} D., 2011, , 743, 8 D. R., [Johnson]{} B. D., [Johnson]{} L. C., [Skillman]{} E. D., [Lee]{} J. C., [Kennicutt]{} R. C., [Calzetti]{} D., [van Zee]{} L., [Bothwell]{} M., [Dalcanton]{} J. J., [Dale]{} D. A., [Williams]{} B. F., 2011, ArXiv e-prints D. R., [Johnson]{} B. D., [Johnson]{} L. C., [Skillman]{} E. D., [Lee]{} J. C., [Kennicutt]{} R. C., [Calzetti]{} D., [van Zee]{} L., [Bothwell]{} M. S., [Dalcanton]{} J. J., [Dale]{} D. A., [Williams]{} B. F., 2012, , 744, 44 D. R., [Skillman]{} E. D., [Cannon]{} J. M., [Walter]{} F., [Brinks]{} E., [Ott]{} J., [Dolphin]{} A. E., 2009, , 691, L59 D. R., [Zucker]{} D. B., [Dolphin]{} A. E., [Martin]{} N. F., [de Jong]{} J. T. A., [Holtzman]{} J. A., [Dalcanton]{} J. J., [Gilbert]{} K. M., [Williams]{} B. F., [Bell]{} E. F., [Belokurov]{} V., [Wyn Evans]{} N., 2012, ArXiv e-prints R. E., [Keel]{} W. C., 1992, , 359, 129 III R. E., [Keel]{} W. C., [Conselice]{} C. J., 2000, , 542, 761 The Morphological Parameters of the  column density maps based on the RO maps from LITTLE-THINGS and VLA-ANGST surveys. 
======================================================================================================================= Name Gini 20 $C_{20/80}$ A S E ------------- ------------------- -------------------- ------------------- ------------------- ------------------- ------------------- ------------------- CVnIdwA 0.596 $\pm$ 0.015 -1.582 $\pm$ 0.074 2.388 $\pm$ 0.135 1.941 $\pm$ 0.092 0.046 $\pm$ 0.049 0.276 $\pm$ 0.027 0.290 $\pm$ 0.039 DDO43 0.499 $\pm$ 0.013 -1.275 $\pm$ 0.060 2.076 $\pm$ 0.114 1.793 $\pm$ 0.049 0.187 $\pm$ 0.043 0.201 $\pm$ 0.013 0.320 $\pm$ 0.015 DDO46 0.485 $\pm$ 0.012 -1.171 $\pm$ 0.038 2.014 $\pm$ 0.088 1.384 $\pm$ 0.050 0.228 $\pm$ 0.038 0.079 $\pm$ 0.018 0.387 $\pm$ 0.017 DDO47 0.399 $\pm$ 0.008 -0.948 $\pm$ 0.012 1.769 $\pm$ 0.036 0.612 $\pm$ 0.058 0.107 $\pm$ 0.022 0.118 $\pm$ 0.009 0.360 $\pm$ 0.010 DDO50 0.483 $\pm$ 0.007 -1.146 $\pm$ 0.015 1.846 $\pm$ 0.043 1.742 $\pm$ 0.014 0.239 $\pm$ 0.020 0.118 $\pm$ 0.011 0.396 $\pm$ 0.007 DDO52 0.386 $\pm$ 0.012 -0.998 $\pm$ 0.056 1.838 $\pm$ 0.083 1.239 $\pm$ 0.159 0.253 $\pm$ 0.025 0.337 $\pm$ 0.017 0.396 $\pm$ 0.014 DDO53 0.535 $\pm$ 0.018 -1.323 $\pm$ 0.053 2.244 $\pm$ 0.119 1.951 $\pm$ 0.052 0.192 $\pm$ 0.043 0.086 $\pm$ 0.030 0.374 $\pm$ 0.025 DDO63 0.483 $\pm$ 0.013 -1.116 $\pm$ 0.029 2.172 $\pm$ 0.059 1.449 $\pm$ 0.041 0.247 $\pm$ 0.030 0.059 $\pm$ 0.021 0.438 $\pm$ 0.011 DDO69 0.520 $\pm$ 0.012 -1.227 $\pm$ 0.041 2.287 $\pm$ 0.102 1.551 $\pm$ 0.154 0.210 $\pm$ 0.024 0.477 $\pm$ 0.016 0.429 $\pm$ 0.011 DDO70 0.414 $\pm$ 0.011 -1.461 $\pm$ 0.025 2.264 $\pm$ 0.055 1.136 $\pm$ 0.037 0.057 $\pm$ 0.018 0.101 $\pm$ 0.015 0.308 $\pm$ 0.010 DDO75 0.551 $\pm$ 0.006 -0.697 $\pm$ 0.001 0.104 $\pm$ 0.012 2.000 $\pm$ 0.000 0.152 $\pm$ 0.025 0.187 $\pm$ 0.012 0.552 $\pm$ 0.006 DDO87 0.383 $\pm$ 0.014 -1.039 $\pm$ 0.028 1.808 $\pm$ 0.059 0.736 $\pm$ 0.066 0.198 $\pm$ 0.026 0.158 $\pm$ 0.018 0.371 $\pm$ 0.016 DDO101 0.356 $\pm$ 0.022 -1.003 $\pm$ 0.025 1.716 $\pm$ 0.144 1.682 $\pm$ 0.162 0.238 $\pm$ 
0.046 0.298 $\pm$ 0.030 0.360 $\pm$ 0.031 DDO126 0.453 $\pm$ 0.018 -1.083 $\pm$ 0.061 2.166 $\pm$ 0.124 1.360 $\pm$ 0.266 0.128 $\pm$ 0.045 0.530 $\pm$ 0.017 0.464 $\pm$ 0.018 DDO133 0.403 $\pm$ 0.022 -1.107 $\pm$ 0.031 1.940 $\pm$ 0.059 1.820 $\pm$ 0.091 0.044 $\pm$ 0.054 0.196 $\pm$ 0.019 0.338 $\pm$ 0.020 DDO154 0.524 $\pm$ 0.006 -1.479 $\pm$ 0.031 2.845 $\pm$ 0.110 1.519 $\pm$ 0.058 0.201 $\pm$ 0.031 0.523 $\pm$ 0.013 0.443 $\pm$ 0.007 DDO155 0.487 $\pm$ 0.017 -1.083 $\pm$ 0.057 1.898 $\pm$ 0.151 1.481 $\pm$ 0.138 0.079 $\pm$ 0.040 0.153 $\pm$ 0.023 0.335 $\pm$ 0.028 DDO165 0.526 $\pm$ 0.012 -0.992 $\pm$ 0.045 1.795 $\pm$ 0.103 1.955 $\pm$ 0.055 0.135 $\pm$ 0.052 0.360 $\pm$ 0.031 0.420 $\pm$ 0.018 DDO167 0.480 $\pm$ 0.014 -1.303 $\pm$ 0.129 2.116 $\pm$ 0.212 1.847 $\pm$ 0.167 0.187 $\pm$ 0.038 0.473 $\pm$ 0.018 0.416 $\pm$ 0.020 DDO168 0.613 $\pm$ 0.009 -1.878 $\pm$ 0.057 2.986 $\pm$ 0.136 1.137 $\pm$ 0.043 0.140 $\pm$ 0.052 0.393 $\pm$ 0.013 0.383 $\pm$ 0.012 DDO187 0.539 $\pm$ 0.013 -1.554 $\pm$ 0.093 2.397 $\pm$ 0.212 1.438 $\pm$ 0.238 0.215 $\pm$ 0.058 0.248 $\pm$ 0.028 0.341 $\pm$ 0.022 DDO210 0.509 $\pm$ 0.013 -0.699 $\pm$ 0.001 0.021 $\pm$ 0.003 2.000 $\pm$ 0.000 0.089 $\pm$ 0.037 0.161 $\pm$ 0.055 0.508 $\pm$ 0.013 DDO216 0.464 $\pm$ 0.019 -1.374 $\pm$ 0.113 2.360 $\pm$ 0.256 1.688 $\pm$ 0.101 0.091 $\pm$ 0.073 0.480 $\pm$ 0.020 0.383 $\pm$ 0.036 F564-V3 0.381 $\pm$ 0.013 -1.539 $\pm$ 0.113 2.324 $\pm$ 0.155 1.670 $\pm$ 0.204 0.162 $\pm$ 0.024 0.325 $\pm$ 0.022 0.305 $\pm$ 0.023 IC10 0.483 $\pm$ 0.004 -1.525 $\pm$ 0.024 2.767 $\pm$ 0.033 1.984 $\pm$ 0.003 0.346 $\pm$ 0.004 0.310 $\pm$ 0.005 0.534 $\pm$ 0.002 IC1613 0.465 $\pm$ 0.008 -1.020 $\pm$ 0.024 1.562 $\pm$ 0.044 1.076 $\pm$ 0.029 0.153 $\pm$ 0.021 0.289 $\pm$ 0.015 0.391 $\pm$ 0.008 LGS3 0.237 $\pm$ 0.021 -1.003 $\pm$ 0.053 1.948 $\pm$ 0.176 1.179 $\pm$ 0.092 0.348 $\pm$ 0.051 0.317 $\pm$ 0.017 0.442 $\pm$ 0.015 M81dwA 0.373 $\pm$ 0.015 -0.912 $\pm$ 0.029 1.251 $\pm$ 0.097 2.000 $\pm$ 0.000 
0.274 $\pm$ 0.031 0.116 $\pm$ 0.023 0.350 $\pm$ 0.019 NGC1569 0.656 $\pm$ 0.003 -1.784 $\pm$ 0.073 3.118 $\pm$ 0.273 1.987 $\pm$ 0.002 0.310 $\pm$ 0.036 0.294 $\pm$ 0.030 0.525 $\pm$ 0.005 NGC2366 0.567 $\pm$ 0.007 -1.326 $\pm$ 0.031 2.437 $\pm$ 0.052 1.565 $\pm$ 0.084 0.164 $\pm$ 0.017 0.533 $\pm$ 0.006 0.438 $\pm$ 0.005 NGC3738 0.611 $\pm$ 0.008 -1.543 $\pm$ 0.063 2.531 $\pm$ 0.226 1.916 $\pm$ 0.019 0.283 $\pm$ 0.034 0.317 $\pm$ 0.015 0.440 $\pm$ 0.007 NGC4163 0.534 $\pm$ 0.017 -1.345 $\pm$ 0.097 2.249 $\pm$ 0.188 1.194 $\pm$ 0.224 0.204 $\pm$ 0.049 0.091 $\pm$ 0.038 0.361 $\pm$ 0.020 NGC4214 0.478 $\pm$ 0.006 -1.319 $\pm$ 0.023 2.143 $\pm$ 0.044 0.969 $\pm$ 0.040 0.230 $\pm$ 0.017 0.167 $\pm$ 0.014 0.386 $\pm$ 0.008 SagDIG 0.421 $\pm$ 0.021 -0.703 $\pm$ 0.002 0.068 $\pm$ 0.004 2.000 $\pm$ 0.000 0.076 $\pm$ 0.049 0.175 $\pm$ 0.016 0.416 $\pm$ 0.021 UGC8508 0.587 $\pm$ 0.011 -1.493 $\pm$ 0.073 2.736 $\pm$ 0.224 1.937 $\pm$ 0.073 0.269 $\pm$ 0.054 0.584 $\pm$ 0.044 0.442 $\pm$ 0.014 WLM 0.534 $\pm$ 0.005 -0.701 $\pm$ 0.003 0.282 $\pm$ 0.014 2.000 $\pm$ 0.000 0.213 $\pm$ 0.018 0.553 $\pm$ 0.004 0.540 $\pm$ 0.005 Haro29 0.508 $\pm$ 0.016 -1.401 $\pm$ 0.149 2.419 $\pm$ 0.241 0.949 $\pm$ 0.088 0.246 $\pm$ 0.030 0.199 $\pm$ 0.019 0.366 $\pm$ 0.018 Haro36 0.597 $\pm$ 0.016 -1.615 $\pm$ 0.146 2.590 $\pm$ 0.177 1.605 $\pm$ 0.076 0.176 $\pm$ 0.041 0.316 $\pm$ 0.018 0.398 $\pm$ 0.028 Mrk178 0.500 $\pm$ 0.020 -1.362 $\pm$ 0.072 2.506 $\pm$ 0.292 1.254 $\pm$ 0.142 0.226 $\pm$ 0.049 0.429 $\pm$ 0.020 0.500 $\pm$ 0.019 VIIZw403 0.591 $\pm$ 0.012 -1.774 $\pm$ 0.178 2.714 $\pm$ 0.378 2.000 $\pm$ 0.000 0.180 $\pm$ 0.089 0.464 $\pm$ 0.025 0.362 $\pm$ 0.018 NGC247 0.344 $\pm$ 0.006 -0.698 $\pm$ 0.002 0.000 $\pm$ 0.004 2.000 $\pm$ 0.000 0.061 $\pm$ 0.009 …$\pm$ 0.002 0.344 $\pm$ 0.005 NGC404 0.296 $\pm$ 0.005 -0.875 $\pm$ 0.011 0.000 $\pm$ 0.033 1.434 $\pm$ 0.019 0.356 $\pm$ 0.013 …$\pm$ 0.008 0.456 $\pm$ 0.004 UGC4483 0.563 $\pm$ 0.014 -1.652 $\pm$ 0.128 0.000 $\pm$ 0.169 1.946 $\pm$ 
0.075 0.131 $\pm$ 0.039 …$\pm$ 0.029 0.283 $\pm$ 0.024 SextansB 0.429 $\pm$ 0.011 -1.422 $\pm$ 0.032 0.000 $\pm$ 0.044 0.858 $\pm$ 0.071 0.068 $\pm$ 0.016 …$\pm$ 0.013 0.322 $\pm$ 0.013 NGC3109 0.445 $\pm$ 0.010 -0.741 $\pm$ 0.002 0.000 $\pm$ 0.010 2.000 $\pm$ 0.000 0.047 $\pm$ 0.031 …$\pm$ 0.006 0.429 $\pm$ 0.011 Antlia 0.281 $\pm$ 0.015 -0.703 $\pm$ 0.001 0.000 $\pm$ 0.004 2.000 $\pm$ 0.000 0.229 $\pm$ 0.036 …$\pm$ 0.019 0.278 $\pm$ 0.015 SextansA 0.491 $\pm$ 0.009 -0.695 $\pm$ 0.002 0.000 $\pm$ 0.002 2.000 $\pm$ 0.000 0.069 $\pm$ 0.021 …$\pm$ 0.011 0.490 $\pm$ 0.009 DDO82 0.396 $\pm$ 0.020 -1.013 $\pm$ 0.089 0.000 $\pm$ 0.169 1.924 $\pm$ 0.053 0.278 $\pm$ 0.038 …$\pm$ 0.022 0.402 $\pm$ 0.031 KDG73 0.224 $\pm$ 0.034 -0.780 $\pm$ 0.048 0.000 $\pm$ 0.377 2.000 $\pm$ 0.000 0.423 $\pm$ 0.058 …$\pm$ 0.014 0.413 $\pm$ 0.050 NGC3741 0.392 $\pm$ 0.010 -1.752 $\pm$ 0.059 0.000 $\pm$ 0.079 1.974 $\pm$ 0.010 0.143 $\pm$ 0.014 …$\pm$ 0.006 0.456 $\pm$ 0.009 DDO99 0.447 $\pm$ 0.010 -1.412 $\pm$ 0.035 0.000 $\pm$ 0.097 1.903 $\pm$ 0.028 0.151 $\pm$ 0.028 …$\pm$ 0.010 0.375 $\pm$ 0.011 NGC4163 0.534 $\pm$ 0.013 -1.360 $\pm$ 0.083 2.225 $\pm$ 0.159 1.281 $\pm$ 0.136 0.204 $\pm$ 0.055 0.091 $\pm$ 0.038 0.351 $\pm$ 0.021 MCG9-20-131 0.539 $\pm$ 0.016 -1.735 $\pm$ 0.146 0.000 $\pm$ 0.205 1.033 $\pm$ 0.357 0.131 $\pm$ 0.057 …$\pm$ 0.021 0.277 $\pm$ 0.049 UGCA292 0.562 $\pm$ 0.019 -1.439 $\pm$ 0.104 0.000 $\pm$ 0.137 0.984 $\pm$ 0.474 0.078 $\pm$ 0.051 …$\pm$ 0.037 0.308 $\pm$ 0.040 GR8 0.441 $\pm$ 0.017 -0.982 $\pm$ 0.055 0.000 $\pm$ 0.108 1.368 $\pm$ 0.222 0.105 $\pm$ 0.038 …$\pm$ 0.015 0.361 $\pm$ 0.018 UGC8508 0.587 $\pm$ 0.012 -1.493 $\pm$ 0.106 2.736 $\pm$ 0.185 1.937 $\pm$ 0.086 0.269 $\pm$ 0.063 0.584 $\pm$ 0.033 0.442 $\pm$ 0.017 KKH86 0.205 $\pm$ 0.032 -0.702 $\pm$ 0.058 0.000 $\pm$ 0.267 1.995 $\pm$ 0.173 0.452 $\pm$ 0.079 …$\pm$ 0.037 0.410 $\pm$ 0.047 UGC8833 0.541 $\pm$ 0.022 -1.338 $\pm$ 0.113 0.000 $\pm$ 0.218 0.543 $\pm$ 0.268 0.051 $\pm$ 0.045 …$\pm$ 0.040 0.249 
$\pm$ 0.035 KK230 0.361 $\pm$ 0.022 -1.162 $\pm$ 0.079 0.000 $\pm$ 0.195 1.036 $\pm$ 0.368 0.220 $\pm$ 0.034 …$\pm$ 0.015 0.370 $\pm$ 0.050 DDO187 0.539 $\pm$ 0.011 -1.554 $\pm$ 0.128 2.397 $\pm$ 0.137 1.438 $\pm$ 0.252 0.215 $\pm$ 0.044 0.248 $\pm$ 0.037 0.341 $\pm$ 0.027 \[t:qm\] Star-formation ratios from Hunter et al. 2010 {#s:h10} ============================================ [@Hunter10] provide ratios between the star-formation derived from FUV and those from H$\alpha$ and broadband V. Figures \[f:ratHa\] and \[f:ratHa:6par\] show the relations with the $SFR_{FUV}/SFR_{H\alpha}$ and Figures \[f:ratV\] and \[f:ratV:6par\] show the relations with the $SFR_{FUV}/SFR_{V}$. Only very weak trends are visible (see also Table \[t:spear\]). Figures \[f:sfr:Ha\], \[f:sfr:Ha:6par\], \[f:sfr:V\] and \[f:sfr:V:6par\] show the direct relations of the H$\alpha$ and V-band star-formation estimates from [@Hunter10] with the HI morphology parameters. ![image](./holwerda_f18a.pdf){width="\textwidth"} ![image](./holwerda_f18b.pdf){width="110.00000%"} ![image](./holwerda_f19a.pdf){width="\textwidth"} ![image](./holwerda_f19b.pdf){width="110.00000%"} ![image](./holwerda_f20a.pdf){width="\textwidth"} ![image](./holwerda_f20b.pdf){width="110.00000%"} ![image](./holwerda_f21a.pdf){width="\textwidth"} ![image](./holwerda_f21b.pdf){width="110.00000%"} Values from Weisz et al. 2011 {#s:w11} ============================ In addition to a mean star-formation rate, [@Weisz11b] provide a mass-to-light ratio from their stellar populations. Figure \[f:ML\] shows the ANGST galaxies colour-coded by their M/L ratio and Figure \[f:ML:6par\] the direct relations, but no trend emerges with any of the HI morphological parameters (low values in Table \[t:spear\]). [@Weisz11b] also give the fraction of the stars formed at a given epoch (1, 2, 3, 6, or 10 Gyr ago). Converting these fractions to a mean age, we colour-code the ANGST galaxies with this mean age in Figure \[f:meanage\]. 
There is no relation between the mean stellar age and the distribution of the HI morphology parameters from the VLA-ANGST (Table \[t:spear\]). ![image](./holwerda_f22a.pdf){width="\textwidth"} ![image](./holwerda_f22b.pdf){width="110.00000%"} ![image](./holwerda_f23a.pdf){width="\textwidth"} ![image](./holwerda_f23b.pdf){width="110.00000%"} [^1]: E-mail: benne.holwerda@esa.int [^2]: We must note that an earlier version of our code contained an error, artificially inflating the concentration values. A check revealed the correction to be $C_{new} = 0.38 \times C_{old}$, and we adopt the new, correct values in this paper.
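For readers who wish to experiment with the inequality measures used throughout the preceding analysis, a minimal sketch of the Gini and Asymmetry parameters is given below. This is an illustration, not the survey pipeline: the input array `himap` is a hypothetical 2-D HI column-density map, pixel selection uses a simple threshold rather than the paper's segmentation, and the Asymmetry is evaluated about the array centre instead of being minimized over trial centres.

```python
import numpy as np

def gini(himap, threshold=0.0):
    """Gini coefficient of the pixel flux distribution
    (0 = perfectly uniform, -> 1 = all flux in one pixel)."""
    f = np.sort(himap[himap > threshold].ravel())
    n = f.size
    # Standard sorted-flux formula (as in Lotz et al. 2004).
    return ((2.0 * np.arange(1, n + 1) - n - 1.0) * f).sum() / (abs(f).mean() * n * (n - 1))

def asymmetry(himap):
    """Rotational asymmetry: compare the map to itself rotated by 180 degrees."""
    rotated = np.rot90(himap, 2)
    return np.abs(himap - rotated).sum() / np.abs(himap).sum()
```

A smooth, point-symmetric disk gives a low Gini and zero Asymmetry, while a map with the flux concentrated in a few off-centre clumps drives both parameters up.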
--- abstract: 'The nonlinear dynamics of energetic-particle (EP) driven geodesic acoustic modes (EGAM) is investigated here. A numerical analysis with the global gyrokinetic particle-in-cell code ORB5 is performed, and the results are interpreted with the analytical theory, in close comparison with the theory of the beam-plasma instability. Only axisymmetric modes are considered, with a nonlinear dynamics determined by wave-particle interaction. Quadratic scalings of the saturated electric field with respect to the linear growth rate are found for the case of interest. The EP bounce frequency is calculated as a function of the EGAM frequency, and shown not to depend on the value of the bulk temperature. Near the saturation, we observe a transition from adiabatic to non-adiabatic dynamics, i.e., the frequency chirping rate becomes comparable to the resonant EP bounce frequency. The numerical analysis is performed here with electrostatic simulations with circular flux surfaces, and kinetic effects of the electrons are neglected.' --- \ Introduction ============ Two main issues related to magnetic confinement fusion are the turbulent transport, and the dynamics of energetic particles (EP), produced by fusion reactions, or injected for heating purposes. Zonal (i.e. axisymmetric) electric fields are observed to interact with turbulence in tokamaks, in the form of zero-frequency zonal flows (ZF) [@Hasegawa79; @Rosenbluth98; @Diamond05] and finite-frequency geodesic acoustic modes (GAM) [@Winsor68; @Zonca08]. Geodesic acoustic modes, due to their finite frequency, can also interact with EP via inverse Landau damping, leading to EP-driven GAM (EGAM) given that EP drive is sufficient to overcome the threshold condition induced by GAM Landau damping and continuum damping [@Fu08; @Qiu10; @Qiu11; @Qiu12; @Zarzoso12; @Wang13; @Girardo14]. 
Understanding the dynamics of EGAMs is crucial due to their interaction with turbulence, which can modify the turbulent transport [@Zarzoso13; @Zarzoso17]. Moreover, a strong nonlinear interaction of EGAMs with EP is observed in tokamaks [@Lauber14; @Horvath16], potentially leading to a strong redistribution of the EP in phase space. In order to predict the importance of the interaction of EGAMs with turbulence and with EP, it is important to understand their nonlinear saturation mechanisms. One of the mechanisms of EGAM saturation is the nonlinear wave-particle interaction. In this case, the saturation occurs due to the nonlinear redistribution of the EP in phase space, and the consequent decrease of the energy exchange between the EP and the EGAM. Another possible saturation mechanism is wave-wave coupling. This can occur for an EGAM interacting with another EGAM, generating zonal side-bands, or for an EGAM interacting with non-zonal modes, e.g., turbulence [@Zarzoso13]. In this paper, we focus on the wave-particle nonlinear interactions. For an investigation of the EGAM self-coupling, see Ref. [@Qiu17]. The nonlinear saturation of EGAMs is investigated here by means of electrostatic simulations with the gyrokinetic particle-in-cell code ORB5 [@Jolliet07; @Bottino11; @Bottino15JPP]. ORB5 has been successfully verified against analytical theory and benchmarked against other gyrokinetic codes, for the linear dynamics of GAMs [@Biancalani17veri] and EGAMs [@Biancalani14; @Zarzoso14]. A detailed comparison with the beam-plasma instability (BPI) [@Oneil65; @Oneil68] in a 1-dimensional uniform plasma is also done, following the scheme anticipated in Refs. [@Qiu11; @Qiu14]. Similarly to the BPI, the saturation level of EGAMs is shown to scale quadratically with the linear growth rate. A similar investigation was also previously done for Alfvén modes (see, for example, Ref. [@Briguglio14]). 
The EGAM frequency is shown to evolve in time when approaching the saturation, as in the BPI (see, for example, Refs. [@Morales72; @Armon16]). The chirping rate is observed to become of the order of the square of the EP bounce frequency near the saturation. This denotes a transition from adiabatic to non-adiabatic regimes. The paper is organized as follows. The adopted gyrokinetic model is described in Sec. \[sec:model\], and the equilibrium and case definition in Sec. \[sec:equil\]. The linear dynamics is described in Sec. \[sec:linear\]. The saturation levels are investigated in Sec. \[sec:scalings\]. The regimes of different adiabaticity are investigated by means of the analysis of the frequency, in Sec. \[sec:frequency\]. Finally, a summary of the results is given in Sec. \[sec:conclusions\]. The model {#sec:model} ========= The main damping mechanism of GAMs and EGAMs is Landau damping, which makes the use of a kinetic model necessary. In this paper we use the global gyrokinetic particle-in-cell code ORB5 [@Jolliet07]. ORB5 was originally developed for electrostatic ITG turbulence studies, and recently extended to its multi-species electromagnetic version in the framework of the NEMORB project [@Bottino11; @Bottino15JPP]. In this paper, collisionless electrostatic simulations are considered. The model equations of the electrostatic version of ORB5 consist of the equations for the marker trajectories and the gyrokinetic Poisson equation for the scalar potential $\phi$. These equations are derived in a Lagrangian formulation [@Bottino15JPP], based on the gyrokinetic Vlasov-Maxwell equations of Sugama, Brizard and Hahm [@Sugama00; @Brizard07]. 
The equations for the marker trajectories (in the electrostatic version of the code) are [@Bottino15JPP]: $$\begin{aligned} \dot{\bf R}&=&\frac{1}{m_s}p_\|\frac{\bf{B^*}}{B^*_\parallel} + \frac{c}{q_s B^*_\parallel} {\bf{b}}\times \left[\mu \nabla B + q_s \nabla \tilde\phi \right] \label{eq:traj-1}\\ \dot{p_\|}&=&-\frac{\bf{B^*}}{B^*_\parallel}\cdot\left[\mu \nabla B + q_s \nabla \tilde\phi \right] \label{eq:traj-2}\\ \dot{\mu} & = & 0 \label{eq:traj-3}\end{aligned}$$ The coordinates used for the phase space are $({\bf{R}},p_\|,\mu)$, i.e. respectively the gyrocenter position, the canonical parallel momentum $p_\| = m_s v_\|$ and the magnetic moment $\mu = m_s v_\perp^2 / (2B)$ (with $m_s$ and $q_s$ being the mass and charge of the species). $v_\|$ and $v_\perp$ are respectively the parallel and perpendicular components of the particle velocity. The gyroaverage operator is denoted here by the tilde symbol. ${\bf{B}}^*= {\bf{B}} + (c/q_s) {\bf{\nabla}}\times ({\bf{b}} \, p_\|)$, where ${\bf{B}}$ and ${\bf{b}}$ are the equilibrium magnetic field and magnetic unit vector. Kinetic effects of the electrons are neglected. This is done by calculating the electron gyrocenter density directly from the value of the scalar potential as [@Bottino15JPP]: $$n_e({\bf{R}},t) = n_{e0} + \frac{q_s n_{e0}}{T_{e0}} \big( \phi - \bar\phi \big) \label{eq:adiabatic-electrons}$$ where $\bar\phi$ is the flux-surface averaged potential, instead of treating the electrons with markers evolved with Eqs. \[eq:traj-1\], \[eq:traj-2\], \[eq:traj-3\]. We focus on the dynamics of zonal perturbations, by filtering out all non-zonal components (this is to avoid interactions of zonal and non-zonal modes). Wave-wave coupling is neglected, by evolving the bulk-ion and electron markers along unperturbed trajectories. This means that, in Eqs. \[eq:traj-1\], \[eq:traj-2\], \[eq:traj-3\] for the bulk ions, the last terms, proportional to the EGAM electric field, are dropped. 
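As an illustration of the adiabatic electron closure above, the response can be evaluated on a discrete (radius, poloidal angle) grid, where the flux-surface average reduces to a poloidal mean for circular surfaces. This is a minimal sketch with hypothetical grid sizes, potential shape, and normalized parameter values, not part of ORB5:

```python
import numpy as np

# Hypothetical grid: ns radial surfaces, ntheta poloidal points.
ns, ntheta = 64, 128
theta = np.linspace(0.0, 2.0 * np.pi, ntheta, endpoint=False)

# Hypothetical scalar potential phi(s, theta): a zonal part plus an m=1 side-band.
s = np.linspace(0.0, 1.0, ns)[:, None]
phi = 1e-3 * np.cos(2.0 * np.pi * s) + 1e-4 * s * np.cos(theta)[None, :]

n_e0, T_e0, q_e = 1.0, 1.0, 1.0   # normalized units (assumed)

# Flux-surface average: unweighted poloidal mean on each (circular) surface.
phi_bar = phi.mean(axis=1, keepdims=True)

# Adiabatic response: n_e = n_e0 + (q n_e0 / T_e0) (phi - phi_bar).
n_e = n_e0 + (q_e * n_e0 / T_e0) * (phi - phi_bar)
```

Note that a purely zonal potential, $\phi = \phi(s)$, gives $\phi = \bar\phi$ and hence no adiabatic electron density perturbation: only the non-zonal side-bands of the potential enter the electron response.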
The nonlinear wave-particle dynamics is studied by evolving the EP markers along trajectories which include the perturbed terms associated with the EGAM electric field. This means that the EP markers are evolved with Eqs. \[eq:traj-1\], \[eq:traj-2\], \[eq:traj-3\] where the terms proportional to the EGAM electric field are retained. The gyrokinetic Poisson equation is [@Bottino15JPP]: $$- {\bf{\nabla}} \cdot \frac{n_0 m_i c^2}{B^2} \nabla_\perp \phi= \sum_{s} \int \d W q_s \, \tilde{\delta f_s} -n_e({\bf{R}},t)\label{eq:Poisson}$$ with $n_0 m_i$ being here the total plasma mass density (approximated as the ion mass density). The summation over the species $s$ is performed for the bulk ions and for the EP, whereas the electron contribution is given by $-n_e({\bf{R}},t)$. Here $\delta f_s = f_s - f_{0s}$ is the gyrocenter perturbed distribution function, with $f_s$ and $f_{0s}$ being the total and equilibrium (i.e. independent of time, assumed here to be a Maxwellian) gyrocenter distribution functions. The integrals are over the phase-space volume, with $\d W =(2\pi/m_i^2) B_\|^* \d p_\| \d \mu$ being the velocity-space infinitesimal volume element. Equilibrium and simulation parameters {#sec:equil} ===================================== The tokamak magnetic equilibrium is defined by major and minor radii of $R_0=1$ m and $a=0.3125$ m, a magnetic field on axis of $B_0=1.9$ T, a flat safety-factor radial profile, with $q=2$, and circular flux surfaces (with no Grad-Shafranov shift). Flat temperature and density profiles are considered at the equilibrium. The bulk plasma temperature is defined by $\rho^*=\rho_s/a$, with $\rho_s = c_s/\Omega_i$, where $c_s = \sqrt{T_e/m_i}$ is the sound speed and $\Omega_i$ the ion cyclotron frequency. 
Three increasing values of bulk plasma temperature are considered, in order to investigate the dependence of our results on the Landau damping, corresponding to: $\rho^*_1= 1/256=0.0039$, $\rho^*_2= 1/128=0.0078$, and $\rho^*_3= 1/64=0.0156$ ($\tau_e=T_e/T_i=1$ for all cases described in this paper). All parameters defined so far are adopted by ORB5 with no consideration of the mass of the bulk ion species. The choice of the bulk ion mass enters only during the post-processing of the results of ORB5, if we need to know the values of equilibrium or perturbed quantities in non-normalized units. In particular, in the case of a hydrogen plasma, we get a value of the ion cyclotron frequency of $\Omega_i = 1.82 \cdot 10^8$ rad/s. With this choice of bulk species, we can calculate the plasma temperature and sound frequency. The three values of plasma temperature are $T_{i1}=$ 515 eV, $T_{i2}=$ 2060 eV, and $T_{i3}=$ 8240 eV. The sound frequency is defined as $\omega_s = 2^{1/2} v_{ti}/R$ (with $v_{ti} = \sqrt{T_i/m_i}$, which for $\tau_e=1$ reads $v_{ti}=c_s$). We obtain the following three values of the sound velocity: $c_{s1} = 2.22\cdot 10^5$ m/s, $c_{s2} = 4.44 \cdot 10^5$ m/s, $c_{s3} = 8.88\cdot 10^5$ m/s. These correspond to the following three values of the sound frequency: $\omega_{s1} = 3.14 \cdot 10^5$ rad/s, $\omega_{s2} = 6.28 \cdot 10^5$ rad/s, and $\omega_{s3} = 1.25 \cdot 10^6$ rad/s. ![Initial EP distribution function for a simulation with $n_{EP}/n_i = 0.12$, $v_{bump}/v_{ti}$ = 4.\[fig:fvel2D\_t0\]](fvel2D_vpar-t-0){width="51.00000%"} The energetic particle distribution function is a double bump-on-tail, with two bumps at $v_\| = \pm v_{bump}$ (see Fig. \[fig:fvel2D\_t0\]), as in Ref. [@Biancalani14]. In this paper, $v_{bump}=4 \, v_{ti}$ is chosen. 
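The chain from $\rho^*$ to the physical temperatures and sound frequencies quoted above can be retraced numerically; the only external inputs are the hydrogen mass and the elementary charge (a sketch, not the ORB5 normalization module):

```python
import numpy as np

# Equilibrium inputs from the text.
R0, a, B0 = 1.0, 0.3125, 1.9           # m, m, T
m_H, e = 1.673e-27, 1.602e-19          # kg, C (hydrogen bulk ions)

Omega_i = e * B0 / m_H                 # ion cyclotron frequency [rad/s]
assert abs(Omega_i - 1.82e8) / 1.82e8 < 0.01

for rho_star in (1 / 256, 1 / 128, 1 / 64):
    c_s = rho_star * a * Omega_i       # from rho* = (c_s/Omega_i)/a
    T_e = m_H * c_s**2 / e             # c_s = sqrt(T_e/m_i)  ->  T_e in eV
    omega_s = np.sqrt(2.0) * c_s / R0  # sound frequency (tau_e = 1, v_ti = c_s)
    print(f"rho*={rho_star:.4f}: T={T_e:6.0f} eV, "
          f"c_s={c_s:.3g} m/s, omega_s={omega_s:.3g} rad/s")
```

Running this reproduces the quoted values (515, 2060, 8240 eV and the three sound frequencies) to within rounding.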
In order to initialize a distribution function which is a function of constants of motion only, the modified variable $\tilde{v}_\| = \sqrt{2(E-\mu B_{max})/m}/v_{ti}$ is used instead of $v_\|= \sqrt{2(E-\mu B(r,\theta))/m}/v_{ti}$ (similarly to Refs. [@Zarzoso12; @Biancalani14; @Zarzoso14]). Neumann and Dirichlet boundary conditions are imposed on the scalar potential, respectively at the inner and outer boundaries, $s=0$ and $s=1$. Linear dynamics {#sec:linear} =============== ![Frequency (left) and growth rate (right) vs EP concentration, for simulations with $\bar\zeta=v_{bump}/v_{th}=4$.\[fig:omegagamma\_nEP\]](omega_nEP "fig:"){width="49.00000%"} ![Frequency (left) and growth rate (right) vs EP concentration, for simulations with $\bar\zeta=v_{bump}/v_{th}=4$.\[fig:omegagamma\_nEP\]](gamma_nEP "fig:"){width="49.00000%"} The linear dynamics of EGAMs in an equilibrium similar to the one considered in Sec. \[sec:equil\] has been investigated in Ref. [@Zarzoso14] for a bulk plasma temperature given by $\rho^*=1/64=0.0156$, by means of ORB5 simulations and analytical theory. A scan on the EP concentration is performed here, similarly to what was done in Ref. [@Zarzoso14], but with three different values of bulk plasma temperature, corresponding to three different values of $\rho^*$, as described in Sec. \[sec:equil\]. The dependence of the linear dynamics (frequency and growth rate) on the EP concentration is shown in Fig. \[fig:omegagamma\_nEP\]. Both the frequency and the growth rate are observed to follow the qualitative scalings described in Refs. [@Biancalani14] and [@Zarzoso14]. Note, in particular, that the growth rate does not grow linearly with $n_{EP}$. The dependence of the frequency on the EP concentration is not observed to change with $\rho^*$. 
Regarding the growth rate, no change is observed when going from $\rho^*_1 = 0.0039$ to $\rho^*_2 = 0.0078$, meaning that the measured growth rate is basically given by the value of the drive, and the Landau damping here is negligible for the chosen values of EP concentration. On the other hand, when further increasing the temperature, going to $\rho^*_3 = 0.0156$, a smaller value of the growth rate is measured, meaning a higher Landau damping. The transit resonance velocity of the EP can be calculated by knowing the EGAM frequency of a specific simulation. Considering a case with $n_{EP}/n_i = 0.12$ as an example, the frequency is measured as $\omega_L= 1.2 \, \omega_s$. For comparison, the GAM frequency for these parameters is $\omega_{GAM}= 1.8 \, \omega_s$. Then, the transit resonance velocity in the linear phase is calculated as $v_{\|0} = qR \, \omega_L = 3.4 \, v_{ti}$, with $\omega_L$ being the EGAM linear frequency. Scaling of the saturated amplitudes {#sec:scalings} =================================== In this Section, we focus on the value of the saturated electric field $\bar{\delta{E_r}}$, and we investigate its dependence on the value of the linear growth rate and of the damping. The corresponding scaling of $\bar{\delta{E_r}}$ sheds light on the mechanism responsible for the saturation. A one-to-one comparison with the saturation mechanism of the BPI is also described. ![On the left, EGAM normalized radial structure for $\rho^*=0.0156$, $n_{EP}/n_i=0.176$. On the right, absolute value of the electric field measured at the position of the peak, $s=0.6$, for three different simulations with respectively $\rho^*=0.0039$ (blue), $\rho^*=0.0078$ (red), $\rho^*=0.0156$ (green). All simulations here have $n_{EP}/n_i=0.30$. The time is expressed in units of $\Omega_i^{-1}$.\[fig:nonlinear-Omi\]](Rad-struct-LIN-vs-NL "fig:"){width="48.00000%"} ![On the left, EGAM normalized radial structure for $\rho^*=0.0156$, $n_{EP}/n_i=0.176$. 
On the right, absolute value of the electric field measured at the position of the peak, $s=0.6$, for three different simulations with respectively $\rho^*=0.0039$ (blue), $\rho^*=0.0078$ (red), $\rho^*=0.0156$ (green). All simulations here have $n_{EP}/n_i=0.30$. The time is expressed in units of $\Omega_i^{-1}$.\[fig:nonlinear-Omi\]](logabsE_t-n30 "fig:"){width="50.00000%"} The amplitude of the EGAM at saturation has been measured in different simulations performed with ORB5, for different values of the bulk temperature and different values of the EP concentration. As an example, the radial structure of a nonlinear simulation with $\rho^*=0.0156$, $n_{EP}/n_i=0.176$ is depicted in Fig. \[fig:nonlinear-Omi\]-a. No appreciable change in the radial wavenumber is observed when going from the linear phase to the saturation, and after the saturation. This confirms that in this particular configuration, where all equilibrium radial profiles are flat, the EGAM can be treated as a one-dimensional problem where the radial direction does not play an important role. When varying only the bulk temperature, both the linear frequency and the growth rate are observed to scale with the sound frequency, which is therefore a good normalization frequency (consistently with Fig. \[fig:omegagamma\_nEP\]). The saturation level increases with the linear growth rate, similarly to other instabilities like the BPI in a uniform system and the Alfvén instabilities in tokamaks. This is depicted in Fig. \[fig:nonlinear-Omi\]-b, where non-normalized units are used (in particular, the ion cyclotron frequency is selected as a time unit not depending on the temperature). The scalings with the energetic particle concentration are also investigated. The results are shown in Fig. \[fig:E\_gamma\]a. We obtain that, in the considered regime, the saturated level scales quadratically with the linear growth rate. 
This quadratic scaling is typical for marginally stable bump-on-tail instabilities, as derived by O’Neil [@Oneil65; @Oneil68]. We can consider the problem to be similar to a monochromatic beam-plasma system, in which particles move in the potential well of the perturbed electric field. Depending on their energy, some particles are trapped inside the well and execute bounce motion with frequency $\omega_b$ in the frame moving with the wave phase velocity. The resonant particles exchange energy with the mode, causing the amplitude to grow and the particles to redistribute in phase space, flattening the velocity distribution in the vicinity of the resonant parallel velocity $v_\parallel= \omega q R_0$. The drive is due to the positive slope of the particle distribution in velocity space at the resonant parallel velocity, which acts as an inverse Landau damping. In the initial stage, when $\omega_b \ll \gamma_L$, the mode grows exponentially with the linear growth rate, causing more and more particles to become trapped in phase space. After some significant particle velocity redistribution, the power exchange between the wave and the particles is balanced, causing the wave amplitude to saturate. Since the initial perturbation is negligible, the saturation level is determined by the exchange of energy between the mode and a band of resonant particles [@Levin72]. The chirping of the mode seen in Fig. \[fig:NL-freq\] is a strongly nonlinear effect that occurs when the amplitude is large enough for the trapped (and more generally resonant) particles with $\omega_b \sim \gamma_L$ to drastically change the dynamics of the mode through the modification of the distribution function and the non-perturbative fast-particle response. 
Namely, the mode dynamics is determined by all resonant particles that exhibit a continuous oscillation, trapping and detrapping in the potential well of the mode, thus contributing (non-perturbatively) to the non-adiabatic behavior observed in the simulation. ![On the left, maximum value of the EGAM radial electric field vs linear growth rate, for the same simulations as in Fig. \[fig:omegagamma\_nEP\]. The red, blue and green crosses refer respectively to $\rho^*=0.0039$, $\rho^*=0.0078$, $\rho^*=0.0156$. The dashed lines are the quadratic fitting formulas. On the right, the value of $\beta$ as given in Eq. \[eq:beta\], vs the linear frequency, for the same simulations. The black dashed line is the square-root interpolation. For reference, the black star shows the result obtained for the BPI in Ref. [@Levin72]. \[fig:E\_gamma\]](Emax_gamma "fig:"){width="48.50000%"} ![On the left, maximum value of the EGAM radial electric field vs linear growth rate, for the same simulations as in Fig. \[fig:omegagamma\_nEP\]. The red, blue and green crosses refer respectively to $\rho^*=0.0039$, $\rho^*=0.0078$, $\rho^*=0.0156$. The dashed lines are the quadratic fitting formulas. On the right, the value of $\beta$ as given in Eq. \[eq:beta\], vs the linear frequency, for the same simulations. The black dashed line is the square-root interpolation. For reference, the black star shows the result obtained for the BPI in Ref. [@Levin72]. \[fig:E\_gamma\]](beta_omeganorm "fig:"){width="49.00000%"} The quadratic scaling of the saturation level with the linear growth rate obtained with our simulations is similar to the saturation of the BPI, where it occurs due to wave-particle trapping [@Oneil65]. For the BPI, the original reference of M. B. Levin (1972) gives a value of $\omega_b = 3.06 \, \gamma_L$ at saturation [@Levin72] (for comparison, note that more recent numerical calculations find $\omega_b = 3.2 \, \gamma_L$ [@Lesur09; @Carlevaro17]). 
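The bounce dynamics entering this comparison can be illustrated in the textbook 1D limit: a particle deeply trapped in a sinusoidal wave potential obeys $\ddot{x} = -(qE/m)\sin(kx)$ and bounces at $\omega_b = \sqrt{qEk/m}$. A minimal numerical sketch (normalized units $q=m=k=1$; purely illustrative, not the EGAM geometry):

```python
import numpy as np

# Particle in the wave frame: x'' = -E sin(x) (normalized q = m = k = 1).
# Deeply trapped particles bounce at omega_b = sqrt(E).
E = 0.04
omega_b = np.sqrt(E)

# Leapfrog (kick-drift-kick) integration, starting near the O-point.
dt = 0.01
x, v = 0.05, 0.0
xs = []
for _ in range(20000):
    v -= 0.5 * dt * E * np.sin(x)
    x += dt * v
    v -= 0.5 * dt * E * np.sin(x)
    xs.append(x)

# Measure the bounce period from the zero crossings of x(t).
xs = np.array(xs)
crossings = np.where(np.diff(np.sign(xs)) != 0)[0]
period = 2 * dt * np.mean(np.diff(crossings))
assert abs(period - 2 * np.pi / omega_b) / (2 * np.pi / omega_b) < 0.02
```

The measured period matches $2\pi/\omega_b$ to within the small anharmonic correction, which is the picture behind the $\omega_b$ used below.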
For EGAMs, the bounce frequency is given by [@Qiu11]: $$\label{eq:omegab_vs_E} \omega_b^2 = \alpha_1 \, \bar{\delta{E_r}} \, , \;\;\;\; \text{with} \; \alpha_1 \equiv \frac{ e \hat{V}_{dc}}{2 m_{EP} v_{\|0} q R_0}$$ with $m_{EP}$ being the mass of the energetic particle species, considered equal to the bulk ion mass in this paper, $v_{\|0}$ the velocity matching the resonance condition, and $\hat{V}_{dc} = m_{EP} v^2_{\|0}/(eB R)$ the magnetic curvature drift. Therefore we have: $$\label{eq:alpha_1} \alpha_1 = \frac{v_{\|0}}{2 q R^2 B} = \frac{\omega_L}{2 R B}$$ We emphasize that the value of $\alpha_1$ depends on $\omega_L$. This is a main difference with respect to the BPI, where there is only one value $\omega_{lin}=\omega_{pe}$, with $\omega_{pe}$ being the plasma frequency. The dependence of the maximum electric field on the linear growth rate can be measured with the results of the numerical simulations. For the simulations shown in Fig. \[fig:E\_gamma\], we find: $$\bar{\delta{E_r}} = \alpha_2 \gamma_L^2$$ (with $\gamma_L$ expressed here in units of $\omega_s$). The values of $\alpha_2$ are found to depend on the bulk temperature. For the three chosen increasing values of $\rho^*$, i.e. $\rho^* =$ 0.0039, 0.0078 and 0.0156, we have respectively $\alpha_2= 0.47\cdot 10^7$ V/m, $0.9\cdot 10^7$ V/m, and $2.0\cdot 10^7$ V/m. Finally, the relationship between the EP bounce frequency and the linear growth rate is obtained: $$\omega_b = \beta \, \gamma_L \label{eq:omegab_vs_gammalin}$$ where $\beta$ is calculated as $\beta = (\alpha_1 \alpha_2 )^{1/2}/\omega_s$, which yields: $$\beta = \beta_0 \Big(\frac{\omega_L}{\omega_{GAM}}\Big)^{1/2}, \;\; \text{with} \;\; \beta_0 = \frac{1}{\omega_s}\Big( \frac{\omega_{GAM} \, \alpha_2}{2RB} \Big)^{1/2} \label{eq:beta}$$ Note that here $\beta$ depends on the EGAM frequency, which changes with the intensity of the drive, given here by the EP concentration (see Fig. \[fig:omegagamma\_nEP\]). As a comparison, note that in the problem of the BPI, solved in Ref. 
[@Oneil65; @Oneil68; @Levin72], the mode frequency is assumed to be constant and equal to the frequency measured in the absence of EP. On the other hand, the values of $\beta$ are found not to depend on the damping, which changes with the three different bulk temperatures: an interpolation can be drawn for all considered simulations and shown to depend on $\omega_L$ only (see Fig. \[fig:E\_gamma\]b). In this sense, the formula given in Eq. \[eq:beta\] is universal for the chosen regime, because it does not depend on the bulk plasma temperature. The considered equilibrium has been chosen in order to excite EGAMs out of a GAM, and not out of a Landau pole, as described in Ref. [@Zarzoso14]. This choice is made in order to have a one-to-one correspondence with the BPI, where the mode excited by the energetic electron beam is the Langmuir wave, an eigenmode of the system in the absence of EP. Following this consideration, we can take the interpolation of the results shown in Fig. \[fig:E\_gamma\] and extrapolate to $\omega_L\rightarrow \omega_{GAM}$, which is the limit assumed in the resolution of the BPI. In this case, the extrapolation gives a unique value, which defines the EGAM instability, i.e. $\beta_0 = \beta(\omega_L/ \omega_{GAM} =1) = 2.66$. This is to be compared with the value of $\beta$ obtained for the BPI [@Levin72; @Lesur09; @Carlevaro17], i.e. $\beta_{BPI}=3.2$ (originally estimated as $\beta_{BPI}=3.06$ by Levin). Finally, by using Eqs. \[eq:omegab\_vs\_E\], \[eq:alpha\_1\], \[eq:omegab\_vs\_gammalin\] and \[eq:beta\], we can write the formula for the saturated electric field as a function of the linear characteristics of the mode: $$\delta E_r = \frac{2 R B \beta_0^2}{\omega_{GAM}} \; \gamma_L^2$$ with the value of $\beta_0 = 2.66$ in the regime considered in this paper. 
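As a cross-check, $\beta_0$ can be retraced from Eq. \[eq:beta\] using the fitted $\alpha_2$ values of this Section together with the sound frequencies of Sec. \[sec:equil\] (a sketch in SI units; $\omega_{GAM} = 1.8\,\omega_s$, quoted in Sec. \[sec:linear\] for one case, is assumed here to hold at all three temperatures):

```python
import numpy as np

R, B = 1.0, 1.9   # m, T
# (omega_s [rad/s], fitted alpha_2 [V/m]) for rho* = 0.0039, 0.0078, 0.0156.
cases = [(3.14e5, 0.47e7), (6.28e5, 0.9e7), (1.25e6, 2.0e7)]

betas = []
for omega_s, alpha_2 in cases:
    omega_GAM = 1.8 * omega_s  # assumed GAM frequency for these parameters
    # beta_0 = (1/omega_s) * sqrt(omega_GAM * alpha_2 / (2 R B)), Eq. [eq:beta]
    betas.append(np.sqrt(omega_GAM * alpha_2 / (2 * R * B)) / omega_s)

print([round(b, 2) for b in betas])  # all close to the extrapolated beta_0 = 2.66
```

The three values cluster between roughly 2.6 and 2.8, consistent with the extrapolated $\beta_0 = 2.66$ and with the claimed insensitivity of $\beta$ to the bulk temperature.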
Frequency {#sec:frequency} ========= In this section, we show the results of the measurement of the time evolution of the EGAM frequency. In Sec. \[sec:scalings\], we have shown that the saturated electric field scales quadratically with the linear growth rate. This quadratic scaling has a one-to-one correspondence with the Langmuir-wave problem investigated by O’Neil, where the saturation occurs due to wave-particle trapping. The wave-particle trapping mechanism is usually referred to as adiabatic, meaning that a slowly increasing potential well traps more and more energetic particles. In this adiabatic regime, the mode frequency varies very slowly with respect to the bounce frequency. On the other hand, in the EGAM case considered here, we show that the saturation is not strictly adiabatic, but a transition between the adiabatic and non-adiabatic regimes occurs at the time of the saturation. ![Nonlinear evolution of the frequency, measured as a short-time average of the period between the peaks (left) or with a short-time Fourier transform (right), for $\rho^*=0.0078$, $n_{EP}/n_i=0.12$.\[fig:NL-freq\]](freq_t "fig:"){width="51.00000%"} ![Nonlinear evolution of the frequency, measured as a short-time average of the period between the peaks (left) or with a short-time Fourier transform (right), for $\rho^*=0.0078$, $n_{EP}/n_i=0.12$.\[fig:NL-freq\]](freq_t-STFT-zoomt "fig:"){width="47.00000%"} In this section, we take again the EGAM case with $\rho^*=0.0078$, $n_{EP}/n_i=0.12$ as a typical case, and we investigate the variation of the frequency in time in comparison with the bounce frequency. For the measurement of the frequency, we use the radial zonal electric field measured at $s=0.5$. The measurement of the frequency is performed in two different ways: a) as an average of the period between several EGAM oscillation peaks, as shown in Fig. \[fig:NL-freq\]-a; b) with a short-time Fourier transform (STFT), as shown in Fig. \[fig:NL-freq\]-b. 
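Technique a) can be sketched in a few lines: locate the oscillation maxima of the field trace and invert the peak-to-peak periods. A synthetic chirping signal stands in here for the measured $E_r(s=0.5,t)$ (illustration only, with hypothetical frequency and chirp values):

```python
import numpy as np

# Synthetic trace with a 10% upward frequency chirp over the window.
f0, chirp, tmax = 1.0, 0.1, 20.0
t = np.linspace(0.0, tmax, 20000)
phase = 2 * np.pi * (f0 * t + 0.5 * chirp * f0 * t**2 / tmax)
sig = np.sin(phase)

# Peaks = local maxima; instantaneous frequency = 1 / (peak-to-peak period).
i = np.where((sig[1:-1] > sig[:-2]) & (sig[1:-1] > sig[2:]))[0] + 1
t_pk = t[i]
f_inst = 1.0 / np.diff(t_pk)          # one frequency estimate per oscillation

# The measured frequency rises by ~10% across the trace, as imposed.
assert f_inst[0] < f_inst[-1]
assert abs(f_inst[-1] / f_inst[0] - 1.1) < 0.03
```

This method yields one frequency estimate per oscillation, which is why it remains usable near the saturation, where only a few periods are available.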
With the first technique, namely measuring the frequency as the inverse of the period between neighbouring peaks, an upward chirping is observed in the nonlinear phase, of the order of 10% of the linear frequency. This means that the resonance condition changes in time, with the resonance velocity slightly increasing at the time of the saturation or in the later phase (see Fig. \[fig:NL-freq\]-a). A dominantly upward chirping of EGAMs was previously observed and documented in Refs. [@Berk06; @Wang13]. The second technique consists of measuring the frequency with a short-time Fourier transform (STFT) on a Hamming time window. With this technique, the error bar in frequency is large (due to the small number of oscillations in the nonlinear regime around the saturation), namely of the order of 10-20% (see Fig. \[fig:NL-freq\]-b). With such a large error bar, no clear upward chirping is observed. Near the time of the saturation, i.e. $t\simeq 2.2 \, \Omega_i^{-1}$, only one mode is observed. This is the condition of applicability of the direct technique of frequency measurement described above, where the frequency is measured as the inverse of the period between peaks. ![Squared bounce frequency (left) and adiabaticity (right) for the EGAM with $\rho^*=0.0078$, $n_{EP}/n_i=0.12$.\[fig:adiabaticity\]](omegabsq_t "fig:"){width="46.00000%"} ![Squared bounce frequency (left) and adiabaticity (right) for the EGAM with $\rho^*=0.0078$, $n_{EP}/n_i=0.12$.\[fig:adiabaticity\]](adiabaticity_t "fig:"){width="48.00000%"} As mentioned above, the EP bounce frequency $\omega_b$ depends on the mode amplitude. Its value for the considered simulation is shown in Fig. \[fig:adiabaticity\]-a. When the EGAM frequency evolves slowly in time with respect to the inverse of the bounce frequency, the EP can bounce back and forth many times, and the dynamics is called adiabatic. The adiabaticity parameter, defined as $\omega'/\omega_b^2$, measures the level of adiabaticity of the dynamics. 
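Given a measured mode frequency $\omega(t)$ and field amplitude, the adiabaticity parameter follows by direct differentiation, with $\omega_b^2(t) = \alpha_1\,\bar{\delta E_r}(t)$ from Eq. \[eq:omegab\_vs\_E\]. A schematic numpy version, with synthetic stand-in traces in normalized units (illustration of the diagnostic only, not the measured signals):

```python
import numpy as np

# Synthetic stand-ins (normalized units): a field that grows and saturates
# around t = 4, and a mode frequency chirping up by ~10% around t = 6,
# qualitatively mimicking Fig. [fig:NL-freq].
t = np.linspace(0.0, 10.0, 2000)
sat = lambda t0: 1.0 / (1.0 + np.exp(-2.0 * (t - t0)))
dE_r = sat(4.0)                        # field amplitude |E_r|
omega = 1.0 + 0.1 * sat(6.0)           # mode frequency

alpha_1 = 1.0                          # stands in for omega_L / (2 R B)
omega_b_sq = alpha_1 * dE_r            # Eq. [eq:omegab_vs_E]

# Adiabaticity parameter omega'/omega_b^2: it peaks where the frequency
# chirps, signalling the departure from adiabatic dynamics.
adiab = np.gradient(omega, t) / omega_b_sq
print(round(t[np.argmax(adiab)], 1))   # peak near the chirp time t = 6
```

In the simulation the same construction is applied to the measured frequency and field amplitude; the toy traces only illustrate that the parameter peaks where the frequency varies fastest.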
The time evolution of the adiabaticity parameter for the considered simulation is depicted in Fig. \[fig:adiabaticity\]-b (with $\omega$ being the instantaneous mode frequency and $\omega'$ its time derivative). A transition from adiabatic to non-adiabatic dynamics near the saturation is observed. In particular, near the saturation, the EP do not have the time to perform many bounces during the nonlinear modification of the wave. In this respect, the EGAM dynamics and the BPI are not in analogy. The details of the EGAM saturation mechanism will be investigated with a power-balance diagnostic in phase space, and discussed in a separate paper. Summary and discussion {#sec:conclusions} ====================== The importance of understanding the nonlinear dynamics of EGAMs, i.e. energetic particle (EP) driven geodesic acoustic modes (GAMs), is due to their possible role in interacting with turbulence and with the EP population present in tokamak plasmas as a result of nuclear reactions and/or heating mechanisms. In particular, the level of the nonlinear saturation of EGAMs is directly related to their efficiency in regulating turbulence or redistributing EP in phase space. In this paper, we have investigated the problem of the nonlinear saturation of EGAMs with the gyrokinetic particle-in-cell code ORB5, focusing on the wave-particle nonlinearity. Electrostatic collisionless simulations have been considered, with circular-flux-surface magnetic equilibria, and neglecting the kinetic effects of electrons. The level of the saturated electric field has been shown to scale quadratically with the linear growth rate, similarly to the beam-plasma instability (BPI) in 1D uniform plasmas, and to Alfvén Eigenmodes (AE) near marginal stability. 
We note that, in the case of beta-induced AE, due to the finite radial mode structure compared to the resonant-particle radial excursion, a deviation from the $\delta E_r\propto \gamma_L^2$ scaling is observed, due to the competition of resonance detuning and radial decoupling [@Wang12; @Zhang12]. A similar deviation is anticipated as one further increases the EP drive, based on the correspondence between EP pitch-angle scattering by frequency-chirping EGAMs and the radial wave-particle pumping by frequency-chirping finite-$n$ BAEs. This will be the content of our next publication. We have also investigated the relationship between the bounce frequency of the EP in the field of the EGAM and the linear growth rate, finding a linear dependence, similarly to the BPI. The linear coefficient, in the case of the EGAM problem, is proportional to the square root of the EGAM frequency, and is not a constant like in the beam-plasma instability. In the limit of $\omega_L/\omega_{GAM}\rightarrow 1$, which corresponds to the BPI problem when the mode frequency does not deviate appreciably from the value of the plasma frequency $\omega_p$, a unique value of the linear coefficient can be estimated: $\beta_0 = 2.66$, to be compared with the factor $\beta_{BPI}\simeq 3.2$ of the BPI. This value of $\beta_0 = 2.66$ defines the EGAM in the regime of interest. The investigation of the dependence of $\beta$ on equilibrium parameters like the safety factor $q$, the magnetic flux surface elongation $e$, the EP distribution function, and the effect of kinetic electrons is left to future work. Finally, we have investigated the temporal variation of the EGAM frequency during the saturation phase. The initial adiabatic regime, defined as the regime where the time derivative of the EGAM frequency is much smaller than the squared EP bounce frequency, has been shown to undergo a transition to a non-adiabatic regime at the saturation. 
An exact analytical model of this phenomenon is still a work in progress and will be the first extension of this work. The simplified picture is that the mode chirps to maintain maximal exchange of power with the energetic particles [@Chen16RMP], while balancing the Landau damping. The mode frequency is at the same time bound by the wave dispersion relation and the plasma equilibrium, making their simultaneous examination with the power-exchange equation necessary for determining the rate and direction of chirping. The non-adiabatic behavior is enabled by the non-perturbative response of the energetic particles, which consists of a significant distortion of the distribution function in the resonant region, as well as of particle trapping and detrapping [@Chen16RMP]. By contrast, adiabatic chirping is connected with either externally driven fluctuations or slow energetic-particle redistribution, provided that the drive is sufficiently weak [@Breizman97]. Acknowledgements {#acknowledgements .unnumbered} ================ Interesting discussions with N. Carlevaro, G. Montani, F. Zonca on the nonlinear dynamics of the energetic particles are acknowledged. Interesting discussions with A. Mishchenko and B. Finney McMillan on the treatment of EGAMs with ORB5 are also acknowledged. Part of this work has been carried out within the framework of the EUROfusion Consortium and has received funding from the Euratom research and training programme 2014-2018 under grant agreement No 633053, within the framework of the [*Nonlinear energetic particle dynamics*]{} (NLED) European Enabling Research Project, WP 15-ER-01/ENEA-03. The views and opinions expressed herein do not necessarily reflect those of the European Commission. Part of this work has been funded by the Centre de Coopération Universitaire Franco-Bavarois - Bayerisch-Französisches Hochschulzentrum, for the grant FK 30\_15. 
Simulations were performed on the Marconi supercomputer within the framework of the OrbZONE and ORBFAST projects. Part of this work was done while two of the authors (A. Biancalani and I. Novikau) were visiting LPP-Palaiseau, whose team is acknowledged for the hospitality. One of the authors (I.C.) wants to thank the Max Planck Princeton Center for the support during this work. [99]{} A. Hasegawa, C. G. Maclennan, and Y. Kodama, [*Phys. Fluids*]{} [**22**]{}, 2122 (1979) M.N. Rosenbluth and F.L. Hinton, [*Phys. Rev. Lett.*]{} [**80**]{}, 724 (1998) P.H. Diamond, S.-I. Itoh, K. Itoh, and T. S. Hahm, [*Plasma Phys. Controlled Fusion*]{} [**47**]{}, R35 (2005) N. Winsor, J. L. Johnson, and J. M. Dawson, [*Phys. Fluids*]{} [**11**]{}, 2448 (1968) F. Zonca and L. Chen, [*Europhys. Lett.*]{} [**83**]{}, 35001 (2008) G. Fu et al. [*Phys. Rev. Lett.*]{} [**101**]{}, 185002 (2008) Z. Qiu, F. Zonca and L. Chen, [*Plasma Phys. Controlled Fusion*]{} [**52**]{}, 095003 (2010) Z. Qiu, F. Zonca, L. Chen [*Plasma Science and Tech.*]{} [**13**]{}, 257 (2011) Z. Qiu, F. Zonca, L. Chen [*Phys. Plasmas*]{} [**19**]{} 082507 (2012) D. Zarzoso et al. [*Phys. Plasmas*]{} [**19**]{}, 022102-1 (2012) H. Wang, Y. Todo and C. C. Kim [*Phys. Rev. Lett.*]{} [**110**]{}, 155006 (2013) G. B. Girardo, et al. [*Phys. Plasmas*]{} [**21**]{} 092507 (2014) D. Zarzoso et al. [*Phys. Rev. Lett.*]{} [**110**]{}, 125002 (2013) D. Zarzoso et al. [*Nucl. Fusion*]{} [**57**]{} 072011 (2017) P. Lauber, ITPA technical meeting on energetic particles (2014) L. Horváth et al. [*Nucl. Fusion*]{} [**56**]{} 112003 (2016) Z. Qiu, I. Chavdarovski, A. Biancalani, and J. Cao, [*Phys. Plasmas*]{} [**24**]{}, 072509 (2017) S. Jolliet, A. Bottino, P. Angelino, R. Hatzky, T. M. Tran, B. F. Mcmillan, O. Sauter, K. Appert, Y. Idomura, and L. Villard, [*Comput. Phys. Commun.*]{} [**177**]{}, 409 (2007). A. Bottino, T. Vernay, B. D. Scott, S. Brunner, R. Hatzky, S. Jolliet, B. F. McMillan, T. M. Tran, and L. Villard [*Plasma Phys. 
Controlled Fusion*]{} [**53**]{}, 124027 (2011) A. Bottino and E. Sonnendrücker, [*J. Plasma Phys.*]{} [**81**]{}, 435810501 (2015). A. Biancalani, A. Bottino, C. Ehrlacher, V. Grandgirard, G. Merlo, I. Novikau, Z. Qiu, E. Sonnendrücker, X. Garbet, T. Görler, S. Leerink, F. Palermo, and D. Zarzoso, [*Phys. Plasmas*]{} [**24**]{}, 062512 (2017) A. Biancalani, et al. [*Nucl. Fusion*]{} [**54**]{}, 104004 (2014) D. Zarzoso, A. Biancalani, A. Bottino, Ph. Lauber, E. Poli, J.-B. Girardo, X. Garbet and R.J. Dumont, [*Nucl. Fusion*]{} [**54**]{}, 103006 (2014) T. M. O’Neil [*Phys. Fluids*]{} [**8**]{}, 2255 (1965) T. M. O’Neil and J. H. Malmberg [*Phys. Fluids*]{} [**11**]{}, 1754 (1968) Z. Qiu, L. Chen and F. Zonca [*41st EPS Conference on Plasma Physics*]{}, P4.004 (2014) S. Briguglio, X. Wang, F. Zonca, G. Vlad, G. Fogaccia, C. Di Troia and V. Fusco, [*Phys. Plasmas*]{} [**21**]{}, 112301 (2014). G. J. Morales and T. M. O’Neil, [*Phys. Rev. Lett.*]{} [**28**]{}, 417 (1972). T. Armon, L. Friedland, [*J. Plasma Phys.*]{} [**82**]{}, 705820501 (2016) M. B. Levin, M. G. Lyubarskii, I. N. Onishchenko, V. D. Shapiro, and V. I. Shevchenko, [*Sov. Phys. JETP*]{} [**35**]{}, 898 (1972) G. Fu and J. W. Van Dam, [*Phys. Fluids B*]{} [**1**]{}, 1949 (1989) H. Sugama, [*Phys. Plasmas*]{} [**7**]{}, 466 (2000) A. J. Brizard and T. S. Hahm, [*Rev. Mod. Phys.*]{} [**79**]{}, 421 (2007) M. Lesur, Y. Idomura, and X. Garbet, [*Phys. Plasmas*]{} [**16**]{}, 092305 (2009). N. Carlevaro, et al, to be submitted (2017) H.L. Berk, C.J. Boswell, D. Borba, A.C.A. Figueiredo, T. Johnson, M.F.F. Nave, S.D. Pinches, S.E. Sharapov and JET EFDA contributors, [*Nucl. Fusion*]{} [**46**]{} S888-S897 (2006) X. Wang, S. Briguglio, L. Chen, C. Di Troia, G. Fogaccia, G. Vlad, and F. Zonca [*Phys. Rev. E*]{} [**86**]{}, 045401(R) (2012) H. S. Zhang, Z. Lin, and I. Holod [*Phys. Rev. Lett.*]{} [**109**]{}, 025001 (2012) L. Chen and F. Zonca, [*Rev. Mod. Phys.*]{} [**88**]{} 1, (2016) B. N. Breizman, H. L. 
Berk, M. S. Pekker, F. Porcelli, G. V. Stupakov, and K. L. Wong, [*Phys. Plasmas*]{} [**4**]{}, 1559 (1997)